Add boilerplate front-end support

Before you add any specific backend implementation for a new operator, you must first add basic boilerplate front-end support.

Add a descriptor

A descriptor enables operators to use unique configurable parameters. Arm NN uses a descriptor class to describe the parameters. For example, to describe the parameters that are commonly found in the convolution operator, you create a descriptor called Convolution2dDescriptor and initialize it with padding, stride, and dilation parameters.

It is not always necessary to implement a descriptor. For example, the Abs operator takes one input and outputs the output as an absolute value. Therefore, the Abs operator does not need any extra parameters and the corresponding AbsLayer does not require a descriptor.

To add a descriptor:

  1. Define a new descriptor at armnn/include/armnn/Descriptors.hpp. The new descriptor must have unique parameters.
  2. Include the new descriptor in the list in DescriptorsFwd.hpp. This list is used for forward declaration to reduce build times.
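To illustrate the pattern, the following is a minimal sketch of what a descriptor can look like. The field names are modeled on Arm NN's Convolution2dDescriptor, but the exact member set and default values shown here are assumptions, not the real contents of Descriptors.hpp:

```cpp
#include <cstdint>

// Hedged sketch of a descriptor: a plain struct of uniquely named,
// configurable parameters. Members and defaults are illustrative only.
struct Convolution2dDescriptorSketch
{
    uint32_t m_PadLeft     = 0;     // padding, in elements
    uint32_t m_PadRight    = 0;
    uint32_t m_PadTop      = 0;
    uint32_t m_PadBottom   = 0;
    uint32_t m_StrideX     = 1;     // stride along width
    uint32_t m_StrideY     = 1;     // stride along height
    uint32_t m_DilationX   = 1;     // dilation along width
    uint32_t m_DilationY   = 1;     // dilation along height
    bool     m_BiasEnabled = false; // whether a bias tensor is supplied
};
```

A layer that needs these parameters carries one such struct; a layer like Abs, which has no parameters, simply has no descriptor.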

Add default layer support

A layer is a representation of an operator in the Arm NN network graph. To represent an operator as a layer in an Arm NN network graph, first add default layer support.

To add default layer support:

  1. Define Is<LayerName>Supported() as a virtual function in the ILayerSupport interface at armnn/include/armnn/ILayerSupport.hpp. If the layer has a descriptor, include it in the function signature, so that backends can check the parameters when verifying that the layer is supported. The following example shows the Is<LayerName>Supported() function for a SoftmaxLayer operator:
     virtual bool IsSoftmaxSupported(const TensorInfo& input,
                                     const TensorInfo& output,
                                     const SoftmaxDescriptor& descriptor,
                                     Optional<std::string&> reasonIfUnsupported = EmptyOptional()) const = 0;
  2. Add Is<LayerName>Supported() to backends/backendsCommon/LayerSupportBase.hpp to update the LayerSupportBase class, which extends ILayerSupport.
  3. Enable basic front-end support by implementing Is<LayerName>Supported() in LayerSupportBase.cpp using the DefaultLayerSupport() function, which returns false. The following code shows an example implementation in LayerSupportBase.cpp for a SoftmaxLayer operator:
bool LayerSupportBase::IsSoftmaxSupported(const TensorInfo&, // input
                                          const TensorInfo&, // output
                                          const SoftmaxDescriptor&, // descriptor
                                          Optional<std::string&> reasonIfUnsupported) const
{
    return DefaultLayerSupport(__func__, __FILE__, __LINE__, reasonIfUnsupported);
}
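To illustrate the mechanism, the following self-contained sketch shows what a DefaultLayerSupport() helper can do: record a reason and report the layer as unsupported. The signature and message text here are assumptions; the real helper in LayerSupportBase.cpp takes the reason through Arm NN's Optional type:

```cpp
#include <string>

// Hedged sketch of a DefaultLayerSupport() helper. It fills in a reason
// string identifying the unimplemented check, then returns false so that
// the boilerplate stage reports the layer as unsupported on every backend.
bool DefaultLayerSupport(const char* func,
                         const char* file,
                         unsigned int line,
                         std::string* reasonIfUnsupported)
{
    if (reasonIfUnsupported != nullptr)
    {
        *reasonIfUnsupported = std::string(func) + " is not implemented (" +
                               file + ":" + std::to_string(line) + ")";
    }
    return false; // no backend supports the layer yet
}
```

Because every Is<LayerName>Supported() override forwards to this helper, the network can be built and optimized before any backend work exists.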

Add a layer class implementation

Arm NN represents a new operator by using a <LayerName>Layer class.

To add your layer class implementation:

  1. Add a layer class to armnn/src/armnn/layers/<LayerName>Layer.hpp and an implementation to armnn/src/armnn/layers/<LayerName>Layer.cpp. Depending on the type of layer, the layer can extend any of the following layer classes:
    • Layer. Operator layers that do not have configurable parameters extend the Layer class directly.
    • LayerWithParameters<<LayerName>Descriptor>. Operator layers that have configurable parameters, like Convolution2dLayer, DepthwiseConvolution2dLayer, or FullyConnectedLayer, extend this class. For example, DepthwiseConvolution2dLayer extends LayerWithParameters<DepthwiseConvolution2dDescriptor>. These layers require a descriptor.
    • ElementwiseBaseLayer. Operator layers that encapsulate element-by-element operations, for example the Maximum, Minimum, Multiplication, Division, Subtraction, and Addition operator layers, extend the ElementwiseBaseLayer class.
  2. Add the layer to the list of layer types in armnn/src/armnn/InternalTypes.hpp. The following code shows an extract from InternalTypes.hpp:
    /// This list uses the X macro technique.
    #define LIST_OF_LAYER_TYPE \
        X(Activation) \
        X(Addition) \
        X(ArgMinMax) \
        X(BatchNormalization) \
    Include <LayerName>Layer.hpp and declare the layer in armnn/src/armnn/LayersFwd.hpp for forward declaration, to reduce build times. The following code includes an example SoftmaxLayer.hpp:
    #include "layers/SoftmaxLayer.hpp"
  3. If you are adding Android support, add <LayerName>Layer.cpp to the list of layers in the armnn/ file.
  4. Add <LayerName>Layer.hpp and <LayerName>Layer.cpp to the list of layers in the armnn/CMakeLists.txt file.
  5. Add the virtual function Visit<LayerName>Layer() to armnn/include/armnn/ILayerVisitor.hpp. The following code shows the inclusion of an example SoftmaxLayer to the ILayerVisitor.hpp:
        /// Function that a softmax layer should callback to when its Accept(ILayerVisitor&) function is invoked.
        /// @param layer - pointer to the layer which is calling back to this visit function.
        /// @param softmaxDescriptor - SoftmaxDescriptor to configure the softmax.
        /// @param name - Optional name for the layer.
        virtual void VisitSoftmaxLayer(const IConnectableLayer* layer,
                                       const SoftmaxDescriptor& softmaxDescriptor,
                                       const char* name = nullptr) = 0;
  6. Add Visit<LayerName>Layer() to the LayerVisitorBase implementation at armnn/include/armnn/LayerVisitorBase.hpp. The following code shows this addition for an example SoftmaxLayer:
        void VisitSoftmaxLayer(const IConnectableLayer*,
                               const SoftmaxDescriptor&,
                               const char*) override { DefaultPolicy::Apply(__func__); }
  7. Add Visit<LayerName>Layer() to src/armnnSerializer/Serializer.hpp. Implement the visit function of the new operator or layer so that it throws an unimplemented exception in Serializer.cpp. The following code shows this addition for an example SoftmaxLayer:
    void VisitSoftmaxLayer(const armnn::IConnectableLayer* layer,
                               const armnn::SoftmaxDescriptor& softmaxDescriptor,
                               const char* name = nullptr) override;
  8. Add the virtual function Add<LayerName>Layer() to armnn/include/armnn/INetwork.hpp.
  9. Add the implementation of Add<LayerName>Layer() to armnn/src/armnn/Network.hpp and Network.cpp. The following code shows this addition for an example SoftmaxLayer:
    IConnectableLayer* AddSoftmaxLayer(const SoftmaxDescriptor& softmaxDescriptor,
            const char* name = nullptr) override;
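The layer-class pattern from step 1 above can be sketched as follows. The base template here is a simplified stand-in for Arm NN's LayerWithParameters<Descriptor>; the real class also carries graph connectivity, workload creation, and visitor support, so treat this only as an illustration of the inheritance structure:

```cpp
#include <string>

// Hedged stand-in for Arm NN's SoftmaxDescriptor (m_Beta is the real
// parameter name; the default value here is an assumption).
struct SoftmaxDescriptor { float m_Beta = 1.0f; };

// Hedged stand-in for LayerWithParameters<Descriptor>: stores the
// descriptor and the layer name.
template <typename DescriptorType>
class LayerWithParameters
{
public:
    LayerWithParameters(const DescriptorType& param, const char* name)
        : m_Param(param), m_LayerName(name) {}
    const DescriptorType& GetParameters() const { return m_Param; }
    const std::string& GetName() const { return m_LayerName; }
protected:
    DescriptorType m_Param;
    std::string m_LayerName;
};

// A layer with configurable parameters extends
// LayerWithParameters<<LayerName>Descriptor>.
class SoftmaxLayer : public LayerWithParameters<SoftmaxDescriptor>
{
public:
    SoftmaxLayer(const SoftmaxDescriptor& desc, const char* name)
        : LayerWithParameters(desc, name) {}
};
```

A parameterless layer such as AbsLayer would instead extend Layer directly, with no descriptor member.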

Add no-op factory implementations for all backends

During optimization, Arm NN assigns a backend to each layer, based on information provided by the application that runs the network. Each layer is then executed on its assigned backend. The available backends are:

  • CpuRef for the reference backend
  • CpuAcc for the Arm Compute Library Neon backend
  • GpuAcc for the Arm Compute Library OpenCL backend

Each layer is executed as a workload. Workloads are units of work that are placed in an execution queue; the runtime consumes workloads from this queue. To create the workload for a layer and backend pair, each layer uses a backend-specific factory. The backend-specific factory is an implementation of IWorkloadFactory.

To add a no-op implementation:

  1. Define <LayerName>QueueDescriptor in armnn/src/backends/backendsCommon/WorkloadData.hpp. If the layer requires unique parameters, <LayerName>QueueDescriptor must extend the QueueDescriptorWithParameters<<LayerName>Descriptor> class. If the layer does not require unique parameters, <LayerName>QueueDescriptor must extend the QueueDescriptor class. Both types of QueueDescriptor must provide a Validate() function. The following code shows this definition for an example SoftmaxLayer:
    // Softmax layer workload data.
    struct SoftmaxQueueDescriptor : QueueDescriptorWithParameters<SoftmaxDescriptor>
    {
        void Validate(const WorkloadInfo& workloadInfo) const;
    };
  2. Implement the <LayerName>QueueDescriptor::Validate() function at armnn/src/backends/backendsCommon/WorkloadData.cpp. The specific validation checks that you must implement depend on the layer. For guidance on what to check, see the description of the operator in the technical documentation of the different frameworks. Some example checks are:
    • Validate the number of inputs.
    • Validate the number of outputs.
    • Validate that the number of elements in the input tensor and output tensor match.
    • Validate that the data type is supported.
    • Validate the number of dimensions in the input and output tensor.
    • Validate the input width and height.
  3. Add a virtual Create<LayerName>() function to the IWorkloadFactory in armnn/src/backends/backendsCommon/WorkloadFactory.hpp. The following code shows this addition for an example SoftmaxLayer:
    virtual std::unique_ptr<IWorkload> CreateSoftmax(const SoftmaxQueueDescriptor& descriptor,
                                                     const WorkloadInfo&           info) const;
  4. Add a LayerType switch case for the new layer in the IsLayerSupported() function, in armnn/src/backends/backendsCommon/WorkloadFactory.cpp. The following code shows this addition for an example SoftmaxLayer:
            case LayerType::Softmax:
            {
                auto cLayer = PolymorphicDowncast<const SoftmaxLayer*>(&layer);
                const TensorInfo& input = layer.GetInputSlot(0).GetConnection()->GetTensorInfo();
                const TensorInfo& output = layer.GetOutputSlot(0).GetTensorInfo();
                result = layerSupportObject->IsSoftmaxSupported(OverrideDataType(input, dataType),
                                                                OverrideDataType(output, dataType),
                                                                cLayer->GetParameters(),
                                                                reason);
                break;
            }
  5. Add a default implementation of the Create<LayerName>() function in armnn/src/backends/backendsCommon/WorkloadFactory.cpp. The following code shows this addition for an example SoftmaxLayer:
    std::unique_ptr<IWorkload> IWorkloadFactory::CreateSoftmax(const SoftmaxQueueDescriptor& /*descriptor*/,
                                                               const WorkloadInfo& /*info*/) const
    {
        return std::unique_ptr<IWorkload>();
    }
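The kind of checks listed in step 2 can be sketched in a self-contained form as follows. TensorShape here is a stand-in for illustration only; the real Validate() works on armnn::TensorInfo and throws Arm NN's own exception types:

```cpp
#include <stdexcept>
#include <vector>

// Hedged sketch of Validate()-style checks for a softmax-like workload:
// one input, one output, and matching element counts.
using TensorShape = std::vector<unsigned int>;

void ValidateSoftmaxQueue(const std::vector<TensorShape>& inputs,
                          const std::vector<TensorShape>& outputs)
{
    // Validate the number of inputs and outputs.
    if (inputs.size() != 1)
    {
        throw std::invalid_argument("SoftmaxQueueDescriptor: expected 1 input");
    }
    if (outputs.size() != 1)
    {
        throw std::invalid_argument("SoftmaxQueueDescriptor: expected 1 output");
    }
    // Validate that the number of elements in the input and output match.
    auto numElements = [](const TensorShape& shape)
    {
        unsigned int n = 1;
        for (unsigned int dim : shape) { n *= dim; }
        return n;
    };
    if (numElements(inputs[0]) != numElements(outputs[0]))
    {
        throw std::invalid_argument("SoftmaxQueueDescriptor: element count mismatch");
    }
}
```

Layer-specific checks, such as supported data types or expected tensor rank, are added in the same style.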

Add layer visitor unit tests

You must add layer visitor unit tests to IsLayerSupportedTestImpl.hpp. These unit tests must check that the Arm NN network graph supports the layer or operator. To do this check, use random values to create the relevant workload for the individual layer or operator.

To add unit tests:

  1. Add your layer to armnn/src/backends/backendsCommon/test/IsLayerSupportedTestImpl.hpp. Every entry in the armnn::LayerType enum must be accounted for in this file. Depending on its constructor, a layer is added either in the form for constructors that take one parameter, name, or in the form for constructors that take two parameters, descriptor and name.
  2. Add your layer to src/armnn/test/TestNameOnlyLayerVisitor.hpp or src/armnn/test/TestNameAndDescriptorLayerVisitor.hpp, depending on your layer constructor parameters.
    1. If your layer has no descriptor, add it to TestNameOnlyLayerVisitor.hpp.
    2. If your layer has a descriptor, add it to src/armnn/test/TestNameAndDescriptorLayerVisitor.hpp.
    Arm recommends using the *_1_PARAM macros for layers that do not have a descriptor and the *_2_PARAM macros for layers that do have a descriptor.
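The one-parameter and two-parameter test patterns can be sketched with stand-in macros. The macro names and layer stand-ins below are hypothetical, chosen only to show how an X-macro-style helper generates one test per layer; the real *_1_PARAM and *_2_PARAM macros in the Arm NN test sources differ in spelling and detail:

```cpp
#include <string>

// Hedged stand-ins for layers under test (not the real Arm NN classes).
struct AbsLayer
{
    explicit AbsLayer(const char* n) : name(n) {}
    std::string name;
};
struct SoftmaxDescriptor { float m_Beta = 1.0f; };
struct SoftmaxLayer
{
    SoftmaxLayer(const SoftmaxDescriptor& d, const char* n) : desc(d), name(n) {}
    SoftmaxDescriptor desc;
    std::string name;
};

// Hypothetical macro for layers whose constructor takes one parameter, name.
#define TEST_LAYER_1_PARAM(LayerName)                \
    bool Test##LayerName##Layer1Param()              \
    {                                                \
        LayerName##Layer layer("test");              \
        return layer.name == "test";                 \
    }

// Hypothetical macro for layers whose constructor takes two parameters,
// descriptor and name.
#define TEST_LAYER_2_PARAM(LayerName)                \
    bool Test##LayerName##Layer2Param()              \
    {                                                \
        LayerName##Descriptor desc;                  \
        LayerName##Layer layer(desc, "test");        \
        return layer.name == "test";                 \
    }

TEST_LAYER_1_PARAM(Abs)     // descriptor-less layer: 1-param form
TEST_LAYER_2_PARAM(Softmax) // layer with a descriptor: 2-param form
```

One macro invocation per layer keeps the test list in step with the armnn::LayerType enum.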