Now that you have built your environment and parsers for Arm NN, you are ready to begin programming with Arm NN and using it with your models.
Arm also provides Python bindings for our parsers and Arm NN, which together we call PyArmNN. This gives you the convenience of writing your application either in C++ using the Arm NN library or in Python using PyArmNN. You can find tutorials on how to use our parsers in the Arm NN documentation; the latest version is in the wiki section of the Arm NN GitHub repository. If you would like a further challenge, you can follow the Accelerating ML Inference on Raspberry Pi With PyArmNN tutorial to learn how to classify an image as Fire or Non-Fire.
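As a rough illustration of the Python route, the sketch below parses a TensorFlow Lite model and loads it into the Arm NN runtime with PyArmNN. It is a minimal sketch, assuming PyArmNN is installed; the model path `model.tflite` and the backend list are illustrative values, not part of this guide.

```python
# Minimal PyArmNN sketch (assumes pyarmnn is installed; 'model.tflite'
# is a hypothetical model path used for illustration only).
try:
    import pyarmnn as ann
    HAVE_PYARMNN = True
except ImportError:
    HAVE_PYARMNN = False  # PyArmNN not available; the sketch below still parses


def load_and_optimize(model_path="model.tflite"):
    # Parse the TensorFlow Lite model into an Arm NN network
    parser = ann.ITfLiteParser()
    network = parser.CreateNetworkFromBinaryFile(model_path)

    # Create a runtime and optimize the network, preferring the
    # accelerated CPU backend with the reference backend as a fallback
    runtime = ann.IRuntime(ann.CreationOptions())
    preferred_backends = [ann.BackendId("CpuAcc"), ann.BackendId("CpuRef")]
    opt_network, _ = ann.Optimize(network, preferred_backends,
                                  runtime.GetDeviceSpec(),
                                  ann.OptimizerOptions())

    # Load the optimized network; net_id identifies it for later
    # EnqueueWorkload (inference) calls
    net_id, _ = runtime.LoadNetwork(opt_network)
    return runtime, net_id
```

From here, inference proceeds by binding input and output tensors and calling `EnqueueWorkload`, as shown in the PyArmNN tutorials referenced above.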
Arm NN also provides the armnnDelegate library for accelerating certain TensorFlow Lite operators on Arm hardware. The library performs this acceleration by providing the TensorFlow Lite interpreter with an alternative implementation of these operators via its delegation mechanism. This library is our recommended way to accelerate TensorFlow Lite models.
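The delegation mechanism can be sketched from the TensorFlow Lite Python API as follows. This is a sketch under stated assumptions: it assumes `libarmnnDelegate.so` has been built and is on the loader path, and `model.tflite` and the delegate option values are illustrative.

```python
# Sketch: handing the Arm NN delegate to the TFLite interpreter
# (assumes tflite_runtime and a built libarmnnDelegate.so; the model
# path and option values are illustrative assumptions).
try:
    import tflite_runtime.interpreter as tflite
except ImportError:
    tflite = None  # tflite_runtime not installed in this environment


def make_armnn_interpreter(model_path="model.tflite",
                           delegate_path="libarmnnDelegate.so"):
    # load_delegate passes the shared library to TFLite's delegation
    # mechanism: operators the delegate supports run through Arm NN,
    # while the rest fall back to the default TFLite kernels.
    armnn_delegate = tflite.load_delegate(
        delegate_path,
        options={"backends": "CpuAcc,CpuRef"})
    return tflite.Interpreter(model_path=model_path,
                              experimental_delegates=[armnn_delegate])
```

Because unsupported operators fall back to the stock TFLite kernels, an unmodified model still runs end to end even when only part of its graph is delegated.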
Arm NN also provides a very basic example of how to use the Arm NN SDK API at
For any questions about the Arm NN guides, open an issue on the Arm NN GitHub repository, referencing the specific guide.