Arm NN is an inference engine for Arm CPUs, GPUs, and NPUs. Arm NN supports models created with the TensorFlow Lite and ONNX frameworks.
This guide shows you how to download and configure Arm NN from start to finish, so that you can use the TensorFlow Lite and ONNX frameworks with Arm NN. Alternatively, you can use our Debian package, which is the easiest way to install Arm NN and does not require you to build everything from source. You can find more details about installing Arm NN using the Debian package in the
InstallationViaAptRepository.md file in the Arm NN repository. This guide does not currently cover how to include Arm NN as part of the TensorFlow Lite delegate runtime.
Note: Arm NN deprecated support for the Quantizer, the Caffe parser, and the TensorFlow parser in the 21.02 release. These will be removed in the 21.05 release.
Before you begin
Your platform or board must have:
- An Armv7-A or Armv8-A CPU, and optionally an Arm Mali GPU using the OpenCL driver
- At least 4GB of RAM
- At least 1GB of free storage space
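You can quickly confirm these requirements from a terminal on your board. The commands below are a minimal sketch using standard Linux utilities; exact output formats vary by distribution.

```shell
# CPU architecture: expect armv7l (Armv7-A) or aarch64 (Armv8-A)
uname -m

# Total RAM: the "Mem:" line should show at least 4GB
free -h

# Free storage in the current directory: the "Avail" column should show at least 1GB
df -h .
```

If `uname -m` reports something other than an Arm architecture, you are likely cross-compiling from a host machine rather than building natively on the board.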
Before you configure and build your environment, you must install the following tools on your platform or board:
- A Linux distribution
- Git. Arm tests Git 2.17.1. Other versions might work
- SCons. Arm tests SCons 2.4.1 on Ubuntu 16.04 and SCons 2.5.1 on Debian. Other versions might work
- CMake. Arm tests CMake 3.5.1 on Ubuntu 16.04 and CMake 3.7.2 on Debian. Other versions might work
- GNU Wget. Arm tests GNU Wget 1.17.1 built on GNU/Linux. Other versions might work
- UnZip. Arm tests UnZip 6.00. Other versions might work
- xxd. Arm tests xxd V1.10. Other versions might work
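Before starting a build, it is worth checking that each of these tools is on your PATH. The loop below is a simple presence check, not part of the official Arm NN instructions; version-reporting flags differ between tools (for example, `git --version` but `xxd -v`), so query versions individually.

```shell
# Report which required build tools are installed and where they live.
for tool in git scons cmake wget unzip xxd; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")"
  else
    echo "$tool: MISSING - install it before building Arm NN"
  fi
done
```

On Debian-based distributions, any missing tools can typically be installed with `apt`, for example `sudo apt install git scons cmake wget unzip xxd`.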
We estimate that you need about 3 to 4 hours to complete the instructions in this guide.