Arm NN is a free-of-charge inference engine for CPUs, GPUs, and NPUs. It bridges the gap between existing neural network frameworks and the underlying hardware IP, translating networks from frameworks such as TensorFlow and Caffe so that they run efficiently, without modification, on Arm Cortex-A CPUs, Arm Mali GPUs, and Arm Ethos NPUs.
About Arm NN SDK
Arm NN SDK is a set of open-source Linux software and tools that enables machine learning workloads on power-efficient devices. It provides a bridge between existing neural network frameworks and power-efficient Cortex-A CPUs, Arm Mali GPUs and Arm Ethos NPUs.
The latest release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX. Arm NN takes networks from these frameworks, translates them to the internal Arm NN format and then, through the Compute Library, deploys them efficiently on Cortex-A CPUs and, where present, on Mali GPUs such as the Mali-G71 and Mali-G72.
Arm NN for Android
Also available is Arm NN for NNAPI, Google’s interface for accelerating neural networks on Android devices, made available in Android O. By default, NNAPI runs neural network workloads on the device’s CPU cores, but also provides a Hardware Abstraction Layer (HAL) that can target other processor types, such as GPUs. Arm NN for Android NNAPI provides this HAL for Mali GPUs. A further release adds support for Arm Ethos-N NPUs.
Arm support for Android NNAPI gives a performance boost of more than 4x.
Download Arm NN for Android sources.
Arm NN performance relative to other NN frameworks
- Arm NN open-source collaboration enables optimal third-party implementations
- Deployed in multiple production devices (more than 250 million units)
Support for Cortex-M CPUs
Machine learning support for Cortex-M microcontrollers is provided by TensorFlow Lite Micro. Further optimization is available via CMSIS-NN, a collection of efficient neural network kernels developed to maximize the performance and minimize the memory footprint of neural networks on Cortex-M processor cores.
Download CMSIS-NN.
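To illustrate what such a kernel computes, here is a portable reference for an 8-bit (q7) fully connected layer, the kind of operation CMSIS-NN ships hand-optimized versions of (e.g. `arm_fully_connected_q7`). The function name and signature below are illustrative, not CMSIS-NN's API; the real kernels use Cortex-M DSP instructions to perform the same arithmetic far faster.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Reference q7 fully connected layer: out = saturate((W*x + (bias << bias_shift)) >> out_shift).
// weights is row-major, one row of input.size() values per output neuron.
std::vector<int8_t> fully_connected_q7(const std::vector<int8_t>& input,
                                       const std::vector<int8_t>& weights,
                                       const std::vector<int8_t>& bias,
                                       int bias_shift, int out_shift)
{
    const size_t rows = bias.size();
    const size_t cols = input.size();
    std::vector<int8_t> out(rows);
    for (size_t r = 0; r < rows; ++r)
    {
        // Accumulate in 32 bits to avoid overflow, as the q7 kernels do.
        int32_t acc = static_cast<int32_t>(bias[r]) << bias_shift;
        for (size_t c = 0; c < cols; ++c)
        {
            acc += static_cast<int32_t>(input[c]) * weights[r * cols + c];
        }
        // Shift back to the q7 output format and saturate to [-128, 127].
        acc >>= out_shift;
        out[r] = static_cast<int8_t>(std::clamp<int32_t>(acc, -128, 127));
    }
    return out;
}
```

The fixed-point shifts in place of floating-point scales are what keep the memory footprint and cycle count low enough for microcontroller-class cores.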
Arm NN future roadmap
Future releases of Arm NN will add support for further machine learning frameworks as inputs, and further types of processor core as targets. This includes processor cores and accelerators from Arm's partners, subject to the availability of suitable extensions.
Webinar - Project Trillium: Optimizing ML Performance for any Application
Project Trillium is a suite of Arm IP designed to deliver scalable ML and neural network functionality at any point on the performance curve, from sensors, to mobile, and beyond.