
Build Edge AI Applications on Arm

This page helps developers get started fast, with quick start guides, tools, tutorials, and example applications to build secure, reliable, and scalable AI at the edge.


Quick Start Guide

Prototype and run edge AI on Arm-based devices.

Get Started

Developer Launchpad

Run real-time AI on Cortex-M and Ethos-U.

Explore Launchpad

Libraries and Tools

Access tools and libraries for edge AI.

Access Libraries and Tools

Example Applications

See edge AI in action across devices.

Access Example Applications

Explore the Right IP

From ultra-efficient MCUs to powerful CPUs and NPUs, build, test, and deploy your AI workloads faster with Arm’s processors.

Cortex-A CPU

Cortex-M CPU

Join the Arm Developer Program

Share ideas, get expert support, and level up your projects with tools, resources, and direct access to Arm’s global developer community.

Explore the Program

Frequently Asked Questions

What is an NPU?

An NPU (Neural Processing Unit) is a specialized hardware accelerator designed to run neural network inference efficiently by handling matrix and tensor operations, offloading this compute-intensive work from the CPU. In Arm-based systems, such as those using Ethos-U NPUs, it enables faster, lower-power on-device AI processing at the edge.
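The "matrix and tensor operations" an NPU accelerates are typically low-precision multiplies with wide accumulators. A minimal NumPy sketch of that pattern (illustrative only; an Ethos-U executes compiled command streams, not Python):

```python
import numpy as np

# int8 activations and weights, int32 accumulation: the core
# arithmetic pattern that NPUs like Ethos-U accelerate in hardware.
rng = np.random.default_rng(42)
a = rng.integers(-128, 128, size=(8, 16), dtype=np.int8)  # activations
w = rng.integers(-128, 128, size=(16, 4), dtype=np.int8)  # weights

# Accumulate in int32 so int8 products cannot overflow.
acc = a.astype(np.int32) @ w.astype(np.int32)

# A requantize step would then rescale acc back to int8
# before feeding the next layer.
```

On a CPU without dedicated acceleration, this same arithmetic competes with everything else for cycles; an NPU dedicates silicon to exactly these multiply-accumulate loops.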

Is it possible to test my edge AI application without hardware?

Yes. Arm provides several ways to test edge AI applications without physical hardware. You can use an Arm Fixed Virtual Platform (FVP) to simulate Cortex-M and Ethos-U NPU behavior.
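As a rough illustration, a Corstone-300 FVP (Cortex-M55 + Ethos-U55) can run a firmware image from the command line. The binary name and `-C` configuration options below are examples; the exact flags depend on the FVP package and version, so check the documentation bundled with your download:

```shell
# Run a built application image on the Corstone-300 FVP (headless).
FVP_Corstone_SSE-300_Ethos-U55 \
    -a build/hello_edge_ai.axf \
    -C mps3_board.visualisation.disable-visualisation=1
```

The simulated UART output appears on the console, so the same firmware you later flash to a board can be exercised entirely on a development machine.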

What development boards can I use to get started?

Arm's extensive ecosystem offers multiple development boards for edge AI. Popular options include the Alif Ensemble E8 DevKit (Cortex-A32 and Cortex-M55 cores with Ethos-U85 and Ethos-U55 NPUs), Grove Vision AI V2, OpenMV, and Raspberry Pi 5.

What ML frameworks are supported on Arm platforms?

Arm platforms support models from PyTorch (via ExecuTorch), LiteRT (formerly TensorFlow Lite), ONNX, and PaddlePaddle, with support for additional machine learning frameworks expanding as the ecosystem grows.

What's the difference between Cortex-M and Cortex-A for AI?

Cortex-M processors are optimized for ultra-low-power AI tasks such as sensor processing and always-on inference, typically running on an RTOS. They are designed to extract maximum performance within constrained memory and compute resources, especially when paired with Ethos-U NPUs for efficient neural network acceleration. In contrast, Cortex-A processors provide higher performance and larger memory capacity, making them better suited for more complex AI workloads like vision or speech processing under Linux.

Can I run my existing ML models on Arm devices?

Many existing ML models can run on Arm devices, as Arm platforms are compatible with major frameworks like PyTorch (via ExecuTorch), LiteRT (formerly TensorFlow Lite), ONNX, and PaddlePaddle. However, deployment depends on factors such as model size, memory footprint, supported operators, and available compute resources, which may require optimization or quantization for efficient execution on embedded and edge devices.
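The quantization mentioned above usually means mapping float32 weights onto int8 with a scale and zero point. A minimal sketch of affine post-training quantization, assuming a single per-tensor scale (real toolchains such as the LiteRT converter use per-channel scales and calibration data):

```python
import numpy as np

def quantize_int8(weights):
    """Map a float32 tensor onto int8 with an affine scale/zero-point.
    Hypothetical helper for illustration, not a production converter."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0          # spread the range over 256 steps
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127)
    return q.astype(np.int8), scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 values from the int8 encoding."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, scale, zp = quantize_int8(w)
err = float(np.abs(dequantize(q, scale, zp) - w).max())
```

Shrinking weights to int8 cuts the memory footprint roughly 4x and lets integer-only accelerators such as Ethos-U execute the model, at the cost of a small, bounded reconstruction error (at most about half a quantization step per value).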