Edge AI on Arm Cortex-M and Ethos-U
This page explores how developers can run efficient AI on constrained devices using Cortex-M processors, Ethos-U NPUs, and Helium vector extensions. It’s designed for application and embedded developers who want to build and deploy high-performance, low-power AI at the edge.
Get Started
Quick Start Guide (Zephyr RTOS)
Deploy edge AI on Cortex-M with Zephyr.
GitHub Setup Guide (FreeRTOS)
Run edge AI on Corstone with FreeRTOS.
Learn and Code
Apply your learnings by exploring the SSCMA (Seeed’s SenseCraft Model Assistant) Model Zoo, where you can train models for your applications and follow tutorials tailored to your interests and project needs.
Take the next step by exploring these Cortex-M Learning Paths, where you can dive into examples covering YOLO, TinyML, OCR (optical character recognition), and image classification, building on your applications and expanding your edge AI expertise.
YOLO on Himax
Run object detection with YOLO on a low-power Himax board.
AVH PPOCR
Deploy OCR recognition with PaddlePaddle on Arm Virtual Hardware.
TinyML on Arm
Build and train your own TinyML model, step by step.
Image Classification with STM32Cube.AI
Implement neural networks for image classification tasks on STM32 MCUs.
LiteRT with STM32Cube.AI
Deploy LiteRT models on STM32 microcontrollers.
More ML on Cortex-M
Browse the full catalog of Cortex-M machine learning paths.
Tools
Optimize, convert, and evaluate your edge AI models with tools built for Cortex-M and Ethos-U NPUs.
Ethos-U Vela Compiler
Optimizes models for Ethos-U NPUs (U55/U65/U85), converting LiteRT models into an NPU-friendly format.
ML Embedded Evaluation Kit
Turnkey kit to evaluate ML workloads on Cortex-M with Ethos-U NPUs.
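As an illustration of the Vela workflow, a typical invocation looks like the following. The model filename is a placeholder, and the accelerator configuration shown is one of several supported values (run `vela --help` for the full list in your installed version):

```shell
# Compile an int8-quantized LiteRT/TFLite model for an Ethos-U55 with 128 MACs.
# "model_int8.tflite" is a placeholder for your own quantized model.
vela model_int8.tflite \
    --accelerator-config ethos-u55-128 \
    --output-dir ./vela_out
# Vela writes the NPU-optimized model alongside a performance estimate report
# into ./vela_out.
```

Vela requires an already-quantized (int8) model as input; quantize with your framework's tooling before compiling.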
What's Next?

Arm Developer Program
Join the Arm Developer Program and connect with a global community of developers and Arm engineers to build better apps on Arm. Get early access to tools, technical content, workshops, and support to help you debug, optimize, and ship your projects.
Course: Optimizing Gen AI on Arm Processors
Learn to optimize generative AI workloads on Arm for mobile, edge, and cloud through hands-on labs and lectures.
Arm Developer Labs
Tackle real-world Edge AI challenges with hands-on projects – perfect for building, learning, and prototyping.

Arm Developer Council
Join the Arm Developer Council to share feedback, help shape the tools and platforms you use — and receive a voucher for your time.
Frequently Asked Questions
Why is it hard to run AI on edge devices?
Running machine learning models on low-power, resource-constrained devices is difficult because traditional cloud AI workflows don’t scale efficiently to the edge. Developers must balance power, performance, and memory limits while ensuring real-time inference. Explore how Arm solves these challenges in the launchpad.
How can embedded developers build edge AI skills?
Embedded developers can upskill in areas like model optimization, quantization, and deployment using frameworks such as LiteRT and ExecuTorch. Arm provides tutorials and tools to make this transition easier. Explore Resources.
What do Cortex-M, Helium, and Ethos-U each provide for AI workloads?
Arm’s Cortex-M processors with Helium enable high-performance vector processing for AI, DSP, and vision tasks, while Ethos-U NPUs deliver efficient on-device AI acceleration for real-time inference. Explore Arm’s Edge AI Resources.
Which tools help deploy models to Ethos-U NPUs?
Developers can use the Vela Compiler to optimize LiteRT models for Ethos-U NPUs. Arm’s ecosystem also includes Keil MDK, VS Code extensions, ExecuTorch, LiteRT, and the Arm Model Zoo for quick experimentation and deployment. Explore Arm’s Edge AI Tools and the launchpad above.
Where should I start?
Start with Arm’s Edge AI quick start guide to set up your environment. Then explore all Edge AI resources – guides, learning paths, tools and libraries, and example applications – to support your build.
