Running Advanced AI on Arm Embedded Systems
This workshop is a practical, code-along introduction to running AI at the edge on Arm. Using the Arm-based Alif Ensemble E8 DevKit with Ethos-U NPUs, we'll explore how AI models and frameworks are prepared, optimized, and executed on embedded hardware. Along the way, you'll see how ExecuTorch, optimized for Arm platforms, fits into the deployment workflow to enable efficient inference on constrained devices, with an emphasis on portability, performance, and real-world deployment considerations.
In this webinar, you'll learn:
- How AI models are prepared and deployed for edge inference on Arm-based embedded systems using Cortex CPUs and Ethos-U NPUs
- How ExecuTorch fits into the Arm AI software stack to enable efficient, portable inference on constrained devices
- Key considerations for optimizing performance and deploying AI workloads on real embedded hardware, including acceleration, resource constraints, and portability
Watch the on-demand session below.
Speakers
Gabriel Peterson
Senior ML Engineer and Arm Developer Evangelist
@gabriel_nr on Discord
Gabriel Peterson is a Developer Evangelist at Arm focused on helping developers adopt machine learning and AI on Arm platforms. He brings experience across gaming, graphics, and AI to enable practical, high-performance solutions.
Dominica Amanfo
Edge AI Developer Strategist
@dominica_aa on Discord
Dominica works across product, engineering, and ecosystem partners to make complex Arm platforms more accessible and efficient for developers. She streamlines the path from prototype to production, enabling developers to confidently build and scale edge AI applications and positioning Arm as a trusted foundation for edge AI development.
