Join our series of AI Virtual Tech Talks on developing the latest AI technologies. AI Virtual Tech Talks are hosted by experts from Arm and speakers from partner companies in the AI Partner Program. Each session features a different speaker discussing the latest trends in AI development, including deployment, optimization, best practices, and new technologies shaping the industry.

Register for the live talks or jump to previous recordings.

AI Virtual Tech Talks schedule

Demystify artificial intelligence on Arm MCUs

Date: July 14, 2020
Time: 8am PDT / 4pm BST / 11pm CST

Francois de Rochebouet, CTO and co-founder of Cartesiam


Cartesiam will demystify artificial intelligence, and all the complexity that comes with it, for the embedded developer community. Even without any prior AI skills, developers can use Cartesiam’s NanoEdge AI Studio to quickly and easily integrate ML algorithms into a broad range of applications on Arm-based low-power microcontrollers. AI Studio also enables on-device learning of nominal behavior and can detect any variation from this behavior, even in a complex and “noisy” environment. In this session, we will demonstrate how to generate a custom learning library and use it to perform anomaly detection on an STMicroelectronics development board.


  • Cartesiam solution overview
  • Creation of a machine learning library using NanoEdge AI Studio
  • Integration into the main program running on an ST development board with Arm MCU
  • Live fan vibration analysis and anomaly detection


Speech recognition on Arm Cortex-M

Date: July 28, 2020
Time: 8am PDT / 4pm BST / 11pm CST

Vikrant Tomar, founder and CTO, Fluent.ai
Sam Myer, lead developer, Fluent.ai

Vikrant is the founder and CTO of Fluent.ai Inc. He is a scientist and executive with nearly 10 years of experience in speech recognition and machine and deep learning. He obtained his PhD in automatic speech recognition at McGill University, Canada, where he worked on manifold learning and deep learning approaches for acoustic modeling. He has also worked at Nuance Communications Inc. and Vestec Inc. as a Research Scientist.

Sam Myer is the lead developer at Fluent.ai Inc., where his responsibilities include Fluent's embedded speech recognition engine. He has an M.Sc. in signal processing from Queen Mary University of London and a B.Sc. in computer science from McGill University. Sam has nearly 15 years of software development experience across multiple cities, including New York, Berlin, and Montreal.


In this tech talk, we will cover how to go from training models in high-level libraries such as PyTorch to running those models on Arm Cortex-M4 and Cortex-M7 boards. We will discuss the optimizations achieved using the Arm CMSIS-NN library, as well as neural network optimizations we use extensively, such as quantization and custom model architectures.
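
To make the quantization step concrete, here is a minimal sketch of the affine (asymmetric) int8 scheme commonly used when shrinking trained models for Cortex-M targets. This is an illustration of the general technique, not the speakers' actual pipeline; the function names and toy weight values are hypothetical.

```python
# Illustrative sketch of post-training affine int8 quantization: map a
# floating-point range onto int8 via a scale and zero-point, then recover
# approximate values by dequantizing. Not any vendor's actual pipeline.

def quantize_params(xmin, xmax, qmin=-128, qmax=127):
    """Derive scale and zero-point mapping [xmin, xmax] onto int8."""
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)  # range must include 0.0
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = round(qmin - xmin / scale)
    return scale, int(zero_point)

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # saturate to int8 range

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

# Hypothetical weight values for demonstration.
weights = [-0.42, 0.0, 0.17, 0.91]
scale, zp = quantize_params(min(weights), max(weights))
q = [quantize(w, scale, zp) for w in weights]
recovered = [dequantize(v, scale, zp) for v in q]
# Each recovered value lies within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, recovered))
```

Storing weights as int8 instead of float32 cuts memory by 4x, which is the main reason quantization features so prominently in MCU deployment workflows.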


Getting started with Arm Cortex-M software development and Arm Development Studio

Date: August 11, 2020
Time: 8am PDT / 4pm BST / 11pm CST

Pareena Verma, Principal Solutions Architect, Arm
Ronan Synnott, Principal Solutions Architect, Arm

Pareena is a Solutions Architect in the Development Solutions Group at Arm. As a solutions architect at Arm, and previously at Carbon Design Systems, she has worked with Arm’s partners around the world to design system-level virtual prototyping solutions for early IP evaluation, performance analysis, and software development. She has worked with software developers and SoC architects on numerous projects involving cycle models, fast models, compilers, debuggers, and performance analysis tools for virtual prototyping.

Ronan Synnott is a Solutions Architect within Arm’s Development Solutions Group, educating developers on all aspects of Arm’s development tool offerings, ensuring that they select the right portfolio of products for their specific needs, and enabling the most productive use of the tools together. With more than 22 years of experience in a variety of software-focused positions around the globe, Ronan has seen first-hand how software development, and the tools used for it, have evolved over time.


Advances in processing power and machine learning algorithms enable us to run machine learning models on tiny far-edge devices. Arm’s SIMD, DSP extension, and vector processing technologies bring an unprecedented uplift in energy-efficient machine learning and signal processing performance for next-generation voice, vision, and vibration use cases.

Arm also provides a unified software toolchain for a frictionless and fast developer experience. Join this webinar to learn how to get started today porting optimized ML code to Arm microcontrollers. In this session, we will cover:

  • How to develop and debug your software with Arm Development Studio
  • How to get started with MCU software development on Arm Fast Model Systems
  • How to build and deploy an ML application with TensorFlow Lite and CMSIS-NN kernels to a Cortex-M7 device
  • How to migrate your ML code to Arm Cortex-M55, Arm’s most AI-capable Cortex-M processor


Efficient ML across Arm from Cortex-M to WebAssembly

Date: August 25, 2020
Time: 8am PDT / 4pm BST / 11pm CST

Jan Jongboom, CTO, Edge Impulse


In this talk, Jan Jongboom covers techniques for enabling efficient and explainable TinyML. Learn about efficient optimization from dataset to deployment across the whole spectrum of Arm hardware, from Cortex-M devices running at 100 µW to Cortex-A mobile and edge devices with WebAssembly. The session includes a live tutorial demonstrating how to easily get started with sensor data collection and TinyML algorithm deployment on your smartphone.


Machine learning for embedded systems at the edge

Speakers: Anthony Huereca, Systems Engineer, Edge Processing, NXP; Kobus Marneweck, Senior Product Manager, Arm

Machine learning inference is impacting a wide range of markets and devices, especially low-power microcontrollers and power-constrained devices for IoT applications. These devices often have power budgets of only milliwatts and therefore cannot meet the power demands of traditional cloud-based approaches. By performing inference on-device, ML can be enabled on these IoT endpoints, delivering greater responsiveness, security, and privacy while reducing network energy consumption, latency, and bandwidth usage.

This talk between Arm and NXP's MCU product managers and engineers will explain how developers can efficiently implement and accelerate ML on extremely low-power, low-area Cortex-M based devices with open-source software libraries and tools. The discussion will include a demo on the i.MX RT1060 crossover MCU to show how to create and deploy ML applications at the edge.

tinyML development with TensorFlow Lite for Microcontrollers using CMSIS-NN and Ethos-U55

Speakers: Fredrik Knutsson, Arm; Felix Johnny, Arm

Deep neural networks are becoming increasingly popular in always-on endpoint devices. Developers can perform data analytics right at the source, with reduced latency as well as energy consumption. We will introduce how TensorFlow Lite for Microcontrollers (TFLu) and its integration with CMSIS-NN maximize the performance of machine learning applications. Developers can now run larger, more complex neural networks on Arm MCUs and microNPUs while reducing memory footprint and inference time.
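
The speedup from kernels like those in CMSIS-NN comes from running the inner loops on int8 values with a wide integer accumulator, then requantizing the result. The following toy sketch shows the idea behind a quantized fully-connected (dot-product) operator; it is a hypothetical simplification, not the CMSIS-NN source, and real kernels use fixed-point multipliers and SIMD instructions rather than a floating-point scale.

```python
# Toy sketch of an int8 fully-connected kernel in the style of CMSIS-NN/TFLu:
# int8 inputs and weights, an int32 accumulator, then requantization back to
# int8. Illustrative only; real kernels use fixed-point multipliers and SIMD.

def fc_int8(inputs, weights, bias, in_zp, out_zp, scale):
    """inputs, weights: lists of int8 values; bias: int32; scale: the
    combined (input_scale * weight_scale / output_scale) factor, kept as
    a float here for clarity."""
    acc = bias
    for x, w in zip(inputs, weights):
        acc += (x - in_zp) * w          # int8 * int8, accumulated in int32
    q = round(acc * scale) + out_zp     # requantize to the output int8 grid
    return max(-128, min(127, q))       # saturate

# Hypothetical quantized activations and weights for demonstration.
inputs  = [12, -45, 127, 3]
weights = [25, -13, 7, 90]
out = fc_int8(inputs, weights, bias=100, in_zp=0, out_zp=-5, scale=0.005)
assert -128 <= out <= 127
```

Because the accumulator stays in integer arithmetic throughout, the same computation maps directly onto Cortex-M DSP instructions, which is where the reduced inference time comes from.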