Join our series of AI Virtual Tech Talks exploring the latest AI technologies. AI Virtual Tech Talks are hosted by experts from Arm and by speakers from partner companies in the AI Partner Program. Each session features a different speaker discussing the latest trends in AI development, including deployment, optimization, best practices, and new technologies shaping the industry.

Register for the live talks or jump to previous recordings.

Schedule



Small is big: Making Deep Neural Nets faster and energy-efficient on low power hardware

Date: November 3, 2020
Time: 8am PST / 4pm BST / 11pm CST
Speaker: Davis Sawyer, Co-founder and CPO, Deeplite

Register

Abstract

Deep Neural Networks (DNNs) are the de facto tool for solving complex problems, from DNA sequencing to facial recognition. However, DNNs require extensive computing power, memory, and expertise to deploy in real-world environments. We will discuss various approaches to DNN model optimization for multiple objectives, such as inference latency, accuracy, and model size, to make deep learning more applicable to Arm microcontrollers and low-power devices. Please come with your questions: we'll have an open-mic session so you can engage with the speaker.
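For a taste of the kind of optimization the talk covers, here is a minimal sketch of post-training dynamic quantization in PyTorch, one common way to trade a small amount of accuracy for a much smaller, faster model. The network is a hypothetical stand-in, not a Deeplite model:

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a trained network.
    model = nn.Sequential(
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Linear(64, 10),
    )

    # Post-training dynamic quantization: weights are stored as int8 and
    # activations are quantized on the fly at inference time.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

Storing int8 instead of float32 weights gives roughly a 4x reduction in weight storage for the quantized layers, which is exactly the kind of size/latency/accuracy trade-off the session explores in more depth.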



The Smart City in Motion - AI in intelligent transportation 

Date: November 17, 2020
Time: 8am PST / 4pm BST / 11pm CST
Speakers: Philip Bockrath, Vice President, Wireless Technology, Clever Devices; Markus Levy, Director of Machine Learning Technologies, NXP Semiconductors; David Steele, Director of Innovation, Arcturus

Register

Abstract

Smart cities move millions of people every day, and they do it by relying on Intelligent Transportation Systems (ITS). This AI Tech Talk brings together a powerhouse team including NXP, Arcturus, and Clever Devices, North America’s largest ITS solution provider. The session will focus on edge AI in public transportation, including real-world use cases in public safety and operations. Learn first-hand from the experts how solutions are implemented, and catch a glimpse of where edge AI is heading. The talk will also offer a sneak peek at Arm’s latest offering focused on powering this new wave of edge AI. Join us for a rare combination of implementation insights and industry vision.


Bringing Spatial AI to embedded devices

Date: December 8, 2020
Time: 8am PST / 4pm BST / 11pm CST
Speakers: Owen Nicholson and Paul Brasnett, SLAMcore

Register

Abstract

Spatial AI is the term we use to describe the technologies necessary to give an autonomous robot or drone full awareness of its pose, combined with a geometric and semantic understanding of the world around it. At the core of Spatial AI is VI-SLAM (visual-inertial simultaneous localisation and mapping), which fuses image and IMU sensor data to understand the world. The algorithms that perform VI-SLAM are computationally demanding and challenging to implement on embedded processors such as Arm Cortex-A class processors.

In this talk we will:

  • Explore the need and opportunities for Spatial AI in robotics and drone applications
  • Walk through the key algorithm stages of a VI-SLAM system
  • Examine the processing requirements of those key stages
  • Show how to get up and running with the SLAMcore Spatial AI SDK on Arm-based systems
  • Demonstrate Spatial AI running on Arm processors
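To make the geometry concrete, here is a minimal numpy sketch of the projection step at the heart of any visual SLAM pipeline: mapping a 3-D world point into pixel coordinates given an estimated camera pose. The intrinsics and pose values are hypothetical; in a real VI-SLAM system, the rotation R and translation t are what the algorithm continually estimates by fusing camera and IMU data:

    import numpy as np

    # Hypothetical camera intrinsics: focal lengths and principal point, in pixels.
    K = np.array([[450.0,   0.0, 320.0],
                  [  0.0, 450.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # Estimated camera pose: rotation R and translation t map world
    # coordinates into the camera frame (placeholder values).
    R = np.eye(3)
    t = np.array([0.0, 0.0, -2.0])

    def project(point_world):
        """Project a 3-D world point into pixel coordinates."""
        p_cam = R @ point_world + t      # world frame -> camera frame
        uvw = K @ p_cam                  # camera frame -> image plane
        return uvw[:2] / uvw[2]          # perspective division

    print(project(np.array([0.5, -0.2, 3.0])))   # -> [545. 150.]

A full VI-SLAM system runs this kind of geometry, plus feature tracking and optimisation, for every frame, which is why the compute budget on embedded Arm processors matters so much.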

Machine learning for embedded systems at the edge

Speakers: Anthony Huereca, Systems Engineer, Edge Processing, NXP; Kobus Marneweck, Senior Product Manager, Arm

Machine learning inference is impacting a wide range of markets and devices, especially low-power microcontrollers and power-constrained devices for IoT applications. These devices often have only milliwatts of power to work with, so they cannot support the power demands of traditional cloud-based approaches. By performing inference on-device, ML can be enabled on these IoT endpoints, delivering greater responsiveness, security, and privacy while reducing network energy consumption, latency, and bandwidth usage.

In this talk, MCU product managers and engineers from Arm and NXP will explain how developers can efficiently implement and accelerate ML on extremely low-power, low-area Cortex-M based devices using open-source software libraries and tools. The discussion includes a demo on the i.MX RT1060 crossover MCU showing how to create and deploy ML applications at the edge.
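As an illustration of the kind of workflow such a demo involves, here is a minimal sketch of converting a trained Keras model to a fully int8-quantized TensorFlow Lite flatbuffer, the usual format for deployment on Cortex-M devices. The model and calibration data are hypothetical stand-ins:

    import numpy as np
    import tensorflow as tf

    # Hypothetical stand-in for a trained Keras model.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(4),
    ])

    def representative_dataset():
        # In practice, yield calibration samples from the training data;
        # random data is used here for illustration only.
        for _ in range(100):
            yield [np.random.rand(1, 32).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    tflite_model = converter.convert()
    open("model_int8.tflite", "wb").write(tflite_model)

Full-integer quantization matters on MCUs both for memory footprint and because the fastest kernels, such as those in CMSIS-NN, operate on int8 data.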

tinyML development with TensorFlow Lite for Microcontrollers using CMSIS-NN and Ethos-U55

Speakers: Fredrik Knutsson, Arm; Felix Johnny, Arm

Deep Neural Networks are becoming increasingly popular in always-on endpoint devices, where developers can perform data analytics right at the source with reduced latency and energy consumption. We will introduce how TensorFlow Lite for Microcontrollers (TFLu) and its integration with CMSIS-NN maximize the performance of machine learning applications. Developers can now run larger, more complex neural networks on Arm MCUs and micro NPUs while reducing memory footprint and inference time.
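Since MCU targets typically have no filesystem, TFLu expects the model to be compiled into the firmware. Here is a small, illustrative Python snippet (file names are hypothetical) that embeds a converted .tflite flatbuffer as a C array, equivalent to running xxd -i on the file:

    # Embed a .tflite flatbuffer as a C array for a TFLu firmware build.
    data = open("model_int8.tflite", "rb").read()
    with open("model_data.h", "w") as f:
        f.write("const unsigned char g_model[] = {\n")
        f.write(",".join(str(b) for b in data))
        f.write("\n};\nconst unsigned int g_model_len = %d;\n" % len(data))

The resulting header is compiled into the application, and the TFLu interpreter reads the model directly from flash.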

Demystify artificial intelligence on Arm MCUs

Speaker: Francois de Rochebouet, Cartesiam.ai

Cartesiam will demystify artificial intelligence, and all the complexity that comes with it, for the embedded developer community. Even without any prior AI skills, developers can use Cartesiam’s NanoEdge AI Studio to quickly and easily integrate ML algorithms into a broad range of applications on Arm-based low-power microcontrollers. AI Studio also enables on-device learning of nominal behavior and can detect any variation from that behavior, even in a complex and “noisy” environment. In this session, we will demonstrate how to generate a custom learning library and use it to perform anomaly detection on an STMicroelectronics development board.

Speech recognition on Arm Cortex-M

Speakers: Vikrant Tomar and Sam Myer, Fluent.ai

In this tech talk, we will cover how Fluent.ai goes from training models in high-level libraries such as PyTorch to running those models on Arm Cortex-M4 and Cortex-M7 boards. We will talk about optimizations achieved using the Arm CMSIS-NN library, as well as neural-network optimizations we use extensively, such as quantization and custom model architectures.
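For a flavour of the quantization step, here is a minimal sketch of post-training static quantization in PyTorch using the qnnpack backend, PyTorch's quantized engine for Arm CPUs. The model and calibration data are hypothetical stand-ins, not Fluent.ai's actual pipeline:

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a small speech model over 40-dim audio features.
    model = nn.Sequential(
        torch.quantization.QuantStub(),    # marks where float -> int8 begins
        nn.Linear(40, 64),
        nn.ReLU(),
        nn.Linear(64, 12),                 # e.g. 12 keyword classes
        torch.quantization.DeQuantStub(),  # marks where int8 -> float ends
    )
    model.eval()

    # Select Arm's quantized backend and attach a quantization config.
    torch.backends.quantized.engine = "qnnpack"
    model.qconfig = torch.quantization.get_default_qconfig("qnnpack")

    prepared = torch.quantization.prepare(model)
    prepared(torch.randn(8, 40))   # calibrate observers (random data for illustration)
    quantized = torch.quantization.convert(prepared)

From a quantized model like this, int8 weights and scales can then be exported to run against integer kernels such as those in CMSIS-NN.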

Getting started with Arm Cortex-M software development and Arm Development Studio

Speakers: Pareena Verma and Ronan Synnott, Arm

Advances in processing power and machine learning algorithms enable us to run machine learning models on tiny far-edge devices. Arm’s SIMD, DSP extension, and vector processing technologies bring an unprecedented uplift in energy-efficient machine learning and signal processing performance for next-generation voice, vision, and vibration use cases.

Arm also provides a unified software toolchain for a frictionless and fast developer experience. Join this webinar to learn how to get started porting optimized ML code to Arm microcontrollers today. In this session, we will cover:

  • How to develop and debug your software with Arm Development Studio
  • How to get started with MCU software development on Arm Fast Model Systems
  • How to build and deploy an ML application with TensorFlow Lite and CMSIS-NN kernels to a Cortex-M7 device
  • How to migrate your ML code to Arm Cortex-M55, Arm’s most AI-capable Cortex-M processor

Efficient ML across Arm from Cortex-M to WebAssembly

Speaker: Jan Jongboom, CTO, Edge Impulse

In this talk, Jan Jongboom covers techniques for enabling efficient and explainable tinyML. Learn about efficient optimization from dataset to deployment across the whole spectrum of Arm hardware, from Cortex-M at 100 µW to Cortex-A mobile and edge devices with WebAssembly. The talk includes a live tutorial demonstrating how to easily get started with sensor data collection and tinyML algorithm deployment on your smartphone.


Running Accelerated ML Applications on Mobile and Embedded Devices using Arm NN

Speaker: James Conroy, Arm

Increasingly, devices are performing AI at the furthest point in the system: on edge or endpoint devices. As the industry-leading foundation for intelligent computing, Arm’s ML technologies give developers a comprehensive platform of hardware IP, software, and ecosystem support.

Arm NN is an accelerated inference engine for Arm CPUs, GPUs, and NPUs. It executes ML algorithms on-device to make predictions based on input data. Arm NN translates models from existing neural network frameworks, such as TensorFlow Lite, TensorFlow, ONNX, and Caffe, allowing them to run efficiently and without modification across Arm Cortex-A CPUs, Mali GPUs, and Ethos-N NPUs.

In this technical session, we will give you an overview of Arm NN with a focus on its plug-in framework. You will walk away with a working knowledge of how to use Arm NN to run ML models with accelerated performance on a mobile phone or a Raspberry Pi, and how to write plug-ins that extend support to new neural network processing units.
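To show the shape of that workflow, here is a minimal sketch using PyArmNN, Arm NN's Python bindings, to load a TensorFlow Lite model and prepare it for accelerated execution. The model path is a placeholder, and details may vary between Arm NN releases:

    import pyarmnn as ann

    # Parse a TensorFlow Lite model (path is hypothetical).
    parser = ann.ITfLiteParser()
    network = parser.CreateNetworkFromBinaryFile("./model.tflite")

    # Create a runtime, then optimize the network for the preferred backends:
    # CpuAcc (Neon-accelerated CPU kernels) with CpuRef as a fallback.
    runtime = ann.IRuntime(ann.CreationOptions())
    backends = [ann.BackendId("CpuAcc"), ann.BackendId("CpuRef")]
    opt_network, _ = ann.Optimize(
        network, backends, runtime.GetDeviceSpec(), ann.OptimizerOptions()
    )

    # Load the optimized network; inputs can now be bound and inference run.
    net_id, _ = runtime.LoadNetwork(opt_network)

The plug-in framework discussed in the session is what lets a new backend appear in that list of preferred backends.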

How To Reduce AI Bias with Synthetic Data for Edge Applications

Speaker: Dr. Nitin Gupta, Dori AI

Every enterprise has blind spots that impact its business because of a lack of real-time visibility into people, assets, and workflows. The majority of tasks still go unmonitored, so improving productivity is a high priority for all enterprises. AI and computer vision have enabled many enterprises to gain actionable insights through a mechanism called visual process automation (VPA), and enterprises are looking to embedded computer vision to provide the visual intelligence required for their IoT applications. The challenge with most VPA solutions is that they are inherently biased.

Recent analyses of public domain ML models have revealed that many of them are inherently biased. This bias originates from the image and video datasets that were used to originally train the model. The lack of access to volumes of quality data is the reason why many enterprises are seeing biased results from their computer vision models. Enterprises are looking for solutions to enable their ML solutions to be more accurate, robust, and unbiased.

In this talk, we will introduce a structured methodology to address the issue of data and model bias that may be inherent in the ML models you are building. We will provide insights into techniques that help answer the following questions:

  • How do you prepare an unbiased dataset?
  • What metrics are used to analyze data bias for CV datasets?
  • How do you rebalance datasets? (see the sketch after this list)
  • Can introducing data bias remove model bias?
  • How can synthetic data or data augmentation be used to enhance existing datasets?
  • What explainability metrics are used to analyze model bias for CV models?
  • How do you properly benchmark to reveal data and model bias?
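On the rebalancing question, here is a minimal numpy sketch of two common strategies for a class-imbalanced vision dataset: inverse-frequency class weights and minority-class oversampling. The label distribution is hypothetical:

    import numpy as np

    # Hypothetical labels for an imbalanced two-class CV dataset.
    labels = np.array([0] * 900 + [1] * 100)

    # Strategy 1: inverse-frequency class weights for the training loss.
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    print(dict(zip(classes, weights)))   # {0: ~0.56, 1: 5.0}

    # Strategy 2: oversample the minority class up to the majority count.
    minority_idx = np.where(labels == 1)[0]
    oversampled = np.random.choice(minority_idx, size=counts.max(), replace=True)

Synthetic data generation, one focus of this talk, goes a step further than oversampling by creating genuinely new minority-class examples rather than repeating existing ones.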

Optimizing Power and Performance For Machine Learning at the Edge - Model Deployment Overview

Speakers: Naveen Suda, Arm; Lingchuan Meng, Arm

Neural networks have seen tremendous success in interpreting signals from our physical world, such as vision, voice, and vibration. However, the unprecedented intelligence they bring only becomes significant when we deploy them on ubiquitous mobile and IoT devices. Resource constraints of hardware and diverse ML features call for a new and critical component on top of the traditional end-to-end development process: model optimization and deployment. We envision that efficient ML inference lies in the co-development of models, algorithms, software, and hardware, and model optimizations serve as an introduction to this process. The first in a planned series, this talk from our Arm engineers provides an overview of model optimization and deployment.
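As one concrete example of the model optimizations the series will dig into, here is a minimal sketch of quantization-aware training with the TensorFlow Model Optimization Toolkit; the model is a hypothetical stand-in:

    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    # Hypothetical trained Keras model.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(4),
    ])

    # Wrap the model with fake-quantization nodes so training learns
    # weights that stay accurate under int8 inference.
    qat_model = tfmot.quantization.keras.quantize_model(model)
    qat_model.compile(optimizer="adam", loss="mse")
    # qat_model.fit(train_data, ...)  # fine-tune, then convert with TFLiteConverter

Unlike post-training quantization, quantization-aware training lets the model adapt to quantization error during fine-tuning, which usually recovers most of the accuracy lost when targeting int8 hardware.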