Join our series of AI Virtual Tech Talks to discover the latest in AI trends, technologies, and best practices. Hosted by experts from Arm and partner companies in the AI Partner Program, each session features a different speaker covering AI development topics such as deployment, optimization, best practices, and the new technologies shaping the industry. 

Navigate to section:

Register for the live talks | View previous recordings

Schedule


Introducing NetsPresso by Nota: For Really Fast Inference on Cortex-A devices

Date: June 22, 2021
Time: 8am PDT / 4pm BST / 11pm CST
Tae-Ho Kim, Founder & CTO of Nota

Register

This talk will take you step-by-step through how Nota compressed different deep learning models on Arm devices, including ResNet18 on Cortex-A72. By leveraging NetsPresso, a model compression platform built by Nota, you can optimize deep learning models across a range of environments and use cases with no learning curve. 

You will also get a detailed look at how this technology is a key component of commercialized lightweight deep learning models for facial recognition, intelligent transportation systems, and more.

If you need to deploy deep learning models on low-end devices, this talk is for you. Register today! 


Bringing Edge AI to Life - from Concept to Production

Date: July 13, 2021
Time: 8am PDT / 4pm BST / 11pm CST
Ali Osman Örs – Director, AI and ML Strategy and Technologies, Edge Processing, NXP Semiconductors
David Steele – Director of Innovation, Arcturus 

Register

Often the first step in developing an edge AI/ML application is a proof-of-concept using a simple classification task. In reality, most real-world applications require the intelligence gained from multiple models and algorithms running concurrently in real-time. For a developer, this means the path from proof-of-concept to production can change dramatically, causing a ripple effect in processing requirements, architecture and design. 

This tech talk will use practical real-world applications to illustrate how key aspects of acceleration, enablement, and tooling help address scalability to futureproof your edge AI investment. Join us for an enlightening session with practical guidance from top industry experts at NXP and Arcturus.



Easy TinyML with Arduino: taking advantage of machine learning right where things are happening

Date: July 20, 2021
Time: 8am PDT / 4pm BST / 11pm CST
Massimo Banzi, Arduino

Register

Building a small intelligent device that recognizes spoken keywords, gestures, or even people or animals is now easier than ever thanks to the combination of new hardware, out-of-the-box libraries, and support from the tinyML community. During this tech talk, Massimo Banzi, co-founder of Arduino, will show how you can easily create tinyML projects with Arduino through a series of practical examples.

  • DarwinAI: Leveraging DarwinAI’s Deep Learning Solution to Improve Production Efficiency in Manufacturing

    Manufacturing environments are actively turning to advanced automation for the purpose of increasing productivity and maximizing product quality. Industry 4.0 represents the next industrial revolution in advanced manufacturing and smart, interconnected, collaborative factories. 

    In this talk, Alex and Jake from DarwinAI will share a case study in which a global aerospace and defense manufacturer improved their circuit board production efficiency with an intelligent vision-inspection system. You’ll learn more about the technology that enabled the solution and explore the results the manufacturer achieved. 

    Watch video    Download slides

  • Imagimob: Optimized C Code Generation for Ultra-efficient tinyML Applications

    Writing C code for embedded systems is known to be slow and error-prone. The solution? Imagimob Studio. It can turn an .h5 file containing a machine learning model into C code with a single click. Optimizations for new AI accelerators and new network layers are easily added to Imagimob Studio. The generated C code can run on any platform, since no external runtime libraries are needed. This talk uses applications from real customers as examples, such as audio applications, condition monitoring, and gesture control.

    Watch video    Download slides

  • AITS: Bringing PyTorch Models to Cortex-M Processors

    This talk will take you step-by-step through how to develop a deep learning model with PyTorch. Rohit Sharma from AI Tech Systems (AITS), a leading firmware, software, and services provider for low-power IoT, endpoint, and tinyML devices, will demonstrate how PyTorch can be used on Arm Cortex-M based systems. Get an in-depth look at how to bring on-device AI applications to a multitude of verticals, including Industrial IoT, smart spaces, and much more, as AITS showcases a live demo using Arduino and STM32F07 boards.

    Watch video    Download slides

  • SensiML: Accelerate Edge AI Creation for a Competitive Edge

    SensiML enables product development teams to create truly innovative Arm Cortex-M-based smart IoT devices like never before.

    Unlike other AutoML dev tools for the edge, SensiML is the only one built with production-scale product development in mind. In this presentation we will explore the typical challenges, common but avoidable AI project failure modes, and the key tool requirements that development teams seeking to implement AI at the edge need to understand.

    What you will learn:

    • How to best align your AI tools with your team’s current skillset and planned investment in data science expertise
    • The importance of comprehensive data collection and how to properly construct labeled train/test datasets
    • How to achieve the benefits of complex AI IoT models while retaining needed code transparency and control for supportable products

    Watch video    Download slides

  • Picovoice: The First No-Code Voice AI Platform for Microcontrollers

    Speaker: Alireza Kenarsari, CEO & Founder, Picovoice

    Join Picovoice in this tech talk to learn about Shepherd, the first no-code platform for building voice interfaces on microcontrollers. Shepherd creates voice experiences that run on microcontrollers without writing a single line of code! Picovoice AI runs entirely on-device, with no internet connectivity required. Endpoint voice interfaces built with Picovoice are private, reliable, zero-latency, and cost-effective, distinguishing them from cloud-based alternatives. 

    Watch video    Download slides

  • Qeexo: Automate tinyML Development & Deployment

    Speaker: Tina Shyuan, Director of Product Marketing at Qeexo

    What’s the smallest machine learning model you’ve built? Qeexo’s is so small, it can even run on an Arm Cortex-M0+! Experience how easy it is to automate "tinyML" machine learning development for sensor modules – without having to write a single line of code!

    In this workshop-style Tech Talk, we will:

    • Give an overview of the benefits and challenges of running machine learning at the edge.
    • Walk attendees through the installation and setup of Qeexo AutoML. Please come with a Windows or Mac laptop with the Google Chrome browser installed.
    • Demo the simple data collection and visualization with our Qeexo AutoML interface for attendees to follow along to build their own classifiers.
    • Build multiple machine learning models and deploy them to an Arm Cortex-M4-powered sensor module for live-testing with just a few clicks.

    Watch video    Download slides

  • Arm Workshop: Hands on with PyArmNN for object detection 

    Speakers: Karl Fezer, Arm AI Ecosystem Evangelist & Henri Woodcock, Graduate AI Technical Evangelist

    Join the Arm AI Evangelism team for a live hands-on session with object detection: no setup needed, just a browser. Learn how to run object detection through PyArmNN's API.

    Watch video    Download slides

  • Supercharge your development time with the new Arm NN 20.11 release

    Speaker: Ronan Naughton, Senior Product Manager, Arm

    The use of machine learning in endpoint applications is becoming increasingly prevalent across a wide range of industries. As model complexity increases, achieving the best results on the underlying Arm architectures is a fundamental necessity for application developers.

    The latest release of Arm NN and the Arm Compute Library – pivotal parts of the Arm software stack – includes key new features designed to accelerate our industry-leading performance and substantially reduce your development time:

    • Arm NN Delegate for TensorFlow Lite
    • Debian packages for Arm ML software
    • Python Bindings for Arm NN
    • Model Zoo
    • Updated ML examples and documentation

    Join Arm engineers to learn how to best leverage these new features for your ML projects, including demonstrations to help you get the best ML performance, no matter what Arm device you support. This session will be followed on Feb 23rd with a deep-dive workshop where you can get hands-on with this new release. Be sure not to miss either of these opportunities to hear from and engage with Arm engineers.

    Watch video    Download slides

  • Reality AI: Building products using TinyML on Arm MCUs

    Speaker: Stuart Feffer, co-founder and CEO, Reality AI

    Reality AI is one of the leading product development environments for TinyML on Arm MCUs. Tools that generate TinyML models without code have become commonplace, but there is so much more you can do.

    We’ll show you how to use AI to drive sensor selection and placement and to determine minimum component specifications. We will also look at how to minimize the cost of data collection, and how to generate sophisticated, explainable models based on sensor data for Arm architectures - automatically. We will use case studies to explore the difference between doing projects and building products, showing examples from Reality AI Tools 4.0.

    • How to generate sophisticated TinyML models automatically
    • How to use AI to optimize sensor selection and placement
    • How to use AI to set minimum component specifications
    • How to minimize the cost of data collection

    Watch video    Download slides

  • SLAMcore: Bringing Spatial AI to embedded devices

    Speakers: Owen Nicholson & Paul Brasnett, SLAMcore

    Spatial AI is the term we use to describe the technologies necessary to give an autonomous robot or drone full awareness of its pose combined with the geometry and semantic understanding of the world around it. At the core of spatial AI is VI-SLAM (visual-inertial simultaneous localisation and mapping), which fuses image and IMU sensor data to understand the world. The algorithms to perform VI-SLAM are computationally demanding and are challenging to implement on embedded processors such as Arm A-class processors.

    In this talk we will:

    • Explore the need and opportunities for Spatial AI in robotics and drone applications
    • Outline the key algorithm stages involved in a VI-SLAM system
    • Examine the processing requirements of the key stages in VI-SLAM
    • Show how to get up and running with the SLAMcore Spatial AI SDK on Arm-based systems
    • Demonstrate Spatial AI on Arm processors

    Watch video    Download slides

  • NXP, Clever Devices and Arcturus: The Smart City in Motion - AI in intelligent transportation

    Speakers: Philip Bockrath, Clever Devices; Markus Levy, NXP Semiconductors; David Steele, Arcturus

    Smart cities move millions of people every day and they do it by relying on Intelligent Transportation Systems (ITS). This AI Tech Talk brings together a powerhouse team including NXP, Arcturus and Clever Devices, North America’s largest ITS solution provider. The session will focus on edge and AI in public transportation including real-world use cases in public safety and operations. Learn first-hand from the experts about how solutions are implemented and catch a glimpse of where edge AI is heading. The talk will also offer a sneak peek into Arm’s latest offering focused on powering this new wave of edge AI. Join us during this Tech Talk for a rare combination of implementation insights and industry vision.

    Watch video    Download slides

  • Deeplite: Small is big: making deep neural nets faster and energy-efficient on low power hardware

    Speaker: Davis Sawyer, Deeplite

    Deep Neural Networks (DNNs) are the de facto tool for solving complex problems, from DNA sequencing to facial recognition. However, DNNs require extensive computing power, memory, and expertise to deploy in real-world environments. We will discuss various approaches to DNN model optimization for multiple objectives like inference latency, accuracy, and model size to make deep learning more applicable for Arm microcontrollers and low-power devices. Please come with your questions and we'll have an open mic session to let you engage with the speaker.
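    One family of optimizations of the kind the talk surveys is magnitude pruning, which shrinks a model by zeroing its smallest weights. The sketch below is an illustrative, framework-free version of the idea (the weights, threshold, and function name are examples for this page, not Deeplite's implementation):

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    weights:  flat list of floats (e.g. one layer's parameters)
    sparsity: fraction in [0, 1) of weights to remove
    Returns (pruned_weights, mask) where mask marks kept weights with 1.
    """
    n_prune = int(len(weights) * sparsity)
    # Rank indices by absolute weight value; the smallest n_prune are dropped.
    ranked = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(ranked[:n_prune])
    mask = [0 if i in drop else 1 for i in range(len(weights))]
    pruned = [w * m for w, m in zip(weights, mask)]
    return pruned, mask

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
pruned, mask = prune_by_magnitude(w, 0.5)
# The three smallest-magnitude weights (0.01, -0.05, 0.2) are zeroed;
# the large weights 0.9, 0.4, -0.7 survive.
```

    In practice the pruned weights are stored in a sparse or compressed format, which is where the memory and latency savings come from.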

    Watch video    Download slides

  • Arm: Optimising power and performance for ML at the Edge - model deployment overview

    Speakers: Naveen Suda, Arm; Lingchuan Meng, Arm

    Neural Networks have seen tremendous success in interpreting the signals from our physical world, such as vision, voice, and vibration. However, the unprecedented intelligence brought by neural networks only becomes significant when we deploy them on ubiquitous mobile and IoT devices. Resource constraints of hardware and diverse ML features call for a new and critical component on top of the traditional end-to-end development process: model optimization and deployment. We envision that efficient ML inference lies in the co-development among models, algorithms, software, and hardware. Model optimizations serve as an introduction to this process. The first in a planned series, this talk from our Arm engineers provides an overview of model optimization and deployment.

    Watch video    Download slides

  • Dori.AI: How to reduce AI bias with synthetic data for edge applications

    Speaker: Dr. Nitin Gupta, Dori AI

    Every enterprise has blind spots that impact their business because of a lack of real-time visibility into people, assets, and workflows. The majority of tasks still go unmonitored, so improving productivity is a high priority for all enterprises. AI and computer vision have enabled many enterprises to gain actionable insights through a mechanism called visual process automation (VPA). Enterprises are looking to embedded computer vision to provide the visual intelligence required for their IoT applications. The challenge with most VPA solutions is that they are inherently biased.

    Recent analyses of public domain ML models have revealed that many of them are inherently biased. This bias originates from the image and video datasets that were used to originally train the model. The lack of access to volumes of quality data is the reason why many enterprises are seeing biased results from their computer vision models. Enterprises are looking for solutions to enable their ML solutions to be more accurate, robust, and unbiased.

    In this talk, we will introduce a structured methodology to address the issue of data and model bias that may be inherent in the ML models you are building. We will provide insights into techniques that help to answer the following questions:

    • How do you prepare an unbiased dataset?
    • What metrics are used to analyze data bias for CV datasets?
    • How do you rebalance datasets?
    • Can introducing data bias remove model bias?
    • How can synthetic data or data augmentation be used to enhance existing datasets?
    • What explainability metrics are used to analyze model bias for CV models?
    • How do you properly benchmark to reveal data and model bias?

    Watch video    Download slides

  • Arm: Running Accelerated ML Applications on Mobile and Embedded Devices using Arm NN

    Speaker: James Conroy, Arm

    Increasingly, devices are performing AI at the furthest point in the system – on the edge or endpoint devices. As the industry-leading foundation for intelligent computing, Arm’s ML technologies give developers comprehensive hardware IP, software, and an ecosystem platform.

    Arm NN is an accelerated inference engine for Arm CPUs, GPUs, and NPUs. It executes ML algorithms on-device to make predictions based on input data. Arm NN translates models from existing neural network frameworks, such as TensorFlow Lite, TensorFlow, ONNX, and Caffe, allowing them to run efficiently and without modification across Arm Cortex-A CPUs, Mali GPUs, and Ethos-N NPUs.

    In this technical session, we are going to give you an overview of Arm NN with a focus on its plug-in framework. The audience will walk away with a working knowledge of using Arm NN to run ML models with accelerated performance on a mobile phone or a Raspberry Pi, and writing plug-ins to extend support for new neural network processing units.

    Watch video    Download slides

  • Edge Impulse: Efficient ML across Arm from Cortex-M to Web Assembly

    Speaker: Jan Jongboom, CTO, Edge Impulse

    In this talk, Jan Jongboom talks about techniques for enabling efficient and explainable TinyML. Learn about efficient optimization from dataset to deployment across the whole spectrum of Arm hardware from Cortex-M at 100 uW to Cortex-A mobile and edge devices with Web Assembly. We’ll include a live tutorial, demonstrating how to easily get started with sensor data collection and TinyML algorithm deployment on your smartphone.

    Watch video    Download slides

  • Arm: Getting started with Arm Cortex-M software development and Arm Development Studio

    Speakers: Pareena Verma and Ronan Synnott, Arm

    Advances in processing power and machine learning algorithms enable us to run machine learning models on tiny far-edge devices. Arm’s SIMD, DSP extension, and vector processing technologies bring an unprecedented uplift in energy-efficient machine learning and signal processing performance for next-generation voice, vision, and vibration use cases.

    Arm also provides a unified software toolchain for a frictionless and fast developer experience. Join this webinar to learn how to get started porting optimized ML code to Arm microcontrollers. In this session, we will cover:

    • How to develop and debug your software with Arm Development Studio
    • How to get started with MCU software development on Arm Fast Model Systems
    • How to build and deploy an ML application with TensorFlow Lite and CMSIS-NN kernels to a Cortex-M7 device
    • How to migrate your ML code to Arm Cortex-M55, Arm’s most AI-capable Cortex-M processor

    Watch video    Download slides

  • Fluent.ai: Speech recognition on Arm Cortex-M

    Speakers: Vikrant Tomar and Sam Myer, Fluent.ai

    In this tech talk, we will cover how Fluent.ai goes from training models in high-level libraries such as PyTorch to running those models on Arm Cortex-M4 and Cortex-M7 boards. We will talk about optimizations achieved using the Arm CMSIS-NN library, as well as neural network optimizations we use extensively, such as quantization and custom model architectures.
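    The quantization mentioned above typically maps float32 weights and activations to int8 using a scale and a zero-point, which is what lets CMSIS-NN kernels run integer-only arithmetic. Here is a minimal sketch of that affine scheme in plain Python (the function names and the symmetric-range choice are illustrative, not Fluent.ai's actual pipeline):

```python
def quantize_int8(values):
    """Asymmetric affine quantization of a list of floats to int8.

    q = round(x / scale) + zero_point, clamped to [-128, 127],
    where scale spans the observed value range across 256 levels.
    """
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0           # avoid div-by-zero for constants
    zero_point = round(-128 - lo / scale)    # real value `lo` maps near -128
    q = [max(-128, min(127, round(x / scale) + zero_point)) for x in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from int8 codes."""
    return [(qi - zero_point) * scale for qi in q]

vals = [-1.0, 0.0, 0.5, 1.0]
q, s, zp = quantize_int8(vals)
approx = dequantize(q, s, zp)
# Each recovered value is within one quantization step (scale) of the
# original, while storage drops from 32 bits to 8 bits per value.
```

    Real deployments calibrate the scale and zero-point per tensor (or per channel) on representative data rather than on a single batch like this.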

    Watch video    Download slides

  • Cartesiam.ai: Demystify artificial intelligence on Arm MCUs

    Speaker: Francois de Rochebouet, Cartesiam.ai

    Cartesiam will demystify artificial intelligence, and all the complexity that comes with it, for the embedded developer community. Even without any prior AI skills, developers can use Cartesiam’s NanoEdge AI Studio to quickly and easily integrate ML algorithms with a broad range of applications on Arm-based low-power microcontrollers. AI Studio also enables on-device learning of a nominal behavior and is capable of detecting any variation of this behavior, even in a complex and “noisy” environment. In this session, we will demonstrate how to generate a custom learning library and use it to perform anomaly detection on an STMicroelectronics development board.

    Watch video    Download slides

  • Arm: tinyML development with TensorFlow Lite for Microcontrollers with CMSIS-NN and Ethos-U55

    Speakers: Fredrik Knutsson, Arm; Felix Johnny, Arm

    Deep Neural Networks are becoming increasingly popular in always-on endpoint devices. Developers can perform data analytics right at the source, with reduced latency as well as energy consumption. We will introduce how TensorFlow Lite for Microcontrollers (TFLu) and its integration with CMSIS-NN maximize the performance of machine learning applications. Developers can now run larger, more complex neural networks on Arm MCUs and micro NPUs while reducing memory footprint and inference time.

    Watch video    Download slides

  • NXP and Arm: Machine learning for embedded systems at the edge

    Speakers: Anthony Huereca, Systems Engineer, Edge Processing, NXP; Kobus Marneweck, Senior Product Manager, Arm

    Machine learning inference is impacting a wide range of markets and devices, especially low-power microcontrollers and power-constrained devices for IoT applications. These devices can often consume only milliwatts of power, and therefore cannot support the power requirements of traditional cloud-based approaches. By performing inference on-device, ML can be enabled on these IoT endpoints, delivering greater responsiveness, security, and privacy while reducing network energy consumption, latency, and bandwidth usage.

    This talk between Arm and NXP's MCU product managers and engineers will explain how developers can efficiently implement and accelerate ML on extremely low-power, low-area Cortex-M based devices with open-source software libraries and tools. The discussion will include a demo on the i.MX RT1060 crossover MCU to show how to create and deploy ML applications at the edge.

    Watch video    Download slides