Specifications

Highest inference performance and efficiency

Based on a new, class-leading architecture, the Arm Ethos-N77 processor’s optimized design enables new features, enhances the user experience, and delivers innovative applications across a wide array of market segments, including mobile, IoT, embedded, automotive, and infrastructure. It provides up to a 64x efficiency uplift over CPUs, GPUs, and DSPs through efficient convolution, sparsity, and compression.

Arm Ethos-N Block Diagram

The Ethos-N77 premium ML inference processor contains 16 compute engines.

Key features
  Performance (at 1 GHz)            4 TOP/s
  MACs (8x8)                        2048
  Data types                        Int-8 and Int-16
  Network support                   CNN and RNN
  Efficient convolution             Winograd support
  Sparsity                          Yes
  Secure mode                       TEE or SEE
  Multicore capability              8 NPUs in a cluster; 64 NPUs in a mesh
Memory system
  Embedded SRAM                     1-4 MB
  Bandwidth reduction               Extended compression technology, layer/operator fusion
  Main interface                    1x AXI4 (128-bit), ACE-5 Lite
Development platform
  Neural frameworks                 TensorFlow, TensorFlow Lite, Caffe2, PyTorch, MXNet, ONNX
  Neural operator API               Arm NN, AndroidNN
  Software components               Arm NN, neural compiler, driver and support library
  Debug and profile                 Layer-by-layer visibility
  Evaluation and early prototyping  Arm Juno FPGA systems and cycle models


Key features
  • Highest Performance: Delivers up to 4 TOPS of performance (2,048 8-bit MACs), scaling to hundreds of TOPS in multicore deployments.
  • Optimized Design: Up to 225% convolution performance uplift using Winograd on 3x3 kernels, delivering up to 90% MAC utilization.
  • Highest Efficiency: Achieves 5 TOPS/W through internally distributed SRAM that keeps data close to the compute elements, saving power and reducing DRAM accesses.
  • Futureproof: Supports a wide range of existing machine learning (ML) operations, and future innovations through firmware updates and compiler technology.
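The headline figures above can be sanity-checked with simple arithmetic. A minimal sketch, assuming the usual convention of two operations (multiply + accumulate) per MAC per cycle, and the standard Winograd F(2x2, 3x3) transform:

```python
# Sanity check of the Key features arithmetic.
# Assumption: each MAC contributes 2 ops (multiply + accumulate) per cycle.

CLOCK_HZ = 1e9   # 1 GHz reference clock
MACS = 2048      # 8-bit MACs

peak_tops = MACS * 2 * CLOCK_HZ / 1e12
print(peak_tops)  # 4.096, quoted as "up to 4 TOPS"

# Winograd F(2x2, 3x3): a 2x2 output tile of a 3x3 convolution needs
# 4x4 = 16 multiplies instead of 2*2*3*3 = 36 with direct convolution.
direct_mults = 2 * 2 * 3 * 3
winograd_mults = 4 * 4
print(direct_mults / winograd_mults)  # 2.25 -> the "225%" uplift on 3x3 kernels
```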

Key benefits
  • Supports a variety of popular neural networks, including CNNs and RNNs, for classification, object detection, image enhancement, speech recognition, and natural language understanding
  • Reduces system memory bandwidth by 1.5-3x through clustering, sparsity, and workload tiling, with lossless compression of weights and activations on select networks
  • Maximizes the number of parameters stored on-chip by keeping compressed weights and activations in local SRAM and decompressing them on the fly
  • Reduces power by up to 50% through sparse power-gating techniques
  • Improves performance and extends battery life through intelligent data management that minimizes memory movement, keeping up to 90% of accesses on-chip
  • Supports TrustZone system security to safeguard sensitive data, with secure and non-secure modes
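Arm does not publish the Ethos-N compression format, but the intuition behind the 1.5-3x bandwidth claim can be illustrated generically: pruned (sparse) weight tensors contain long runs of zeros that any lossless scheme compresses well. A toy zero run-length encoding, purely for illustration and not the actual Ethos-N algorithm:

```python
import random

def rle_zeros(weights):
    """Toy lossless encoding: runs of zeros become (0, run_length) pairs,
    non-zero weights pass through. Illustrative only - not the Ethos-N format."""
    encoded, run = [], 0
    for w in weights:
        if w == 0:
            run += 1
        else:
            if run:
                encoded.append((0, run))
                run = 0
            encoded.append(w)
    if run:
        encoded.append((0, run))
    return encoded

random.seed(0)
# A 70%-sparse weight vector, as pruning-style sparsity might produce.
weights = [0 if random.random() < 0.7 else random.randint(1, 255)
           for _ in range(1000)]
ratio = len(weights) / len(rle_zeros(weights))
print(f"compression ratio ~{ratio:.1f}x")  # roughly 2x at this sparsity
```

The sparser (and more clustered) the weights, the fewer runs there are to encode, which is why clustering and sparsity are listed together as bandwidth-reduction techniques.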

Ethos-N comparison table

                                    Ethos-N77     Ethos-N57     Ethos-N37
Key features
  Performance (at 1 GHz)            4 TOP/s       2 TOP/s       1 TOP/s
  MACs/cycle (8x8)                  2048          1024          512
  Data types                        Int-8 and Int-16
  Network support                   CNN and RNN
  Efficient convolution             Winograd support
  Sparsity                          Yes
  Secure mode                       TEE or SEE
  Multicore capability              8 NPUs in a cluster; 64 NPUs in a mesh
Memory system
  Embedded SRAM                     1-4 MB        512 KB        512 KB
  Bandwidth reduction               Extended compression technology, layer/operator
                                    fusion, clustering, and workload tiling
  Main interface                    1x AXI4 (128-bit), ACE-5 Lite
Development platform
  Neural frameworks                 TensorFlow, TensorFlow Lite, Caffe2, PyTorch, MXNet, ONNX
  Neural operator API               Arm NN, AndroidNN
  Software components               Arm NN, neural compiler, driver and support library
  Debug and profile                 Layer-by-layer visibility
  Evaluation and early prototyping  Arm Juno FPGA systems and cycle models
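Combining the per-NPU throughput with the multicore figures gives the implied aggregate peaks. A simple sketch, assuming ideal linear scaling (real deployments lose some throughput to the interconnect):

```python
# Aggregate peak throughput implied by the comparison table, assuming
# linear scaling across NPUs (interconnect overhead ignored).
per_npu_tops = {"Ethos-N77": 4, "Ethos-N57": 2, "Ethos-N37": 1}

for name, tops in per_npu_tops.items():
    cluster = tops * 8   # 8 NPUs in a cluster
    mesh = tops * 64     # 64 NPUs in a mesh
    print(f"{name}: {cluster} TOP/s per cluster, {mesh} TOP/s per mesh")
```

For the Ethos-N77, a full 64-NPU mesh works out to 256 TOP/s, matching the "hundreds of TOPS" multicore claim in the key features.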