Overview

Connect, Learn, and Build Across the Full AI Stack

Modern AI runs on heterogeneous infrastructure. On Grace, DGX, and Jetson, Arm CPUs handle orchestration, memory movement, and system control, while NVIDIA GPUs accelerate parallel compute.

As workloads scale from cloud to edge, the real challenges are cross-stack: CPU–GPU partitioning, Arm64 + CUDA validation, bandwidth tuning, and performance-per-watt optimization.

This community is built for engineers solving those system-level problems – with direct access to Arm and NVIDIA expertise in one focused technical space, the latest learning resources, and early access to news and live workshops.

Benefits

Explore Community Benefits

Arm64 + CUDA Validation Support

Get guidance on validating CUDA, drivers, containers, and AI frameworks on Arm architecture – including migration patterns and troubleshooting best practices.

Cross-Stack Learning Paths

Deep, hands-on tutorials covering LLM customization, RAG pipelines, quantization, and heterogeneous workload optimization across Arm CPUs and NVIDIA GPUs.

CPU–GPU Optimization Guidance

Learn how to design efficient workload partitioning, reduce data transfer overhead, optimize NUMA and memory access patterns, and tune for performance-per-watt.
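
A common pattern behind these guidelines is double buffering: staging the next batch of data while the previous one is still being processed, so data movement overlaps with compute instead of serializing with it. The sketch below is illustrative only – it uses plain Python threads and a stand-in `compute` function rather than a real GPU kernel, but the overlap structure is the same one used to hide CPU–GPU transfer latency.

```python
import queue
import threading

def stage_batches(batches, q):
    """Producer: stands in for the CPU staging data (e.g. host-to-device copies)."""
    for b in batches:
        q.put(b)
    q.put(None)  # sentinel: no more work

def run_pipeline(batches, compute):
    """Overlap staging with compute via a bounded queue (double buffering)."""
    q = queue.Queue(maxsize=2)  # two slots = a classic double buffer
    t = threading.Thread(target=stage_batches, args=(batches, q))
    t.start()
    results = []
    while (b := q.get()) is not None:
        results.append(compute(b))  # consumer: stands in for the GPU kernel
    t.join()
    return results

print(run_pipeline([1, 2, 3], lambda x: x * x))  # → [1, 4, 9]
```

On real hardware the producer would issue asynchronous copies into pinned host buffers; the bounded queue plays the role of the in-flight buffer pool.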

Joint Developer Discord

Work directly with Arm and NVIDIA engineers to troubleshoot Arm64 + CUDA issues, profile bottlenecks, validate frameworks, and debug cross-stack performance problems.

Live Workshops & Hackathons

Participate in live code sessions, architecture deep-dives, workshops, and hackathons with runnable repositories and recordings.

Project Spotlights

Showcase your builds across Grace, DGX, and Jetson platforms and gain visibility across the joint Arm + NVIDIA ecosystem.

Join Now

Join the Community

Learning Resources

Start Learning With Tutorials and Live Code Sessions

Unlock quantized LLM performance on Arm-based NVIDIA DGX Spark.

Start Learning Path

Build a RAG pipeline on Arm-based NVIDIA DGX Spark.

Start Learning Path

Get started with object detection using an NVIDIA Jetson Orin Nano.

Start Learning Path

Build an offline voice chatbot with faster-whisper and vLLM on DGX Spark.

Start Learning Path

Fine-tune PyTorch models on DGX Spark.

Start Learning Path

Code-Along: Customize your AI with Model Fine-Tuning on NVIDIA DGX Spark

Watch On-Demand

FAQs

Frequently Asked Questions

What is the Arm x NVIDIA Developer Community?

A joint technical community focused on building, deploying, and optimizing workloads across Arm CPUs and NVIDIA GPUs. It connects developers directly with Arm and NVIDIA engineers to solve real heterogeneous system challenges on NVIDIA Grace, Jetson, and DGX systems.

Who is this community for?
  • AI/ML engineers training and deploying models
  • HPC developers optimizing large workloads
  • DevOps and platform engineers managing heterogeneous systems
  • Robotics and embedded developers
  • Developers building across Grace, DGX, or Jetson 

If you work with CPUs and GPUs together, this community is for you.

How is this different from the Arm or NVIDIA Developer Programs?

This community focuses specifically on the boundary between Arm architecture and NVIDIA acceleration. It brings cross-stack expertise into one shared space – instead of requiring developers to navigate two ecosystems separately.

What does “heterogeneous computing on Arm x NVIDIA” mean?

Heterogeneous computing on Arm x NVIDIA means designing systems where Arm CPUs handle orchestration, preprocessing, and memory-intensive tasks, while NVIDIA GPUs accelerate parallel workloads. Developers learn how to minimize CPU–GPU transfer overhead, optimize NUMA and memory access patterns, and improve performance-per-watt across the stack.
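
The partitioning decision above can be reduced to a back-of-the-envelope model: offloading a task to the GPU pays off only when the compute time saved exceeds the cost of moving the data. The sketch below uses purely illustrative numbers (an assumed 25 GB/s interconnect, hypothetical kernel timings), not measured figures for any platform.

```python
def offload_worthwhile(bytes_moved, cpu_seconds, gpu_seconds,
                       bandwidth_bytes_per_s):
    """Crude model: offload only if GPU compute time plus transfer time
    beats running the same work on the CPU. Real systems also overlap
    transfers with compute, which this model deliberately ignores."""
    transfer_seconds = bytes_moved / bandwidth_bytes_per_s
    return gpu_seconds + transfer_seconds < cpu_seconds

# Illustrative numbers only: moving 1 GiB over an assumed 25 GB/s link
# (~43 ms) to save roughly 0.95 s of CPU compute is clearly worthwhile.
print(offload_worthwhile(2**30, cpu_seconds=1.0, gpu_seconds=0.05,
                         bandwidth_bytes_per_s=25e9))
```

The same arithmetic explains why small or transfer-bound kernels are often better left on the CPU: when the transfer time rivals the compute saving, the offload loses.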

What kind of learning resources are available?
  • Self-paced learning paths
  • Live technical workshops
  • Hackathons
  • Project showcases
  • Peer and Arm x NVIDIA expert Q&A
  • Live Discord support from Arm and NVIDIA 

Can I use my existing CUDA or AI frameworks on Arm?

Yes. Most CUDA-based workflows and containerized AI frameworks run on Arm-based NVIDIA systems. The community provides guidance on Arm64 validation, container setup, dependency resolution, and performance tuning.
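
A first step in any Arm64 validation, before debugging CUDA drivers or container images, is confirming that the code is actually running on a 64-bit Arm machine. A minimal sketch using only the Python standard library (the helper name and machine-string set are illustrative, not from any official tool):

```python
import platform

# Linux reports "aarch64"; macOS on Apple silicon reports "arm64".
ARM64_MACHINES = {"aarch64", "arm64"}

def is_arm64(machine=None):
    """Return True when the reported machine string is a 64-bit Arm core."""
    machine = machine or platform.machine()
    return machine.lower() in ARM64_MACHINES

print(is_arm64("aarch64"))  # → True
```

Containerized workflows typically need the matching check at pull time as well, selecting an arm64 image variant rather than an x86_64 one.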

Why should I join instead of learning independently?

It depends on what you’re looking for and how you like to learn.

By joining the community, you can connect with fellow developers, ask architecture-specific questions directly to Arm and NVIDIA engineers, validate implementation approaches, and compare patterns with developers running similar workloads. The learning paths, workshops, and discussions are focused on Grace, DGX, and Jetson.

Additionally, rather than navigating Arm and NVIDIA ecosystems separately, joining the joint community gives you a single technical space focused on building and running workloads across Arm CPUs and NVIDIA GPUs.

In short, it comes down to personal preference – but if you want a faster, more informed path to building on Arm-powered NVIDIA systems, the community gives you that leverage.