
Build smarter and faster: Hands-on Kubernetes and AI workshop on Google Arm-based Axion CPUs


By Michael Hall


Heading to KubeCon + CloudNativeCon North America? Join Arm and Google Cloud engineers for a hands-on workshop. You will explore how to build, deploy, and optimize AI workloads across multi-architecture Kubernetes clusters powered by Google’s new Arm-based Axion CPUs.

Where and when

Monday, November 10th, 12–5 p.m. EST

Georgia World Congress Center, Atlanta (Room A407)

Register here

Seats are limited. Register early to secure yours.

Why you should join

As multi-architecture computing becomes the norm, developers and platform engineers need to deploy portable, efficient workloads that perform consistently across CPU architectures. In this workshop, we will go beyond diagrams and theory to build and run workloads optimized for Google Arm-based Axion CPUs, which are built on Arm Neoverse technology. You will:

  • Run multi-architecture Kubernetes deployments: Deploy microservices seamlessly across x86 and Arm-based nodes using Google Kubernetes Engine (GKE); a short sketch follows this list.
  • Build and automate multi-arch container workflows: Explore container image best practices and CI/CD automation for heterogeneous clusters.
  • Accelerate AI inference on CPU: See how Axion’s Arm Neoverse cores enable efficient, low-latency inference directly on CPU—no GPU required.
  • Optimize LLMs and ML models for Arm: Get practical tips on optimizing models for performance and cost efficiency on Arm infrastructure.
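
To give a flavor of that first topic, here is a minimal sketch of pinning a Deployment to Arm nodes using the standard kubernetes.io/arch node label, written with the official Kubernetes Python client. The Deployment name, image, and namespace are placeholders rather than workshop material; the workshop itself walks through the full GKE workflow.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig
# (for GKE, typically set up via `gcloud container clusters get-credentials`).
config.load_kube_config()

apps = client.AppsV1Api()

# A Deployment that schedules only onto Arm (arm64) nodes via the standard
# kubernetes.io/arch node label. Removing node_selector lets the scheduler
# place pods on either x86 or Arm nodes, as long as the container image is
# published as a multi-architecture manifest.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-multiarch"),  # placeholder name
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-multiarch"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-multiarch"}),
            spec=client.V1PodSpec(
                node_selector={"kubernetes.io/arch": "arm64"},
                containers=[
                    client.V1Container(
                        name="app",
                        image="example.com/hello:latest",  # placeholder multi-arch image
                    )
                ],
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```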

What you will work with

You will walk through a real-world software stack and DevOps orchestration, and explore open-source tooling for optimization, automation, and model deployment, all in a collaborative, hands-on environment. By the end, you will have working examples to adapt to your own infrastructure.
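
For a sense of what such examples look like, here is a minimal, hypothetical sketch of CPU-only inference with ONNX Runtime, whose default CPU execution provider includes Arm-optimized kernels. The model file and input shape are placeholders, not workshop content.

```python
import numpy as np
import onnxruntime as ort

# Run inference entirely on the CPU. The same script runs unchanged on x86 and
# Arm nodes; on Axion (aarch64), ONNX Runtime's CPU execution provider picks up
# its Arm-optimized kernels automatically.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])  # placeholder model

input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input shape

outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```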

Why it matters

The future of cloud-native AI is multi-architecture, and it is already here. Developers and platform engineers need to build portable, scalable, and energy-efficient applications that can run anywhere. Google Axion CPUs, built on Arm Neoverse technology, combine high performance with superior efficiency to deliver:

  • Improved cost-to-performance ratios for compute-heavy workloads.
  • Sustainable scaling of AI and microservices.
  • Faster token generation for real-time inference.

Who should attend

This workshop is ideal for:

  • Kubernetes platform engineers and SREs managing mixed-architecture clusters.
  • AI/ML engineers deploying models at scale (edge, cloud, or hybrid).
  • Cloud native developers and architects building containerized microservices.
  • DevOps engineers running CI/CD pipelines across heterogeneous environments.

If you have ever faced performance inconsistencies between Arm and x86, or wondered how to optimize LLM inference on CPU, this workshop is for you.

Learn. Build. Connect.

Throughout the afternoon, you will have the opportunity to:

  • Collaborate with other engineers and cloud native developers.
  • Exchange ideas with experts from Arm and Google Cloud.
  • Enjoy lunch, coffee, and informal networking in a relaxed, technical setting.

Join us to see how the future of cloud native AI is being built on Arm-powered Google Axion CPUs, and take your Kubernetes and AI skills to the next level.

Reminder: Seats are limited. Register early to secure yours:

Register here

