October 9, 2025

Autoscaling Smarter: How Kedify and Arm are powering the next wave of cloud optimization

From hybrid to multi-cloud, see how Kedify on Arm transforms autoscaling into a performance advantage.

By Eric Sondhi

Reading time 3 minutes

This blog post is published on behalf of Zbyněk Roubalík, Founder and CTO at Kedify.


As cloud workloads become more dynamic, distributed, and performance-sensitive, many engineering teams are rethinking the way they scale infrastructure. Autoscaling is no longer a nice-to-have. It is a core requirement for delivering reliable, cost-efficient services.

At Kedify, we help teams move beyond CPU-based autoscaling. Our scaling engine is driven by events and HTTP traffic, production-ready, and built on KEDA (Kubernetes Event-Driven Autoscaling). Increasingly, our customers ask: Can we do this more efficiently? Can we cut costs and power usage without sacrificing performance?

The answer: yes, with Arm.

Why autoscaling needs an upgrade

Most teams start their Kubernetes journey with basic autoscaling tied to CPU or memory thresholds. However, as services mature and architectures shift toward microservices, event-driven patterns, and bursty workloads, these default settings fall short. Overprovisioning increases cloud costs. Underprovisioning risks outages and poor user experience.

That is where Kedify comes in. We extend open-source KEDA with production features designed for high-scale environments:

  • Custom scalers for HTTP, real-time traffic, and proprietary queues (see the sketch after this list).
  • Dynamic vertical autoscaling for optimizing pod resource requests.
  • Scaling tailored for AI workloads.
  • Support for hybrid, multi-cluster, and multi-cloud setups.
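
As a rough illustration of the event-driven model these features build on, here is a minimal KEDA ScaledObject that scales a deployment on Kafka consumer lag instead of CPU. The deployment name, Kafka endpoint, and thresholds are placeholders, and Kedify's own scalers use their own trigger types documented separately.

```yaml
# Minimal KEDA ScaledObject sketch: scale on Kafka consumer lag rather than CPU.
# All names, endpoints, and thresholds below are illustrative placeholders.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-processor-scaler
spec:
  scaleTargetRef:
    name: orders-processor        # the Deployment to scale
  minReplicaCount: 0              # scale to zero when the queue is idle
  maxReplicaCount: 50
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka.kafka.svc.cluster.local:9092
        consumerGroup: orders-consumer
        topic: orders
        lagThreshold: "50"        # target consumer lag per replica
```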

Our mission is to help lean infrastructure teams scale like cloud-native giants, without needing to build it all themselves.

Why we are excited about Arm

When our customers run Kedify on Arm-based infrastructure, they see major gains in both cost and energy efficiency.

We began incorporating Arm into our multi-architecture strategy in early 2024, as part of a broader push to support efficient scaling across diverse cloud footprints. Our own CI/CD pipeline runs multi-arch builds using GitHub Actions with self-hosted runners backed by Arm-based instances, which improves performance and reduces cost.
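
Our workflow files are internal, but the routing mechanism itself is plain GitHub Actions: a job is sent to an Arm machine through self-hosted runner labels. A minimal sketch, with placeholder labels and build commands, looks like this:

```yaml
# Minimal sketch of a job pinned to self-hosted Arm runners via labels.
# Label names and build commands are placeholders, not our actual workflow.
on: push
jobs:
  build-and-test-arm64:
    runs-on: [self-hosted, linux, arm64]   # route the job to an Arm-based runner
    steps:
      - uses: actions/checkout@v4
      - name: Build and test natively on Arm
        run: |
          go build ./...
          go test ./...
```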

We currently support both x86 and Arm64 builds. Our multi-arch container images are maintained with Docker Buildx and manifest lists, and our internal release tooling cross-compiles key binaries for Arm during our CI workflow. As a result, when customers deploy Kedify agents on Arm-based Kubernetes clusters, everything just works.
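
The release tooling itself is internal, but the underlying pattern is standard: cross-compile Go binaries with GOOS/GOARCH and publish one image tag whose manifest list covers both architectures via Docker Buildx. A simplified sketch, with placeholder registry, image name, and paths, might look like:

```yaml
# Illustrative multi-arch release job; registry, image name, and paths are placeholders.
on: push
jobs:
  release-multi-arch:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3      # emulation for the non-native platform
      - uses: docker/setup-buildx-action@v3
      - name: Cross-compile the agent binary for Arm64
        run: GOOS=linux GOARCH=arm64 go build -o bin/agent-linux-arm64 ./cmd/agent
      - name: Build and push a multi-arch image (amd64 + arm64 manifest list)
        run: |
          docker buildx build \
            --platform linux/amd64,linux/arm64 \
            --tag registry.example.com/kedify-agent:latest \
            --push .
```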

From a developer perspective, the transition to Arm was smooth, but not without learnings. Early challenges included:

  • Updating our observability stack for full Arm compatibility.
  • Verifying that third-party dependencies and base images supported Arm64 (a quick check is sketched after this list).
  • Optimizing our Golang builds and validating performance consistency.
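
For the base-image check in particular, a simple approach is to inspect an image's manifest list and confirm that a linux/arm64 entry exists before adopting it. As a hedged sketch (the image below is only an example), this can run as a one-line CI step:

```yaml
# Example check only; replace golang:1.22 with the base image in question.
on: push
jobs:
  check-arm64-support:
    runs-on: ubuntu-latest
    steps:
      - name: Verify the image publishes a linux/arm64 variant
        run: docker buildx imagetools inspect golang:1.22 | grep -q "linux/arm64"
```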

Now, everything in Kedify runs natively on Arm. This includes our HTTP scaler, OpenTelemetry scaler, and agent.

Real-world example: Scaling HTTP the smart way

One Kedify customer is a developer platform that executes thousands of test runs daily. These jobs arrive in unpredictable spikes, a classic case for autoscaling.

Using Kedify’s HTTP scaler, they auto-adjust capacity based on request queue depth. Combined with moving to Arm-based compute on AWS, they cut cloud costs by around 40% while maintaining their SLA guarantees.
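
Without reproducing the customer's actual configuration, the shape of this kind of setup can be sketched with the HTTPScaledObject resource from the KEDA HTTP add-on ecosystem that Kedify's HTTP scaler works with. The field names and values below are illustrative and may differ across versions, so consult the Kedify docs for the exact schema.

```yaml
# Sketch only: modeled on the open-source KEDA HTTP add-on. Names, host, and
# thresholds are placeholders; check the Kedify documentation for exact fields.
apiVersion: http.keda.sh/v1alpha1
kind: HTTPScaledObject
metadata:
  name: test-runner
spec:
  hosts:
    - tests.example.com
  scaleTargetRef:
    name: test-runner             # Deployment receiving the traffic
    kind: Deployment
    apiVersion: apps/v1
    service: test-runner          # Service fronting the Deployment
    port: 8080
  replicas:
    min: 0                        # scale to zero between spikes
    max: 100
  scalingMetric:
    requestRate:
      targetValue: 100            # target requests per second per replica
```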

More importantly, their engineers reported:

  • Near-zero cold starts under load.
  • Better performance stability on smaller instance sizes.
  • Faster job turnaround in CI pipelines.

What this means for platform teams

If you manage cost, performance, and engineering velocity while navigating a hybrid or multi-cloud architecture, Arm and Kedify offer a compelling combination.

Arm brings the performance-per-watt and cost advantages. Kedify brings the intelligent scaling layer to unlock that value in production.

No rewrites. No compromises. Just scalable, sustainable infrastructure built to fit your team.

Looking ahead: Arm x Kedify at KubeCon North America 2025

We are just getting started. As part of the Arm Cloud Migration Initiative, we are teaming up to share more of these stories through webinars, technical blogs, and at KubeCon North America this fall.

Expect to see:

  • Live code-alongs and multi-arch tutorials.
  • Deep dives on Kedify scaling and advanced features.
  • Learning paths on learn.arm.com.

If you are building cloud-native apps and want to scale smarter on Arm, we would love to hear from you. You can also take our learning path here:

Autoscale HTTP applications on Kubernetes with KEDA and Kedify

Learn more:

  • Kedify HTTP Scaler overview
  • Kedify partner catalog entry
  • Case study: Tao testing
  • Kedify + KEDA overview
  • GitHub: Kedify
  • Getting started with Kedify (documentation)
  • Kedify blog

