Kubernetes has come a long way, from early pain points around cluster upgrades to more streamlined processes enforced by managed cloud providers. But one area still lagging behind in maturity, security, and operational best practices? Add-ons.
In this post, we’ll unpack why add-ons matter, how they affect security, and what’s standing in the way of continuous maintenance. Then we’ll share the strategies, tools, and lessons we’ve learned at Fairwinds for managing add-ons effectively.
When you spin up a Kubernetes cluster, you start with a baseline: control plane components that schedule workloads and store cluster state, and a data plane where your applications run. You also get some basic networking and service discovery components.
Some of these essential components, like Container Network Interface (CNI) plugins and Domain Name System (DNS), are technically called add-ons, even though your cluster can’t run without them. Beyond that, as you grow into production-grade use cases, you begin to layer on true add-ons: ingress controllers, certificate management, cluster autoscaling, monitoring and logging agents, policy engines, and more.
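In practice, most of these add-ons arrive as Helm charts (or are wrapped by a GitOps tool). Here is a minimal sketch, assuming Helm is available and using cert-manager as the example; the chart version shown is illustrative, not a recommendation:

```bash
# Add the chart repository for a typical add-on (cert-manager, as an example)
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Pin an explicit version so upgrades are deliberate, not accidental
# (chart-specific options, such as CRD installation, omitted for brevity)
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --version v1.17.0
```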
Add-ons are critical for realizing the full value of Kubernetes: scalability, reliability, security, and automation. But they’re also a real source of risk.
These components often run with elevated permissions. Indeed, many have access to all secrets in your cluster or run with privileged: true, giving them root-like capabilities. That means if an add-on is compromised, the damage can be severe: leaked credentials, bypassed controls, even full system compromise.
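To make the blast radius concrete, here is an illustrative (and entirely hypothetical) slice of the access many add-ons request; the names and image are placeholders, not any particular project:

```yaml
# Hypothetical example of the permissions many add-ons ask for
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-addon
rules:
  - apiGroups: [""]
    resources: ["secrets"]            # read every Secret in every namespace
    verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: Pod
metadata:
  name: example-addon-agent
  namespace: kube-system
spec:
  serviceAccountName: example-addon
  containers:
    - name: agent
      image: registry.example.com/addon-agent:1.2.3   # placeholder image
      securityContext:
        privileged: true              # effectively root on the node
```

If a pod like this is compromised, the attacker inherits both cluster-wide Secret access and root-level control of the node it runs on.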
We’re seeing the impact of this at scale. According to Black Duck’s 2025 Open Source Security and Risk Analysis (OSSRA) report, 86% of analyzed software shipped with vulnerable open source components. Sonatype’s 2024 State of the Software Supply Chain research showed a 463% increase in CVEs over the past decade, and Kubernetes clusters aren’t immune.
And yet, we’re getting slower at remediating. According to Sonatype's research, the average CVE fix timeline has stretched from 30–50 days to over a year.
Meanwhile, regulations like the U.S. Executive Order on Improving the Nation's Cybersecurity, the EU Cyber Resilience Act, and the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) 2.0 are making clear demands: publish software bills of materials (SBOMs), respond to Common Vulnerabilities and Exposures (CVEs), and patch infrastructure quickly.
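The SBOM piece doesn’t have to be heavyweight. As one possible toolchain (a sketch; exact flags may differ across versions), Syft can generate an SBOM for an add-on’s container image and Grype can scan it for known CVEs:

```bash
# Generate an SBOM for an add-on image (the image tag is just an example)
syft registry.k8s.io/ingress-nginx/controller:v1.12.0 -o spdx-json > controller-sbom.json

# Scan that SBOM for known vulnerabilities
grype sbom:./controller-sbom.json
```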
There are a few big reasons add-ons fall behind, and the recent history of Kubernetes upgrades themselves points to a way forward.
Not long ago, upgrading Kubernetes itself was terrifying. Engineers stayed on old versions for years to avoid the complexity. But that’s changing, thanks in part to cloud providers like Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), and Microsoft Azure Kubernetes Service (AKS) forcing the issue.
Datadog’s 2023 Container Report found that 40% of clusters were 12 months old or less, and the Cloud Native Computing Foundation (CNCF) Annual Survey showed that 46% of users run Kubernetes through managed providers. These platforms now automatically upgrade control planes or charge you extra for lagging behind, pushing teams toward compliance.
So how do we bring that same predictability and velocity to add-ons?
“If it hurts, do it more frequently, and bring the pain forward.”
― Jez Humble, Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation
We’ve internalized that mindset at Fairwinds. Just as we release our software continuously, we believe we should patch our infrastructure continuously.
Here’s how we do it.
There's a lot of great open source tooling available to make this easier, including tools we've built at Fairwinds.
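For example, two Fairwinds projects fit naturally here: Nova reports Helm releases whose charts or container images have newer versions available, and Pluto flags Kubernetes API versions that are deprecated or removed. A quick audit might look like this (exact flags may vary by tool version):

```bash
# List installed Helm releases that are behind the latest chart or image versions
nova find --wide

# Flag in-cluster Helm releases using deprecated or removed Kubernetes APIs
pluto detect-helm -o wide
```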
Update in lower environments (dev and staging) first. Validate. Then deploy to production.
Please, don’t test in prod.
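Here's a minimal sketch of that promotion flow with Helm, assuming separate kube contexts for staging and production; the contexts, values files, and chart version are placeholders:

```bash
# Upgrade the add-on in staging first, pinned to an explicit version
helm upgrade cert-manager jetstack/cert-manager \
  --kube-context staging --namespace cert-manager \
  --version v1.17.1 -f values-staging.yaml

# ...run validation and smoke tests against staging...

# Only then promote the exact same version to production
helm upgrade cert-manager jetstack/cert-manager \
  --kube-context prod --namespace cert-manager \
  --version v1.17.1 -f values-prod.yaml
```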
At Fairwinds, we update every add-on in every cluster once a month. It’s time-consuming, but with repeatable processes, it gets easier over time. Regular upgrades prevent cascading breakage and make urgent patching less risky. The last thing you want is to need a critical security patch and realize you’re many versions behind.
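One way to keep a cadence like that honest is to put the audit itself on a schedule. Here's a sketch using a GitHub Actions cron trigger (one option among many); it assumes Nova and Pluto are already installed on the runner and that cluster credentials are configured, which isn't shown:

```yaml
# Hypothetical monthly audit: surface what's outdated before the upgrade window
name: monthly-addon-audit
on:
  schedule:
    - cron: "0 6 1 * *"   # 06:00 UTC on the first of every month
  workflow_dispatch: {}

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - name: Report outdated add-ons and deprecated APIs
        run: |
          nova find --wide
          pluto detect-helm -o wide
```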
Add-ons are deeply embedded in Kubernetes operations. They’re essential to delivering secure, scalable, observable platforms. But they’re also high risk and often neglected.
If we apply the same rigor and automation to our add-ons as we do to our Kubernetes clusters, we can reduce risk, improve agility, and bring cluster maintenance into a more modern, manageable rhythm.
Let’s stop treating add-ons as an afterthought. They need love too.
Want help managing Kubernetes the right way?
Check out Fairwinds' open source tools or get in touch with our team to learn how we can architect, build, and manage your Kubernetes infra.