Kubernetes’ community ingress‑nginx controller is officially retiring in March 2026, and that deadline is now squarely on the radar for platform and DevOps teams. This post distills what’s happening and what real teams are doing into a concrete, opinionated guide you can walk through with your stakeholders.
If you’re wondering why this is happening at all, the Kubernetes post is blunt: ingress‑nginx became too widely used and security‑sensitive for the current maintainer team to handle safely. The community couldn’t keep pace with the volume of CVEs and the expectations around a critical path networking component, so they chose a controlled retirement rather than letting it quietly decay.
The retirement affects the community ingress‑nginx controller maintained under Kubernetes SIG Network, not every NGINX‑based ingress on the planet.
The Kubernetes project is clear that users should plan to migrate to alternative controllers sooner rather than later.
Option 1. Gateway-first: migrate to Gateway API now.

Many teams are moving to Gateway API, a modern replacement for many Ingress use cases, with more expressive routing, better extension points, and built-in multi-tenancy support.
Typical plan:
- Pick a Gateway API implementation and deploy it alongside ingress-nginx.
- Translate each Ingress into Gateway and HTTPRoute resources, starting with the simple host-and-path routes.
- Shift traffic over per app or per hostname, verify, then remove ingress-nginx well before March 2026.
Just be aware that not all of the more exotic ingress‑nginx features have clean Gateway equivalents yet, so you may need to redesign a few edge cases instead of converting them one‑for‑one. The appeal of this option: do one major migration now, onto the model the Kubernetes community is investing in for the next generation of L4/L7 traffic management.
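To make the shape of that conversion concrete, here is a minimal sketch of a plain host-and-path Ingress and a roughly equivalent Gateway API HTTPRoute. The names (web, example.com, shared-gateway) are placeholders, and the route assumes a Gateway has already been provisioned for it to attach to:

```yaml
# Before: a simple Ingress served by ingress-nginx (placeholder names)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
---
# After: the same route expressed as a Gateway API HTTPRoute,
# attached to an existing shared Gateway (placeholder name)
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
spec:
  parentRefs:
    - name: shared-gateway
  hostnames:
    - example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web
          port: 80
```

Anything ingress-nginx was doing through annotations (rewrites, auth subrequests, snippets) does not carry over automatically; each of those needs an explicit equivalent, or a redesign, on the Gateway side.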
If you expect these clusters to be around in 3–5 years, this is the only option that doesn’t just push the problem onto your future self with yet another migration or another end‑of‑life controller. If you want a deeper dive on why this shift is happening, check out this CNCF talk from 2024: The State of Ingress: Why Do We Need Gateway API?
If you are comparing Gateway implementations, the community‑run Gateway API Benchmarks project is a good way to see how different controllers behave under load, at scale, and in multi‑tenant setups, well beyond what the basic conformance tests cover.

Option 2. Drop‑in replacement now, Gateway later.

Others want to reduce immediate risk without changing how manifests look. This is a good option if you’re under time pressure and can’t absorb a Gateway learning curve this quarter.
Common strategy:
- Pick an actively maintained Ingress controller that can serve your existing Ingress objects.
- Swap ingress-nginx for it, keeping manifests as close to unchanged as possible.
- Defer the Gateway API migration to a later, separately planned phase.
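As a rough sketch of what the swap looks like on a single resource, assuming the replacement controller registers its own IngressClass (the class name "replacement" below is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  # previously: ingressClassName: nginx
  ingressClassName: replacement   # placeholder: the class registered by the new controller
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

The class swap only covers the routing rules themselves; any nginx.ingress.kubernetes.io/ annotations still have to be reviewed and translated to the new controller’s equivalents by hand.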
Very few Gateway API implementations also support the Ingress API, so a drop-in Ingress replacement will still likely need to be swapped out later for a Gateway implementation; with this option you should assume you’ll be doing the swap twice.
Option 3. Stay on Ingress and ride it out.

Some teams aren't ready for Gateway API at all. They plan to:
- Keep using the Ingress API, either on a maintained Ingress-compatible controller or on the final ingress-nginx release with compensating controls.
- Accept the security and maintenance risk for a bounded period.
- Retire the cluster, or revisit the decision, before that period runs out.
A typical example is a single‑tenant internal cluster that will be decommissioned in the next 12–18 months, where the cost of a Gateway migration clearly outweighs the remaining lifetime of the environment.
This path makes sense if you have a short cluster lifetime or heavy investment in Ingress semantics you can't easily translate. Another reason to stay on Ingress is simply that you don’t need the advanced features of Gateway API: if you have no complicated routing rules, moving to Gateway API is an operational burden (training your team and migrating your manifests) without being necessary. For those teams, staying on Ingress makes sense, but there is still the risk that one day the Ingress API itself might be deprecated or removed.
Unlike the drop‑in option, there is no second‑phase “Gateway later” plan here; you’re explicitly choosing to use Ingress for the lifetime of the cluster. Don’t use this for any cluster without a firm end‑of‑life date. Over the long term, you should assume fewer new features and slower innovation for Ingress‑only controllers compared to Gateway‑based options, plus a higher risk of deprecations and drift in the surrounding ecosystem.
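If you do ride out the final ingress-nginx release, pin it explicitly rather than tracking the latest chart. A minimal sketch of Helm values, assuming you deploy the controller via the official ingress-nginx chart (key names can vary between chart versions, and the version value is a placeholder):

```yaml
# values.yaml sketch for the ingress-nginx Helm chart
controller:
  image:
    tag: "v1.x.y"                  # placeholder: pin to the final released controller version
  allowSnippetAnnotations: false   # reduce exposure to snippet-related CVEs while you ride it out
```

Pair that with tight controls on who can create Ingress objects so the risk window you’re accepting stays bounded.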
Option 4. Lean on cloud-native load balancers.

In many managed environments like GKE, EKS, and AKS, teams are already fronting workloads with cloud‑native L7 load balancers and controllers, and ingress‑nginx is only used in pockets or legacy clusters. In these environments, the cloud‑native L7 controller already fills the same role a Gateway implementation would for north‑south traffic, and cloud providers are increasingly integrating Gateway API directly into these services.
For these teams, the work is mostly:
- Finding the pockets where ingress-nginx is still in use.
- Moving those apps onto the cloud provider’s L7 load balancer, native ingress, or Gateway implementation.
- Decommissioning the remaining ingress-nginx deployments.
For this option, the priority is simplification and removal of ingress‑nginx, not a wholesale redesign of how traffic enters the cluster. The main tradeoff in this option is that you’re leaning harder on provider‑specific L7 features, which is fine for a single‑cloud strategy but makes true multi‑cloud or on‑prem portability harder later.
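For teams in this camp who also want to standardize on the Gateway API surface while keeping the managed load balancer underneath, the manifest side is usually just a Gateway bound to the provider’s GatewayClass. A minimal sketch with a placeholder class name (each provider ships its own; on GKE, for example, the managed classes have names like gke-l7-global-external-managed):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: external-web
spec:
  gatewayClassName: provider-managed-l7   # placeholder: substitute your provider's GatewayClass
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```

HTTPRoutes then attach to this Gateway exactly as they would to a self-managed one, which keeps your route manifests portable even though the data plane is provider-specific.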
Regardless of which camp you fall into, you can structure the remaining time so you actually finish the migration instead of letting it drift.
To quickly check if a cluster is running ingress‑nginx, Kubernetes suggests:
```
kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx
```
Start with visibility:
- Inventory every cluster and namespace where the controller is running.
- List the Ingress objects (and IngressClasses) that point at it, and which teams own them.
- Note which of those endpoints are internet-facing versus internal.
Then look for special usage patterns that make migration harder:
- nginx-specific annotations, especially configuration-snippet and server-snippet.
- Custom NGINX templates, Lua plugins, or ConfigMap-level tuning.
- TCP/UDP service exposure through the controller.
- Auth subrequests (auth-url), canary annotations, and other traffic-shaping behavior.
Those are your high‑risk, high‑effort candidates.
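For example, an Ingress that leans on nginx-specific behavior, like the configuration-snippet and auth-url annotations below (real ingress-nginx annotations, with placeholder values), has no one-for-one Gateway API equivalent and will need an explicit redesign rather than a mechanical conversion:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-api
  annotations:
    # nginx-specific behavior that does not translate automatically
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Legacy-Header: true";
    nginx.ingress.kubernetes.io/auth-url: "http://auth.internal/validate"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: legacy-api
                port:
                  number: 8080
```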
Be explicit in your design decision:
- Which option each cluster or environment is taking (Gateway-first, drop-in, stay on Ingress, or cloud-native).
- Who owns the migration for each cluster, and by what date.
- What “done” means: ingress-nginx removed, not merely a replacement installed alongside it.
You can even mix your approach: adopt Gateway API for new apps while giving older environments a simpler, Ingress‑only migration path.
You’ve already picked an option; the patterns below are just three ways to execute that choice. Pattern A usually fits smaller Gateway‑first or Ingress‑to‑Ingress moves; Pattern B fits most large or shared clusters; Pattern C is the concrete implementation of the Gateway‑first option.
Pattern A. Big-bang cutover.

Pros: Clean, fast, easy to reason about, especially if you have a non‑prod environment that looks like prod.
Cons: High blast radius; demands excellent test coverage and rollback, plus a very clear sequencing plan (staging → low‑risk prod → high‑risk prod).
Caution: If you don’t have a realistic pre‑prod, this pattern can be hard to execute safely.
Pattern B. Parallel run with incremental cutover.

Pros: Great for large legacy clusters; allows careful, incremental cutover. For example, if you have 100+ Ingress objects hanging off a single shared ingress‑nginx in a multi‑tenant cluster, Pattern B is almost always safer than a big‑bang cutover.
Cons: More moving pieces; you are running two critical-path ingress stacks for a while, and you need discipline to actually turn the old one off.
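One way to keep the parallel run manageable is to do it at the IngressClass level: the old and new controllers each own their own class, and Ingress objects move between them one at a time. A sketch, with placeholder names for the replacement controller:

```yaml
# The existing class, served by ingress-nginx
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
# A second class, served by the replacement controller (placeholder controller string)
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: replacement
spec:
  controller: example.com/replacement-controller
```

Each application then cuts over by flipping its ingressClassName and repointing DNS or the load balancer, which keeps the blast radius to one app at a time; once nothing references the nginx class anymore, the old controller can be removed.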
Pattern C. Gateway-first redesign.

Pros: Steers you directly onto the future Kubernetes networking model; good for organizations standardizing on Gateway API.
Cons: Requires more upfront learning and design work.
From a risk and governance perspective, retired infrastructure is a compliance problem as much as an engineering risk.
Concretely:
- Treat the March 2026 date as an internal compliance deadline, with owners and a tracked risk item per cluster.
- Set your own internal cutoff well ahead of the retirement date, so there is slack for the clusters that slip.
- Keep an auditable record of which clusters still run ingress-nginx and when each one is scheduled to be off it.
That way, when the first post‑retirement NGINX or Kubernetes CVE hits, you aren't scrambling to patch a stack that no one maintains anymore. This is also the time to codify ingress policies and checks in CI, so new workloads can’t quietly re‑introduce ingress‑nginx into your clusters.
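As one example of such a check, assuming you run Kyverno as your admission/policy engine (any policy engine can express the same rule), here is a sketch that rejects Ingress objects pointing at the nginx class:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-ingress-nginx
spec:
  validationFailureAction: Enforce
  rules:
    - name: block-nginx-ingress-class
      match:
        any:
          - resources:
              kinds:
                - Ingress
      validate:
        message: "ingress-nginx is being retired here; use the approved ingress or Gateway class instead."
        pattern:
          spec:
            # if ingressClassName is set, it must not be nginx
            =(ingressClassName): "!nginx"
```

Ingresses that still select the controller through the legacy kubernetes.io/ingress.class annotation would need a similar rule against that annotation.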
If you do nothing else this week, do this:
- Run the detection command above across your clusters and write down where ingress-nginx is still serving traffic.
- Pick one of the four options for each of those clusters, even provisionally.
- Put a named owner and a target date on each migration, well before March 2026.
The retirement of ingress‑nginx is not a sign that Kubernetes networking is collapsing; it is a signal that a hugely popular community project hit the limits of sustainable stewardship. The upside is that there is now a clearer path forward:
- Gateway API as the long-term model the community is investing in.
- Actively maintained Ingress controllers as a pragmatic bridge.
- Cloud-native L7 load balancers, increasingly with Gateway API built in, for managed environments.
If you treat March 2026 as an opportunity to standardize, simplify, and modernize how traffic reaches your clusters, the ingress‑nginx sunset can leave your platform in a better place than it found it.
If you look at this and know your team doesn’t have the time or capacity to own all of it, Fairwinds helps teams plan and execute these migrations. From scoping and sequencing the work to choosing controllers and designing migration patterns that fit your environments, you get a Kubernetes platform and ingress stack that are fully operated for you as part of a managed Kubernetes engagement.