
Kubernetes Is Moving On From ingress‑nginx: How Are You Planning Your 2026 Migration?

Written by Stevie Caldwell | Jan 20, 2026 5:37:48 PM

Kubernetes’ community ingress‑nginx controller is officially retiring in March 2026, and that deadline is now squarely on the radar for platform and DevOps teams. This post distills what’s happening and what real teams are doing into a concrete, opinionated guide you can walk through with your stakeholders.

What’s Happening to ingress‑nginx?

If you’re wondering why this is happening at all, the Kubernetes post is blunt: ingress‑nginx became too widely used and security‑sensitive for the current maintainer team to handle safely. The community couldn’t keep pace with the volume of CVEs and the expectations around a critical path networking component, so they chose a controlled retirement rather than letting it quietly decay.

The retirement affects the community ingress‑nginx controller maintained under Kubernetes SIG Network, not every NGINX‑based ingress on the planet.

  • The Kubernetes community has put ingress‑nginx into best‑effort maintenance and plans to retire it in March 2026.
  • After that, the repository will be archived (read‑only) and no new releases, bugfixes, or security patches will be produced.
  • Your clusters will still route traffic, but you’ll be running an unmaintained, internet‑facing component and accumulating security and compatibility risk over time.

The Kubernetes project is clear that users should plan to migrate to alternative controllers sooner rather than later.

What Are the Options?

Option 1. Gateway API first

Many teams are moving to Gateway API, the modern replacement for most Ingress use cases, because it offers more expressive routing, better extension points, and built‑in multi‑tenant support.

Typical plan:

  • Pick a Gateway implementation that fits your platform, ingress routing needs, and skills.
  • Stand it up alongside ingress‑nginx.
  • Gradually migrate from Ingress plus nginx annotations to Gateway and HTTPRoute objects.

Just be aware that not all of the more exotic ingress‑nginx features have clean Gateway equivalents yet, so you may need to redesign a few edge cases instead of converting them one‑for‑one. The appeal of this option: do one major migration now, onto the model the Kubernetes community is investing in for the next generation of L4/L7 traffic management.
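To make this concrete, here is a minimal sketch of the target shape, assuming a hypothetical shop-frontend Service and a placeholder GatewayClass name (use whatever class your chosen implementation installs). The split between a platform‑owned Gateway and app‑owned HTTPRoutes is where the multi‑tenancy benefits come from:

# Platform team owns the Gateway; "example-gateway-class" is a placeholder.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
  namespace: infra
spec:
  gatewayClassName: example-gateway-class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All   # let application namespaces attach their own routes
---
# App team owns its route; this replaces an Ingress rule for shop.example.com.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: shop
  namespace: shop
spec:
  parentRefs:
    - name: web-gateway
      namespace: infra
  hostnames:
    - shop.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: shop-frontend
          port: 80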

If you expect these clusters to be around in 3–5 years, this is the only option that doesn’t just push the problem onto your future self with yet another migration or another end‑of‑life controller. If you want a deeper dive on why this shift is happening, check out this CNCF talk from 2024: The State of Ingress: Why Do We Need Gateway API? 

If you are comparing Gateway implementations, the community‑run Gateway API Benchmarks project is a good way to see how different controllers behave under load, at scale, and in multi‑tenant setups, well beyond what the basic conformance tests cover.

Option 2. Drop‑in replacement now, Gateway later

Other teams want to reduce immediate risk without changing how their manifests look. This is a good option if you’re under time pressure and can’t absorb a Gateway learning curve this quarter.

Common strategy:

  • Swap ingress‑nginx for another Ingress controller that still supports the Ingress API and many nginx‑style semantics (for example, vendor NGINX controllers or Traefik with compatibility for common annotations).
  • Keep manifests mostly unchanged for now (see the sketch after this list).
  • Once the platform is stable, schedule a follow‑up project to move from Ingress to Gateway API.
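In the simplest cases, the swap is mostly a change of ingress class. A minimal sketch, assuming Traefik as the replacement and a hypothetical shop-frontend app; note that nginx‑specific annotations still need an audit, since each one may need a Traefik equivalent or may simply be dropped:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-frontend
  namespace: shop
  annotations:
    # Audit each nginx-specific annotation; do not assume compatibility.
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  ingressClassName: traefik   # was: nginx
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shop-frontend
                port:
                  number: 80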

Very few Gateway API implementations also support the Ingress API, so a drop‑in Ingress replacement will most likely need to be swapped out again later for a Gateway implementation. In other words, with this option you’re almost certainly doing the swap twice.

Option 3. Stay on Ingress controllers

Some teams aren't ready for Gateway API at all. They plan to:

  • Pick a maintained Ingress controller.
  • Port their existing Ingress resources and annotations as directly as possible.

A typical example is a single‑tenant internal cluster that will be decommissioned in the next 12–18 months, where the cost of a Gateway migration clearly outweighs the remaining lifetime of the environment.

This path makes sense if you have a short cluster lifetime or a heavy investment in Ingress semantics you can't easily translate. Another reason to stay on Ingress is simply not needing Gateway API's advanced features: if you have no complicated routing rules, moving to Gateway API is an operational burden (training your team, migrating your manifests) without a clear payoff. For those teams, staying on Ingress makes sense, but there is still the risk that one day the Ingress API itself is removed.

Unlike the drop‑in option, there is no second‑phase "Gateway later" plan here; you’re explicitly choosing Ingress for the lifetime of the cluster. Don’t use this for any cluster without a firm end‑of‑life date. Over the long term, you should assume fewer new features and slower innovation for Ingress‑only controllers compared to Gateway‑based options, along with a higher risk of deprecations and drift in the surrounding ecosystem.

Option 4. Lean on cloud L7

In many managed environments like GKE, EKS, and AKS, teams are already fronting workloads with cloud‑native L7 load balancers and controllers, and ingress‑nginx is only used in pockets or legacy clusters. In these environments, the cloud‑native L7 controller already fills the same role a Gateway implementation would for north‑south traffic, and cloud providers are increasingly integrating Gateway API directly into these services.

For these teams, the work is mostly:

  • Auditing clusters to find any remaining ingress‑nginx installs.
  • Cleaning it out of old Helm charts, Terraform modules, and CI/CD templates.
  • Standardizing on one approved north‑south path per cloud.
  • Avoiding multiple north–south paths that make it easy for teams to bypass your standard controller. That fragmentation weakens security controls and observability and leaves exposed entry points you may not be monitoring.

For this option, the priority is simplification and removal of ingress‑nginx, not a wholesale redesign of how traffic enters the cluster. The main tradeoff in this option is that you’re leaning harder on provider‑specific L7 features, which is fine for a single‑cloud strategy but makes true multi‑cloud or on‑prem portability harder later.

Practical Migration Playbooks for 2026

Regardless of which camp you fall into, you can structure the remaining time so you actually finish the migration instead of letting it drift.

To quickly check whether a cluster is running ingress‑nginx, the Kubernetes project suggests:

kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx
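To run the same check across every cluster you have credentials for, a small loop over your kubectl contexts works, assuming each context maps to one cluster:

# Report ingress-nginx pods in every configured context.
for ctx in $(kubectl config get-contexts -o name); do
  echo "== ${ctx}"
  kubectl --context "${ctx}" get pods --all-namespaces \
    --selector app.kubernetes.io/name=ingress-nginx --no-headers 2>/dev/null
done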

1. Inventory and Risk‑Rank Your Ingresses

Start with visibility:

  • Enumerate all clusters and environments where ingress‑nginx is installed (Helm releases, operators, manifests).
  • For each cluster, capture:
    • Ingress‑nginx version and Kubernetes version
    • Number of Ingress objects
    • Critical paths (public customer traffic vs. internal tools)

Then look for special usage patterns that make migration harder:

  • Extensive use of ingress‑nginx annotations (auth, rate limits, custom timeouts, proxy buffers).
  • ConfigMaps that carry extra configuration (for example, the tcp‑services and udp‑services ConfigMaps, often used to expose raw TCP/UDP services such as SSH).
  • Canary routing, blue‑green setups, or advanced rewrite rules.

Those are your high‑risk, high‑effort candidates.
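Two quick checks can seed this inventory. The first counts Ingress objects per namespace; the second, assuming jq 1.5+ is available, flags Ingresses that carry nginx‑specific annotations:

# Count Ingress objects per namespace
kubectl get ingress --all-namespaces --no-headers | awk '{print $1}' | sort | uniq -c

# List Ingresses using nginx.ingress.kubernetes.io annotations
kubectl get ingress --all-namespaces -o json | jq -r '
  .items[]
  | select((.metadata.annotations // {}) | keys | any(startswith("nginx.ingress.kubernetes.io")))
  | "\(.metadata.namespace)/\(.metadata.name)"'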

2. Choose Your Target: Gateway API vs. Ingress Controller

Be explicit in your design decision:

  • If you expect these clusters and apps to live 3–5 years or more, favor the Gateway‑first option and treat Gateway API as your primary model. Just be aware that some advanced ingress‑nginx annotations may not have direct equivalents yet, so budget time to redesign those edge cases. Pattern C below is the concrete implementation of this option.
  • If your clusters are short‑lived or your main objective is to get off ingress‑nginx in the next 6–9 months with minimal change, the drop‑in Ingress‑to‑Ingress option might be enough. Keep in mind, though, that Ingress controllers that also support Gateway API are uncommon; Traefik is one of the few that does.
  • If you truly have short‑lived or isolated clusters, the Ingress‑only option can work. What doesn’t work is pretending that deciding later is a plan. That’s how you end up running three ingress patterns for the next five years. If you don’t consciously pick one of these options, you’re effectively in the "ride it until it breaks" option by default.

You can even mix your approach: adopt Gateway API for new apps while giving older environments a simpler, Ingress‑only migration path.

3. Pick a Migration Pattern

You’ve already picked an option; the patterns below are simply three ways to execute that choice. Pattern A usually fits smaller Gateway‑first or Ingress‑to‑Ingress moves, and Pattern B fits most large or shared clusters. Pattern C is the concrete implementation of the Gateway‑first option.

Pattern A: Big‑Bang Per Cluster

  • Deploy your new Gateway or Ingress controller.
  • Recreate routing configuration (Ingress or HTTPRoute) in the new stack.
  • Flip traffic over at the load balancer/DNS layer.
  • Remove ingress‑nginx.

Pros: Clean, fast, easy to reason about, especially if you have a non‑prod environment that looks like prod.
Cons: High blast radius; demands excellent test coverage and rollback, plus a very clear sequencing plan (staging → low‑risk prod → high‑risk prod).

Caution: If you don’t have a realistic pre‑prod, this pattern can be hard to execute safely.

Pattern B: Brownfield Parallel Ingress

  • Run ingress‑nginx and the new controller side by side.
  • Put the new controller behind a separate LoadBalancer or hostname.
  • Migrate apps one hostname/path at a time, comparing behavior and metrics.
  • When everything is moved, shut down ingress‑nginx.

Pros: Great for large legacy clusters; allows careful, incremental cutover. For example, if you have 100+ Ingress objects hanging off a single shared ingress‑nginx in a multi‑tenant cluster, Pattern B is almost always safer than a big‑bang cutover.

Cons: More moving pieces; you are running two critical path ingress stacks for a while and you need discipline to actually turn the old one off.
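While the two stacks coexist, each controller claims its own IngressClass, and apps move one at a time by switching class. A sketch, with traefik standing in for whichever replacement you chose:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx              # legacy class; unmigrated apps keep pointing here
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: traefik            # replacement class, behind its own LoadBalancer
spec:
  controller: traefik.io/ingress-controller

Moving one app is then a one‑line change to its Ingress (ingressClassName: nginx becomes traefik) plus a DNS cutover for that hostname, which is what keeps the blast radius small.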

Pattern C: Gateway‑First, Auto‑Convert

  • Introduce Gateway API with your chosen implementation.
  • Use tools such as ingress2gateway and similar converters to generate Gateway and HTTPRoute resources from existing Ingress manifests as a starting point, but expect to review and adjust the generated config before it’s production‑ready.
  • Iterate on the converted config until behavior matches; then retire ingress‑nginx.

Pros: Steers you directly onto the future Kubernetes networking model; good for organizations standardizing on Gateway API.
Cons: Requires more upfront learning and design work.

Pattern C works best when you have a large existing Ingress footprint and want to accelerate a Gateway‑first shift, rather than designing every Gateway and HTTPRoute by hand from scratch.
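The converter is a small CLI. At the time of writing, the upstream project documents usage along these lines; verify the flags against the version you install:

# Read Ingress resources from the current cluster and emit Gateway API equivalents
ingress2gateway print --providers=ingress-nginx > converted-routes.yaml

Treat the output as a draft: review listener setup, TLS, and any annotation‑driven behavior before promoting it to production.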

4. Treat March 2026 as a Compliance Deadline

From a risk and governance perspective, retired infrastructure is a compliance problem as much as an engineering risk.

Concretely:

  • Set March 2026 as the latest allowed date for ingress‑nginx in production; many teams are targeting much earlier internal dates.
  • Update risk registers and security policies to explicitly call out unmaintained ingress controllers as unacceptable.
  • Add monitoring/alerts that flag any ingress‑nginx deployment so it can't quietly persist after the deadline.

That way, when the first post‑retirement NGINX or Kubernetes CVE hits, you aren't scrambling to patch a stack that no one maintains anymore. This is also the time to codify ingress policies and checks in CI, so new workloads can’t quietly re‑introduce ingress‑nginx into your clusters.
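As one sketch of such a CI gate, assuming kubectl is already authenticated against the target cluster:

#!/usr/bin/env sh
# Fail the pipeline if any ingress-nginx pods still exist anywhere.
if kubectl get pods --all-namespaces \
    --selector app.kubernetes.io/name=ingress-nginx \
    --no-headers 2>/dev/null | grep -q .; then
  echo "ERROR: ingress-nginx detected; it is retired and must be removed." >&2
  exit 1
fi
echo "OK: no ingress-nginx pods detected."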

If you do nothing else this week, do this:

  1. Run the kubectl check on every cluster.
  2. Decide which option you’re in: Gateway‑first, Ingress‑to‑Ingress, or cloud‑L7‑only.
  3. Put an internal “no more ingress‑nginx in prod after <date>” decision on the calendar.

Don’t Panic, But Don’t Drift

The retirement of ingress‑nginx is not a sign that Kubernetes networking is collapsing; it is a signal that a hugely popular community project hit the limits of sustainable stewardship. The upside is that there is now a clearer path forward:

  • Gateway API as your long‑term model, plus a healthy ecosystem of open‑source and vendor controllers.
  • A real deadline that forces organizations to address networking tech debt that has been easy to postpone.

If you treat March 2026 as an opportunity to standardize, simplify, and modernize how traffic reaches your clusters, the ingress‑nginx sunset can leave your platform in a better place than it found it.

If you look at this and know your team doesn’t have the time or capacity to own all of it, Fairwinds helps teams plan and execute these migrations. From scoping and sequencing the work to choosing controllers and designing migration patterns that fit your environments, you get a Kubernetes platform and ingress stack that are fully operated for you as part of a managed Kubernetes engagement.