Why Ingress NGINX Is Dead
On November 11, 2025, Kubernetes SIG Network and the Security Response Committee published a retirement notice
for the community-maintained Ingress NGINX Controller (kubernetes/ingress-nginx). Best-effort
maintenance continues until March 2026. After that: zero releases, zero bugfixes, zero security patches.
The Kubernetes Steering Committee followed up in January 2026 with an unusually blunt statement: remaining on Ingress NGINX after retirement leaves you and your users vulnerable to attack. They weren't hedging.
kubernetes/ingress-nginx (community) is what's dying. nginxinc/kubernetes-ingress (F5/NGINX Inc.)
remains actively maintained under Apache 2.0. Different project. Different team. Different security posture. Confusing them will cost you.
How did we get here? The project was maintained by one or two volunteers working nights and weekends. The codebase accumulated what the Kubernetes Security Response Committee called “insurmountable technical debt”—particularly the snippets annotation system that allowed arbitrary NGINX config injection. The planned replacement, InGate, never shipped.
- Any future vulnerability in the NGINX config parsing, admission webhook, or Lua modules stays open forever. You're on your own.
- Go modules, NGINX base image, OpenSSL—all freeze at the last release. Known CVEs in transitive deps accumulate silently.
- Running unpatched ingress fails most compliance frameworks. SOC 2, PCI-DSS, and FedRAMP all require timely patching of internet-facing components.
- Helm chart maintainers, cloud providers, and platform teams are dropping support. The ecosystem is already moving.
IngressNightmare: The CVEs That Sealed It
In March 2025, Wiz Research disclosed a chain of five vulnerabilities in Ingress NGINX collectively dubbed IngressNightmare. CVSS 9.8. Unauthenticated RCE leading to full cluster takeover. 43% of cloud environments were vulnerable, with over 6,500 clusters exposing the admission webhook to the public internet.
- auth-url annotation injection (CVE-2025-24514). Unsanitized input into NGINX config. Chained with CVE-2025-1974 for RCE.
- auth-tls-match-cn annotation injection (CVE-2025-1097). Same pattern—unsanitized input enables arbitrary code execution.
- mirror-target and mirror-host annotation injection via unsanitized UID (CVE-2025-1098).

The Kill Chain
The attack exploits a fundamental design flaw: the admission controller runs inside the same pod as the NGINX
reverse proxy, and validates Ingress objects by running nginx -t on generated configs—without
sandboxing. The attacker uploads a malicious shared library via NGINX's client-body-buffer, then injects
an ssl_engine directive through annotation injection to load it during config validation.
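The injection shape is easy to picture. Below is a minimal sketch of a hostile Ingress (hypothetical names throughout; the annotation value mirrors the injection probe used later in this article and is illustrative, not a working exploit):

```yaml
# Hypothetical Ingress abusing annotation injection (illustrative only).
# The semicolon terminates the intended auth-url directive; everything after
# it is parsed as raw NGINX config, including an ssl_engine directive
# pointing at an attacker-uploaded shared library.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: attacker-ingress            # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "http://evil.com; ssl_engine /tmp/evil.so;"
spec:
  ingressClassName: nginx
  rules:
    - host: attacker.example.com    # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: some-svc      # hypothetical backend
                port:
                  number: 80
```

Note the asymmetry: the attacker never needs to route real traffic. The payload fires during config validation, before the Ingress ever serves a request.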
NGINX Ingress Annotation Risk Radar
This radar maps every dangerous annotation in the Ingress NGINX config injection surface. Each spoke is a real
annotation. The radial axis scores impact from Info Leak (center) through DoS and Privilege Escalation to
RCE (outer edge). The angular width represents exploitability. Every annotation shown here has at least one
CVE tied to it—and server-snippet and configuration-snippet sit at the outer ring with
direct RCE impact via CVE-2021-25742 and CVE-2025-1974.
server-snippet,
configuration-snippet, and auth-snippet still allow arbitrary NGINX directive
injection by design. The vulnerability is architectural, not a bug. Any user with Ingress RBAC can inject
ssl_engine, proxy_pass, or load_module directives into your NGINX config.
The fix the K8s team applied to CVE-2025-1974 was to remove config validation entirely—not to sandbox it.
Attack Surface Anatomy
Your ingress controller is the most exposed component in the cluster. It terminates TLS, parses HTTP headers, evaluates routing rules, and holds credentials to interact with the Kubernetes API.
Ingress Controller Comparison: NGINX vs HAProxy vs Traefik vs Cilium
Four contenders. Each fills a different niche. HAProxy for raw performance and drop-in Ingress API compatibility. Traefik for automated service discovery and Let's Encrypt. Cilium for eBPF-native networking with built-in L7 visibility and zero sidecar overhead. This table scores them on what matters for a security-first migration.
| Dimension | Ingress NGINX (community) | HAProxy Ingress | Traefik | Cilium (Gateway API) |
|---|---|---|---|---|
| Status | EOL March 2026 | Active, dedicated team | Active, Traefik Labs | Active, Isovalent/Cisco |
| Proxy Engine | NGINX (C) | HAProxy (C) | Traefik (Go) | Envoy (C++) + eBPF |
| Config Injection | Snippets = arbitrary directives. Architectural flaw. | No snippets. CRDs with validation. | No snippet model. Middleware CRDs. | No config injection. Gateway API + CiliumNetworkPolicy. |
| Admission Webhook | In-pod, unsandboxed nginx -t | Separate validation path | No nginx -t equivalent | Envoy xDS, no config exec |
| RBAC Scope | Cluster-wide Secrets (default) | Namespace-scoped (configurable) | Namespace-scoped (configurable) | eBPF agent — kernel-level, no Secret access for datapath |
| Protocols | HTTP/S, HTTP/2, gRPC, WS, TCP (partial) | HTTP/S, HTTP/2, HTTP/3, gRPC, WS, TCP, TCP+TLS | HTTP/S, HTTP/2, HTTP/3, gRPC, WS, TCP, UDP | HTTP/S, HTTP/2, gRPC, WS, TCP, UDP + kernel-level L3/L4 |
| Rate Limiting | L4 + L7 | L4 + L7 + DDoS | L7 via middleware | L3/L4 (eBPF) + L7 (Envoy) |
| WAF | ModSecurity | ModSecurity (SPOE) | Plugin-based | No built-in WAF |
| mTLS | Supported | Granular cert control | Native Let's Encrypt + mTLS | Transparent mTLS (SPIFFE) |
| Performance | ~11.7k RPS | ~42k RPS (3.5x) | ~19k RPS | eBPF bypasses iptables — near-wire L4 |
| Network Policy | None — separate CNI required | None — separate CNI required | None — separate CNI required | Built-in — IS the CNI. L3-L7 policy enforcement. |
| Gateway API | None (InGate dead) | Unified Gateway | Native support | Native — GatewayClass, HTTPRoute, TLSRoute |
| Observability | Requires exporters | Native Prometheus + stats | Dashboard + Prometheus + tracing | Hubble — L3-L7 flow visibility, service map, DNS |
| License | Apache 2.0 (archived) | Apache 2.0 | MIT | Apache 2.0 |
Hardened Migration Playbook
Step 1: Inventory
```bash
# Find all ingress-nginx pods across namespaces
kubectl get pods --all-namespaces \
  --selector=app.kubernetes.io/name=ingress-nginx -o wide

# Check controller version
kubectl exec -n ingress-nginx deploy/ingress-nginx-controller \
  -- /nginx-ingress-controller --version

# List all Ingress resources using NGINX class
kubectl get ingress --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name} class={.spec.ingressClassName}{"\n"}{end}' \
  | grep -i nginx

# Export all Ingress resources for annotation analysis
kubectl get ingress --all-namespaces -o yaml > ingress-export.yaml

# Count annotation types — identify snippet usage
grep -oP 'nginx\.ingress\.kubernetes\.io/\K[^:]+' ingress-export.yaml \
  | sort | uniq -c | sort -rn
```
server-snippet and configuration-snippet inject raw NGINX config.
They won't translate to any replacement. Audit each one and convert to the target controller's
structured equivalent or eliminate them.
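That audit can be made mechanical with a small gate script. The sketch below stubs `ingress-export.yaml` with a sample export so it runs standalone; in practice the file comes from the `kubectl get ingress --all-namespaces -o yaml` export in Step 1:

```shell
# Sample export stands in for:
#   kubectl get ingress --all-namespaces -o yaml > ingress-export.yaml
cat > ingress-export.yaml <<'EOF'
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      location /debug { return 200; }
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
EOF

# Count lines carrying any snippet-family annotation; each one is a
# migration blocker that must be converted or eliminated first.
blockers=$(grep -cE 'nginx\.ingress\.kubernetes\.io/[a-z-]*snippet' ingress-export.yaml)
if [ "$blockers" -gt 0 ]; then
  echo "migration blocker: $blockers snippet annotation(s) found"
fi
```

Wire this into CI so a snippet annotation can never silently reappear mid-migration.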
Step 2: Deploy Side-by-Side
```bash
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update

# Install in separate namespace with its own IngressClass
helm install haproxy-ingress haproxytech/kubernetes-ingress \
  --namespace haproxy-ingress --create-namespace \
  --set controller.ingressClass=haproxy \
  --set controller.kind=DaemonSet \
  --set controller.service.type=LoadBalancer

# Migrate a single Ingress to test
kubectl patch ingress my-app -n production \
  --type=json -p='[{"op":"replace","path":"/spec/ingressClassName","value":"haproxy"}]'
```
Step 3: Harden the New Controller
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-ingress-admission
  namespace: haproxy-ingress
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: kubernetes-ingress
  policyTypes:
    - Ingress
  ingress:
    # External traffic on HTTP/HTTPS
    - ports:
        - port: 80
        - port: 443
    # Admission webhook ONLY from API server
    - from:
        - ipBlock:
            cidr: <api-server-cidr>/32
      ports:
        - port: 8443
    # Prometheus from monitoring namespace
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - port: 10254
```
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-controller-role
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]  # Only this namespace
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses/status"]
    verbs: ["update"]
```
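A namespace-scoped Role only takes effect once bound to the controller's ServiceAccount. A sketch of the matching RoleBinding follows; the ServiceAccount name is an assumption based on typical Helm chart naming, so verify it against your actual install (`kubectl get sa -n haproxy-ingress`):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-controller-binding      # hypothetical name
  namespace: production                 # grants access only in this namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller-role
subjects:
  - kind: ServiceAccount
    name: haproxy-ingress-kubernetes-ingress  # assumption — your chart's SA name may differ
    namespace: haproxy-ingress
```

Repeat one RoleBinding per application namespace instead of granting a single cluster-wide binding; that is what keeps a controller compromise from becoming a cluster-wide Secret leak.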
Migration Checklist
Four phases. Each gate must be validated before moving to the next.
Phase 1: Inventory & Audit
- `kubectl get pods --all-namespaces --selector=app.kubernetes.io/name=ingress-nginx -o wide`
- `kubectl exec -n ingress-nginx deploy/ingress-nginx-controller -- /nginx-ingress-controller --version`
- `kubectl get ingress --all-namespaces -o yaml > ingress-export.yaml`
- `grep -oP 'nginx\.ingress\.kubernetes\.io/\K[^:]+' ingress-export.yaml | sort | uniq -c | sort -rn` — any hit on `server-snippet`, `configuration-snippet`, `auth-snippet`, or `stream-snippet` = migration blocker. Map each to a CRD equivalent or eliminate it.
- `kubectl get ingress --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name} tls={.spec.tls[*].secretName}{"\n"}{end}'` — document cert-manager issuers, wildcard certs, and any external certificate automation.
- `kubectl get svc -n ingress-nginx -o json | jq '.items[] | select(.spec.ports[]? | .port == 8443) | {name: .metadata.name, type: .spec.type}'`
- `kubectl get networkpolicy -n ingress-nginx` — empty result = webhook reachable from all pods.

Phase 2: Deploy & Harden
- `helm repo add haproxytech https://haproxytech.github.io/helm-charts && helm repo update`
- `helm install haproxy-ingress haproxytech/kubernetes-ingress --namespace haproxy-ingress --create-namespace --set controller.ingressClass=haproxy --set controller.kind=DaemonSet`
- Verify: `kubectl get ingressclass` — should show both `nginx` and `haproxy`.
- Replace the cluster-wide ClusterRoleBinding with per-namespace RoleBindings. The controller must never have get/list/watch on Secrets cluster-wide. See the RBAC manifest in the Hardened Playbook above.
- Apply the `restrict-ingress-admission` NetworkPolicy from the Hardened Playbook. Test: `kubectl run probe --rm -it --image=curlimages/curl --restart=Never -- curl -k https://<controller-svc>:8443/healthz` — must time out.

Phase 3: Migrate & Validate
- Translate annotations: `nginx.ingress.kubernetes.io/ssl-redirect` → `haproxy.org/ssl-redirect`; `nginx.ingress.kubernetes.io/proxy-body-size` → `haproxy.org/request-set-header`. Snippets → rewrite as CRDs (`VirtualServer`, `IngressRoute`, or `HTTPRoute`). Git-commit each change.
- `kubectl patch ingress my-app -n staging --type=json -p='[{"op":"replace","path":"/spec/ingressClassName","value":"haproxy"}]'`
- Validate: `curl -v -H "Host: app.example.com" https://<new-lb-ip>/healthz` — check TLS cert, response headers, backend connectivity.
- Load test: `hey -n 10000 -c 50 -h2 https://<new-lb-ip>/api/v1/status` — compare p50/p95/p99 latency, error rate, and CPU usage against the old controller. Flag any regression >15% at p99.
- Probe the admission endpoint: `curl -k https://<controller-svc>.<ns>.svc:8443/`. Attempt annotation injection on a test Ingress with `auth-url: "http://evil.com; ssl_engine /tmp/evil.so;"` — must be rejected by the new controller.
- `curl http://<controller-pod-ip>:10254/metrics | head -20` — confirm `haproxy_frontend_http_requests_total` (or equivalent) appears in Prometheus targets. Validate Grafana dashboard and PagerDuty/Slack alert routing.

Phase 4: Cutover & Decommission
- `for ing in $(kubectl get ingress -n production -o name); do kubectl patch $ing -n production --type=json -p='[{"op":"replace","path":"/spec/ingressClassName","value":"haproxy"}]'; sleep 30; done` — monitor 5xx rate after each patch. Keep the old controller running as an idle fallback.
- `kubectl logs -f -n haproxy-ingress -l app.kubernetes.io/name=kubernetes-ingress --tail=100`
- Alert on: p99 latency regression, cert renewal failures (cert-manager events), any 5xx >0.1% of traffic.
- `helm uninstall ingress-nginx -n ingress-nginx`
- `kubectl delete namespace ingress-nginx`
- `kubectl delete validatingwebhookconfiguration ingress-nginx-admission`
- `kubectl delete clusterrole,clusterrolebinding -l app.kubernetes.io/name=ingress-nginx`
Running v1.12.1 doesn't save you. The snippet annotations—server-snippet,
configuration-snippet, auth-snippet—inject arbitrary directives
into your NGINX config by design. Any user with Ingress RBAC can drop ssl_engine,
proxy_pass, or load_module into your production proxy.
That's not a bug. That's the architecture. And it ships with cluster-wide Secret access.
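If you cannot migrate immediately, one stopgap is to disable snippet processing outright via the controller's `allow-snippet-annotations` ConfigMap key. A minimal sketch, assuming the ConfigMap name and namespace of a default Helm install (verify yours with `kubectl get cm -n ingress-nginx`):

```yaml
# Stopgap only — reduces the injection surface; it does not restore
# security patching after EOL. ConfigMap name/namespace assume a
# default Helm install of ingress-nginx.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: "false"
```

Expect breakage in any workload that relied on snippet annotations; that breakage is exactly your migration-blocker list from the inventory step.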
k8sec continuously monitors ingress controller security posture, detects IngressNightmare exploitation patterns, and automates migration validation across your fleet.
Explore k8sec Platform