Why Ingress NGINX Is Dead

On November 11, 2025, Kubernetes SIG Network and the Security Response Committee published a retirement notice for the community-maintained Ingress NGINX Controller (kubernetes/ingress-nginx). Best-effort maintenance continues until March 2026. After that: zero releases, zero bugfixes, zero security patches.

The Kubernetes Steering Committee followed up in January 2026 with an unusually blunt statement: remaining on Ingress NGINX after retirement leaves you and your users vulnerable to attack. They weren't hedging.

Disambiguation: Two NGINX Controllers Exist

kubernetes/ingress-nginx (community) is what's dying. nginxinc/kubernetes-ingress (F5/NGINX Inc.) remains actively maintained under Apache 2.0. Different project. Different team. Different security posture. Confusing them will cost you.

How did we get here? The project was maintained by one or two volunteers working nights and weekends. The codebase accumulated what the Kubernetes Security Response Committee called “insurmountable technical debt”—particularly the snippets annotation system that allowed arbitrary NGINX config injection. The planned replacement, InGate, never shipped.
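To make the snippets problem concrete, here is a hypothetical Ingress (all names are illustrative) showing how a single annotation carries raw NGINX directives straight into the generated config:

```yaml
# Illustrative only: the annotation below injects raw NGINX directives
# verbatim into the generated server block. No validation, no allowlist.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app            # hypothetical name
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      # Anything here becomes live NGINX config. This is the
      # "arbitrary config injection" flaw the SRC called out.
      more_set_headers "X-Debug: on";
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80
```

Anyone with `create ingresses` RBAC in any namespace can write that annotation, and the controller renders it into the shared proxy config for the whole cluster.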

No CVE Patches

Any future vulnerability in the NGINX config parsing, admission webhook, or Lua modules stays open forever. You're on your own.

No Dependency Updates

Go modules, NGINX base image, OpenSSL—all freeze at last release. Known CVEs in transitive deps accumulate silently.

Compliance Drift

Running unpatched ingress fails most compliance frameworks. SOC 2, PCI-DSS, and FedRAMP all require timely patching of internet-facing components.

Community Exodus

Helm chart maintainers, cloud providers, and platform teams are dropping support. The ecosystem is already moving.

IngressNightmare: The CVEs That Sealed It

In March 2025, Wiz Research disclosed a chain of five vulnerabilities in Ingress NGINX collectively dubbed IngressNightmare. CVSS 9.8. Unauthenticated RCE leading to full cluster takeover. 43% of cloud environments were vulnerable, with over 6,500 clusters exposing the admission webhook to the public internet.

CVE-2025-1974
CVSS 9.8. Unauthenticated RCE via admission webhook. Any pod on the network can exploit it—no K8s credentials required. Full cluster-admin access to all secrets across all namespaces.
CVE-2025-24514
CVSS 8.8. auth-url annotation injection. Unsanitized input into NGINX config. Chained with CVE-2025-1974 for RCE.
CVE-2025-1097
CVSS 8.8. auth-tls-match-cn annotation injection. Same pattern—unsanitized input enables arbitrary code execution.
CVE-2025-1098
CVSS 8.8. mirror-target and mirror-host annotation injection via unsanitized UID.
CVE-2025-24513
CVSS 4.8. Directory traversal in auth secret file path. Lower severity alone, amplifies the others when chained.

The Kill Chain

The attack exploits a fundamental design flaw: the admission controller runs inside the same pod as the NGINX reverse proxy, and validates Ingress objects by running nginx -t on generated configs—without sandboxing. The attacker uploads a malicious shared library via NGINX's client-body-buffer, then injects an ssl_engine directive through annotation injection to load it during config validation.

IngressNightmare attack chain:

1. Attacker pod: any pod on the cluster network, no credentials required.
2. Upload .so: HTTP POST to NGINX on :80/:443; the client body buffer writes the payload under /tmp/.
3. Inject config: AdmissionReview request to :8443 injects an ssl_engine directive.
4. RCE achieved: nginx -t loads the .so; code runs as the controller ServiceAccount.
5. Cluster takeover: ALL secrets, ALL namespaces, cluster-admin.

Why this works:

1. The admission webhook is reachable from the pod network (default).
2. The NGINX proxy and admission controller share the same pod.
3. Config validation via nginx -t executes unsandboxed directives.
4. The controller ServiceAccount has cluster-wide Secret read (default RBAC).
5. No NetworkPolicy isolates the admission webhook by default.
IngressNightmare kill chain: from any pod to full cluster compromise without credentials
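Until you migrate, the single highest-leverage interim mitigation is to cut pod-network access to that admission webhook. A sketch, assuming a default Helm install (the labels and API-server address below are placeholders; verify both against your cluster):

```yaml
# Interim mitigation sketch: block pod-network access to the ingress-nginx
# admission webhook (port 8443), allowing only the API server.
# Labels and CIDR are assumptions; check your actual deployment.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: lock-nginx-admission
  namespace: ingress-nginx
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  policyTypes:
    - Ingress
  ingress:
    # Keep proxy traffic flowing
    - ports:
        - port: 80
        - port: 443
    # Webhook reachable from the API server only
    - from:
        - ipBlock:
            cidr: 203.0.113.10/32   # placeholder: your control-plane CIDR
      ports:
        - port: 8443
```

This breaks step 3 of the chain for in-cluster attackers; it does nothing about the 6,500+ clusters exposing the webhook to the public internet, which need a Service-level fix too.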

NGINX Ingress Annotation Risk Radar

This radar maps every dangerous annotation in the Ingress NGINX config injection surface. Each spoke is a real annotation. The radial axis scores impact from Info Leak (center) through DoS and Privilege Escalation to RCE (outer edge). The angular width represents exploitability. Every annotation shown here has at least one CVE tied to it—and server-snippet and configuration-snippet sit at the outer ring with direct RCE impact via CVE-2021-25742 and CVE-2025-1974.

Patching Is Not Enough

Even if you upgraded to the last release (v1.12.1), annotations like server-snippet, configuration-snippet, and auth-snippet still allow arbitrary NGINX directive injection by design. The vulnerability is architectural, not a bug. Any user with Ingress RBAC can inject ssl_engine, proxy_pass, or load_module directives into your NGINX config. The fix the K8s team applied for CVE-2025-1974 was to remove config validation entirely, not to sandbox it.
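If you must keep running ingress-nginx through the migration window, snippets can at least be disabled controller-wide via the `allow-snippet-annotations` ConfigMap setting (a real controller option; the ConfigMap name and namespace below assume a default Helm install):

```yaml
# Disable all snippet annotations controller-wide while you migrate.
# ConfigMap name/namespace assume a default Helm install of ingress-nginx.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: "false"
```

Expect breakage: any existing Ingress that depends on a snippet will stop getting that config rendered, which is exactly the inventory work Step 1 of the playbook forces you to do anyway.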
NGINX Ingress Annotation Risk Radar — Config Injection Surface mapped by Impact (Info Leak → RCE) × Exploitability

Attack Surface Anatomy

Your ingress controller is the most exposed component in the cluster. It terminates TLS, parses HTTP headers, evaluates routing rules, and holds credentials to interact with the Kubernetes API.

[Diagram: ingress controller attack surfaces: external traffic (L7), admission webhook, K8s API / RBAC, TLS cert Secrets, config injection, pod network access, Lua/plugin layer, Prometheus metrics]
Attack surface map of a typical Kubernetes ingress controller deployment

Ingress Controller Comparison: NGINX vs HAProxy vs Traefik vs Cilium

Four contenders. Each fills a different niche. HAProxy for raw performance and drop-in Ingress API compatibility. Traefik for automated service discovery and Let's Encrypt. Cilium for eBPF-native networking with built-in L7 visibility and zero sidecar overhead. This table scores them on what matters for a security-first migration.

| Dimension | Ingress NGINX (community) | HAProxy Ingress | Traefik | Cilium (Gateway API) |
|---|---|---|---|---|
| Status | EOL March 2026 | Active, dedicated team | Active, Traefik Labs | Active, Isovalent/Cisco |
| Proxy engine | NGINX (C) | HAProxy (C) | Traefik (Go) | Envoy (C++) + eBPF |
| Config injection | Snippets = arbitrary directives; architectural flaw | No snippets; CRDs with validation | No snippet model; middleware CRDs | No config injection; Gateway API + CiliumNetworkPolicy |
| Admission webhook | In-pod, unsandboxed `nginx -t` | Separate validation path | No `nginx -t` equivalent | Envoy xDS, no config exec |
| RBAC scope | Cluster-wide Secrets (default) | Namespace-scoped (configurable) | Namespace-scoped (configurable) | eBPF agent; kernel-level, no Secret access for datapath |
| Protocols | HTTP/S, HTTP/2, gRPC, WS, TCP (partial) | HTTP/S, HTTP/2, HTTP/3, gRPC, WS, TCP, TCP+TLS | HTTP/S, HTTP/2, HTTP/3, gRPC, WS, TCP, UDP | HTTP/S, HTTP/2, gRPC, WS, TCP, UDP + kernel-level L3/L4 |
| Rate limiting | L4 + L7 | L4 + L7 + DDoS | L7 via middleware | L3/L4 (eBPF) + L7 (Envoy) |
| WAF | ModSecurity | ModSecurity (SPOE) | Plugin-based | No built-in WAF |
| mTLS | Supported | Granular cert control | Native Let's Encrypt + mTLS | Transparent mTLS (SPIFFE) |
| Performance | ~11.7k RPS | ~42k RPS (3.5x) | ~19k RPS | eBPF bypasses iptables; near-wire L4 |
| Network policy | None; separate CNI required | None; separate CNI required | None; separate CNI required | Built-in; IS the CNI, L3-L7 enforcement |
| Gateway API | None (InGate dead) | Unified Gateway | Native support | Native: GatewayClass, HTTPRoute, TLSRoute |
| Observability | Requires exporters | Native Prometheus + stats | Dashboard + Prometheus + tracing | Hubble: L3-L7 flow visibility, service map, DNS |
| License | Apache 2.0 (archived) | Apache 2.0 | MIT | Apache 2.0 |
Benchmark Context RPS numbers from HAProxy Technologies benchmarks (50 concurrent workers, 5 injector pods, default configs). Cilium's eBPF datapath bypasses iptables entirely at L3/L4, so traditional RPS benchmarks don't capture its advantage — the win is in latency reduction and CPU efficiency at scale, not just raw throughput. Run your own tests against your actual traffic patterns.
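Three of the four replacements speak Gateway API natively, and the model is worth seeing side-by-side with Ingress: infrastructure (Gateway) and routing (HTTPRoute) are split into separate objects with typed fields instead of annotations. A minimal sketch; the resource kinds and fields are standard `gateway.networking.k8s.io/v1`, while every name, namespace, and hostname is illustrative:

```yaml
# Minimal Gateway API sketch. Kinds/fields are standard v1;
# names, namespaces, and hostnames are illustrative.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: edge-gateway
  namespace: infra
spec:
  gatewayClassName: cilium        # whatever class your controller registers
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        certificateRefs:
          - name: wildcard-example-com   # hypothetical TLS Secret
      allowedRoutes:
        namespaces:
          from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: demo-route
  namespace: production
spec:
  parentRefs:
    - name: edge-gateway
      namespace: infra
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: demo-app          # hypothetical backend Service
          port: 80
```

Note the security property: there is no free-text escape hatch anywhere in the schema, so the snippet class of injection simply has no place to live.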

Hardened Migration Playbook

Step 1: Inventory

Discover ingress-nginx deployments
```bash
# Find all ingress-nginx pods across namespaces
kubectl get pods --all-namespaces \
  --selector=app.kubernetes.io/name=ingress-nginx -o wide

# Check controller version
kubectl exec -n ingress-nginx deploy/ingress-nginx-controller \
  -- /nginx-ingress-controller --version

# List all Ingress resources using NGINX class
kubectl get ingress --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name} class={.spec.ingressClassName}{"\n"}{end}' \
  | grep -i nginx

# Export all Ingress resources, then count annotation types to spot snippet usage
kubectl get ingress --all-namespaces -o yaml > ingress-export.yaml
grep -oP 'nginx\.ingress\.kubernetes\.io/\K[^:]+' ingress-export.yaml \
  | sort | uniq -c | sort -rn
```
Snippet Annotations = Migration Blockers

server-snippet and configuration-snippet inject raw NGINX config. They won't translate to any replacement. Audit each one and convert it to the target controller's structured equivalent, or eliminate it.

Step 2: Deploy Side-by-Side

HAProxy Ingress side-by-side install
```bash
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update

# Install in separate namespace with its own IngressClass
helm install haproxy-ingress haproxytech/kubernetes-ingress \
  --namespace haproxy-ingress --create-namespace \
  --set controller.ingressClass=haproxy \
  --set controller.kind=DaemonSet \
  --set controller.service.type=LoadBalancer

# Migrate a single Ingress to test
kubectl patch ingress my-app -n production \
  --type=json -p='[{"op":"replace","path":"/spec/ingressClassName","value":"haproxy"}]'
```
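Rather than patching a production object first, you can smoke-test the new class with a throwaway Ingress (name, namespace, host, and backend Service below are all illustrative):

```yaml
# Illustrative smoke-test Ingress that routes only via the new controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-echo            # hypothetical name
  namespace: staging
spec:
  ingressClassName: haproxy    # new class; nginx controller ignores this object
  rules:
    - host: canary.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo      # hypothetical backend Service
                port:
                  number: 80
```

Because the two controllers watch different IngressClasses, this object is invisible to ingress-nginx and carries zero blast radius.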

Step 3: Harden the New Controller

NetworkPolicy: Restrict admission webhook
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-ingress-admission
  namespace: haproxy-ingress
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: kubernetes-ingress
  policyTypes:
    - Ingress
  ingress:
    # External traffic on HTTP/HTTPS
    - ports:
        - port: 80
        - port: 443
    # Admission webhook ONLY from API server
    - from:
        - ipBlock:
            cidr: <api-server-cidr>/32
      ports:
        - port: 8443
    # Prometheus from monitoring namespace
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - port: 10254
```
Least-privilege RBAC (namespace-scoped)
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-controller-role
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]  # Only this namespace
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses/status"]
    verbs: ["update"]
```

Migration Checklist

Four phases. Each gate must be validated before moving to the next.

Phase 1 — Audit & Plan
Inventory all ingress-nginx deployments
kubectl get pods --all-namespaces --selector=app.kubernetes.io/name=ingress-nginx -o wide
kubectl exec -n ingress-nginx deploy/ingress-nginx-controller -- /nginx-ingress-controller --version
Export and audit all annotations — flag snippet usage
kubectl get ingress --all-namespaces -o yaml > ingress-export.yaml
grep -oP 'nginx\.ingress\.kubernetes\.io/\K[^:]+' ingress-export.yaml | sort | uniq -c | sort -rn
Any hit on server-snippet, configuration-snippet, auth-snippet, stream-snippet = migration blocker. Map each to CRD equivalent or eliminate.
Map TLS secrets and certificate dependencies
kubectl get ingress --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name} tls={.spec.tls[*].secretName}{"\n"}{end}'
Document cert-manager issuers, wildcard certs, and any external certificate automation.
Check admission webhook exposure
kubectl get svc -n ingress-nginx -o json | jq '.items[] | select(.spec.ports[]? | .port == 8443) | {name: .metadata.name, type: .spec.type}'
kubectl get networkpolicy -n ingress-nginx — empty result = webhook reachable from all pods.
Phase 2 — Configure
Deploy new controller side-by-side (separate IngressClass)
helm repo add haproxytech https://haproxytech.github.io/helm-charts && helm repo update
helm install haproxy-ingress haproxytech/kubernetes-ingress --namespace haproxy-ingress --create-namespace --set controller.ingressClass=haproxy --set controller.kind=DaemonSet
Verify: kubectl get ingressclass — should show both nginx and haproxy.
Apply namespace-scoped RBAC (not ClusterRole)
Replace default ClusterRoleBinding with per-namespace RoleBinding. Controller must never have get/list/watch on Secrets cluster-wide. See the RBAC manifest in the Hardened Playbook above.
Deploy NetworkPolicy: lock admission webhook to API server
Apply the restrict-ingress-admission NetworkPolicy from the Hardened Playbook. Test: kubectl run probe --rm -it --image=curlimages/curl --restart=Never -- curl -k https://<controller-svc>:8443/healthz — must time out.
Convert Ingress manifests — annotation-by-annotation
For each NGINX annotation, find the target equivalent:
nginx.ingress.kubernetes.io/ssl-redirect → haproxy.org/ssl-redirect
nginx.ingress.kubernetes.io/proxy-body-size → haproxy.org/request-set-header
Snippets → rewrite as CRDs (VirtualServer, IngressRoute, or HTTPRoute). Git-commit each change.
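Applied to a manifest, the ssl-redirect mapping looks like this (names are illustrative, and the spec is abbreviated; rules are unchanged by the conversion):

```yaml
# Before: NGINX-class Ingress (illustrative, spec abbreviated)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  # ...rules unchanged
---
# After: same Ingress converted for the HAProxy controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: production
  annotations:
    haproxy.org/ssl-redirect: "true"
spec:
  ingressClassName: haproxy
  # ...rules unchanged
```

The conversion is mechanical for structured annotations like this one; only the snippet annotations require actual redesign.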
Phase 3 — Test
Canary one Ingress to the new controller in staging
kubectl patch ingress my-app -n staging --type=json -p='[{"op":"replace","path":"/spec/ingressClassName","value":"haproxy"}]'
Validate: curl -v -H "Host: app.example.com" https://<new-lb-ip>/healthz — check TLS cert, response headers, backend connectivity.
Load test: baseline comparison
hey -n 10000 -c 50 -h2 https://<new-lb-ip>/api/v1/status
Compare p50/p95/p99 latency, error rate, and CPU usage against the old controller. Flag any regression >15% at p99.
Security validation: IngressNightmare pattern test
From a non-privileged pod, attempt to reach the admission webhook: curl -k https://<controller-svc>.<ns>.svc:8443/
Attempt annotation injection on a test Ingress with auth-url: "http://evil.com; ssl_engine /tmp/evil.so;" — must be rejected by the new controller.
Verify Prometheus scraping and alerting
curl http://<controller-pod-ip>:10254/metrics | head -20
Confirm haproxy_frontend_http_requests_total (or equivalent) appears in Prometheus targets. Validate Grafana dashboard and PagerDuty/Slack alert routing.
Phase 4 — Deploy & Decommission
Migrate production Ingress resources service-by-service
for ing in $(kubectl get ingress -n production -o name); do kubectl patch $ing -n production --type=json -p='[{"op":"replace","path":"/spec/ingressClassName","value":"haproxy"}]'; sleep 30; done
Monitor 5xx rate after each patch. Keep old controller running as idle fallback.
48h monitoring window after full cutover
Watch: kubectl logs -f -n haproxy-ingress -l app.kubernetes.io/name=kubernetes-ingress --tail=100
Alert on: p99 latency regression, cert renewal failures (cert-manager events), any 5xx >0.1% of traffic.
Decommission ingress-nginx
helm uninstall ingress-nginx -n ingress-nginx
kubectl delete namespace ingress-nginx
kubectl delete validatingwebhookconfiguration ingress-nginx-admission
kubectl delete clusterrole,clusterrolebinding -l app.kubernetes.io/name=ingress-nginx

Final Word

Running v1.12.1 doesn't save you. The snippet annotations—server-snippet, configuration-snippet, auth-snippet—inject arbitrary directives into your NGINX config by design. Any user with Ingress RBAC can drop ssl_engine, proxy_pass, or load_module into your production proxy. That's not a bug. That's the architecture. And it ships with cluster-wide Secret access.

The latest version of an insecure design is still an insecure design.
Riad DAHMANI — k8sec Security Research

k8sec continuously monitors ingress controller security posture, detects IngressNightmare exploitation patterns, and automates migration validation across your fleet.
