We ran kubectl get pods -A -o yaml and found database credentials in plaintext environment variables, probes hitting internal services with hardcoded IPs, containers running as root with no seccomp profile, and a mounted Docker socket. The pod spec undid months of infrastructure hardening in 47 lines of YAML.

01 Secrets in Environment Variables — The Most Common Mistake
Open any Kubernetes deployment tutorial and find the database configuration. You will almost certainly see something like this:
```yaml
spec:
  containers:
  - name: app
    env:
    - name: PORT
      value: "8080"
    - name: CURRENCY_SERVICE_ADDR
      value: "currencyservice:7000"
    - name: SHIPPING_SERVICE_ADDR
      value: "shippingservice:50051"
    # ⚠ These should NEVER be here
    - name: DATABASE_ADDR
      value: "postgres:5432"
    - name: DATABASE_USER
      value: "secret_user_name"
    - name: DATABASE_PASSWORD
      value: "the_secret_password"
    - name: DATABASE_NAME
      value: "users"
```
Here's who can read those plaintext credentials:

- Version control and CI. The YAML containing your credentials is almost certainly committed to a repository. Every developer, every CI runner, and every GitHub Actions log that echoes the manifest has your database password.
- Anyone who can exec into the container. Environment variables are readable from /proc/1/environ inside the container, and from the host node for privileged processes. Any container exec gives an attacker your full environment.
- Anyone with pod read access. kubectl describe pod prints environment variables, so any user with pod read access in the namespace sees every injected credential in plaintext.
- Your observability stack. Application crashes, OOM kills, and debug profiles can capture the full environment at the time of failure. Your credentials end up in your logging pipeline, your monitoring system, and your pager alerts.
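The exposure is trivial to demonstrate. Any shell spawned inside the container (for example via kubectl exec) can dump a process's environment; the variable name and value below are stand-ins:

```shell
# A child process sees every env-var credential; /proc/1/environ exposes the
# main process's environment the same way to anyone who can exec into the pod.
DATABASE_PASSWORD=hunter2 sh -c 'tr "\0" "\n" < /proc/self/environ' | grep '^DATABASE_PASSWORD'
# prints: DATABASE_PASSWORD=hunter2
```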
The safer pattern references a Kubernetes Secret for anything sensitive, or better still mounts it as a file:

```yaml
spec:
  containers:
  - name: app
    env:
    - name: PORT
      value: "8080"
    # Reference a Kubernetes Secret for sensitive values
    - name: DATABASE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
    # Even better: mount the secret as a file and pass only the path
    - name: MY_SECRET_FILE
      value: "/mnt/secrets/foo.toml"
    volumeMounts:
    - name: secrets
      mountPath: /mnt/secrets
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: db-credentials
```
Mounted secret files are visible only inside the container's filesystem. They do not appear in kubectl describe, do not echo in logs, and are not captured in most crash dumps. Rotation does not require a pod restart: the kubelet updates the volume in place (subPath mounts are the exception and do not receive updates).
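On the application side, the pattern that pairs with file mounts is to read the credential at use time rather than at startup, so an in-place rotation takes effect on the next read. A minimal sketch; the /tmp path simulates the kubelet-managed mount, which in the pod would live at /mnt/secrets:

```shell
# Simulate the kubelet-managed secret file (in the pod: /mnt/secrets/password)
mkdir -p /tmp/mnt-secrets && printf 'hunter2' > /tmp/mnt-secrets/password

# Read at use time, not startup: a rotated file is picked up on the next read
DB_PASSWORD="$(cat /tmp/mnt-secrets/password)"
echo "connecting with a password of length ${#DB_PASSWORD}"
# prints: connecting with a password of length 7
```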
02 Seccomp Profiles — Blocking the Syscalls Attackers Need
Every container escape, every privilege escalation, every kernel exploit requires specific Linux system calls. Seccomp (Secure Computing Mode) filters which syscalls a container can make. With no profile, the container can use all of the roughly 300+ syscalls the kernel supports; a proper profile allows only the 40-60 your application actually needs.

Several container-escape kernel CVEs have required the unshare syscall, and CVE-2021-4034 (Polkit "PwnKit") required execve with specific arguments. A seccomp profile that excludes the syscalls an exploit depends on makes it unexploitable — regardless of whether you have patched the kernel.

The Three Seccomp Levels

Unconfined — no filtering at all. This is what you get when you don't set a seccompProfile. The container can make any syscall the kernel supports, including the ones used for container escape and privilege escalation.

RuntimeDefault — the container runtime's built-in profile (containerd and CRI-O each ship one). It blocks several dozen dangerous syscalls while permitting everything a typical application needs. This is the minimum for every production pod.

Localhost — a custom profile loaded from a file on the node. The strictest option: you enumerate exactly the syscalls your application uses and deny everything else.

```yaml
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault        # Absolute minimum for every production pod
  containers:
  - name: app
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: 1000
      capabilities:
        drop: ["ALL"]             # Drop every Linux capability
      seccompProfile:
        type: RuntimeDefault      # Can also be set per container
```
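You can verify from inside a running container which mode actually applied; the kernel reports it in the process status file:

```shell
# Seccomp mode of the current process:
#   0 = disabled (Unconfined), 1 = strict, 2 = filter (RuntimeDefault/Localhost)
grep '^Seccomp:' /proc/self/status
```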
Rather than writing a profile by hand, the Security Profiles Operator can record one from a running workload:

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileRecording
metadata:
  name: my-app-recording
spec:
  kind: SeccompProfile
  recorder: bpf          # eBPF-based recording (recommended)
  podSelector:
    matchLabels:
      app: my-app
```
03 SELinux — Mandatory Access Control for Containers
Where seccomp filters WHAT syscalls a process can make, SELinux controls what RESOURCES a process can access — files, ports, other processes, devices. They're complementary, not alternatives.
In a Kubernetes context, SELinux prevents a compromised container from accessing host files, other containers' filesystems, or kernel interfaces that should be off-limits. It enforces these restrictions even if the container is running as root.
```yaml
spec:
  securityContext:
    seLinuxOptions:
      type: container_t       # Standard container SELinux type
      level: "s0:c123,c456"   # Multi-Category Security — isolates containers from each other
```
The Security Profiles Operator can also manage tailored SELinux policies as Kubernetes objects:

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1alpha2
kind: SelinuxProfile
metadata:
  name: my-app-selinux
spec:
  allow:
    container_t:
      tcp_socket: [listen, accept, bind]
      dir: [read, open, search]
      file: [read, open, getattr]
```
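A workload then opts into the installed policy by name. This sketch assumes the profile above was created in a namespace called my-namespace; SPO derives the usable SELinux type from the profile name and namespace:

```yaml
spec:
  containers:
  - name: app
    securityContext:
      seLinuxOptions:
        # SPO-generated type: <profile-name>_<namespace>.process
        type: my-app-selinux_my-namespace.process
```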
04 Information Leakage from Pod Specs
A pod spec is a goldmine for an attacker performing reconnaissance — and most teams don't realize how much they're exposing.
Namespace inference from the DNS resolver

From within a pod, an attacker doesn't need API access to determine which namespace they're in. The DNS resolver configuration tells them:

```shell
# Inside any pod, the namespace is in /etc/resolv.conf
$ grep -o "search [^ ]*" /etc/resolv.conf
search secret-namespace.svc.cluster.local
```

The namespace name is the first label of the search domain.
Pod start time and scheduling metadata

When you dump a pod spec from the API server (kubectl get pod -o yaml), it includes the pod's start time, the node it is scheduled on, the service account it uses, all environment variables, volume mounts, and the full container image reference including the registry. For an attacker with read access, that is a complete reconnaissance report on the workload.
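One jq filter is enough to harvest the interesting fields. The pod JSON below is a fabricated stub standing in for kubectl get pod <name> -o json; the node, service account, and image names are invented:

```shell
cat <<'EOF' | jq '{node: .spec.nodeName, serviceAccount: .spec.serviceAccountName, images: [.spec.containers[].image]}'
{"spec": {"nodeName": "ip-10-0-1-17",
          "serviceAccountName": "payments-sa",
          "containers": [{"image": "registry.internal/payments@sha256:abc"}]}}
EOF
```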
Rogue pods and selector drift

Pod specs that don't match expected selectors create several security issues:

- Exclusion from network policy: if a NetworkPolicy targets pods by label and your pod doesn't carry the expected label, it falls outside the policy, receiving no restriction on ingress or egress traffic.
- Unexpected routing: Services also select pods by label, so a pod with drifted labels can silently drop out of the Service that should route to it, or an unintended pod can be pulled into one.
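Drift is easy to detect mechanically. The filter below flags pods missing an app label (the label key is an assumption; use whichever key your NetworkPolicies select on), and the sample JSON stands in for kubectl get pods --all-namespaces -o json:

```shell
cat <<'EOF' | jq -r '.items[] | select(.metadata.labels.app == null) | "\(.metadata.namespace)/\(.metadata.name)"'
{"items": [
  {"metadata": {"namespace": "prod", "name": "good",    "labels": {"app": "web"}}},
  {"metadata": {"namespace": "prod", "name": "drifted", "labels": {"run": "tmp"}}}
]}
EOF
# prints: prod/drifted
```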
05 Probes — The Security Risk Nobody Talks About
Liveness and readiness probes are essential for Kubernetes orchestration. They're also a potential information disclosure and SSRF vector.
```yaml
livenessProbe:
  httpGet:
    host: 172.31.6.71      # Hardcoded internal IP — why?
    path: /
    port: 8000
    httpHeaders:
    - name: CustomHeader
      value: Awesome       # Custom header — credentials?
```
- Topology disclosure: the hardcoded host bypasses DNS and points directly at an internal service that should be resolved by name. If the pod spec leaks or the probe fails, attackers learn your internal IP topology.
- Credential exposure: custom probe headers often carry API keys, auth tokens, or internal service keys, hardcoded into the pod spec. These appear in kubectl describe pod for anyone with pod read access.
- SSRF primitive: an HTTP probe with a configurable host field and attacker-influenced endpoints can be used as a server-side request forgery primitive inside the cluster network.
```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    # No host field — probes the pod's own IP (default, correct behavior)
    # No custom headers — no credential exposure
  initialDelaySeconds: 15
  periodSeconds: 10
```
06 The Docker Socket Mount — Instant Cluster Takeover
Some legacy configurations mount the Docker socket into containers for CI/CD purposes (building images inside Kubernetes). This is the single most dangerous volume mount possible.
```yaml
volumeMounts:
- name: docker-sock
  mountPath: /var/run/docker.sock
volumes:
- name: docker-sock
  hostPath:
    path: /var/run/docker.sock
```
With the socket, an attacker can start a privileged container with the host filesystem mounted; read node credentials such as /etc/kubernetes/kubeconfig; access every other container on the node; and escape to the broader cluster network.

The Docker socket should never be mounted in a production container. If you are building images inside Kubernetes, use rootless Kaniko, Buildah, or img — none of which require socket access.
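Finding offenders is mechanical. The filter below flags any pod with a hostPath volume pointing at the socket; the sample JSON stands in for kubectl get pods --all-namespaces -o json:

```shell
cat <<'EOF' | jq -r '.items[] | select(any(.spec.volumes[]?; .hostPath.path? == "/var/run/docker.sock")) | "\(.metadata.namespace)/\(.metadata.name)"'
{"items": [
  {"metadata": {"namespace": "ci",   "name": "builder"},
   "spec": {"volumes": [{"name": "docker-sock", "hostPath": {"path": "/var/run/docker.sock"}}]}},
  {"metadata": {"namespace": "prod", "name": "clean"},
   "spec": {"volumes": [{"name": "tmp", "emptyDir": {}}]}}
]}
EOF
# prints: ci/builder
```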
07 The Complete Hardened Pod Spec
Every line of this spec is deliberate. Each setting maps to a specific attack surface closed.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
  namespace: production
  labels:
    app: my-app                          # Consistent labeling for NetworkPolicy/monitoring
spec:
  # ── Pod-level security ─────────────────────────
  serviceAccountName: my-app-sa          # Dedicated SA, NOT default
  automountServiceAccountToken: false    # No API token unless needed
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault               # Or Localhost with a custom profile
    seLinuxOptions:
      type: container_t
  # ── Container-level security ───────────────────
  containers:
  - name: app
    image: registry.company.com/app@sha256:a3ed95c...  # Digest, not tag
    imagePullPolicy: Always              # Prevent cache poisoning
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true       # Immutable container
      capabilities:
        drop: ["ALL"]                    # Drop every capability
        # add: ["NET_BIND_SERVICE"]      # Only if needed for port <1024
    # ── Secrets via files, never env vars ──────────
    env:
    - name: APP_CONFIG
      value: "/mnt/config/app.toml"
    volumeMounts:
    - name: secrets
      mountPath: /mnt/secrets
      readOnly: true
    - name: tmp
      mountPath: /tmp                    # Writable temp via emptyDir
    # ── Secure probes ──────────────────────────────
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080                       # No custom host, no custom headers
      initialDelaySeconds: 15
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
    # ── Resource limits (prevent noisy neighbor / cryptomining) ──
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"
        cpu: "500m"
  # ── Volumes ────────────────────────────────────
  volumes:
  - name: secrets
    secret:
      secretName: app-credentials
  - name: tmp
    emptyDir:
      sizeLimit: "100Mi"                 # Bounded temp storage
```
08 The Security Profiles Operator — Automating All of This
Manually writing and maintaining seccomp and SELinux profiles across a fleet of microservices doesn't scale. The Security Profiles Operator (SPO) from kubernetes-sigs automates the entire lifecycle:

- Recording: observe a running workload (via eBPF or audit logs) and generate a profile from the syscalls it actually makes.
- Distribution: install seccomp and SELinux profiles onto every node that needs them, and keep them in sync.
- Binding: attach profiles to workloads declaratively instead of editing each pod spec by hand.
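For example, a ProfileBinding attaches a recorded profile to every pod running a given image, with no per-deployment edits. A sketch: the binding name and image are assumptions, and the profile name matches the recording example earlier:

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileBinding
metadata:
  name: my-app-binding
spec:
  profileRef:
    kind: SeccompProfile
    name: my-app-recording
  image: registry.company.com/app:latest
```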
09 The Pod Security Checklist
Run this against every deployment in your cluster. Each failure is a gap between your security posture and an attacker’s minimum viable exploit.
```shell
# Flag pods missing critical security settings
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[] | select(
      (.spec.securityContext.seccompProfile == null) or
      any(.spec.containers[]; .securityContext.allowPrivilegeEscalation != false) or
      (.spec.automountServiceAccountToken != false)
    ) | "\(.metadata.namespace)/\(.metadata.name)"'
```
You can harden the API server. You can encrypt etcd. You can configure RBAC down to individual verbs on individual resources. And none of it matters if the pod spec gives the attacker a privileged container, the Docker socket, secrets in environment variables, and no seccomp profile.
The pod is the execution boundary. It’s where your code runs, where attackers land, and where the next escalation either succeeds or fails. The cluster perimeter is only as strong as the weakest pod spec running inside it. Secure it with the same rigor you apply to the control plane — it is the control plane for your workload’s blast radius.
K8SEC audits every pod spec in your cluster against this checklist and prioritizes findings by exploitability — not just compliance score.
Scan Your Cluster Free