# Debug Pod Eviction Reasons
Investigate why pods were evicted. Check node pressure, resource limits, priority classes, and preemption events.
💡 Quick Answer: Check `kubectl describe pod <evicted-pod>` for the eviction reason, usually `The node was low on resource: memory` or `The node had condition: [DiskPressure]`. Then check `kubectl describe node <node>` for pressure conditions and resource allocation.
## The Problem
Pods are being evicted from nodes unexpectedly. They restart on other nodes but the instability disrupts services. You need to understand why evictions happen and prevent recurrence.
## The Solution

### Step 1: Find Evicted Pods

```bash
# List all evicted pods
kubectl get pods -A --field-selector status.phase=Failed | grep Evicted

# Get details on a specific eviction
kubectl describe pod <evicted-pod> -n <namespace>

# Look for:
#   Status:  Failed
#   Reason:  Evicted
#   Message: The node was low on resource: memory.
```

### Step 2: Check Node Pressure Conditions
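Checking nodes one at a time gets tedious on a large cluster. As a sketch (assuming `jq` is installed), the pipeline below lists every node reporting an active pressure condition; the here-doc sample stands in for live `kubectl get nodes -o json` output so it can be tried offline, and the node names in it are hypothetical:

```bash
# Stand-in for: kubectl get nodes -o json (hypothetical sample)
cat <<'EOF' > /tmp/nodes.json
{"items":[
  {"metadata":{"name":"worker-1"},
   "status":{"conditions":[
     {"type":"MemoryPressure","status":"False"},
     {"type":"DiskPressure","status":"True"}]}},
  {"metadata":{"name":"worker-2"},
   "status":{"conditions":[
     {"type":"MemoryPressure","status":"False"},
     {"type":"DiskPressure","status":"False"}]}}
]}
EOF

# List every node with an active pressure condition
jq -r '.items[]
       | .metadata.name as $n
       | .status.conditions[]
       | select(.status == "True" and (.type | endswith("Pressure")))
       | "\($n)\t\(.type)"' /tmp/nodes.json
# → worker-1    DiskPressure
```

On a real cluster, replace the here-doc with `kubectl get nodes -o json`.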
```bash
# Check current node conditions
kubectl describe node worker-2 | grep -A5 Conditions
#   MemoryPressure   False ...
#   DiskPressure     False ...
#   PIDPressure      False ...

# Check allocated vs allocatable
kubectl describe node worker-2 | grep -A10 "Allocated resources"
```

### Step 3: Understand Eviction Types
| Type | Trigger | Behavior |
|---|---|---|
| Node pressure | Memory/disk/PID below threshold | kubelet evicts lowest-priority pods |
| Preemption | Higher-priority pod needs resources | Scheduler evicts lower-priority pods |
| API-initiated | kubectl drain or HPA scale-down | Respects PDBs |
| OOM Kill | Container exceeds memory limit | Not technically an eviction; the kernel kills the process |
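The last row is worth verifying directly: an OOM-killed container never shows `Reason: Evicted`; the kill is recorded in the container's last terminated state instead. A sketch (assuming `jq`) of where to look, with a hypothetical sample in place of `kubectl get pod <pod> -o json`:

```bash
# Stand-in for: kubectl get pod <pod> -o json (hypothetical sample)
cat <<'EOF' > /tmp/pod.json
{"status":{"containerStatuses":[
  {"name":"app",
   "lastState":{"terminated":{"reason":"OOMKilled","exitCode":137}}}]}}
EOF

# An evicted pod has .status.reason == "Evicted"; an OOM kill shows up here:
jq -r '.status.containerStatuses[].lastState.terminated.reason // "not terminated"' /tmp/pod.json
# → OOMKilled
```

Exit code 137 (128 + SIGKILL) alongside `OOMKilled` is the classic kernel-kill signature.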
### Step 4: Set Proper Resource Requests and Limits

```yaml
resources:
  requests:
    memory: "256Mi"   # Scheduling guarantee
    cpu: "250m"
  limits:
    memory: "512Mi"   # Hard ceiling; OOMKilled if exceeded
    cpu: "1"
```

### Step 5: Configure Eviction Thresholds (if needed)
```bash
# Check the node's allocatable resources (capacity minus reserved and eviction thresholds)
kubectl get node worker-2 -o json | jq '.status.allocatable'

# Default hard eviction thresholds:
#   memory.available  < 100Mi
#   nodefs.available  < 10%
#   imagefs.available < 15%
```

### Step 6: Use Priority Classes
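Before adding a new PriorityClass, it helps to see what already exists, since pods with lower values are preempted and evicted first. On a live cluster `kubectl get priorityclass` shows this; the sketch below (assuming `jq`, with a hypothetical sample in place of real API output) sorts classes by value:

```bash
# Stand-in for: kubectl get priorityclass -o json (hypothetical sample)
cat <<'EOF' > /tmp/pc.json
{"items":[
  {"metadata":{"name":"high-priority"},"value":1000000},
  {"metadata":{"name":"system-cluster-critical"},"value":2000000000},
  {"metadata":{"name":"batch-low"},"value":100}
]}
EOF

# Lowest value first: these pods are preempted and evicted before the others
jq -r '.items | sort_by(.value) | .[] | "\(.value)\t\(.metadata.name)"' /tmp/pc.json
```

The built-in `system-cluster-critical` and `system-node-critical` classes sit near the top of the range; user-defined classes must stay at or below one billion.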
```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "Critical workloads, evicted last"
---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      priorityClassName: high-priority
```

## Common Issues
### Memory Pressure Eviction Loop
Pods evicted for memory → rescheduled → same node → evicted again. Fix: set proper `requests.memory` so the scheduler doesn't overcommit.
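One way to confirm the overcommitment behind such a loop is to total the memory requests scheduled onto the node and compare against its allocatable memory. A minimal sketch assuming `jq` and that every request is a plain `Mi` quantity (real clusters mix `Mi`, `Gi`, and bare bytes, which needs fuller unit handling); the sample stands in for `kubectl get pods -A -o json --field-selector spec.nodeName=worker-2`:

```bash
# Hypothetical sample of pod memory requests on one node (Mi quantities only)
cat <<'EOF' > /tmp/reqs.json
{"items":[
 {"spec":{"containers":[{"resources":{"requests":{"memory":"256Mi"}}}]}},
 {"spec":{"containers":[{"resources":{"requests":{"memory":"512Mi"}}},
                        {"resources":{"requests":{"memory":"128Mi"}}}]}}
]}
EOF

# Total requested Mi on the node
jq '[.items[].spec.containers[].resources.requests.memory
     | sub("Mi$";"") | tonumber] | add' /tmp/reqs.json
# → 896
```

If the total approaches or exceeds the node's allocatable memory from Step 2, the node is overcommitted.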
### Disk Pressure from Container Logs
```bash
# Check log sizes on a node (OpenShift; on vanilla Kubernetes use a debug pod or SSH)
oc debug node/worker-2 -- chroot /host du -sh /var/log/containers/* | sort -rh | head -10
```

### Evicted Pods Accumulating
```bash
# Clean up evicted pods
kubectl delete pods -A --field-selector status.phase=Failed
```

## Best Practices
- Always set memory requests: prevents the overcommitment that leads to memory pressure
- Use PriorityClasses for critical workloads: they're evicted last
- Monitor node resource usage: alert before pressure thresholds are hit
- Set resource limits: prevents a single pod from consuming all node resources
- Clean up evicted pods periodically: they don't auto-delete
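The last point can be automated. A hedged sketch of an in-cluster CronJob that deletes evicted pods nightly; the name, namespace, schedule, image, and the `pod-cleaner` ServiceAccount (which needs RBAC permission to list and delete pods cluster-wide) are all assumptions, not part of the original recipe:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: evicted-pod-cleanup        # hypothetical name
  namespace: kube-system           # assumption: adjust to taste
spec:
  schedule: "0 3 * * *"            # nightly at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-cleaner    # assumed SA with pod list/delete RBAC
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: bitnami/kubectl:latest    # assumption: any image with kubectl works
            command:
            - /bin/sh
            - -c
            - kubectl delete pods -A --field-selector status.phase=Failed
```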
## Key Takeaways
- Pod eviction is the kubelet's response to node resource pressure
- Check `kubectl describe pod` for the eviction reason and `kubectl describe node` for current pressure
- Proper resource requests prevent overcommitment
- PriorityClasses control eviction order: the highest priority is evicted last
- OOMKill (kernel) and eviction (kubelet) are different mechanisms
