CrashLoopBackOff: How to Fix in Kubernetes
Fix CrashLoopBackOff in Kubernetes pods. Learn why pods crash loop, systematic debugging with kubectl logs and describe, and solutions for common causes.
💡 Quick Answer: To fix CrashLoopBackOff, run `kubectl logs <pod> --previous` to see why the container crashed. Check the exit code: 137 means out of memory (increase `resources.limits.memory`), 1 means application error (check configs/env vars), 139 means segfault (debug the app). Use `kubectl describe pod <pod>` to see events and last state.
The Problem
Your pod is in CrashLoopBackOff state, repeatedly crashing and restarting with increasing backoff delays.
Understanding CrashLoopBackOff
CrashLoopBackOff means:
- The container starts
- The container crashes or exits
- Kubernetes restarts it
- It crashes again
- Kubernetes increases the wait time before the next restart
Backoff delays: 10s → 20s → 40s → … → 5 minutes (max)
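The doubling-with-cap behavior can be sketched in a few lines of shell (the 10s base and 300s cap match the delays listed above):

```shell
#!/bin/sh
# Print the CrashLoopBackOff delay sequence: double each time, cap at 300s.
delay=10
cap=300
for attempt in 1 2 3 4 5 6 7; do
  echo "restart ${attempt}: wait ${delay}s"
  delay=$((delay * 2))
  if [ "$delay" -gt "$cap" ]; then delay=$cap; fi
done
```

Note that the counter resets only after the container has run successfully for a while, so a pod stuck in this loop settles at one restart attempt every 5 minutes.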
Step 1: Check Pod Status
```shell
# Get pod status
kubectl get pods

# Output example:
# NAME    READY   STATUS             RESTARTS   AGE
# myapp   0/1     CrashLoopBackOff   5          10m
```

Step 2: Describe the Pod

```shell
kubectl describe pod myapp
```

Look for:
- Events: Shows restart history and reasons
- Last State: Exit code and reason
- Containers: Image, command, and configuration
Key exit codes:
| Code | Meaning |
|---|---|
| 0 | Graceful exit (shouldn't restart) |
| 1 | Application error |
| 137 | OOMKilled (out of memory) |
| 139 | Segmentation fault |
| 143 | SIGTERM (graceful shutdown) |
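There is a simple rule behind the signal-related rows: when a container is killed by a signal, its exit code is 128 plus the signal number. A quick check with the shell's `kill -l` confirms the two codes from the table:

```shell
#!/usr/bin/env bash
# Exit codes above 128 mean "killed by signal (code - 128)".
for code in 137 143; do
  sig=$((code - 128))
  echo "exit ${code} = 128 + ${sig} (SIG$(kill -l "$sig"))"
done
# exit 137 = 128 + 9  (SIGKILL, e.g. the kernel OOM killer)
# exit 143 = 128 + 15 (SIGTERM, e.g. a normal pod shutdown)
```

This is why 137 maps to OOMKilled: the OOM killer sends SIGKILL (signal 9), and 128 + 9 = 137.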
Step 3: Check Container Logs

```shell
# Current container logs
kubectl logs myapp

# Previous container logs (before crash)
kubectl logs myapp --previous

# Follow logs
kubectl logs myapp -f

# Specific container in multi-container pod
kubectl logs myapp -c container-name
```

Common Causes and Solutions
1. Application Error (Exit Code 1)

Symptoms:

```
Last State: Terminated
  Exit Code: 1
```

Solutions:
- Check logs for error messages
- Verify environment variables
- Check configuration files
- Test the image locally:

```shell
docker run -it myapp:tag
```

2. OOMKilled (Exit Code 137)
Symptoms:

```
Last State: Terminated
  Reason: OOMKilled
  Exit Code: 137
```

Solution: Increase the memory limit:

```yaml
resources:
  limits:
    memory: "512Mi"  # Increase this
```

3. Missing Configuration
Symptoms:

```
Error: ConfigMap "myapp-config" not found
```

Solution: Create the missing ConfigMap/Secret:

```shell
kubectl create configmap myapp-config --from-file=config.yaml
```

4. Image Pull Error
Symptoms:

```
Warning  Failed  ImagePullBackOff
```

Solution: Check the image name and registry credentials:

```shell
# Verify image exists
docker pull myapp:tag

# Create registry secret
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=user \
  --docker-password=password
```

5. Failing Health Checks
Symptoms:

```
Liveness probe failed: connection refused
```

Solution: Fix or adjust the probe configuration:

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30  # Give app time to start
  periodSeconds: 10
  failureThreshold: 3
```

6. Permission Errors
Symptoms:

```
Error: permission denied
```

Solution: Fix the security context:

```yaml
securityContext:
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000
```

7. Wrong Command/Entrypoint
Symptoms: Container exits immediately.

Solution: Verify the command:

```yaml
containers:
- name: myapp
  image: myapp:tag
  command: ["/bin/sh", "-c"]
  args: ["./start.sh"]  # Ensure this script exists
```

Debug with Ephemeral Containers

For running pods (Kubernetes 1.25+):

```shell
kubectl debug myapp -it --image=busybox --target=myapp
```

Debug by Running a Shell
Replace the command temporarily:

```yaml
containers:
- name: myapp
  image: myapp:tag
  command: ["/bin/sh"]
  args: ["-c", "sleep 3600"]  # Keep container running
```

Then exec into it:

```shell
kubectl exec -it myapp -- /bin/sh
```

Quick Debugging Checklist
```shell
# 1. Get pod events
kubectl describe pod myapp | grep -A 20 Events

# 2. Get logs
kubectl logs myapp --previous

# 3. Check exit code
kubectl get pod myapp -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'

# 4. Check resource usage
kubectl top pod myapp

# 5. Check events in namespace
kubectl get events --sort-by=.metadata.creationTimestamp
```

Prevention Tips
- Always set resource limits to prevent OOMKills
- Use proper health checks with adequate delays
- Test images locally before deploying
- Use init containers for dependencies
- Log to stdout/stderr for easy debugging
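For the init-container tip, a minimal sketch of waiting on a dependency before the main container starts (the `postgres` service name, port, and image tags here are illustrative assumptions, not values from this recipe):

```yaml
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    # Block pod startup until the database port accepts connections,
    # so the app container never crash-loops on a missing dependency.
    command: ["sh", "-c", "until nc -z postgres 5432; do echo waiting for db; sleep 2; done"]
  containers:
  - name: myapp
    image: myapp:tag
```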
One-Liner Debug Commands
# Get all failing pods
kubectl get pods --field-selector=status.phase=Failed
# Get pods with restarts
kubectl get pods -o wide | awk '$5 > 0'
# Watch events
kubectl get events -w
# Get last termination reason
kubectl get pod myapp -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'Key Takeaways
- Check logs first with `kubectl logs --previous`
- Exit code 137 = memory issue
- Exit code 1 = application error
- Use `kubectl describe pod` for events
- Debug by overriding the command to sleep
📘 Go Further with Kubernetes Recipes

Love this recipe? There's so much more! This is just one of 100+ hands-on recipes in our comprehensive Kubernetes Recipes book.

Inside the book, you'll master:
- ✓ Production-ready deployment strategies
- ✓ Advanced networking and security patterns
- ✓ Observability, monitoring, and troubleshooting
- ✓ Real-world best practices from industry experts

"The practical, recipe-based approach made complex Kubernetes concepts finally click for me."

📖 Get Your Copy Now → Start building production-grade Kubernetes skills today!
Frequently Asked Questions
What is CrashLoopBackOff in Kubernetes?
CrashLoopBackOff means Kubernetes is trying to restart a container that keeps crashing. The kubelet uses exponential backoff (10s, 20s, 40s, up to 5 minutes) between restart attempts. Common causes include missing environment variables, failed health checks, incorrect commands, and application errors.
How do I fix CrashLoopBackOff?
- Check logs: `kubectl logs <pod-name> --previous` (use `--previous` to see crash logs)
- Describe the pod: `kubectl describe pod <pod-name>` for events and exit codes
- Verify the container image and command are correct
- Check that environment variables and ConfigMaps/Secrets exist
- Ensure liveness probes aren't killing healthy pods (increase `initialDelaySeconds`)
What's the difference between CrashLoopBackOff and Error?
Error means the container exited with a non-zero exit code on its most recent attempt. CrashLoopBackOff means it has crashed repeatedly and Kubernetes is waiting before retrying. CrashLoopBackOff always follows repeated Error states.
How long does CrashLoopBackOff last?
The backoff delay increases exponentially: 10s → 20s → 40s → 80s → 160s → 300s (5 minutes max). Kubernetes retries indefinitely until the issue is fixed or the pod is deleted.
Can CrashLoopBackOff fix itself?
Yes, if the root cause is transient (e.g., a database that was temporarily down). Once the dependency becomes available, the next restart succeeds.
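This transient-failure pattern is easy to reproduce locally. A sketch where a stand-in "container" exits non-zero until its dependency appears, after which the next "restart" succeeds (a file stands in for a real dependency such as a database):

```shell
#!/bin/sh
# Simulate a container whose startup fails until a dependency exists.
dep="/tmp/dep-ready.$$"
rm -f "$dep"

start_app() {
  # Fails (exit 1) until the dependency file appears -- like a DB coming up.
  if [ ! -f "$dep" ]; then
    echo "dependency missing, crashing"
    return 1
  fi
  echo "app started"
}

start_app || echo "restart 1 failed"   # crashes: dependency not there yet
touch "$dep"                           # dependency becomes available
start_app && echo "restart 2 succeeded"
rm -f "$dep"
```

In the cluster, the "restarts" are the kubelet's backoff retries; once the dependency is reachable, the container stays up and the counter eventually resets.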
How do I debug CrashLoopBackOff when there are no logs?
If `kubectl logs` shows nothing, the container crashes before writing any output. Check events with `kubectl describe pod`, run the image interactively with `kubectl run debug --rm -it --image=<image> -- /bin/sh`, or check for OOM kills in the pod status.
See also: OOMKilled Troubleshooting, Pod Pending, ImagePullBackOff
