Troubleshooting • Beginner • ⏱ 15 minutes • K8s 1.28+

CrashLoopBackOff: How to Fix in Kubernetes

Fix CrashLoopBackOff in Kubernetes pods. Learn why pods crash loop, systematic debugging with kubectl logs and describe, and solutions for common causes.

By Luca Berton • 📖 5 min read

💡 Quick Answer: To fix CrashLoopBackOff, run kubectl logs <pod> --previous to see why the container crashed. Check the exit code: 137 usually means the container was OOMKilled (increase resources.limits.memory), 1 means an application error (check configs/env vars), 139 means a segfault (debug the app). Use kubectl describe pod <pod> to see events and the last state.

The Problem

Your pod is in CrashLoopBackOff state, repeatedly crashing and restarting with increasing backoff delays.

Understanding CrashLoopBackOff

CrashLoopBackOff means:

  1. The container starts
  2. The container crashes or exits
  3. Kubernetes restarts it
  4. It crashes again
  5. Kubernetes increases the wait time before the next restart

Backoff delays: 10s → 20s → 40s → … → 5 minutes (max)
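The schedule above can be sketched in plain shell (10-second base, doubling after each crash, 300-second cap, matching the documented kubelet behavior):

```shell
# Compute the kubelet's restart backoff schedule: 10s base,
# doubled after every crash, capped at 300s (5 minutes).
delay=10
schedule=""
for attempt in 1 2 3 4 5 6; do
  schedule="$schedule ${delay}s"
  delay=$((delay * 2))
  [ "$delay" -gt 300 ] && delay=300
done
echo "waits before each restart:$schedule"
# → waits before each restart: 10s 20s 40s 80s 160s 300s
```

In practice this means a pod that has crashed a handful of times can sit idle for up to five minutes between attempts, which is why a CrashLoopBackOff pod often looks "stuck".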

Step 1: Check Pod Status

# Get pod status
kubectl get pods

# Output example:
# NAME    READY   STATUS             RESTARTS   AGE
# myapp   0/1     CrashLoopBackOff   5          10m

Step 2: Describe the Pod

kubectl describe pod myapp

Look for:

  • Events: Shows restart history and reasons
  • Last State: Exit code and reason
  • Containers: Image, command, and configuration

Key exit codes:

Code | Meaning
0    | Graceful exit (still restarted when restartPolicy is Always, which can loop)
1    | Application error
137  | OOMKilled (out of memory)
139  | Segmentation fault
143  | SIGTERM (graceful shutdown)
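Codes above 128 follow the convention 128 + signal number: 137 is 128 + 9 (SIGKILL, typically the OOM killer) and 143 is 128 + 15 (SIGTERM). A small helper sketching the table as code (explain_exit_code is a hypothetical name):

```shell
# Translate a container exit code into a likely cause.
# Codes above 128 encode 128 + signal number.
explain_exit_code() {
  case "$1" in
    0)   echo "graceful exit" ;;
    1)   echo "application error" ;;
    137) echo "OOMKilled / SIGKILL (128 + 9)" ;;
    139) echo "segmentation fault (128 + 11)" ;;
    143) echo "SIGTERM (128 + 15)" ;;
    *)   echo "unknown (check application docs)" ;;
  esac
}

explain_exit_code 137
# → OOMKilled / SIGKILL (128 + 9)
```

Feed it the value from kubectl get pod myapp -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}' to decode a real pod's last crash.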

Step 3: Check Container Logs

# Current container logs
kubectl logs myapp

# Previous container logs (before crash)
kubectl logs myapp --previous

# Follow logs
kubectl logs myapp -f

# Specific container in multi-container pod
kubectl logs myapp -c container-name

Common Causes and Solutions

1. Application Error (Exit Code 1)

Symptoms:

Last State:     Terminated
  Exit Code:    1

Solutions:

  • Check logs for error messages
  • Verify environment variables
  • Check configuration files
  • Test the image locally:
docker run -it myapp:tag

2. OOMKilled (Exit Code 137)

Symptoms:

Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137

Solution: Increase memory limit:

resources:
  limits:
    memory: "512Mi"  # Increase this
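Raising the limit alone only moves the OOM ceiling; pairing it with a request tells the scheduler how much memory the pod actually needs so it lands on a node that has it. A fuller fragment (all values are illustrative):

```yaml
# Hypothetical container resources: requests guide scheduling,
# limits set the OOM-kill ceiling.
resources:
  requests:
    memory: "256Mi"
    cpu: "100m"
  limits:
    memory: "512Mi"
```

If the application's real working set exceeds any limit you can afford, the fix belongs in the application (memory leak, cache sizing), not the manifest.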

3. Missing Configuration

Symptoms:

Error: ConfigMap "myapp-config" not found

Solution: Create the missing ConfigMap/Secret:

kubectl create configmap myapp-config --from-file=config.yaml

4. Image Pull Error

Symptoms:

Warning  Failed     ImagePullBackOff

Solution: Check image name and registry credentials:

# Verify image exists
docker pull myapp:tag

# Create registry secret
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=user \
  --docker-password=password
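Creating the secret is not enough by itself: the pod (or its service account) must reference it. A pod spec fragment using the regcred secret from above (image path is an assumption):

```yaml
# Pod spec fragment: tell the kubelet which secret to use
# when pulling from the private registry.
spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: myapp
    image: registry.example.com/myapp:tag
```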

5. Failing Health Checks

Symptoms:

Liveness probe failed: connection refused

Solution: Fix or adjust probe configuration:

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30  # Give app time to start
  periodSeconds: 10
  failureThreshold: 3
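For applications with long or variable startup times, a startupProbe is often a better fix than inflating initialDelaySeconds: the liveness probe is disabled until the startup probe succeeds, so a slow boot never triggers a restart. A sketch (path and port assume the same /health endpoint as above):

```yaml
# Startup probe: allow up to 30 x 10s = 5 minutes to start.
# The liveness probe only takes over once this succeeds.
startupProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 10
  failureThreshold: 30
```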

6. Permission Errors

Symptoms:

Error: permission denied

Solution: Fix security context:

securityContext:
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000

7. Wrong Command/Entrypoint

Symptoms: Container exits immediately.

Solution: Verify the command:

containers:
- name: myapp
  image: myapp:tag
  command: ["/bin/sh", "-c"]
  args: ["./start.sh"]  # Ensure this script exists
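In a pod spec, command replaces the image's ENTRYPOINT and args replaces its CMD; together they form the argv the kubelet hands to the container runtime. For the spec above that is /bin/sh -c ./start.sh, and you can reproduce the same shape locally (echo stands in for the real start.sh):

```shell
# The spec above resolves to the argv: /bin/sh -c ./start.sh
# Reproduce the same shape locally, with echo standing in for start.sh:
result=$(/bin/sh -c 'echo app started')
echo "$result"
# → app started
```

If the real entrypoint script is missing or not executable inside the image, the container exits immediately, which is exactly the symptom described above.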

Debug with Ephemeral Containers

For pods whose target container is still running (ephemeral containers are available in Kubernetes 1.25+):

kubectl debug myapp -it --image=busybox --target=myapp

Debug by Running a Shell

Replace the command temporarily:

containers:
- name: myapp
  image: myapp:tag
  command: ["/bin/sh"]
  args: ["-c", "sleep 3600"]  # Keep container running

Then exec into it:

kubectl exec -it myapp -- /bin/sh

Quick Debugging Checklist

# 1. Get pod events
kubectl describe pod myapp | grep -A 20 Events

# 2. Get logs
kubectl logs myapp --previous

# 3. Check exit code
kubectl get pod myapp -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'

# 4. Check resource usage
kubectl top pod myapp

# 5. Check events in namespace
kubectl get events --sort-by=.metadata.creationTimestamp

Prevention Tips

  1. Always set resource limits to prevent OOMKills
  2. Use proper health checks with adequate delays
  3. Test images locally before deploying
  4. Use init containers for dependencies
  5. Log to stdout/stderr for easy debugging

One-Liner Debug Commands

# Get pods whose phase is Failed (note: CrashLoopBackOff pods usually stay in the Running phase)
kubectl get pods --field-selector=status.phase=Failed

# Get pods with at least one restart (RESTARTS is the 4th column)
kubectl get pods -o wide | awk 'NR>1 && $4 > 0'

# Watch events
kubectl get events -w

# Get last termination reason
kubectl get pod myapp -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

Key Takeaways

  • Check logs first with kubectl logs --previous
  • Exit code 137 = memory issue
  • Exit code 1 = application error
  • Use kubectl describe pod for events
  • Debug by overriding the command to sleep

📘 Go Further with Kubernetes Recipes

Love this recipe? There's so much more! This is just one of 100+ hands-on recipes in our comprehensive Kubernetes Recipes book.

Inside the book, you'll master:

  • ✅ Production-ready deployment strategies
  • ✅ Advanced networking and security patterns
  • ✅ Observability, monitoring, and troubleshooting
  • ✅ Real-world best practices from industry experts

"The practical, recipe-based approach made complex Kubernetes concepts finally click for me."

👉 Get Your Copy Now – Start building production-grade Kubernetes skills today!

Frequently Asked Questions

What is CrashLoopBackOff in Kubernetes?

CrashLoopBackOff means Kubernetes is trying to restart a container that keeps crashing. The kubelet uses exponential backoff (10s, 20s, 40s, up to 5 minutes) between restart attempts. Common causes include missing environment variables, failed health checks, incorrect commands, and application errors.

How do I fix CrashLoopBackOff?

  1. Check logs: kubectl logs <pod-name> --previous (use --previous to see crash logs)
  2. Describe the pod: kubectl describe pod <pod-name> for events and exit codes
  3. Verify the container image and command are correct
  4. Check environment variables and ConfigMaps/Secrets exist
  5. Ensure liveness probes aren't killing healthy pods (increase initialDelaySeconds)

What's the difference between CrashLoopBackOff and Error?

Error means the container exited with a non-zero exit code on its most recent attempt. CrashLoopBackOff means it has crashed repeatedly and Kubernetes is waiting before retrying. CrashLoopBackOff always follows repeated Error states.

#troubleshooting #crashloopbackoff #debugging #logs #pods
Written by Luca Berton

Principal Solutions Architect specializing in Kubernetes, AI/GPU infrastructure, and cloud-native platforms. Author of Kubernetes Recipes and creator of CopyPasteLearn courses.


Want More Kubernetes Recipes?

This recipe is from Kubernetes Recipes, our 750-page practical guide with hundreds of production-ready patterns.
