Fix ResourceQuota Exceeded Errors
Debug resource quota violations preventing pod scheduling. Understand LimitRange defaults, ResourceQuota, and namespace management.
Quick Answer:
forbidden: exceeded quota means the namespace has a ResourceQuota and your pod's requests exceed the remaining budget. Check kubectl describe quota -n <ns> to see used vs. hard limits. Free up quota by deleting unused pods or increasing the quota.
The Problem
Pod creation fails with:
Error from server (Forbidden): pods "myapp-xyz" is forbidden:
exceeded quota: compute-quota, requested: cpu=500m,memory=512Mi,
used: cpu=3500m,memory=7Gi, limited: cpu=4,memory=8Gi
The Solution
Step 1: Check Current Quota Usage
kubectl describe quota -n myapp
# Name: compute-quota
# Resource Used Hard
# -------- ---- ----
# cpu 3500m 4 ← Only 500m remaining
# memory 7Gi 8Gi ← Only 1Gi remaining
# pods 7 20
# requests.cpu 3500m 4
# requests.memory 7Gi 8Gi
Step 2: Find What's Consuming Quota
# List all pods and their resource requests
kubectl get pods -n myapp -o json | jq -r '
.items[] |
.metadata.name as $name |
.spec.containers[] |
"\($name): cpu=\(.resources.requests.cpu // "none") memory=\(.resources.requests.memory // "none")"
'
Step 3: Free Up or Increase Quota
# Option A: Delete unused pods/deployments
kubectl delete deploy old-service -n myapp
# Option B: Reduce resource requests
kubectl set resources deploy myapp --requests=cpu=200m,memory=256Mi -n myapp
# Option C: Increase quota (requires cluster-admin)
kubectl patch resourcequota compute-quota -n myapp --type merge -p '{
"spec": {"hard": {"cpu": "8", "memory": "16Gi"}}
}'
LimitRange Defaults
If pods don't specify resources, a LimitRange may inject defaults that count against the quota:
kubectl describe limitrange -n myapp
# Default Request: cpu=250m, memory=256Mi
# Even pods WITHOUT explicit requests get these defaults; they consume quota
Common Issues
Quota Counts Terminating Pods
Pods in Terminating state still count. Force-delete stuck terminating pods to free quota.
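For example, stuck pods can be found and force-removed like this (the pod name and namespace are placeholders; force deletion skips graceful shutdown, so use it with care):

```shell
# List pods stuck in Terminating
kubectl get pods -n myapp | grep Terminating

# Force-delete one to release its quota immediately
kubectl delete pod stuck-pod-xyz -n myapp --grace-period=0 --force
```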
Quota Blocks Scaling
The HPA can't scale up because the quota is exhausted. Either increase the quota or set a conservative HPA maxReplicas.
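One way to confirm that quota is what's blocking scale-up (HPA and namespace names here are placeholders):

```shell
# Check whether desired replicas exceed current replicas
kubectl describe hpa myapp -n myapp

# The ReplicaSet's FailedCreate events show the "exceeded quota" rejections
kubectl get events -n myapp --field-selector reason=FailedCreate
```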
Best Practices
- Always set resource requests: without them, LimitRange defaults apply (which may be too high)
- Monitor quota usage: alert at 80% to prevent surprise failures
- Use separate quotas per team: prevents one team from consuming all resources
- Set both requests and limits quotas: requests.cpu for scheduling, limits.cpu for burst
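A quota that caps both dimensions might look like this (a sketch; the names and values are illustrative, not taken from the cluster above):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: myapp
spec:
  hard:
    requests.cpu: "4"      # what the scheduler reserves
    requests.memory: 8Gi
    limits.cpu: "8"        # burst ceiling across the namespace
    limits.memory: 16Gi
    pods: "20"
```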
Key Takeaways
- ResourceQuota caps total resource requests per namespace
- kubectl describe quota shows used vs. hard limits at a glance
- LimitRange injects default requests; these count against quota even if you didn't set them
- Free quota by deleting pods, reducing requests, or increasing the quota
