Fix PV Stuck in Terminating State
Resolve PVs and PVCs stuck in Terminating status. Remove finalizers safely, check volume detachment, and handle storage issues.
💡 Quick Answer: PVs/PVCs stuck in Terminating usually have a finalizer preventing deletion. Check kubectl get pvc <name> -o jsonpath='{.metadata.finalizers}'. If the volume is safely detached, remove the finalizer with kubectl patch pvc <name> -p '{"metadata":{"finalizers":null}}'.
The Problem
You deleted a PVC or PV but it stays in Terminating status indefinitely. New PVCs can't bind to the underlying storage, namespace deletions hang, and storage capacity appears consumed by ghost volumes.
The Solution
Step 1: Identify What's Stuck
# Find stuck PVCs
kubectl get pvc -A | grep Terminating
# Find stuck PVs
kubectl get pv | grep Terminating
# Check finalizers on a stuck PVC
kubectl get pvc my-data -n myapp -o json | jq '.metadata.finalizers'
# ["kubernetes.io/pvc-protection"] ← This finalizer prevents deletion while a pod mounts it

Step 2: Check If a Pod Is Still Using It
# The pvc-protection finalizer stays until no pod references the PVC
kubectl get pods -n myapp -o json | jq -r '
.items[] |
select(.spec.volumes[]?.persistentVolumeClaim.claimName == "my-data") |
.metadata.name
'
# my-app-0 ← This pod still mounts the PVC; delete the pod first

Step 3: Remove the Finalizer (If Safe)
# Only after confirming no pod uses the PVC:
kubectl patch pvc my-data -n myapp --type merge -p '{"metadata":{"finalizers":null}}'
# For PVs:
kubectl patch pv pv-my-data --type merge -p '{"metadata":{"finalizers":null}}'

Step 4: Handle Stuck PV with External Storage
# If PV is stuck because the storage backend can't detach:
kubectl describe pv pv-my-data | grep -A5 "Events:"
# Warning VolumeFailedDetach detach volume failed: rpc error: ...
# Force detach the volume attachment
kubectl get volumeattachment | grep pv-my-data
kubectl delete volumeattachment <attachment-name> --force --grace-period=0

Decision flow:

graph TD
A[PVC stuck Terminating] --> B{Finalizer present?}
B -->|No| C[Check PV status]
B -->|Yes| D{Pod still using PVC?}
D -->|Yes| E[Delete the pod first]
D -->|No| F[Patch to remove finalizer]
E --> F
F --> G[PVC deletes]
C --> H{VolumeAttachment stuck?}
H -->|Yes| I[Force delete VolumeAttachment]
H -->|No| J[Check storage provider logs]

Common Issues
PV Stuck After Node Deletion
The volume was attached to a node that no longer exists. Force-delete the VolumeAttachment object.
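One way to find the orphaned attachments is to compare each VolumeAttachment's node against the nodes that still exist. A minimal sketch, assuming jq is installed; the helper name orphaned_attachments is illustrative:

```shell
# List VolumeAttachments whose .spec.nodeName no longer matches any
# existing node, so they can be reviewed and force-deleted.
orphaned_attachments() {
  local nodes
  # Space-separated list of nodes that currently exist
  nodes=$(kubectl get nodes -o jsonpath='{.items[*].metadata.name}')
  # Emit the name of every attachment bound to a node not in that list
  kubectl get volumeattachment -o json | jq -r --arg nodes "$nodes" '
    ($nodes | split(" ")) as $live
    | .items[]
    | select(.spec.nodeName as $n | ($live | index($n)) == null)
    | .metadata.name'
}

# Review the output first, then force-delete each orphan:
# for va in $(orphaned_attachments); do
#   kubectl delete volumeattachment "$va" --force --grace-period=0
# done
```

The deletion loop is left commented out deliberately: inspect the list before forcing anything.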
ReclaimPolicy: Retain Prevents PV Cleanup
kubectl get pv pv-my-data -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
# Retain ← PV won't be deleted even after PVC is gone
# Change to Delete if you want automatic cleanup:
kubectl patch pv pv-my-data -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

CSI Driver Pod Down
If the CSI driver managing the volume is unhealthy, detach/delete operations fail silently. Restart the CSI driver pods.
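A restart usually means rolling both the controller Deployment and the node-plugin DaemonSet. A sketch using the AWS EBS CSI driver's default names (ebs-csi-controller, ebs-csi-node in kube-system) as placeholders; substitute your driver's workload names:

```shell
# Roll the CSI controller and node plugin so pending detach/delete
# operations are retried, then wait for the controller to settle.
restart_csi_driver() {
  kubectl -n kube-system rollout restart deployment/ebs-csi-controller
  kubectl -n kube-system rollout restart daemonset/ebs-csi-node
  kubectl -n kube-system rollout status deployment/ebs-csi-controller --timeout=120s
}
```

After the rollout, re-check the stuck PV's events; the external attacher should retry the detach on its own.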
Best Practices
- Don't remove finalizers blindly: always verify no pod is using the volume first
- Check VolumeAttachments before removing PV finalizers: orphaned attachments cause data corruption
- Use reclaimPolicy: Delete for ephemeral storage, Retain for data you want to keep
- Monitor CSI driver health: unhealthy drivers cause stuck volumes
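The first two practices can be combined into a single pre-flight check before touching finalizers. A sketch, assuming jq is installed; the function name pvc_safe_to_patch and the example arguments are illustrative:

```shell
# Report whether a PVC is safe to patch: no pod mounts it, and no
# VolumeAttachment still references its bound PV. Returns non-zero if unsafe.
pvc_safe_to_patch() {
  local pvc="$1" ns="$2"
  local pods pv atts
  # Any pod in the namespace still mounting the PVC?
  pods=$(kubectl get pods -n "$ns" -o json | jq -r --arg pvc "$pvc" '
    .items[]
    | select(.spec.volumes[]?.persistentVolumeClaim.claimName == $pvc)
    | .metadata.name')
  # The PV bound to this PVC, and any attachment still referencing it
  pv=$(kubectl get pvc "$pvc" -n "$ns" -o jsonpath='{.spec.volumeName}')
  atts=$(kubectl get volumeattachment -o json | jq -r --arg pv "$pv" '
    .items[]
    | select(.spec.source.persistentVolumeName == $pv)
    | .metadata.name')
  if [ -z "$pods" ] && [ -z "$atts" ]; then
    echo "safe: no pods or attachments reference $pvc"
  else
    echo "NOT safe: pods=[$pods] attachments=[$atts]"
    return 1
  fi
}

# Usage: pvc_safe_to_patch my-data myapp
```

Only patch the finalizer away once this reports safe; a lingering attachment means the backend may still be writing to the volume.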
Key Takeaways
- The kubernetes.io/pvc-protection finalizer prevents deletion while a pod mounts the PVC
- Delete the pod first, then the finalizer clears automatically
- Force-removing finalizers is safe only after confirming detachment
- VolumeAttachment objects can also block PV deletion: check and force-delete if needed
