# OpenClaw with Persistent Storage

Configure persistent storage for OpenClaw workspaces using PVCs, StorageClasses, and backup strategies in Kubernetes.

💡 Quick Answer: Use PersistentVolumeClaims with appropriate StorageClasses to persist OpenClaw workspace data (memory files, skills, configuration) across pod restarts.
## The Problem

OpenClaw stores its identity, memory, and workspace files under `~/.openclaw`. Without persistent storage, a pod restart or rescheduling wipes all agent memory and configuration, resetting the agent to a blank state.
## The Solution

Mount a PVC at the OpenClaw workspace directory, backed by a StorageClass suited to your environment.

### Basic PVC Setup
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openclaw-workspace
  namespace: openclaw
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3-encrypted
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openclaw
  namespace: openclaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openclaw
  template:
    metadata:
      labels:
        app: openclaw
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: openclaw
          image: ghcr.io/openclaw/openclaw:latest
          volumeMounts:
            - name: workspace
              mountPath: /home/node/.openclaw
            - name: tmp
              mountPath: /tmp
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: "2"
              memory: 2Gi
      volumes:
        - name: workspace
          persistentVolumeClaim:
            claimName: openclaw-workspace
        - name: tmp
          emptyDir:
            sizeLimit: 1Gi
```
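After applying the manifests, a quick smoke test confirms the claim bound and the workspace is writable by the non-root user. A sketch of the checks, assuming the namespace, Deployment, and mount path from the manifests above:

```shell
# Confirm the PVC reached the Bound phase
kubectl get pvc openclaw-workspace -n openclaw

# Write a marker file into the mounted workspace and read it back;
# this fails with "Permission denied" if fsGroup does not match the container UID
kubectl exec -n openclaw deploy/openclaw -- \
  sh -c 'echo ok > /home/node/.openclaw/.write-test && cat /home/node/.openclaw/.write-test'
```

If the second command prints `ok`, memory files written by the agent will survive a pod restart.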
## StorageClass Selection Guide

```yaml
# AWS EBS gp3 - a good default for single-AZ clusters
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-encrypted
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
  throughput: "125"
  iops: "3000"
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
# NFS - for multi-replica ReadWriteMany access
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-openclaw
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.storage.svc.cluster.local
  share: /exports/openclaw
reclaimPolicy: Retain
mountOptions:
  - nfsvers=4.1
  - hard
  - timeo=600
```
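Because `allowVolumeExpansion: true` is set on the gp3 class, the workspace claim can be grown in place without recreating it. A hedged sketch of the resize, reusing the PVC name from the earlier manifest (the 20Gi target is illustrative):

```shell
# Request a larger size; the EBS CSI driver expands the volume online
kubectl patch pvc openclaw-workspace -n openclaw \
  --type merge -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Watch the PVC status until the filesystem resize completes
kubectl get pvc openclaw-workspace -n openclaw -w
```

Note that PVCs can only be grown, never shrunk.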
## Volume Snapshot for Quick Backups

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: openclaw-snap-20260226
  namespace: openclaw
spec:
  volumeSnapshotClassName: csi-aws-vsc
  source:
    persistentVolumeClaimName: openclaw-workspace
---
# Restore from snapshot
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openclaw-workspace-restored
  namespace: openclaw
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3-encrypted
  resources:
    requests:
      storage: 10Gi
  dataSource:
    name: openclaw-snap-20260226
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```
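Snapshots are cut asynchronously by the CSI snapshotter, so it is worth confirming readiness before restoring from one. A sketch, assuming the snapshot name above (the timeout value is arbitrary):

```shell
# Block until the snapshot controller marks the snapshot usable
kubectl wait volumesnapshot/openclaw-snap-20260226 -n openclaw \
  --for=jsonpath='{.status.readyToUse}'=true --timeout=120s
```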
## CronJob for Scheduled Snapshots

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: openclaw-snapshot
  namespace: openclaw
spec:
  schedule: "0 2 * * *" # Daily at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: snapshot-creator
          containers:
            - name: snapshot
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - |
                  DATE=$(date +%Y%m%d)
                  cat <<EOF | kubectl apply -f -
                  apiVersion: snapshot.storage.k8s.io/v1
                  kind: VolumeSnapshot
                  metadata:
                    name: openclaw-snap-${DATE}
                    namespace: openclaw
                  spec:
                    volumeSnapshotClassName: csi-aws-vsc
                    source:
                      persistentVolumeClaimName: openclaw-workspace
                  EOF
                  # Prune old snapshots, keeping only the newest 7
                  kubectl get volumesnapshot -n openclaw \
                    --sort-by=.metadata.creationTimestamp \
                    -o name | head -n -7 | xargs -r kubectl delete -n openclaw
          restartPolicy: OnFailure
```
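The pruning pipeline deserves a closer look: because kubectl sorts by `creationTimestamp` ascending (oldest first), `head -n -7` emits every name except the newest seven. A local sketch of that selection logic with hypothetical snapshot names (requires GNU `head`):

```shell
# Ten snapshots, oldest (snap-01) to newest (snap-10)
printf 'snap-%02d\n' 1 2 3 4 5 6 7 8 9 10 |
  head -n -7   # drop the last 7 lines, i.e. keep the newest 7 snapshots
# Prints snap-01, snap-02, snap-03 - the three oldest, which get deleted
```

With a daily schedule, keeping the newest seven snapshots is equivalent to a 7-day retention window.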
```mermaid
graph TD
  A[OpenClaw Pod] --> B[PVC: workspace]
  B --> C[StorageClass: gp3-encrypted]
  C --> D[EBS Volume]
  D --> E[Daily VolumeSnapshot]
  E --> F[Retain 7 days]
  B --> G[workspace directory]
  G --> H[MEMORY.md]
  G --> I[SOUL.md]
  G --> J[memory/*.md]
  G --> K[skills/]
```

## Common Issues
- **Permission denied on mount** → set `fsGroup: 1000` in the pod `securityContext` to match OpenClaw's UID
- **Pod stuck Pending after reschedule** → `WaitForFirstConsumer` binding leaves the PV locked to one AZ; use topology-aware scheduling
- **Slow filesystem on NFS** → use the `hard` mount option and increase `timeo`; avoid NFS for high-IOPS workloads
- **PVC full** → enable `allowVolumeExpansion: true` on the StorageClass and monitor usage with alerts
## Best Practices

- Use `reclaimPolicy: Retain` to prevent accidental data loss
- Enable volume expansion on the StorageClass for growth
- Schedule daily VolumeSnapshots with 7-day retention
- Keep the workspace PVC separate from scratch space (use `emptyDir` for `/tmp`)
- Monitor PVC usage with the Prometheus metric `kubelet_volume_stats_used_bytes`
- Use an encrypted StorageClass in production (the workspace contains secrets)
## Key Takeaways

- A PVC mounted at `/home/node/.openclaw` preserves agent memory across restarts
- StorageClass choice depends on access pattern (RWO vs RWX) and cloud provider
- VolumeSnapshots provide quick, space-efficient backups
- `fsGroup` must match the OpenClaw container UID for write access
- Retain policy plus snapshots is a defense against accidental deletion
