Storage β€’ Intermediate β€’ ⏱ 20 minutes β€’ K8s 1.28+

OpenClaw Persistent State Management on Kubernetes

Manage OpenClaw agent state and workspace data with Kubernetes PVCs. Covers init container config seeding, backups, and storage class selection.

By Luca Berton β€’ πŸ“– 5 min read

πŸ’‘ Quick Answer: OpenClaw stores agent state, memory, skills, and config in /home/node/.openclaw, backed by a 10Gi PVC. An init container seeds the config from a ConfigMap on every start without overwriting existing workspace data. Use Recreate strategy (not RollingUpdate) since the PVC is RWO.

The Problem

OpenClaw agents build up state over time β€” memory files, learned preferences, workspace files, installed skills. Losing this state on pod restart means losing your agent’s personality, context, and work history. You need persistent storage that survives pod restarts, node failures, and upgrades, while also seeding initial config on first deploy.

The Solution

PVC Definition

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openclaw-home-pvc
  labels:
    app: openclaw
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Init Container Config Seeding

The init container copies config files from the ConfigMap to the PVC on every start:

initContainers:
  - name: init-config
    image: busybox:1.37
    command:
      - sh
      - -c
      - |
        mkdir -p /home/node/.openclaw/workspace
        cp /config/openclaw.json /home/node/.openclaw/openclaw.json
        cp /config/AGENTS.md /home/node/.openclaw/workspace/AGENTS.md
    volumeMounts:
      - name: openclaw-home
        mountPath: /home/node/.openclaw
      - name: config
        mountPath: /config

This design:

  • Always refreshes openclaw.json and AGENTS.md from ConfigMap
  • Preserves agent memory (memory/), workspace files, and skills
  • Creates workspace directory if first run

PVC Contents Structure

/home/node/.openclaw/           (PVC mount)
β”œβ”€β”€ openclaw.json               (seeded by init container)
β”œβ”€β”€ workspace/
β”‚   β”œβ”€β”€ AGENTS.md               (seeded by init container)
β”‚   β”œβ”€β”€ SOUL.md                 (created by agent)
β”‚   β”œβ”€β”€ USER.md                 (created by agent)
β”‚   β”œβ”€β”€ MEMORY.md               (created by agent)
β”‚   └── memory/
β”‚       β”œβ”€β”€ 2026-03-19.md       (daily notes)
β”‚       └── heartbeat-state.json
β”œβ”€β”€ skills/                     (installed skills)
β”œβ”€β”€ sessions/                   (active sessions)
└── state/                      (gateway state)

The seeding flow as a diagram:

graph TD
    A[ConfigMap] -->|Init Container| B[PVC: /home/node/.openclaw]
    B --> C[openclaw.json<br>Refreshed each start]
    B --> D[workspace/AGENTS.md<br>Refreshed each start]
    B --> E[workspace/memory/<br>Preserved across restarts]
    B --> F[workspace/SOUL.md<br>Preserved across restarts]
    B --> G[skills/<br>Preserved across restarts]
    B --> H[sessions/<br>Preserved across restarts]

Deployment Strategy: Recreate

Since the PVC uses ReadWriteOnce, only one pod can mount it at a time:

spec:
  strategy:
    type: Recreate  # Not RollingUpdate!

RollingUpdate would deadlock: the new pod cannot mount the RWO PVC until the old pod releases it, but the old pod is not terminated until the new one becomes ready.

Storage Class Selection

Choose based on your environment:

# Cloud providers β€” use SSD-backed storage (pick one)
spec:
  storageClassName: gp3                  # AWS EBS gp3
  # storageClassName: pd-ssd             # GCP Persistent Disk
  # storageClassName: managed-premium    # Azure managed disk

# On-prem β€” NFS for multi-node, local-path for single-node (pick one)
spec:
  storageClassName: nfs-client
  # storageClassName: local-path         # Kind, k3s

Backup Strategy

# Option 1: Snapshot (cloud providers)
kubectl create -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: openclaw-backup-$(date +%Y%m%d)
  namespace: openclaw
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: openclaw-home-pvc
EOF

# Option 2: Copy to local
kubectl cp openclaw/openclaw-xxx:/home/node/.openclaw/workspace ./backup/

# Option 3: Agent-driven GitOps (recommended)
# Agent commits workspace changes to Git automatically
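Option 1 can be automated with a CronJob. A sketch, assuming a bitnami/kubectl image and a service account (snapshot-creator here is an assumed name) with RBAC permission to create VolumeSnapshots:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: openclaw-snapshot
  namespace: openclaw
spec:
  schedule: "0 3 * * *"          # nightly at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: snapshot-creator   # assumed SA with snapshot RBAC
          restartPolicy: OnFailure
          containers:
            - name: snapshot
              image: bitnami/kubectl:1.28
              command:
                - sh
                - -c
                - |
                  kubectl create -f - <<EOF
                  apiVersion: snapshot.storage.k8s.io/v1
                  kind: VolumeSnapshot
                  metadata:
                    name: openclaw-backup-$(date +%Y%m%d)
                    namespace: openclaw
                  spec:
                    volumeSnapshotClassName: csi-snapclass
                    source:
                      persistentVolumeClaimName: openclaw-home-pvc
                  EOF
```

Pair this with a retention script or your CSI driver's snapshot lifecycle tooling so old snapshots don't accumulate.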

Resize PVC

# Check if the StorageClass supports expansion (prints name + flag per class)
kubectl get sc -o custom-columns=NAME:.metadata.name,EXPANSION:.allowVolumeExpansion

# Resize
kubectl patch pvc openclaw-home-pvc -n openclaw \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

Common Issues

Pod Stuck in Pending (PVC Not Binding)

kubectl describe pvc openclaw-home-pvc -n openclaw
# Check Events for provisioner errors

# Kind/k3s: ensure local-path provisioner is installed
kubectl get sc

Data Lost After Helm Uninstall

If helm uninstall deletes the PVC and the underlying PV's reclaim policy is Delete, the volume and all agent state are destroyed with it. Change the PV's policy to Retain first:

kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
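To avoid hunting for the PV name by hand, it can be read from the bound PVC; a small sketch combining the lookup with the patch:

```shell
# Look up the PV bound to the claim, then set its reclaim policy to Retain
PV=$(kubectl get pvc openclaw-home-pvc -n openclaw -o jsonpath='{.spec.volumeName}')
kubectl patch pv "$PV" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```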

Config Changes Not Taking Effect

The init container overwrites openclaw.json on every start. Edit the ConfigMap, then restart:

kubectl edit configmap openclaw-config -n openclaw
kubectl rollout restart deployment/openclaw -n openclaw
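If you deploy with Helm, a checksum annotation on the pod template triggers the restart automatically whenever the ConfigMap changes, a common Helm pattern. A template excerpt (the configmap.yaml template path is an assumption about your chart layout):

```yaml
# Deployment template excerpt: hashing the rendered ConfigMap into a
# pod annotation forces a rollout whenever the config content changes.
spec:
  template:
    metadata:
      annotations:
        checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'
```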

Best Practices

  • Use Recreate strategy β€” RWO PVC requires it; RollingUpdate will deadlock
  • Backup before upgrades β€” snapshot PVC or kubectl cp workspace before major changes
  • GitOps for workspace β€” let the agent commit changes to Git for version-controlled state
  • Size appropriately β€” 10Gi is generous; monitor usage with kubectl exec -- du -sh /home/node/.openclaw
  • Init container for config only β€” never let init container touch memory/ or skills/
  • Retain PVCs β€” set reclaim policy to Retain to survive accidental namespace deletion

Key Takeaways

  • OpenClaw state lives in a single PVC mounted at /home/node/.openclaw
  • Init container seeds config from ConfigMap without overwriting agent-generated files
  • Use Recreate deployment strategy with RWO PVCs
  • Back up with VolumeSnapshots (cloud) or agent-driven GitOps (recommended)
  • Choose SSD-backed storage classes for responsive agent sessions
#openclaw #persistent-volumes #state-management #storage #backup
Written by Luca Berton

Principal Solutions Architect specializing in Kubernetes, AI/GPU infrastructure, and cloud-native platforms. Author of Kubernetes Recipes and creator of CopyPasteLearn courses.


Want More Kubernetes Recipes?

This recipe is from Kubernetes Recipes, our 750-page practical guide with hundreds of production-ready patterns.
