Storage · Advanced · ⏱ 15 minutes · K8s 1.28+

Rook-Ceph: Distributed Storage for Kubernetes

Deploy Rook-Ceph on Kubernetes for distributed block, file, and object storage. Covers installation, CephCluster configuration, StorageClasses, and monitoring.

By Luca Berton • πŸ“– 5 min read

πŸ’‘ Quick Answer: Install the Rook operator with Helm, create a CephCluster (3 MONs, OSDs on raw disks), then expose storage through an RBD StorageClass (block, RWO), a CephFS StorageClass (shared file, RWX), and a CephObjectStore (S3-compatible objects).

The Problem

Kubernetes itself ships no distributed storage: cloud-provider volumes are zonal and typically RWO-only, and bare-metal clusters have nothing beyond local disks. Rook runs Ceph inside the cluster, turning the nodes' raw disks into replicated block, shared file, and S3-compatible object storage, with the operator handling deployment, upgrades, and failure recovery.

The Solution

Install Rook-Ceph

helm repo add rook-release https://charts.rook.io/release
helm repo update
# Installs the Rook operator and CRDs into the rook-ceph namespace
helm install rook-ceph rook-release/rook-ceph \
  --namespace rook-ceph --create-namespace
# CephCluster configuration
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  storage:
    useAllNodes: true
    useAllDevices: true        # Auto-detect available disks
    # Or specify devices:
    # nodes:
    #   - name: worker-1
    #     devices:
    #       - name: sdb
    #       - name: sdc
  resources:
    mgr:
      requests:
        cpu: 500m
        memory: 512Mi
    osd:
      requests:
        cpu: 500m
        memory: 2Gi
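Once the CephCluster is applied, cluster health can be checked from the Rook toolbox pod. A sketch using the toolbox manifest from the Rook repository (pin the URL to your Rook release branch rather than master):

```shell
# Deploy the rook-ceph-tools pod shipped with the Rook example manifests
kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/toolbox.yaml

# Wait for the toolbox, then query Ceph directly
kubectl -n rook-ceph rollout status deploy/rook-ceph-tools
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree
```

A healthy cluster reports HEALTH_OK with all three MONs in quorum and every OSD up and in.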

StorageClasses

# Block storage (RWO β€” databases)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  # CSI secrets created by the Rook operator
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
allowVolumeExpansion: true
reclaimPolicy: Retain
---
# Shared filesystem (RWX β€” shared data)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-filesystem
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0   # Rook names data pools <fsName>-<poolName>
  # CSI secrets created by the Rook operator
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Retain
---
# Object storage (S3-compatible)
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3
  gateway:
    port: 80
    instances: 2
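The StorageClasses above assume a replicapool block pool and a myfs filesystem, and neither is created by the CephCluster itself. A minimal sketch of the two custom resources, using replication size 3 to match the object store:

```yaml
# RBD pool backing the ceph-block StorageClass
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host        # replicate across nodes, not just disks
  replicated:
    size: 3
---
# CephFS filesystem backing the ceph-filesystem StorageClass
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: data0            # resulting Ceph pool: myfs-data0
      replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
```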

Architecture

graph TD
    A[Rook Operator] --> B[CephCluster]
    B --> C[MON x3 - metadata]
    B --> D[OSD per disk - data]
    B --> E[MGR - management]
    F[StorageClass: ceph-block] --> G[RBD volumes - RWO]
    H[StorageClass: ceph-fs] --> I[CephFS volumes - RWX]
    J[CephObjectStore] --> K[S3-compatible API]
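Monitoring is enabled on the CephCluster spec itself: the built-in Ceph dashboard served by the active MGR, plus Prometheus metrics. A sketch of the additional fields (the monitoring block assumes the Prometheus Operator CRDs are already installed):

```yaml
# Additions to the CephCluster spec from the installation step
spec:
  dashboard:
    enabled: true       # Ceph web dashboard on the active MGR
    ssl: true
  monitoring:
    enabled: true       # creates ServiceMonitor objects; requires Prometheus Operator
```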

Frequently Asked Questions

Minimum nodes for Rook-Ceph?

3 nodes minimum for production (3 MONs for quorum). Each node needs at least one raw disk (no filesystem). Rook-Ceph is overkill for small clusters β€” use NFS or cloud storage instead.
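When the cluster meets those requirements, workloads consume storage through ordinary PVCs. A sketch against the ceph-block StorageClass (claim name and size are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data        # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce          # RBD block volumes are RWO
  storageClassName: ceph-block
  resources:
    requests:
      storage: 20Gi
```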

Best Practices

  • Run 3 MONs on separate nodes and keep replication size 3 for production
  • Give OSDs raw devices (no filesystem or partitions) and honest resource requests
  • Enable the dashboard and Prometheus monitoring; watch for HEALTH_WARN early
  • Rehearse node failure and OSD replacement in staging before trusting production data to the cluster

Key Takeaways

  • Rook automates Ceph deployment, upgrades, and recovery on Kubernetes
  • One CephCluster serves block (RWO), shared file (RWX), and S3-compatible object storage
  • Production needs 3+ nodes with raw disks, MON quorum, and replication size 3
  • Use kubectl describe, the operator logs, and the toolbox pod for troubleshooting
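For the troubleshooting point above, the usual starting commands are standard kubectl and ceph invocations:

```shell
# Operator logs show reconcile errors for the CephCluster and related CRs
kubectl -n rook-ceph logs deploy/rook-ceph-operator

# Per-resource status and events (e.g. OSDs that failed to come up)
kubectl -n rook-ceph describe cephcluster rook-ceph

# Detailed health from inside the toolbox pod
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph health detail
```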
#rook #ceph #distributed-storage #block-storage #object-storage
Written by Luca Berton

Principal Solutions Architect specializing in Kubernetes, AI/GPU infrastructure, and cloud-native platforms. Author of Kubernetes Recipes and creator of CopyPasteLearn courses.


Want More Kubernetes Recipes?

This recipe is from Kubernetes Recipes, our 750-page practical guide with hundreds of production-ready patterns.
