Adding Support for Custom clusterID in Rook Ceph CSI
In the latest enhancement to Rook’s Ceph operator, users can now explicitly specify the clusterID when creating CephBlockPoolRadosNamespace and CephFilesystemSubVolumeGroup custom resources. This update gives users more control and flexibility over how storage clusters are identified and referenced by the CSI driver.
🚨 Why This Feature Matters
Previously, the clusterID was generated internally by the Rook operator, so users had no influence over how it was defined or named. Users had to create the CephBlockPoolRadosNamespace and CephFilesystemSubVolumeGroup CRs, wait for the clusterID to appear in the status of those CRs, and only then create a StorageClass that consumed it.
With this change:
- The clusterID can now be explicitly defined by the user.
- Users can assign meaningful and recognizable names to their clusters (e.g., prod-ceph, backup-ceph, test-cephfs-clusterid).
- This also helps to avoid hard-to-debug mismatches between the CSI config and Ceph cluster references.
📝 Important Note
⚠️ It is the user’s responsibility to ensure the clusterID is unique across all CephClusters managed by the same Rook operator instance. Duplicate or conflicting clusterIDs can result in unexpected CSI behavior, incorrect volume provisioning, or failures.
🔧 How to Use the New clusterID Field
CephBlockPoolRadosNamespace Example
apiVersion: ceph.rook.io/v1
kind: CephBlockPoolRadosNamespace
metadata:
  name: namespace-a
  namespace: rook-ceph
spec:
  blockPoolName: replicapool
  clusterID: rbd-test-clusterid
CephFilesystemSubVolumeGroup Example
apiVersion: ceph.rook.io/v1
kind: CephFilesystemSubVolumeGroup
metadata:
  name: group-a
  namespace: rook-ceph
spec:
  filesystemName: myfs
  dataPoolName: ""
  pinning:
    distributed: 1
  clusterID: cephfs-test-clusterid
Once you create the above CRs, Rook creates the backend resources for them and updates each CR’s status to the Ready state, with the clusterID specified in the spec field.
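For reference, the reconciled status on a CephBlockPoolRadosNamespace CR looks roughly like the sketch below; treat it as illustrative, since exact status field names can vary between Rook versions.

```yaml
# Illustrative status section after reconciliation (field names may
# differ slightly across Rook versions).
status:
  phase: Ready
  info:
    clusterID: rbd-test-clusterid
```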
📦 StorageClass Examples
Using the clusterID values above, we can create StorageClasses and provision PVCs from them.
🔷 RBD StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd-sc
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rbd-test-clusterid
  ...
reclaimPolicy: Delete
allowVolumeExpansion: true
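To exercise the RBD StorageClass end to end, a PVC can reference it by name. The PVC below is a minimal sketch; the name rbd-pvc and the 1Gi size are illustrative choices, not from the original configuration.

```yaml
# Illustrative PVC consuming the rbd-sc StorageClass above.
# The name "rbd-pvc" and size are example values.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce   # RBD block volumes are typically single-node
  resources:
    requests:
      storage: 1Gi
  storageClassName: rbd-sc
```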
🔷 CephFS StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs-sc
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: cephfs-test-clusterid
  ...
reclaimPolicy: Delete
allowVolumeExpansion: true
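Similarly, a PVC can be provisioned from the CephFS StorageClass. This sketch uses an illustrative name, cephfs-pvc, and requests shared access, which CephFS supports.

```yaml
# Illustrative PVC consuming the cephfs-sc StorageClass above.
# The name "cephfs-pvc" and size are example values.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany   # CephFS volumes can be mounted by multiple nodes
  resources:
    requests:
      storage: 1Gi
  storageClassName: cephfs-sc
```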
✅ Summary
This new clusterID support puts control back into the hands of cluster administrators. It simplifies CSI integration and makes configurations more predictable.
Try it out and structure your CSI configuration the way you want it!