Multitenancy with ceph-csi-rbd and RADOS Namespaces
Running multiple Kubernetes clusters for multiple tenants on top of a single Ceph cluster poses the challenge of how to grant each cluster access to Ceph for dynamic provisioning of PersistentVolumes. This involves installing Ceph's RBD CSI driver (ceph-csi-rbd) and configuring its access to the Ceph cluster.
You don't want to create multiple Ceph Pools, one for each tenant, as that's not the correct way to handle multitenancy in Ceph. Luckily, Ceph has supported namespacing in RADOS for quite a while now, which allows operators to partition resources into isolated namespaces and to create keyrings that are restricted to these namespaces. That way, Tenant A can't access the resources of Tenant B, which live in a different namespace.
The only issue is that the documentation of Ceph's RBD CSI doesn't tell you whether it's possible and, if so, how to make use of RADOS namespaces. The good news is that it's definitely possible, and it only requires some slight adjustments to the CSI's Helm values.
Namespace Creation
Given that you're running a Ceph Pool called kubernetes, you can create a RADOS namespace within that pool, alongside the matching keyrings for accessing it, with the following commands.
```
rbd namespace -p kubernetes create --namespace tenant-a
rbd namespace -p kubernetes create --namespace tenant-b
```

Once created, you can create the appropriate keyrings:

```
ceph auth get-or-create client.tenant-a mon 'profile rbd' osd 'profile rbd pool=kubernetes namespace=tenant-a' -o /etc/ceph/ceph.client.tenant-a.keyring
ceph auth get-or-create client.tenant-b mon 'profile rbd' osd 'profile rbd pool=kubernetes namespace=tenant-b' -o /etc/ceph/ceph.client.tenant-b.keyring
```

Take note of the keyrings' contents, because you'll need them down the line for the configuration of the Ceph RBD CSI.
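If you need a key again later, for example when filling in secret.userKey in the CSI's Helm values below, you can print it at any time with ceph auth get-key:

```
ceph auth get-key client.tenant-a
ceph auth get-key client.tenant-b
```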
Ceph CSI RBD
Then, in order to deploy the ceph-csi-rbd chart to your Kubernetes cluster, you need to prepare a values.yaml. In my case, it looks like the following.
```
cephconf: |
  [global]
  fsid = 0ec297ca-c5cb-11ee-XXXX-XXXXXXXXXXXX
  mon_host = [v2:10.100.0.7:3300/0,v1:10.100.0.7:6789/0] [v2:10.100.0.8:3300/0,v1:10.100.0.8:6789/0] [v2:10.100.0.9:3300/0,v1:10.100.0.9:6789/0]

csiConfig:
  - clusterID: 0ec297ca-c5cb-11ee-XXXX-XXXXXXXXXXXX
    monitors:
      - 10.100.0.7:3300
      - 10.100.0.8:3300
      - 10.100.0.9:3300
    rbd:
      radosNamespace: tenant-a

provisioner:
  replicaCount: 1

secret:
  create: true
  name: csi-rbd-secret
  userID: tenant-a
  userKey: AQAeG85ld9XXXXXXXXXXXXXXXXXXXXXXXXXXXX==

storageClass:
  allowVolumeExpansion: true
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  cephLogDir: /var/log/ceph
  cephLogStrategy: compress
  clusterID: 0ec297ca-c5cb-11ee-XXXX-XXXXXXXXXXXX
  create: true
  fstype: ext4
  mountOptions: []
  pool: kubernetes
  provisionerSecret: csi-rbd-secret
  reclaimPolicy: Delete
```

The important bit in the snippet above is the (undocumented) csiConfig.rbd.radosNamespace option, which configures the RADOS namespace to use. Otherwise, the values.yaml is pretty default, only containing the necessary values to get the CSI up and running.
In order to use these values for your own cluster, you need to adjust the following fields to match your environment and your own Ceph cluster:

- cephconf: set the cluster ID (fsid) and replace the array of Ceph Monitors with your own ones
- csiConfig.monitors: set your own Ceph Monitors here
- csiConfig.rbd.radosNamespace: set the name of the RADOS namespace to use
- secret.userID: replace this value with the Ceph client name (without the client. prefix)
- secret.userKey: replace this value with the corresponding key from the keyring for said user
- storageClass.clusterID: set your Ceph cluster ID again
Aside from that, all values can stay the same in order to get a functioning setup. Once the file is prepared, you're ready to install the ceph-csi-rbd Helm chart in your cluster.
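If the ceph-csi Helm repository and the target namespace don't exist on your side yet, a quick preparation step looks like this (the repository URL is the one published by the ceph-csi project):

```
# Add the ceph-csi chart repository and refresh the local index
helm repo add ceph-csi https://ceph.github.io/csi-charts
helm repo update

# The install command below expects the target namespace to exist already
kubectl create namespace ceph-csi-rbd
```

With that in place, install the chart with the following command, where values.yaml points to the values file we just created above: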
```
helm install -f values.yaml --namespace "ceph-csi-rbd" "ceph-csi-rbd" ceph-csi/ceph-csi-rbd
```

Validation
After you've installed the Ceph RBD CSI, you can check whether everything is working as expected by simply creating a new PersistentVolumeClaim against the StorageClass the chart created, as shown below.
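A minimal claim could look like the following sketch. Note that csi-rbd-sc is the chart's default StorageClass name, so adjust storageClassName if you set storageClass.name to something else in your values.yaml.

```
# Minimal test PVC, dynamically provisioned by ceph-csi-rbd.
# Assumes the chart's default StorageClass name (csi-rbd-sc).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-rbd-sc
```

After you've applied the manifest and the claim shows up as Bound, meaning a PersistentVolume has been provisioned for it, you can check the resulting RBD image on the Ceph cluster: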
```
mon01.ceph ~ # rbd info -p kubernetes --namespace tenant-a csi-vol-1d2b8143-8ac8-4e68-b0f2-374c7a4f6696
rbd image 'csi-vol-1d2b8143-8ac8-4e68-b0f2-374c7a4f6696':
        size 5 GiB in 1280 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: f122b4130b07e2
        block_name_prefix: rbd_data.f122b4130b07e2
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Wed Jan 24 08:00:36 2026
        access_timestamp: Wed Jan 24 08:00:36 2026
        modify_timestamp: Wed Jan 24 08:00:36 2026
```

Running the same command without the --namespace flag yields an error, which is the expected and desired result: it confirms that the RBD image created by the CSI is indeed within the correct namespace. Thanks to the privilege and namespace separation in Ceph, a tenant that extracted the keyring from their cluster is still unable to manually access the RBD images of other tenants, which is crucial for a safe multitenant operation of Ceph and Kubernetes.
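As an additional, optional check of the tenant isolation itself, you can try to list tenant-a's namespace with tenant-b's credentials from a node that has both keyrings under /etc/ceph. Given the cephx caps created earlier, the second command in the sketch below is expected to be denied:

```
# Allowed: tenant-a's caps cover pool=kubernetes namespace=tenant-a
rbd --id tenant-a -p kubernetes --namespace tenant-a ls

# Expected to fail with a permission error: tenant-b's caps are
# restricted to namespace=tenant-b
rbd --id tenant-b -p kubernetes --namespace tenant-a ls
```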