Background
ODF deploys a Ceph cluster on top of an existing OCP cluster. Ceph features are provided to the cluster via a variety of storage classes.
This post has been verified against OCP and ODF versions 4.8 through 4.11.
Storage Class Listing
Let's start by verifying the availability of ODF storage classes. The following command will display the available storage classes.
$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                    kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  40h
ocs-storagecluster-ceph-nfs   openshift-storage.nfs.csi.ceph.com      Delete          Immediate              false                  40h
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   40h
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false                  40h
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   40h
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  40h
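The RBD-backed class can also be located programmatically by matching on its provisioner rather than reading the table by eye. A minimal sketch, assuming a logged-in `oc` session; `find_sc_by_provisioner` is a hypothetical helper, shown here filtering canned text:

```shell
# Hypothetical helper: given "NAME PROVISIONER" pairs on stdin, print the
# names whose provisioner matches the pattern in $1.
find_sc_by_provisioner() {
  awk -v p="$1" '$2 ~ p {print $1}'
}

# Live usage (assumes a working cluster login):
#   oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner --no-headers \
#     | find_sc_by_provisioner rbd.csi.ceph.com
```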
Verify RBD Storage Class
To verify RBD, we will create a namespace, bind a PVC, and create a pod with the PVC attached. Once the pod is running, we will copy content into the mount point and verify access.
Create RBD Verification Namespace
The namespace is created via the following YAML and command.
$ cat rbd-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: rbd-ns
spec: {}

$ oc apply -f rbd-ns.yaml
namespace/rbd-ns created
Create RBD Verification PVC
The PVC is created against the RBD backed storage class.
$ cat rbd-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
  namespace: rbd-ns
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
$ oc apply -f rbd-pvc.yaml persistentvolumeclaim/rbd-pvc created
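Because this storage class uses `Immediate` volume binding, the claim should bind quickly; it can still be worth polling for the Bound phase before attaching a pod. A minimal sketch; `wait_for_phase` is a hypothetical helper around `oc get pvc`:

```shell
# Hypothetical helper: retry until the command in $2 prints the phase in $1.
wait_for_phase() {
  for i in 1 2 3 4 5; do
    phase=$($2)
    [ "$phase" = "$1" ] && return 0
    sleep 2
  done
  return 1
}

# Live usage:
#   wait_for_phase Bound "oc get pvc rbd-pvc -n rbd-ns -o jsonpath={.status.phase}"
```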
Create RBD Verification POD
A pod is created in the RBD verification namespace. This pod uses Red Hat's Apache 2.4 image, which is based on their UBI image. This image was chosen as a convenience and to permit verification of network access to the copied data. The image can be replaced with any other suitable image and related tests.
NOTE: This pod description includes setting the fsGroup in the securityContext. This will ensure the mounted volume will be accessible by the internal process. In this pod, httpd runs as UID 1001.
$ cat rbd-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: rbd-ns
  labels:
    app: rook-ceph-block
spec:
  securityContext:
    fsGroup: 1001
  containers:
    - name: sample-pod1
      image: registry.access.redhat.com/ubi8/httpd-24
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: repo-vol
          mountPath: "/var/www/html/"
  volumes:
    - name: repo-vol
      persistentVolumeClaim:
        claimName: rbd-pvc
$ oc apply -f rbd-pod.yaml pod/test-pod created
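One way to confirm the `fsGroup` took effect is to check that 1001 appears among the supplementary groups inside the running pod. A minimal sketch; `has_group` is a hypothetical helper, with the live `oc exec` call shown as usage:

```shell
# Hypothetical helper: succeed if gid $1 appears in the space-separated
# group list $2 (the shape `id -G` prints).
has_group() {
  case " $2 " in
    *" $1 "*) return 0 ;;
    *)        return 1 ;;
  esac
}

# Live usage (assumes the pod above is Running):
#   groups=$(oc exec -n rbd-ns test-pod -- id -G)
#   has_group 1001 "$groups" && echo "fsGroup applied"
```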
Copy Static Content Into RBD Verification Pod
A simple HTML file is copied into the pod's document root. This will ensure the attached PVC can be written to and read from via the web service.
$ cat index-rbd.html
Here be block dragons
$ oc cp index-rbd.html rbd-ns/test-pod:/var/www/html/index.html
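To confirm the copy landed intact, the local and in-pod checksums can be compared. A minimal sketch; `same_sum` is a hypothetical helper that compares only the hash field of `md5sum`-style output:

```shell
# Hypothetical helper: compare the first (hash) field of two md5sum lines,
# ignoring the differing file paths that follow.
same_sum() {
  [ "${1%% *}" = "${2%% *}" ]
}

# Live usage:
#   local_sum=$(md5sum index-rbd.html)
#   pod_sum=$(oc exec -n rbd-ns test-pod -- md5sum /var/www/html/index.html)
#   same_sum "$local_sum" "$pod_sum" && echo "copy verified"
```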
Create and Expose Service from Verification Pod
The RBD pod is exposed via the standard OCP service and route commands.
$ cat rbd-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: block-httpd
  namespace: rbd-ns
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: rook-ceph-block
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: block-httpd
  namespace: rbd-ns
spec:
  port:
    targetPort: http
  to:
    kind: Service
    name: block-httpd

$ oc apply -f rbd-service.yaml
service/block-httpd created
route.route.openshift.io/block-httpd created
Verify RBD Service and Data Access
A simple curl command is used to validate access to the data written to the RBD backed PVC.
$ RBD_HOST=$(oc get route -n rbd-ns block-httpd -o jsonpath='{.spec.host}')
$ curl ${RBD_HOST}
Here be block dragons
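The curl output can also be checked programmatically rather than by eye, which is handy when scripting these verifications. A minimal sketch; `check_content` is a hypothetical helper, with the live fetch shown as usage:

```shell
# The marker string written to the RBD-backed volume earlier.
expected="Here be block dragons"

# Hypothetical helper: succeed only if $1 matches the expected body exactly.
check_content() {
  [ "$1" = "$expected" ]
}

# Live usage (assumes RBD_HOST was set as above):
#   body=$(curl -s "http://${RBD_HOST}")
#   check_content "$body" && echo "RBD data verified"
```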
RBD Verification Conclusion
As with the CephFS post, this post has demonstrated how to bind a PVC to an RBD-backed storage class, attach the PVC to a pod, copy data into the pod, and verify access to the data via a network service. Other than setting a securityContext, no major changes were needed from the previous demo.