Friday, September 16, 2022

Verifying OpenShift Data Foundation data access with CephFS

This post serves as a follow-up to my previous post about deploying OpenShift Data Foundation (ODF) on OpenShift Container Platform (OCP).

This blog post demonstrates how to use the ODF-provisioned, CephFS-backed storage class by creating a persistent volume claim (PVC), binding it to a pod, and verifying data access.

Background

ODF deploys a Ceph cluster on top of an existing OCP cluster. Ceph features are provided to the cluster via a variety of storage classes.

This post has been verified against OCP and ODF versions 4.8 through 4.11.
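
Before proceeding, the overall health of the ODF deployment can be confirmed by querying the StorageCluster resource. This is a minimal sanity check, assuming the default cluster name ocs-storagecluster and the default openshift-storage namespace.

$ oc get storagecluster -n openshift-storage ocs-storagecluster -o jsonpath='{.status.phase}{"\n"}'
Ready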

Storage Class Listing

Let's start by verifying the availability of the ODF storage classes. The following command displays the available storage classes.

$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                    kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  40h
ocs-storagecluster-ceph-nfs   openshift-storage.nfs.csi.ceph.com      Delete          Immediate              false                  40h
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   40h
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false                  40h
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   40h
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  40h
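
Optionally, the CephFS storage class definition itself can be inspected to confirm which CSI provisioner and Ceph filesystem back it. This is just a sanity check; the parameters in the output (for example clusterID and fsName) depend on your ODF configuration.

$ oc get sc ocs-storagecluster-cephfs -o yaml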

Verify CephFS Storage Class

To verify CephFS, we will create a namespace, bind a PVC, and create a pod with the PVC attached. Once the pod is running, we will copy content into the mount point and verify access.

Create CephFS Verification Namespace

The namespace is created via the following YAML and command.

$ cat cephfs-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: cephfs-ns
spec: {}
$ oc apply -f cephfs-ns.yaml 
namespace/cephfs-ns created
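
If desired, confirm the namespace is active before continuing. The age shown below is illustrative.

$ oc get namespace cephfs-ns
NAME        STATUS   AGE
cephfs-ns   Active   5s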

Create CephFS Verification PVC

The PVC is created against the CephFS-backed storage class.

$ cat cephfs-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
  namespace: cephfs-ns
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
$ oc apply -f cephfs-pvc.yaml 
persistentvolumeclaim/cephfs-pvc created
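
Because the ocs-storagecluster-cephfs storage class uses Immediate volume binding, the PVC should report a Bound status almost right away. The volume name and age below are illustrative.

$ oc get pvc -n cephfs-ns
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
cephfs-pvc   Bound    pvc-0a1b2c3d-4e5f-6789-abcd-ef0123456789   1Gi        RWX            ocs-storagecluster-cephfs   10s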

Create CephFS Verification Pod

A pod is created in the CephFS verification namespace. This pod uses Red Hat's Apache 2.4 image, which is based on their Universal Base Image (UBI). The image was chosen as a convenience and to permit verification of network access to the copied data. It can be replaced with any other suitable image and corresponding tests.

$ cat cephfs-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-pod
  namespace: cephfs-ns
  labels:
    app: rook-ceph-cephfs
spec:
  containers:
  - name: cephfs-pod1
    image: registry.access.redhat.com/ubi8/httpd-24
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: repo-vol
      mountPath: "/var/www/html/"
  volumes:
  - name: repo-vol
    persistentVolumeClaim:
      claimName: cephfs-pvc
$ oc apply -f cephfs-pod.yaml 
pod/cephfs-pod created
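
Before copying data into the pod, verify it has reached the Running state. The age shown below is illustrative.

$ oc get pod -n cephfs-ns cephfs-pod
NAME         READY   STATUS    RESTARTS   AGE
cephfs-pod   1/1     Running   0          30s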

Copy Static Content Into CephFS Verification Pod

A simple HTML file is copied into the pod's document root. This ensures the attached PVC can be written to and read from via the web service.

$ cat index-cephfs.html 
Here be cephfs dragons
$ oc cp index-cephfs.html cephfs-ns/cephfs-pod:/var/www/html/index.html
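
To confirm the copy succeeded, the file can be read back from inside the pod's CephFS-backed mount. This is a quick sanity check ahead of the network test that follows.

$ oc exec -n cephfs-ns cephfs-pod -- cat /var/www/html/index.html
Here be cephfs dragons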

Create and Expose Service from Verification Pod

The CephFS pod is exposed via standard OCP Service and Route resources.

$ cat cephfs-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: cephfs-httpd
  namespace: cephfs-ns
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: rook-ceph-cephfs
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: cephfs-httpd
  namespace: cephfs-ns
spec:
  port:
    targetPort: http
  to:
    kind: Service
    name: cephfs-httpd
$ oc apply -f cephfs-service.yaml 
service/cephfs-httpd created
route.route.openshift.io/cephfs-httpd created
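
Before testing from outside the cluster, it can be useful to confirm the service has picked up the pod as an endpoint. The endpoint IP and age shown below are illustrative.

$ oc get endpoints -n cephfs-ns cephfs-httpd
NAME           ENDPOINTS          AGE
cephfs-httpd   10.128.2.15:8080   15s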

Verify CephFS Service and Data Access

A simple curl command is used to validate access to the data written to the CephFS-backed PVC.

$ CEPHFS_HOST=`oc get route -n cephfs-ns cephfs-httpd -o jsonpath='{.spec.host}'`
$ curl ${CEPHFS_HOST}
Here be cephfs dragons
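
As a final check, the persistent volume behind the claim can be inspected to confirm it was provisioned from the CephFS storage class. The volume name and age below are illustrative.

$ oc get pv `oc get pvc -n cephfs-ns cephfs-pvc -o jsonpath='{.spec.volumeName}'`
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS                REASON   AGE
pvc-0a1b2c3d-4e5f-6789-abcd-ef0123456789   1Gi        RWX            Delete           Bound    cephfs-ns/cephfs-pvc   ocs-storagecluster-cephfs            6m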

CephFS Verification Conclusion

This post has demonstrated how to bind a PVC to a CephFS-backed storage class, attach the PVC to a pod, copy data into the pod, and verify access to that data via a network service.

