My previous posts demonstrated how to use the CephFS- and Ceph RBD-backed storage classes deployed by OpenShift Data Foundation (ODF) on OpenShift.
In this blog post I will demonstrate how to use the Ceph RGW-backed storage class from within a pod running on the OpenShift cluster. I will extract the S3 authentication credentials, create a namespace, start a pod, and demonstrate how to securely interact with the S3 service.
Background
As stated in the previous blog posts, ODF deploys a Ceph cluster within the OCP cluster. The master nodes host the Ceph monitor processes, the worker nodes serve as OSD hosts, and the remaining related pods are scheduled across the cluster.
This post has been verified against OCP and ODF version 4.11.
Storage Class Listing
Let's start by verifying the availability of the ODF storage classes. The following command displays the available storage classes.
$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                    kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  6d
ocs-storagecluster-ceph-nfs   openshift-storage.nfs.csi.ceph.com      Delete          Immediate              false                  6d
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   6d
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false                  6d
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   6d
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  6d
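If you only care about the RGW-backed class, its provisioner can be confirmed directly. This is a convenience check, not required for the rest of this post:

$ oc get sc ocs-storagecluster-ceph-rgw -o jsonpath='{.provisioner}{"\n"}'
openshift-storage.ceph.rook.io/bucket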
Verify S3 Access
To verify S3 access, we will create an ObjectBucketClaim, extract the needed authentication and connection information, create a namespace with a configured pod, and verify access.
Create ObjectBucketClaim
An ObjectBucketClaim is created against the Ceph RGW-backed storage class.
$ cat objectbucketclaim.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
  namespace: openshift-storage
spec:
  generateBucketName: ceph-bkt
  storageClassName: ocs-storagecluster-ceph-rgw
$ oc apply -f objectbucketclaim.yaml
objectbucketclaim.objectbucket.io/ceph-bucket created
$ oc get objectbucketclaim -n openshift-storage ceph-bucket -o jsonpath='{.status.phase}{"\n"}'
Bound
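Rather than polling the phase by hand, newer oc clients (those based on kubectl 1.23 or later, which includes the 4.11 client) can block until the claim binds. A convenience sketch, assuming your client supports jsonpath waits:

$ oc wait --for=jsonpath='{.status.phase}'=Bound objectbucketclaim/ceph-bucket -n openshift-storage --timeout=120s
objectbucketclaim.objectbucket.io/ceph-bucket condition met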
Extract Secrets and Connection Information
Once the ObjectBucketClaim phase is Bound, the S3 secrets and connection information can be extracted from the OCP cluster. The following commands capture the needed information in environment variables for later use and print the resulting values to the screen.
$ export AWS_ACCESS_KEY_ID=`oc get secret -n openshift-storage rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser -o jsonpath='{.data.AccessKey}' | base64 -d`
$ export AWS_SECRET_ACCESS_KEY=`oc get secret -n openshift-storage rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser -o jsonpath='{.data.SecretKey}' | base64 -d`
$ export AWS_BUCKET=`oc get cm ceph-bucket -n openshift-storage -o jsonpath='{.data.BUCKET_NAME}'`
$ export AWS_HOST=`oc get cm ceph-bucket -n openshift-storage -o jsonpath='{.data.BUCKET_HOST}'`
$ export AWS_PORT=`oc get cm ceph-bucket -n openshift-storage -o jsonpath='{.data.BUCKET_PORT}'`
$ echo ${AWS_ACCESS_KEY_ID}
$ echo ${AWS_SECRET_ACCESS_KEY}
$ echo ${AWS_BUCKET}
$ echo ${AWS_HOST}
$ echo ${AWS_PORT}
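Depending on the ODF version, the provisioner should also create a Secret named after the claim (ceph-bucket) in the same namespace, holding per-bucket credentials under the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY keys. If present in your environment, it is an alternative to the object store user's keys used above:

$ oc get secret ceph-bucket -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
$ oc get secret ceph-bucket -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d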
Update the S3 Pod YAML
The pod YAML file is updated via sed to pass the S3 parameters and then applied to the cluster.

$ cat consume-s3.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: s3-test
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: s3-test
  labels:
    app: rook-s3
spec:
  containers:
  - name: run-pod1
    image: registry.access.redhat.com/ubi8/ubi
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'yum install -y wget python3 && cd /tmp && wget https://downloads.sourceforge.net/project/s3tools/s3cmd/2.2.0/s3cmd-2.2.0.tar.gz && tar -zxf /tmp/s3cmd-2.2.0.tar.gz && ls /tmp && tail -f /dev/null']
    env:
    - name: AWS_ACCESS_KEY_ID
      value: VALUE_FROM_ECHO_AWS_ACCESS_KEY_ID
    - name: AWS_SECRET_ACCESS_KEY
      value: VALUE_FROM_ECHO_AWS_SECRET_ACCESS_KEY
    - name: AWS_HOST
      value: VALUE_FROM_ECHO_AWS_HOST
    - name: AWS_PORT
      value: VALUE_FROM_ECHO_AWS_PORT
    - name: AWS_BUCKET
      value: VALUE_FROM_ECHO_AWS_BUCKET
$ sed -e "s/VALUE_FROM_ECHO_AWS_ACCESS_KEY_ID/${AWS_ACCESS_KEY_ID}/g" \
      -e "s/VALUE_FROM_ECHO_AWS_SECRET_ACCESS_KEY/${AWS_SECRET_ACCESS_KEY}/" \
      -e "s/VALUE_FROM_ECHO_AWS_HOST/${AWS_HOST}/" \
      -e "s/VALUE_FROM_ECHO_AWS_PORT/\"${AWS_PORT}\"/" \
      -e "s/VALUE_FROM_ECHO_AWS_BUCKET/${AWS_BUCKET}/" \
      -i consume-s3.yaml
$ oc apply -f consume-s3.yaml
namespace/s3-test created
pod/test-pod created
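A note on the sed approach: it writes the plaintext credentials into the YAML file on disk, and the substitution breaks if a generated secret key happens to contain a / character. A cleaner pattern is to copy the credentials into the s3-test namespace as a Secret and reference them with secretKeyRef. A minimal sketch, assuming the s3-test namespace has already been created and using a hypothetical secret name ceph-bucket-creds:

$ oc create secret generic ceph-bucket-creds -n s3-test \
    --from-literal=AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
    --from-literal=AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}

The pod spec then references the keys instead of embedding the values:

    env:
    - name: AWS_ACCESS_KEY_ID
      valueFrom:
        secretKeyRef:
          name: ceph-bucket-creds
          key: AWS_ACCESS_KEY_ID
    - name: AWS_SECRET_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          name: ceph-bucket-creds
          key: AWS_SECRET_ACCESS_KEY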
Wait for the Pod to Be Ready
The pod's command line includes the installation steps needed to prepare the stock UBI image for this demonstration; a customized image should be used in a production environment (a sketch appears at the end of this post). Checking for the python3 command is sufficient to confirm that this demonstration pod is configured.
$ oc exec -n s3-test test-pod -- python3 -V
Python 3.6.8
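If you are scripting this step, oc wait can block until the pod starts. Keep in mind that without a readiness probe the pod reports Ready as soon as the container is running, which is before the yum install in the pod's command line finishes, so the python3 check above remains the authoritative signal:

$ oc wait --for=condition=Ready pod/test-pod -n s3-test --timeout=300s
pod/test-pod condition met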
NOTE: This demonstration uses s3cmd to interface with the bucket. A customized configuration file containing the needed S3 parameters and the service CA certificate is copied into the pod for easier testing. The creation and copying of this file is left out of this blog post.
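For reference, an s3cmd configuration for this kind of setup might look roughly like the following. This is a sketch, not the exact file used in my run; the placeholders correspond to the values echoed earlier, and /tmp/service-ca.crt is a hypothetical in-pod path for the service CA certificate (on OCP 4.x the CA can be taken from the auto-injected openshift-service-ca.crt ConfigMap and copied in with oc cp):

[default]
access_key = <AWS_ACCESS_KEY_ID>
secret_key = <AWS_SECRET_ACCESS_KEY>
host_base = <AWS_HOST>:<AWS_PORT>
host_bucket = <AWS_HOST>:<AWS_PORT>
use_https = True
ca_certs_file = /tmp/service-ca.crt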
Verify S3 Environment
$ oc exec -n s3-test test-pod -- printenv | grep AWS
AWS_BUCKET=ceph-bkt-...ac49
AWS_ACCESS_KEY_ID=FAJ...1HR
AWS_SECRET_ACCESS_KEY=3z3...Vtd
AWS_HOST=rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc
AWS_PORT=443
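As a quick connectivity probe before running any S3 commands, curl (included in the UBI image) can confirm from inside the pod that the RGW endpoint answers over TLS. The -k flag skips CA verification for this rough check; any valid HTTP status code indicates the service is reachable:

$ oc exec -n s3-test test-pod -- sh -c 'curl -sk -o /dev/null -w "%{http_code}\n" https://${AWS_HOST}:${AWS_PORT}'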
Create and List a Bucket
To verify S3 access, we will simply create and list a new bucket.
$ oc exec -n s3-test test-pod -- python3 /tmp/s3cmd-2.2.0/s3cmd mb s3://validate.${RANDOM}
Bucket 's3://validate.18454/' created
$ oc exec -n s3-test test-pod -- python3 /tmp/s3cmd-2.2.0/s3cmd ls
2022-10-05 20:54 s3://validate.18454
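To exercise the data path as well (this goes one step beyond the original run), an object can be uploaded to and listed from the claim's bucket; ${AWS_BUCKET} was exported earlier on the admin workstation:

$ echo "hello rgw" > /tmp/hello.txt
$ oc cp /tmp/hello.txt s3-test/test-pod:/tmp/hello.txt
$ oc exec -n s3-test test-pod -- python3 /tmp/s3cmd-2.2.0/s3cmd put /tmp/hello.txt s3://${AWS_BUCKET}/hello.txt
$ oc exec -n s3-test test-pod -- python3 /tmp/s3cmd-2.2.0/s3cmd ls s3://${AWS_BUCKET}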
Conclusion
With this blog post we have demonstrated how to consume the internal S3 storage service by creating an ObjectBucketClaim, extracting the needed authentication information, deploying a customized pod, and running commands against the service. This information can be extended to support the deployment and operation of customized S3-enabled applications.
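As one example of such an extension, the demonstration image could be replaced with a purpose-built one so the pod starts with its tooling already installed, as recommended above for production. A minimal Containerfile sketch, not taken from the original setup:

FROM registry.access.redhat.com/ubi8/ubi
# Install python3/pip and a pinned s3cmd so no setup has to run at pod start
RUN yum install -y python3 python3-pip && \
    pip3 install s3cmd==2.2.0 && \
    yum clean all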