
Wednesday, October 5, 2022

Using S3 storage provided by Ceph RGW on OpenShift Data Foundation

My previous posts demonstrated how to use the CephFS- and Ceph RBD-backed storage classes as deployed by OpenShift Data Foundation (ODF) on OpenShift.

In this blog post I will demonstrate how to use the Ceph RGW-backed storage class from within a pod running on the OpenShift cluster. I will extract the S3 authentication credentials, create a namespace, start a pod, and demonstrate how to securely interact with the S3 service.

Background

As stated in the previous blog posts, ODF deploys a Ceph cluster within the OCP cluster. The cluster master nodes host the Ceph monitor processes, the worker nodes host the OSDs, and the remaining related pods are scheduled across the cluster.

This post has been verified against OCP and ODF versions 4.11.

Storage Class Listing

Let's start by verifying the availability of the ODF storage classes with the following command.

$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                    kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  6d
ocs-storagecluster-ceph-nfs   openshift-storage.nfs.csi.ceph.com      Delete          Immediate              false                  6d
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   6d
ocs-storagecluster-ceph-rgw   openshift-storage.ceph.rook.io/bucket   Delete          Immediate              false                  6d
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   6d
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  6d

Verify S3 Access

To verify S3 access, we will create an ObjectBucketClaim, extract the needed authentication and connection information, create a namespace with a configured pod, and verify access.

Create ObjectBucketClaim

An ObjectBucketClaim is created against the Ceph RGW-backed storage class.

$ cat objectbucketclaim.yaml 
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
  namespace: openshift-storage
spec:
  generateBucketName: ceph-bkt
  storageClassName: ocs-storagecluster-ceph-rgw
$ oc apply -f objectbucketclaim.yaml 
objectbucketclaim.objectbucket.io/ceph-bucket created
$ oc get objectbucketclaim -n openshift-storage ceph-bucket -o jsonpath='{.status.phase}{"\n"}'
Bound
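
Rather than polling the phase by hand, a jsonpath-based wait can block until the claim reports Bound. A small convenience sketch, assuming the oc client is new enough to support jsonpath waits (kubectl 1.23 or later):

$ oc wait -n openshift-storage objectbucketclaim/ceph-bucket --for=jsonpath='{.status.phase}'=Bound --timeout=120s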

Extract Secrets and Connection Information

Once the ObjectBucketClaim's phase is Bound, the S3 secrets and connection information can be extracted from the OCP cluster. The following commands extract the needed information for later use and print it to the screen.

$ export AWS_ACCESS_KEY_ID=`oc get secret -n openshift-storage rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser -o jsonpath='{.data.AccessKey}'  | base64 -d`
$ export AWS_SECRET_ACCESS_KEY=`oc get secret -n openshift-storage rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser -o jsonpath='{.data.SecretKey}'  | base64 -d`
$ export AWS_BUCKET=`oc get cm ceph-bucket -n openshift-storage -o jsonpath='{.data.BUCKET_NAME}'`
$ export AWS_HOST=`oc get cm ceph-bucket -n openshift-storage -o jsonpath='{.data.BUCKET_HOST}'`
$ export AWS_PORT=`oc get cm ceph-bucket -n openshift-storage -o jsonpath='{.data.BUCKET_PORT}'`
$ echo ${AWS_ACCESS_KEY_ID}
$ echo ${AWS_SECRET_ACCESS_KEY}
$ echo ${AWS_BUCKET}
$ echo ${AWS_HOST}
$ echo ${AWS_PORT}

Update S3 pod yaml

The pod YAML file will be updated to pass the S3 parameters and then applied to the cluster.
$ cat consume-s3.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: s3-test
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: s3-test
  labels:
    app: rook-s3
spec:
  containers:
  - name: run-pod1
    image: registry.access.redhat.com/ubi8/ubi
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'yum install -y wget python3 && cd /tmp && wget https://downloads.sourceforge.net/project/s3tools/s3cmd/2.2.0/s3cmd-2.2.0.tar.gz && tar -zxf /tmp/s3cmd-2.2.0.tar.gz && ls /tmp && tail -f /dev/null' ]
    env:
    - name: AWS_ACCESS_KEY_ID
      value: VALUE_FROM_ECHO_AWS_ACCESS_KEY_ID
    - name: AWS_SECRET_ACCESS_KEY
      value: VALUE_FROM_ECHO_AWS_SECRET_ACCESS_KEY
    - name: AWS_HOST
      value: VALUE_FROM_ECHO_AWS_HOST
    - name: AWS_PORT
      value: VALUE_FROM_ECHO_AWS_PORT
    - name: AWS_BUCKET
      value: VALUE_FROM_ECHO_AWS_BUCKET
$ sed -e "s/VALUE_FROM_ECHO_AWS_ACCESS_KEY_ID/${AWS_ACCESS_KEY_ID}/g" \
-e "s/VALUE_FROM_ECHO_AWS_SECRET_ACCESS_KEY/${AWS_SECRET_ACCESS_KEY}/" \
-e "s/VALUE_FROM_ECHO_AWS_HOST/${AWS_HOST}/" \
-e "s/VALUE_FROM_ECHO_AWS_PORT/\"${AWS_PORT}\"/" \
-e "s/VALUE_FROM_ECHO_AWS_BUCKET/${AWS_BUCKET}/" \
-i consume-s3.yaml
$ oc apply -f consume-s3.yaml 
namespace/s3-test created
pod/test-pod created
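
Baking plain-text credentials into a manifest with sed is acceptable for a demonstration, but a cleaner pattern is to copy the values into a Secret in the application namespace and reference it from the pod. A minimal sketch, assuming a hypothetical secret named s3-credentials:

$ oc create secret generic s3-credentials -n s3-test \
    --from-literal=AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
    --from-literal=AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}

The pod env entries would then use valueFrom and secretKeyRef instead of literal values, for example:

    env:
    - name: AWS_ACCESS_KEY_ID
      valueFrom:
        secretKeyRef:
          name: s3-credentials
          key: AWS_ACCESS_KEY_ID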

Wait for the pod to be ready

The pod command line includes the installation commands needed to prepare the UBI image for this demonstration; a customized image should be used in a production environment. Checking for the python3 command is sufficient to ensure this demonstration pod is configured.

$ oc exec -n s3-test test-pod -- python3 -V
Python 3.6.8
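
It is also possible to block on pod readiness with oc wait (a convenience sketch; since the pod defines no readiness probe it reports Ready as soon as the container starts, while the yum/wget bootstrap is still running, so the python3 check above remains the definitive signal):

$ oc wait -n s3-test pod/test-pod --for=condition=Ready --timeout=120s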

NOTE: This demonstration uses s3cmd to interface with the bucket. A customized configuration file containing the needed S3 parameters and the service CA certificate is copied into the pod for easier testing. The updating and copying of this file is left out of this blog post.
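
For orientation, a minimal s3cmd configuration along those lines might look like the sketch below. The host and port come from the values extracted earlier; the CA bundle path is illustrative.

[default]
access_key = VALUE_FROM_ECHO_AWS_ACCESS_KEY_ID
secret_key = VALUE_FROM_ECHO_AWS_SECRET_ACCESS_KEY
host_base = rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc:443
host_bucket = rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc:443
use_https = True
ca_certs_file = /tmp/service-ca.crt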

Verify S3 Environment

We can use the printenv command to verify that the necessary credentials are set in the environment. These parameters can be used with a custom image to access the internal Ceph RGW storage.
$ oc exec -n s3-test test-pod -- printenv | grep AWS
AWS_BUCKET=ceph-bkt-...ac49
AWS_ACCESS_KEY_ID=FAJ...1HR
AWS_SECRET_ACCESS_KEY=3z3...Vtd
AWS_HOST=rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc
AWS_PORT=443
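
These variables are sufficient for any S3 client library to reach the gateway. As an illustration only (boto3 is not installed in this demonstration pod, and the CA bundle path is an assumption), a Python snippet consuming them might look like this:

import os

import boto3

# Build the internal RGW endpoint from the injected environment variables.
endpoint = "https://{}:{}".format(os.environ["AWS_HOST"], os.environ["AWS_PORT"])

s3 = boto3.client(
    "s3",
    endpoint_url=endpoint,
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    verify="/tmp/service-ca.crt",  # illustrative path to the service CA bundle
)

# Count the objects in the bucket created by the ObjectBucketClaim.
resp = s3.list_objects_v2(Bucket=os.environ["AWS_BUCKET"])
print(resp.get("KeyCount", 0))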

Verify S3 Access

To verify S3 access, we will simply create and list a new bucket.

$ oc exec -n s3-test test-pod -- python3 /tmp/s3cmd-2.2.0/s3cmd mb s3://validate.${RANDOM}
Bucket 's3://validate.18454/' created
$ oc exec -n s3-test test-pod -- python3 /tmp/s3cmd-2.2.0/s3cmd ls
2022-10-05 20:54  s3://validate.18454
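
Uploading and listing an object works the same way, for example (the object name is arbitrary):

$ oc exec -n s3-test test-pod -- python3 /tmp/s3cmd-2.2.0/s3cmd put /etc/hostname s3://validate.18454/hostname
$ oc exec -n s3-test test-pod -- python3 /tmp/s3cmd-2.2.0/s3cmd ls s3://validate.18454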

Conclusion

In this blog post we have demonstrated how to consume the internal S3 storage service by creating an ObjectBucketClaim, extracting the needed authentication information, deploying a customized pod, and running commands against the bucket. This information can be extended to support the deployment and operation of customized S3-enabled applications.

Monday, February 18, 2019

Using Rook deployed Object Store in Kubernetes

This posting will explore a Rook-deployed object store using Ceph radosgw within a Kubernetes cluster. A radosgw pod will be deployed and a user created. This will be followed by an exploration of the stored credential information and how to use this information with a custom-deployed pod.

Background


Previously, we deployed a Ceph storage cluster using Rook and demonstrated how to customize the cluster configuration.

Current deployments

$ kubectl -n rook-ceph get pods
NAME                                     READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-569d76f456-sddpg         1/1     Running     0          3d21h
rook-ceph-mon-a-6bc4689f9d-r8jcm         1/1     Running     0          3d21h
rook-ceph-mon-b-566cdf9d6-mhf8w          1/1     Running     0          3d21h
rook-ceph-mon-c-74c6779667-svktr         1/1     Running     0          3d21h
rook-ceph-osd-0-6766d4f547-6qlvv         1/1     Running     0          3d21h
rook-ceph-osd-1-c5c7ddf67-xrm2k          1/1     Running     0          3d21h
rook-ceph-osd-2-f7b75cf4d-bm5sc          1/1     Running     0          3d21h
rook-ceph-osd-prepare-kube-node1-ddgkk   0/2     Completed   0          3d21h
rook-ceph-osd-prepare-kube-node2-nmgqj   0/2     Completed   0          3d21h
rook-ceph-osd-prepare-kube-node3-xzwnr   0/2     Completed   0          3d21h
rook-ceph-tools-76c7d559b6-tnmrf         1/1     Running     0          3d21h

Ceph cluster status

$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- ceph status
  cluster:
    id:     eff29897-252c-4e65-93e7-6f4975c0d83a
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,c,b
    mgr: a(active)
    osd: 3 osds: 3 up, 3 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   3.1 GiB used, 57 GiB / 60 GiB avail
    pgs:     

Deploying Object Store Pod and User

Rook has CephObjectStore and CephObjectStoreUser CRDs which permit the creation of a Ceph RadosGW service and related users. 
The default configurations are deployed with the commands below.
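
For reference, trimmed-down sketches of what object.yaml and object-user.yaml roughly contain are shown here; pool sizes and other values are illustrative, and the full examples ship with the Rook release being used.

# object.yaml (sketch)
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3
  gateway:
    type: s3
    port: 80
    instances: 1

# object-user.yaml (sketch)
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  store: my-store
  displayName: my display name
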
RGW Pod

$ kubectl create -f object.yaml 
cephobjectstore.ceph.rook.io/my-store created
$ kubectl -n rook-ceph get pods
NAME                                      READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-569d76f456-sddpg          1/1     Running     0          3d21h
rook-ceph-mon-a-6bc4689f9d-r8jcm          1/1     Running     0          3d21h
rook-ceph-mon-b-566cdf9d6-mhf8w           1/1     Running     0          3d21h
rook-ceph-mon-c-74c6779667-svktr          1/1     Running     0          3d21h
rook-ceph-osd-0-6766d4f547-6qlvv          1/1     Running     0          3d21h
rook-ceph-osd-1-c5c7ddf67-xrm2k           1/1     Running     0          3d21h
rook-ceph-osd-2-f7b75cf4d-bm5sc           1/1     Running     0          3d21h
rook-ceph-osd-prepare-kube-node1-ddgkk    0/2     Completed   0          3d21h
rook-ceph-osd-prepare-kube-node2-nmgqj    0/2     Completed   0          3d21h
rook-ceph-osd-prepare-kube-node3-xzwnr    0/2     Completed   0          3d21h
rook-ceph-rgw-my-store-57556c8479-vkjvn   1/1     Running     0          5m58s
rook-ceph-tools-76c7d559b6-tnmrf          1/1     Running     0          3d21h

RGW User

$ kubectl create -f object-user.yaml 
cephobjectstoreuser.ceph.rook.io/my-user created
$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- radosgw-admin user list
[
    "my-user"
]

The RGW user information can be explored with the radosgw-admin command below. Notice the access_key and secret_key are available. 

$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- radosgw-admin user info --uid my-user
{
    "user_id": "my-user",
    "display_name": "my display name",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "my-user",
            "access_key": "C6FIZTCAAEH1LBWNY84X",
            "secret_key": "i8Pw44ViKt3DVAQTSWIEJcazUHrYRCj0u6Xw9jPE"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

Rook adds the access_key and secret_key to a stored secret for usage by other pods.

$ kubectl -n rook-ceph get secrets
NAME                                     TYPE                                  DATA   AGE
default-token-67jhm                      kubernetes.io/service-account-token   3      3d21h
rook-ceph-admin-keyring                  kubernetes.io/rook                    1      3d21h
rook-ceph-config                         kubernetes.io/rook                    2      3d21h
rook-ceph-dashboard-password             kubernetes.io/rook                    1      3d21h
rook-ceph-mgr-a-keyring                  kubernetes.io/rook                    1      3d21h
rook-ceph-mgr-token-sbk8f                kubernetes.io/service-account-token   3      3d21h
rook-ceph-mon                            kubernetes.io/rook                    4      3d21h
rook-ceph-mons-keyring                   kubernetes.io/rook                    1      3d21h
rook-ceph-object-user-my-store-my-user   kubernetes.io/rook                    2      7m31s
rook-ceph-osd-token-qlsst                kubernetes.io/service-account-token   3      3d21h
rook-ceph-rgw-my-store                   kubernetes.io/rook                    1      14m

$ kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user
NAME                                     TYPE                 DATA   AGE
rook-ceph-object-user-my-store-my-user   kubernetes.io/rook   2      7m49s

$ kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user -o yaml
apiVersion: v1
data:
  AccessKey: QzZGSVpUQ0FBRUgxTEJXTlk4NFg=
  SecretKey: aThQdzQ0VmlLdDNEVkFRVFNXSUVKY2F6VUhyWVJDajB1Nlh3OWpQRQ==
kind: Secret
metadata:
  creationTimestamp: "2019-02-18T14:37:40Z"
  labels:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: my-store
    user: my-user
  name: rook-ceph-object-user-my-store-my-user
  namespace: rook-ceph
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    kind: CephCluster
    name: rook-ceph
    uid: 200d59ce-307a-11e9-bc12-52540087c4d1
  resourceVersion: "1175482"
  selfLink: /api/v1/namespaces/rook-ceph/secrets/rook-ceph-object-user-my-store-my-user
  uid: be847f63-338a-11e9-bc12-52540087c4d1
type: kubernetes.io/rook

The AccessKey and SecretKey values are base64-encoded copies of the access_key and secret_key seen in the radosgw-admin output.

$ echo QzZGSVpUQ0FBRUgxTEJXTlk4NFg= | base64 -d
C6FIZTCAAEH1LBWNY84X
$ echo aThQdzQ0VmlLdDNEVkFRVFNXSUVKY2F6VUhyWVJDajB1Nlh3OWpQRQ== | base64 -d
i8Pw44ViKt3DVAQTSWIEJcazUHrYRCj0u6Xw9jPE

The RGW service can be queried to display currently available S3 buckets. No buckets are initially created by Rook.

$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- radosgw-admin bucket list
[]

Consume Object Store

Some information must be gathered so it can be provided to the custom pod.

Object Store name:
$ kubectl -n rook-ceph get pods --selector='app=rook-ceph-rgw' -o jsonpath='{.items[0].metadata.labels.rook_object_store}'
my-store

Object Store Service Hostname
$ kubectl -n rook-ceph get services
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
rook-ceph-mgr             ClusterIP   10.108.38.115    <none>        9283/TCP   3d23h
rook-ceph-mgr-dashboard   ClusterIP   10.107.5.72      <none>        8443/TCP   3d23h
rook-ceph-mon-a           ClusterIP   10.99.36.158     <none>        6789/TCP   3d23h
rook-ceph-mon-b           ClusterIP   10.103.132.39    <none>        6789/TCP   3d23h
rook-ceph-mon-c           ClusterIP   10.101.117.131   <none>        6789/TCP   3d23h
rook-ceph-rgw-my-store    ClusterIP   10.107.251.22    <none>        80/TCP     147m
 

Create Custom Deployment
A custom deployment file is used to deploy a basic CentOS 7 container and install the python-boto package. The environment is populated with the needed parameters to access the S3 store and create a bucket.

$ cat ~/my-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydemo
  namespace: rook-ceph
  labels:
    app: mydemo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydemo
  template:
    metadata:
      labels:
        app: mydemo
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
      - name: mydemo
        image: docker.io/jdeathe/centos-ssh
        imagePullPolicy: IfNotPresent
        command: ["/usr/bin/supervisord"]
        securityContext:
          privileged: true
          capabilities:
            add:
              - SYS_ADMIN
        lifecycle:
          postStart:
            exec:
              command: ["/usr/bin/yum", "install", "-y", "python-boto"]
        env:
          - name: AWSACCESSKEYID
            valueFrom:
              secretKeyRef:
                name: rook-ceph-object-user-my-store-my-user
                key: AccessKey
          - name: AWSSECRETACCESSKEY
            valueFrom:
              secretKeyRef:
                name: rook-ceph-object-user-my-store-my-user
                key: SecretKey
          - name: BUCKETNAME
            value: my-store
          - name: RGWHOST
            # the value is {service-name}.{name-space}
            value: rook-ceph-rgw-my-store.rook-ceph


$ kubectl create -f ~/my-deployment.yaml
 deployment.apps/mydemo created

Once the pod is running, open a shell in it and examine the environment.

$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=mydemo -o jsonpath='{.items[0].metadata.name}'` -it /bin/bash
# printenv | sort | grep -E "AWSACCESS|AWSSECRET|BUCKETNAME|RGWHOST"
AWSACCESSKEYID=C6FIZTCAAEH1LBWNY84X
AWSSECRETACCESSKEY=i8Pw44ViKt3DVAQTSWIEJcazUHrYRCj0u6Xw9jPE
BUCKETNAME=my-store
RGWHOST=rook-ceph-rgw-my-store.rook-ceph

Create and run a simple test script. This script will connect to the RGW and create a bucket. No output is printed on success.

# cat ~/test-s3.py
import boto
import os
import boto.s3.connection
access_key = os.environ['AWSACCESSKEYID']
secret_key = os.environ['AWSSECRETACCESSKEY']
bucket = os.environ['BUCKETNAME']
myhost = os.environ['RGWHOST']
conn = boto.connect_s3(
        aws_access_key_id = access_key,
        aws_secret_access_key = secret_key,
        host = myhost,
        is_secure=False,
        calling_format = boto.s3.connection.OrdinaryCallingFormat(),
        )
bucket = conn.create_bucket(bucket)

# python ~/test-s3.py 
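
For visible confirmation from within the pod, the script could be extended to list the buckets the user can see with boto's get_all_buckets, for example:

for b in conn.get_all_buckets():
    print(b.name)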

The RGW service can be queried to display the created bucket.

$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- radosgw-admin bucket list
[
    "my-store"
]

Issues and Going Forward

  • Deleting the CephObjectStoreUser does not delete the stored secret.
  • The ceph.conf section name is hard-coded for the radosgw service.
  • Going forward, the environment variables used for the application deployment could be queried dynamically.