Using Rook Deployed Object Store in Kubernetes
This post explores a Rook-deployed object store using Ceph radosgw within a Kubernetes cluster. A radosgw pod will be deployed and a user created. This will be followed by an exploration of the stored credential information and its use from a custom deployed pod.
Background
Previously, we deployed a Ceph storage cluster using Rook and demonstrated how to customize the cluster configuration.
Current deployments
$ kubectl -n rook-ceph get pods
NAME                                     READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-569d76f456-sddpg         1/1     Running     0          3d21h
rook-ceph-mon-a-6bc4689f9d-r8jcm         1/1     Running     0          3d21h
rook-ceph-mon-b-566cdf9d6-mhf8w          1/1     Running     0          3d21h
rook-ceph-mon-c-74c6779667-svktr         1/1     Running     0          3d21h
rook-ceph-osd-0-6766d4f547-6qlvv         1/1     Running     0          3d21h
rook-ceph-osd-1-c5c7ddf67-xrm2k          1/1     Running     0          3d21h
rook-ceph-osd-2-f7b75cf4d-bm5sc          1/1     Running     0          3d21h
rook-ceph-osd-prepare-kube-node1-ddgkk   0/2     Completed   0          3d21h
rook-ceph-osd-prepare-kube-node2-nmgqj   0/2     Completed   0          3d21h
rook-ceph-osd-prepare-kube-node3-xzwnr   0/2     Completed   0          3d21h
rook-ceph-tools-76c7d559b6-tnmrf         1/1     Running     0          3d21h
Ceph cluster status
$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- ceph status
  cluster:
    id:     eff29897-252c-4e65-93e7-6f4975c0d83a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,c,b
    mgr: a(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.1 GiB used, 57 GiB / 60 GiB avail
    pgs:
Deploying Object Store Pod and User
Rook provides the CephObjectStore and CephObjectStoreUser CRDs, which permit the creation of a Ceph RadosGW service and related users.
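The object.yaml and object-user.yaml manifests used below ship with the Rook examples. As a rough sketch (the exact spec fields vary between Rook releases, so treat this as illustrative rather than authoritative), they look something like the following:

$ cat object.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3
  gateway:
    type: s3
    port: 80
    instances: 1

$ cat object-user.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  store: my-store
  displayName: "my display name"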
The default configurations are deployed with the commands below.
RGW Pod
$ kubectl create -f object.yaml
cephobjectstore.ceph.rook.io/my-store created

$ kubectl -n rook-ceph get pods
NAME                                      READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-569d76f456-sddpg          1/1     Running     0          3d21h
rook-ceph-mon-a-6bc4689f9d-r8jcm          1/1     Running     0          3d21h
rook-ceph-mon-b-566cdf9d6-mhf8w           1/1     Running     0          3d21h
rook-ceph-mon-c-74c6779667-svktr          1/1     Running     0          3d21h
rook-ceph-osd-0-6766d4f547-6qlvv          1/1     Running     0          3d21h
rook-ceph-osd-1-c5c7ddf67-xrm2k           1/1     Running     0          3d21h
rook-ceph-osd-2-f7b75cf4d-bm5sc           1/1     Running     0          3d21h
rook-ceph-osd-prepare-kube-node1-ddgkk    0/2     Completed   0          3d21h
rook-ceph-osd-prepare-kube-node2-nmgqj    0/2     Completed   0          3d21h
rook-ceph-osd-prepare-kube-node3-xzwnr    0/2     Completed   0          3d21h
rook-ceph-rgw-my-store-57556c8479-vkjvn   1/1     Running     0          5m58s
rook-ceph-tools-76c7d559b6-tnmrf          1/1     Running     0          3d21h
RGW User
$ kubectl create -f object-user.yaml
cephobjectstoreuser.ceph.rook.io/my-user created

$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- radosgw-admin user list
[
    "my-user"
]
The RGW user information can be explored with the radosgw-admin command below. Notice the access_key and secret_key are available.
$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- radosgw-admin user info --uid my-user
{
    "user_id": "my-user",
    "display_name": "my display name",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "my-user",
            "access_key": "C6FIZTCAAEH1LBWNY84X",
            "secret_key": "i8Pw44ViKt3DVAQTSWIEJcazUHrYRCj0u6Xw9jPE"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
Rook stores the access_key and secret_key in a Kubernetes secret for use by other pods.
$ kubectl -n rook-ceph get secrets
NAME                                     TYPE                                  DATA   AGE
default-token-67jhm                      kubernetes.io/service-account-token   3      3d21h
rook-ceph-admin-keyring                  kubernetes.io/rook                    1      3d21h
rook-ceph-config                         kubernetes.io/rook                    2      3d21h
rook-ceph-dashboard-password             kubernetes.io/rook                    1      3d21h
rook-ceph-mgr-a-keyring                  kubernetes.io/rook                    1      3d21h
rook-ceph-mgr-token-sbk8f                kubernetes.io/service-account-token   3      3d21h
rook-ceph-mon                            kubernetes.io/rook                    4      3d21h
rook-ceph-mons-keyring                   kubernetes.io/rook                    1      3d21h
rook-ceph-object-user-my-store-my-user   kubernetes.io/rook                    2      7m31s
rook-ceph-osd-token-qlsst                kubernetes.io/service-account-token   3      3d21h
rook-ceph-rgw-my-store                   kubernetes.io/rook                    1      14m

$ kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user
NAME                                     TYPE                 DATA   AGE
rook-ceph-object-user-my-store-my-user   kubernetes.io/rook   2      7m49s

$ kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user -o yaml
apiVersion: v1
data:
  AccessKey: QzZGSVpUQ0FBRUgxTEJXTlk4NFg=
  SecretKey: aThQdzQ0VmlLdDNEVkFRVFNXSUVKY2F6VUhyWVJDajB1Nlh3OWpQRQ==
kind: Secret
metadata:
  creationTimestamp: "2019-02-18T14:37:40Z"
  labels:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: my-store
    user: my-user
  name: rook-ceph-object-user-my-store-my-user
  namespace: rook-ceph
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    kind: CephCluster
    name: rook-ceph
    uid: 200d59ce-307a-11e9-bc12-52540087c4d1
  resourceVersion: "1175482"
  selfLink: /api/v1/namespaces/rook-ceph/secrets/rook-ceph-object-user-my-store-my-user
  uid: be847f63-338a-11e9-bc12-52540087c4d1
type: kubernetes.io/rook
The AccessKey and SecretKey values are base64-encoded copies of the access_key and secret_key seen in the radosgw-admin output.
$ base64 -d -
QzZGSVpUQ0FBRUgxTEJXTlk4NFg=
C6FIZTCAAEH1LBWNY84X

$ base64 -d -
aThQdzQ0VmlLdDNEVkFRVFNXSUVKY2F6VUhyWVJDajB1Nlh3OWpQRQ==
i8Pw44ViKt3DVAQTSWIEJcazUHrYRCj0u6Xw9jPE
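The same values can also be pulled and decoded in one step with a jsonpath query against the secret (a convenience only; the result matches the keys shown above):

$ kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user -o jsonpath='{.data.AccessKey}' | base64 -d
C6FIZTCAAEH1LBWNY84X
$ kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user -o jsonpath='{.data.SecretKey}' | base64 -d
i8Pw44ViKt3DVAQTSWIEJcazUHrYRCj0u6Xw9jPE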
The RGW service can be queried to display currently available S3 buckets. No buckets are initially created by Rook.
$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- radosgw-admin bucket list
[]
Consume Object Store
Some information must be gathered so it can be provided to the custom pod.
Object Store name:
$ kubectl -n rook-ceph get pods --selector='app=rook-ceph-rgw' -o jsonpath='{.items[0].metadata.labels.rook_object_store}'
my-store
Object Store Service Hostname
$ kubectl -n rook-ceph get services
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
rook-ceph-mgr             ClusterIP   10.108.38.115    <none>        9283/TCP   3d23h
rook-ceph-mgr-dashboard   ClusterIP   10.107.5.72      <none>        8443/TCP   3d23h
rook-ceph-mon-a           ClusterIP   10.99.36.158     <none>        6789/TCP   3d23h
rook-ceph-mon-b           ClusterIP   10.103.132.39    <none>        6789/TCP   3d23h
rook-ceph-mon-c           ClusterIP   10.101.117.131   <none>        6789/TCP   3d23h
rook-ceph-rgw-my-store    ClusterIP   10.107.251.22    <none>        80/TCP     147m
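The in-cluster DNS name can also be derived with a jsonpath query rather than read from the table. This assumes the service carries the same app=rook-ceph-rgw label as the pods:

$ RGW_SVC=$(kubectl -n rook-ceph get services --selector=app=rook-ceph-rgw -o jsonpath='{.items[0].metadata.name}')
$ echo "${RGW_SVC}.rook-ceph"
rook-ceph-rgw-my-store.rook-ceph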
Create Custom Deployment
A custom deployment file deploys a basic CentOS 7 container and installs the python-boto package. The environment is populated with the parameters needed to access the S3 store and create a bucket.
$ cat ~/my-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydemo
  namespace: rook-ceph
  labels:
    app: mydemo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydemo
  template:
    metadata:
      labels:
        app: mydemo
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
      - name: mydemo
        image: docker.io/jdeathe/centos-ssh
        imagePullPolicy: IfNotPresent
        command: ["/usr/bin/supervisord"]
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
        lifecycle:
          postStart:
            exec:
              command: ["/usr/bin/yum", "install", "-y", "python-boto"]
        env:
        - name: AWSACCESSKEYID
          valueFrom:
            secretKeyRef:
              name: rook-ceph-object-user-my-store-my-user
              key: AccessKey
        - name: AWSSECRETACCESSKEY
          valueFrom:
            secretKeyRef:
              name: rook-ceph-object-user-my-store-my-user
              key: SecretKey
        - name: BUCKETNAME
          value: my-store
        - name: RGWHOST
          # the value is {service-name}.{namespace}
          value: rook-ceph-rgw-my-store.rook-ceph

$ kubectl create -f ~/my-deployment.yaml
deployment.apps/mydemo created
Once the pod is running, open a shell and examine the environment.
$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=mydemo -o jsonpath='{.items[0].metadata.name}'` -it /bin/bash

# printenv | sort | grep -E "AWSACCESS|AWSSECRET|BUCKETNAME|RGWHOST"
AWSACCESSKEYID=C6FIZTCAAEH1LBWNY84X
AWSSECRETACCESSKEY=i8Pw44ViKt3DVAQTSWIEJcazUHrYRCj0u6Xw9jPE
BUCKETNAME=my-store
RGWHOST=rook-ceph-rgw-my-store.rook-ceph
Create and run a simple test script. The script connects to the RGW and creates a bucket. No output is printed on success.
# cat ~/test-s3.py
import boto
import os
import boto.s3.connection

access_key = os.environ['AWSACCESSKEYID']
secret_key = os.environ['AWSSECRETACCESSKEY']
bucket = os.environ['BUCKETNAME']
myhost = os.environ['RGWHOST']

conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = myhost,
    is_secure=False,
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket(bucket)

# python ~/test-s3.py
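Going a step further, a hypothetical follow-up script (the file name and object contents here are illustrative, not part of the original walkthrough) can write an object into the new bucket and list what the gateway now holds, reusing the same boto connection settings:

# cat ~/test-s3-upload.py
import os

import boto
import boto.s3.connection

# reuse the credentials and endpoint injected into the pod environment
access_key = os.environ['AWSACCESSKEYID']
secret_key = os.environ['AWSSECRETACCESSKEY']
bucket_name = os.environ['BUCKETNAME']
myhost = os.environ['RGWHOST']

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host=myhost,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# fetch the bucket created by test-s3.py and store a small object in it
bucket = conn.get_bucket(bucket_name)
key = bucket.new_key('hello.txt')
key.set_contents_from_string('hello from the mydemo pod')

# list every bucket and its contents to confirm the write
for b in conn.get_all_buckets():
    print(b.name)
    for k in b.list():
        print('  %s (%d bytes)' % (k.name, k.size))

Running it from the same shell should print the bucket name and the uploaded key.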
The RGW service can be queried to display the created bucket.
$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- radosgw-admin bucket list
[
    "my-store"
]
Issues and Going Forward
- Deleting the CephObjectStoreUser does not delete the stored secret (a manual cleanup is sketched below).
- The ceph.conf section name is hard coded for the radosgw service.
- Dynamically query the environment variables used for the application deployment
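As a manual workaround for the first issue, the leftover secret can be removed by hand once the CephObjectStoreUser is deleted, for example:

$ kubectl delete -f object-user.yaml
cephobjectstoreuser.ceph.rook.io "my-user" deleted
$ kubectl -n rook-ceph delete secret rook-ceph-object-user-my-store-my-user
secret "rook-ceph-object-user-my-store-my-user" deleted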