Monday, February 18, 2019

Using a Rook-Deployed Object Store in Kubernetes

This post explores a Rook-deployed object store backed by Ceph radosgw within a Kubernetes cluster. A radosgw pod will be deployed and a user created, followed by an exploration of the stored credential information and how to use it from a custom-deployed pod.

Background


Previously, we deployed a Ceph storage cluster using Rook and demonstrated how to customize the cluster configuration.

Current deployments

$ kubectl -n rook-ceph get pods
NAME                                     READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-569d76f456-sddpg         1/1     Running     0          3d21h
rook-ceph-mon-a-6bc4689f9d-r8jcm         1/1     Running     0          3d21h
rook-ceph-mon-b-566cdf9d6-mhf8w          1/1     Running     0          3d21h
rook-ceph-mon-c-74c6779667-svktr         1/1     Running     0          3d21h
rook-ceph-osd-0-6766d4f547-6qlvv         1/1     Running     0          3d21h
rook-ceph-osd-1-c5c7ddf67-xrm2k          1/1     Running     0          3d21h
rook-ceph-osd-2-f7b75cf4d-bm5sc          1/1     Running     0          3d21h
rook-ceph-osd-prepare-kube-node1-ddgkk   0/2     Completed   0          3d21h
rook-ceph-osd-prepare-kube-node2-nmgqj   0/2     Completed   0          3d21h
rook-ceph-osd-prepare-kube-node3-xzwnr   0/2     Completed   0          3d21h
rook-ceph-tools-76c7d559b6-tnmrf         1/1     Running     0          3d21h

Ceph cluster status

$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- ceph status
  cluster:
    id:     eff29897-252c-4e65-93e7-6f4975c0d83a
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,c,b
    mgr: a(active)
    osd: 3 osds: 3 up, 3 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   3.1 GiB used, 57 GiB / 60 GiB avail
    pgs:     

Deploying Object Store Pod and User

Rook provides the CephObjectStore and CephObjectStoreUser CRDs, which permit the creation of a Ceph RadosGW service and its associated users.
The default configurations are deployed with the commands below.
RGW Pod
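
For reference, a minimal object.yaml along the lines of the upstream Rook examples; the pool and gateway settings shown here are illustrative, not necessarily the exact file used below:

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3
  gateway:
    type: s3
    port: 80
    instances: 1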

$ kubectl create -f object.yaml 
cephobjectstore.ceph.rook.io/my-store created
$ kubectl -n rook-ceph get pods
NAME                                      READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-569d76f456-sddpg          1/1     Running     0          3d21h
rook-ceph-mon-a-6bc4689f9d-r8jcm          1/1     Running     0          3d21h
rook-ceph-mon-b-566cdf9d6-mhf8w           1/1     Running     0          3d21h
rook-ceph-mon-c-74c6779667-svktr          1/1     Running     0          3d21h
rook-ceph-osd-0-6766d4f547-6qlvv          1/1     Running     0          3d21h
rook-ceph-osd-1-c5c7ddf67-xrm2k           1/1     Running     0          3d21h
rook-ceph-osd-2-f7b75cf4d-bm5sc           1/1     Running     0          3d21h
rook-ceph-osd-prepare-kube-node1-ddgkk    0/2     Completed   0          3d21h
rook-ceph-osd-prepare-kube-node2-nmgqj    0/2     Completed   0          3d21h
rook-ceph-osd-prepare-kube-node3-xzwnr    0/2     Completed   0          3d21h
rook-ceph-rgw-my-store-57556c8479-vkjvn   1/1     Running     0          5m58s
rook-ceph-tools-76c7d559b6-tnmrf          1/1     Running     0          3d21h

RGW User
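
For reference, a minimal object-user.yaml matching the user created here; this is a sketch based on the upstream Rook example, with a displayName matching the display_name seen in the user info output below:

apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  store: my-store
  displayName: my display name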

$ kubectl create -f object-user.yaml 
cephobjectstoreuser.ceph.rook.io/my-user created
$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- radosgw-admin user list
[
    "my-user"
]

The RGW user information can be explored with the radosgw-admin command below. Notice that the access_key and secret_key are included.

$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- radosgw-admin user info --uid my-user
{
    "user_id": "my-user",
    "display_name": "my display name",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "my-user",
            "access_key": "C6FIZTCAAEH1LBWNY84X",
            "secret_key": "i8Pw44ViKt3DVAQTSWIEJcazUHrYRCj0u6Xw9jPE"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

Rook also stores the access_key and secret_key in a Kubernetes Secret so that other pods can consume them.

$ kubectl -n rook-ceph get secrets
NAME                                     TYPE                                  DATA   AGE
default-token-67jhm                      kubernetes.io/service-account-token   3      3d21h
rook-ceph-admin-keyring                  kubernetes.io/rook                    1      3d21h
rook-ceph-config                         kubernetes.io/rook                    2      3d21h
rook-ceph-dashboard-password             kubernetes.io/rook                    1      3d21h
rook-ceph-mgr-a-keyring                  kubernetes.io/rook                    1      3d21h
rook-ceph-mgr-token-sbk8f                kubernetes.io/service-account-token   3      3d21h
rook-ceph-mon                            kubernetes.io/rook                    4      3d21h
rook-ceph-mons-keyring                   kubernetes.io/rook                    1      3d21h
rook-ceph-object-user-my-store-my-user   kubernetes.io/rook                    2      7m31s
rook-ceph-osd-token-qlsst                kubernetes.io/service-account-token   3      3d21h
rook-ceph-rgw-my-store                   kubernetes.io/rook                    1      14m

$ kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user
NAME                                     TYPE                 DATA   AGE
rook-ceph-object-user-my-store-my-user   kubernetes.io/rook   2      7m49s

$ kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user -o yaml
apiVersion: v1
data:
  AccessKey: QzZGSVpUQ0FBRUgxTEJXTlk4NFg=
  SecretKey: aThQdzQ0VmlLdDNEVkFRVFNXSUVKY2F6VUhyWVJDajB1Nlh3OWpQRQ==
kind: Secret
metadata:
  creationTimestamp: "2019-02-18T14:37:40Z"
  labels:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: my-store
    user: my-user
  name: rook-ceph-object-user-my-store-my-user
  namespace: rook-ceph
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    kind: CephCluster
    name: rook-ceph
    uid: 200d59ce-307a-11e9-bc12-52540087c4d1
  resourceVersion: "1175482"
  selfLink: /api/v1/namespaces/rook-ceph/secrets/rook-ceph-object-user-my-store-my-user
  uid: be847f63-338a-11e9-bc12-52540087c4d1
type: kubernetes.io/rook

The AccessKey and SecretKey values are base64-encoded copies of the access_key and secret_key seen in the radosgw-admin output.

$ base64 -d -
QzZGSVpUQ0FBRUgxTEJXTlk4NFg=
C6FIZTCAAEH1LBWNY84X
$ base64 -d -
aThQdzQ0VmlLdDNEVkFRVFNXSUVKY2F6VUhyWVJDajB1Nlh3OWpQRQ==
i8Pw44ViKt3DVAQTSWIEJcazUHrYRCj0u6Xw9jPE
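
As a shortcut, each key can be extracted and decoded in a single step with a jsonpath query; this is a convenience equivalent to the manual decoding above:

$ kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user -o jsonpath='{.data.AccessKey}' | base64 -d
C6FIZTCAAEH1LBWNY84X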

The RGW service can be queried to display currently available S3 buckets. No buckets are initially created by Rook.

$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- radosgw-admin bucket list
[]

Consume Object Store

Some information must be gathered so it can be provided to the custom pod.
Object Store name:
$ kubectl -n rook-ceph get pods --selector='app=rook-ceph-rgw' -o jsonpath='{.items[0].metadata.labels.rook_object_store}'
my-store

Object Store Service Hostname:
$ kubectl -n rook-ceph get services
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
rook-ceph-mgr             ClusterIP   10.108.38.115    <none>        9283/TCP   3d23h
rook-ceph-mgr-dashboard   ClusterIP   10.107.5.72      <none>        8443/TCP   3d23h
rook-ceph-mon-a           ClusterIP   10.99.36.158     <none>        6789/TCP   3d23h
rook-ceph-mon-b           ClusterIP   10.103.132.39    <none>        6789/TCP   3d23h
rook-ceph-mon-c           ClusterIP   10.101.117.131   <none>        6789/TCP   3d23h
rook-ceph-rgw-my-store    ClusterIP   10.107.251.22    <none>        80/TCP     147m
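
Within the cluster, a Service is reachable at the DNS name {service-name}.{namespace}. As a convenience (not part of the original workflow), the value used below can be assembled directly with a jsonpath query:

$ kubectl -n rook-ceph get service rook-ceph-rgw-my-store -o jsonpath='{.metadata.name}.{.metadata.namespace}'
rook-ceph-rgw-my-store.rook-ceph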
 

Create Custom Deployment
The custom deployment file below deploys a basic CentOS 7 container and installs the python-boto package. The environment is populated with the parameters needed to access the S3 store and create a bucket.

$ cat ~/my-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydemo
  namespace: rook-ceph
  labels:
    app: mydemo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydemo
  template:
    metadata:
      labels:
        app: mydemo
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
      - name: mydemo
        image: docker.io/jdeathe/centos-ssh
        imagePullPolicy: IfNotPresent
        command: ["/usr/bin/supervisord"]
        securityContext:
          privileged: true
          capabilities:
            add:
              - SYS_ADMIN
        lifecycle:
          postStart:
            exec:
              command: ["/usr/bin/yum", "install", "-y", "python-boto"]
        env:
          - name: AWSACCESSKEYID
            valueFrom:
              secretKeyRef:
                name: rook-ceph-object-user-my-store-my-user
                key: AccessKey
          - name: AWSSECRETACCESSKEY
            valueFrom:
              secretKeyRef:
                name: rook-ceph-object-user-my-store-my-user
                key: SecretKey
          - name: BUCKETNAME
            value: my-store
          - name: RGWHOST
            # the value is {service-name}.{namespace}
            value: rook-ceph-rgw-my-store.rook-ceph


$ kubectl create -f ~/my-deployment.yaml
deployment.apps/mydemo created

Once the pod is running, open a shell and examine the environment.

$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=mydemo -o jsonpath='{.items[0].metadata.name}'` -it /bin/bash
# printenv | sort | grep -E "AWSACCESS|AWSSECRET|BUCKETNAME|RGWHOST"
AWSACCESSKEYID=C6FIZTCAAEH1LBWNY84X
AWSSECRETACCESSKEY=i8Pw44ViKt3DVAQTSWIEJcazUHrYRCj0u6Xw9jPE
BUCKETNAME=my-store
RGWHOST=rook-ceph-rgw-my-store.rook-ceph

Create and run a simple test script. The script connects to the RGW and creates a bucket; no output is printed on success.

# cat ~/test-s3.py
import os

import boto
import boto.s3.connection

# Connection parameters injected into the pod via the deployment's env section
access_key = os.environ['AWSACCESSKEYID']
secret_key = os.environ['AWSSECRETACCESSKEY']
bucket_name = os.environ['BUCKETNAME']
myhost = os.environ['RGWHOST']

# Connect to the RGW service over plain HTTP using path-style addressing
conn = boto.connect_s3(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        host=myhost,
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
        )

# Create the bucket; succeeds silently if it already exists for this user
bucket = conn.create_bucket(bucket_name)

# python ~/test-s3.py 
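
As an optional follow-up (not part of the original post), the same boto connection can be used to store and read back an object in the new bucket; put-object.py is a hypothetical file name:

# cat ~/put-object.py
import os

import boto
import boto.s3.connection

conn = boto.connect_s3(
        aws_access_key_id=os.environ['AWSACCESSKEYID'],
        aws_secret_access_key=os.environ['AWSSECRETACCESSKEY'],
        host=os.environ['RGWHOST'],
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
        )

# Look up the bucket created by test-s3.py and write a small object into it
bucket = conn.get_bucket(os.environ['BUCKETNAME'])
key = bucket.new_key('hello.txt')
key.set_contents_from_string('hello from mydemo')

# Read the object back to confirm the round trip
print(key.get_contents_as_string())

# python ~/put-object.py
hello from mydemo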

The RGW service can be queried to display the newly created bucket.

$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- radosgw-admin bucket list
[
    "my-store"
]

Issues and Going Forward

  • Deleting the CephObjectStoreUser does not delete the stored secret (a manual cleanup command is shown after this list).
  • The ceph.conf section name is hard-coded for the radosgw service.
  • Going forward, the environment variables used for the application deployment could be queried dynamically rather than hard-coded.
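
Until the first issue is addressed, the orphaned secret can be removed manually; this is a workaround, not something Rook does itself:

$ kubectl -n rook-ceph delete secret rook-ceph-object-user-my-store-my-user
secret "rook-ceph-object-user-my-store-my-user" deleted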

Wednesday, February 13, 2019

Customizing Ceph.conf deployed with Rook

This post covers two options for customizing the parameters of a ceph.conf file deployed by Rook; such customizations may be needed for performance or troubleshooting reasons. As an example, the debug output level of various Ceph services will be modified, but other parameters can be updated using the same process.

The override settings are saved in a ConfigMap called rook-config-override within the rook-ceph namespace and applied to the pods during creation. The contents of this map are empty in a default deployment.

To list the available ConfigMaps in the rook-ceph namespace:

$ kubectl -n rook-ceph get ConfigMaps
NAME                      DATA   AGE
rook-ceph-config          1      106m
rook-ceph-mon-endpoints   3      106m
rook-config-override      1      106m
rook-crush-config         1      105m
rook-test-ownerref        0      106m

To display the contents of the rook-config-override ConfigMap in YAML format:

$ kubectl -n rook-ceph get ConfigMap rook-config-override -o yaml
apiVersion: v1
data:
  config: ""
kind: ConfigMap
metadata:
  creationTimestamp: "2019-02-13T21:53:51Z"
  name: rook-config-override
  namespace: rook-ceph
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    kind: CephCluster
    name: rook-ceph
    uid: cfcb486e-2fd9-11e9-bc12-52540087c4d1
  resourceVersion: "164993"
  selfLink: /api/v1/namespaces/rook-ceph/configmaps/rook-config-override
  uid: d9b48916-2fd9-11e9-bc12-52540087c4d1


Option 1: Setting parameters via live editing


Edit the ConfigMap using the editor defined in the EDITOR environment variable; YAML is the default format for editing.

$ kubectl -n rook-ceph edit ConfigMap rook-config-override -o yaml

In this example, we will update the config value from "" to the setting needed to enable debugging of the OSD service. More information on available configuration settings can be found in the references below.
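
Alternatively, the same change can be applied non-interactively with kubectl patch; this is an equivalent convenience, not used in the rest of this post:

$ kubectl -n rook-ceph patch configmap rook-config-override --type merge -p '{"data":{"config":"[global]\ndebug osd = 5\n"}}'
configmap/rook-config-override patched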

Display the updated ConfigMap:

$ kubectl -n rook-ceph get configmaps rook-config-override -o yaml
apiVersion: v1
data:
  config: |
    [global]
    debug osd = 5
kind: ConfigMap
metadata:
  creationTimestamp: "2019-02-13T21:53:51Z"
  name: rook-config-override
  namespace: rook-ceph
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    kind: CephCluster
    name: rook-ceph
    uid: cfcb486e-2fd9-11e9-bc12-52540087c4d1
  resourceVersion: "182611"
  selfLink: /api/v1/namespaces/rook-ceph/configmaps/rook-config-override
  uid: d9b48916-2fd9-11e9-bc12-52540087c4d1

Restart impacted pod

Once the updated settings are saved, the impacted pods will need to be restarted; in this case, the OSD pods. Care must be taken to restart the OSD pods only when the cluster is healthy and all data is protected.

List running OSD pods:

$ kubectl -n rook-ceph get pods --selector=app=rook-ceph-osd
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-osd-0-6669bdbf6d-vvsjn   1/1     Running   0          127m
rook-ceph-osd-1-556bff7694-8mhq2   1/1     Running   0          127m
rook-ceph-osd-2-7db64c88bc-d7sp5   1/1     Running   0          127m

Restart each pod, waiting for the cluster to recover between restarts:

$ kubectl -n rook-ceph delete pod rook-ceph-osd-0-6669bdbf6d-vvsjn
pod "rook-ceph-osd-0-6669bdbf6d-vvsjn" deleted

The cluster will go into HEALTH_WARN while the OSD pod is restarted and any data re-syncing occurs. Restart the remaining OSD pods once the Ceph cluster has returned to HEALTH_OK.
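
Between restarts, cluster health can be checked with the same toolbox pattern used earlier:

$ kubectl -n rook-ceph exec `kubectl -n rook-ceph get pods --selector=app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}'` -- ceph health
HEALTH_OK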

Verify setting

$ kubectl -n rook-ceph exec rook-ceph-osd-0-6669bdbf6d-p5msf -- cat /etc/ceph/ceph.conf | grep "debug osd"
debug osd                 = 5


Option 2: Setting parameters during deployment


Customized ceph.conf parameters can be applied at cluster deployment time by adding a ConfigMap to the cluster.yaml file. These values will be applied to the pods during initial and subsequent startups. In this example, we are going to set the debug level for the Ceph mon service.

Edit cluster.yaml file

Using your favorite editor, add the following lines to the cluster.yaml file:

---
apiVersion: v1
kind: ConfigMap
data:
  config: |
    [global]
    debug mon = 5
metadata:
  name: rook-config-override
  namespace: rook-ceph

Deploy cluster

Deploy the Ceph cluster as described in the previous blog posts listed below.

Verify setting

$ kubectl -n rook-ceph get pods --selector=app=rook-ceph-mon
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-mon-a-79b6c85f89-6rzrw   1/1     Running   0          2m50s
rook-ceph-mon-b-57bc4756c9-bmg9n   1/1     Running   0          2m39s
rook-ceph-mon-c-6f8bb9598d-9mlcb   1/1     Running   0          2m25s
$ kubectl -n rook-ceph exec rook-ceph-mon-a-79b6c85f89-6rzrw -- cat /etc/ceph/ceph.conf | grep "debug mon"
debug mon                 = 5

Notes

  • The OSD and mon pods should be restarted one at a time, with the Ceph cluster returning to HEALTH_OK between pod restarts.
  • No validation is performed on the override parameters. A mistyped key such as "mon debug", or a config option that could render the Ceph cluster inoperable, will be accepted into the config override and applied on the next pod restart.
References: