Thursday, January 31, 2019

Deploying Rook with Ceph using Bluestore

Deploying Rook using Ceph/Bluestore on Kubernetes


This article describes how to deploy Rook with Ceph storage, using Bluestore as the backing store, within a Kubernetes infrastructure.

System Design

Four hosts are used for this deployment: one master node and three worker nodes. In this example, the nodes are hosted within a libvirt environment and deployed using Ben Schmaus' write-up, with the configuration changes detailed below.


The kube-master host is configured with 2 vCPUs, 2 GB of RAM, and a 20 GB virtio disk for local storage. Each worker is configured with 1 vCPU, 2 GB of RAM, and two 20 GB virtio disks: one for local storage and one to serve as the OSD storage device.
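
Before deploying, it is worth confirming that the second virtio disk is visible and unused on each worker. A minimal sketch, run on each node (the device name matches the vdb layout used below; wipefs may require root):

$ lsblk /dev/vdb        # should show a single 20G disk with no partitions or mountpoint
$ sudo wipefs /dev/vdb  # no output means no existing filesystem signatures on the disk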

Additional Configuration Changes


After the Rook git repository is cloned, the cluster.yaml file needs to be updated with the following changes prior to deployment (a sketch of cloning the repository and locating the file follows this list):
  • set storeType to bluestore
  • set useAllNodes to false 
  • each node needs to be listed with its designated OSD device
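
A minimal sketch of fetching the repository and locating the file, assuming the Rook v0.9 example layout (the path may differ in other releases):

$ git clone https://github.com/rook/rook.git
$ cd rook/cluster/examples/kubernetes/ceph
$ vi cluster.yaml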

Configuration Excerpt:



  storage:
    useAllNodes: false
    useAllDevices: false
    deviceFilter:
    location:
    nodes:
    - name: "kube-node1"
      devices:
      - name: "vdb"
    - name: "kube-node2"
      devices:
      - name: "vdb"
    - name: "kube-node3"
      devices:
      - name: "vdb"
    config:
      storeType: bluestore

Deployment and Verification


Rook deployment follows the same steps detailed in Ben's write-up linked above. Verification can then be performed with a few simple commands run through the Rook toolbox pod.
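
As a rough sketch, the deployment sequence looks like the following, assuming the standard example manifests that ship alongside cluster.yaml in the Rook repository (the toolbox manifest provides the rook-ceph-tools pod used by the verification commands below):

$ kubectl create -f operator.yaml
$ kubectl create -f cluster.yaml     # the file edited above
$ kubectl create -f toolbox.yaml     # provides the rook-ceph-tools pod
$ kubectl -n rook-ceph get pod       # wait until the mon, mgr, and osd pods are Running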

Ceph Status


$  kubectl -n rook-ceph exec \
$(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" \
-o jsonpath='{.items[0].metadata.name}') -- ceph -s
  cluster:
    id:     9467587e-7d45-4b81-9c68-c216964e7d79
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum b,a,c
    mgr: a(active)
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     


OSD Tree


$ kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" \
-o jsonpath='{.items[0].metadata.name}') -- ceph osd tree
ID CLASS WEIGHT  TYPE NAME           STATUS REWEIGHT PRI-AFF 
-1       0.05846 root default                                
-3       0.01949     host kube-node1                         
 0   hdd 0.01949         osd.0           up  1.00000 1.00000 
-7       0.01949     host kube-node2                         
 1   hdd 0.01949         osd.1           up  1.00000 1.00000 
-5       0.01949     host kube-node3                         
 2   hdd 0.01949         osd.2           up  1.00000 1.00000 

OSD Store Location


$ kubectl -n rook-ceph exec rook-ceph-osd-0-d848c6b74-rncp4 -- ls -laF /var/lib/ceph/osd/ceph-0
total 24
drwxrwxrwt  2 ceph ceph 180 Feb  1 05:00 ./
drwxr-x---. 1 ceph ceph  20 Feb  1 05:00 ../
lrwxrwxrwx  1 ceph ceph  92 Feb  1 05:00 block -> /dev/ceph-651a13f3-dc9f-465c-8cef-7c6618958f0b/osd-data-2e004e5d-8210-4e20-9e28-d431f468a977
-rw-------  1 ceph ceph  37 Feb  1 05:00 ceph_fsid
-rw-------  1 ceph ceph  37 Feb  1 05:00 fsid
-rw-------  1 ceph ceph  55 Feb  1 05:00 keyring
-rw-------  1 ceph ceph   6 Feb  1 05:00 ready
-rw-------  1 ceph ceph  10 Feb  1 05:00 type
-rw-------  1 ceph ceph   2 Feb  1 05:00 whoami
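
The block entry being a symlink to an LVM logical volume, rather than a populated data directory, is what a Bluestore OSD looks like on disk. The store type can also be confirmed directly by reading the type file, which should simply print bluestore (the OSD pod name will differ in each deployment):

$ kubectl -n rook-ceph exec rook-ceph-osd-0-d848c6b74-rncp4 -- cat /var/lib/ceph/osd/ceph-0/type
bluestore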