Friday, March 8, 2019

Configuring a Rook Test and Development System

Reason

All patches should be tested locally prior to submitting a PR to the Rook project. This posting will detail the steps needed to configure an environment for developing and testing patches for the Rook project.

Environment

This post was developed on virtual machines running Fedora 29 but should work on other recent Fedora/CentOS/RHEL OS releases.

Pre-reqs

  • Four (4) virtual machines with the following specifications:
    • 1 node (master): 2 CPUs, >=2 GB RAM, 20 GB storage
    • 3 nodes (workers): 1 CPU, >=2 GB RAM, 20 GB storage
  • A non-root user with sudo access and the ability to install packages locally and on the other test nodes.

Description

This posting extends the previous blog posting about deploying Ceph storage with Rook on a Kubernetes cluster. It details how to deploy a local docker registry, configure the environment for Rook development, build Rook and, finally, test the locally built code.

Procedure


Deploy Local Docker Registry
The docker registry needs to be added to the kube-master host. This can be performed after the base OS configuration but prior to running "kubeadm init".

Note: the docker command is not normally run by a non-root user, as per the projectatomic blog posting referenced below. For my local config, I have updated the group ownership of the docker UNIX socket from root to wheel, roughly as follows (a one-off sketch; the change is reset whenever the daemon recreates the socket):
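 $ sudo chgrp wheel /var/run/docker.sock

The following command will pull down the docker registry image, start the container, make it always restart with the base OS and expose TCP port 5000: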

 $ docker run -d --restart=always -p 5000:5000 --name registry registry
Unable to find image 'registry:latest' locally
Trying to pull repository docker.io/library/registry ... 
sha256:3b00e5438ebd8835bcfa7bf5246445a6b57b9a50473e89c02ecc8e575be3ebb5: Pulling from docker.io/library/registry
c87736221ed0: Pull complete 
1cc8e0bb44df: Pull complete 
54d33bcb37f5: Pull complete 
e8afc091c171: Pull complete 
b4541f6d3db6: Pull complete 
Digest: sha256:3b00e5438ebd8835bcfa7bf5246445a6b57b9a50473e89c02ecc8e575be3ebb5
Status: Downloaded newer image for docker.io/registry:latest
4d56fadeadbff76b14de2093b655af44b7cd08484df5f366fe3c83b4942c7306
$ sudo netstat -tulpn | grep 5000
tcp6       0      0 :::5000                 :::*                    LISTEN      11227/docker-proxy
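
Optionally, confirm the registry API answers; a freshly started registry returns an empty catalog:

 $ curl -X GET http://localhost:5000/v2/_catalog
 {"repositories":[]}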

Next, docker on all nodes needs to be configured to use the local insecure registry. This is done by updating /etc/containers/registries.conf and adding the new registry to the registries.insecure list.
 [registries.insecure]  
 registries = ['192.168.122.3:5000']  

And restart docker.
 $ sudo systemctl restart docker  
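
On recent docker releases, a quick sanity check that the daemon picked up the insecure registry (a sketch; the listing appears near the end of the output):

 $ sudo docker info | grep -A 2 'Insecure Registries'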

Configure Rook Development Environment

Configuring the Rook development environment is fairly straightforward. The only build requirements are the git, go and perl-Digest-SHA packages; the rest of the golang libraries are retrieved during the build process.
$ sudo yum install -y git go perl-Digest-SHA  
$ export GOPATH=/home/user/go  
$ mkdir -p ${GOPATH}/src/github.com/rook
$ cd ${GOPATH}/src/github.com/rook
$ git clone https://github.com/myrepo/rook.git
$ cd rook  

Note: you should fork the upstream repository if you are going to develop code for an eventual pull request.
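
If you do fork, a typical remote layout keeps the upstream repository available for rebasing (a sketch; the branch name is illustrative):

 $ git remote add upstream https://github.com/rook/rook.git
 $ git fetch upstream
 $ git checkout -b my-feature upstream/master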

Build Rook

 $ make  
 === helm package rook-ceph  
 ==> Linting /home/kschinck/go/src/github.com/rook/rook/cluster/charts/rook-ceph  
 Lint OK  
 1 chart(s) linted, no failures  
 Successfully packaged chart and saved it to: /home/kschinck/go/src/github.com/rook/rook/_output/charts/rook-ceph-v0.9.0-162.g2ed99b1c.dirty.tgz  
 === helm index  
 === go vet  
 === go build linux_amd64  
 .  
 .  
 . skipped text   
 .  
 .  
 === docker build build-7ad1f371/ceph-amd64  
 sha256:91a2f03ae0bb99a8f65b225b4160402927350618cd2a9592068503c24cb5d701  
 === docker build build-7ad1f371/cockroachdb-amd64  
 sha256:2d6f04f76d22de950fcf7df9e7205608c9baca627dbe4df81448419a4aff2b68  
 === docker build build-7ad1f371/minio-amd64  
 sha256:c1a4b7e09dc5069a6de809bbe2db6f963420038e243080c61bc61ea13bceff14  
 === docker build build-7ad1f371/nfs-amd64  
 sha256:e80fd66429923a0114210b8c254226d11e5ffb0a68cfc56377b8a3eccd3b663f  
 === saving image build-7ad1f371/nfs-amd64  
 === docker build build-7ad1f371/cassandra-amd64  
 sha256:492b314ef2123328c84702c66b0c9589f342b5f644ac5cff1de2263356446701  
 === docker build build-7ad1f371/edgefs-amd64  
 sha256:0d0ab8dd81261d0f325f83a690c7997707582126ab01e5f7e7a55fe964143c5d  
 === saving image build-7ad1f371/edgefs-amd64  
 $ docker images
 REPOSITORY                         TAG      IMAGE ID       CREATED          SIZE
 build-7ad1f371/edgefs-amd64        latest   0d0ab8dd8126   13 minutes ago   411 MB
 build-7ad1f371/cassandra-amd64     latest   492b314ef212   13 minutes ago   141 MB
 build-7ad1f371/nfs-amd64           latest   e80fd6642992   13 minutes ago   391 MB
 build-7ad1f371/minio-amd64         latest   c1a4b7e09dc5   18 minutes ago   81.9 MB
 build-7ad1f371/cockroachdb-amd64   latest   2d6f04f76d22   18 minutes ago   243 MB
 build-7ad1f371/ceph-amd64          latest   91a2f03ae0bb   18 minutes ago   610 MB
 docker.io/rook/ceph                master   b924a5979c14   4 days ago       610 MB

Testing Locally Built Code

The built Ceph image needs to be tagged and pushed to the local registry, and the Ceph operator.yaml CRD file updated to pull the new image from there.
 $ IMG=`docker images | grep -Eo '^build-[a-z0-9]{8}/ceph-[a-z0-9]+\s'`  
 $ echo $IMG  
 build-7ad1f371/ceph-amd64  
 $ docker tag $IMG 192.168.122.3:5000/rook/ceph:latest  
 $ docker push 192.168.122.3:5000/rook/ceph:latest  
 The push refers to a repository [192.168.122.3:5000/rook/ceph]  
 5111818ce34e: Pushed   
 1d339a1e5398: Pushed   
 88c1eb656680: Pushed   
 f972d139738d: Pushed   
 latest: digest: sha256:80d3ec16e4e6206530756a5a7ce76b51f336a1438af5de77488917f5d1f7b602 size: 1163  
 $ docker images | grep "rook/ceph"
 192.168.122.3:5000/rook/ceph   latest   91a2f03ae0bb   2 hours ago   610 MB
 docker.io/rook/ceph            master   b924a5979c14   4 days ago    610 MB
 $ curl -X GET http://192.168.122.3:5000/v2/_catalog
 {"repositories":["rook/ceph"]}
 $ sed -i 's|image: .*$|image: 192.168.122.3:5000/rook/ceph:latest|' cluster/examples/kubernetes/ceph/operator.yaml  
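
A quick check that the substitution took (the indentation will match your operator.yaml):

 $ grep 'image:' cluster/examples/kubernetes/ceph/operator.yaml
     image: 192.168.122.3:5000/rook/ceph:latest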

Once the operator.yaml CRD file has been updated, the Ceph cluster can be deployed using the normal workflow; a condensed sketch follows the directory change below.
$ cd cluster/examples/kubernetes/ceph/
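
A condensed sketch of that workflow, as covered in the previous posting (deploy the operator first, wait for its pod to come up, then create the cluster):

 $ kubectl create -f operator.yaml
 $ kubectl -n rook-ceph-system get pods -w
 $ kubectl create -f cluster.yaml

The operator pod can then be inspected to verify it is running the locally built image.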
$ kubectl -n rook-ceph-system describe pod rook-ceph-operator-9d496c54b-gmnrw | grep Image  
   Image:     192.168.122.3:5000/rook/ceph:latest  
   Image ID:   docker-pullable://192.168.122.3:5000/rook/ceph@sha256:80d3ec16e4e6206530756a5a7ce76b51f336a1438af5de77488917f5d1f7b602  


References

https://schmaustech.blogspot.com/2019/01/rook-ceph-on-kubernetes.html
https://www.projectatomic.io/blog/2015/08/why-we-dont-let-non-root-users-run-docker-in-centos-fedora-or-rhel/
https://rook.io/docs/rook/v0.9/development-flow.html
https://github.com/rook/rook


Thursday, March 7, 2019

Configuring Static IP Assignments For A libvirt Based Development Environment

Reason

Predictable IP placement of virtual machines is useful when performing repeated tests of network software.

Environment

This post was developed on Fedora 29 but should work on other recent Fedora/CentOS/RHEL OS releases.

Pre-reqs

A non-root user with sudo access was used to query and configure libvirt.

In this example, four virtual machines are used and connected to the default libvirt network. The virtual machines are configured for DHCP.

Network Configuration Diagram

   baseOS<->|   
            |<->vmos1  
    libvirt |<->vmos2  
    default |<->vmos3  
    network |<->vmos4  

Network Configuration Command Line

 $ sudo virsh net-dumpxml default  
 <network connections='4'>  
  <name>default</name>  
  <uuid>e418d7f8-b770-47e6-9cb0-3d013568b761</uuid>  
  <forward mode='nat'>  
   <nat>  
    <port start='1024' end='65535'/>  
   </nat>  
  </forward>  
  <bridge name='virbr0' stp='on' delay='0'/>  
  <mac address='52:54:00:7a:96:4d'/>  
  <ip address='192.168.122.1' netmask='255.255.255.0'>  
   <dhcp>  
    <range start='192.168.122.2' end='192.168.122.254'/>  
   </dhcp>  
  </ip>  
 </network>  

Gather the MAC addresses of each VM

The example hosts have only one network interface. If there are multiple interfaces, gather the MAC address of the interface attached to the default network. IP addresses can be defined for other networks by adjusting the command lines below as needed. A loop that gathers all four MACs in one pass is sketched after the example output.
 $ sudo virsh domiflist vmos4  
 Interface Type    Source   Model    MAC  
 -------------------------------------------------------  
 vnet3   network  default  virtio   52:54:00:d3:3e:e7  
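
A loop gathers all four MACs in one pass (a sketch; it assumes each guest has a single interface on the default network):

 $ for f in vmos1 vmos2 vmos3 vmos4 ; do sudo virsh domiflist $f | awk '/default/ {print $5}' ; done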

Set the DHCP IP reservation for each MAC address

Ensure the assigned IP addresses are not already in use.
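
The current leases can be listed to confirm none of the target addresses have already been handed out:

 $ sudo virsh net-dhcp-leases default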

 sudo virsh net-update default add ip-dhcp-host '<host mac="52:54:00:87:c4:d1" ip="192.168.122.3"/>' --live --config  
 sudo virsh net-update default add ip-dhcp-host '<host mac="52:54:00:57:b0:a6" ip="192.168.122.4"/>' --live --config  
 sudo virsh net-update default add ip-dhcp-host '<host mac="52:54:00:37:aa:4f" ip="192.168.122.5"/>' --live --config  
 sudo virsh net-update default add ip-dhcp-host '<host mac="52:54:00:d3:3e:e7" ip="192.168.122.6"/>' --live --config  

Display updated network configuration

 $ sudo virsh net-dumpxml default  
 <network connections='4'>  
  <name>default</name>  
  <uuid>e418d7f8-b770-47e6-9cb0-3d013568b761</uuid>  
  <forward mode='nat'>  
   <nat>  
    <port start='1024' end='65535'/>  
   </nat>  
  </forward>  
  <bridge name='virbr0' stp='on' delay='0'/>  
  <mac address='52:54:00:7a:96:4d'/>  
  <ip address='192.168.122.1' netmask='255.255.255.0'>  
   <dhcp>  
    <range start='192.168.122.2' end='192.168.122.254'/>  
    <host mac='52:54:00:87:c4:d1' ip='192.168.122.3'/>  
    <host mac='52:54:00:57:b0:a6' ip='192.168.122.4'/>  
    <host mac='52:54:00:37:aa:4f' ip='192.168.122.5'/>  
    <host mac='52:54:00:d3:3e:e7' ip='192.168.122.6'/>  
   </dhcp>  
  </ip>  
 </network>  

Start/Restart each VM as needed.


The interface can be disconnected and reconnected if a restart is not wanted; a sketch follows the restart loop below.

 $ for f in vmos1 vmos2 vmos3 vmos4 ; do sudo virsh destroy $f ; sleep 5 ; sudo virsh start $f ; done
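
If a full restart is too disruptive, the link can be bounced instead; the guest still has to renew its DHCP lease, which NetworkManager typically does on link-up (a sketch using the vnet3 device shown for vmos4 above):

 $ sudo virsh domif-setlink vmos4 vnet3 down
 $ sudo virsh domif-setlink vmos4 vnet3 up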

Update hosts file

The hosts file is updated to ease host access.
 $ cat <<EOF | sudo tee -a /etc/hosts  
 192.168.122.3 vmos1  
 192.168.122.4 vmos2  
 192.168.122.5 vmos3  
 192.168.122.6 vmos4  
 EOF  

Test connectivity

 $ ping -c4 vmos1  
 PING vmos1 (192.168.122.3) 56(84) bytes of data.  
 64 bytes from vmos1 (192.168.122.3): icmp_seq=1 ttl=64 time=0.387 ms  
 64 bytes from vmos1 (192.168.122.3): icmp_seq=2 ttl=64 time=0.418 ms  
 64 bytes from vmos1 (192.168.122.3): icmp_seq=3 ttl=64 time=0.396 ms  
 64 bytes from vmos1 (192.168.122.3): icmp_seq=4 ttl=64 time=0.355 ms  
 --- vmos1 ping statistics ---  
 4 packets transmitted, 4 received, 0% packet loss, time 104ms  
 rtt min/avg/max/mdev = 0.355/0.389/0.418/0.022 ms  


References
Libvirt Networking
Red Hat Virtualization Deployment and Administration Guide