
Friday, April 5, 2019

Developing and testing patches for Rook

Background


The purpose of this posting is to detail the steps needed to develop and test Rook code using the provided CI scripts. My previous blog posting did not use the CI scripting provided by Rook and may not provide sufficient test coverage to pass the upstream CI.

Pre-built infrastructure


Test Hosts Configuration
This CI development environment uses four (4) libvirt virtual machines as test hosts. The hardware configuration of the virtual machines remains the same as in the previous blog posts. These machines are built from the same clone image and configured as follows:
  • Ubuntu 16 (some commands may change for Ubuntu 18)
  • Local user with admin rights, updated to use sudo with no password
  • Pre-populated SSH host keys and StrictHostKeyChecking set to no in the .ssh/config file
  • /etc/hosts file pre-populated with the host name and IP of each test host
  • Statically defined IP address as per this blog post
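For reference, the SSH portion of that setup might look like the following ~/.ssh/config entry (a sketch using this environment's hostnames):

```
Host kube-ub-*
    StrictHostKeyChecking no
```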

Hostnames and IPs

The virtual machines in this environment will be configured with the following hostnames and IPs:
192.168.122.31 kube-ub-master
192.168.122.32 kube-ub-node1
192.168.122.33 kube-ub-node2
192.168.122.34 kube-ub-node3
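As a sketch, these entries could be appended to /etc/hosts on each host with a heredoc like the following (assumes sudo access and that the entries are not already present):

```shell
# Append the test host entries to /etc/hosts (run on each host)
cat <<EOF | sudo tee -a /etc/hosts
192.168.122.31 kube-ub-master
192.168.122.32 kube-ub-node1
192.168.122.33 kube-ub-node2
192.168.122.34 kube-ub-node3
EOF
```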

Configuring the master server

The Rook CI scripts are designed to run from the Kubernetes master host. These configuration steps are run as the configured admin user on that host.

Update DNS resolver and hostname

On Ubuntu 16, the resolv.conf file is managed by systemd. This configuration can result in the coredns container failing to start due to a DNS lookup loop.
sudo sed -i -e 's/dns=.*/dns=default/' /etc/NetworkManager/NetworkManager.conf
sudo systemctl disable systemd-resolved.service
sudo systemctl stop  systemd-resolved.service
sudo rm /etc/resolv.conf
echo nameserver 8.8.8.8 | sudo tee /etc/resolv.conf
echo kube-ub-master | sudo tee /etc/hostname

Install go 1.11

The CI requires Go version 1.11 to be available on the system in order to build the local images and to run the Go-based test infrastructure.
wget https://dl.google.com/go/go1.11.6.linux-amd64.tar.gz
cd /usr/local
sudo tar -zxf ~/go1.11.6.linux-amd64.tar.gz
cd ~
echo export PATH=/usr/local/go/bin:\$PATH | tee -a ~/.bashrc
source .bashrc

Install and configure docker

Configure docker to use the registry on the master node
sudo apt-get install -y docker.io git ansible curl
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "insecure-registries" : ["192.168.122.31:5000"]
}
EOF
sudo systemctl start docker
sudo systemctl enable docker

Add user to the docker group

To run the CI scripts, the admin user needs to be able to run docker commands without using sudo.
sudo usermod -a -G docker myuser
The new group assignment can be picked up by either logging out and back in, or by running the newgrp command.
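For example, to pick up the group in the current session without logging out (assuming the user was just added to the docker group):

```shell
# Start a new shell with the docker group active, then confirm access
newgrp docker
docker ps
```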

Start the docker registry container

The docker registry container needs to be running in order to host the locally built versions of the Rook images.
docker run -d -p 5000:5000 --restart=always --name registry registry:2

Verify Registry Access

The curl command can be used to verify access to the local registry.

curl http://192.168.122.31:5000/v2/_catalog
{"repositories":[]}

Configure Worker Nodes

The worker nodes need to be configured similarly to the master node, with the exception that Go does not need to be installed.
for HOST in kube-ub-node1 kube-ub-node2 kube-ub-node3
do
  cat<<EOF | ssh $HOST
sudo sed -i -e 's/dns=.*/dns=default/' /etc/NetworkManager/NetworkManager.conf
sudo systemctl disable systemd-resolved.service
sudo systemctl stop  systemd-resolved.service
sudo rm /etc/resolv.conf
echo nameserver 8.8.8.8 | sudo tee /etc/resolv.conf
echo $HOST | sudo tee /etc/hostname
sudo apt-get install -y docker.io 
cat <<EOG | sudo tee /etc/docker/daemon.json
{
  "insecure-registries" : ["192.168.122.31:5000"]
}
EOG
sudo systemctl start docker
sudo systemctl enable docker
echo 'export GOPATH=\${HOME}/go' | tee -a ~/.bashrc
export GOPATH=\${HOME}/go
echo \$GOPATH
mkdir -p \${GOPATH}/src/github.com/rook
cd \${GOPATH}/src/github.com/rook
git clone http://github.com/myuser/rook.git
cd rook
git checkout mypatch
EOF
done

Download Source Code

At this point we are ready to prepare for and download the forked git repository. The repository is configured according to the Rook development guidelines.
These commands will prepare the needed directory path, clone the git repo, switch to the development branch and build the images.
cat <<EOF>>~/.bashrc
export GOPATH=${HOME}/go
EOF
export GOPATH=${HOME}/go
mkdir -p ${GOPATH}/src/github.com/rook
cd ${GOPATH}/src/github.com/rook
git clone http://github.com/myuser/rook.git
cd rook
git checkout mypatch
make

Tag and push built image

The make command will build a local rook/ceph image. In order to use this image, it will need to be tagged and pushed to the local registry.
docker tag `docker images | grep "rook/ceph" | awk '{print $1":"$2}'` 192.168.122.31:5000/rook/ceph:latest
docker push 192.168.122.31:5000/rook/ceph:latest
The local test framework files need to be updated to use the new image. The following command will update the image reference to point at the local registry.
sed -i -e 's/image: rook.*$/image: 192.168.122.31:5000\/rook\/ceph:latest/' ./tests/framework/installer/ceph_manifests.go

Deploy master node using CI script

cd tests/scripts
./kubeadm.sh up
The end of this command's output will display the command used to join additional worker nodes to the cluster. This needs to be copied and used in the worker node configuration step below.
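If the script surfaces the standard kubeadm join command, it has roughly this shape (placeholders shown; use the exact text printed by kubeadm.sh up):

```
kubeadm join 192.168.122.31:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```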

Deploy worker nodes using CI script

This command will add each worker host to the kubernetes cluster.
for HOST in kube-ub-node1 kube-ub-node2 kube-ub-node3
do
  cat<<EOF | ssh $HOST
  cd ${GOPATH}/src/github.com/rook/rook/tests/scripts
  ./kubeadm.sh install node TEXT_TAKEN_FROM_THE_MASTER_INSTALL_OUTPUT
EOF
done

Setup the kubectl command for debugging


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point, the admin will be able to use the kubectl command to interact with the deployed kubernetes cluster.
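For example, a quick sanity check of the cluster state (the exact output will vary with your environment):

```shell
# Confirm all nodes have joined and pods are scheduling
kubectl get nodes -o wide
kubectl get pods --all-namespaces
```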

Run CI Test

Rook uses Go's provided test framework to run the validation tests. The test command is run from the top of the rook repository directory. It will deploy a variety of configurations and validate the operation of the cluster. Output is saved under the _output directory.
This command will run the standard tests called SmokeSuite and save the output to a file in the /tmp directory.


time go test -v -timeout 1800s -run SmokeSuite github.com/rook/rook/tests/integration 2>&1 | tee /tmp/integration.log

The results can be reviewed from /tmp/integration.log file.
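The log is standard go test output, so the pass/fail summary lines can be pulled out with grep. A minimal sketch against a hypothetical sample (the real suite and test names will differ):

```shell
# Hypothetical sample of go test output (real log lines will differ)
cat > /tmp/sample-integration.log <<'EOF'
=== RUN   TestCephSmokeSuite
--- PASS: TestCephSmokeSuite (812.33s)
--- FAIL: TestCassandraSuite (95.10s)
EOF
# Summarize results: print only the pass/fail summary lines
grep -E -- '--- (PASS|FAIL)' /tmp/sample-integration.log
```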

The kubernetes cluster should be in a clean state if the tests finish successfully.

Repeating the CI Execution


If an operator is to be updated, the previously uploaded images should be deleted from the local registry prior to making updates, rebuilding the image, tagging and pushing.
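Note that the stock registry:2 container only allows deletes when it is started with REGISTRY_STORAGE_DELETE_ENABLED=true. With that set, a manifest can be removed through the registry v2 API by digest; a sketch (the repository name matches this environment, the digest is looked up from the tag):

```shell
# Fetch the manifest digest for the tag from the Docker-Content-Digest header
DIGEST=$(curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    http://192.168.122.31:5000/v2/rook/ceph/manifests/latest \
    | awk 'tolower($1) == "docker-content-digest:" {print $2}' | tr -d '\r')
# Delete the manifest by digest
curl -X DELETE http://192.168.122.31:5000/v2/rook/ceph/manifests/$DIGEST
```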

If the CI configuration is updated, it is sufficient to rerun the go test command line.



Friday, March 8, 2019

Configuring a Rook Test and Development System

Reason

All patches should be tested locally prior to submitting a PR to the Rook project. This posting will detail the steps needed to configure an environment for developing and testing patches for the Rook project.

Environment

This post was developed on virtual machines running Fedora Core 29 but should work on other recent Fedora/CentOS/RHEL OS releases.

Pre-reqs

  • Four (4) virtual machines with the following specifications
    • 1 node: 2 CPUs, >=2 GB RAM, 20 GB storage
    • 3 nodes: 1 CPU, >=2 GB RAM, 20 GB storage
  • A non-root user with sudo access and the ability to install packages locally and on the other test nodes.

Description

This posting will extend the previous blog posting about deploying Ceph storage using Rook on a Kubernetes cluster. This posting will detail how to deploy a local docker registry, configure the environment for Rook development, build Rook and, finally, how to test the locally built code.

Procedure


Deploy Local Docker Registry
The docker registry needs to be added to the kube-master host. This can be performed after the base OS configuration but prior to running "kubeadm init".

The following command will pull down the docker registry image, start the container, configure it to always restart with the base OS and expose TCP port 5000. Note: the docker command is not normally run as a non-root user, as per this blog posting. For my local config, I have updated the group ownership of the docker UNIX socket from root to wheel.

 $ docker run -d --restart=always -p 5000:5000 --name registry registry
Unable to find image 'registry:latest' locally
Trying to pull repository docker.io/library/registry ... 
sha256:3b00e5438ebd8835bcfa7bf5246445a6b57b9a50473e89c02ecc8e575be3ebb5: Pulling from docker.io/library/registry
c87736221ed0: Pull complete 
1cc8e0bb44df: Pull complete 
54d33bcb37f5: Pull complete 
e8afc091c171: Pull complete 
b4541f6d3db6: Pull complete 
Digest: sha256:3b00e5438ebd8835bcfa7bf5246445a6b57b9a50473e89c02ecc8e575be3ebb5
Status: Downloaded newer image for docker.io/registry:latest
4d56fadeadbff76b14de2093b655af44b7cd08484df5f366fe3c83b4942c7306
$ sudo netstat -tulpn | grep 5000
tcp6       0      0 :::5000                 :::*                    LISTEN      11227/docker-proxy- 

Next, docker on all nodes needs to be configured to use the local insecure registry. This is done by updating /etc/containers/registries.conf and adding the new registry to the registries.insecure list.
 [registries.insecure]  
 registries = ['192.168.122.3:5000']  

And restart docker.
 $ sudo systemctl restart docker  

Configure Rook Development Environment

Configuring the rook development environment is fairly direct. The only build requirements are the git, go and perl-Digest-SHA packages. The rest of the golang libraries are retrieved during the build process.
$ sudo yum install -y git go perl-Digest-SHA  
$ export GOPATH=/home/user/go  
$ mkdir -p ${GOPATH}/src/github.com/rook  
$ cd ${GOPATH}/src/github.com/rook  
$ git clone http://github.com/myrepo/rook.git  
$ cd rook  

Note: You should fork the upstream repository, if you are going to develop code for an eventual pull request.
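A sketch of that workflow, following the Rook development guidelines (replace myuser with your GitHub account name):

```shell
# Clone your fork and track the upstream project for rebasing
git clone https://github.com/myuser/rook.git
cd rook
git remote add upstream https://github.com/rook/rook.git
git fetch upstream
```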

Build Rook

 $ make  
 === helm package rook-ceph  
 ==> Linting /home/kschinck/go/src/github.com/rook/rook/cluster/charts/rook-ceph  
 Lint OK  
 1 chart(s) linted, no failures  
 Successfully packaged chart and saved it to: /home/kschinck/go/src/github.com/rook/rook/_output/charts/rook-ceph-v0.9.0-162.g2ed99b1c.dirty.tgz  
 === helm index  
 === go vet  
 === go build linux_amd64  
 .  
 .  
 . skipped text   
 .  
 .  
 === docker build build-7ad1f371/ceph-amd64  
 sha256:91a2f03ae0bb99a8f65b225b4160402927350618cd2a9592068503c24cb5d701  
 === docker build build-7ad1f371/cockroachdb-amd64  
 sha256:2d6f04f76d22de950fcf7df9e7205608c9baca627dbe4df81448419a4aff2b68  
 === docker build build-7ad1f371/minio-amd64  
 sha256:c1a4b7e09dc5069a6de809bbe2db6f963420038e243080c61bc61ea13bceff14  
 === docker build build-7ad1f371/nfs-amd64  
 sha256:e80fd66429923a0114210b8c254226d11e5ffb0a68cfc56377b8a3eccd3b663f  
 === saving image build-7ad1f371/nfs-amd64  
 === docker build build-7ad1f371/cassandra-amd64  
 sha256:492b314ef2123328c84702c66b0c9589f342b5f644ac5cff1de2263356446701  
 === docker build build-7ad1f371/edgefs-amd64  
 sha256:0d0ab8dd81261d0f325f83a690c7997707582126ab01e5f7e7a55fe964143c5d  
 === saving image build-7ad1f371/edgefs-amd64  
 $ docker images  
 REPOSITORY                         TAG              IMAGE ID      CREATED       SIZE  
 build-7ad1f371/edgefs-amd64        latest             0d0ab8dd8126    13 minutes ago   411 MB  
 build-7ad1f371/cassandra-amd64     latest             492b314ef212    13 minutes ago   141 MB  
 build-7ad1f371/nfs-amd64           latest             e80fd6642992    13 minutes ago   391 MB  
 build-7ad1f371/minio-amd64         latest             c1a4b7e09dc5    18 minutes ago   81.9 MB  
 build-7ad1f371/cockroachdb-amd64   latest             2d6f04f76d22    18 minutes ago   243 MB  
 build-7ad1f371/ceph-amd64          latest             91a2f03ae0bb    18 minutes ago   610 MB  
 docker.io/rook/ceph                master             b924a5979c14    4 days ago     610 MB  

Testing Locally Built Code

The built Ceph image needs to be tagged and the Ceph operator.yaml CRD file updated to use the new image from the local registry.
 $ IMG=`docker images | grep -Eo '^build-[a-z0-9]{8}/ceph-[a-z0-9]+\s'`  
 $ echo $IMG  
 build-7ad1f371/ceph-amd64  
 $ docker tag $IMG 192.168.122.3:5000/rook/ceph:latest  
 $ docker push 192.168.122.3:5000/rook/ceph:latest  
 The push refers to a repository [192.168.122.3:5000/rook/ceph]  
 5111818ce34e: Pushed   
 1d339a1e5398: Pushed   
 88c1eb656680: Pushed   
 f972d139738d: Pushed   
 latest: digest: sha256:80d3ec16e4e6206530756a5a7ce76b51f336a1438af5de77488917f5d1f7b602 size: 1163  
 $ docker images | grep "rook/ceph"  
 192.168.122.3:5000/rook/ceph     latest             91a2f03ae0bb    2 hours ago     610 MB  
 docker.io/rook/ceph         master             b924a5979c14    4 days ago     610 MB 
$ curl -X GET http://192.168.122.3:5000/v2/_catalog
{"repositories":["rook/ceph"]}
 $ sed -i 's|image: .*$|image: 192.168.122.3:5000/rook/ceph:latest|' cluster/examples/kubernetes/ceph/operator.yaml  

Once the operator.yaml CRD file has been updated, the ceph cluster can be deployed using the normal workflow.
$ cd cluster/examples/kubernetes/ceph/
$ kubectl -n rook-ceph-system describe pod rook-ceph-operator-9d496c54b-gmnrw | grep Image  
   Image:     192.168.122.3:5000/rook/ceph:latest  
   Image ID:   docker-pullable://192.168.122.3:5000/rook/ceph@sha256:80d3ec16e4e6206530756a5a7ce76b51f336a1438af5de77488917f5d1f7b602  


References:

https://schmaustech.blogspot.com/2019/01/rook-ceph-on-kubernetes.html
https://www.projectatomic.io/blog/2015/08/why-we-dont-let-non-root-users-run-docker-in-centos-fedora-or-rhel/
https://rook.io/docs/rook/v0.9/development-flow.html
https://github.com/rook/rook