Monday, September 12, 2016

Moving Ceph Monitor Interface

Problem


A deployed Ceph monitor is bound to the wrong interface and needs to be reconfigured. In this case, the Ceph monitor on ceph2 is bound to localhost when it needs to be bound to the publicly accessible interface.

The ceph status output shows the IP address and port the monitor is bound to:

[root@ceph2 ~]# ceph -s
    cluster 7200aea0-2ddd-4a32-aa2a-d49f66ab554c
     health HEALTH_WARN
            too many PGs per OSD (320 > max 300)
     monmap e1: 1 mons at {ceph2=127.0.0.1:6789/0}
            election epoch 4, quorum 0 ceph2
     osdmap e9: 1 osds: 1 up, 1 in
            flags sortbitwise
      pgmap v29432: 320 pgs, 5 pools, 0 bytes data, 0 objects
            2577 MB used, 10692 MB / 13270 MB avail
                 320 active+clean

Initial Conditions


  • Ceph monitor is bound to localhost
  • External IP address is 192.168.122.7
  • CentOS 7 is the base OS
  • Ceph Jewel is installed

Reconfiguration Process


  • Obtain and print current monitor map

[root@ceph2 ~]# ceph mon getmap -o monmap
got monmap epoch 1
[root@ceph2 ~]# monmaptool --print monmap
monmaptool: monmap file monmap
epoch 1
fsid 7200aea0-2ddd-4a32-aa2a-d49f66ab554c
last_changed 2016-07-22 00:14:22.139218
created 2016-07-22 00:14:22.139218
0: 127.0.0.1:6789/0 mon.ceph2
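
The bound address can also be pulled out of the monmaptool output programmatically. A small sketch, feeding in the sample monitor entry line from above:

```shell
# Field 2 of the monitor entry is addr:port/nonce; split on ':'
# and keep just the IP address.
echo "0: 127.0.0.1:6789/0 mon.ceph2" | awk '{split($2, a, ":"); print a[1]}'
# prints 127.0.0.1
```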



  • Shut down all Ceph services. Shutting down the monitor is shown below.

[root@ceph2 ~]# systemctl stop ceph-mon@ceph2
[root@ceph2 ~]# systemctl status ceph-mon@ceph2
● ceph-mon@ceph2.service - Ceph cluster monitor daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Sun 2016-09-11 10:01:15 EDT; 5s ago
  Process: 8535 ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=0/SUCCESS)
 Main PID: 8535 (code=exited, status=0/SUCCESS)

Sep 11 10:00:45 ceph2.example.com systemd[1]: Started Ceph cluster monitor daemon.
Sep 11 10:00:45 ceph2.example.com systemd[1]: Starting Ceph cluster monitor daemon...
Sep 11 10:00:45 ceph2.example.com ceph-mon[8535]: starting mon.ceph2 rank 0 at 127.0.0.1:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph2 fsid 7200aea0-2ddd-4a32-aa2a-d49f66ab554c
Sep 11 10:01:15 ceph2.example.com systemd[1]: Stopping Ceph cluster monitor daemon...
Sep 11 10:01:15 ceph2.example.com ceph-mon[8535]: 2016-09-11 10:01:15.784773 7fbd7acc2700 -1 mon.ceph2@0(leader) e1 *** Got Signal Terminated ***
Sep 11 10:01:15 ceph2.example.com systemd[1]: Stopped Ceph cluster monitor daemon.


  • Update the ceph.conf file to list the new IP address

[root@ceph2 ~]# diff -u /etc/ceph/ceph.conf.ORIG /etc/ceph/ceph.conf
--- /etc/ceph/ceph.conf.ORIG    2016-09-11 10:03:42.612569161 -0400
+++ /etc/ceph/ceph.conf    2016-09-11 10:04:27.634243638 -0400
@@ -9,7 +9,7 @@
 auth_supported = cephx
 auth_cluster_required = cephx
 osd_journal_size = 100
-mon_host = 127.0.0.1
+mon_host = 192.168.122.7
 auth_client_required = cephx
 osd_pool_default_size = 1
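
The one-line ceph.conf change can also be scripted. A minimal sketch, run here against a temporary copy so it is safe to try; the real edit targets /etc/ceph/ceph.conf and keeps the same .ORIG backup shown in the diff above:

```shell
# Work on a temporary copy for illustration; point conf at
# /etc/ceph/ceph.conf to perform the real edit.
conf=$(mktemp)
printf 'mon_host = 127.0.0.1\n' > "$conf"
# -i.ORIG edits in place and keeps a backup copy.
sed -i.ORIG 's/^mon_host = .*/mon_host = 192.168.122.7/' "$conf"
grep '^mon_host' "$conf"
# prints mon_host = 192.168.122.7
```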


  • Remove old entry from the monitor map

[root@ceph2 ~]# monmaptool --rm ceph2 monmap
monmaptool: monmap file monmap
monmaptool: removing ceph2
monmaptool: writing epoch 1 to monmap (0 monitors)
[root@ceph2 ~]# monmaptool --print monmap
monmaptool: monmap file monmap
epoch 1
fsid 7200aea0-2ddd-4a32-aa2a-d49f66ab554c
last_changed 2016-07-22 00:14:22.139218
created 2016-07-22 00:14:22.139218


  • Add new entry to the monitor map

[root@ceph2 ~]# monmaptool --add ceph2 192.168.122.7:6789 monmap
monmaptool: monmap file monmap
monmaptool: writing epoch 1 to monmap (1 monitors)
[root@ceph2 ~]# monmaptool --print monmap
monmaptool: monmap file monmap
epoch 1
fsid 7200aea0-2ddd-4a32-aa2a-d49f66ab554c
last_changed 2016-07-22 00:14:22.139218
created 2016-07-22 00:14:22.139218
0: 192.168.122.7:6789/0 mon.ceph2



  • Add the new monitor map to the cluster

[root@ceph2 ~]# ceph-mon -i ceph2 --inject-monmap monmap

  • Copy the new monmap to any other Ceph monitors and perform the same remove and add process.
  • Copy the ceph.conf file to all Ceph members and clients.
  • Restart and verify the Ceph services. Restarting the Ceph monitor is shown below.

[root@ceph2 ~]# systemctl start ceph-mon@ceph2
[root@ceph2 ~]# ceph -s
    cluster 7200aea0-2ddd-4a32-aa2a-d49f66ab554c
     health HEALTH_WARN
            too many PGs per OSD (320 > max 300)
     monmap e3: 1 mons at {ceph2=192.168.122.7:6789/0}
            election epoch 7, quorum 0 ceph2
     osdmap e9: 1 osds: 1 up, 1 in
            flags sortbitwise
      pgmap v29432: 320 pgs, 5 pools, 0 bytes data, 0 objects
            2577 MB used, 10692 MB / 13270 MB avail
                 320 active+clean
[root@ceph2 ~]# netstat -tulpn | grep 6789
tcp        0      0 192.168.122.7:6789      0.0.0.0:*               LISTEN      9081/ceph-mon
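
The copy steps in the list above can be wrapped in a loop. A sketch with a hypothetical host list; the echo makes it a dry run that prints the commands, so drop it to actually perform the copies:

```shell
# Hypothetical host list; replace with the actual Ceph members
# and clients. echo prints each scp command instead of running it.
for host in ceph1 clone; do
  echo scp /etc/ceph/ceph.conf root@"$host":/etc/ceph/ceph.conf
done
```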
      


Saturday, September 10, 2016

Integrating OpenStack Mitaka Cinder with External Ceph

This post describes how to manually integrate OpenStack Mitaka Cinder with a preexisting external Ceph cluster. The final goal is a Cinder configuration with multiple storage backends and support for creating volumes in either backend.

This post will not cover the initial deployment of OpenStack Cinder or the Ceph clusters.

Initial Conditions


  • OpenStack Mitaka deployed all-in-one on clone.example.com
  • Two Ceph clusters deployed as ceph1.example.com and ceph2.example.com

First Backend Configuration Process


The example configuration is for an OpenStack cluster installed in an all-in-one configuration with one external Ceph cluster. For larger OpenStack installations, the Cinder reconfiguration operations will need to be repeated on each controller. For multiple Ceph clusters, the Ceph steps are repeated once per cluster and a unique cinder.conf configuration stanza is created for each.

Create Ceph pool

A Ceph pool is created for Cinder usage. The placement group count should be adjusted to satisfy operational requirements.

[root@ceph1 ~]# ceph osd pool create cinder1 32
[root@ceph1 ~]# rados lspools
rbd
cinder1
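
The placement group count of 32 above suits this small demo cluster. One common sizing heuristic (an assumption, not something this post prescribes) targets roughly 100 placement groups per OSD, divided by the replica count and rounded up to the next power of two:

```shell
# Rough PG sizing heuristic: ~100 PGs per OSD / replica count,
# rounded up to the next power of two.
pg_count() {
  local osds=$1 replicas=$2
  local target=$(( osds * 100 / replicas ))
  local pg=1
  while (( pg < target )); do pg=$(( pg * 2 )); done
  echo "$pg"
}
pg_count 4 3   # prints 256
```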


Create Ceph client keyring

The client authentication keyring is created to permit cephx authenticated client connections. The "images1" and "vms1" pools are for other OpenStack usage.  The client name (client.ceph1) needs to be unique for this service across all Ceph clusters.

[root@ceph1 ~]# ceph auth get-or-create client.ceph1 mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder1, allow rwx pool=vms1, allow rx pool=images1' | tee /etc/ceph/ceph.client.ceph1.keyring

Copy Ceph config

The ceph configuration file needs to be copied to the OpenStack controllers.

[root@ceph1 ~]# scp /etc/ceph/ceph.conf root@clone:/etc/ceph/ceph-ceph1.conf

Copy Ceph client keyring

The client keyring needs to be copied alongside the configuration file.

[root@ceph1 ~]# scp /etc/ceph/ceph.client.ceph1.keyring root@clone:/etc/ceph/ceph.client.ceph1.keyring

Install ceph-common packages


The ceph-common package needs to be installed on the Cinder servers.

[root@clone ~]# yum -q -y install ceph-common
Package 1:ceph-common-10.2.2-0.el7.x86_64 already installed and latest version


Configure cinder.conf

Add the storage backend stanza to cinder.conf.

[BACKEND_ceph1]
volume_driver=cinder.volume.drivers.rbd.RBDDriver

#rbd_secret_uuid is used by libvirt
rbd_secret_uuid=d0439829-7970-421b-a25e-37b1c3a97d7f

#rbd_ceph_conf points to the configuration file copied above
rbd_ceph_conf=/etc/ceph/ceph-ceph1.conf

#rbd_pool is the OSD pool created for this service
rbd_pool=cinder1
backend_host=rbd:cinder1

#rbd_user is the client key created in the ceph cluster.
rbd_user=ceph1
volume_backend_name=BACKEND_ceph1



Update the list of available backends


enabled_backends = BACKEND_1,BACKEND_ceph1
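
Appending the new backend to an existing enabled_backends line can be scripted as well. A sketch against a temporary file; the real target is /etc/cinder/cinder.conf:

```shell
# Demonstrate on a temporary file; point conf at
# /etc/cinder/cinder.conf for the real change.
conf=$(mktemp)
printf '[DEFAULT]\nenabled_backends = BACKEND_1\n' > "$conf"
# '&' in the replacement re-inserts the matched line, so the new
# backend name is appended to the existing list.
sed -i 's/^enabled_backends = .*/&,BACKEND_ceph1/' "$conf"
grep '^enabled_backends' "$conf"
# prints enabled_backends = BACKEND_1,BACKEND_ceph1
```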


Restart Cinder

The Cinder volume service needs to be restarted on each configured controller.

[root@clone ~]# systemctl restart openstack-cinder-volume

Configure OpenStack to use new backend

Create a new volume type.

[root@clone ~]# openstack volume type create BACKEND_ceph1

Associate the new type with the configured backend.

[root@clone ~]# openstack volume type set --property volume_backend_name=BACKEND_ceph1 BACKEND_ceph1

List available volume types and display the configuration information of the new type.

[root@clone ~]# openstack volume type list
[root@clone ~]# openstack volume type show BACKEND_ceph1

Test new backend

Create a new volume with the new type.

[root@clone ~]# openstack volume create --size 1 --type BACKEND_ceph1 test
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2016-09-14T03:15:58.362561           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | a7950b6b-fcfd-4c90-8897-a0561befdaad |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | test                                 |
| properties          |                                      |
| replication_status  | disabled                             |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | BACKEND_ceph1                        |
| updated_at          | None                                 |
| user_id             | d91cad8a93bb462cab84f51a6925e752     |
+---------------------+--------------------------------------+


Second Backend Configuration Process

The same process is used to configure the second backend with the following parameters changed:
  • Ceph client name - ceph2
  • Ceph client keyring name - ceph.client.ceph2.keyring
  • Ceph pool name - cinder2 
  • Cinder configuration stanza and backend type - BACKEND_ceph2
  • OpenStack volume type names
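
Since only the names change between backends, the cinder.conf stanza can be generated from a template. A sketch with a hypothetical helper function; the uuid argument stands in for the libvirt secret and is a placeholder here:

```shell
# Emit a cinder.conf stanza for a Ceph backend; name, pool, and
# uuid are caller-supplied (the uuid below is a placeholder).
backend_stanza() {
  local name=$1 pool=$2 uuid=$3
  cat <<EOF
[BACKEND_${name}]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid=${uuid}
rbd_ceph_conf=/etc/ceph/ceph-${name}.conf
rbd_pool=${pool}
backend_host=rbd:${pool}
rbd_user=${name}
volume_backend_name=BACKEND_${name}
EOF
}
backend_stanza ceph2 cinder2 00000000-0000-0000-0000-000000000000
```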

Final Configurations

Below is the output of various OpenStack and Ceph configuration queries.

Contents of /etc/ceph

[root@clone ~]# ls -laF /etc/ceph
total 36
drwxr-xr-x.  2 root root 4096 Sep 13 23:20 ./
drwxr-xr-x. 87 root root 8192 Sep 13 23:21 ../
-rw-r--r--.  1 root root  400 Sep 13 22:21 ceph-ceph1.conf
-rw-r--r--.  1 root root  400 Sep 13 23:20 ceph-ceph2.conf
-rw-r--r--.  1 root root   63 Sep 13 22:22 ceph.client.ceph1.keyring
-rw-r--r--.  1 root root   63 Sep 13 23:19 ceph.client.ceph2.keyring
-rwxr-xr-x.  1 root root   92 Jul  4 06:00 rbdmap*

OpenStack Volume Services

[root@clone ~]# openstack volume service list
+------------------+-----------------------------+------+---------+-------+----------------------------+
| Binary           | Host                        | Zone | Status  | State | Updated At                 |
+------------------+-----------------------------+------+---------+-------+----------------------------+
| cinder-volume    | clone.example.com@BACKEND_1 | nova | enabled | up    | 2016-09-14T03:26:57.000000 |
| cinder-scheduler | clone.example.com           | nova | enabled | up    | 2016-09-14T03:26:58.000000 |
| cinder-volume    | rbd:cinder1@BACKEND_ceph1   | nova | enabled | up    | 2016-09-14T03:27:07.000000 |
| cinder-volume    | rbd:cinder2@BACKEND_ceph2   | nova | enabled | up    | 2016-09-14T03:27:07.000000 |
+------------------+-----------------------------+------+---------+-------+----------------------------+

OpenStack Volume Types

[root@clone ~]# openstack volume type list
+--------------------------------------+---------------+
| ID                                   | Name          |
+--------------------------------------+---------------+
| 00bc818f-4d04-4678-9b33-739c9457d14f | BACKEND_ceph2 |
| fbd2bdba-c909-4369-beef-00427df10934 | BACKEND_ceph1 |
| 66c24362-d848-4b21-8124-171cb246f34f | BACKEND_1     |
+--------------------------------------+---------------+

OpenStack Volume Type BACKEND_ceph1

[root@clone ~]# openstack volume type show BACKEND_ceph1
+---------------------------------+--------------------------------------+
| Field                           | Value                                |
+---------------------------------+--------------------------------------+
| access_project_ids              | None                                 |
| description                     | None                                 |
| id                              | fbd2bdba-c909-4369-beef-00427df10934 |
| is_public                       | True                                 |
| name                            | BACKEND_ceph1                        |
| os-volume-type-access:is_public | True                                 |
| properties                      | volume_backend_name='BACKEND_ceph1'  |
| qos_specs_id                    | None                                 |
+---------------------------------+--------------------------------------+



OpenStack Volume Type BACKEND_ceph2


[root@clone ~]# openstack volume type show BACKEND_ceph2
+---------------------------------+--------------------------------------+
| Field                           | Value                                |
+---------------------------------+--------------------------------------+
| access_project_ids              | None                                 |
| description                     | None                                 |
| id                              | 00bc818f-4d04-4678-9b33-739c9457d14f |
| is_public                       | True                                 |
| name                            | BACKEND_ceph2                        |
| os-volume-type-access:is_public | True                                 |
| properties                      | volume_backend_name='BACKEND_ceph2'  |
| qos_specs_id                    | None                                 |
+---------------------------------+--------------------------------------+

OpenStack Volumes


[root@clone ~]# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID                                   | Display Name | Status    | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| eada663d-523a-4d00-9874-fd7749745f1d | test2        | available |    1 |             |
| a7950b6b-fcfd-4c90-8897-a0561befdaad | test         | available |    1 |             |
+--------------------------------------+--------------+-----------+------+-------------+


OpenStack Volume Information

[root@clone ~]# openstack volume show test
+--------------------------------+-----------------------------------------+
| Field                          | Value                                   |
+--------------------------------+-----------------------------------------+
| attachments                    | []                                      |
| availability_zone              | nova                                    |
| bootable                       | false                                   |
| consistencygroup_id            | None                                    |
| created_at                     | 2016-09-14T03:15:58.000000              |
| description                    | None                                    |
| encrypted                      | False                                   |
| id                             | a7950b6b-fcfd-4c90-8897-a0561befdaad    |
| migration_status               | None                                    |
| multiattach                    | False                                   |
| name                           | test                                    |
| os-vol-host-attr:host          | rbd:cinder1@BACKEND_ceph1#BACKEND_ceph1 |
| os-vol-mig-status-attr:migstat | None                                    |
| os-vol-mig-status-attr:name_id | None                                    |
| os-vol-tenant-attr:tenant_id   | 63919c9c4e4c4d149e560ad0815c41d3        |
| properties                     |                                         |
| replication_status             | disabled                                |
| size                           | 1                                       |
| snapshot_id                    | None                                    |
| source_volid                   | None                                    |
| status                         | available                               |
| type                           | BACKEND_ceph1                           |
| updated_at                     | 2016-09-14T03:16:00.000000              |
| user_id                        | d91cad8a93bb462cab84f51a6925e752        |
+--------------------------------+-----------------------------------------+
[root@clone ~]# openstack volume show test2
+--------------------------------+-----------------------------------------+
| Field                          | Value                                   |
+--------------------------------+-----------------------------------------+
| attachments                    | []                                      |
| availability_zone              | nova                                    |
| bootable                       | false                                   |
| consistencygroup_id            | None                                    |
| created_at                     | 2016-09-14T03:24:21.000000              |
| description                    | None                                    |
| encrypted                      | False                                   |
| id                             | eada663d-523a-4d00-9874-fd7749745f1d    |
| migration_status               | None                                    |
| multiattach                    | False                                   |
| name                           | test2                                   |
| os-vol-host-attr:host          | rbd:cinder2@BACKEND_ceph2#BACKEND_ceph2 |
| os-vol-mig-status-attr:migstat | None                                    |
| os-vol-mig-status-attr:name_id | None                                    |
| os-vol-tenant-attr:tenant_id   | 63919c9c4e4c4d149e560ad0815c41d3        |
| properties                     |                                         |
| replication_status             | disabled                                |
| size                           | 1                                       |
| snapshot_id                    | None                                    |
| source_volid                   | None                                    |
| status                         | available                               |
| type                           | BACKEND_ceph2                           |
| updated_at                     | 2016-09-14T03:24:23.000000              |
| user_id                        | d91cad8a93bb462cab84f51a6925e752        |
+--------------------------------+-----------------------------------------+

Ceph Pool Contents

[root@ceph1 ~]# rados ls -p cinder1
rbd_header.5e4334da1f50
rbd_id.volume-a7950b6b-fcfd-4c90-8897-a0561befdaad
rbd_object_map.5e4334da1f50
rbd_directory


[root@ceph2 ~]# rados ls -p cinder2
rbd_id.volume-eada663d-523a-4d00-9874-fd7749745f1d
rbd_header.853994579fc
rbd_object_map.853994579fc
rbd_directory



More Information

Additional information about Ceph, OpenStack, and related integration settings can be found at the following URLs.

Git repo related to this post
Configuring OpenStack to use Ceph
OpenStack Nova: configure multiple Ceph backends on one hypervisor