Chapter 2. Installing and configuring Ceph for OpenStack
As a storage administrator, you must install and configure Ceph before the Red Hat OpenStack Platform can use the Ceph block devices.
2.1. Prerequisites
- A new or existing Red Hat Ceph Storage cluster.
2.2. Creating Ceph pools for OpenStack
You can create Ceph pools for use with OpenStack. By default, Ceph block devices use the rbd
pool, but you can use any available pool.
Prerequisites
- A running Red Hat Ceph Storage cluster.
Procedure
Verify the Red Hat Ceph Storage cluster is running, and is in a HEALTH_OK state:

[root@mon ~]# ceph -s
Create the Ceph pools:

Example

[root@mon ~]# ceph osd pool create volumes 128
[root@mon ~]# ceph osd pool create backups 128
[root@mon ~]# ceph osd pool create images 128
[root@mon ~]# ceph osd pool create vms 128

In the above example, 128 is the number of placement groups.

Important: Red Hat recommends using the Ceph Placement Groups per Pool Calculator to calculate a suitable number of placement groups for the pools.
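Depending on your Ceph release, newly created pools may also need to be associated with the rbd application before clients can use them. This step is not part of the procedure above; a minimal sketch, assuming the pool names created in the example:

[root@mon ~]# ceph osd pool application enable volumes rbd
[root@mon ~]# ceph osd pool application enable backups rbd
[root@mon ~]# ceph osd pool application enable images rbd
[root@mon ~]# ceph osd pool application enable vms rbd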
Additional Resources
- See the Pools chapter in the Storage Strategies guide for more details on creating pools.
2.3. Installing the Ceph client on OpenStack
You can install the Ceph client packages on the Red Hat OpenStack Platform to access the Ceph storage cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to the Ceph software repository.
- Root-level access to the OpenStack Nova, Cinder, Cinder Backup and Glance nodes.
Procedure
On the OpenStack Nova, Cinder, and Cinder Backup nodes, install the following packages:

[root@nova ~]# dnf install python-rbd ceph-common

On the OpenStack Glance host, install the python-rbd package:

[root@glance ~]# dnf install python-rbd
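Optionally, you can confirm the installation before proceeding. This check is not part of the documented procedure, and the Python binding package name can vary by release (for example, python3-rbd instead of python-rbd):

[root@nova ~]# rpm -q ceph-common
[root@nova ~]# rbd --version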
2.4. Copying the Ceph configuration file to OpenStack
You can copy the Ceph configuration file to the nova-compute, cinder-backup, cinder-volume, and glance-api nodes.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to the Ceph software repository.
- Root-level access to the OpenStack Nova, Cinder, and Glance nodes.
Procedure
Copy the Ceph configuration file from the Ceph Monitor host to the OpenStack Nova, Cinder, Cinder Backup and Glance nodes:

[root@mon ~]# scp /etc/ceph/ceph.conf OPENSTACK_NODES:/etc/ceph
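If you prefer to copy the file to all nodes in one pass, a simple loop works as well. This is an illustrative sketch; NOVA_NODE, CINDER_NODE, CINDER_BACKUP_NODE, and GLANCE_NODE are placeholders for your host names:

[root@mon ~]# for node in NOVA_NODE CINDER_NODE CINDER_BACKUP_NODE GLANCE_NODE; do scp /etc/ceph/ceph.conf ${node}:/etc/ceph; done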
2.5. Configuring Ceph client authentication
You can configure authentication for the Ceph client to access the Red Hat OpenStack Platform.
Prerequisites
- Root-level access to the Ceph Monitor host.
- A running Red Hat Ceph Storage cluster.
Procedure
From a Ceph Monitor host, create new users for Cinder, Cinder Backup and Glance:
[root@mon ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[root@mon ~]# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
[root@mon ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
Add the keyrings for client.cinder, client.cinder-backup, and client.glance to the appropriate nodes and change their ownership:
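The exact commands depend on your deployment. The following is a minimal sketch, assuming CINDER_VOLUME_NODE, CINDER_BACKUP_NODE, and GLANCE_NODE are placeholders for your host names and that the services run as the cinder and glance system users:

[root@mon ~]# ceph auth get-or-create client.cinder | ssh CINDER_VOLUME_NODE tee /etc/ceph/ceph.client.cinder.keyring
[root@mon ~]# ssh CINDER_VOLUME_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
[root@mon ~]# ceph auth get-or-create client.cinder-backup | ssh CINDER_BACKUP_NODE tee /etc/ceph/ceph.client.cinder-backup.keyring
[root@mon ~]# ssh CINDER_BACKUP_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
[root@mon ~]# ceph auth get-or-create client.glance | ssh GLANCE_NODE tee /etc/ceph/ceph.client.glance.keyring
[root@mon ~]# ssh GLANCE_NODE chown glance:glance /etc/ceph/ceph.client.glance.keyring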
OpenStack Nova nodes need the keyring file for the nova-compute process:

[root@mon ~]# ceph auth get-or-create client.cinder | ssh NOVA_NODE tee /etc/ceph/ceph.client.cinder.keyring
The OpenStack Nova nodes also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs the secret key to access the cluster while attaching a block device from Cinder. Create a temporary copy of the secret key on the OpenStack Nova nodes:

[root@mon ~]# ceph auth get-key client.cinder | ssh NOVA_NODE tee client.cinder.key
If the storage cluster contains Ceph block device images that use the exclusive-lock feature, ensure that all Ceph block device users have permissions to blocklist clients:

[root@mon ~]# ceph auth caps client.ID mon 'allow r, allow command "osd blacklist"' osd 'EXISTING_OSD_USER_CAPS'
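For example, to grant this permission to the client.cinder user created earlier while preserving the OSD capabilities assigned above (check the current capabilities with ceph auth get client.cinder first, because ceph auth caps replaces them):

[root@mon ~]# ceph auth caps client.cinder mon 'allow r, allow command "osd blacklist"' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'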
Return to the OpenStack Nova host:

[root@mon ~]# ssh NOVA_NODE
Generate a UUID for the secret, and save the UUID of the secret for configuring nova-compute later:

[root@nova ~]# uuidgen > uuid-secret.txt
Note: You do not necessarily need the UUID on all the Nova compute nodes. However, from a platform consistency perspective, it's better to keep the same UUID.
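If you decide to reuse the same UUID on every compute node, one option is to copy the saved file to the other Nova nodes. This is an illustrative sketch; OTHER_NOVA_NODE is a placeholder:

[root@nova ~]# scp uuid-secret.txt OTHER_NOVA_NODE:~/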
On the OpenStack Nova nodes, add the secret key to libvirt and remove the temporary copy of the key:
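One way to do this is to wrap the saved UUID in a libvirt secret definition file. This is a minimal sketch; the secret name client.cinder secret is illustrative:

[root@nova ~]# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$(cat uuid-secret.txt)</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF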
Set and define the secret for libvirt:

[root@nova ~]# virsh secret-define --file secret.xml
[root@nova ~]# virsh secret-set-value --secret $(cat uuid-secret.txt) --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
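To confirm that libvirt now knows the secret, you can list the defined secrets; this is an optional check:

[root@nova ~]# virsh secret-list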