Chapter 2. Installing and Configuring Ceph Clients
The nova-compute, cinder-backup and cinder-volume nodes require both the Python bindings and the client command line tools:
# yum install python-rbd
# yum install ceph-common
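An optional sanity check: confirm both packages are installed and the rbd command line tool is available:
# rpm -q python-rbd ceph-common
# rbd --version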
The glance-api node requires the Python bindings for librbd:
# yum install python-rbd
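To verify the bindings, a minimal check is to import the rbd module from Python (this assumes python-rados was pulled in as a dependency of python-rbd):
# python -c 'import rados, rbd; print("librbd bindings OK")'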
2.1. Copying Ceph Configuration File to OpenStack Nodes
The nodes running glance-api, cinder-volume, nova-compute and cinder-backup act as Ceph clients. Each requires the Ceph configuration file. Copy the Ceph configuration file from the monitor node to the OSP nodes.
# scp /etc/ceph/ceph.conf osp:/etc/ceph
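If several OSP nodes need the file, a short loop avoids repeating the command; the hostnames below are placeholders for your actual glance-api, cinder-volume, nova-compute and cinder-backup nodes:
# for node in glance01 cinder01 compute01 backup01; do scp /etc/ceph/ceph.conf ${node}:/etc/ceph; done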
2.2. Setting Up Ceph Client Authentication
From a Ceph monitor node, create new users for Cinder, Cinder Backup and Glance.
# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
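To confirm each user was created with the intended capabilities, read it back from the monitor (shown here for client.cinder; the same command works for client.cinder-backup and client.glance):
# ceph auth get client.cinder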
Add the keyrings for client.cinder, client.cinder-backup and client.glance to the appropriate nodes and change their ownership:
# ceph auth get-or-create client.cinder | ssh {your-cinder-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
# ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
# ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
# ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
# ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
# ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
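An optional check that a keyring landed with the correct ownership, shown for the cinder-volume node:
# ssh {your-cinder-volume-server} ls -l /etc/ceph/ceph.client.cinder.keyring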
Nodes running nova-compute need the keyring file for the nova-compute process:
# ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
Nodes running nova-compute also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs it to access the cluster while attaching a block device from Cinder. Create a temporary copy of the secret key on the nodes running nova-compute:
# ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
If the storage cluster contains Ceph Block Device images that use the exclusive-lock feature, ensure that all Ceph Block Device users have permissions to blacklist clients:
# ceph auth caps client.{ID} mon 'allow r, allow command "osd blacklist"' osd '{existing-OSD-user-capabilities}'
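For example, applied to the client.cinder user created earlier, the command combines the new mon capability with the OSD capabilities granted above (adjust the pool list if yours differs):
# ceph auth caps client.cinder mon 'allow r, allow command "osd blacklist"' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'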
Return to the compute node.
# ssh {your-compute-node}
Generate a UUID for the secret, and save it for configuring nova-compute later.
# uuidgen > uuid-secret.txt
You do not strictly need the same UUID on every compute node, but for platform consistency it is better to keep a single UUID across them.
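If you choose a single shared UUID, one approach is to copy the generated file to the remaining compute nodes; the hostnames below are placeholders:
# for node in compute02 compute03; do scp uuid-secret.txt ${node}:; done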
Then, on the compute nodes, add the secret key to libvirt and remove the temporary copy of the key:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>`cat uuid-secret.txt`</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
# virsh secret-define --file secret.xml
# virsh secret-set-value --secret $(cat uuid-secret.txt) --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
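As an optional verification, list the defined libvirt secrets and read the stored value back by its UUID:
# virsh secret-list
# virsh secret-get-value --secret $(cat uuid-secret.txt)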