Chapter 2. Installing and Configuring Ceph Clients
The nova-compute, cinder-backup and cinder-volume nodes require both the Python bindings and the client command line tools:
# yum install python-rbd
# yum install ceph-common
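To confirm that the packages are installed correctly, you can run a quick sanity check (not part of the original procedure; the rbd Python module is provided by python-rbd and the ceph command by ceph-common):
# python -c 'import rbd'
# ceph --version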
The glance-api node requires the Python bindings for librbd:
# yum install python-rbd
2.1. Copying Ceph Configuration File to OpenStack Nodes
The nodes running glance-api, cinder-volume, nova-compute and cinder-backup act as Ceph clients. Each requires the Ceph configuration file. Copy the Ceph configuration file from the monitor node to the OSP nodes.
# scp /etc/ceph/ceph.conf osp:/etc/ceph
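To verify that the file arrived, you can list it on the OpenStack node (assuming the osp host alias used above):
# ssh osp ls -l /etc/ceph/ceph.conf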
2.2. Setting Up Ceph Client Authentication
From a Ceph monitor node, create new users for Cinder, Cinder Backup and Glance.
# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
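You can verify a user and its capabilities from the monitor node, for example:
# ceph auth get client.cinder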
Add the keyrings for client.cinder, client.cinder-backup and client.glance to the appropriate nodes and change their ownership:
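For example, the cinder-volume, cinder-backup and glance-api nodes could receive their keyrings as follows; the {your-*-server} names are placeholder hostnames, and the commands assume the cinder and glance service accounts exist on those nodes:
# ceph auth get-or-create client.cinder | ssh {your-volume-server} tee /etc/ceph/ceph.client.cinder.keyring
# ssh {your-volume-server} chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
# ceph auth get-or-create client.cinder-backup | ssh {your-backup-server} tee /etc/ceph/ceph.client.cinder-backup.keyring
# ssh {your-backup-server} chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
# ceph auth get-or-create client.glance | ssh {your-glance-api-server} tee /etc/ceph/ceph.client.glance.keyring
# ssh {your-glance-api-server} chown glance:glance /etc/ceph/ceph.client.glance.keyring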
Nodes running nova-compute need the keyring file for the nova-compute process:
# ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} tee /etc/ceph/ceph.client.cinder.keyring
Nodes running nova-compute also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs it to access the cluster while attaching a block device from Cinder. Create a temporary copy of the secret key on the nodes running nova-compute:
# ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
If the storage cluster contains Ceph Block Device images that use the exclusive-lock feature, ensure that all Ceph Block Device users have permissions to blacklist clients:
# ceph auth caps client.{ID} mon 'allow r, allow command "osd blacklist"' osd '{existing-OSD-user-capabilities}'
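For example, to grant the client.cinder user created above the blacklist permission while restating its existing osd capabilities:
# ceph auth caps client.cinder mon 'allow r, allow command "osd blacklist"' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'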
Return to the compute node.
# ssh {your-compute-node}
Generate a UUID for the secret, and save it for configuring nova-compute later.
# uuidgen > uuid-secret.txt
You do not strictly need the same UUID on all the compute nodes, but for platform consistency it is better to keep the UUID identical across them.
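One way to keep the UUID consistent is to copy the file from the first compute node to the others ({another-compute-node} is a placeholder hostname):
# scp uuid-secret.txt {another-compute-node}: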
Then, on the compute nodes, add the secret key to libvirt and remove the temporary copy of the key.
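The virsh secret-define command expects a secret.xml file describing the secret. A minimal sketch of that file, reusing the UUID saved in uuid-secret.txt:
# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$(cat uuid-secret.txt)</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
With the file in place, define the secret, set its value, and clean up: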
# virsh secret-define --file secret.xml
# virsh secret-set-value --secret $(cat uuid-secret.txt) --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
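You can confirm that libvirt stored the secret:
# virsh secret-list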