Chapter 2. Installing and configuring Ceph for OpenStack
As a storage administrator, you must install and configure Ceph before the Red Hat OpenStack Platform can use the Ceph block devices.
2.1. Prerequisites
- A new or existing Red Hat Ceph Storage cluster.
2.2. Creating Ceph pools for OpenStack
Create Ceph pools for use with OpenStack. By default, Ceph block devices use the rbd pool, but you can use any available pool.
Prerequisites
- A running Red Hat Ceph Storage cluster.
Procedure
Verify the Red Hat Ceph Storage cluster is running and is in a HEALTH_OK state:
[root@mon ~]# ceph -s
Create the Ceph pools:
Example
[root@mon ~]# ceph osd pool create volumes 128
[root@mon ~]# ceph osd pool create backups 128
[root@mon ~]# ceph osd pool create images 128
[root@mon ~]# ceph osd pool create vms 128
In the above example, 128 is the number of placement groups.
Important
Red Hat recommends using the Ceph Placement Groups per Pool Calculator to calculate a suitable number of placement groups for the pools.
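To confirm the result, you can list the pools from the Monitor node. On Luminous-based and later releases, pools used by RBD should also be tagged with the rbd application; the application enable commands below are a hedged sketch of that follow-up step and are not part of the original procedure, so check whether your release requires them.
Example
[root@mon ~]# ceph osd lspools
[root@mon ~]# ceph osd pool application enable volumes rbd
[root@mon ~]# ceph osd pool application enable backups rbd
[root@mon ~]# ceph osd pool application enable images rbd
[root@mon ~]# ceph osd pool application enable vms rbd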
Additional Resources
- See the Pools chapter in the Storage Strategies guide for more details on creating pools.
2.3. Installing the Ceph client on OpenStack
Install the Ceph client packages on the Red Hat OpenStack Platform to access the Ceph storage cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to the Ceph software repository.
- Root-level access to the OpenStack Nova, Cinder, Cinder Backup and Glance nodes.
Procedure
On the OpenStack Nova, Cinder, and Cinder Backup nodes, install the following packages:
[root@nova ~]# yum install python-rbd
[root@nova ~]# yum install ceph-common
On the OpenStack Glance node, install the python-rbd package:
[root@glance ~]# yum install python-rbd
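To confirm the packages installed correctly, you can query them on each node. This is a quick check only; on releases that ship Python 3 builds the package may be named python3-rbd instead, so adjust the query accordingly.
[root@nova ~]# rpm -q python-rbd ceph-common
[root@nova ~]# ceph --version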
2.4. Copying the Ceph configuration file to OpenStack
Copy the Ceph configuration file to the nova-compute, cinder-backup, cinder-volume, and glance-api nodes.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to the Ceph software repository.
- Root-level access to the OpenStack Nova, Cinder, and Glance nodes.
Procedure
Copy the Ceph configuration file from the Ceph Monitor node to the OpenStack Nova, Cinder, Cinder Backup and Glance nodes:
[root@mon ~]# scp /etc/ceph/ceph.conf OPENSTACK_NODES:/etc/ceph
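If you are copying the file to several nodes, a small shell loop saves repetition. The host names below are placeholders; substitute the actual node names used in your deployment:
[root@mon ~]# for node in NOVA_NODE CINDER_VOLUME_NODE CINDER_BACKUP_NODE GLANCE_API_NODE; do scp /etc/ceph/ceph.conf ${node}:/etc/ceph/ceph.conf; done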
2.5. Configuring Ceph client authentication
Configure authentication for the Ceph client to access the Red Hat OpenStack Platform.
Prerequisites
- Root-level access to the Ceph Monitor node.
- A running Red Hat Ceph Storage cluster.
Procedure
From a Ceph Monitor node, create new users for Cinder, Cinder Backup and Glance:
[root@mon ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[root@mon ~]# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
[root@mon ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
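To confirm the users and their capabilities were created as intended, you can inspect them from the Monitor node; a quick check:
[root@mon ~]# ceph auth get client.cinder
[root@mon ~]# ceph auth get client.cinder-backup
[root@mon ~]# ceph auth get client.glance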
Add the keyrings for client.cinder, client.cinder-backup, and client.glance to the appropriate nodes and change their ownership:
[root@mon ~]# ceph auth get-or-create client.cinder | ssh CINDER_VOLUME_NODE sudo tee /etc/ceph/ceph.client.cinder.keyring
[root@mon ~]# ssh CINDER_VOLUME_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
[root@mon ~]# ceph auth get-or-create client.cinder-backup | ssh CINDER_BACKUP_NODE tee /etc/ceph/ceph.client.cinder-backup.keyring
[root@mon ~]# ssh CINDER_BACKUP_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
[root@mon ~]# ceph auth get-or-create client.glance | ssh GLANCE_API_NODE sudo tee /etc/ceph/ceph.client.glance.keyring
[root@mon ~]# ssh GLANCE_API_NODE chown glance:glance /etc/ceph/ceph.client.glance.keyring
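As a quick check, you can verify that each keyring landed with the expected ownership, for example on the Cinder volume node; the prompt below is illustrative:
[root@cinder ~]# ls -l /etc/ceph/ceph.client.cinder.keyring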
The OpenStack Nova nodes need the keyring file for the nova-compute process:
[root@mon ~]# ceph auth get-or-create client.cinder | ssh NOVA_NODE tee /etc/ceph/ceph.client.cinder.keyring
The OpenStack Nova nodes also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs the secret key to access the cluster while attaching a block device from Cinder. Create a temporary copy of the secret key on the OpenStack Nova nodes:
[root@mon ~]# ceph auth get-key client.cinder | ssh NOVA_NODE tee client.cinder.key
If the storage cluster contains Ceph block device images that use the exclusive-lock feature, ensure that all Ceph block device users have permissions to blacklist clients:
[root@mon ~]# ceph auth caps client.ID mon 'allow r, allow command "osd blacklist"' osd 'EXISTING_OSD_USER_CAPS'
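For example, applying this to the client.cinder user created earlier would look like the following. Adjust the OSD capabilities to match what you actually granted, because ceph auth caps replaces the existing capabilities rather than appending to them:
[root@mon ~]# ceph auth caps client.cinder mon 'allow r, allow command "osd blacklist"' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'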
Return to the OpenStack Nova node:
[root@mon ~]# ssh NOVA_NODE
Generate a UUID for the secret, and save the UUID of the secret for configuring nova-compute later:
[root@nova ~]# uuidgen > uuid-secret.txt
Note
You do not necessarily need the UUID on all the Nova compute nodes. However, from a platform consistency perspective, it is better to keep the same UUID.
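If you do reuse the same UUID, you can copy the file to the other Nova compute nodes; the host name below is a placeholder for your actual node name:
[root@nova ~]# scp uuid-secret.txt OTHER_NOVA_NODE:~/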
On the OpenStack Nova nodes, add the secret key to libvirt and remove the temporary copy of the key:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>`cat uuid-secret.txt`</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
Define and set the secret for libvirt:
[root@nova ~]# virsh secret-define --file secret.xml
[root@nova ~]# virsh secret-set-value --secret $(cat uuid-secret.txt) --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
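To confirm that libvirt stored the secret, you can list the defined secrets and read the value back; this is a quick check, and the exact output format varies by libvirt version:
[root@nova ~]# virsh secret-list
[root@nova ~]# virsh secret-get-value --secret $(cat uuid-secret.txt)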
Additional Resources
- See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide for more details.