Block Device to OpenStack Guide
Abstract
Configuring Ceph, QEMU, libvirt and OpenStack to use Ceph as a back end for OpenStack.
Chapter 1. Ceph block devices and OpenStack
The Red Hat Enterprise Linux OpenStack Platform Director provides two methods for using Ceph as a backend for Glance, Cinder, Cinder Backup and Nova:
- OpenStack creates the Ceph storage cluster: OpenStack Director can create a Ceph storage cluster. This requires configuring templates for the Ceph OSDs. OpenStack handles the installation and configuration of Ceph nodes. With this scenario, OpenStack will install the Ceph monitors with the OpenStack controller nodes.
- OpenStack connects to an existing Ceph storage cluster: OpenStack Director, using Red Hat OpenStack Platform 9 and higher, can connect to a Ceph monitor and configure the Ceph storage cluster for use as a backend for OpenStack.
These are the preferred methods for configuring Ceph as a back end for OpenStack, because they handle much of the installation and configuration automatically.
This document details the manual procedure for configuring Ceph, QEMU, libvirt and OpenStack to use Ceph as a back end. It is intended for those who do not intend to use the RHEL OSP Director.
A running Ceph storage cluster and at least one OpenStack node are required to use Ceph block devices as a back end for OpenStack.
Three parts of OpenStack integrate with Ceph’s block devices:
- Images: OpenStack Glance manages images for VMs. Images are immutable. OpenStack treats images as binary blobs and downloads them accordingly.
- Volumes: Volumes are block devices. OpenStack uses volumes to boot VMs, or to attach volumes to running VMs. OpenStack manages volumes using Cinder services. Ceph can serve as a back end for OpenStack Cinder and Cinder Backup.
- Guest Disks: Guest disks are guest operating system disks. By default, when booting a virtual machine, its disk appears as a file on the file system of the hypervisor, under the /var/lib/nova/instances/<uuid>/ directory. OpenStack Glance can store images in a Ceph block device, and can use Cinder to boot a virtual machine using a copy-on-write clone of an image.
Ceph does not support QCOW2 for hosting a virtual machine disk. To boot virtual machines, either with an ephemeral back end or booting from a volume, the Glance image format must be RAW.
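For example, a QCOW2 cloud image can be converted to RAW with the qemu-img tool before it is uploaded to Glance. This is only an illustration of the RAW requirement, not a step in this guide; the file names are placeholders:
[root@glance ~]# qemu-img convert -f qcow2 -O raw rhel-cloud.qcow2 rhel-cloud.raw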
OpenStack can use Ceph for images, volumes, or the guest disks of virtual machines. There is no requirement to use all three.
Additional Resources
- See the Red Hat OpenStack Platform documentation for additional details.
Chapter 2. Installing and configuring Ceph for OpenStack
As a storage administrator, you must install and configure Ceph before the Red Hat OpenStack Platform can use the Ceph block devices.
2.1. Prerequisites
- A new or existing Red Hat Ceph Storage cluster.
2.2. Creating Ceph pools for OpenStack
Create Ceph pools for use with OpenStack. By default, Ceph block devices use the rbd pool, but you can use any available pool.
Prerequisites
- A running Red Hat Ceph Storage cluster.
Procedure
Verify the Red Hat Ceph Storage cluster is running and is in a HEALTH_OK state:
[root@mon ~]# ceph -s
Create the Ceph pools:
Example
[root@mon ~]# ceph osd pool create volumes 128
[root@mon ~]# ceph osd pool create backups 128
[root@mon ~]# ceph osd pool create images 128
[root@mon ~]# ceph osd pool create vms 128
In the above example, 128 is the number of placement groups.
Important: Red Hat recommends using the Ceph Placement Groups per Pool Calculator to calculate a suitable number of placement groups for the pools.
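Optionally, you can list the pools to confirm they were created. This check is not part of the procedure above; on Ceph Luminous and later releases you may also need to tag each pool with the rbd application before use:
[root@mon ~]# ceph osd lspools
[root@mon ~]# ceph osd pool application enable volumes rbd
[root@mon ~]# ceph osd pool application enable backups rbd
[root@mon ~]# ceph osd pool application enable images rbd
[root@mon ~]# ceph osd pool application enable vms rbd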
Additional Resources
- See the Pools chapter in the Storage Strategies guide for more details on creating pools.
2.3. Installing the Ceph client on OpenStack
Install the Ceph client packages on the Red Hat OpenStack Platform to access the Ceph storage cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to the Ceph software repository.
- Root-level access to the OpenStack Nova, Cinder, Cinder Backup and Glance nodes.
Procedure
On the OpenStack Nova, Cinder, and Cinder Backup nodes, install the following packages:
[root@nova ~]# yum install python-rbd
[root@nova ~]# yum install ceph-common
On the OpenStack Glance node, install the python-rbd package:
[root@glance ~]# yum install python-rbd
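As an optional sanity check, not part of the original procedure, confirm that the client packages are in place on each node:
[root@nova ~]# rpm -q python-rbd ceph-common
[root@nova ~]# ceph --version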
2.4. Copying the Ceph configuration file to OpenStack
Copy the Ceph configuration file to the nova-compute, cinder-backup, cinder-volume, and glance-api nodes.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to the Ceph software repository.
- Root-level access to the OpenStack Nova, Cinder, and Glance nodes.
Procedure
Copy the Ceph configuration file from the Ceph Monitor node to the OpenStack Nova, Cinder, Cinder Backup and Glance nodes:
[root@mon ~]# scp /etc/ceph/ceph.conf OPENSTACK_NODES:/etc/ceph
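For example, assuming hypothetical host names nova01, cinder01, and glance01 for the OpenStack nodes (replace them with your own):
[root@mon ~]# scp /etc/ceph/ceph.conf nova01:/etc/ceph
[root@mon ~]# scp /etc/ceph/ceph.conf cinder01:/etc/ceph
[root@mon ~]# scp /etc/ceph/ceph.conf glance01:/etc/ceph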
2.5. Configuring Ceph client authentication
Configure authentication for the Ceph client to access the Red Hat OpenStack Platform.
Prerequisites
- Root-level access to the Ceph Monitor node.
- A running Red Hat Ceph Storage cluster.
Procedure
From a Ceph Monitor node, create new users for Cinder, Cinder Backup and Glance:
[root@mon ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[root@mon ~]# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
[root@mon ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
Add the keyrings for client.cinder, client.cinder-backup, and client.glance to the appropriate nodes and change their ownership:
[root@mon ~]# ceph auth get-or-create client.cinder | ssh CINDER_VOLUME_NODE sudo tee /etc/ceph/ceph.client.cinder.keyring
[root@mon ~]# ssh CINDER_VOLUME_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
[root@mon ~]# ceph auth get-or-create client.cinder-backup | ssh CINDER_BACKUP_NODE tee /etc/ceph/ceph.client.cinder-backup.keyring
[root@mon ~]# ssh CINDER_BACKUP_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
[root@mon ~]# ceph auth get-or-create client.glance | ssh GLANCE_API_NODE sudo tee /etc/ceph/ceph.client.glance.keyring
[root@mon ~]# ssh GLANCE_API_NODE chown glance:glance /etc/ceph/ceph.client.glance.keyring
OpenStack Nova nodes need the keyring file for the nova-compute process:
[root@mon ~]# ceph auth get-or-create client.cinder | ssh NOVA_NODE tee /etc/ceph/ceph.client.cinder.keyring
The OpenStack Nova nodes also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs the secret key to access the cluster while attaching a block device from Cinder. Create a temporary copy of the secret key on the OpenStack Nova nodes:
[root@mon ~]# ceph auth get-key client.cinder | ssh NOVA_NODE tee client.cinder.key
If the storage cluster contains Ceph block device images that use the exclusive-lock feature, ensure that all Ceph block device users have permissions to blacklist clients:
[root@mon ~]# ceph auth caps client.ID mon 'allow r, allow command "osd blacklist"' osd 'EXISTING_OSD_USER_CAPS'
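For example, for the client.cinder user created earlier, the command would look similar to the following; the OSD capabilities repeat the ones granted above:
[root@mon ~]# ceph auth caps client.cinder mon 'allow r, allow command "osd blacklist"' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'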
Return to the OpenStack Nova node:
[root@mon ~]# ssh NOVA_NODE
Generate a UUID for the secret, and save the UUID of the secret for configuring nova-compute later:
[root@nova ~]# uuidgen > uuid-secret.txt
Note: You do not necessarily need the UUID on all the Nova compute nodes. However, from a platform consistency perspective, it’s better to keep the same UUID.
On the OpenStack Nova nodes, add the secret key to libvirt and remove the temporary copy of the key:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>`cat uuid-secret.txt`</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
Set and define the secret for libvirt:
[root@nova ~]# virsh secret-define --file secret.xml
[root@nova ~]# virsh secret-set-value --secret $(cat uuid-secret.txt) --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
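As an optional check, not part of the original procedure, confirm that libvirt stored the secret:
[root@nova ~]# virsh secret-list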
Additional Resources
- See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide for more details.
Chapter 3. Configuring OpenStack to use Ceph block devices
As a storage administrator, you must configure the Red Hat OpenStack Platform to use the Ceph block devices. The Red Hat OpenStack Platform can use Ceph block devices for Cinder, Cinder Backup, Glance, and Nova.
3.1. Prerequisites
- A new or existing Red Hat Ceph Storage cluster.
- A running Red Hat OpenStack Platform environment.
3.2. Configuring Cinder to use Ceph block devices
The Red Hat OpenStack Platform can use Ceph block devices to provide back-end storage for Cinder volumes.
Prerequisites
- Root-level access to the Cinder node.
-
A Ceph
volume
pool. - The user and UUID of the secret to interact with Ceph block devices.
Procedure
Edit the Cinder configuration file:
[root@cinder ~]# vim /etc/cinder/cinder.conf
In the [DEFAULT] section, enable Ceph as a back end for Cinder:
enabled_backends = ceph
Ensure that the Glance API version is set to 2. If you are configuring multiple Cinder back ends in enabled_backends, the glance_api_version = 2 setting must be in the [DEFAULT] section and not the [ceph] section.
glance_api_version = 2
- Create a [ceph] section in the cinder.conf file. Add the Ceph settings in the following steps under the [ceph] section.
Specify the volume_driver setting and set it to use the Ceph block device driver:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
Specify the cluster name and Ceph configuration file location. In typical deployments the Ceph cluster has a cluster name of ceph and a Ceph configuration file at /etc/ceph/ceph.conf. If the Ceph cluster name is not ceph, specify the cluster name and configuration file path appropriately:
rbd_cluster_name = us-west
rbd_ceph_conf = /etc/ceph/us-west.conf
By default, Red Hat OpenStack Platform stores Ceph volumes in the rbd pool. To use the volumes pool created earlier, specify the rbd_pool setting and set the volumes pool:
rbd_pool = volumes
Red Hat OpenStack Platform does not have a default user name or a UUID of the secret for volumes. Specify rbd_user and set it to the cinder user. Then, specify the rbd_secret_uuid setting and set it to the generated UUID stored in the uuid-secret.txt file:
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
Specify the following settings:
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
When you configure Cinder to use Ceph block devices, the configuration file might look similar to this:
Example
[DEFAULT]
enabled_backends = ceph
glance_api_version = 2
…
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
Note: Consider removing the default [lvm] section and its settings.
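As an optional check, not part of the original procedure, after restarting the cinder-volume service you can confirm that the Ceph back end reports as up. This assumes the OpenStack command-line client is installed and admin credentials are sourced:
[root@cinder ~]# openstack volume service list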
3.3. Configuring Cinder backup to use Ceph block devices
The Red Hat OpenStack Platform can configure Cinder backup to use Ceph block devices.
Prerequisites
- Root-level access to the Cinder node.
Procedure
Edit the Cinder configuration file:
[root@cinder ~]# vim /etc/cinder/cinder.conf
- Go to the [ceph] section of the configuration file.
Specify the backup_driver setting and set it to the Ceph driver:
backup_driver = cinder.backup.drivers.ceph
Specify the backup_ceph_conf setting and specify the path to the Ceph configuration file:
backup_ceph_conf = /etc/ceph/ceph.conf
Note: The Cinder backup Ceph configuration file may be different from the Ceph configuration file used for Cinder. For example, it can point to a different Ceph storage cluster.
Specify the Ceph pool for backups:
backup_ceph_pool = backups
Note: The Ceph configuration file used for Cinder backup might be different from the Ceph configuration file used for Cinder.
Specify the backup_ceph_user setting and specify the user as cinder-backup:
backup_ceph_user = cinder-backup
Specify the following settings:
backup_ceph_chunk_size = 134217728
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
When you include the Cinder options, the [ceph] section of the cinder.conf file might look similar to this:
Example
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
backup_driver = cinder.backup.drivers.ceph
backup_ceph_user = cinder-backup
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
Verify that Cinder backup is enabled:
[root@cinder ~]# cat /etc/openstack-dashboard/local_settings | grep enable_backup
If enable_backup is set to False, then edit the local_settings file and set it to True.
Example
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': True,
}
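As an optional end-to-end check, not part of the original procedure, back up an existing volume and confirm that a backup image appears in the backups pool; VOLUME_ID is a placeholder for an existing Cinder volume:
[root@cinder ~]# cinder backup-create VOLUME_ID
[root@mon ~]# rbd ls backups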
3.4. Configuring Glance to use Ceph block devices
The Red Hat OpenStack Platform can configure Glance to use Ceph block devices.
Prerequisites
- Root-level access to the Glance node.
Procedure
To use Ceph block devices by default, edit the /etc/glance/glance-api.conf file. If you used different pool, user, or Ceph configuration file settings, apply the appropriate values. Uncomment the following settings if necessary and change their values accordingly:
[root@glance ~]# vim /etc/glance/glance-api.conf
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
To enable copy-on-write (CoW) cloning, set show_image_direct_url to True:
show_image_direct_url = True
Important: Enabling CoW exposes the back end location via Glance’s API, so the endpoint should not be publicly accessible.
Disable cache management if necessary. The flavor should be set to keystone only, not keystone+cachemanagement:
flavor = keystone
Red Hat recommends the following properties for images:
hw_scsi_model=virtio-scsi
hw_disk_bus=scsi
hw_qemu_guest_agent=yes
os_require_quiesce=yes
The virtio-scsi controller gets better performance and provides support for discard operations. For systems using SCSI/SAS drives, connect every Cinder block device to that controller. Also, enable the QEMU guest agent and send fs-freeze/thaw calls through the QEMU guest agent.
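A hedged example of uploading a RAW image to Glance with the recommended properties; the image file name and image name are placeholders, and the command assumes the OpenStack command-line client is available:
[root@glance ~]# openstack image create --disk-format raw --container-format bare \
  --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi \
  --property hw_qemu_guest_agent=yes --property os_require_quiesce=yes \
  --file rhel-cloud.raw rhel-cloud-image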
3.5. Configuring Nova to use Ceph block devices
The Red Hat OpenStack Platform can configure Nova to use Ceph block devices.
You must configure each Nova node to use ephemeral back-end storage devices, which allows all virtual machines to use the Ceph block devices.
Prerequisites
- Root-level access to the Nova nodes.
Procedure
Edit the Ceph configuration file:
[root@nova ~]# vim /etc/ceph/ceph.conf
Add the following to the [client] section of the Ceph configuration file:
[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd concurrent management ops = 20
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/qemu-guest-$pid.log
Create new directories for the admin socket and log file, and change the directory permissions to use the qemu user and libvirt group:
[root@nova ~]# mkdir -p /var/run/ceph/guests/ /var/log/ceph/
[root@nova ~]# chown qemu:libvirt /var/run/ceph/guests /var/log/ceph/
Note: The directories must be allowed by SELinux or AppArmor.
On each Nova node, edit the /etc/nova/nova.conf file. Under the [libvirt] section, configure the following settings:
Example
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
disk_cachemodes="network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
hw_disk_discard = unmap
Replace the UUID in rbd_secret_uuid with the UUID in the uuid-secret.txt file.
3.6. Restarting the OpenStack services
Restarting the Red Hat OpenStack Platform services enables you to activate the Ceph block device drivers.
Prerequisites
- Root-level access to the Red Hat OpenStack Platform nodes.
Procedure
- Load the block device pool names and Ceph user names into the configuration file.
Restart the appropriate OpenStack services after modifying the corresponding configuration files:
[root@osp ~]# systemctl restart openstack-cinder-volume
[root@osp ~]# systemctl restart openstack-cinder-backup
[root@osp ~]# systemctl restart openstack-glance-api
[root@osp ~]# systemctl restart openstack-nova-compute
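As an optional hedged verification of the overall setup: after the services restart, upload an image, create a volume, and boot an instance, then list the pools from a Ceph Monitor node to confirm that RBD images are being created in them:
[root@mon ~]# rbd ls images
[root@mon ~]# rbd ls volumes
[root@mon ~]# rbd ls vms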