Chapter 3. Configuring OpenStack to use Ceph block devices
As a storage administrator, you must configure Red Hat OpenStack Platform to use Ceph block devices. Red Hat OpenStack Platform can use Ceph block devices for Cinder, Cinder Backup, Glance, and Nova.
3.1. Prerequisites
- A new or existing Red Hat Ceph Storage cluster.
- A running Red Hat OpenStack Platform environment.
3.2. Configuring Cinder to use Ceph block devices
The Red Hat OpenStack Platform can use Ceph block devices to provide back-end storage for Cinder volumes.
Prerequisites
- Root-level access to the Cinder node.
- A Ceph volume pool.
- The user and UUID of the secret to interact with Ceph block devices.
Procedure
Edit the Cinder configuration file:
[root@cinder ~]# vim /etc/cinder/cinder.conf
In the [DEFAULT] section, enable Ceph as a backend for Cinder:
enabled_backends = ceph
Ensure that the Glance API version is set to 2. If you are configuring multiple Cinder back ends in enabled_backends, the glance_api_version = 2 setting must be in the [DEFAULT] section and not the [ceph] section:
glance_api_version = 2
Create a [ceph] section in the cinder.conf file. Add the Ceph settings in the following steps under the [ceph] section.
Specify the volume_driver setting and set it to use the Ceph block device driver:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
Specify the cluster name and Ceph configuration file location. In typical deployments the Ceph cluster has a cluster name of ceph and a Ceph configuration file at /etc/ceph/ceph.conf. If the Ceph cluster name is not ceph, specify the cluster name and configuration file path appropriately:
rbd_cluster_name = us-west
rbd_ceph_conf = /etc/ceph/us-west.conf
By default, Red Hat OpenStack Platform stores Ceph volumes in the rbd pool. To use the volumes pool created earlier, specify the rbd_pool setting and set the volumes pool:
rbd_pool = volumes
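Optionally, you can confirm that the volumes pool exists and that the cinder Ceph user can reach it before continuing. For example, assuming the client.cinder keyring is installed on the Cinder node, a check might look like this:
# Assumes the client.cinder keyring is at /etc/ceph/ceph.client.cinder.keyring
[root@cinder ~]# rbd ls volumes --id cinder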
Red Hat OpenStack Platform does not have a default user name or a UUID of the secret for volumes. Specify rbd_user and set it to the cinder user. Then, specify the rbd_secret_uuid setting and set it to the generated UUID stored in the uuid-secret.txt file:
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
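If you are unsure of the UUID value, you can compare the contents of the uuid-secret.txt file with the libvirt secret that was defined earlier on the Nova compute nodes, for example:
# The UUID set in cinder.conf should match the libvirt secret on the compute nodes
[root@cinder ~]# cat uuid-secret.txt
[root@nova ~]# virsh secret-list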
Specify the following settings:
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
When you configure Cinder to use Ceph block devices, the configuration file might look similar to this:
Example
[DEFAULT]
enabled_backends = ceph
glance_api_version = 2
…
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
Note: Consider removing the default [lvm] section and its settings.
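Once the Cinder services have been restarted (see Section 3.6), an optional smoke test is to create a small volume and confirm that the corresponding RBD image appears in the volumes pool. The following sketch assumes admin credentials are sourced and the client.cinder keyring is available:
# Create a 1 GB test volume, then list the RBD images in the volumes pool;
# the new image appears as volume-<UUID>
[root@cinder ~]# openstack volume create --size 1 ceph-test-volume
[root@cinder ~]# rbd ls volumes --id cinder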
3.3. Configuring Cinder backup to use Ceph block devices
The Red Hat OpenStack Platform can configure Cinder backup to use Ceph block devices.
Prerequisites
- Root-level access to the Cinder node.
Procedure
Edit the Cinder configuration file:
[root@cinder ~]# vim /etc/cinder/cinder.conf
Go to the [ceph] section of the configuration file.
Specify the backup_driver setting and set it to the Ceph driver:
backup_driver = cinder.backup.drivers.ceph
Specify the backup_ceph_conf setting and specify the path to the Ceph configuration file:
backup_ceph_conf = /etc/ceph/ceph.conf
Note: The Cinder backup Ceph configuration file may be different from the Ceph configuration file used for Cinder. For example, it can point to a different Ceph storage cluster.
Specify the Ceph pool for backups:
backup_ceph_pool = backups
Specify the backup_ceph_user setting and specify the user as cinder-backup:
backup_ceph_user = cinder-backup
Specify the following settings:
backup_ceph_chunk_size = 134217728
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
When you include the Cinder options, the [ceph] section of the cinder.conf file might look similar to this:
Example
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
backup_driver = cinder.backup.drivers.ceph
backup_ceph_user = cinder-backup
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
Verify if Cinder backup is enabled:
[root@cinder ~]# cat /etc/openstack-dashboard/local_settings | grep enable_backup
If enable_backup is set to False, then edit the local_settings file and set it to True.
Example
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': True,
}
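After the Cinder backup service is restarted (see Section 3.6), you can optionally confirm that backups land in the backups pool. The following sketch assumes admin credentials are sourced, an existing volume named ceph-test-volume, and the client.cinder-backup keyring on the node where rbd is run:
# Back up an existing volume, then list the backups pool
[root@cinder ~]# openstack volume backup create --name ceph-test-backup ceph-test-volume
[root@cinder ~]# rbd ls backups --id cinder-backup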
3.4. Configuring Glance to use Ceph block devices
The Red Hat OpenStack Platform can configure Glance to use Ceph block devices.
Prerequisites
- Root-level access to the Glance node.
Procedure
To use Ceph block devices by default, edit the /etc/glance/glance-api.conf file. If you used different pool, user, or Ceph configuration file settings, apply the appropriate values. Uncomment the following settings if necessary and change their values accordingly:
[root@glance ~]# vim /etc/glance/glance-api.conf
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
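After the Glance API service is restarted (see Section 3.6), you can optionally upload a test image and confirm that it is stored in the images pool. The sketch below assumes admin credentials are sourced, a local cirros.raw file, and the client.glance keyring; raw format is used because copy-on-write cloning to Cinder and Nova works with raw images:
# Upload a raw image, then list the RBD images in the images pool
[root@glance ~]# openstack image create --disk-format raw --container-format bare \
    --file cirros.raw cirros-rbd-test
[root@glance ~]# rbd ls images --id glance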
To enable copy-on-write (CoW) cloning, set show_image_direct_url to True:
show_image_direct_url = True
Important: Enabling CoW exposes the back end location via Glance’s API, so the endpoint should not be publicly accessible.
Disable cache management if necessary. The flavor should be set to keystone only, not keystone+cachemanagement:
flavor = keystone
Red Hat recommends the following properties for images:
hw_scsi_model=virtio-scsi
hw_disk_bus=scsi
hw_qemu_guest_agent=yes
os_require_quiesce=yes
The virtio-scsi controller provides better performance and supports discard operations. For systems using SCSI/SAS drives, connect every Cinder block device to that controller. Also, enable the QEMU guest agent and send fs-freeze/thaw calls through the QEMU guest agent.
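For example, the properties can be applied to an existing image with the OpenStack client; the image name cirros-rbd-test below is a placeholder:
# Apply the recommended image properties to a placeholder image
[root@glance ~]# openstack image set \
    --property hw_scsi_model=virtio-scsi \
    --property hw_disk_bus=scsi \
    --property hw_qemu_guest_agent=yes \
    --property os_require_quiesce=yes \
    cirros-rbd-test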
3.5. Configuring Nova to use Ceph block devices
The Red Hat OpenStack Platform can configure Nova to use Ceph block devices.
You must configure each Nova node to use ephemeral back-end storage devices, which allows all virtual machines to use the Ceph block devices.
Prerequisites
- Root-level access to the Nova nodes.
Procedure
Edit the Ceph configuration file:
[root@nova ~]# vim /etc/ceph/ceph.conf
Add the following settings to the [client] section of the Ceph configuration file:
[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd concurrent management ops = 20
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/qemu-guest-$pid.log
Create new directories for the admin socket and log file, and change the directory permissions to use the qemu user and libvirt group:
[root@nova ~]# mkdir -p /var/run/ceph/guests/ /var/log/ceph/
[root@nova ~]# chown qemu:libvirt /var/run/ceph/guests /var/log/ceph/
Note: The directories must be allowed by SELinux or AppArmor.
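On SELinux-enabled hosts, one way to reapply the default file contexts to the new directories is with restorecon; depending on your policy, additional adjustments might still be required:
# Reapply default SELinux contexts to the new directories
[root@nova ~]# restorecon -Rv /var/run/ceph/guests /var/log/ceph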
On each Nova node, edit the /etc/nova/nova.conf file. Under the [libvirt] section, configure the following settings:
Example
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
disk_cachemodes="network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
hw_disk_discard = unmap
Replace the UUID in rbd_secret_uuid with the UUID in the uuid-secret.txt file.
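After the Nova compute service is restarted (see Section 3.6) and a test instance is launched, its disk should appear as an RBD image in the vms pool. A quick check, assuming the client.cinder keyring is available on the compute node, might look like this:
# Instance disks show up as <instance-UUID>_disk images in the vms pool
[root@nova ~]# rbd ls vms --id cinder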
3.6. Restarting the OpenStack services
Restarting the Red Hat OpenStack Platform services enables you to activate the Ceph block device drivers.
Prerequisites
- Root-level access to the Red Hat OpenStack Platform nodes.
Procedure
- Load the block device pool names and Ceph user names into the configuration file.
- Restart the appropriate OpenStack services after modifying the corresponding configuration files:
[root@osp ~]# systemctl restart openstack-cinder-volume
[root@osp ~]# systemctl restart openstack-cinder-backup
[root@osp ~]# systemctl restart openstack-glance-api
[root@osp ~]# systemctl restart openstack-nova-compute
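To confirm that the services restarted cleanly, you can check their status on the nodes where they run, for example:
# Check service status on the relevant nodes
[root@osp ~]# systemctl status openstack-cinder-volume openstack-cinder-backup
[root@osp ~]# systemctl status openstack-glance-api openstack-nova-compute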