Chapter 3. Configuring OpenStack to Use Ceph
3.1. Configuring Cinder
The cinder-volume nodes require the Ceph block device driver, the volume pool, the user, and the UUID of the secret to interact with Ceph block devices. To configure Cinder, perform the following steps:
Open the Cinder configuration file.
# vim /etc/cinder/cinder.conf
In the [DEFAULT] section, enable Ceph as a back end for Cinder.
enabled_backends = ceph
Ensure that the Glance API version is set to 2. If you are configuring multiple Cinder back ends in enabled_backends, the glance_api_version = 2 setting must be in the [DEFAULT] section and not the [ceph] section.
glance_api_version = 2
Create a [ceph] section in the cinder.conf file. Add the Ceph settings in the following steps under the [ceph] section.
Specify the volume_driver setting and set it to use the Ceph block device driver. For example:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
Specify the cluster name and Ceph configuration file location. In typical deployments, the Ceph cluster has the cluster name ceph and a Ceph configuration file at /etc/ceph/ceph.conf. If the Ceph cluster name is not ceph, specify the cluster name and configuration file path appropriately. For example:
rbd_cluster_name = us-west
rbd_ceph_conf = /etc/ceph/us-west.conf
By default, OSP stores Ceph volumes in the rbd pool. To use the volumes pool created earlier, specify the rbd_pool setting and set it to the volumes pool. For example:
rbd_pool = volumes
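If the volumes pool was not created in an earlier chapter, it can be created from a Ceph monitor node with a command like the following sketch; the placement group count of 128 is only an example and should be sized for your cluster:
# ceph osd pool create volumes 128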
OSP does not have a default user name or a UUID of the secret for volumes. Specify rbd_user and set it to the cinder user. Then, specify the rbd_secret_uuid setting and set it to the generated UUID stored in the uuid-secret.txt file. For example:
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
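The UUID in uuid-secret.txt is assumed to have been generated and registered as a libvirt secret in an earlier chapter. As a reminder, the typical sequence looks something like the following sketch; the client.cinder key file path is illustrative:
# uuidgen > uuid-secret.txt
# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$(cat uuid-secret.txt)</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
# virsh secret-define --file secret.xml
# virsh secret-set-value --secret $(cat uuid-secret.txt) --base64 $(cat /etc/ceph/client.cinder.key)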
Specify the following settings:
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
The resulting configuration should look something like this:
[DEFAULT]
enabled_backends = ceph
glance_api_version = 2
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
Consider removing the default [lvm] section and its settings.
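After restarting the Cinder services (see Section 3.5), one hedged way to verify the Ceph back end is to create a test volume and confirm that a corresponding volume-<UUID> image appears in the volumes pool. The volume name below is illustrative, and depending on the cinder client version the name option is --name or --display-name:
# cinder create --name test-volume 1
# rbd -p volumes ls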
3.2. Configuring Cinder Backup
The cinder-backup node runs a separate daemon, openstack-cinder-backup, and requires its own Ceph settings. To configure Cinder backup, perform the following steps:
Open the Cinder configuration file.
# vim /etc/cinder/cinder.conf
Go to the [ceph] section of the configuration file.
Specify the backup_driver setting and set it to the Ceph driver:
backup_driver = cinder.backup.drivers.ceph
Specify the backup_ceph_conf setting and specify the path to the Ceph configuration file:
backup_ceph_conf = /etc/ceph/ceph.conf
Note: The Cinder backup Ceph configuration file may be different from the Ceph configuration file used for Cinder. For example, it may point to a different Ceph cluster.
Specify the Ceph pool for backups.
backup_ceph_pool = backups
Note: While it is possible to use the same pool for Cinder backups as for Cinder volumes, it is NOT recommended. Consider using a pool with a different CRUSH hierarchy, as sketched below.
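A hedged sketch of placing the backups pool on a separate CRUSH hierarchy, assuming a CRUSH rule named backups_rule already exists; the placement group count is illustrative, and on Ceph releases before Luminous the option is crush_ruleset with a numeric ruleset ID instead of crush_rule:
# ceph osd pool create backups 128
# ceph osd pool set backups crush_rule backups_rule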
Specify the backup_ceph_user setting and set the user to cinder-backup:
backup_ceph_user = cinder-backup
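If the cinder-backup Ceph user was not created in an earlier chapter, it can be created on a monitor node with capabilities limited to the backups pool, along the lines of the upstream Ceph guidance; the keyring path is illustrative:
# ceph auth get-or-create client.cinder-backup mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups' \
  -o /etc/ceph/ceph.client.cinder-backup.keyring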
Specify the following settings:
backup_ceph_chunk_size = 134217728
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
With the Cinder settings included, the [ceph] section of the cinder.conf file should look something like this:
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = ceph
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
backup_driver = cinder.backup.drivers.ceph
backup_ceph_user = cinder-backup
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
Check to see if Cinder backup is enabled under /etc/openstack-dashboard/. The setting should be in a file called local_settings, or local_settings.py. For example:
cat /etc/openstack-dashboard/local_settings | grep enable_backup
If enable_backup is set to False, set it to True. For example:
OPENSTACK_CINDER_FEATURES = {
'enable_backup': True,
}
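If you change local_settings, the dashboard web server must be restarted to pick up the new value; on RHEL-based deployments the dashboard runs under httpd:
# systemctl restart httpd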
3.3. Configuring Glance
To use Ceph block devices by default, edit the /etc/glance/glance-api.conf file. Uncomment the following settings if necessary and change their values accordingly. If you used different pool, user, or Ceph configuration file settings, apply the appropriate values.
# vim /etc/glance/glance-api.conf
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
To enable copy-on-write (CoW) cloning, set show_image_direct_url to True.
show_image_direct_url = True
Enabling CoW exposes the back end location via Glance’s API, so the endpoint should not be publicly accessible.
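CoW cloning from Glance images works only with images stored in raw format. A hedged example of uploading a raw image with the glance client follows; the image name and file name are illustrative:
# glance image-create --name rhel7 --disk-format raw \
  --container-format bare --file rhel7.raw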
Disable cache management if necessary. The flavor should be set to keystone only, not keystone+cachemanagement.
flavor = keystone
Red Hat recommends the following properties for images:
hw_scsi_model=virtio-scsi
hw_disk_bus=scsi
hw_qemu_guest_agent=yes
os_require_quiesce=yes
The virtio-scsi controller provides better performance and supports discard operations. For systems using SCSI/SAS drives, connect every Cinder block device to that controller. Also, enable the QEMU guest agent and send fs-freeze/thaw calls through the QEMU guest agent.
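These properties can be set when the image is created or added afterwards. A hedged example using the openstack client, where IMAGE_ID stands in for the ID of an existing image:
# openstack image set \
  --property hw_scsi_model=virtio-scsi \
  --property hw_disk_bus=scsi \
  --property hw_qemu_guest_agent=yes \
  --property os_require_quiesce=yes \
  IMAGE_ID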
3.4. Configuring Nova
On every nova-compute node, edit the Ceph configuration file to configure the ephemeral back end for Nova and to boot all the virtual machines directly into Ceph.
Open the Ceph configuration file.
# vim /etc/ceph/ceph.conf
Add the following settings to the [client] section of the Ceph configuration file:
[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd concurrent management ops = 20
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/qemu-guest-$pid.log
Make directories for the admin socket and log file, and change their permissions to use the qemu user and libvirt group.
# mkdir -p /var/run/ceph/guests/ /var/log/ceph/
# chown qemu:libvirt /var/run/ceph/guests /var/log/ceph/
Note: The directories must be allowed by SELinux or AppArmor.
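A hedged way to confirm that SELinux is not blocking QEMU access to the new directories is to inspect their security contexts and look for recent AVC denials:
# ls -dZ /var/run/ceph/guests/ /var/log/ceph/
# ausearch -m AVC -ts recent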
On every nova-compute node, edit the /etc/nova/nova.conf file under the [libvirt] section and configure the following settings:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
disk_cachemodes="network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
hw_disk_discard = unmap
If the Ceph configuration file is not /etc/ceph/ceph.conf, provide the correct path. Replace the UUID in rbd_secret_uuid with the UUID in the uuid-secret.txt file.
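After nova-compute is restarted (see Section 3.5), a hedged way to verify that ephemeral disks land in Ceph is to boot an instance and then, from a node with access to the Ceph cluster, list the vms pool; images named after the instance UUID with a _disk suffix should appear:
# rbd -p vms ls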
3.5. Restarting OpenStack Services
To activate the Ceph block device drivers and load the block device pool names and Ceph user names into the configuration, restart the appropriate OpenStack services after modifying the corresponding configuration files.
# systemctl restart openstack-cinder-volume
# systemctl restart openstack-cinder-backup
# systemctl restart openstack-glance-api
# systemctl restart openstack-nova-compute
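After the restart, a hedged check that the Cinder services registered their Ceph back ends is to list the volume services and confirm that the cinder-volume and cinder-backup entries report an up state:
# cinder service-list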