Block Device to OpenStack Guide
Abstract
Configuring Ceph, QEMU, libvirt and OpenStack to use Ceph as a back end for OpenStack.
Chapter 1. Ceph block devices and OpenStack
The Red Hat Enterprise Linux OpenStack Platform Director provides two methods for using Ceph as a backend for Glance, Cinder, Cinder Backup and Nova:
- OpenStack creates the Ceph storage cluster: OpenStack Director can create a Ceph storage cluster. This requires configuring templates for the Ceph OSDs. OpenStack handles the installation and configuration of Ceph hosts. With this scenario, OpenStack will install the Ceph monitors with the OpenStack controller hosts.
- OpenStack connects to an existing Ceph storage cluster: OpenStack Director, using Red Hat OpenStack Platform 9 and higher, can connect to a Ceph monitor and configure the Ceph storage cluster for use as a backend for OpenStack.
The foregoing are the preferred methods for configuring Ceph as a back end for OpenStack, because they handle much of the installation and configuration automatically.
This document details the manual procedure for configuring Ceph, QEMU, libvirt and OpenStack to use Ceph as a back end. It is intended for those who do not intend to use the RHEL OSP Director.
A running Ceph storage cluster and at least one OpenStack host is required to use Ceph block devices as a backend for OpenStack.
Three parts of OpenStack integrate with Ceph’s block devices:
- Images: OpenStack Glance manages images for VMs. Images are immutable. OpenStack treats images as binary blobs and downloads them accordingly.
- Volumes: Volumes are block devices. OpenStack uses volumes to boot VMs, or to attach volumes to running VMs. OpenStack manages volumes using Cinder services. Ceph can serve as a back end for OpenStack Cinder and Cinder Backup.
- Guest Disks: Guest disks are guest operating system disks. By default, when booting a virtual machine, its disk appears as a file on the file system of the hypervisor, under the /var/lib/nova/instances/<uuid>/ directory. OpenStack Glance can store images in a Ceph block device, and can use Cinder to boot a virtual machine using a copy-on-write clone of an image.
Ceph does not support QCOW2 for hosting a virtual machine disk. To boot virtual machines, whether from an ephemeral back end or from a volume, the Glance image format must be RAW.
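As an illustration, an existing QCOW2 image can be converted to RAW before uploading it to Glance. This is a minimal sketch, with cirros.qcow2 as a hypothetical source image; adjust the file and image names for your environment:
qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw
openstack image create --disk-format raw --container-format bare --file cirros.raw cirros-raw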
OpenStack can use Ceph for images, volumes, or the guest disks of virtual machines. There is no requirement to use all three.
Additional Resources
- See the Red Hat OpenStack Platform documentation for additional details.
Chapter 2. Installing and configuring Ceph for OpenStack
As a storage administrator, you must install and configure Ceph before the Red Hat OpenStack Platform can use the Ceph block devices.
Prerequisites
- A new or existing Red Hat Ceph Storage cluster.
2.1. Creating Ceph pools for OpenStack
You can create Ceph pools for use with OpenStack. By default, Ceph block devices use the rbd pool, but you can use any available pool.
Prerequisites
- A running Red Hat Ceph Storage cluster.
Procedure
Verify the Red Hat Ceph Storage cluster is running, and is in a HEALTH_OK state:
[root@mon ~]# ceph -s
Create the Ceph pools:
Example
[root@mon ~]# ceph osd pool create volumes 128
[root@mon ~]# ceph osd pool create backups 128
[root@mon ~]# ceph osd pool create images 128
[root@mon ~]# ceph osd pool create vms 128
In the above example, 128 is the number of placement groups.
Important
Red Hat recommends using the Ceph Placement Groups per Pool Calculator to calculate a suitable number of placement groups for the pools.
Additional Resources
- See the Pools chapter in the Storage Strategies guide for more details on creating pools.
2.2. Installing the Ceph client on OpenStack
You can install the Ceph client packages on the Red Hat OpenStack Platform to access the Ceph storage cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to the Ceph software repository.
- Root-level access to the OpenStack Nova, Cinder, Cinder Backup and Glance nodes.
Procedure
On the OpenStack Nova, Cinder, and Cinder Backup nodes, install the following packages:
[root@nova ~]# dnf install python-rbd ceph-common
On the OpenStack Glance host, install the python-rbd package:
[root@glance ~]# dnf install python-rbd
2.3. Copying the Ceph configuration file to OpenStack
Copy the Ceph configuration file to the nova-compute, cinder-backup, cinder-volume, and glance-api nodes.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Access to the Ceph software repository.
- Root-level access to the OpenStack Nova, Cinder, and Glance nodes.
Procedure
Copy the Ceph configuration file from the Ceph Monitor host to the OpenStack Nova, Cinder, Cinder Backup and Glance nodes:
[root@mon ~]# scp /etc/ceph/ceph.conf OPENSTACK_NODES:/etc/ceph
2.4. Configuring Ceph client authentication
You can configure authentication for the Ceph client so that the Red Hat OpenStack Platform can access the Red Hat Ceph Storage cluster.
Prerequisites
- Root-level access to the Ceph Monitor host.
- A running Red Hat Ceph Storage cluster.
Procedure
From a Ceph Monitor host, create new users for Cinder, Cinder Backup and Glance:
[root@mon ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[root@mon ~]# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
[root@mon ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
Add the keyrings for client.cinder, client.cinder-backup, and client.glance to the appropriate nodes and change their ownership:
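The commands for this step are missing from this copy. A minimal sketch follows, with CINDER_NODE, CINDER_BACKUP_NODE, and GLANCE_NODE as placeholder host names; substitute the host names of your deployment:
[root@mon ~]# ceph auth get-or-create client.cinder | ssh CINDER_NODE tee /etc/ceph/ceph.client.cinder.keyring
[root@mon ~]# ssh CINDER_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
[root@mon ~]# ceph auth get-or-create client.cinder-backup | ssh CINDER_BACKUP_NODE tee /etc/ceph/ceph.client.cinder-backup.keyring
[root@mon ~]# ssh CINDER_BACKUP_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
[root@mon ~]# ceph auth get-or-create client.glance | ssh GLANCE_NODE tee /etc/ceph/ceph.client.glance.keyring
[root@mon ~]# ssh GLANCE_NODE chown glance:glance /etc/ceph/ceph.client.glance.keyring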
OpenStack Nova nodes need the keyring file for the nova-compute process:
[root@mon ~]# ceph auth get-or-create client.cinder | ssh NOVA_NODE tee /etc/ceph/ceph.client.cinder.keyring
The OpenStack Nova nodes also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs the secret key to access the cluster while attaching a block device from Cinder. Create a temporary copy of the secret key on the OpenStack Nova nodes:
[root@mon ~]# ceph auth get-key client.cinder | ssh NOVA_NODE tee client.cinder.key
If the storage cluster contains Ceph block device images that use the exclusive-lock feature, ensure that all Ceph block device users have permissions to blocklist clients:
[root@mon ~]# ceph auth caps client.ID mon 'allow r, allow command "osd blacklist"' osd 'EXISTING_OSD_USER_CAPS'
Return to the OpenStack Nova host:
[root@mon ~]# ssh NOVA_NODE
Generate a UUID for the secret, and save the UUID of the secret for configuring nova-compute later:
[root@nova ~]# uuidgen > uuid-secret.txt
Note
You do not necessarily need the UUID on all the Nova compute nodes. However, from a platform consistency perspective, it is better to keep the same UUID.
On the OpenStack Nova nodes, add the secret key to libvirt and remove the temporary copy of the key:
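The code block for this step is missing from this copy. A minimal sketch of a libvirt secret definition follows, assuming the UUID stored in uuid-secret.txt from the previous step:
[root@nova ~]# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$(cat uuid-secret.txt)</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF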
Set and define the secret for libvirt:
[root@nova ~]# virsh secret-define --file secret.xml
[root@nova ~]# virsh secret-set-value --secret $(cat uuid-secret.txt) --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
Chapter 3. Configuring OpenStack to use Ceph block devices
As a storage administrator, you must configure the Red Hat OpenStack Platform to use the Ceph block devices. The Red Hat OpenStack Platform can use Ceph block devices for Cinder, Cinder Backup, Glance, and Nova.
Prerequisites
- A new or existing Red Hat Ceph Storage cluster.
- A running Red Hat OpenStack Platform environment.
3.1. Configuring Cinder to use Ceph block devices
The Red Hat OpenStack Platform can use Ceph block devices to provide back-end storage for Cinder volumes.
Prerequisites
- Root-level access to the Cinder node.
- A Ceph volumes pool.
- The user and UUID of the secret to interact with Ceph block devices.
Procedure
Edit the Cinder configuration file:
[root@cinder ~]# vim /etc/cinder/cinder.conf
In the [DEFAULT] section, enable Ceph as a back end for Cinder:
enabled_backends = ceph
Ensure that the Glance API version is set to 2. If you are configuring multiple Cinder back ends in enabled_backends, the glance_api_version = 2 setting must be in the [DEFAULT] section and not the [ceph] section:
glance_api_version = 2
Create a [ceph] section in the cinder.conf file. Add the Ceph settings in the following steps under the [ceph] section.
Specify the volume_driver setting and set it to use the Ceph block device driver:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
Specify the cluster name and Ceph configuration file location. In typical deployments the Ceph cluster has a cluster name of ceph and a Ceph configuration file at /etc/ceph/ceph.conf. If the Ceph cluster name is not ceph, specify the cluster name and configuration file path appropriately:
rbd_cluster_name = us-west
rbd_ceph_conf = /etc/ceph/us-west.conf
By default, Red Hat OpenStack Platform stores Ceph volumes in the rbd pool. To use the volumes pool created earlier, specify the rbd_pool setting and set the volumes pool:
rbd_pool = volumes
Red Hat OpenStack Platform does not have a default user name or a UUID of the secret for volumes. Specify rbd_user and set it to the cinder user. Then, specify the rbd_secret_uuid setting and set it to the generated UUID stored in the uuid-secret.txt file:
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
Specify the following settings:
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
When you configure Cinder to use Ceph block devices, the configuration file might look similar to this:
Example
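The full example file is missing from this copy; the following is a minimal sketch assembled from the settings in the previous steps, assuming the default ceph cluster name and the UUID generated earlier:
[DEFAULT]
enabled_backends = ceph
glance_api_version = 2

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = ceph
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1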
Note
Consider removing the default [lvm] section and its settings.
3.2. Configuring Cinder backup to use Ceph block devices
The Red Hat OpenStack Platform can configure Cinder backup to use Ceph block devices.
Prerequisites
- Root-level access to the Cinder node.
Procedure
Edit the Cinder configuration file:
[root@cinder ~]# vim /etc/cinder/cinder.conf
Go to the [ceph] section of the configuration file.
Specify the backup_driver setting and set it to the Ceph driver:
backup_driver = cinder.backup.drivers.ceph
Specify the backup_ceph_conf setting and specify the path to the Ceph configuration file:
backup_ceph_conf = /etc/ceph/ceph.conf
Note
The Cinder backup Ceph configuration file may be different from the Ceph configuration file used for Cinder. For example, it can point to a different Ceph storage cluster.
Specify the Ceph pool for backups:
backup_ceph_pool = backups
Specify the backup_ceph_user setting and specify the user as cinder-backup:
backup_ceph_user = cinder-backup
Specify the following settings:
backup_ceph_chunk_size = 134217728
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
When you include the Cinder backup options, the [ceph] section of the cinder.conf file might look similar to this:
Example
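The example block is missing from this copy; the following is a minimal sketch that combines the Cinder and Cinder backup settings from this chapter:
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_cluster_name = ceph
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_pool = backups
backup_ceph_chunk_size = 134217728
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true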
Verify if Cinder backup is enabled:
[root@cinder ~]# cat /etc/openstack-dashboard/local_settings | grep enable_backup
If enable_backup is set to False, then edit the local_settings file and set it to True.
Example
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': True,
}
3.3. Configuring Glance to use Ceph block devices
The Red Hat OpenStack Platform can configure Glance to use Ceph block devices.
Prerequisites
- Root-level access to the Glance node.
Procedure
To use Ceph block devices by default, edit the /etc/glance/glance-api.conf file. If you used different pool, user, or Ceph configuration file settings, apply the appropriate values. Uncomment the following settings if necessary and change their values accordingly:
[root@glance ~]# vim /etc/glance/glance-api.conf
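The settings block itself is missing from this copy; the following is a minimal sketch of the RBD-related options, assuming the standard [glance_store] section, the images pool, and the glance Ceph user created earlier:
[glance_store]
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf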
To enable copy-on-write (CoW) cloning, set show_image_direct_url to True:
show_image_direct_url = True
Important
Enabling CoW exposes the back end location via Glance's API, so the endpoint should not be publicly accessible.
Disable cache management if necessary. The flavor should be set to keystone only, not keystone+cachemanagement:
flavor = keystone
Red Hat recommends the following properties for images:
hw_scsi_model=virtio-scsi
hw_disk_bus=scsi
hw_qemu_guest_agent=yes
os_require_quiesce=yes
The virtio-scsi controller gets better performance and provides support for discard operations. For systems using SCSI/SAS drives, connect every Cinder block device to that controller. Also, enable the QEMU guest agent and send fs-freeze/thaw calls through the QEMU guest agent.
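These properties can be applied to an existing Glance image; a minimal sketch using the OpenStack CLI follows, with IMAGE_NAME as a placeholder for your image:
openstack image set \
    --property hw_scsi_model=virtio-scsi \
    --property hw_disk_bus=scsi \
    --property hw_qemu_guest_agent=yes \
    --property os_require_quiesce=yes \
    IMAGE_NAME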
3.4. Configuring Nova to use Ceph block devices
The Red Hat OpenStack Platform can configure Nova to use Ceph block devices.
You must configure each Nova node to use ephemeral back-end storage devices, which allows all virtual machines to use the Ceph block devices.
Prerequisites
- Root-level access to the Nova nodes.
Procedure
Edit the Ceph configuration file:
[root@nova ~]# vim /etc/ceph/ceph.conf
Add the following settings under the [client] section of the Ceph configuration file:
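The settings block is missing from this copy; the following is a minimal sketch of typical [client] options for QEMU guests, assuming the admin socket and log directories created in the next step:
[client]
    rbd cache = true
    rbd cache writethrough until flush = true
    rbd concurrent management ops = 20
    admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/ceph/qemu-guest-$pid.log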
Create new directories for the admin socket and log file, and change the directory permissions to use the qemu user and libvirt group:
[root@nova ~]# mkdir -p /var/run/ceph/guests/ /var/log/ceph/
[root@nova ~]# chown qemu:libvirt /var/run/ceph/guests /var/log/ceph/
Note
The directories must be allowed by SELinux or AppArmor.
On each Nova node, edit the /etc/nova/nova.conf file. Under the [libvirt] section, configure the following settings:
Example
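The example block is missing from this copy; the following is a minimal sketch of the [libvirt] options, assuming the vms pool, the cinder Ceph user, and the UUID stored in uuid-secret.txt:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
disk_cachemodes = "network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
hw_disk_discard = unmap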
Replace the UUID in rbd_secret_uuid with the UUID in the uuid-secret.txt file.
3.5. Restarting the OpenStack services
Restarting the Red Hat OpenStack Platform services activates the Ceph block device drivers.
Prerequisites
- Root-level access to the Red Hat OpenStack Platform nodes.
Procedure
- Load the block device pool names and Ceph user names into the configuration file.
Restart the appropriate OpenStack services after modifying the corresponding configuration files:
[root@osp ~]# systemctl restart openstack-cinder-volume
[root@osp ~]# systemctl restart openstack-cinder-backup
[root@osp ~]# systemctl restart openstack-glance-api
[root@osp ~]# systemctl restart openstack-nova-compute