Chapter 3. Configuring OpenStack to use Ceph block devices


As a storage administrator, you must configure the Red Hat OpenStack Platform to use the Ceph block devices. The Red Hat OpenStack Platform can use Ceph block devices for Cinder, Cinder Backup, Glance, and Nova.

3.1. Prerequisites

  • A new or existing Red Hat Ceph Storage cluster.
  • A running Red Hat OpenStack Platform environment.

3.2. Configuring Cinder to use Ceph block devices

The Red Hat OpenStack Platform can use Ceph block devices to provide back-end storage for Cinder volumes.

Prerequisites

  • Root-level access to the Cinder node.
  • A Ceph volume pool.
  • The user and UUID of the secret to interact with Ceph block devices. If you have not created them yet, see the sketch after this list.

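If the cinder Ceph user and the libvirt secret do not exist yet, the following is a minimal sketch of one common way to create them. The key name client.cinder, the capabilities, and the uuid-secret.txt file name are assumptions to adapt; the last command also assumes a keyring on the Cinder node that can read the cinder user's key.

    # On a Ceph monitor node, create the Cinder user (capabilities are an example):
    [root@mon ~]# ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'

    # On the Cinder node, generate a UUID for the libvirt secret:
    [root@cinder ~]# uuidgen | tee uuid-secret.txt

    # Create a secret.xml file with the following content, substituting the UUID:
    <secret ephemeral='no' private='no'>
      <uuid>UUID_FROM_uuid-secret.txt</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>

    # Define the secret and set its value to the cinder user's key:
    [root@cinder ~]# virsh secret-define --file secret.xml
    [root@cinder ~]# virsh secret-set-value --secret $(cat uuid-secret.txt) --base64 $(ceph auth get-key client.cinder)
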
Procedure

  1. Edit the Cinder configuration file:

    [root@cinder ~]# vim /etc/cinder/cinder.conf
  2. In the [DEFAULT] section, enable Ceph as a backend for Cinder:

    enabled_backends = ceph
  3. Ensure that the Glance API version is set to 2. If you are configuring multiple Cinder back ends in enabled_backends, the glance_api_version = 2 setting must be in the [DEFAULT] section, not the [ceph] section.

    glance_api_version = 2
  4. Create a [ceph] section in the cinder.conf file. Add the Ceph settings in the following steps under the [ceph] section.
  5. Specify the volume_driver setting and set it to use the Ceph block device driver:

    volume_driver = cinder.volume.drivers.rbd.RBDDriver
  6. Specify the cluster name and Ceph configuration file location. In typical deployments the Ceph cluster has a cluster name of ceph and a Ceph configuration file at /etc/ceph/ceph.conf. If the Ceph cluster name is not ceph, specify the cluster name and configuration file path appropriately:

    rbd_cluster_name = us-west
    rbd_ceph_conf = /etc/ceph/us-west.conf
  7. By default, Red Hat OpenStack Platform stores Ceph volumes in the rbd pool. To use the volumes pool created earlier, specify the rbd_pool setting and set it to the volumes pool (a sketch for creating the pool follows this procedure):

    rbd_pool = volumes
  8. Red Hat OpenStack Platform does not set a default user name or secret UUID for volumes. Specify rbd_user and set it to the cinder user. Then, specify the rbd_secret_uuid setting and set it to the generated UUID stored in the uuid-secret.txt file:

    rbd_user = cinder
    rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
  9. Specify the following settings:

    rbd_flatten_volume_from_snapshot = false
    rbd_max_clone_depth = 5
    rbd_store_chunk_size = 4
    rados_connect_timeout = -1

    When you configure Cinder to use Ceph block devices, the configuration file might look similar to this:

    Example

    [DEFAULT]
    enabled_backends = ceph
    glance_api_version = 2
    …
    
    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_cluster_name = ceph
    rbd_pool = volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot = false
    rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
    rbd_max_clone_depth = 5
    rbd_store_chunk_size = 4
    rados_connect_timeout = -1

    Note

    Consider removing the default [lvm] section and its settings.
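
If the volumes pool does not exist yet, the following is a minimal sketch of creating and initializing it on a Ceph monitor node; the placement-group count of 128 is an assumption, so size it for your cluster. If the crudini utility is installed, you can also spot-check the resulting configuration.

    [root@mon ~]# ceph osd pool create volumes 128
    [root@mon ~]# rbd pool init volumes

    # Optional spot check on the Cinder node:
    [root@cinder ~]# crudini --get /etc/cinder/cinder.conf ceph rbd_pool
    volumes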

3.3. Configuring Cinder backup to use Ceph block devices

The Red Hat OpenStack Platform can configure Cinder backup to use Ceph block devices.

Prerequisites

  • Root-level access to the Cinder node.

Procedure

  1. Edit the Cinder configuration file:

    [root@cinder ~]# vim /etc/cinder/cinder.conf
  2. Go to the [ceph] section of the configuration file.
  3. Specify the backup_driver setting and set it to the Ceph driver:

    backup_driver = cinder.backup.drivers.ceph
  4. Specify the backup_ceph_conf setting and set it to the path of the Ceph configuration file:

    backup_ceph_conf = /etc/ceph/ceph.conf
    Note

    The Cinder backup Ceph configuration file may be different from the Ceph configuration file used for Cinder. For example, it can point to a different Ceph storage cluster.

  5. Specify the Ceph pool for backups (a sketch for creating the pool and the backup user follows this procedure):

    backup_ceph_pool = backups

  6. Specify the backup_ceph_user setting and set the user to cinder-backup:

    backup_ceph_user = cinder-backup
  7. Specify the following settings:

    backup_ceph_chunk_size = 134217728
    backup_ceph_stripe_unit = 0
    backup_ceph_stripe_count = 0
    restore_discard_excess_bytes = true

    When you include the Cinder options, the [ceph] section of the cinder.conf file might look similar to this:

    Example

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_cluster_name = ceph
    rbd_pool = volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot = false
    rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
    rbd_max_clone_depth = 5
    rbd_store_chunk_size = 4
    rados_connect_timeout = -1
    
    backup_driver = cinder.backup.drivers.ceph
    backup_ceph_user = cinder-backup
    backup_ceph_conf = /etc/ceph/ceph.conf
    backup_ceph_chunk_size = 134217728
    backup_ceph_pool = backups
    backup_ceph_stripe_unit = 0
    backup_ceph_stripe_count = 0
    restore_discard_excess_bytes = true

  8. Verify that Cinder backup is enabled in the dashboard:

    [root@cinder ~]# grep enable_backup /etc/openstack-dashboard/local_settings

    If enable_backup is set to False, then edit the local_settings file and set it to True.

    Example

    OPENSTACK_CINDER_FEATURES = {
        'enable_backup': True,
    }
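
If the backups pool and the cinder-backup Ceph user do not exist yet, the following sketch shows one way to create them on a Ceph monitor node. The pool name, placement-group count, and capabilities are assumptions to adapt.

    [root@mon ~]# ceph osd pool create backups 128
    [root@mon ~]# rbd pool init backups
    [root@mon ~]# ceph auth get-or-create client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups'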

3.4. Configuring Glance to use Ceph block devices

The Red Hat OpenStack Platform can configure Glance to use Ceph block devices.

Prerequisites

  • Root-level access to the Glance node.

Procedure

  1. To use Ceph block devices by default, edit the /etc/glance/glance-api.conf file. If you used different pool, user, or Ceph configuration file settings, apply the appropriate values. Uncomment the following settings if necessary and change their values accordingly:

    [root@glance ~]# vim /etc/glance/glance-api.conf
    stores = rbd
    default_store = rbd
    rbd_store_chunk_size = 8
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
  2. To enable copy-on-write (CoW) cloning, set show_image_direct_url to True.

    show_image_direct_url = True
    Important

    Enabling CoW exposes the back-end location through the Glance API, so the endpoint should not be publicly accessible.

  3. Disable cache management if necessary. The flavor should be set to keystone only, not keystone+cachemanagement.

    flavor = keystone
  4. Red Hat recommends the following properties for images:

    hw_scsi_model=virtio-scsi
    hw_disk_bus=scsi
    hw_qemu_guest_agent=yes
    os_require_quiesce=yes

    The virtio-scsi controller provides better performance and supports discard operations. For systems using SCSI/SAS drives, connect every Cinder block device to that controller. Also, enable the QEMU guest agent so that fs-freeze/thaw calls are sent through it. A sketch of applying these properties with the openstack client follows this procedure.
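
As a sketch, the recommended properties can be applied to an existing image with the openstack client. IMAGE_ID is a placeholder, and the command assumes that admin credentials are sourced:

    [root@glance ~]# openstack image set \
        --property hw_scsi_model=virtio-scsi \
        --property hw_disk_bus=scsi \
        --property hw_qemu_guest_agent=yes \
        --property os_require_quiesce=yes \
        IMAGE_ID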

3.5. Configuring Nova to use Ceph block devices

The Red Hat OpenStack Platform can configure Nova to use Ceph block devices.

You must configure each Nova node to use Ceph for ephemeral back-end storage, which allows all virtual machines to use Ceph block devices.

Prerequisites

  • Root-level access to the Nova nodes.

Procedure

  1. Edit the Ceph configuration file:

    [root@nova ~]# vim /etc/ceph/ceph.conf
  2. Add the following settings under the [client] section of the Ceph configuration file:

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    rbd concurrent management ops = 20
    admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/ceph/qemu-guest-$pid.log
  3. Create new directories for the admin socket and log file, and change the ownership of the directories to the qemu user and libvirt group:

    [root@nova ~]# mkdir -p /var/run/ceph/guests/ /var/log/ceph/
    [root@nova ~]# chown qemu:libvirt /var/run/ceph/guests /var/log/ceph/
    Note

    SELinux or AppArmor must allow access to these directories.

  4. On each Nova node, edit the /etc/nova/nova.conf file. Under the [libvirt] section, configure the following settings:

    Example

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964
    disk_cachemodes="network=writeback"
    inject_password = false
    inject_key = false
    inject_partition = -2
    live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
    hw_disk_discard = unmap

    Replace the UUID in rbd_secret_uuid with the UUID stored in the uuid-secret.txt file. A sketch for creating the vms pool and verifying the result follows this procedure.
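
If the vms pool does not exist yet, create and initialize it in the same way as the other pools; the placement-group count is an assumption. After restarting the services and booting an instance, listing the pool is one quick way to confirm that ephemeral disks land in Ceph, because Nova names instance disks after the instance UUID when images_type = rbd.

    [root@mon ~]# ceph osd pool create vms 128
    [root@mon ~]# rbd pool init vms

    # After an instance boots, its disk image should appear here:
    [root@mon ~]# rbd -p vms ls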

3.6. Restarting the OpenStack services

Restart the Red Hat OpenStack Platform services to activate the Ceph block device drivers.

Prerequisites

  • Root-level access to the Red Hat OpenStack Platform nodes.

Procedure

  1. Ensure that the block device pool names and Ceph user names have been added to the corresponding configuration files.
  2. Restart the appropriate OpenStack services after modifying the corresponding configuration files (a verification sketch follows this procedure):

    [root@osp ~]# systemctl restart openstack-cinder-volume
    [root@osp ~]# systemctl restart openstack-cinder-backup
    [root@osp ~]# systemctl restart openstack-glance-api
    [root@osp ~]# systemctl restart openstack-nova-compute
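
One way to confirm that the services came back up; the openstack command assumes that admin credentials are sourced:

    [root@osp ~]# systemctl is-active openstack-cinder-volume openstack-cinder-backup openstack-glance-api openstack-nova-compute
    [root@osp ~]# openstack volume service list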