Chapter 2. Installing and configuring Ceph for OpenStack


As a storage administrator, you must install and configure Ceph before the Red Hat OpenStack Platform can use the Ceph block devices.

2.1. Prerequisites

  • A new or existing Red Hat Ceph Storage cluster.

2.2. Creating Ceph pools for OpenStack

Create Ceph pools for use with OpenStack. By default, Ceph block devices use the rbd pool, but you can use any available pool.

Prerequisites

  • A running Red Hat Ceph Storage cluster.

Procedure

  1. Verify that the Red Hat Ceph Storage cluster is running and is in a HEALTH_OK state:

    [root@mon ~]# ceph -s
  2. Create the Ceph pools:

    Example

    [root@mon ~]# ceph osd pool create volumes 128
    [root@mon ~]# ceph osd pool create backups 128
    [root@mon ~]# ceph osd pool create images 128
    [root@mon ~]# ceph osd pool create vms 128

    In the above example, 128 is the number of placement groups.

    Important

    Red Hat recommends using the Ceph Placement Groups (PGs) per Pool Calculator to calculate a suitable number of placement groups for the pools.
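
    After creating the pools, you can verify that they exist and check their placement group counts. The following commands are an optional verification sketch using the pool names from the example above. As a rough guideline, the placement group count for a pool is often estimated as (number of OSDs × 100) / replica count, rounded to the nearest power of two; the calculator gives a more precise value.

    [root@mon ~]# ceph osd lspools
    [root@mon ~]# ceph osd pool get volumes pg_num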

Additional Resources

  • See the Pools chapter in the Storage Strategies guide for more details on creating pools.

2.3. Installing the Ceph client on OpenStack

Install the Ceph client packages on the Red Hat OpenStack Platform to access the Ceph storage cluster.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Access to the Ceph software repository.
  • Root-level access to the OpenStack Nova, Cinder, Cinder Backup, and Glance nodes.

Procedure

  1. On the OpenStack Nova, Cinder, and Cinder Backup nodes, install the following packages:

    [root@nova ~]# yum install python-rbd
    [root@nova ~]# yum install ceph-common
  2. On the OpenStack Glance node, install the python-rbd package:

    [root@glance ~]# yum install python-rbd
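
To confirm that the client packages are installed correctly, you can query them and check the installed client version. This is an optional verification sketch:

    [root@nova ~]# rpm -q python-rbd ceph-common
    [root@nova ~]# ceph --version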

2.4. Copying the Ceph configuration file to OpenStack

Copy the Ceph configuration file to the nova-compute, cinder-backup, cinder-volume, and glance-api nodes.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Access to the Ceph software repository.
  • Root-level access to the OpenStack Nova, Cinder, and Glance nodes.

Procedure

  1. Copy the Ceph configuration file from the Ceph Monitor node to the OpenStack Nova, Cinder, Cinder Backup, and Glance nodes:

    [root@mon ~]# scp /etc/ceph/ceph.conf OPENSTACK_NODES:/etc/ceph
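
If you need to copy the configuration file to several nodes, you can do it in one pass. The hostnames in this sketch (nova01, cinder01, cinder-backup01, and glance01) are hypothetical placeholders; substitute the names of your own OpenStack nodes:

    [root@mon ~]# for node in nova01 cinder01 cinder-backup01 glance01; do scp /etc/ceph/ceph.conf ${node}:/etc/ceph/; done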

2.5. Configuring Ceph client authentication

Configure authentication for the Ceph clients that the Red Hat OpenStack Platform services use to access the Red Hat Ceph Storage cluster.

Prerequisites

  • Root-level access to the Ceph Monitor node.
  • A running Red Hat Ceph Storage cluster.

Procedure

  1. From a Ceph Monitor node, create new users for Cinder, Cinder Backup, and Glance:

    [root@mon ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
    
    [root@mon ~]# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
    
    [root@mon ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
  2. Add the keyrings for client.cinder, client.cinder-backup, and client.glance to the appropriate nodes and change their ownership:

    [root@mon ~]# ceph auth get-or-create client.cinder | ssh CINDER_VOLUME_NODE sudo tee /etc/ceph/ceph.client.cinder.keyring
    [root@mon ~]# ssh CINDER_VOLUME_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
    
    [root@mon ~]# ceph auth get-or-create client.cinder-backup | ssh CINDER_BACKUP_NODE tee /etc/ceph/ceph.client.cinder-backup.keyring
    [root@mon ~]# ssh CINDER_BACKUP_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
    
    [root@mon ~]# ceph auth get-or-create client.glance | ssh GLANCE_API_NODE sudo tee /etc/ceph/ceph.client.glance.keyring
    [root@mon ~]# ssh GLANCE_API_NODE chown glance:glance /etc/ceph/ceph.client.glance.keyring
  3. The OpenStack Nova nodes need the keyring file for the nova-compute process:

    [root@mon ~]# ceph auth get-or-create client.cinder | ssh NOVA_NODE tee /etc/ceph/ceph.client.cinder.keyring
  4. The OpenStack Nova nodes also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs the secret key to access the cluster while attaching a block device from Cinder. Create a temporary copy of the secret key on the OpenStack Nova nodes:

    [root@mon ~]# ceph auth get-key client.cinder | ssh NOVA_NODE tee client.cinder.key

    If the storage cluster contains Ceph block device images that use the exclusive-lock feature, ensure that all Ceph block device users have permission to blacklist clients (a concrete example for the client.cinder user appears after this procedure):

    [root@mon ~]# ceph auth caps client.ID mon 'allow r, allow command "osd blacklist"' osd 'EXISTING_OSD_USER_CAPS'
  5. Return to the OpenStack Nova node:

    [root@mon ~]# ssh NOVA_NODE
  6. Generate a UUID for the secret, and save it for configuring nova-compute later:

    [root@nova ~]# uuidgen > uuid-secret.txt
    Note

    You do not necessarily need the UUID on all the Nova compute nodes. However, for consistency across the platform, it is better to keep the same UUID.

  7. On the OpenStack Nova nodes, create a secret.xml file that defines the libvirt secret:

    cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <uuid>`cat uuid-secret.txt`</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF
  8. Define the secret for libvirt, set its value, and remove the temporary copies of the key and the secret file:

    [root@nova ~]# virsh secret-define --file secret.xml
    [root@nova ~]# virsh secret-set-value --secret $(cat uuid-secret.txt) --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
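
An example of the blacklist capability command from step 4, applied to the client.cinder user with the OSD capabilities granted in step 1, might look like the following; adjust the capabilities if yours differ:

    [root@mon ~]# ceph auth caps client.cinder mon 'allow r, allow command "osd blacklist"' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'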
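
After completing the procedure, you can optionally verify the configuration: confirm the user capabilities on the Ceph Monitor node, list the libvirt secret on the Nova nodes, and check keyring ownership on the Glance node. For example:

    [root@mon ~]# ceph auth get client.cinder
    [root@nova ~]# virsh secret-list
    [root@glance ~]# ls -l /etc/ceph/ceph.client.glance.keyring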

Additional Resources

  • See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide for more details.