Chapter 1. Deploying an overcloud and Red Hat Ceph Storage

Red Hat OpenStack Platform (RHOSP) director deploys the cloud environment, also known as the overcloud, and Red Hat Ceph Storage. Director uses Ansible playbooks provided through the tripleo-ansible package to deploy the Ceph Storage cluster. The director also manages the configuration and scaling operations of the Ceph Storage cluster.

For more information about Red Hat Ceph Storage, see Red Hat Ceph Storage Architecture Guide.

For more information about services in the Red Hat OpenStack Platform, see Configuring a basic overcloud with the CLI tools in Installing and managing Red Hat OpenStack Platform with director.

1.1. Red Hat Ceph Storage clusters

Red Hat Ceph Storage is a distributed data object store designed for performance, reliability, and scalability. Distributed object stores accommodate unstructured data, and clients can use modern and legacy object interfaces simultaneously.

Ceph Storage is deployed as a cluster. A Ceph Storage cluster consists of two primary types of daemons:

  • Ceph Object Storage Daemon (CephOSD) - The CephOSD performs data storage, data replication, rebalancing, recovery, monitoring, and reporting tasks.
  • Ceph Monitor (CephMon) - The CephMon maintains the primary copy of the cluster map with the current state of the cluster.

For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide.

1.2. Red Hat Ceph Storage nodes and RHEL compatibility

RHOSP 17.1 is supported on RHEL 9.2. However, hosts that are mapped to the Ceph Storage role update to the latest major RHEL release.

1.3. Red Hat Ceph Storage compatibility

RHOSP 17.1 supports Red Hat Ceph Storage 6 for new deployments. RHOSP 17.1 only supports Red Hat Ceph Storage 5 in deployments upgrading from RHOSP 16.2 and Red Hat Ceph Storage 4.

1.4. Deploying Red Hat Ceph Storage

You deploy Red Hat Ceph Storage in two phases:

  • Create the Red Hat Ceph Storage cluster before deploying the overcloud.
  • Configure the Red Hat Ceph Storage cluster during overcloud deployment.

A Ceph Storage cluster is created ready to serve the Ceph RADOS Block Device (RBD) service. Additionally, the following services are running on the appropriate nodes:

  • Ceph Monitor (CephMon)
  • Ceph Manager (CephMgr)
  • Ceph OSD (CephOSD)

Pools and cephx keys are created during the configuration phase.

The following Ceph Storage components are not available until after the configuration phase:

  • Ceph Dashboard (CephDashboard)
  • Ceph Object Gateway (CephRGW)
  • Ceph MDS (CephMds)

Red Hat Ceph Storage cluster configuration finalizes during overcloud deployment. Daemons and services such as Ceph Object Gateway and Ceph Dashboard deploy according to the overcloud definition. Red Hat OpenStack Platform (RHOSP) services are configured as Ceph Storage cluster clients.
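On the undercloud, the two phases correspond to two commands. The following is a minimal sketch; the file names are examples and must match your environment:

```shell
# Phase 1: create the Ceph Storage cluster before overcloud deployment.
# The input file is the output of bare metal provisioning; the --output
# file records the cluster details for the next phase. (Example names.)
openstack overcloud ceph deploy \
    overcloud-baremetal-deployed.yaml \
    --output deployed_ceph.yaml

# Phase 2: finalize the Ceph configuration (pools, cephx keys, CephRGW,
# CephDashboard) during overcloud deployment. Include deployed_ceph.yaml
# along with the other environment files for your deployment.
openstack overcloud deploy --templates \
    -e deployed_ceph.yaml
```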

1.5. Red Hat Ceph Storage deployment requirements

Provisioning of network resources and bare metal instances is required before Ceph Storage cluster creation. Configure the following before creating a Red Hat Ceph Storage cluster:

  • Provision networks with the openstack overcloud network provision command and the cli-overcloud-network-provision.yaml Ansible playbook.
  • Provision bare metal instances with the openstack overcloud node provision command and the cli-overcloud-node-provision.yaml Ansible playbook.
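For example, assuming definition files named network_data.yaml and overcloud-baremetal-deploy.yaml (the file names are illustrative):

```shell
# Provision the overcloud networks; the --output file is passed to
# later deployment commands as an environment file.
openstack overcloud network provision \
    --output networks-deployed.yaml \
    network_data.yaml

# Provision bare metal instances for the overcloud nodes.
openstack overcloud node provision \
    --stack overcloud \
    --output overcloud-baremetal-deployed.yaml \
    overcloud-baremetal-deploy.yaml
```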


The following elements must be present in the overcloud environment to finalize the Ceph Storage cluster configuration:

  • Red Hat OpenStack Platform director installed on an undercloud host. See Installing director in Installing and managing Red Hat OpenStack Platform with director.
  • Installation of recommended hardware to support Red Hat Ceph Storage. For more information about recommended hardware, see the Red Hat Ceph Storage Hardware Guide.

1.6. Post deployment verification

Director deploys a Ceph Storage cluster ready to serve Ceph RADOS Block Device (RBD) using tripleo-ansible roles executed by the cephadm command.

Verify the following are in place after cephadm completes Ceph Storage deployment:

  • SSH access to a CephMon service node to use the sudo cephadm shell command.
  • All OSDs operational. Check inoperative OSDs for environmental issues such as uncleaned disks.

  • A Ceph configuration file and client administration keyring file in the /etc/ceph directory of CephMon service nodes.
  • The Ceph Storage cluster is ready to serve RBD.
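The checks above can be run from a node that hosts the CephMon service, for example a Controller node. The node and user names below are examples:

```shell
# Log in to a node that runs the CephMon service.
ssh tripleo-admin@controller-0

# Confirm the cluster reports a healthy state and all OSDs are up and in.
sudo cephadm shell -- ceph -s
sudo cephadm shell -- ceph osd tree

# Confirm the Ceph configuration file and client administration keyring
# are present.
ls /etc/ceph
```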

Pools, cephx keys, CephDashboard, and CephRGW are configured during overcloud deployment by the openstack overcloud deploy command. This is for two reasons:

  • The Dashboard and RGW services must integrate with haproxy, which is deployed with the overcloud.
  • The creation of pools and cephx keys is dependent on which OpenStack clients are deployed.

These resources are created in the Ceph Storage cluster using the client administration keyring file and the ~/deployed_ceph.yaml file output by the openstack overcloud ceph deploy command.
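For example, the generated file is included as an environment file during overcloud deployment (a sketch; the additional environment files depend on your deployment):

```shell
# The deployed_ceph.yaml file output by 'openstack overcloud ceph deploy'
# describes the existing cluster; the overcloud deployment then creates
# pools and cephx keys and deploys CephDashboard and CephRGW as defined.
openstack overcloud deploy --templates \
    -e ~/deployed_ceph.yaml
```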

For more information about cephadm, see Red Hat Ceph Storage Installation Guide.


© 2024 Red Hat, Inc.