Chapter 1. Deploying an overcloud and Red Hat Ceph Storage
Red Hat OpenStack Platform (RHOSP) director deploys the cloud environment, also known as the overcloud, and Red Hat Ceph Storage. Director uses Ansible playbooks provided through the tripleo-ansible
package to deploy the Ceph Storage cluster. The director also manages the configuration and scaling operations of the Ceph Storage cluster.
For more information about Red Hat Ceph Storage, see Red Hat Ceph Storage Architecture Guide.
For more information about services in the Red Hat OpenStack Platform, see Configuring a basic overcloud with the CLI tools in Director Installation and Usage.
1.1. Red Hat Ceph Storage clusters
Red Hat Ceph Storage is a distributed data object store designed for performance, reliability, and scalability. Distributed object stores accommodate unstructured data, and clients can use modern object interfaces and legacy interfaces simultaneously.
Ceph Storage is deployed as a cluster. A Ceph Storage cluster consists of two primary types of daemons:
- Ceph Object Storage Daemon (CephOSD) - The CephOSD performs data storage, data replication, rebalancing, recovery, monitoring, and reporting tasks.
- Ceph Monitor (CephMon) - The CephMon maintains the primary copy of the cluster map with the current state of the cluster.
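For example, on a running cluster you can confirm the status of both daemon types from a CephMon node. The following commands are standard Ceph tooling and are shown only as an illustration; the exact output depends on your cluster:
$ sudo cephadm shell -- ceph mon stat
$ sudo cephadm shell -- ceph osd stat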
For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide.
1.2. Red Hat Ceph Storage node requirements
There are additional node requirements when you use director to create a Ceph Storage cluster:
- Hardware requirements, including processor, memory, network interface card selection, and disk layout, are available in the Red Hat Ceph Storage Hardware Guide.
- Each Ceph Storage node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the motherboard of the server.
- Each Ceph Storage node must have at least two disks. RHOSP director uses cephadm to deploy the Ceph Storage cluster. The cephadm functionality does not support installing Ceph OSD on the root disk of the node.
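If you need to control which disks cephadm uses for OSDs, you can provide a service specification when you create the cluster. The following is an illustrative sketch only: the osd_spec.yaml file name and the /dev/sdb and /dev/sdc device paths are examples, and you should confirm the options that openstack overcloud ceph deploy accepts in your release before using them:
# Example device list for OSDs; /dev/sdb and /dev/sdc are placeholders.
$ cat > /home/stack/osd_spec.yaml <<'EOF'
data_devices:
  paths:
    - /dev/sdb
    - /dev/sdc
EOF
# Pass the specification when creating the cluster (file names are examples).
$ openstack overcloud ceph deploy /home/stack/templates/deployed_metal.yaml \
    --osd-spec /home/stack/osd_spec.yaml \
    --output ~/deployed_ceph.yaml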
1.3. Ceph Storage nodes and RHEL compatibility
RHOSP 17.0 is supported on RHEL 9.0. However, hosts that are mapped to the Ceph Storage role update to the latest major RHEL release. Before upgrading, review the Red Hat Knowledgebase article Red Hat Ceph Storage: Supported configurations.
1.4. Deploying Red Hat Ceph Storage
You deploy Red Hat Ceph Storage in two phases:
- Create the Red Hat Ceph Storage cluster before deploying the overcloud.
- Configure the Red Hat Ceph Storage cluster during overcloud deployment.
In the first phase, a Ceph Storage cluster is created that is ready to serve the Ceph RADOS Block Device (RBD) service. Additionally, the following services run on the appropriate nodes:
- Ceph Monitor (CephMon)
- Ceph Manager (CephMgr)
- Ceph OSD (CephOSD)
Pools and cephx keys are created during the configuration phase.
The following Ceph Storage components are not available until after the configuration phase:
- Ceph Dashboard (CephDashboard)
- Ceph Object Gateway (CephRGW)
- Ceph MDS (CephMds)
Red Hat Ceph Storage cluster configuration finalizes during overcloud deployment. Daemons and services such as Ceph Object Gateway and Ceph Dashboard deploy according to the overcloud definition. Red Hat OpenStack Platform (RHOSP) services are configured as Ceph Storage cluster clients.
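In outline, the two phases correspond to two commands run on the undercloud. The following is a sketch only; the deployed_metal.yaml file name and the environment files passed to openstack overcloud deploy are examples that depend on your overcloud definition:
# Phase 1: create the Ceph Storage cluster before overcloud deployment.
$ openstack overcloud ceph deploy /home/stack/templates/deployed_metal.yaml \
    --output ~/deployed_ceph.yaml
# Phase 2: finalize the Ceph Storage cluster configuration during overcloud deployment.
$ openstack overcloud deploy --templates \
    -e ~/deployed_ceph.yaml \
    -e /home/stack/templates/<other-environment-files>.yaml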
1.5. Red Hat Ceph Storage deployment requirements
Provisioning of network resources and bare metal instances is required before Ceph Storage cluster creation. Configure the following before creating a Red Hat Ceph Storage cluster:
- Provision networks with the openstack overcloud network provision command and the cli-overcloud-network-provision.yaml Ansible playbook.
- Provision bare metal instances with the openstack overcloud node provision command and the cli-overcloud-node-provision.yaml Ansible playbook.
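As an illustration of these two steps, the following commands are run on the undercloud before cluster creation. The definition and output file names, and the options shown, are examples; adjust them to match your environment:
# Provision the overcloud networks from a network definition file.
$ openstack overcloud network provision \
    --output /home/stack/templates/deployed_network.yaml \
    /home/stack/templates/network_data.yaml
# Provision the bare metal nodes from a bare metal definition file.
$ openstack overcloud node provision \
    --stack overcloud \
    --output /home/stack/templates/deployed_metal.yaml \
    /home/stack/templates/overcloud-baremetal-deploy.yaml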
For more information about these tasks, see:
The following elements must be present in the overcloud environment to finalize the Ceph Storage cluster configuration:
- Red Hat OpenStack Platform director installed on an undercloud host. See Installing director in Director Installation and Usage.
- Installation of recommended hardware to support Red Hat Ceph Storage. For more information about recommended hardware, see the Red Hat Ceph Storage Hardware Guide.
1.6. Post deployment verification
Director deploys a Ceph Storage cluster ready to serve Ceph RADOS Block Device (RBD) using tripleo-ansible roles executed by the cephadm command.
Verify the following are in place after cephadm completes Ceph Storage deployment:
- SSH access to a CephMon service node to use the sudo cephadm shell command.
- All OSDs operational.
  Note: Check inoperative OSDs for environmental issues like uncleaned disks.
- A Ceph configuration file and client administration keyring file in the /etc/ceph directory of CephMon service nodes.
- The Ceph Storage cluster is ready to serve RBD.
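For example, you can run the following checks from a CephMon service node. These are standard Ceph and system commands shown only as an illustration; the output depends on your cluster:
# Overall cluster health and daemon status.
$ sudo cephadm shell -- ceph -s
# Confirm that all OSDs are up and in.
$ sudo cephadm shell -- ceph osd tree
# Confirm that the configuration and keyring files are present.
$ sudo ls /etc/ceph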
Pools, cephx keys, CephDashboard, and CephRGW are configured during overcloud deployment by the openstack overcloud deploy command. This is for two reasons:
- The Dashboard and RGW services must integrate with haproxy, which is deployed with the overcloud.
- The creation of pools and cephx keys is dependent on which OpenStack clients are deployed.
These resources are created in the Ceph Storage cluster using the client administration keyring file and the ~/deployed_ceph.yaml file output by the openstack overcloud ceph deploy command.
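After the openstack overcloud deploy command completes, you can confirm the results from a CephMon service node. This is an illustrative check only; the pool and key names depend on which OpenStack services you deployed:
# List the pools created during overcloud deployment.
$ sudo cephadm shell -- ceph osd pool ls
# List the cephx keys, including those created for OpenStack clients.
$ sudo cephadm shell -- ceph auth ls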
For more information about cephadm, see Red Hat Ceph Storage Installation Guide.