Chapter 2. Preparing overcloud nodes
The overcloud deployment that is used to demonstrate how to integrate with a Red Hat Ceph Storage cluster consists of Controller nodes with high availability and Compute nodes to host workloads. The Red Hat Ceph Storage cluster has its own nodes that you manage independently from the overcloud by using the Ceph management tools, not through director. For more information about Red Hat Ceph Storage, see Red Hat Ceph Storage.
2.1. Verifying available Red Hat Ceph Storage packages
To help avoid overcloud deployment failures, verify that the required packages exist on your servers.
2.1.1. Verifying the ceph-ansible package version
The undercloud contains Ansible-based validations that you can run to identify potential problems before you deploy the overcloud. These validations can help you avoid overcloud deployment failures by identifying common problems before they happen.
Procedure
Verify that the ceph-ansible package version you want is installed:
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-ansible-installed.yaml
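As a quick manual cross-check, you can also query the installed package directly on the undercloud. This is an illustrative alternative, not a replacement for the validation playbook:
$ rpm -q ceph-ansible
The command prints the installed package version, or reports that the package is not installed.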
2.1.2. Verifying packages for pre-provisioned nodes
Red Hat Ceph Storage (RHCS) can service only overcloud nodes that have a certain set of packages. When you use pre-provisioned nodes, you can verify the presence of these packages.
For more information about pre-provisioned nodes, see Configuring a basic overcloud with pre-provisioned nodes.
Procedure
Verify that the pre-provisioned nodes contain the required packages:
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-dependencies-installed.yaml
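If you want to check only a subset of nodes, you can restrict the run with the standard Ansible --limit option. The overcloud group name used here is an assumption about how your director-generated inventory groups the hosts; adjust it to match your environment:
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory --limit overcloud /usr/share/ansible/validation-playbooks/ceph-dependencies-installed.yaml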
2.2. Configuring the existing Red Hat Ceph Storage cluster
To configure your Red Hat Ceph Storage cluster, you create object storage daemon (OSD) pools, define capabilities, and create keys and IDs directly on the Ceph Storage cluster. You can execute commands from any machine that can reach the Ceph Storage cluster and has the Ceph command line client installed.
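For example, before you begin, you can confirm that the machine you plan to run the commands from can reach the cluster and has working credentials. This is an optional sanity check, not part of the required procedure:
[root@ceph ~]# ceph -s
If the command returns the cluster status and health summary, the client configuration is usable for the steps in this procedure.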
Procedure
Create the following pools in your Ceph Storage cluster, relevant to your environment:
Storage for OpenStack Block Storage (cinder):
[root@ceph ~]# ceph osd pool create volumes <pgnum>
Storage for OpenStack Image Storage (glance):
[root@ceph ~]# ceph osd pool create images <pgnum>
Storage for instances:
[root@ceph ~]# ceph osd pool create vms <pgnum>
Storage for OpenStack Block Storage Backup (cinder-backup):
[root@ceph ~]# ceph osd pool create backups <pgnum>
Optional: Storage for OpenStack Telemetry Metrics (gnocchi):
[root@ceph ~]# ceph osd pool create metrics <pgnum>
Use this storage option only if metrics are enabled through OpenStack.
If your overcloud deploys the Shared File Systems service (manila) with Red Hat Ceph 4 (Ceph package 14) or earlier, create CephFS data and metadata pools:
[root@ceph ~]# ceph osd pool create manila_data <pgnum>
[root@ceph ~]# ceph osd pool create manila_metadata <pgnum>
Replace <pgnum> with the number of placement groups. Red Hat recommends approximately 100 placement groups per OSD in the cluster, divided by the number of replicas (osd pool default size). For example, if there are 10 OSDs and the cluster has osd pool default size set to 3, use 333 placement groups. You can also use the Ceph Placement Groups (PGs) per Pool Calculator to determine a suitable value.
If your overcloud deploys the Shared File Systems service (manila) with Red Hat Ceph 5 (Ceph package 16) or later, you do not need to create data and metadata pools for CephFS. Instead, you can create a filesystem volume. For more information, see Management of MDS service using the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide.
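If you want to gather these numbers from a running cluster, the following optional commands illustrate one way to do so. The ceph config get form assumes Red Hat Ceph Storage 4 (Ceph Nautilus) or later, and the output shown is only an example:
[root@ceph ~]# ceph config get mon osd_pool_default_size
3
[root@ceph ~]# ceph osd stat
10 osds: 10 up, 10 in
With 10 OSDs and a replica count of 3, the calculation is (100 * 10) / 3, which is approximately 333 placement groups.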
Create a client.openstack user in your Ceph Storage cluster with the following capabilities:
- cap_mgr: allow *
- cap_mon: profile rbd
- cap_osd: profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups
[root@ceph ~]# ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups'
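If you want to confirm that the user was created with the capabilities you expect before you continue, you can view its entry. This is an optional check:
[root@ceph ~]# ceph auth get client.openstack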
Note the Ceph client key created for the client.openstack user:
[root@ceph ~]# ceph auth list
...
[client.openstack]
    key = <AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==>
    caps mgr = allow *
    caps mon = profile rbd
    caps osd = profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups
...
The key value in the example, AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==, is your Ceph client key.
If your overcloud deploys the Shared File Systems service with CephFS, create the client.manila user in your Ceph Storage cluster with the following capabilities:
- cap_mds: allow *
- cap_mgr: allow *
- cap_mon: allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"
- cap_osd: allow rw
[root@ceph ~]# ceph auth add client.manila mon 'allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"' osd 'allow rw' mds 'allow *' mgr 'allow *'
Note the manila client name and the key value to use in overcloud deployment templates:
[root@ceph ~]# ceph auth get-key client.manila
<AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==>
Note the file system ID of your Ceph Storage cluster. This value is specified in the fsid field, under the [global] section of the configuration file for your cluster:
[global]
fsid = <4b5c8c0a-ff60-454b-a1b4-9747aa737d19>
...
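If you do not have the configuration file at hand, you can also print the ID from a running cluster. This optional check returns the same fsid value:
[root@ceph ~]# ceph fsid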
Use the Ceph client key and file system ID, and the Shared File Systems service client IDs and key when you create the custom environment file.
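For illustration, the following fragment sketches how these values typically appear in the parameter_defaults section of the custom environment file. The parameter names shown (CephClusterFSID, CephClientUserName, CephClientKey, CephExternalMonHost) are common director parameters for external Ceph Storage integration in Red Hat OpenStack Platform documentation, the values are the examples from this procedure, and the Ceph Monitor addresses are placeholders. Confirm the exact parameters for your release, including the Shared File Systems service parameters, in Creating a custom environment file:
parameter_defaults:
  CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
  CephClientUserName: 'openstack'
  CephClientKey: 'AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw=='
  CephExternalMonHost: '<mon_host_ip_1>,<mon_host_ip_2>,<mon_host_ip_3>'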
Additional resources
- Creating a custom environment file
- Red Hat Ceph Storage releases and corresponding Ceph package versions
- Ceph configuration in the Red Hat Ceph Storage Configuration Guide