Chapter 2. Preparing overcloud nodes
The overcloud deployment that is used to demonstrate how to integrate with a Red Hat Ceph Storage cluster consists of Controller nodes with high availability and Compute nodes to host workloads. The Red Hat Ceph Storage cluster has its own nodes that you manage independently from the overcloud by using the Ceph management tools, not through director. For more information about Red Hat Ceph Storage, see Red Hat Ceph Storage.
2.1. Verifying available Red Hat Ceph Storage packages
To help avoid overcloud deployment failures, verify that the required packages exist on your servers.
2.1.1. Verifying the ceph-ansible package version
The undercloud contains Ansible-based validations that you can run to identify potential problems before you deploy the overcloud. These validations can help you avoid overcloud deployment failures by identifying common problems before they happen.
Procedure
Verify that the ceph-ansible package version you want is installed:

$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-ansible-installed.yaml
2.1.2. Verifying packages for pre-provisioned nodes
Red Hat Ceph Storage (RHCS) can service only overcloud nodes that have a certain set of packages. When you use pre-provisioned nodes, you can verify the presence of these packages.
For more information about pre-provisioned nodes, see Configuring a basic overcloud with pre-provisioned nodes.
Procedure
Verify that the pre-provisioned nodes contain the required packages:
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-dependencies-installed.yaml
2.2. Configuring the existing Red Hat Ceph Storage cluster
To configure your Red Hat Ceph Storage cluster, you create object storage daemon (OSD) pools, define capabilities, and create keys and IDs directly on the Ceph Storage cluster. You can execute commands from any machine that can reach the Ceph Storage cluster and has the Ceph command line client installed.
Procedure
Create the following pools in your Ceph Storage cluster, relevant to your environment:
Storage for OpenStack Block Storage (cinder):

[root@ceph ~]# ceph osd pool create volumes <pgnum>

Storage for OpenStack Image Storage (glance):

[root@ceph ~]# ceph osd pool create images <pgnum>

Storage for instances:

[root@ceph ~]# ceph osd pool create vms <pgnum>

Storage for OpenStack Block Storage Backup (cinder-backup):

[root@ceph ~]# ceph osd pool create backups <pgnum>

Optional: Storage for OpenStack Telemetry Metrics (gnocchi):

[root@ceph ~]# ceph osd pool create metrics <pgnum>

Use this storage option only if metrics are enabled through OpenStack.

If your overcloud deploys the Shared File Systems service (manila) with Red Hat Ceph 4 (Ceph package 14) or earlier, create CephFS data and metadata pools:

[root@ceph ~]# ceph osd pool create manila_data <pgnum>
[root@ceph ~]# ceph osd pool create manila_metadata <pgnum>

Replace <pgnum> with the number of placement groups. Red Hat recommends approximately 100 placement groups per OSD in the cluster, divided by the number of replicas (osd pool default size). For example, if there are 10 OSDs, and the cluster has osd pool default size set to 3, use 333 placement groups. You can also use the Ceph Placement Groups (PGs) per Pool Calculator to determine a suitable value.

If your overcloud deploys the Shared File Systems service (manila) with Red Hat Ceph 5 (Ceph package 16) or later, you do not need to create data and metadata pools for CephFS. Instead, you can create a filesystem volume. For more information, see Management of MDS service using the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide.
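The placement-group recommendation above can be sketched as a quick calculation; the OSD and replica counts below are the example values from the text, not values read from a real cluster:

```shell
#!/bin/sh
# Approximate placement-group count: ~100 PGs per OSD,
# divided by the replica count (osd pool default size).
osds=10       # number of OSDs in the cluster (example value)
replicas=3    # osd pool default size (example value)
pgnum=$(( osds * 100 / replicas ))
echo "$pgnum"
```

For the example cluster this prints 333, matching the figure in the text; in practice, round the result to a power of two or use the PGs per Pool Calculator.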
Create a client.openstack user in your Ceph Storage cluster with the following capabilities:

- cap_mgr: allow *
- cap_mon: profile rbd
- cap_osd: profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups

[root@ceph ~]# ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups'
Note the Ceph client key created for the client.openstack user:

[root@ceph ~]# ceph auth list
...
[client.openstack]
	key = AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==
	caps mgr = "allow *"
	caps mon = "profile rbd"
	caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups"
...

The key value in the example, AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==, is your Ceph client key.

If your overcloud deploys the Shared File Systems service with CephFS, create the client.manila user in your Ceph Storage cluster with the following capabilities:

- cap_mds: allow *
- cap_mgr: allow *
- cap_mon: allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"
- cap_osd: allow rw

[root@ceph ~]# ceph auth add client.manila mon 'allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"' osd 'allow rw' mds 'allow *' mgr 'allow *'
Note the manila client name and the key value to use in overcloud deployment templates:
[root@ceph ~]# ceph auth get-key client.manila
AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==

Note the file system ID of your Ceph Storage cluster. This value is specified in the fsid field, under the [global] section of the configuration file for your cluster:

[global]
fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
...
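If you want to read the fsid programmatically rather than by inspecting the file, a minimal sketch follows; it parses a sample configuration file written to /tmp for illustration (on a real node you would point awk at your cluster's configuration file instead):

```shell
#!/bin/sh
# Write a sample Ceph configuration file for illustration;
# the fsid value is the example from the text.
cat > /tmp/sample-ceph.conf <<'EOF'
[global]
fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
EOF

# Split the "fsid = <value>" line on the "=" (with surrounding
# spaces) and print the value field.
fsid=$(awk -F' *= *' '/^fsid/ {print $2}' /tmp/sample-ceph.conf)
echo "$fsid"
```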
Use the Ceph client key and file system ID, and the Shared File Systems service client IDs and key when you create the custom environment file.
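As a rough sketch of how these values might appear in such a file, assuming the common TripleO external-Ceph parameter names (CephClusterFSID, CephClientKey, CephExternalMonHost) apply to your director release, with an illustrative monitor address:

```yaml
parameter_defaults:
  # fsid from the [global] section of the cluster configuration file
  CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
  # key noted for the client.openstack user
  CephClientKey: 'AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw=='
  # address of a Ceph monitor host (illustrative value)
  CephExternalMonHost: '192.0.2.10'
```

Verify the exact parameter names and any Shared File Systems service parameters against the custom environment file documentation for your release before deploying.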
Additional resources
- Creating a custom environment file
- Red Hat Ceph Storage releases and corresponding Ceph package versions
- Ceph configuration in the Red Hat Ceph Storage Configuration Guide.