Chapter 2. Preparing overcloud nodes
The overcloud deployed in this scenario consists of six nodes:
- Three Controller nodes with high availability.
- Three Compute nodes.
Director integrates a separate Ceph Storage cluster with its own nodes into the overcloud. You manage this cluster independently from the overcloud. For example, you scale the Ceph Storage cluster with the Ceph management tools, not through director. For more information, see the Red Hat Ceph Storage documentation library.
2.1. Pre-deployment validations for Ceph Storage
To help avoid overcloud deployment failures, verify that the required packages exist on your servers.
2.1.1. Verifying the ceph-ansible package version
The undercloud contains Ansible-based validations that you can run to identify potential problems before you deploy the overcloud. These validations can help you avoid overcloud deployment failures by identifying common problems before they happen.
Procedure
Verify that the correct version of the ceph-ansible package is installed:
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-ansible-installed.yaml
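As a quick manual spot check on the undercloud, you can also query the installed package directly. This is a minimal alternative, assuming an RPM-based system; the validation playbook remains the authoritative check:
$ rpm -q ceph-ansible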
2.1.2. Verifying packages for pre-provisioned nodes
Ceph can serve only overcloud nodes that have a certain set of packages installed. When you use pre-provisioned nodes, you can verify that these packages are present.
For more information about pre-provisioned nodes, see Configuring a basic overcloud with pre-provisioned nodes.
Procedure
Verify that the servers contain the required packages:
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-dependencies-installed.yaml
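If the validation reports missing packages, you can inspect an individual pre-provisioned node directly over SSH. A minimal sketch, using lvm2 purely as an illustrative package and a placeholder login; the authoritative package list is the one checked by the ceph-dependencies-installed.yaml playbook:
$ ssh <user>@<node_ip> 'rpm -q lvm2'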
2.2. Configuring the existing Ceph Storage cluster
Create OSD pools, define capabilities, and create keys and IDs for your Ceph Storage cluster.
Procedure
Create the following pools in your Ceph cluster relevant to your environment:
- volumes: Storage for OpenStack Block Storage (cinder)
- images: Storage for OpenStack Image Storage (glance)
- vms: Storage for instances
- backups: Storage for OpenStack Block Storage Backup (cinder-backup)
- metrics: Storage for OpenStack Telemetry Metrics (gnocchi)
Use the following commands as a guide:
[root@ceph ~]# ceph osd pool create volumes <pgnum>
[root@ceph ~]# ceph osd pool create images <pgnum>
[root@ceph ~]# ceph osd pool create vms <pgnum>
[root@ceph ~]# ceph osd pool create backups <pgnum>
[root@ceph ~]# ceph osd pool create metrics <pgnum>
If your overcloud deploys the Shared File Systems service (manila) backed by CephFS, also create CephFS data and metadata pools:
[root@ceph ~]# ceph osd pool create manila_data <pgnum>
[root@ceph ~]# ceph osd pool create manila_metadata <pgnum>
Replace <pgnum> with the number of placement groups. Approximately 100 placement groups per OSD is the best practice: multiply the total number of OSDs by 100 and divide by the number of replicas (the osd pool default size setting). You can also use the Ceph Placement Groups (PGs) per Pool Calculator to determine a suitable value.
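A worked example of the calculation, assuming a hypothetical cluster with 30 OSDs and a replica count of 3: 30 × 100 / 3 = 1000, which is commonly rounded to the nearest power of two, 1024:
[root@ceph ~]# ceph osd pool create volumes 1024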
Create a client.openstack user in your Ceph cluster with the following capabilities:
- cap_mgr: allow *
- cap_mon: profile rbd
- cap_osd: profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics
Use the following command as a guide:
[root@ceph ~]# ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics'
Note the Ceph client key created for the client.openstack user, for example AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==. This value is your Ceph client key.
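If you need to display this key again later, one standard way is the ceph auth get-key command:
[root@ceph ~]# ceph auth get-key client.openstack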
If your overcloud deploys the Shared File Systems service backed by CephFS, create the client.manila user in your Ceph cluster with the following capabilities:
- cap_mds: allow *
- cap_mgr: allow *
- cap_mon: allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"
- cap_osd: allow rw
Use the following command as a guide:
[root@ceph ~]# ceph auth add client.manila mon 'allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"' osd 'allow rw' mds 'allow *' mgr 'allow *'
Note the manila client name and the key value to use in overcloud deployment templates:
[root@ceph ~]# ceph auth get-key client.manila
AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==
Note the file system ID of your Ceph Storage cluster. This value is specified with the fsid setting in the [global] section of the configuration file of your cluster:
[global]
fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
...
For more information about the Ceph Storage cluster configuration file, see Ceph configuration in the Red Hat Ceph Storage Configuration Guide.
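As an alternative to reading the configuration file, you can print the file system ID directly from a node with client admin access, for example:
[root@ceph ~]# ceph fsid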
Use the Ceph client key and file system ID, and the manila client ID and key, in the following procedure: Section 3.1, “Installing the ceph-ansible package”.
2.3. Initializing the stack user
Initialize the stack user to configure the authentication details used to access director CLI tools.
Procedure
Log in to the director host as the stack user.
Enter the following command to initialize your director configuration:
$ source ~/stackrc
2.4. Registering nodes
An inventory file contains hardware and power management details about nodes. Create an inventory file to configure and register nodes in director.
Procedure
Create an inventory file. Use the example node definition template, instackenv.json, as a reference:
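A minimal sketch of an instackenv.json file for a single node, assuming the IPMI power driver. The MAC address, IPMI credentials, and hardware values are placeholders, and the exact field names can differ between Red Hat OpenStack Platform releases, so compare this against the example template for your release:
{
  "nodes": [
    {
      "name": "node01",
      "mac": ["aa:aa:aa:aa:aa:aa"],
      "cpu": "4",
      "memory": "6144",
      "disk": "40",
      "arch": "x86_64",
      "pm_type": "ipmi",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "pm_addr": "192.168.24.205"
    }
  ]
}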
Save the file to the home directory of the stack user: /home/stack/instackenv.json.
Initialize the stack user, then import the instackenv.json inventory file into director:
$ source ~/stackrc
$ openstack overcloud node import ~/instackenv.json
The openstack overcloud node import command imports the inventory file and registers each node with the director.
Assign the kernel and ramdisk images to each node:
$ openstack overcloud node configure <node>
Result
The nodes are registered and configured in director.
2.5. Manually tagging nodes
After you register each node, you must inspect the hardware and tag the node into a specific profile. Use profile tags to match your nodes to flavors and then assign flavors to deployment roles.
Procedure
Trigger hardware introspection to retrieve the hardware attributes of each node:
$ openstack overcloud node introspect --all-manageable --provide
- The --all-manageable option introspects only the nodes that are in a managed state. In this example, all nodes are in a managed state.
- The --provide option resets all nodes to an available state after introspection.
Important
Ensure that this process completes successfully. This process usually takes 15 minutes for bare metal nodes.
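To monitor progress while introspection runs, you can list introspection status from a separate terminal. This assumes the ironic-inspector client plugin is available on the undercloud:
$ openstack baremetal introspection list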
Retrieve a list of your nodes to identify their UUIDs:
$ openstack baremetal node list
Add a profile option to the properties/capabilities parameter for each node to manually tag a node to a specific profile. The addition of the profile option tags the nodes into each respective profile.
As an alternative to manual tagging, you can configure the Automated Health Check (AHC) Tools to automatically tag larger numbers of nodes based on benchmarking data.
For example, to tag three nodes to use the control profile and another three nodes to use the compute profile, create the following profile options:
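A sketch of the tagging commands, using placeholder UUIDs that you replace with values from the openstack baremetal node list output; depending on your release, you can append additional capabilities, such as a boot option, to the same comma-separated string:
$ openstack baremetal node set --property capabilities='profile:control' <control_node_1_uuid>
$ openstack baremetal node set --property capabilities='profile:control' <control_node_2_uuid>
$ openstack baremetal node set --property capabilities='profile:control' <control_node_3_uuid>
$ openstack baremetal node set --property capabilities='profile:compute' <compute_node_1_uuid>
$ openstack baremetal node set --property capabilities='profile:compute' <compute_node_2_uuid>
$ openstack baremetal node set --property capabilities='profile:compute' <compute_node_3_uuid>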