Chapter 2. Preparing Overcloud Nodes
The scenario described in this chapter consists of six nodes in the Overcloud:
- Three Controller nodes with high availability.
- Three Compute nodes.
The director will integrate a separate Ceph Storage Cluster with its own nodes into the Overcloud. You manage this cluster independently from the Overcloud. For example, you scale the Ceph Storage cluster using the Ceph management tools and not through the OpenStack Platform director. Consult the Red Hat Ceph documentation for more information.
2.1. Pre-deployment validations for Ceph Storage
To help avoid overcloud deployment failures, validate that the required packages exist on your servers.
2.1.1. Verifying the ceph-ansible package version
The undercloud contains Ansible-based validations that you can run to identify potential problems before you deploy the overcloud. These validations can help you avoid overcloud deployment failures by identifying common problems before they happen.
Procedure
Verify that the correct version of the ceph-ansible package is installed:
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/openstack-tripleo-validations/validations/ceph-ansible-installed.yaml
2.1.2. Verifying packages for pre-provisioned nodes
When you use pre-provisioned nodes in your overcloud deployment, you can verify that the servers have the packages required to serve as overcloud nodes that host Ceph services.
For more information about pre-provisioned nodes, see Configuring a Basic Overcloud using Pre-Provisioned Nodes.
Procedure
Verify that the servers contain the required packages:
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/openstack-tripleo-validations/validations/ceph-dependencies-installed.yaml
2.2. Configuring the Existing Ceph Storage Cluster
Create the following pools in your Ceph cluster, relevant to your environment:
- volumes: Storage for OpenStack Block Storage (cinder)
- images: Storage for OpenStack Image Storage (glance)
- vms: Storage for instances
- backups: Storage for OpenStack Block Storage Backup (cinder-backup)
- metrics: Storage for OpenStack Telemetry Metrics (gnocchi)

Use the following commands as a guide:

[root@ceph ~]# ceph osd pool create volumes PGNUM
[root@ceph ~]# ceph osd pool create images PGNUM
[root@ceph ~]# ceph osd pool create vms PGNUM
[root@ceph ~]# ceph osd pool create backups PGNUM
[root@ceph ~]# ceph osd pool create metrics PGNUM

Replace PGNUM with the number of placement groups. We recommend approximately 100 placement groups per OSD: multiply the total number of OSDs by 100 and divide by the number of replicas (osd pool default size). You can also use the Ceph Placement Groups (PGs) per Pool Calculator to determine a suitable value.
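As a rough worked example of the sizing rule above (the OSD and replica counts below are hypothetical placeholders; substitute values from your own cluster):

```shell
# Hypothetical sizing example: 12 OSDs, 3 replicas (osd pool default size).
OSD_COUNT=12
REPLICAS=3
TARGET_PGS_PER_OSD=100

# Guideline from the text: total OSDs * 100 / number of replicas
PGNUM=$(( OSD_COUNT * TARGET_PGS_PER_OSD / REPLICAS ))
echo "$PGNUM"    # prints 400 for these example values
```

In practice, round the result to the nearest power of two, and prefer the PGs per Pool Calculator for production sizing.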
Create a client.openstack user in your Ceph cluster with the following capabilities:
- cap_mgr: allow *
- cap_mon: profile rbd
- cap_osd: profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics

Use the following command as a guide:

[root@ceph ~]# ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups, profile rbd pool=metrics'
Note the Ceph client key created for the client.openstack user:

[client.openstack]
	key = AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==

The key value in the example, AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==, is your Ceph client key.
Finally, note the file system ID of your Ceph Storage cluster. This value is specified with the fsid setting in the configuration file of your cluster (under the [global] section):

[global]
fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
...

Note: For more information about the Ceph Storage cluster configuration file, see Configuration Reference (from the Red Hat Ceph Storage Configuration Guide).
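If you prefer to read the value from the command line, one way (a sketch, assuming the configuration file is at the default path /etc/ceph/ceph.conf) is:

```shell
# Print the fsid value from the cluster configuration file.
# /etc/ceph/ceph.conf is the default path; adjust for your environment.
awk -F' *= *' '/^fsid/ {print $2}' /etc/ceph/ceph.conf
```

On a running cluster, the `ceph fsid` command reports the same value.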
The Ceph client key and file system ID will both be used later in Chapter 3, Integrating with the Existing Ceph Cluster.
2.3. Initializing the Stack User
Log into the director host as the stack user and run the following command to initialize your director configuration:
$ source ~/stackrc
This sets up environment variables containing authentication details to access the director’s CLI tools.
2.4. Registering Nodes
A node definition template (instackenv.json) is a JSON-format file that contains the hardware and power management details for registering nodes. For example:
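The example template was not preserved in this excerpt; the following is a minimal sketch of the usual structure (all names, addresses, credentials, and MAC addresses are hypothetical placeholders):

```json
{
  "nodes": [
    {
      "name": "node01",
      "pm_type": "ipmi",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "pm_addr": "192.168.24.205",
      "mac": ["aa:bb:cc:dd:ee:ff"]
    }
  ]
}
```

The exact field values, in particular the power management driver name in pm_type, vary by release; consult the director documentation for your version.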
Procedure
- After you create the inventory file, save it to the home directory of the stack user (/home/stack/instackenv.json).
- Initialize the stack user, then import the instackenv.json inventory file into the director:

$ source ~/stackrc
$ openstack overcloud node import ~/instackenv.json

The openstack overcloud node import command imports the inventory file and registers each node with the director.
- Assign the kernel and ramdisk images to each node:

$ openstack overcloud node configure <node>
The nodes are now registered and configured in the director.
2.5. Manually Tagging the Nodes
After you register each node, you must inspect the hardware and tag the node into a specific profile. Use profile tags to match your nodes to flavors, and then assign flavors to deployment roles.
To inspect and tag new nodes, complete the following steps:
Trigger hardware introspection to retrieve the hardware attributes of each node:

$ openstack overcloud node introspect --all-manageable --provide

- The --all-manageable option introspects only the nodes that are in a managed state. In this example, all nodes are in a managed state.
- The --provide option resets all nodes to an active state after introspection.

Important: Ensure that this process completes successfully. This process usually takes 15 minutes for bare metal nodes.

Retrieve a list of your nodes to identify their UUIDs:

$ openstack baremetal node list

Add a profile option to the properties/capabilities parameter for each node to manually tag the node to a specific profile. The addition of the profile option tags the nodes into each respective profile.

Note: As an alternative to manual tagging, use the Automated Health Check (AHC) Tools to automatically tag larger numbers of nodes based on benchmarking data.

For example, tag three nodes with the control profile and another three nodes with the compute profile by setting profile:control or profile:compute in the capabilities of the corresponding nodes.
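The tagging step can be sketched as a dry run that prints one command per node for review before you execute anything (the UUIDs below are hypothetical placeholders; openstack baremetal node set is the standard command for updating node properties):

```shell
# Dry run: print one tagging command per node. Remove the leading "echo"
# (and substitute real UUIDs from `openstack baremetal node list`) to apply.
for node in uuid-ctrl-1 uuid-ctrl-2 uuid-ctrl-3; do
  echo openstack baremetal node set --property capabilities="profile:control,boot_option:local" "$node"
done
for node in uuid-comp-1 uuid-comp-2 uuid-comp-3; do
  echo openstack baremetal node set --property capabilities="profile:compute,boot_option:local" "$node"
done
```

Printing the commands first makes it easy to confirm that each node lands in the intended profile before any state changes.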