Chapter 7. Creating the overcloud
When your custom environment files are ready, you can specify the flavors and nodes that each role uses and then execute the deployment. The following subsections explain both steps in greater detail.
7.1. Assigning nodes and flavors to roles
Planning an overcloud deployment involves specifying how many nodes and which flavors to assign to each role. Like all Heat template parameters, these role specifications are declared in the parameter_defaults section of your environment file (in this case, ~/templates/storage-config.yaml).
For this purpose, use the following parameters:
| Heat Template Parameter | Description |
|---|---|
| ControllerCount | The number of Controller nodes to scale out |
| OvercloudControlFlavor | The flavor to use for Controller nodes (`control`) |
| ComputeCount | The number of Compute nodes to scale out |
| OvercloudComputeFlavor | The flavor to use for Compute nodes (`compute`) |
| CephStorageCount | The number of Ceph Storage (OSD) nodes to scale out |
| OvercloudCephStorageFlavor | The flavor to use for Ceph Storage (OSD) nodes (`ceph-storage`) |
| CephMonCount | The number of dedicated Ceph MON nodes to scale out |
| OvercloudCephMonFlavor | The flavor to use for dedicated Ceph MON nodes (`ceph-mon`) |
| CephMdsCount | The number of dedicated Ceph MDS nodes to scale out |
| OvercloudCephMdsFlavor | The flavor to use for dedicated Ceph MDS nodes (`ceph-mds`) |
The CephMonCount, CephMdsCount, OvercloudCephMonFlavor, and OvercloudCephMdsFlavor parameters (along with the ceph-mon and ceph-mds flavors) will only be valid if you created a custom CephMON and CephMds role, as described in Chapter 3, Deploying Ceph services on dedicated nodes.
For example, to configure the overcloud to deploy three nodes for each role (Controller, Compute, Ceph-Storage, and CephMon), add the following to your parameter_defaults:
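A minimal sketch of that parameter_defaults block, assuming the default director flavor names for each role:

```yaml
parameter_defaults:
  ControllerCount: 3
  OvercloudControlFlavor: control
  ComputeCount: 3
  OvercloudComputeFlavor: compute
  CephStorageCount: 3
  OvercloudCephStorageFlavor: ceph-storage
  CephMonCount: 3
  OvercloudCephMonFlavor: ceph-mon
```

The CephMonCount and OvercloudCephMonFlavor lines apply only if you created the custom CephMon role described in Chapter 3.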
See Creating the Overcloud with the CLI Tools from the Director Installation and Usage guide for a more complete list of Heat template parameters.
7.2. Initiating overcloud deployment
During undercloud installation, set generate_service_certificate=false in the undercloud.conf file. Otherwise, you must inject a trust anchor when you deploy the overcloud, as described in Enabling SSL/TLS on Overcloud Public Endpoints in the Advanced Overcloud Customization guide.
- Note
- If you want to add Ceph Dashboard during your overcloud deployment, see Chapter 8, Adding the Red Hat Ceph Storage Dashboard to an overcloud deployment.
The creation of the overcloud requires additional arguments for the openstack overcloud deploy command. For example:
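The following sketch assembles the command from the options explained in the list that follows; adjust the paths and environment files to match your deployment:

```shell
$ openstack overcloud deploy --templates \
  -r /home/stack/templates/roles_data_custom.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-rgw.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml \
  -e /home/stack/templates/storage-config.yaml \
  -e /home/stack/templates/ceph-config.yaml \
  --ntp-server pool.ntp.org
```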
The above command uses the following options:
- `--templates` - Creates the overcloud from the default Heat template collection (namely, `/usr/share/openstack-tripleo-heat-templates/`).
- `-r /home/stack/templates/roles_data_custom.yaml` - Specifies the customized roles definition file from Chapter 3, Deploying Ceph services on dedicated nodes, which adds custom roles for the Ceph MON or Ceph MDS services. These roles allow either service to be installed on dedicated nodes.
- `-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml` - Sets the director to create a Ceph cluster. In particular, this environment file deploys a Ceph cluster with containerized Ceph Storage nodes.
- `-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-rgw.yaml` - Enables the Ceph Object Gateway, as described in Section 4.2, “Enabling the Ceph Object Gateway”.
- `-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml` - Enables the Ceph Metadata Server, as described in Section 4.1, “Enabling the Ceph Metadata Server”.
- `-e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml` - Enables the Block Storage Backup service (cinder-backup), as described in Section 4.4, “Configuring the Backup Service to use Ceph”.
- `-e /home/stack/templates/storage-config.yaml` - Adds the environment file that contains your custom Ceph Storage configuration.
- `-e /home/stack/templates/ceph-config.yaml` - Adds the environment file that contains your custom Ceph cluster settings, as described in Chapter 5, Customizing the Ceph Storage cluster.
- `--ntp-server pool.ntp.org` - Sets the NTP server.
You can also use an answers file to invoke all your templates and environment files. For example, you can use the following command to deploy an identical overcloud:
```shell
$ openstack overcloud deploy -r /home/stack/templates/roles_data_custom.yaml \
  --answers-file /home/stack/templates/answers.yaml --ntp-server pool.ntp.org
```
In this case, the answers file /home/stack/templates/answers.yaml contains:
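For example, the answers file might list the default template collection and the same environment files that the earlier options pass with `-e` (adjust the list to match the files you actually deploy with):

```yaml
templates: /usr/share/openstack-tripleo-heat-templates/
environments:
  - /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
  - /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-rgw.yaml
  - /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml
  - /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml
  - /home/stack/templates/storage-config.yaml
  - /home/stack/templates/ceph-config.yaml
```

The roles definition file is still passed on the command line with `-r`, as shown in the answers-file invocation.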
See Including environment files in an overcloud deployment for more details.
For a full list of options, enter:
```shell
$ openstack help overcloud deploy
```
For more information, see Configuring a basic overcloud with the CLI tools in the Director Installation and Usage guide.
The overcloud creation process begins and director provisions your nodes. This process takes some time to complete. To view the status of the overcloud creation, open a separate terminal as the stack user and enter the following commands:
```shell
$ source ~/stackrc
$ openstack stack list --nested
```
7.2.1. Limiting the nodes on which ceph-ansible runs
You can reduce deployment update time by limiting the nodes where ceph-ansible runs. When Red Hat OpenStack Platform (RHOSP) uses config-download to configure Ceph, you can use the --limit option to specify a list of nodes, instead of running config-download and ceph-ansible across your entire deployment. This feature is useful, for example, as part of scaling up your overcloud, or replacing a failed disk. In these scenarios, the deployment can run only on the new nodes that you add to the environment.
Example scenario that uses --limit in a failed disk replacement
In the following example procedure, the Ceph Storage node oc0-cephstorage-0 has a disk failure, so it receives a new factory-clean disk. Ansible must run on the oc0-cephstorage-0 node so that the new disk can be used as an OSD, but it does not need to run on all of the other Ceph Storage nodes. Replace the example environment files and node names with those appropriate to your environment.
Procedure
Log in to the undercloud node as the `stack` user and source the `stackrc` credentials file:

```shell
$ source stackrc
```

Complete one of the following steps so that the new disk is used to start the missing OSD.
Run a stack update and include the `--limit` option to specify the nodes where you want `ceph-ansible` to run. Include the Controllers in the node list because the Ceph mons need Ansible to change their OSD definitions.
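A sketch of such a stack update, assuming the role and environment files from this guide; you must pass every environment file used in the original deployment:

```shell
$ openstack overcloud deploy --templates \
  -r /home/stack/templates/roles_data_custom.yaml \
  -e /home/stack/templates/storage-config.yaml \
  -e /home/stack/templates/ceph-config.yaml \
  --limit oc0-controller-0:oc0-controller-1:oc0-controller-2:oc0-cephstorage-0:undercloud
```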
If `config-download` has generated an `ansible-playbook-command.sh` script, you can also run the script with the `--limit` option to pass the specified nodes to `ceph-ansible`:

```shell
./ansible-playbook-command.sh --limit oc0-controller-0:oc0-controller-2:oc0-controller-1:oc0-cephstorage-0:undercloud
```

- Warning
- You must always include the undercloud in the limit list; otherwise, `ceph-ansible` cannot be executed when you use `--limit`. This is necessary because the `ceph-ansible` execution occurs through the `external_deploy_steps_tasks` playbook, which runs only on the undercloud.