Chapter 11. Provisioning and deploying your overcloud
To create an overcloud, you must perform the following tasks:

- Provision the network resources for your physical networks:
  - If you are deploying network isolation or a custom composable network, then create a network definition file in YAML format.
  - Run the network provisioning command, including the network definition file.
  - Create a network Virtual IP (VIP) definition file in YAML format.
  - Run the network VIP provisioning command, including the network VIP definition file.
- Provision your bare-metal nodes:
  - Create a node definition file in YAML format.
  - Run the bare-metal node provisioning command, including the node definition file.
- Deploy your overcloud:
  - Run the deployment command, including the heat environment files that the provisioning commands generate.
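For example, after you complete the provisioning tasks, the final deployment command might look similar to the following sketch. The output file names shown are the example names used in the procedures in this chapter; adjust them to the names that you choose:

$ openstack overcloud deploy --templates \
-e /home/stack/templates/overcloud-networks-deployed.yaml \
-e /home/stack/templates/overcloud-vip-deployed.yaml \
-e /home/stack/templates/overcloud-baremetal-deployed.yaml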
11.1. Provisioning the overcloud networks
To configure the network resources for your Red Hat OpenStack Platform (RHOSP) physical network environment, you must perform the following tasks:
- Configure and provision the network resources for your overcloud.
- Configure and provision the network Virtual IPs for your overcloud.
11.1.1. Configuring and provisioning overcloud network definitions
You configure the physical network for your overcloud in a network definition file in YAML format. The provisioning process creates a heat environment file from your network definition file that contains your network specifications. When you deploy your overcloud, include this heat environment file in the deployment command.
Prerequisites
- The undercloud is installed. For more information, see Installing director.
Procedure
Source the stackrc undercloud credential file:

$ source ~/stackrc
Copy the sample network definition template that you require from /usr/share/openstack-tripleo-heat-templates/network-data-samples to your environment file directory:

(undercloud)$ cp /usr/share/openstack-tripleo-heat-templates/network-data-samples/default-network-isolation.yaml /home/stack/templates/network_data.yaml
Configure your network definition file for your network environment. For example, you can update the external network definition:
- name: External
  name_lower: external
  vip: true
  mtu: 1500
  subnets:
    external_subnet:
      ip_subnet: 10.0.0.0/24
      allocation_pools:
        - start: 10.0.0.4
          end: 10.0.0.250
      gateway_ip: 10.0.0.1
      vlan: 10
- Configure any other networks and network attributes for your environment. For more information about the properties you can use to configure network attributes in your network definition file, see Configuring overcloud networking.
Provision the overcloud networks:
(undercloud)$ openstack overcloud network provision \
[--templates <templates_directory>] \
--output <deployment_file> \
/home/stack/templates/<networks_definition_file>
- Optional: Include the --templates option to use your own templates instead of the default templates located in /usr/share/openstack-tripleo-heat-templates. Replace <templates_directory> with the path to the directory that contains your templates.
- Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example, /home/stack/templates/overcloud-networks-deployed.yaml.
- Replace <networks_definition_file> with the name of your networks definition file, for example, network_data.yaml.
When network provisioning is complete, you can use the following commands to check the created networks and subnets:
(undercloud)$ openstack network list
(undercloud)$ openstack subnet list
(undercloud)$ openstack network show <network>
(undercloud)$ openstack subnet show <subnet>
- Replace <network> with the name or UUID of the network that you want to check.
- Replace <subnet> with the name or UUID of the subnet that you want to check.
11.1.2. Configuring and provisioning network VIPs for the overcloud
You configure the network Virtual IPs (VIPs) for your overcloud in a network VIP definition file in YAML format. The provisioning process creates a heat environment file from your VIP definition file that contains your VIP specifications. When you deploy your overcloud, include this heat environment file in the deployment command.
Prerequisites
- The undercloud is installed. For more information, see Installing director.
- Your overcloud networks are provisioned. For more information, see Configuring and provisioning overcloud network definitions.
Procedure
Source the stackrc undercloud credential file:

$ source ~/stackrc
Copy the sample network VIP definition template that you require from /usr/share/openstack-tripleo-heat-templates/network-data-samples to your environment file directory:

(undercloud)$ cp /usr/share/openstack-tripleo-heat-templates/network-data-samples/vip-data-default-network-isolation.yaml /home/stack/templates/vip_data.yaml
Optional: Configure your VIP definition file for your environment. For example, the following defines the external network and control plane VIPs:
- network: external
  dns_name: overcloud
- network: ctlplane
  dns_name: overcloud
- Configure any other network VIP attributes for your environment. For more information about the properties you can use to configure VIP attributes in your VIP definition file, see Adding a composable network.
Provision the network VIPs:
(undercloud)$ openstack overcloud network vip provision \
[--templates <templates_directory>] \
--stack <stack> \
--output <deployment_file> \
/home/stack/templates/<vip_definition_file>
- Optional: Include the --templates option to use your own templates instead of the default templates located in /usr/share/openstack-tripleo-heat-templates. Replace <templates_directory> with the path to the directory that contains your templates.
- Replace <stack> with the name of the stack for which the network VIPs are provisioned, for example, overcloud.
- Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example, /home/stack/templates/overcloud-vip-deployed.yaml.
- Replace <vip_definition_file> with the name of your VIP definition file, for example, vip_data.yaml.
When the network VIP provisioning is complete, you can use the following commands to check the created VIPs:
(undercloud)$ openstack port list
(undercloud)$ openstack port show <port>
- Replace <port> with the name or UUID of the port that you want to check.
Next steps
- Proceed to Section 11.2, “Provisioning bare metal overcloud nodes”.
11.2. Provisioning bare metal overcloud nodes
To configure a Red Hat OpenStack Platform (RHOSP) environment, you must perform the following tasks:
- Register the bare-metal nodes for your overcloud.
- Provide director with an inventory of the hardware of the bare-metal nodes.
- Configure the quantity, attributes, and network layout of the bare-metal nodes in a node definition file.
- Assign each bare metal node a resource class that matches the node to its designated role.
You can also perform additional optional tasks, such as matching profiles to designate overcloud nodes.
11.2.1. Registering nodes for the overcloud
Director requires a node definition template that specifies the hardware and power management details of your nodes. You can create this template in JSON format, nodes.json, or YAML format, nodes.yaml.
Procedure
Create a template named nodes.json or nodes.yaml that lists your nodes. Use the following JSON and YAML template examples to understand how to structure your node definition template:

Example JSON template
{ "nodes": [{ "name": "node01", "ports": [{ "address": "aa:aa:aa:aa:aa:aa", "physical_network": "ctlplane", "local_link_connection": { "switch_id": "52:54:00:00:00:00", "port_id": "p0" } }], "cpu": "4", "memory": "6144", "disk": "40", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin", "pm_password": "p@55w0rd!", "pm_addr": "192.168.24.205" }, { "name": "node02", "ports": [{ "address": "bb:bb:bb:bb:bb:bb", "physical_network": "ctlplane", "local_link_connection": { "switch_id": "52:54:00:00:00:00", "port_id": "p0" } }], "cpu": "4", "memory": "6144", "disk": "40", "arch": "x86_64", "pm_type": "ipmi", "pm_user": "admin", "pm_password": "p@55w0rd!", "pm_addr": "192.168.24.206" }] }
Example YAML template
nodes:
  - name: "node01"
    ports:
      - address: "aa:aa:aa:aa:aa:aa"
        physical_network: ctlplane
        local_link_connection:
          switch_id: 52:54:00:00:00:00
          port_id: p0
    cpu: 4
    memory: 6144
    disk: 40
    arch: "x86_64"
    pm_type: "ipmi"
    pm_user: "admin"
    pm_password: "p@55w0rd!"
    pm_addr: "192.168.24.205"
  - name: "node02"
    ports:
      - address: "bb:bb:bb:bb:bb:bb"
        physical_network: ctlplane
        local_link_connection:
          switch_id: 52:54:00:00:00:00
          port_id: p0
    cpu: 4
    memory: 6144
    disk: 40
    arch: "x86_64"
    pm_type: "ipmi"
    pm_user: "admin"
    pm_password: "p@55w0rd!"
    pm_addr: "192.168.24.206"
This template contains the following attributes:
- name
- The logical name for the node.
- ports
- The port to access the specific IPMI device. You can define the following optional port attributes:
  - address: The MAC address for the network interface on the node. Use only the MAC address for the Provisioning NIC of each system.
  - physical_network: The physical network that is connected to the Provisioning NIC.
  - local_link_connection: If you use IPv6 provisioning and LLDP does not correctly populate the local link connection during introspection, you must include fake data with the switch_id and port_id fields in the local_link_connection parameter. For more information about how to include fake data, see Using director introspection to collect bare metal node hardware information.
- cpu
- (Optional) The number of CPUs on the node.
- memory
- (Optional) The amount of memory in MB.
- disk
- (Optional) The size of the hard disk in GB.
- arch
- (Optional) The system architecture.
- pm_type
- The power management driver that you want to use. This example uses the IPMI driver (ipmi).

  Note: IPMI is the preferred supported power management driver. For more information about supported power management types and their options, see Power management drivers. If these power management drivers do not work as expected, use IPMI for your power management.
- pm_user; pm_password
- The IPMI username and password.
- pm_addr
- The IP address of the IPMI device.
Verify the template formatting and syntax:
$ source ~/stackrc
(undercloud)$ openstack overcloud node import --validate-only ~/nodes.json
Save the template file to the home directory of the stack user (/home/stack/nodes.json).
Import the template to director to register each node from the template into director:
(undercloud)$ openstack overcloud node import ~/nodes.json
Wait for the node registration and configuration to complete. When complete, confirm that director has successfully registered the nodes:
(undercloud)$ openstack baremetal node list
11.2.2. Creating an inventory of the bare-metal node hardware
Director needs the hardware inventory of the nodes in your Red Hat OpenStack Platform (RHOSP) deployment for profile tagging, benchmarking, and manual root disk assignment.
You can provide the hardware inventory to director by using one of the following methods:
- Automatic: You can use director’s introspection process, which collects the hardware information from each node. This process boots an introspection agent on each node. The introspection agent collects hardware data from the node and sends the data back to director. Director stores the hardware data in the OpenStack internal database.
- Manual: You can manually configure a basic hardware inventory for each bare metal machine. This inventory is stored in the Bare Metal Provisioning service (ironic) and is used to manage and deploy the bare-metal machines.
The director automatic introspection process provides the following advantages over the manual method for setting the Bare Metal Provisioning service ports:
- Introspection records all of the connected ports in the hardware information, including the port to use for PXE boot if it is not already configured in nodes.yaml.
- Introspection sets the local_link_connection attribute for each port if the attribute is discoverable using LLDP. When you use the manual method, you must configure local_link_connection for each port when you register the nodes.
- Introspection sets the physical_network attribute for the Bare Metal Provisioning service ports when deploying a spine-and-leaf or DCN architecture.
11.2.2.1. Using director introspection to collect bare metal node hardware information
After you register a physical machine as a bare metal node, you can automatically add its hardware details and create ports for each of its Ethernet MAC addresses by using director introspection.
As an alternative to automatic introspection, you can manually provide director with the hardware information for your bare metal nodes. For more information, see Manually configuring bare metal node hardware information.
Prerequisites
- You have registered the bare-metal nodes for your overcloud.
Procedure
Log in to the undercloud host as the stack user.
Source the stackrc undercloud credentials file:

$ source ~/stackrc
Run the pre-introspection validation group to check the introspection requirements:
(undercloud)$ validation run --group pre-introspection \
--inventory <inventory_file>
Replace <inventory_file> with the name and location of the Ansible inventory file, for example, ~/tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml.

Note: When you run a validation, the Reasons column in the output is limited to 79 characters. To view the validation result in full, view the validation log files.
- Review the results of the validation report.
Optional: Review detailed output from a specific validation:
(undercloud)$ validation history get --full <UUID>
Replace <UUID> with the UUID of the specific validation from the report that you want to review.

Important: A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment.
Inspect the hardware attributes of each node. You can inspect the hardware attributes of all nodes, or specific nodes:
Inspect the hardware attributes of all nodes:
(undercloud)$ openstack overcloud node introspect --all-manageable --provide
- Use the --all-manageable option to introspect only the nodes that are in a managed state. In this example, all nodes are in a managed state.
- Use the --provide option to reset all nodes to an available state after introspection.
Inspect the hardware attributes of specific nodes:
(undercloud)$ openstack overcloud node introspect --provide <node1> [node2] [noden]
- Use the --provide option to reset all the specified nodes to an available state after introspection.
- Replace <node1>, [node2], and all nodes up to [noden] with the UUID of each node that you want to introspect.
Monitor the introspection progress logs in a separate terminal window:
(undercloud)$ sudo tail -f /var/log/containers/ironic-inspector/ironic-inspector.log
Important: Ensure that the introspection process runs to completion. Introspection usually takes 15 minutes for bare metal nodes. However, incorrectly sized introspection networks can cause it to take much longer, which can result in the introspection failing.
Optional: If you have configured your undercloud for bare metal provisioning over IPv6, check that LLDP has set the local_link_connection for the Bare Metal Provisioning service (ironic) ports:

$ openstack baremetal port list --long -c UUID -c "Node UUID" -c "Local Link Connection"
If the Local Link Connection field is empty for the port on your bare metal node, you must populate the local_link_connection value manually with fake data. The following example sets the fake switch ID to 52:54:00:00:00:00, and the fake port ID to p0:

$ openstack baremetal port set <port_uuid> \
--local-link-connection switch_id=52:54:00:00:00:00 \
--local-link-connection port_id=p0
Verify that the Local Link Connection field contains the fake data:
$ openstack baremetal port list --long -c UUID -c "Node UUID" -c "Local Link Connection"
After the introspection completes, all nodes change to an available state.
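For example, you can confirm the state of the nodes by listing them. The --provision-state filter shown here is an optional way to limit the output to nodes in the available state; omit it to list all nodes:

(undercloud)$ openstack baremetal node list --provision-state available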
11.2.2.2. Manually configuring bare-metal node hardware information
After you register a physical machine as a bare metal node, you can manually add its hardware details and create bare-metal ports for each of its Ethernet MAC addresses. You must create at least one bare-metal port before deploying the overcloud.
As an alternative to manual introspection, you can use the automatic director introspection process to collect the hardware information for your bare metal nodes. For more information, see Using director introspection to collect bare metal node hardware information.
Prerequisites
- You have registered the bare-metal nodes for your overcloud.
- You have configured local_link_connection for each port on the registered nodes in nodes.json. For more information, see Registering nodes for the overcloud.
Procedure
Log in to the undercloud host as the stack user.
Source the stackrc undercloud credentials file:

$ source ~/stackrc
Specify the deploy kernel and deploy ramdisk for the node driver:
(undercloud)$ openstack baremetal node set <node> \
--driver-info deploy_kernel=<kernel_file> \
--driver-info deploy_ramdisk=<initramfs_file>
- Replace <node> with the ID of the bare metal node.
- Replace <kernel_file> with the path to the .kernel image, for example, file:///var/lib/ironic/httpboot/agent.kernel.
- Replace <initramfs_file> with the path to the .initramfs image, for example, file:///var/lib/ironic/httpboot/agent.ramdisk.
Update the node properties to match the hardware specifications on the node:
(undercloud)$ openstack baremetal node set <node> \
--property cpus=<cpu> \
--property memory_mb=<ram> \
--property local_gb=<disk> \
--property cpu_arch=<arch>
- Replace <node> with the ID of the bare metal node.
- Replace <cpu> with the number of CPUs.
- Replace <ram> with the RAM in MB.
- Replace <disk> with the disk size in GB.
- Replace <arch> with the architecture type.
Optional: Specify the IPMI cipher suite for each node:
(undercloud)$ openstack baremetal node set <node> \
--driver-info ipmi_cipher_suite=<version>
- Replace <node> with the ID of the bare metal node.
- Replace <version> with the cipher suite version to use on the node. Set to one of the following valid values:
  - 3 - The node uses the AES-128 with SHA1 cipher suite.
  - 17 - The node uses the AES-128 with SHA256 cipher suite.
Optional: If you have multiple disks, set the root device hints to inform the deploy ramdisk which disk to use for deployment:
(undercloud)$ openstack baremetal node set <node> \
--property root_device='{"<property>": "<value>"}'
- Replace <node> with the ID of the bare metal node.
- Replace <property> and <value> with details about the disk that you want to use for deployment, for example, root_device='{"size": "128"}'. RHOSP supports the following properties:
  - model (String): Device identifier.
  - vendor (String): Device vendor.
  - serial (String): Disk serial number.
  - hctl (String): Host:Channel:Target:Lun for SCSI.
  - size (Integer): Size of the device in GB.
  - wwn (String): Unique storage identifier.
  - wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
  - wwn_vendor_extension (String): Unique vendor storage identifier.
  - rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
  - name (String): The name of the device, for example, /dev/sdb1. Use this property only for devices with persistent names.

  Note: If you specify more than one property, the device must match all of those properties.
Inform the Bare Metal Provisioning service of the node network card by creating a port with the MAC address of the NIC on the provisioning network:
(undercloud)$ openstack baremetal port create --node <node_uuid> <mac_address>
- Replace <node_uuid> with the unique ID of the bare metal node.
- Replace <mac_address> with the MAC address of the NIC used to PXE boot.
Validate the configuration of the node:
(undercloud)$ openstack baremetal node validate <node>
+------------+--------+---------------------------------------------+
| Interface  | Result | Reason                                      |
+------------+--------+---------------------------------------------+
| bios       | True   |                                             |
| boot       | True   |                                             |
| console    | True   |                                             |
| deploy     | False  | Node 229f0c3d-354a-4dab-9a88-ebd318249ad6   |
|            |        | failed to validate deploy image info.       |
|            |        | Some parameters were missing. Missing are:  |
|            |        | [instance_info.image_source]                |
| inspect    | True   |                                             |
| management | True   |                                             |
| network    | True   |                                             |
| power      | True   |                                             |
| raid       | True   |                                             |
| rescue     | True   |                                             |
| storage    | True   |                                             |
+------------+--------+---------------------------------------------+
The validation output Result indicates the following:

- False: The interface has failed validation. If the reason includes a missing instance_info.image_source parameter, this might be because that parameter is populated during provisioning, so it has not been set at this point. If you are using a whole disk image, you might only need to set image_source to pass the validation.
- True: The interface has passed validation.
- None: The interface is not supported for your driver.
11.2.3. Provisioning bare metal nodes for the overcloud
To provision your bare metal nodes, you define the quantity and attributes of the bare metal nodes that you want to deploy in a node definition file in YAML format, and assign overcloud roles to these nodes. You also define the network layout of the nodes.
The provisioning process creates a heat environment file from your node definition file. This heat environment file contains the node specifications you configured in your node definition file, including node count, predictive node placement, custom images, and custom NICs. When you deploy your overcloud, include this file in the deployment command. The provisioning process also provisions the port resources for all networks defined for each node or role in the node definition file.
Prerequisites
- The undercloud is installed. For more information, see Installing director.
- The bare metal nodes are registered, introspected, and available for provisioning and deployment. For more information, see Registering nodes for the overcloud and Creating an inventory of the bare metal node hardware.
Procedure
Source the stackrc undercloud credential file:

$ source ~/stackrc
Create the overcloud-baremetal-deploy.yaml node definition file and define the node count for each role that you want to provision. For example, to provision three Controller nodes and three Compute nodes, add the following configuration to your overcloud-baremetal-deploy.yaml file:

- name: Controller
  count: 3
- name: Compute
  count: 3
Optional: Configure predictive node placements. For example, use the following configuration to provision three Controller nodes on nodes node00, node01, and node02, and three Compute nodes on node04, node05, and node06:

- name: Controller
  count: 3
  instances:
    - hostname: overcloud-controller-0
      name: node00
    - hostname: overcloud-controller-1
      name: node01
    - hostname: overcloud-controller-2
      name: node02
- name: Compute
  count: 3
  instances:
    - hostname: overcloud-novacompute-0
      name: node04
    - hostname: overcloud-novacompute-1
      name: node05
    - hostname: overcloud-novacompute-2
      name: node06
Optional: By default, the provisioning process uses the overcloud-hardened-uefi-full.qcow2 image. You can change the image used on specific nodes, or the image used for all nodes for a role, by specifying the local or remote URL for the image. The following examples change the image to a local QCOW2 image:

Specific nodes

- name: Controller
  count: 3
  instances:
    - hostname: overcloud-controller-0
      name: node00
      image:
        href: file:///var/lib/ironic/images/overcloud-custom.qcow2
    - hostname: overcloud-controller-1
      name: node01
      image:
        href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
    - hostname: overcloud-controller-2
      name: node02
      image:
        href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
All nodes for a role
- name: Controller
  count: 3
  defaults:
    image:
      href: file:///var/lib/ironic/images/overcloud-custom.qcow2
  instances:
    - hostname: overcloud-controller-0
      name: node00
    - hostname: overcloud-controller-1
      name: node01
    - hostname: overcloud-controller-2
      name: node02
Define the network layout for all nodes for a role, or the network layout for specific nodes:
Specific nodes
The following example provisions the networks for a specific Controller node, and allocates a predictable IP to the node for the Internal API network:
- name: Controller
  count: 3
  defaults:
    network_config:
      template: /home/stack/templates/nic-config/myController.j2
      default_route_network:
        - external
  instances:
    - hostname: overcloud-controller-0
      name: node00
      networks:
        - network: ctlplane
          vif: true
        - network: external
          subnet: external_subnet
        - network: internal_api
          subnet: internal_api_subnet01
          fixed_ip: 172.21.11.100
        - network: storage
          subnet: storage_subnet01
        - network: storage_mgmt
          subnet: storage_mgmt_subnet01
        - network: tenant
          subnet: tenant_subnet01
All nodes for a role
The following example provisions the networks for the Controller and Compute roles:
- name: Controller
  count: 3
  defaults:
    networks:
      - network: ctlplane
        vif: true
      - network: external
        subnet: external_subnet
      - network: internal_api
        subnet: internal_api_subnet01
      - network: storage
        subnet: storage_subnet01
      - network: storage_mgmt
        subnet: storage_mgmt_subnet01
      - network: tenant
        subnet: tenant_subnet01
    network_config:
      template: /home/stack/templates/nic-config/myController.j2   1
      default_route_network:
        - external
- name: Compute
  count: 3
  defaults:
    networks:
      - network: ctlplane
        vif: true
      - network: internal_api
        subnet: internal_api_subnet02
      - network: tenant
        subnet: tenant_subnet02
      - network: storage
        subnet: storage_subnet02
    network_config:
      template: /home/stack/templates/nic-config/myCompute.j2
1 You can use the example NIC templates located in /usr/share/ansible/roles/tripleo_network_config/templates to create your own NIC templates in your local environment file directory.
Optional: Configure the disk partition size allocations if the default disk partition sizes do not meet your requirements. For example, the default partition size for the /var/log partition is 10 GB. Consider your log storage and retention requirements to determine if 10 GB meets your requirements. If you need to increase the allocated disk size for your log storage, add the following configuration to your node definition file to override the defaults:

ansible_playbooks:
  - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-growvols.yaml
    extra_vars:
      role_growvols_args:
        default:
          /=8GB
          /tmp=1GB
          /var/log=<log_size>GB
          /var/log/audit=2GB
          /home=1GB
          /var=100%
- Replace <log_size> with the size of the disk to allocate to log files.
- If you use the Object Storage service (swift) and the whole disk overcloud image, overcloud-hardened-uefi-full, you need to configure the size of the /srv partition based on the size of your disk and your storage requirements for /var and /srv. For more information, see Configuring whole disk partitions for the Object Storage service.
- Optional: Designate the overcloud nodes for specific roles by using custom resource classes or the profile capability. For more information, see Designating overcloud nodes for roles by matching resource classes and Designating overcloud nodes for roles by matching profiles.
- Define any other attributes that you want to assign to your nodes. For more information about the properties you can use to configure node attributes in your node definition file, see Bare metal node provisioning attributes. For an example node definition file, see Example node definition file.
Provision the overcloud nodes:
(undercloud)$ openstack overcloud node provision \
[--templates <templates_directory>] \
--stack <stack> \
--network-config \
--output <deployment_file> \
/home/stack/templates/<node_definition_file>
- Optional: Include the --templates option to use your own templates instead of the default templates located in /usr/share/openstack-tripleo-heat-templates. Replace <templates_directory> with the path to the directory that contains your templates.
- Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud.
- Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook.
- Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example, /home/stack/templates/overcloud-baremetal-deployed.yaml.
- Replace <node_definition_file> with the name of your node definition file, for example, overcloud-baremetal-deploy.yaml.
Monitor the provisioning progress in a separate terminal:
(undercloud)$ watch openstack baremetal node list
- When provisioning is successful, the node state changes from available to active.
- If the node provisioning fails because of a node hardware or network configuration failure, then you can remove the failed node before running the provisioning step again. For more information, see Removing failed bare-metal nodes from the node definition file.
Use the metalsmith tool to obtain a unified view of your nodes, including allocations and ports:

(undercloud)$ metalsmith list
Verify the association of nodes to hostnames:
(undercloud)$ openstack baremetal allocation list
Next steps
- Proceed to Section 11.3, “Configuring and deploying the overcloud”.
11.2.4. Bare-metal node provisioning attributes
Use the following tables to understand the available properties for configuring node attributes, and the values that are available for you to use when you provision bare-metal nodes with the openstack overcloud node provision command.
- Role properties: Use the role properties to define each role.
- Default and instance properties for each role: Use the default or instance properties to specify the selection criteria for allocating nodes from the pool of available nodes, and to set attributes and network configuration properties on the bare-metal nodes.
For information on creating baremetal definition files, see Provisioning bare metal nodes for the overcloud.
Role properties

Property | Value
---|---
name | (Mandatory) Role name.
count | The number of nodes that you want to provision for this role.
defaults | A dictionary of default values for the properties of the node instances of this role. For more information about the supported properties, see the defaults and instances properties table.
instances | A dictionary of values that you can use to specify attributes for specific nodes. For more information about the supported properties, see the defaults and instances properties table.
hostname_format | Overrides the default hostname format for this role. The default generated hostname is derived from the overcloud stack name, the role, and an incrementing index, all in lower case.
ansible_playbooks | A dictionary of values for Ansible playbooks and Ansible variables. The playbooks are run against the role instances after node provisioning, prior to the node network configuration. For more information about specifying Ansible playbooks, see the ansible_playbooks properties table.
defaults and instances properties

Property | Value
---|---
hostname | (Instances only) The hostname to assign to the node instance, for example, overcloud-controller-0.
name | (Instances only) The name of the registered bare-metal node to assign to this instance, for example, node00.
image | Details of the image that you want to provision onto the node. For more information about the supported image properties, see the image properties table.
capabilities | Selection criteria to match the node capabilities.
config_drive | Add data and first-boot commands to the config-drive passed to the node. For more information, see the config_drive properties table.
managed | Set to false for node instances that the provisioning process does not manage.
networks | List of dictionaries that represent instance networks. For more information about configuring network attributes, see the networks properties table.
network_config | Link to the network configuration file for the role or instance. For more information about configuring the link to the network configuration file, see the network_config properties table.
profile | Selection criteria for profile matching. For more information, see Designating overcloud nodes for roles by matching profiles.
provisioned | Set to false to unprovision a node. For more information, see Removing failed bare-metal nodes from the node definition file.
resource_class | Selection criteria to match the resource class of the node. The default value is baremetal.
root_size_gb | Size of the root partition in GiB.
swap_size_mb | Size of the swap partition in MiB.
traits | A list of traits as selection criteria to match the node traits.
image properties

Property | Value
---|---
href | Specifies the URL of the root partition or whole disk image that you want to provision onto the node, for example, file:///var/lib/ironic/images/overcloud-custom.qcow2.
checksum | Specifies the MD5 checksum of the root partition or whole disk image. Required when the href is a URL.
kernel | Specifies the image reference or URL of the kernel image. Use this property only for partition images.
ramdisk | Specifies the image reference or URL of the ramdisk image. Use this property only for partition images.
networks properties

Property | Value
---|---
fixed_ip | The specific IP address that you want to use for this network.
network | The network where you want to create the network port.
subnet | The subnet where you want to create the network port.
port | Existing port to use instead of creating a new port.
vif | Set to true to attach the network to the node as a virtual interface (VIF) port, as shown for the ctlplane network in the examples in this chapter.
network_config properties

Property | Value
---|---
template | Specifies the Ansible J2 NIC configuration template to use when applying node network configuration. For information on configuring the NIC template, see Configuring overcloud networking.
physical_bridge_name | The name of the OVS bridge to create for accessing external networks.
public_interface_name | Specifies the name of the interface to add to the public bridge.
network_config_update | Set to true to apply network configuration changes on update.
net_config_data_lookup | Specifies the NIC mapping configuration for each node or node group.
default_route_network | The network to use for the default route.
networks_skip_config | List of networks to skip when configuring the node networking.
dns_search_domains | A list of DNS search domains to be added to resolv.conf, in order of priority.
bond_interface_ovs_options | The OVS options or bonding options to use for the bond interface.
num_dpdk_interface_rx_queues | Specifies the number of required RX queues for DPDK bonds or DPDK ports.
config_drive properties

Property | Value
---|---
cloud_config | Dictionary of cloud-init cloud-config data for tasks to run on node boot. See the example that follows this table.
meta_data | Extra metadata to include with the config-drive.

For example, the following cloud_config manages name resolution on the node:

config_drive:
  cloud_config:
    manage_resolv_conf: true
    resolv_conf:
      nameservers:
        - 8.8.8.8
        - 8.8.4.4
      searchdomains:
        - abc.example.com
        - xyz.example.com
      domain: example.com
      sortlist:
        - 10.0.0.1/255
        - 10.0.0.2
      options:
        rotate: true
        timeout: 1
ansible_playbooks properties

Property | Value
---|---
playbook | The path to the Ansible playbook, relative to the roles definition YAML file.
extra_vars | Extra Ansible variables to set when running the playbook. See the examples that follow this table.

Use the following syntax to specify extra variables:

ansible_playbooks:
  - playbook: a_playbook.yaml
    extra_vars:
      param1: value1
      param2: value2

For example, to grow the LVM volumes of any node deployed with the whole disk overcloud image overcloud-hardened-uefi-full, add the following configuration:

ansible_playbooks:
  - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-growvols.yaml
    extra_vars:
      role_growvols_args:
        default:
          /=8GB
          /tmp=1GB
          /var/log=10GB
          /var/log/audit=2GB
          /home=1GB
          /var=100%
        Controller:
          /=8GB
          /tmp=1GB
          /var/log=10GB
          /var/log/audit=2GB
          /home=1GB
          /srv=50GB
          /var=100%
11.2.5. Removing failed bare-metal nodes from the node definition file
If the node provisioning fails because of a node hardware or network configuration failure, then you can remove the failed node before running the provisioning step again. To remove a bare-metal node that has failed during provisioning, tag the node that you want to remove from the stack in the node definition file, and unprovision the node before provisioning the working bare-metal nodes.
Prerequisites
- The undercloud is installed. For more information, see Installing director.
- The bare-metal node provisioning failed because of a node hardware failure.
Procedure
Source the stackrc undercloud credential file:

$ source ~/stackrc
Open your overcloud-baremetal-deploy.yaml node definition file.
Decrement the count parameter for the role that the node is allocated to. For example, the following configuration updates the count parameter for the ObjectStorage role to reflect that the number of nodes dedicated to ObjectStorage is reduced to 3:

- name: ObjectStorage
  count: 3
Define the hostname and name of the node that you want to remove from the stack, if it is not already defined in the instances attribute for the role.
Add the attribute provisioned: false to the node that you want to remove. For example, to remove the node overcloud-objectstorage-1 from the stack, include the following snippet in your overcloud-baremetal-deploy.yaml file:

- name: ObjectStorage
  count: 3
  instances:
    - hostname: overcloud-objectstorage-0
      name: node00
    - hostname: overcloud-objectstorage-1
      name: node01
      # Removed from cluster due to disk failure
      provisioned: false
    - hostname: overcloud-objectstorage-2
      name: node02
    - hostname: overcloud-objectstorage-3
      name: node03
Unprovision the bare-metal nodes:
(undercloud)$ openstack overcloud node unprovision \
--stack <stack> \
--network-ports \
/home/stack/templates/overcloud-baremetal-deploy.yaml
- Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud.
Provision the overcloud nodes to generate an updated heat environment file for inclusion in the deployment command:
(undercloud)$ openstack overcloud node provision \
--stack <stack> \
--output <deployment_file> \
/home/stack/templates/overcloud-baremetal-deploy.yaml
- Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example, /home/stack/templates/overcloud-baremetal-deployed.yaml.
11.2.6. Designating overcloud nodes for roles by matching resource classes
You can designate overcloud nodes for specific roles by using custom resource classes. Resource classes match your nodes to deployment roles. By default, all nodes are assigned the resource class of baremetal.
To change the resource class assigned to a node after the node is provisioned you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes.
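For example, before you assign a custom resource class, you can check the resource class that is currently assigned to a node. The resource_class field is a standard field of the bare metal node record:

(undercloud)$ openstack baremetal node show <node> -f value -c resource_class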
Prerequisites
- You are performing the initial provisioning of your bare metal nodes for the overcloud.
Procedure
Assign each bare metal node that you want to designate for a role with a custom resource class:
(undercloud)$ openstack baremetal node set \
--resource-class <resource_class> <node>
- Replace <resource_class> with a name for the resource class, for example, baremetal.CPU-PINNING.
- Replace <node> with the ID of the bare metal node.
Add the role to your overcloud-baremetal-deploy.yaml file, if not already defined.
Specify the resource class that you want to assign to the nodes for the role:

- name: <role>
  count: 1
  defaults:
    resource_class: <resource_class>
- Replace <role> with the name of the role.
- Replace <resource_class> with the name you specified for the resource class in step 1.
- Return to Provisioning bare metal nodes for the overcloud to complete the provisioning process.
11.2.7. Designating overcloud nodes for roles by matching profiles
You can designate overcloud nodes for specific roles by using the profile capability. Profiles match node capabilities to deployment roles.
You can also perform automatic profile assignment by using introspection rules. For more information, see Configuring automatic profile tagging.
To change the profile assigned to a node after the node is provisioned you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new profile assignment. For more information, see Scaling overcloud nodes.
Prerequisites
- You are performing the initial provisioning of your bare metal nodes for the overcloud.
Procedure
Existing node capabilities are overwritten each time you add a new node capability. Therefore, you must retrieve the existing capabilities of each registered node in order to set them again:
$ openstack baremetal node show <node> \
-f json -c properties | jq -r .properties.capabilities
Assign the profile capability to each bare metal node that you want to match to a role profile, by adding profile:<profile> to the existing capabilities of the node:

(undercloud)$ openstack baremetal node set <node> \
--property capabilities="profile:<profile>,<capability_1>,...,<capability_n>"
- Replace <node> with the name or ID of the bare metal node.
- Replace <profile> with the name of the profile that matches the role profile.
- Replace <capability_1>, and all capabilities up to <capability_n>, with each capability that you retrieved in step 1.
Add the role to your overcloud-baremetal-deploy.yaml file, if not already defined.
Define the profile that you want to assign to the nodes for the role:

- name: <role>
  count: 1
  defaults:
    profile: <profile>
- Replace <role> with the name of the role.
- Replace <profile> with the name of the profile that matches the node capability.
- Return to Provisioning bare metal nodes for the overcloud to complete the provisioning process.
11.2.8. Configuring whole disk partitions for the Object Storage service
The whole disk image, overcloud-hardened-uefi-full, is partitioned into separate volumes. By default, the /var partition of nodes deployed with the whole disk overcloud image is automatically increased until the disk is fully allocated. If you use the Object Storage service (swift), configure the size of the /srv partition based on the size of your disk and your storage requirements for /var and /srv.
Prerequisites
- You are performing the initial provisioning of your bare metal nodes for the overcloud.
Procedure
Configure the /srv and /var partitions by using role_growvols_args as an extra Ansible variable in the ansible_playbooks definition in your overcloud-baremetal-deploy.yaml node definition file. Set either /srv or /var to an absolute size in GB, and set the other to 100% to consume the remaining disk space.

The following example configuration sets /srv to an absolute size for the Object Storage service deployed on the Controller node, and /var to 100% to consume the remaining disk space:

ansible_playbooks:
  - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-growvols.yaml
    extra_vars:
      role_growvols_args:
        default:
          /=8GB
          /tmp=1GB
          /var/log=10GB
          /var/log/audit=2GB
          /home=1GB
          /var=100%
        Controller:
          /=8GB
          /tmp=1GB
          /var/log=10GB
          /var/log/audit=2GB
          /home=1GB
          /srv=50GB
          /var=100%

The following example configuration sets /var to an absolute size, and /srv to 100% to consume the remaining disk space of the Object Storage node for the Object Storage service:

ansible_playbooks:
  - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-growvols.yaml
    extra_vars:
      role_growvols_args:
        default:
          /=8GB
          /tmp=1GB
          /var/log=10GB
          /var/log/audit=2GB
          /home=1GB
          /var=100%
        ObjectStorage:
          /=8GB
          /tmp=1GB
          /var/log=10GB
          /var/log/audit=2GB
          /home=1GB
          /var=10GB
          /srv=100%
- Return to Provisioning bare metal nodes for the overcloud to complete the provisioning process.
11.2.9. Example node definition file
The following example node definition file defines predictive node placements for three Controller nodes and three Compute nodes, and the default networks they use. The example also illustrates how to define custom roles that have nodes designated based on matching a resource class or a node capability profile.
- name: Controller
  count: 3
  defaults:
    image:
      href: file:///var/lib/ironic/images/overcloud-custom.qcow2
    networks:
      - network: ctlplane
        vif: true
      - network: external
        subnet: external_subnet
      - network: internal_api
        subnet: internal_api_subnet01
      - network: storage
        subnet: storage_subnet01
      - network: storagemgmt
        subnet: storage_mgmt_subnet01
      - network: tenant
        subnet: tenant_subnet01
    network_config:
      template: /home/stack/templates/nic-config/myController.j2
      default_route_network:
        - external
    profile: nodeCapability
  instances:
    - hostname: overcloud-controller-0
      name: node00
    - hostname: overcloud-controller-1
      name: node01
    - hostname: overcloud-controller-2
      name: node02
- name: Compute
  count: 3
  defaults:
    networks:
      - network: ctlplane
        vif: true
      - network: internal_api
        subnet: internal_api_subnet02
      - network: tenant
        subnet: tenant_subnet02
      - network: storage
        subnet: storage_subnet02
    network_config:
      template: /home/stack/templates/nic-config/myCompute.j2
    resource_class: baremetal.COMPUTE
  instances:
    - hostname: overcloud-novacompute-0
      name: node04
    - hostname: overcloud-novacompute-1
      name: node05
    - hostname: overcloud-novacompute-2
      name: node06
11.2.10. Enabling virtual media boot
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
You can use Redfish virtual media boot to supply a boot image to the Baseboard Management Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. The node can then boot from the virtual drive into the operating system that exists in the image.
Redfish hardware types support booting deploy, rescue, and user images over virtual media. The Bare Metal Provisioning service (ironic) uses kernel and ramdisk images associated with a node to build bootable ISO images for UEFI or BIOS boot modes at the moment of node deployment. The major advantage of virtual media boot is that you can eliminate the TFTP image transfer phase of PXE and use HTTP GET, or other methods, instead.
To boot a node with the redfish hardware type over virtual media, set the boot interface to redfish-virtual-media and define the EFI System Partition (ESP) image. Then configure an enrolled node to use Redfish virtual media boot.
Prerequisites
- Redfish driver enabled in the enabled_hardware_types parameter in the undercloud.conf file.
- A bare-metal node registered and enrolled.
- IPA and instance images in the Image Service (glance).
- For UEFI nodes, you must also have an EFI system partition image (ESP) available in the Image Service (glance).
- A bare-metal flavor.
- A network for cleaning and provisioning.
Procedure
Log in to the undercloud host as the stack user.
Source the stackrc undercloud credentials file:

$ source ~/stackrc
Set the Bare Metal Provisioning service boot interface to redfish-virtual-media:

(undercloud)$ openstack baremetal node set --boot-interface redfish-virtual-media <node>
- Replace <node> with the name of the node.
Define the ESP image:
(undercloud)$ openstack baremetal node set --driver-info bootloader=<esp> <node>
- Replace <esp> with the Image service (glance) image UUID or the URL for the ESP image.
- Replace <node> with the name of the node.
Create a port on the bare-metal node and associate the port with the MAC address of the NIC on the bare-metal node:
(undercloud)$ openstack baremetal port create --pxe-enabled True \
--node <node_uuid> <mac_address>
- Replace <node_uuid> with the UUID of the bare-metal node.
- Replace <mac_address> with the MAC address of the NIC on the bare-metal node.
11.2.11. Defining the root disk for multi-disk Ceph clusters
Ceph Storage nodes typically use multiple disks. Director must identify the root disk in multiple disk configurations. The overcloud image is written to the root disk during the provisioning process.
Hardware properties are used to identify the root disk. For more information about properties you can use to identify the root disk, see Section 11.2.12, “Properties that identify the root disk”.
Procedure
Verify the disk information from the hardware introspection of each node:
(undercloud)$ openstack baremetal introspection data save <node_uuid> --file <output_file_name>
- Replace <node_uuid> with the UUID of the node.
- Replace <output_file_name> with the name of the file that contains the output of the node introspection.

For example, the data for one node might show three disks:

[
  {
    "size": 299439751168,
    "rotational": true,
    "vendor": "DELL",
    "name": "/dev/sda",
    "wwn_vendor_extension": "0x1ea4dcc412a9632b",
    "wwn_with_extension": "0x61866da04f3807001ea4dcc412a9632b",
    "model": "PERC H330 Mini",
    "wwn": "0x61866da04f380700",
    "serial": "61866da04f3807001ea4dcc412a9632b"
  }
  {
    "size": 299439751168,
    "rotational": true,
    "vendor": "DELL",
    "name": "/dev/sdb",
    "wwn_vendor_extension": "0x1ea4e13c12e36ad6",
    "wwn_with_extension": "0x61866da04f380d001ea4e13c12e36ad6",
    "model": "PERC H330 Mini",
    "wwn": "0x61866da04f380d00",
    "serial": "61866da04f380d001ea4e13c12e36ad6"
  }
  {
    "size": 299439751168,
    "rotational": true,
    "vendor": "DELL",
    "name": "/dev/sdc",
    "wwn_vendor_extension": "0x1ea4e31e121cfb45",
    "wwn_with_extension": "0x61866da04f37fc001ea4e31e121cfb45",
    "model": "PERC H330 Mini",
    "wwn": "0x61866da04f37fc00",
    "serial": "61866da04f37fc001ea4e31e121cfb45"
  }
]
Set the root disk for the node by using a unique hardware property:
(undercloud)$ openstack baremetal node set --property root_device='{<property_value>}' <node_uuid>
- Replace <property_value> with the unique hardware property value from the introspection data to use to set the root disk.
- Replace <node_uuid> with the UUID of the node.

Note: A unique hardware property is any property from the hardware introspection step that uniquely identifies the disk. For example, the following command uses the disk serial number to set the root disk:

(undercloud)$ openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0
- Configure the BIOS of each node to first boot from the network and then the root disk.
Director identifies the specific disk to use as the root disk. When you run the openstack overcloud node provision command, director provisions and writes the overcloud image to the root disk.
11.2.12. Properties that identify the root disk
There are several properties that you can define to help director identify the root disk:
- model (String): Device identifier.
- vendor (String): Device vendor.
- serial (String): Disk serial number.
- hctl (String): Host:Channel:Target:Lun for SCSI.
- size (Integer): Size of the device in GB.
- wwn (String): Unique storage identifier.
- wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
- wwn_vendor_extension (String): Unique vendor storage identifier.
- rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
- name (String): The name of the device, for example: /dev/sdb1.
Use the name property for devices with persistent names. Do not use the name property to set the root disk for devices that do not have persistent names, because the value can change when the node boots.
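For example, instead of the disk serial number, you can use the wwn value to identify the root disk. The value shown here is taken from the sample introspection data earlier in this chapter, and <node_uuid> is the node that owns the disk:

(undercloud)$ openstack baremetal node set --property root_device='{"wwn": "0x61866da04f380d00"}' <node_uuid>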
11.2.13. Using the overcloud-minimal image to avoid using a Red Hat subscription entitlement
The default image for a Red Hat OpenStack Platform (RHOSP) deployment is overcloud-hardened-uefi-full.qcow2, which uses a valid RHOSP subscription. You can use the overcloud-minimal image when you do not want to consume your subscription entitlements, to avoid reaching the limit of your paid Red Hat subscriptions. This is useful, for example, when you want to provision nodes with only Ceph daemons, or when you want to provision a bare operating system (OS) where you do not want to run any other OpenStack services. For information about how to obtain the overcloud-minimal image, see Obtaining images for overcloud nodes.
The overcloud-minimal image supports only standard Linux bridges. The overcloud-minimal image does not support Open vSwitch (OVS) because OVS is an OpenStack service that requires a Red Hat OpenStack Platform subscription entitlement. OVS is not required to deploy Ceph Storage nodes. Use linux_bond instead of ovs_bond to define bonds. For more information about linux_bond, see Creating Linux bonds.
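For illustration, the network_config section of a NIC template for nodes that use the overcloud-minimal image might define a Linux bond similar to the following sketch. The bond name, the interface names (nic2, nic3), and the bonding options are assumptions that you adjust for your own hardware:

- type: linux_bond
  name: bond_api
  bonding_options: "mode=active-backup"
  use_dhcp: false
  members:
    - type: interface
      name: nic2
      primary: true
    - type: interface
      name: nic3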
Procedure
Open your overcloud-baremetal-deploy.yaml file.
Add or update the image property for the nodes that you want to use the overcloud-minimal image. You can set the image to overcloud-minimal on specific nodes, or for all nodes for a role:

Specific nodes
- name: Ceph
  count: 3
  instances:
    - hostname: overcloud-ceph-0
      name: node00
      image:
        href: file:///var/lib/ironic/images/overcloud-minimal.qcow2
    - hostname: overcloud-ceph-1
      name: node01
      image:
        href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
    - hostname: overcloud-ceph-2
      name: node02
      image:
        href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
All nodes for a role
- name: Ceph
  count: 3
  defaults:
    image:
      href: file:///var/lib/ironic/images/overcloud-minimal.qcow2
  instances:
    - hostname: overcloud-ceph-0
      name: node00
    - hostname: overcloud-ceph-1
      name: node01
    - hostname: overcloud-ceph-2
      name: node02
In the roles_data.yaml role definition file, set the rhsm_enforce parameter to False:

rhsm_enforce: False
Run the provisioning command:
(undercloud)$ openstack overcloud node provision \
--stack stack \
--output /home/stack/templates/overcloud-baremetal-deployed.yaml \
/home/stack/templates/overcloud-baremetal-deploy.yaml
- Pass the overcloud-baremetal-deployed.yaml environment file to the openstack overcloud deploy command.
11.3. Configuring and deploying the overcloud
After you have provisioned the network resources and bare-metal nodes for your overcloud, you can configure your overcloud by using the unedited heat template files provided with your director installation, and any custom environment files you create. When you have completed the configuration of your overcloud, you can deploy the overcloud environment.
A basic overcloud uses local LVM storage for block storage, which is not a supported configuration. Red Hat recommends that you use an external storage solution, such as Red Hat Ceph Storage, for block storage.
11.3.1. Prerequisites
- You have provisioned the network resources and bare-metal nodes required for your overcloud.
11.3.2. Configuring your overcloud by using environment files
The undercloud includes a set of heat templates that form the plan for your overcloud creation. You can customize aspects of the overcloud with environment files, which are YAML-formatted files that override parameters and resources in the core heat template collection. You can include as many environment files as necessary. The environment file extension must be .yaml or .template.
Red Hat recommends that you organize your custom environment files in a separate directory, such as the templates directory.
You include environment files in your overcloud deployment by using the -e option. Any environment files that you add to the overcloud using the -e option become part of the stack definition of the overcloud. The order of the environment files is important because the parameters and resources that you define in subsequent environment files take precedence.
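For example, the following sketch shows how ordering determines precedence. The file names are placeholders for your own environment files; any parameter defined in both files takes its value from network-overrides.yaml because that file is listed last:

$ openstack overcloud deploy --templates \
-e /home/stack/templates/base-config.yaml \
-e /home/stack/templates/network-overrides.yaml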
To modify the overcloud configuration after initial deployment, perform the following actions:
- Modify parameters in the custom environment files and heat templates.
-
Run the
openstack overcloud deploy
command again with the same environment files.
Do not edit the overcloud configuration directly because director overrides any manual configuration when you update the overcloud stack.
Open Virtual Networking (OVN) is the default networking mechanism driver in Red Hat OpenStack Platform 17.0. If you want to use OVN with distributed virtual routing (DVR), you must include the environments/services/neutron-ovn-dvr-ha.yaml file in the openstack overcloud deploy command. If you want to use OVN without DVR, you must include the environments/services/neutron-ovn-ha.yaml file in the openstack overcloud deploy command.
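For example, a deployment that uses OVN with DVR might include the environment file as follows. The custom environment file shown is a placeholder for the other files that your deployment requires:

$ openstack overcloud deploy --templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml \
-e /home/stack/templates/<custom_environment_file>.yaml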
11.3.3. Creating an environment file for undercloud CA trust
If your undercloud uses TLS and the Certificate Authority (CA) is not publicly trusted, you can use the CA for SSL endpoint encryption that the undercloud operates. To ensure that the undercloud endpoints are accessible to the rest of your deployment, configure your overcloud nodes to trust the undercloud CA.
For this approach to work, your overcloud nodes must have a network route to the public endpoint on the undercloud. It is likely that you must apply this configuration for deployments that rely on spine-leaf networking.
There are two types of custom certificates you can use in the undercloud:
- User-provided certificates - This definition applies when you have provided your own certificate. This can be from your own CA, or it can be self-signed. This is passed using the undercloud_service_certificate option. In this case, you must either trust the self-signed certificate, or the CA (depending on your deployment).
- Auto-generated certificates - This definition applies when you use certmonger to generate the certificate using its own local CA. Enable auto-generated certificates with the generate_service_certificate option in the undercloud.conf file. In this case, director generates a CA certificate at /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and configures the undercloud’s HAProxy instance to use a server certificate. Add the CA certificate to the inject-trust-anchor-hiera.yaml file to present the certificate to OpenStack Platform.
This example uses a self-signed certificate located in /home/stack/ca.crt.pem. If you use auto-generated certificates, use /etc/pki/ca-trust/source/anchors/cm-local-ca.pem instead.
Procedure
Open the certificate file and copy only the certificate portion. Do not include the key:
$ vi /home/stack/ca.crt.pem
The certificate portion you need looks similar to this shortened example:
-----BEGIN CERTIFICATE----- MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3 -----END CERTIFICATE-----
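As an optional convenience that is not part of the official procedure, you can print only the certificate portion of the PEM file with openssl instead of copying it by hand:
$ openssl x509 -in /home/stack/ca.crt.pem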
Create a new YAML file called
/home/stack/inject-trust-anchor-hiera.yaml
with the following contents, and include the certificate you copied from the PEM file:parameter_defaults: CAMap: undercloud-ca: content: | -----BEGIN CERTIFICATE----- MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3 -----END CERTIFICATE-----
Note:
- The certificate string must follow the PEM format.
- The CAMap parameter might contain other certificates relevant to SSL/TLS configuration.
- Add the /home/stack/inject-trust-anchor-hiera.yaml file to your deployment command. Director copies the CA certificate to each overcloud node during the overcloud deployment. As a result, each node trusts the encryption presented by the undercloud’s SSL endpoints.
11.3.4. Disabling TSX on new deployments
From Red Hat Enterprise Linux 8.3 onwards, the kernel disables support for the Intel Transactional Synchronization Extensions (TSX) feature by default.
You must explicitly disable TSX on new overclouds unless your workloads or third-party vendors strictly require it.
Set the KernelArgs heat parameter in an environment file:
parameter_defaults: ComputeParameters: KernelArgs: "tsx=off"
Include the environment file when you run your openstack overcloud deploy
command.
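For example, if you save the snippet above as /home/stack/templates/disable-tsx.yaml (a file name chosen here only for illustration), the deployment command might look like the following, and you can confirm the setting on a Compute node after deployment by checking the kernel command line:
(undercloud)$ openstack overcloud deploy --templates \
  -e /home/stack/templates/disable-tsx.yaml \
  -e <other_environment_files>
[root@overcloud-compute-0 ~]# grep -o 'tsx=off' /proc/cmdline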
11.3.5. Validating your overcloud configuration
Before deploying your overcloud, validate your heat templates and environment files.
As a result of a change to the API in 17.0, the following validations are currently unstable:
- switch-vlans
- network-environment
- dhcp-provisioning
A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment.
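If you want to see which validations belong to a group before you run them, you can list them first. This is an optional step; the validation CLI supports filtering by group:
$ validation list --group pre-deployment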
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
Update your overcloud stack with all the environment files your deployment requires:
$ openstack overcloud deploy --templates \ -e environment-file1.yaml \ -e environment-file2.yaml \ ... --stack-only
Validate your overcloud stack:
$ validation run \ --group pre-deployment \ --inventory <inventory_file>
- Replace <inventory_file> with the name and location of the Ansible inventory file, for example, ~/tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml.
Note: When you run a validation, the Reasons column in the output is limited to 79 characters. To view the validation result in full, view the validation log files.
Review the results of the validation report:
$ validation history get [--full] [--validation-log-dir <log_dir>] <uuid>
- Optional: Use the --full option to view detailed output from the validation run.
- Optional: Use the --validation-log-dir option to write the output from the validation run to the validation logs.
- Replace <uuid> with the UUID of the validation run.
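If you did not record the UUID of the run, and your version of the validation CLI includes the history subcommands, you can list previous runs first to find it:
$ validation history list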
11.3.6. Creating your overcloud
The final stage in creating your Red Hat OpenStack Platform (RHOSP) overcloud environment is to run the openstack overcloud deploy
command to create the overcloud. For information about the options available to use with the openstack overcloud deploy
command, see Deployment command options.
Procedure
Collate the environment files and configuration files that you need for your overcloud environment, both the unedited heat template files provided with your director installation, and the custom environment files you created. This should include the following files:
- The overcloud-baremetal-deployed.yaml node definition file.
- The overcloud-networks-deployed.yaml network definition file.
- The overcloud-vip-deployed.yaml network VIP definition file.
- The location of the container images for containerized OpenStack services.
- Any environment files for Red Hat CDN or Satellite registration.
- Any other custom environment files.
- Organize the environment files and configuration files by their order of precedence, listing unedited heat template files first, followed by your environment files that contain custom configuration, such as overrides to the default properties.
Construct your
openstack overcloud deploy
command, specifying the configuration files and templates in the required order, for example:(undercloud) $ openstack overcloud deploy --templates \ [-n /home/stack/templates/network_data.yaml \ ] -e /home/stack/templates/overcloud-baremetal-deployed.yaml \ -e /home/stack/templates/overcloud-networks-deployed.yaml \ -e /home/stack/templates/overcloud-vip-deployed.yaml \ -e /home/stack/containers-prepare-parameter.yaml \ -e /home/stack/inject-trust-anchor-hiera.yaml \ [-r /home/stack/templates/roles_data.yaml ]
- -n /home/stack/templates/network_data.yaml
- Specifies the custom network configuration. Required if you use network isolation or custom composable networks. For information on configuring overcloud networks, see Configuring overcloud networking.
- -e /home/stack/containers-prepare-parameter.yaml
- Adds the container image preparation environment file. You generated this file during the undercloud installation and can use the same file for your overcloud creation.
- -e /home/stack/inject-trust-anchor-hiera.yaml
- Adds an environment file to install a custom certificate in the undercloud.
- -r /home/stack/templates/roles_data.yaml
- The generated roles data, if you use custom roles or want to enable a multi-architecture cloud.
When the overcloud creation completes, director provides a recap of the Ansible plays that were executed to configure the overcloud:
PLAY RECAP ************************************************************* overcloud-compute-0 : ok=160 changed=67 unreachable=0 failed=0 overcloud-controller-0 : ok=210 changed=93 unreachable=0 failed=0 undercloud : ok=10 changed=7 unreachable=0 failed=0 Tuesday 15 October 2018 18:30:57 +1000 (0:00:00.107) 1:06:37.514 ****** ========================================================================
When the overcloud creation completes, director provides details to access your overcloud:
Ansible passed. Overcloud configuration completed. Overcloud Endpoint: http://192.168.24.113:5000 Overcloud Horizon Dashboard URL: http://192.168.24.113:80/dashboard Overcloud rc file: /home/stack/overcloudrc Overcloud Deployed
You can keep your deployment command in a file and add to it every time you update your configuration with a new environment file.
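For example, you might wrap the command in a small shell script so that each configuration change is a one-line edit. The script name and the list of environment files are illustrative:
#!/bin/bash
# deploy-overcloud.sh - re-runs the overcloud deployment with the current set of environment files
source ~/stackrc
openstack overcloud deploy --templates \
  -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
  -e /home/stack/templates/overcloud-networks-deployed.yaml \
  -e /home/stack/templates/overcloud-vip-deployed.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  "$@"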
11.3.7. Deployment command options
The following table lists the additional parameters for the openstack overcloud deploy
command.
Some options are available in this release as a Technology Preview and therefore are not fully supported by Red Hat. They should only be used for testing and should not be used in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
Parameter | Description |
---|---|
|
The directory that contains the heat templates that you want to deploy. If blank, the deployment command uses the default template location at |
| The name of the stack that you want to create or update |
| The deployment timeout duration in minutes |
| The virtualization type that you want to use for hypervisors |
|
The Network Time Protocol (NTP) server that you want to use to synchronize time. You can also specify multiple NTP servers in a comma-separated list, for example: |
|
Defines custom values for the environment variable |
|
Defines the SSH user to access the overcloud nodes. Normally SSH access occurs through the |
| Defines the key path for SSH access to overcloud nodes. |
| Defines the network name that you want to use for SSH access to overcloud nodes. |
|
Extra environment files that you want to pass to the overcloud deployment. You can specify this option more than once. Note that the order of environment files that you pass to the |
| A directory that contains environment files that you want to include in deployment. The deployment command processes these environment files in numerical order, then alphabetical order. |
|
Defines the roles file and overrides the default |
|
Defines the networks file and overrides the default network_data.yaml in the |
|
Defines the plan Environment file and overrides the default |
| Use this option if you do not want to delete temporary files after deployment, and log their location. |
| Use this option if you want to update the plan without performing the actual deployment. |
| The overcloud creation process performs a set of pre-deployment checks. This option exits if any non-fatal errors occur from the pre-deployment checks. It is advisable to use this option as any errors can cause your deployment to fail. |
| The overcloud creation process performs a set of pre-deployment checks. This option exits if any non-critical warnings occur from the pre-deployment checks. openstack-tripleo-validations |
| Use this option if you want to perform a validation check on the overcloud without creating the overcloud. |
|
Use this option to run external validations from the |
| Use this option to skip the overcloud post-deployment configuration. |
| Use this option to force the overcloud post-deployment configuration. |
|
Use this option if you do not want the deployment command to generate a unique identifier for the |
| The path to a YAML file with arguments and parameters. |
| Use this option if you want to disable password generation for the overcloud services. |
|
Use this option if you want to deploy pre-provisioned overcloud nodes. Used in conjunction with |
|
Use this option if you want to disable the |
|
Use this option if you want to disable the overcloud stack creation and only run the |
|
The directory that you want to use for saved |
|
The path to an Ansible configuration file. The configuration in the file overrides any configuration that |
|
The timeout duration in minutes that you want to use for |
|
(Technology Preview) Use this option with a comma-separated list of nodes to limit the config-download playbook execution to a specific node or set of nodes. For example, the |
| (Technology Preview) Use this option with a comma-separated list of tags from the config-download playbook to run the deployment with a specific set of config-download tasks. |
| (Technology Preview) Use this option with a comma-separated list of tags that you want to skip from the config-download playbook. |
Run the following command to view a full list of options:
(undercloud) $ openstack help overcloud deploy
Some command line parameters are outdated or deprecated in favor of using heat template parameters, which you include in the parameter_defaults
section in an environment file. The following table maps deprecated parameters to their heat template equivalents.
Parameter | Description | Heat template parameter |
---|---|---|
| The number of Controller nodes to scale out |
|
| The number of Compute nodes to scale out |
|
| The number of Ceph Storage nodes to scale out |
|
| The number of Block Storage (cinder) nodes to scale out |
|
| The number of Object Storage (swift) nodes to scale out |
|
| The flavor that you want to use for Controller nodes |
|
| The flavor that you want to use for Compute nodes |
|
| The flavor that you want to use for Ceph Storage nodes |
|
| The flavor that you want to use for Block Storage (cinder) nodes |
|
| The flavor that you want to use for Object Storage (swift) nodes |
|
| The overcloud creation process performs a set of pre-deployment checks. This option exits if any fatal errors occur from the pre-deployment checks. It is advisable to use this option because any errors can cause your deployment to fail. | No parameter mapping |
|
Disable the pre-deployment validations entirely. These validations were built-in pre-deployment validations, which have been replaced with external validations from the | No parameter mapping |
|
Run deployment using the | No parameter mapping |
| Use this option to register overcloud nodes to the Customer Portal or Satellite 6. |
|
|
Use this option to define the registration method that you want to use for the overcloud nodes. |
|
| The organization that you want to use for registration. |
|
| Use this option to register the system even if it is already registered. |
|
|
The base URL of the Satellite server to register overcloud nodes. Use the Satellite HTTP URL and not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not https://satellite.example.com. The overcloud creation process uses this URL to determine whether the server is a Red Hat Satellite 5 or Red Hat Satellite 6 server. If the server is a Red Hat Satellite 6 server, the overcloud obtains the |
|
| Use this option to define the activation key that you want to use for registration. |
|
These parameters are scheduled for removal in a future version of Red Hat OpenStack Platform.
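For example, instead of passing node counts on the command line, you can set the equivalent heat parameters in an environment file that you include with -e. The ControllerCount and ComputeCount parameters shown here apply to the default roles; confirm the parameter names for any custom roles that you use:
parameter_defaults:
  ControllerCount: 3
  ComputeCount: 2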
11.3.8. Validating your overcloud deployment
Validate your deployed overcloud.
Prerequisites
- You have deployed your overcloud.
Procedure
Source the
stackrc
credentials file:$ source ~/stackrc
Validate your overcloud deployment:
$ validation run \ --group post-deployment \ [--inventory <inventory_file>]
Replace <inventory_file> with the name of your Ansible inventory file. By default, the dynamic inventory is called tripleo-ansible-inventory.
Note: When you run a validation, the Reasons column in the output is limited to 79 characters. To view the validation result in full, view the validation log files.
Review the results of the validation report:
$ validation show run [--full] <UUID>
- Replace <UUID> with the UUID of the validation run.
- Optional: Use the --full option to view detailed output from the validation run.
A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment.
11.3.9. Accessing the overcloud
Director generates a credential file containing the credentials necessary to operate the overcloud from the undercloud. Director saves this file, overcloudrc
, in the home directory of the stack
user.
Procedure
Source the
overcloudrc
file:(undercloud)$ source ~/overcloudrc
The command prompt changes to indicate that you are accessing the overcloud:
(overcloud)$
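As an optional check that is not part of the official procedure, you can confirm that the credentials work by listing the overcloud services:
(overcloud)$ openstack service list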
To return to interacting with the undercloud, source the
stackrc
file:(overcloud)$ source ~/stackrc (undercloud)$
The command prompt changes to indicate that you are accessing the undercloud:
(undercloud)$
11.4. Configuring a basic overcloud with pre-provisioned nodes
This section contains basic configuration procedures that you can use to configure a Red Hat OpenStack Platform (RHOSP) environment with pre-provisioned nodes. This scenario differs from the standard overcloud creation scenarios in several ways:
- You can provision nodes with an external tool and let the director control the overcloud configuration only.
- You can use nodes without relying on the director provisioning methods. This is useful if you want to create an overcloud without power management control, or use networks with DHCP/PXE boot restrictions.
- The director does not use OpenStack Compute (nova), OpenStack Bare Metal (ironic), or OpenStack Image (glance) to manage nodes.
- Pre-provisioned nodes can use a custom partitioning layout that does not rely on the QCOW2 overcloud-full image.
This scenario includes only basic configuration with no custom features.
You cannot combine pre-provisioned nodes with director-provisioned nodes.
11.4.1. Pre-provisioned node requirements
Before you begin deploying an overcloud with pre-provisioned nodes, ensure that the following configuration is present in your environment:
- The director node that you created in Chapter 7, Installing director on the undercloud.
- A set of bare metal machines for your nodes. The number of nodes required depends on the type of overcloud you intend to create. These machines must comply with the requirements set for each node type. These nodes require Red Hat Enterprise Linux 9.0 installed as the host operating system. Red Hat recommends using the latest version available.
- One network connection for managing the pre-provisioned nodes. This scenario requires uninterrupted SSH access to the nodes for orchestration agent configuration.
One network connection for the Control Plane network. There are two main scenarios for this network:
Using the Provisioning Network as the Control Plane, which is the default scenario. This network is usually a layer-3 (L3) routable network connection from the pre-provisioned nodes to director. The examples for this scenario use the following IP address assignments:
Table 11.10. Provisioning Network IP assignments
Node name | IP address |
---|---|
Director | 192.168.24.1 |
Controller 0 | 192.168.24.2 |
Compute 0 | 192.168.24.3 |
- Using a separate network. In situations where the director’s Provisioning network is a private non-routable network, you can define IP addresses for nodes from any subnet and communicate with director over the Public API endpoint. For more information about the requirements for this scenario, see Section 11.4.6, “Using a separate network for pre-provisioned nodes”.
- All other network types in this example also use the Control Plane network for OpenStack services. However, you can create additional networks for other network traffic types.
-
If any nodes use Pacemaker resources, the service user
hacluster
and the service grouphaclient
must have a UID/GID of 189. This is due to CVE-2018-16877. If you installed Pacemaker together with the operating system, the installation creates these IDs automatically. If the ID values are set incorrectly, follow the steps in the article OpenStack minor update / fast-forward upgrade can fail on the controller nodes at pacemaker step with "Could not evaluate: backup_cib" to change the ID values. -
To prevent some services from binding to an incorrect IP address and causing deployment failures, make sure that the
/etc/hosts
file does not include thenode-name=127.0.0.1
mapping.
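You can quickly check the last two requirements on an existing node before you start. These commands are illustrative checks, not part of the official procedure:
[root@controller-0 ~]# getent passwd hacluster   # expect UID/GID 189 if Pacemaker is installed
[root@controller-0 ~]# getent group haclient
[root@controller-0 ~]# grep $(hostname -s) /etc/hosts   # the node name must not map to 127.0.0.1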
11.4.2. Creating a user on pre-provisioned nodes
When you configure an overcloud with pre-provisioned nodes, director requires SSH access to the overcloud nodes. On the pre-provisioned nodes, you must create a user with SSH key authentication and configure passwordless sudo access for that user. After you create a user on pre-provisioned nodes, you can use the --overcloud-ssh-user
and --overcloud-ssh-key
options with the openstack overcloud deploy
command to create an overcloud with pre-provisioned nodes.
By default, the values for the overcloud SSH user and overcloud SSH key are the stack
user and ~/.ssh/id_rsa
. To create the stack
user, complete the following steps.
Procedure
On each overcloud node, create the
stack
user and set a password. For example, run the following commands on the Controller node:[root@controller-0 ~]# useradd stack [root@controller-0 ~]# passwd stack # specify a password
Disable password requirements for this user when using
sudo
:[root@controller-0 ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack [root@controller-0 ~]# chmod 0440 /etc/sudoers.d/stack
After you create and configure the
stack
user on all pre-provisioned nodes, copy thestack
user’s public SSH key from the director node to each overcloud node. For example, to copy the director’s public SSH key to the Controller node, run the following command:[stack@director ~]$ ssh-copy-id stack@192.168.24.2
To copy your SSH keys, you might have to temporarily set PasswordAuthentication Yes
in the SSH configuration of each overcloud node. After you copy the SSH keys, set PasswordAuthentication No
and use the SSH keys to authenticate in the future.
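As an optional sanity check, you can verify both the key-based login and the passwordless sudo configuration from the director node before you run the deployment. The IP address is the Controller example from this procedure:
[stack@director ~]$ ssh stack@192.168.24.2 'sudo -n true && echo "passwordless sudo OK"'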
11.4.3. Registering the operating system for pre-provisioned nodes
Each node requires access to a Red Hat subscription. Complete the following steps on each node to register your nodes with the Red Hat Content Delivery Network. Execute the commands as the root
user or as a user with sudo
privileges.
Enable only the repositories listed. Additional repositories can cause package and software conflicts. Do not enable any additional repositories.
Procedure
Run the registration command and enter your Customer Portal user name and password when prompted:
[root@controller-0 ~]# sudo subscription-manager register
Find the entitlement pool for Red Hat OpenStack Platform 17.0:
[root@controller-0 ~]# sudo subscription-manager list --available --all --matches="Red Hat OpenStack"
Use the pool ID located in the previous step to attach the Red Hat OpenStack Platform 17.0 entitlements:
[root@controller-0 ~]# sudo subscription-manager attach --pool=pool_id
Disable all default repositories:
[root@controller-0 ~]# sudo subscription-manager repos --disable=*
Enable the required Red Hat Enterprise Linux repositories:
[root@controller-0 ~]# sudo subscription-manager repos \ --enable=rhel-9-for-x86_64-baseos-eus-rpms \ --enable=rhel-9-for-x86_64-appstream-eus-rpms \ --enable=rhel-9-for-x86_64-highavailability-eus-rpms \ --enable=openstack-beta-for-rhel-9-x86_64-rpms \ --enable=fast-datapath-for-rhel-9-x86_64-rpms
If the overcloud uses Ceph Storage nodes, enable the relevant Ceph Storage repositories:
[root@cephstorage-0 ~]# sudo subscription-manager repos \ --enable=rhel-9-for-x86_64-baseos-rpms \ --enable=rhel-9-for-x86_64-appstream-rpms \ --enable=openstack-beta-deployment-tools-for-rhel-9-x86_64-rpms
Lock the RHEL version on all overcloud nodes except Red Hat Ceph Storage nodes:
[root@controller-0 ~]# sudo subscription-manager release --set=9.0
Update your system to ensure you have the latest base system packages:
[root@controller-0 ~]# sudo dnf update -y [root@controller-0 ~]# sudo reboot
The node is now ready to use for your overcloud.
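As an optional check that is not part of the official procedure, you can confirm the release lock and the enabled repositories before you continue:
[root@controller-0 ~]# sudo subscription-manager release --show
[root@controller-0 ~]# sudo dnf repolist enabled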
11.4.4. Configuring SSL/TLS access to director
If the director uses SSL/TLS, the pre-provisioned nodes require the certificate authority file used to sign the director’s SSL/TLS certificates. If you use your own certificate authority, perform the following actions on each overcloud node.
Procedure
-
Copy the certificate authority file to the
/etc/pki/ca-trust/source/anchors/
directory on each pre-provisioned node. Run the following command on each overcloud node:
[root@controller-0 ~]# sudo update-ca-trust extract
These steps ensure that the overcloud nodes can access the director’s Public API over SSL/TLS.
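As an optional check, you can confirm that the CA now appears in the system trust store. The subject string to search for depends on your own CA and is therefore a placeholder:
[root@controller-0 ~]# trust list | grep -i "<your_ca_subject>"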
11.4.5. Configuring networking for the control plane
The pre-provisioned overcloud nodes obtain metadata from director using standard HTTP requests. This means all overcloud nodes require L3 access to either:
-
The director Control Plane network, which is the subnet that you define with the
network_cidr
parameter in yourundercloud.conf
file. The overcloud nodes require either direct access to this subnet or routable access to the subnet. -
The director Public API endpoint, which you specify with the
undercloud_public_host
parameter in yourundercloud.conf
file. This option is available if you do not have an L3 route to the Control Plane or if you want to use SSL/TLS communication. For more information about configuring your overcloud nodes to use the Public API endpoint, see Section 11.4.6, “Using a separate network for pre-provisioned nodes”.
Director uses the Control Plane network to manage and configure a standard overcloud. For an overcloud with pre-provisioned nodes, your network configuration might require some modification to accommodate communication between the director and the pre-provisioned nodes.
Using network isolation
You can use network isolation to group services to use specific networks, including the Control Plane. You can also define specific IP addresses for nodes on the Control Plane. For more information about isolating networks and creating predictable node placement strategies, see Network isolation.
If you use network isolation, ensure that your NIC templates do not include the NIC used for undercloud access. These templates can reconfigure the NIC, which introduces connectivity and configuration problems during deployment.
Assigning IP addresses
If you do not use network isolation, you can use a single Control Plane network to manage all services. This requires manual configuration of the Control Plane NIC on each node to use an IP address within the Control Plane network range. If you are using the director Provisioning network as the Control Plane, ensure that the overcloud IP addresses that you choose are outside of the DHCP ranges for both provisioning (dhcp_start
and dhcp_end
) and introspection (inspection_iprange
).
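For example, with the following undercloud.conf ranges (the values are illustrative), a manually assigned Control Plane address such as 192.168.24.201 is safe because it falls outside both ranges:
[ctlplane-subnet]
dhcp_start = 192.168.24.5
dhcp_end = 192.168.24.30
inspection_iprange = 192.168.24.100,192.168.24.120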
During standard overcloud creation, director creates OpenStack Networking (neutron) ports and automatically assigns IP addresses to the overcloud nodes on the Provisioning / Control Plane network. However, this can cause director to assign different IP addresses from the ones that you configure manually for each node. In this situation, use a predictable IP address strategy to force director to use the pre-provisioned IP assignments on the Control Plane.
If you are using network isolation, create a custom environment file, deployed-ports.yaml
, to implement a predictable IP strategy. The following example custom environment file, deployed-ports.yaml
, passes a set of resource registry mappings and parameters to director, and defines the IP assignments of the pre-provisioned nodes. The NodePortMap
, ControlPlaneVipData
, and VipPortMap
parameters define the IP addresses and subnet CIDRs that correspond to each overcloud node.
resource_registry: # Deployed Virtual IP port resources OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_vip_external.yaml OS::TripleO::Network::Ports::InternalApiVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_vip_internal_api.yaml OS::TripleO::Network::Ports::StorageVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_vip_storage.yaml OS::TripleO::Network::Ports::StorageMgmtVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_vip_storage_mgmt.yaml # Deployed ControlPlane port resource OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml # Controller role port resources OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_external.yaml OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_internal_api.yaml OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage_mgmt.yaml OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage.yaml OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_tenant.yaml # Compute role port resources OS::TripleO::Compute::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_internal_api.yaml OS::TripleO::Compute::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage.yaml OS::TripleO::Compute::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_tenant.yaml # CephStorage role port resources OS::TripleO::CephStorage::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage_mgmt.yaml OS::TripleO::CephStorage::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage.yaml parameter_defaults: NodePortMap: 1 # Controller node parameters controller-00-rack01: 2 ctlplane: 3 ip_address: 192.168.24.201 ip_address_uri: 192.168.24.201 ip_subnet: 192.168.24.0/24 external: ip_address: 10.0.0.201 ip_address_uri: 10.0.0.201 ip_subnet: 10.0.0.10/24 internal_api: ip_address: 172.16.2.201 ip_address_uri: 172.16.2.201 ip_subnet: 172.16.2.10/24 management: ip_address: 192.168.1.201 ip_address_uri: 192.168.1.201 ip_subnet: 192.168.1.10/24 storage: ip_address: 172.16.1.201 ip_address_uri: 172.16.1.201 ip_subnet: 172.16.1.10/24 storage_mgmt: ip_address: 172.16.3.201 ip_address_uri: 172.16.3.201 ip_subnet: 172.16.3.10/24 tenant: ip_address: 172.16.0.201 ip_address_uri: 172.16.0.201 ip_subnet: 172.16.0.10/24 ... # Compute node parameters compute-00-rack01: ctlplane: ip_address: 192.168.24.11 ip_address_uri: 192.168.24.11 ip_subnet: 192.168.24.0/24 internal_api: ip_address: 172.16.2.11 ip_address_uri: 172.16.2.11 ip_subnet: 172.16.2.10/24 storage: ip_address: 172.16.1.11 ip_address_uri: 172.16.1.11 ip_subnet: 172.16.1.10/24 tenant: ip_address: 172.16.0.11 ip_address_uri: 172.16.0.11 ip_subnet: 172.16.0.10/24 ... 
# Ceph node parameters ceph-00-rack01: ctlplane: ip_address: 192.168.24.101 ip_address_uri: 192.168.24.101 ip_subnet: 192.168.24.0/24 storage: ip_address: 172.16.1.101 ip_address_uri: 172.16.1.101 ip_subnet: 172.16.1.10/24 storage_mgmt: ip_address: 172.16.3.101 ip_address_uri: 172.16.3.101 ip_subnet: 172.16.3.10/24 ... # Virtual IP address parameters ControlPlaneVipData: fixed_ips: - ip_address: 192.168.24.5 name: control_virtual_ip network: tags: [192.168.24.0/24] subnets: - ip_version: 4 VipPortMap external: ip_address: 10.0.0.100 ip_address_uri: 10.0.0.100 ip_subnet: 10.0.0.100/24 internal_api: ip_address: 172.16.2.100 ip_address_uri: 172.16.2.100 ip_subnet: 172.16.2.100/24 storage: ip_address: 172.16.1.100 ip_address_uri: 172.16.1.100 ip_subnet: 172.16.1.100/24 storage_mgmt: ip_address: 172.16.3.100 ip_address_uri: 172.16.3.100 ip_subnet: 172.16.3.100/24 RedisVirtualFixedIPs: - ip_address: 192.168.24.6 use_neutron: false
- 1
- The NodePortMap mappings define the names of the node assignments.
- 2
- The short host name for the node, which follows the format <node_hostname>.
- 3
- The network definitions and IP assignments for the node. Networks include ctlplane, external, internal_api, management, storage, storage_mgmt, and tenant. The IP assignments include the ip_address, the ip_address_uri, and the ip_subnet:
  - IPv4: Set ip_address and ip_address_uri to the same value.
  - IPv6:
    - ip_address: Set to the IPv6 address without brackets.
    - ip_address_uri: Set to the IPv6 address in square brackets, for example, [2001:0db8:85a3:0000:0000:8a2e:0370:7334].
11.4.6. Using a separate network for pre-provisioned nodes
By default, director uses the Provisioning network as the overcloud Control Plane. However, if this network is isolated and non-routable, nodes cannot communicate with the director Internal API during configuration. In this situation, you might need to define a separate network for the nodes and configure them to communicate with the director over the Public API.
There are several requirements for this scenario:
- The overcloud nodes must accommodate the basic network configuration from Section 11.4.5, “Configuring networking for the control plane”.
- You must enable SSL/TLS on the director for Public API endpoint usage. For more information, see Enabling SSL/TLS on overcloud public endpoints.
-
You must define an accessible fully qualified domain name (FQDN) for director. This FQDN must resolve to a routable IP address for the director. Use the
undercloud_public_host
parameter in theundercloud.conf
file to set this FQDN.
The examples in this section use IP address assignments that differ from the main scenario:
Node Name | IP address or FQDN |
---|---|
Director (Internal API) | 192.168.24.1 (Provisioning Network and Control Plane) |
Director (Public API) | 10.1.1.1 / director.example.com |
Overcloud Virtual IP | 192.168.100.1 |
Controller 0 | 192.168.100.2 |
Compute 0 | 192.168.100.3 |
The following sections provide additional configuration for situations that require a separate network for overcloud nodes.
IP address assignments
The method for IP assignments is similar to Section 11.4.5, “Configuring networking for the control plane”. However, since the Control Plane may not be routable from the deployed servers, you can use the NodePortMap
, ControlPlaneVipData
, and VipPortMap
parameters to assign IP addresses from your chosen overcloud node subnet, including the virtual IP address to access the Control Plane. The following example is a modified version of the deployed-ports.yaml
custom environment file from Section 11.4.5, “Configuring networking for the control plane” that accommodates this network architecture:
parameter_defaults:
NodePortMap:
controller-00-rack01:
ctlplane:
ip_address: 192.168.100.2
ip_address_uri: 192.168.100.2
ip_subnet: 192.168.100.0/24
...
compute-00-rack01:
ctlplane:
ip_address: 192.168.100.3
ip_address_uri: 192.168.100.3
ip_subnet: 192.168.100.0/24
...
ControlPlaneVipData:
fixed_ips:
- ip_address: 192.168.100.1
name: control_virtual_ip
network:
tags: [192.168.100.0/24]
subnets:
- ip_version: 4
VipPortMap:
external:
ip_address: 10.0.0.100
ip_address_uri: 10.0.0.100
ip_subnet: 10.0.0.100/24
....
RedisVirtualFixedIPs:1
- ip_address: 192.168.100.10
use_neutron: false
- 1
- The RedisVipPort resource is mapped to network/ports/noop.yaml. This mapping is necessary because the default Redis VIP address comes from the Control Plane. In this situation, use a noop to disable this Control Plane mapping.
11.4.7. Mapping pre-provisioned node hostnames
When you configure pre-provisioned nodes, you must map heat-based hostnames to their actual hostnames so that ansible-playbook
can reach a resolvable host. Use the HostnameMap
to map these values.
Procedure
Create an environment file, for example
hostname-map.yaml
, and include theHostnameMap
parameter and the hostname mappings. Use the following syntax:parameter_defaults: HostnameMap: [HEAT HOSTNAME]: [ACTUAL HOSTNAME] [HEAT HOSTNAME]: [ACTUAL HOSTNAME]
The
[HEAT HOSTNAME]
usually conforms to the following convention:[STACK NAME]-[ROLE]-[INDEX]
:parameter_defaults: HostnameMap: overcloud-controller-0: controller-00-rack01 overcloud-controller-1: controller-01-rack02 overcloud-controller-2: controller-02-rack03 overcloud-novacompute-0: compute-00-rack01 overcloud-novacompute-1: compute-01-rack01 overcloud-novacompute-2: compute-02-rack01
- Save the hostname-map.yaml file.
11.4.8. Configuring Ceph Storage for pre-provisioned nodes
Complete the following steps on the undercloud host to configure Ceph for nodes that are already deployed.
Procedure
On the undercloud host, create an environment variable,
OVERCLOUD_HOSTS
, and set the variable to a space-separated list of IP addresses of the overcloud hosts that you want to use as Ceph clients:export OVERCLOUD_HOSTS="192.168.1.8 192.168.1.42"
The default overcloud plan name is
overcloud
. If you use a different name, create an environment variableOVERCLOUD_PLAN
to store your custom name:export OVERCLOUD_PLAN="<custom-stack-name>"
- Replace <custom-stack-name> with the name of your stack.
Run the
enable-ssh-admin.sh
script to configure a user on the overcloud nodes that Ansible can use to configure Ceph clients:bash /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/enable-ssh-admin.sh
When you run the openstack overcloud deploy
command, Ansible configures the hosts that you define in the OVERCLOUD_HOSTS
variable as Ceph clients.
11.4.9. Creating the overcloud with pre-provisioned nodes
The overcloud deployment uses the standard CLI methods. For pre-provisioned nodes, the deployment command requires some additional options and environment files from the core heat template collection:
- --disable-validations - Use this option to disable basic CLI validations for services not used with pre-provisioned infrastructure. If you do not disable these validations, the deployment fails.
- environments/deployed-server-environment.yaml - Include this environment file to create and configure the pre-provisioned infrastructure. This environment file replaces the OS::Nova::Server resources with OS::Heat::DeployedServer resources.
The following command is an example overcloud deployment command with the environment files specific to the pre-provisioned architecture:
$ source ~/stackrc (undercloud)$ openstack overcloud deploy \ --disable-validations \ -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \ -e /home/stack/templates/deployed-ports.yaml \ -e /home/stack/templates/hostname-map.yaml \ --overcloud-ssh-user stack \ --overcloud-ssh-key ~/.ssh/id_rsa \ <OTHER OPTIONS>
The --overcloud-ssh-user
and --overcloud-ssh-key
options are used to SSH into each overcloud node during the configuration stage, create an initial tripleo-admin
user, and inject an SSH key into /home/tripleo-admin/.ssh/authorized_keys
. To inject the SSH key, specify the credentials for the initial SSH connection with --overcloud-ssh-user
and --overcloud-ssh-key
(defaults to ~/.ssh/id_rsa
). To limit exposure to the private key that you specify with the --overcloud-ssh-key
option, director never passes this key to any API service, such as heat, and only the director openstack overcloud deploy
command uses this key to enable access for the tripleo-admin
user.
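If you prefer not to reuse your default key, you can generate a dedicated key pair and distribute it to the pre-provisioned nodes before you deploy. The key path and node addresses here are illustrative:
[stack@director ~]$ ssh-keygen -t ed25519 -f ~/.ssh/overcloud_key -N ''
[stack@director ~]$ for node in 192.168.24.2 192.168.24.3; do ssh-copy-id -i ~/.ssh/overcloud_key.pub stack@$node; done
Then pass --overcloud-ssh-key ~/.ssh/overcloud_key to the openstack overcloud deploy command.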
11.4.10. Accessing the overcloud
Director generates a credential file containing the credentials necessary to operate the overcloud from the undercloud. Director saves this file, overcloudrc
, in the home directory of the stack
user.
Procedure
Source the
overcloudrc
file:(undercloud)$ source ~/overcloudrc
The command prompt changes to indicate that you are accessing the overcloud:
(overcloud)$
To return to interacting with the undercloud, source the
stackrc
file:(overcloud)$ source ~/stackrc (undercloud)$
The command prompt changes to indicate that you are accessing the undercloud:
(undercloud)$
11.4.11. Scaling pre-provisioned nodes
The process for scaling pre-provisioned nodes is similar to the standard scaling procedures in Chapter 19, Scaling overcloud nodes. However, the process to add new pre-provisioned nodes differs because pre-provisioned nodes do not use the standard registration and management process from OpenStack Bare Metal (ironic) and OpenStack Compute (nova).
Scaling up pre-provisioned nodes
When scaling up the overcloud with pre-provisioned nodes, you must configure the orchestration agent on each node to correspond to the director node count.
Perform the following actions to scale up overcloud nodes:
- Prepare the new pre-provisioned nodes according to Section 11.4.1, “Pre-provisioned node requirements”.
- Scale up the nodes. For more information, see Chapter 19, Scaling overcloud nodes.
- After you execute the deployment command, wait until the director creates the new node resources and launches the configuration.
Scaling down pre-provisioned nodes
When scaling down the overcloud with pre-provisioned nodes, follow the scale down instructions in Chapter 19, Scaling overcloud nodes.
In scale-down operations, you can use hostnames for both OSP-provisioned and pre-provisioned nodes. You can also use the UUID for OSP-provisioned nodes. However, there is no UUID for pre-provisioned nodes, so you always use hostnames. Pass the hostname or UUID value to the openstack overcloud node delete command.
Procedure
Identify the name of the node that you want to remove.
$ openstack stack resource list overcloud -n5 --filter type=OS::TripleO::ComputeDeployedServerServer
Pass the corresponding node name from the stack_name column to the openstack overcloud node delete command:
$ openstack overcloud node delete --stack <overcloud> <node>
- Replace <overcloud> with the name or UUID of the overcloud stack.
- Replace <node> with the name of the node that you want to remove. You can include multiple node names in the openstack overcloud node delete command.
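For example, to remove two pre-provisioned Compute nodes in one operation, where the node names are illustrative:
$ openstack overcloud node delete --stack overcloud compute-00-rack01 compute-01-rack01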
Ensure that the
openstack overcloud node delete
command runs to completion:$ openstack stack list
The status of the
overcloud
stack showsUPDATE_COMPLETE
when the delete operation is complete.
After you remove overcloud nodes from the stack, power off these nodes. In a standard deployment, the bare metal services on the director control this function. However, with pre-provisioned nodes, you must either manually shut down these nodes or use the power management control for each physical system. If you do not power off the nodes after removing them from the stack, they might remain operational and reconnect as part of the overcloud environment.
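If your hardware exposes a standard IPMI interface, you can power off a removed node from any host with access to its BMC. The address and credentials are placeholders:
$ ipmitool -I lanplus -H <bmc_address> -U <bmc_user> -P <bmc_password> chassis power off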
After you power off the removed nodes, reprovision them to a base operating system configuration so that they do not unintentionally join the overcloud in the future.
Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The scale down process only removes the node from the overcloud stack and does not uninstall any packages.
Removing a pre-provisioned overcloud
To remove an entire overcloud that uses pre-provisioned nodes, see Section 15.7, “Removing an overcloud stack” for the standard overcloud removal procedure. After you remove the overcloud, power off all nodes and reprovision them to a base operating system configuration.
Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The removal process only deletes the overcloud stack and does not uninstall any packages.