Chapter 6. Configuring a Basic Overcloud with the CLI Tools
This chapter provides the basic configuration steps for an OpenStack Platform environment using the CLI tools. An overcloud with a basic configuration contains no custom features. However, you can add advanced configuration options to this basic overcloud and customize it to your specifications using the instructions in the Advanced Overcloud Customization guide.
For the examples in this chapter, all nodes are bare metal systems using IPMI for power management. For more supported power management types and their options, see Appendix B, Power Management Drivers.
Workflow
- Create a node definition template and register blank nodes in the director.
- Inspect hardware of all nodes.
- Tag nodes into roles.
- Define additional node properties.
Requirements
- The director node created in Chapter 4, Installing the undercloud
- A set of bare metal machines for your nodes. The number of nodes required depends on the type of overcloud you intend to create (see Section 3.1, “Planning Node Deployment Roles” for information on overcloud roles). These machines must also comply with the requirements set for each node type. For these requirements, see Section 2.4, “Overcloud Requirements”. These nodes do not require an operating system. The director copies a Red Hat Enterprise Linux 7 image to each node.
- One network connection for the Provisioning network, which is configured as a native VLAN. All nodes must connect to this network and comply with the requirements set in Section 2.3, “Networking Requirements”. The examples in this chapter use 192.168.24.0/24 as the Provisioning subnet with the following IP address assignments:
Table 6.1. Provisioning Network IP Assignments

Node Name | IP Address | MAC Address | IPMI IP Address |
---|---|---|---|
Director | 192.168.24.1 | aa:aa:aa:aa:aa:aa | None required |
Controller | DHCP defined | bb:bb:bb:bb:bb:bb | 192.168.24.205 |
Compute | DHCP defined | cc:cc:cc:cc:cc:cc | 192.168.24.206 |
- All other network types use the Provisioning network for OpenStack services. However, you can create additional networks for other network traffic types.
- A source for container images. See Chapter 5, Configuring a container image source for instructions on how to generate an environment file containing your container image source.
6.1. Registering Nodes for the Overcloud
The director requires a node definition template, which you create manually. This file (instackenv.json) uses the JSON format and contains the hardware and power management details for your nodes. For example, a template that registers two nodes might look like this:
{ "nodes":[ { "mac":[ "bb:bb:bb:bb:bb:bb" ], "name":"node01", "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"ipmi", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.168.24.205" }, { "mac":[ "cc:cc:cc:cc:cc:cc" ], "name":"node02", "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"ipmi", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.168.24.206" } ] }
This template uses the following attributes:
- name
- The logical name for the node.
- pm_type
- The power management driver to use. This example uses the IPMI driver (ipmi), which is the preferred supported driver for power management. For more supported power management types and their options, see Appendix B, Power Management Drivers. If those drivers do not work as expected, use IPMI for your power management.
- pm_user; pm_password
- The IPMI username and password. These attributes are optional for IPMI and Redfish, and are mandatory for iLO and iDRAC.
- pm_addr
- The IP address of the IPMI device.
- pm_port
- (Optional) The port to access the specific IPMI device.
- mac
- (Optional) A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.
- cpu
- (Optional) The number of CPUs on the node.
- memory
- (Optional) The amount of memory in MB.
- disk
- (Optional) The size of the hard disk in GB.
- arch
- (Optional) The system architecture.
When building a multi-architecture cloud, the arch key is mandatory to distinguish nodes that use the x86_64 and ppc64le architectures.
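For reference, the following sketch shows a node entry that also sets the optional pm_port attribute, for example when the BMC listens on a non-default IPMI port. All values here are illustrative:

{
    "mac": [ "dd:dd:dd:dd:dd:dd" ],
    "name": "node03",
    "arch": "x86_64",
    "pm_type": "ipmi",
    "pm_user": "admin",
    "pm_password": "p@55w0rd!",
    "pm_addr": "192.168.24.207",
    "pm_port": "6230"
}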
After creating the template, run the following commands to verify the formatting and syntax:
$ source ~/stackrc
(undercloud) $ openstack overcloud node import --validate-only ~/instackenv.json
Save the file to the stack user’s home directory (/home/stack/instackenv.json), then run the following command to import the template to the director:
(undercloud) $ openstack overcloud node import ~/instackenv.json
This imports the template and registers each node from the template into the director.
After the node registration and configuration completes, view a list of these nodes in the CLI:
(undercloud) $ openstack baremetal node list
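To review the registration details of an individual node, such as its power management settings, you can also run openstack baremetal node show with the UUID of the node (the UUID here is illustrative):

(undercloud) $ openstack baremetal node show 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13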
6.2. Inspecting the Hardware of Nodes
The director can run an introspection process on each node. This process causes each node to boot an introspection agent over PXE. This agent collects hardware data from the node and sends it back to the director. The director then stores this introspection data in the OpenStack Object Storage (swift) service running on the director. The director uses hardware information for various purposes such as profile tagging, benchmarking, and manual root disk assignment.
You can also create policy files to automatically tag nodes into profiles immediately after introspection. For more information on creating policy files and including them in the introspection process, see Appendix E, Automatic Profile Tagging. Alternatively, you can manually tag nodes into profiles as per the instructions in Section 6.5, “Tagging Nodes into Profiles”.
Run the following command to inspect the hardware attributes of each node:
(undercloud) $ openstack overcloud node introspect --all-manageable --provide
- The --all-manageable option introspects only nodes that are in a manageable state. In this example, that is all of the nodes.
- The --provide option resets all nodes to an available state after introspection.
Monitor the progress of the introspection using the following command in a separate terminal window:
(undercloud) $ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -u openstack-ironic-conductor -f
Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes.
After the introspection completes, all nodes change to an available state.
To view introspection information about the node, run the following command:
(undercloud) $ openstack baremetal introspection data save <UUID> | jq .
Replace <UUID> with the UUID of the node that you want to retrieve introspection information for.
Performing Individual Node Introspection
To perform a single introspection on an available node, set the node to a manageable state and perform the introspection:
(undercloud) $ openstack baremetal node manage [NODE UUID]
(undercloud) $ openstack overcloud node introspect [NODE UUID] --provide
After the introspection completes, the node changes to an available state.
Performing Node Introspection after Initial Introspection
After an initial introspection, all nodes should enter an available state due to the --provide option. To perform introspection on all nodes after the initial introspection, set all nodes to a manageable state and run the bulk introspection command:
(undercloud) $ for node in $(openstack baremetal node list --fields uuid -f value) ; do openstack baremetal node manage $node ; done
(undercloud) $ openstack overcloud node introspect --all-manageable --provide
After the introspection completes, all nodes change to an available state.
Performing Network Introspection for Interface Information
Network introspection retrieves Link Layer Discovery Protocol (LLDP) data from network switches. The following commands show a subset of LLDP information for all interfaces on a node, or full information for a particular node and interface. This can be useful for troubleshooting. The director enables LLDP data collection by default.
To get a list of interfaces on a node:
(undercloud) $ openstack baremetal introspection interface list [NODE UUID]
For example:
(undercloud) $ openstack baremetal introspection interface list c89397b7-a326-41a0-907d-79f8b86c7cd9
+-----------+-------------------+------------------------+-------------------+----------------+
| Interface | MAC Address       | Switch Port VLAN IDs   | Switch Chassis ID | Switch Port ID |
+-----------+-------------------+------------------------+-------------------+----------------+
| p2p2      | 00:0a:f7:79:93:19 | [103, 102, 18, 20, 42] | 64:64:9b:31:12:00 | 510            |
| p2p1      | 00:0a:f7:79:93:18 | [101]                  | 64:64:9b:31:12:00 | 507            |
| em1       | c8:1f:66:c7:e8:2f | [162]                  | 08:81:f4:a6:b3:80 | 515            |
| em2       | c8:1f:66:c7:e8:30 | [182, 183]             | 08:81:f4:a6:b3:80 | 559            |
+-----------+-------------------+------------------------+-------------------+----------------+
To see interface data and switch port information:
(undercloud) $ openstack baremetal introspection interface show [NODE UUID] [INTERFACE]
For example:
(undercloud) $ openstack baremetal introspection interface show c89397b7-a326-41a0-907d-79f8b86c7cd9 p2p1
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------+
| Field                                | Value                                                                                                                  |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------+
| interface                            | p2p1                                                                                                                   |
| mac                                  | 00:0a:f7:79:93:18                                                                                                      |
| node_ident                           | c89397b7-a326-41a0-907d-79f8b86c7cd9                                                                                   |
| switch_capabilities_enabled          | [u'Bridge', u'Router']                                                                                                 |
| switch_capabilities_support          | [u'Bridge', u'Router']                                                                                                 |
| switch_chassis_id                    | 64:64:9b:31:12:00                                                                                                      |
| switch_port_autonegotiation_enabled  | True                                                                                                                   |
| switch_port_autonegotiation_support  | True                                                                                                                   |
| switch_port_description              | ge-0/0/2.0                                                                                                             |
| switch_port_id                       | 507                                                                                                                    |
| switch_port_link_aggregation_enabled | False                                                                                                                  |
| switch_port_link_aggregation_id      | 0                                                                                                                      |
| switch_port_link_aggregation_support | True                                                                                                                   |
| switch_port_management_vlan_id       | None                                                                                                                   |
| switch_port_mau_type                 | Unknown                                                                                                                |
| switch_port_mtu                      | 1514                                                                                                                   |
| switch_port_physical_capabilities    | [u'1000BASE-T fdx', u'100BASE-TX fdx', u'100BASE-TX hdx', u'10BASE-T fdx', u'10BASE-T hdx', u'Asym and Sym PAUSE fdx'] |
| switch_port_protocol_vlan_enabled    | None                                                                                                                   |
| switch_port_protocol_vlan_ids        | None                                                                                                                   |
| switch_port_protocol_vlan_support    | None                                                                                                                   |
| switch_port_untagged_vlan_id         | 101                                                                                                                    |
| switch_port_vlan_ids                 | [101]                                                                                                                  |
| switch_port_vlans                    | [{u'name': u'RHOS13-PXE', u'id': 101}]                                                                                 |
| switch_protocol_identities           | None                                                                                                                   |
| switch_system_name                   | rhos-compute-node-sw1                                                                                                  |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------+
Retrieving Hardware Introspection Details
The Bare Metal service hardware inspection extras (inspection_extras) are enabled by default to retrieve hardware details. You can use these hardware details to configure your overcloud. For more information about the inspection_extras parameter in the undercloud.conf file, see Configuring the Director in the Director Installation and Usage guide.
For example, the numa_topology collector is part of these hardware inspection extras and includes the following information for each NUMA node:
- RAM (in kilobytes)
- Physical CPU cores and their sibling threads
- NICs associated with the NUMA node
Use the openstack baremetal introspection data save <UUID> | jq .numa_topology command to retrieve this information, replacing <UUID> with the UUID of the bare-metal node.
The following example shows the retrieved NUMA information for a bare-metal node:
{ "cpus": [ { "cpu": 1, "thread_siblings": [ 1, 17 ], "numa_node": 0 }, { "cpu": 2, "thread_siblings": [ 10, 26 ], "numa_node": 1 }, { "cpu": 0, "thread_siblings": [ 0, 16 ], "numa_node": 0 }, { "cpu": 5, "thread_siblings": [ 13, 29 ], "numa_node": 1 }, { "cpu": 7, "thread_siblings": [ 15, 31 ], "numa_node": 1 }, { "cpu": 7, "thread_siblings": [ 7, 23 ], "numa_node": 0 }, { "cpu": 1, "thread_siblings": [ 9, 25 ], "numa_node": 1 }, { "cpu": 6, "thread_siblings": [ 6, 22 ], "numa_node": 0 }, { "cpu": 3, "thread_siblings": [ 11, 27 ], "numa_node": 1 }, { "cpu": 5, "thread_siblings": [ 5, 21 ], "numa_node": 0 }, { "cpu": 4, "thread_siblings": [ 12, 28 ], "numa_node": 1 }, { "cpu": 4, "thread_siblings": [ 4, 20 ], "numa_node": 0 }, { "cpu": 0, "thread_siblings": [ 8, 24 ], "numa_node": 1 }, { "cpu": 6, "thread_siblings": [ 14, 30 ], "numa_node": 1 }, { "cpu": 3, "thread_siblings": [ 3, 19 ], "numa_node": 0 }, { "cpu": 2, "thread_siblings": [ 2, 18 ], "numa_node": 0 } ], "ram": [ { "size_kb": 66980172, "numa_node": 0 }, { "size_kb": 67108864, "numa_node": 1 } ], "nics": [ { "name": "ens3f1", "numa_node": 1 }, { "name": "ens3f0", "numa_node": 1 }, { "name": "ens2f0", "numa_node": 0 }, { "name": "ens2f1", "numa_node": 0 }, { "name": "ens1f1", "numa_node": 0 }, { "name": "ens1f0", "numa_node": 0 }, { "name": "eno4", "numa_node": 0 }, { "name": "eno1", "numa_node": 0 }, { "name": "eno3", "numa_node": 0 }, { "name": "eno2", "numa_node": 0 } ] }
6.3. Automatically Discover Bare Metal Nodes
You can use auto-discovery to register overcloud nodes and generate their metadata without first creating an instackenv.json file. This improvement can help reduce the time spent initially collecting node information; for example, it removes the need to collate the IPMI IP addresses and subsequently create the instackenv.json file.
Requirements
- All overcloud nodes must have their BMCs configured to be accessible to the director through IPMI.
- All overcloud nodes must be configured to PXE boot from the NIC connected to the undercloud control plane network.
Enable Auto-discovery
Bare Metal auto-discovery is enabled with the following options in undercloud.conf:

enable_node_discovery = True
discovery_default_driver = ipmi

- enable_node_discovery - When enabled, any node that boots the introspection ramdisk using PXE will be enrolled in ironic.
- discovery_default_driver - Sets the driver to use for discovered nodes. For example, ipmi.
- Add your IPMI credentials to ironic: create a file named ipmi-credentials.json with the following contents. You will need to replace the username and password values in this example to suit your environment:

[
    {
        "description": "Set default IPMI credentials",
        "conditions": [
            {"op": "eq", "field": "data://auto_discovered", "value": true}
        ],
        "actions": [
            {"action": "set-attribute", "path": "driver_info/ipmi_username", "value": "SampleUsername"},
            {"action": "set-attribute", "path": "driver_info/ipmi_password", "value": "RedactedSecurePassword"},
            {"action": "set-attribute", "path": "driver_info/ipmi_address", "value": "{data[inventory][bmc_address]}"}
        ]
    }
]
- Import the IPMI credentials file into ironic:
$ openstack baremetal introspection rule import ipmi-credentials.json
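To confirm that the rule imported successfully, you can list the registered introspection rules:

$ openstack baremetal introspection rule list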
Test Auto-discovery
- Power on the required nodes.
- Run openstack baremetal node list. You should see the new nodes listed in an enroll state:

$ openstack baremetal node list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| c6e63aec-e5ba-4d63-8d37-bd57628258e8 | None | None          | power off   | enroll             | False       |
| 0362b7b2-5b9c-4113-92e1-0b34a2535d9b | None | None          | power off   | enroll             | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
- Set the resource class for each node:
$ for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal node set $NODE --resource-class baremetal ; done
- Configure the kernel and ramdisk for each node:

$ for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal node manage $NODE ; done
$ openstack overcloud node configure --all-manageable
- Set all nodes to available:
$ for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal node provide $NODE ; done
Use Rules to Discover Different Vendor Hardware
If you have a heterogeneous hardware environment, you can use introspection rules to assign credentials and remote management settings for each vendor. For example, you might want a separate discovery rule to handle your Dell nodes that use DRAC:
- Create a file named dell-drac-rules.json with the following contents. You will need to replace the username and password values in this example to suit your environment:

[
    {
        "description": "Set default IPMI credentials",
        "conditions": [
            {"op": "eq", "field": "data://auto_discovered", "value": true},
            {"op": "ne", "field": "data://inventory.system_vendor.manufacturer", "value": "Dell Inc."}
        ],
        "actions": [
            {"action": "set-attribute", "path": "driver_info/ipmi_username", "value": "SampleUsername"},
            {"action": "set-attribute", "path": "driver_info/ipmi_password", "value": "RedactedSecurePassword"},
            {"action": "set-attribute", "path": "driver_info/ipmi_address", "value": "{data[inventory][bmc_address]}"}
        ]
    },
    {
        "description": "Set the vendor driver for Dell hardware",
        "conditions": [
            {"op": "eq", "field": "data://auto_discovered", "value": true},
            {"op": "eq", "field": "data://inventory.system_vendor.manufacturer", "value": "Dell Inc."}
        ],
        "actions": [
            {"action": "set-attribute", "path": "driver", "value": "idrac"},
            {"action": "set-attribute", "path": "driver_info/drac_username", "value": "SampleUsername"},
            {"action": "set-attribute", "path": "driver_info/drac_password", "value": "RedactedSecurePassword"},
            {"action": "set-attribute", "path": "driver_info/drac_address", "value": "{data[inventory][bmc_address]}"}
        ]
    }
]
- Import the rule into ironic:
$ openstack baremetal introspection rule import dell-drac-rules.json
6.4. Generate architecture specific roles
When building a multi-architecture cloud, you must add any architecture-specific roles to the roles_data.yaml file. The following example includes the ComputePPC64LE role along with the default roles. The Creating a Custom Role File section has information on roles.
openstack overcloud roles generate \
    --roles-path /usr/share/openstack-tripleo-heat-templates/roles \
    -o ~/templates/roles_data.yaml \
    Controller Compute ComputePPC64LE BlockStorage ObjectStorage CephStorage
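To check which role names are available to pass to the generate command, you can list the roles in the default roles path first. This is a sketch; it assumes the default template location shown above:

openstack overcloud roles list --roles-path /usr/share/openstack-tripleo-heat-templates/roles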
6.5. Tagging Nodes into Profiles
After registering and inspecting the hardware of each node, you will tag them into specific profiles. These profile tags match your nodes to flavors, and in turn the flavors are assigned to a deployment role. The following example shows the relationship across roles, flavors, profiles, and nodes for Controller nodes:
Type | Description |
---|---|
Role | The Controller role defines how to configure Controller nodes. |
Flavor | The control flavor defines the hardware profile for nodes to use as Controllers. You assign this flavor to the Controller role so that the director can decide which nodes to use. |
Profile | The control profile is a tag you apply to the control flavor. This defines the nodes that belong to the flavor. |
Node | You also apply the control profile tag to individual nodes, which groups them to the control flavor and, as a result, the director configures them using the Controller role. |
Default profile flavors compute, control, swift-storage, ceph-storage, and block-storage are created during undercloud installation and are usable without modification in most environments.
For a large number of nodes, use automatic profile tagging. See Appendix E, Automatic Profile Tagging for more details.
To tag a node into a specific profile, add a profile option to the properties/capabilities parameter for each node. For example, to tag your nodes to use Controller and Compute profiles respectively, use the following commands:
(undercloud) $ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13
(undercloud) $ openstack baremetal node set --property capabilities='profile:control,boot_option:local' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0
The addition of the profile:compute and profile:control options tags the two nodes into their respective profiles.
These commands also set the boot_option:local parameter, which defines how each node boots. Depending on your hardware, you might also need to set the boot_mode parameter to uefi so that nodes boot using UEFI instead of the default BIOS mode. For more information, see Section D.2, “UEFI Boot Mode”.
After completing node tagging, check the assigned profiles or possible profiles:
(undercloud) $ openstack overcloud profiles list
Custom Role Profiles
If using custom roles, you might need to create additional flavors and profiles to accommodate these new roles. For example, to create a new flavor for a Networker role, run the following command:
(undercloud) $ openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 networker
(undercloud) $ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="networker" networker
Tag nodes with the new profile:
(undercloud) $ openstack baremetal node set --property capabilities='profile:networker,boot_option:local' dad05b82-0c74-40bf-9d12-193184bfc72d
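To verify that a profile tag was applied to a node, you can inspect its properties field (the UUID here matches the previous example):

(undercloud) $ openstack baremetal node show dad05b82-0c74-40bf-9d12-193184bfc72d -f value -c properties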
6.6. Defining the root disk
The director must identify the root disk during provisioning when a node has multiple disks. For example, most Ceph Storage nodes use multiple disks. By default, the director writes the overcloud image to the root disk during the provisioning process.
There are several properties that you can define to help the director identify the root disk:
- model (String): Device identifier.
- vendor (String): Device vendor.
- serial (String): Disk serial number.
- hctl (String): Host:Channel:Target:Lun for SCSI.
- size (Integer): Size of the device in GB.
- wwn (String): Unique storage identifier.
- wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
- wwn_vendor_extension (String): Unique vendor storage identifier.
- rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
- name (String): The name of the device, for example: /dev/sdb1.
- by_path (String): The unique PCI path of the device. Use this property if you do not want to use the UUID of the device.
Use the name property only for devices with persistent names. Do not use name to set the root disk for any other device because this value can change when the node boots.
Complete the following steps to specify the root device using its serial number.
Procedure
Check the disk information from the hardware introspection of each node. Run the following command to display the disk information of a node:
(undercloud) $ openstack baremetal introspection data save 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 | jq ".inventory.disks"
For example, the data for one node might show three disks:
[ { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sda", "wwn_vendor_extension": "0x1ea4dcc412a9632b", "wwn_with_extension": "0x61866da04f3807001ea4dcc412a9632b", "model": "PERC H330 Mini", "wwn": "0x61866da04f380700", "serial": "61866da04f3807001ea4dcc412a9632b" } { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sdb", "wwn_vendor_extension": "0x1ea4e13c12e36ad6", "wwn_with_extension": "0x61866da04f380d001ea4e13c12e36ad6", "model": "PERC H330 Mini", "wwn": "0x61866da04f380d00", "serial": "61866da04f380d001ea4e13c12e36ad6" } { "size": 299439751168, "rotational": true, "vendor": "DELL", "name": "/dev/sdc", "wwn_vendor_extension": "0x1ea4e31e121cfb45", "wwn_with_extension": "0x61866da04f37fc001ea4e31e121cfb45", "model": "PERC H330 Mini", "wwn": "0x61866da04f37fc00", "serial": "61866da04f37fc001ea4e31e121cfb45" } ]
Set the root_device parameter for the node definition. The following example shows how to set the root device to disk 2, which has 61866da04f380d001ea4e13c12e36ad6 as the serial number:

(undercloud) $ openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0
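The other hints listed above work the same way. For example, the following sketch selects the same disk by its wwn value from the introspection data instead of its serial number:

(undercloud) $ openstack baremetal node set --property root_device='{"wwn": "0x61866da04f380d00"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0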
Note: Ensure that you configure the BIOS of each node to include booting from the root disk that you choose. Configure the boot order to boot from the network first, then to boot from the root disk.
The director identifies the specific disk to use as the root disk. When you run the openstack overcloud deploy command, the director provisions and writes the Overcloud image to the root disk.
6.7. Using the overcloud-minimal image to avoid using a Red Hat subscription entitlement
By default, director writes the QCOW2 overcloud-full image to the root disk during the provisioning process. The overcloud-full image uses a valid Red Hat subscription. However, you can also use the overcloud-minimal image, for example, to provision a bare OS where you do not want to run any other OpenStack services and consume your subscription entitlements.

A common use case for this occurs when you want to provision nodes with only Ceph daemons. For this and similar use cases, you can use the overcloud-minimal image option to avoid reaching the limit of your paid Red Hat subscriptions. For information about how to obtain the overcloud-minimal image, see Obtaining images for overcloud nodes.
Procedure
To configure director to use the overcloud-minimal image, create an environment file that contains the following image definition:

parameter_defaults:
  <roleName>Image: overcloud-minimal

Replace <roleName> with the name of the role and append Image to the name of the role. The following example shows an overcloud-minimal image for Ceph storage nodes:

parameter_defaults:
  CephStorageImage: overcloud-minimal
- Pass the environment file to the openstack overcloud deploy command.
The overcloud-minimal image supports only standard Linux bridges and not OVS, because OVS is an OpenStack service that requires an OpenStack subscription entitlement.
6.8. Creating an Environment File that Defines Node Counts and Flavors
By default, the director deploys an overcloud with 1 Controller node and 1 Compute node using the baremetal flavor. However, this is only suitable for a proof-of-concept deployment. You can override the default configuration by specifying different node counts and flavors. For a small-scale production environment, consider having at least 3 Controller nodes and 3 Compute nodes, and assign specific flavors to make sure the nodes are created with the appropriate resource specifications. This procedure shows how to create an environment file named node-info.yaml that stores the node counts and flavor assignments.
Create a node-info.yaml file under the /home/stack/templates/ directory:

(undercloud) $ touch /home/stack/templates/node-info.yaml
Edit the file to include the node counts and flavors you need. This example deploys 3 Controller nodes, 3 Compute nodes, and 3 Ceph Storage nodes:
parameter_defaults:
  OvercloudControllerFlavor: control
  OvercloudComputeFlavor: compute
  OvercloudCephStorageFlavor: ceph-storage
  ControllerCount: 3
  ComputeCount: 3
  CephStorageCount: 3
This file is later used in Section 6.12, “Including Environment Files in Overcloud Creation”.
6.9. Configure overcloud nodes to trust the undercloud CA
Follow this procedure if your undercloud uses TLS and the CA is not publicly trusted. The undercloud operates its own Certificate Authority (CA) for SSL endpoint encryption. To make the undercloud endpoints accessible to the rest of your deployment, configure your overcloud nodes to trust the undercloud CA.
For this approach to work, your overcloud nodes need a network route to the undercloud’s public endpoint. It is likely that deployments that rely on spine-leaf networking will need to apply this configuration.
Understanding undercloud certificates
There are two types of custom certificates that can be used in the undercloud: user-provided certificates, and automatically generated certificates.
- User-provided certificates - This definition applies when you have provided your own certificate. This could be from your own CA, or it might be self-signed. This is passed using the undercloud_service_certificate option. In this case, you will need to either trust the self-signed certificate, or the CA (depending on your deployment).
- Auto-generated certificates - This definition applies when you use certmonger to generate the certificate using its own local CA. This is enabled using the generate_service_certificate option. In this case, there will be a CA certificate (/etc/pki/ca-trust/source/anchors/cm-local-ca.pem), and there will be a server certificate used by the undercloud’s HAProxy instance. To present this certificate to OpenStack, you will need to add the CA certificate to the inject-trust-anchor-hiera.yaml file.
See Section 4.9, “Director configuration parameters” for descriptions and usage of the undercloud_service_certificate and generate_service_certificate options.
Use a custom certificate in the undercloud
This example uses a self-signed certificate located in /home/stack/ca.crt.pem. If you use auto-generated certificates, use /etc/pki/ca-trust/source/anchors/cm-local-ca.pem instead.
Open the certificate file and copy only the certificate portion. Do not include the key:
$ vi /home/stack/ca.crt.pem
The certificate portion you need will look similar to this shortened example:
-----BEGIN CERTIFICATE----- MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3 -----END CERTIFICATE-----
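If you prefer not to copy the certificate portion by hand, a command such as the following prints only the certificate from a PEM file, which you can then paste into the YAML file in the next step (assuming the openssl tool is installed):

$ openssl x509 -in /home/stack/ca.crt.pem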
Create a new YAML file called /home/stack/inject-trust-anchor-hiera.yaml with the following contents, and include the certificate you copied from the PEM file:

parameter_defaults:
  CAMap:
    overcloud-ca:
      content: |
        -----BEGIN CERTIFICATE-----
        MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV
        BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH
        UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3
        -----END CERTIFICATE-----
    undercloud-ca:
      content: |
        -----BEGIN CERTIFICATE-----
        MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV
        BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH
        UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3
        -----END CERTIFICATE-----
Note: The certificate string must follow the PEM format and use the correct YAML indentation within the content parameter.
The CA certificate is copied to each overcloud node during the overcloud deployment, causing it to trust the encryption presented by the undercloud’s SSL endpoints. For more information on including environment files, see Section 6.12, “Including Environment Files in Overcloud Creation”.
6.10. Customizing the Overcloud with Environment Files
The undercloud includes a set of Heat templates that acts as a plan for your overcloud creation. You can customize aspects of the overcloud using environment files, which are YAML-formatted files that override parameters and resources in the core Heat template collection. You can include as many environment files as necessary. However, the order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Use the following list as an example of the environment file order:
- The number of nodes for each role and their flavors. It is vital to include this information for overcloud creation.
- The location of the container images for containerized OpenStack services. This is the file created from one of the options in Chapter 5, Configuring a container image source.
- Any network isolation files, starting with the initialization file (environments/network-isolation.yaml) from the heat template collection, then your custom NIC configuration file, and finally any additional network configurations.
- Any external load balancing environment files if you are using an external load balancer. See "External Load Balancing for the Overcloud" for more information.
- Any storage environment files such as Ceph Storage, NFS, iSCSI, etc.
- Any environment files for Red Hat CDN or Satellite registration. See "Overcloud Registration" for more information.
- Any other custom environment files.
The /usr/share/openstack-tripleo-heat-templates/environments directory contains environment files to enable containerized services (docker.yaml and docker-ha.yaml). OpenStack Platform director automatically includes these files during overcloud deployment. Do not manually include these files with your deployment command.
It is recommended to keep your custom environment files organized in a separate directory, such as the templates directory.
You can customize advanced features for your overcloud using the Advanced Overcloud Customization guide.
For more detailed information on Heat templates and environment files, see the Understanding Heat Templates section of the Advanced Overcloud Customization guide.
A basic overcloud uses local LVM storage for block storage, which is not a supported configuration. It is recommended to use an external storage solution, such as Red Hat Ceph Storage, for block storage.
6.11. Creating the Overcloud with the CLI Tools
The final stage in creating your OpenStack environment is to run the openstack overcloud deploy command to create it. Before running this command, familiarize yourself with key options and how to include custom environment files.
Do not run openstack overcloud deploy as a background process. The overcloud creation might hang in mid-deployment if started as a background process.
Setting Overcloud Parameters
The following table lists additional parameters for the openstack overcloud deploy command.
Parameter | Description |
---|---|
--templates [TEMPLATES] | The directory containing the Heat templates to deploy. If blank, the command uses the default template location at /usr/share/openstack-tripleo-heat-templates/ |
--stack STACKNAME | The name of the stack to create or update |
-t [TIMEOUT], --timeout [TIMEOUT] | Deployment timeout in minutes. Do not set this option to a value higher than the keystone token timeout limit, which is 240 minutes by default. |
--libvirt-type [LIBVIRT_TYPE] | Virtualization type to use for hypervisors |
--ntp-server [NTP_SERVER] | Network Time Protocol (NTP) server to use to synchronize time. You can also specify multiple NTP servers in a comma-separated list, for example: --ntp-server 0.centos.pool.org,1.centos.pool.org |
--no-proxy [NO_PROXY] | Defines custom values for the environment variable no_proxy, which excludes certain hostnames from proxy communication. |
--overcloud-ssh-user OVERCLOUD_SSH_USER | Defines the SSH user to access the overcloud nodes. Normally SSH access occurs through the heat-admin user. |
-e [EXTRA HEAT TEMPLATE], --environment-file [ENVIRONMENT FILE] | Extra environment files to pass to the overcloud deployment. Can be specified more than once. Note that the order of environment files passed to the openstack overcloud deploy command is important. For example, parameters from each sequential environment file override the same parameters from earlier environment files. |
--environment-directory [ENVIRONMENT DIRECTORY] | The directory containing environment files to include in deployment. The command processes these environment files in numerical, then alphabetical order. |
--validation-errors-nonfatal | The overcloud creation process performs a set of pre-deployment checks. This option exits if any non-fatal errors occur from the pre-deployment checks. It is advisable to use this option as any errors can cause your deployment to fail. |
--validation-warnings-fatal | The overcloud creation process performs a set of pre-deployment checks. This option exits if any non-critical warnings occur from the pre-deployment checks. |
--dry-run | Performs a validation check on the overcloud but does not actually create the overcloud. |
--skip-postconfig | Skip the overcloud post-deployment configuration. |
--force-postconfig | Force the overcloud post-deployment configuration. |
--skip-deploy-identifier | Skip generation of a unique identifier for the DeployIdentifier parameter. |
--answers-file ANSWERS_FILE | Path to a YAML file with arguments and parameters. |
--rhel-reg | Register overcloud nodes to the Customer Portal or Satellite 6. |
--reg-method | Registration method to use for the overcloud nodes. satellite for Red Hat Satellite 6 or Red Hat Satellite 5, portal for Customer Portal. |
--reg-org [REG_ORG] | Organization to use for registration. |
--reg-force | Register the system even if it is already registered. |
--reg-sat-url [REG_SAT_URL] | The base URL of the Satellite server to register overcloud nodes. Use the Satellite’s HTTP URL and not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not https://satellite.example.com. The overcloud creation process uses this URL to determine whether the server is a Red Hat Satellite 5 or Red Hat Satellite 6 server. If a Red Hat Satellite 6 server, the overcloud obtains the katello-ca-consumer-latest.noarch.rpm file, registers with subscription-manager, and installs katello-agent. If a Red Hat Satellite 5 server, the overcloud obtains the RHN-ORG-TRUSTED-SSL-CERT file and registers with rhnreg_ks. |
--reg-activation-key [REG_ACTIVATION_KEY] | Activation key to use for registration. |
Some command line parameters are outdated or deprecated in favor of Heat template parameters, which you include in the parameter_defaults section of an environment file. The following table maps deprecated parameters to their Heat template equivalents.
Parameter | Description | Heat Template Parameter |
---|---|---|
--control-scale | The number of Controller nodes to scale out | ControllerCount |
--compute-scale | The number of Compute nodes to scale out | ComputeCount |
--ceph-storage-scale | The number of Ceph Storage nodes to scale out | CephStorageCount |
--block-storage-scale | The number of Cinder nodes to scale out | BlockStorageCount |
--swift-storage-scale | The number of Swift nodes to scale out | ObjectStorageCount |
--control-flavor | The flavor to use for Controller nodes | OvercloudControllerFlavor |
--compute-flavor | The flavor to use for Compute nodes | OvercloudComputeFlavor |
--ceph-storage-flavor | The flavor to use for Ceph Storage nodes | OvercloudCephStorageFlavor |
--block-storage-flavor | The flavor to use for Cinder nodes | OvercloudBlockStorageFlavor |
--swift-storage-flavor | The flavor to use for Swift storage nodes | OvercloudSwiftStorageFlavor |
--neutron-flat-networks | Defines the flat networks to configure in neutron plugins. Defaults to "datacentre" to permit external network creation | NeutronFlatNetworks |
--neutron-physical-bridge | An Open vSwitch bridge to create on each hypervisor. This defaults to "br-ex". Typically, this should not need to be changed | HypervisorNeutronPhysicalBridge |
--neutron-bridge-mappings | The logical to physical bridge mappings to use. Defaults to mapping the external bridge on hosts (br-ex) to a physical name (datacentre). You would use this for the default floating network | NeutronBridgeMappings |
--neutron-public-interface | Defines the interface to bridge onto br-ex for network nodes | NeutronPublicInterface |
--neutron-network-type | The tenant network type for Neutron | NeutronNetworkType |
--neutron-tunnel-types | The tunnel types for the Neutron tenant network. To specify multiple values, use a comma separated string | NeutronTunnelTypes |
--neutron-tunnel-id-ranges | Ranges of GRE tunnel IDs to make available for tenant network allocation | NeutronTunnelIdRanges |
--neutron-vni-ranges | Ranges of VXLAN VNI IDs to make available for tenant network allocation | NeutronVniRanges |
--neutron-network-vlan-ranges | The Neutron ML2 and Open vSwitch VLAN mapping range to support. Defaults to permitting any VLAN on the datacentre physical network | NeutronNetworkVLANRanges |
--neutron-mechanism-drivers | The mechanism drivers for the neutron tenant network. Defaults to "openvswitch". To specify multiple values, use a comma-separated string | NeutronMechanismDrivers |
--neutron-disable-tunneling | Disables tunneling in case you aim to use a VLAN segmented network or flat network with Neutron | No parameter mapping. |
--validation-errors-fatal | The overcloud creation process performs a set of pre-deployment checks. This option exits if any fatal errors occur from the pre-deployment checks. It is advisable to use this option as any errors can cause your deployment to fail. | No parameter mapping |
These parameters are scheduled for removal in a future version of Red Hat OpenStack Platform.
Run the following command for a full list of options:
(undercloud) $ openstack help overcloud deploy
6.12. Including Environment Files in Overcloud Creation
The -e option includes an environment file to customize your overcloud. You can include as many environment files as necessary. However, the order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Use the following list as an example of the environment file order:
- The number of nodes for each role and their flavors. It is vital to include this information for overcloud creation.
- The location of the container images for containerized OpenStack services. This is the file created from one of the options in Chapter 5, Configuring a container image source.
- Any network isolation files, starting with the initialization file (environments/network-isolation.yaml) from the heat template collection, then your custom NIC configuration file, and finally any additional network configurations.
- Any external load balancing environment files if you are using an external load balancer. See "External Load Balancing for the Overcloud" for more information.
- Any storage environment files such as Ceph Storage, NFS, iSCSI, etc.
- Any environment files for Red Hat CDN or Satellite registration. See "Overcloud Registration" for more information.
- Any other custom environment files.
The /usr/share/openstack-tripleo-heat-templates/environments directory contains environment files to enable containerized services (docker.yaml and docker-ha.yaml). OpenStack Platform director automatically includes these files during overcloud deployment. Do not manually include these files with your deployment command.
Any environment files added to the overcloud using the -e option become part of your overcloud’s stack definition. The following command is an example of how to start the overcloud creation with custom environment files included:
(undercloud) $ openstack overcloud deploy --templates \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/templates/overcloud_images.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/templates/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /home/stack/templates/ceph-custom-config.yaml \
  -e /home/stack/inject-trust-anchor-hiera.yaml \
  -r /home/stack/templates/roles_data.yaml \
  --ntp-server pool.ntp.org
This command contains the following additional options:
- --templates
- Creates the overcloud using the Heat template collection in /usr/share/openstack-tripleo-heat-templates as a foundation.
- -e /home/stack/templates/node-info.yaml
Adds an environment file to define how many nodes and which flavors to use for each role. For example:
parameter_defaults:
  OvercloudControllerFlavor: control
  OvercloudComputeFlavor: compute
  OvercloudCephStorageFlavor: ceph-storage
  ControllerCount: 3
  ComputeCount: 3
  CephStorageCount: 3
- -e /home/stack/templates/overcloud_images.yaml
- Adds an environment file containing the container image sources. See Chapter 5, Configuring a container image source for more information.
- -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml
Adds an environment file to initialize network isolation in the overcloud deployment.
Note: The network-isolation.j2.yaml file is the Jinja2 version of this template. The openstack overcloud deploy command renders Jinja2 templates into plain YAML files. This means that you need to include the resulting rendered YAML file name (in this case, network-isolation.yaml) when you run the openstack overcloud deploy command.
- -e /home/stack/templates/network-environment.yaml
Adds an environment file to customize network isolation.
Note: Run the openstack overcloud netenv validate command to validate the syntax of your network-environment.yaml file. This command also validates the individual nic-config files for the compute, controller, storage, and composable roles network files. Use the -f or --file option to specify the file that you want to validate:

$ openstack overcloud netenv validate -f ~/templates/network-environment.yaml
- -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
- Adds an environment file to enable Ceph Storage services.
- -e /home/stack/templates/ceph-custom-config.yaml
- Adds an environment file to customize your Ceph Storage configuration.
- -e /home/stack/inject-trust-anchor-hiera.yaml
- Adds an environment file to install a custom certificate in the undercloud.
- --ntp-server pool.ntp.org
- Uses an NTP server for time synchronization. This is required to keep the Controller node cluster synchronized.
- -r /home/stack/templates/roles_data.yaml
- (Optional) The generated roles data file if you use custom roles or enable a multi-architecture cloud. See Section 6.4, “Generate architecture specific roles” for more information.
The director requires these environment files for re-deployment and post-deployment functions in Chapter 9, Performing Tasks after Overcloud Creation. Failure to include these files can result in damage to your overcloud.
If you aim to modify the overcloud configuration later, you should:

- Modify parameters in the custom environment files and Heat templates
- Run the openstack overcloud deploy command again with the same environment files
Including an Environment File Directory
You can add a whole directory containing environment files using the --environment-directory option. The deployment command processes the environment files in this directory in numerical, then alphabetical order. If you use this method, it is recommended to use filenames with a numerical prefix to order how they are processed. For example:

(undercloud) $ ls -1 ~/templates
00-node-info.yaml
10-overcloud_images.yaml
20-network-isolation.yaml
30-network-environment.yaml
40-storage-environment.yaml
50-rhel-registration.yaml
Run the following deployment command to include the directory:
(undercloud) $ openstack overcloud deploy --templates --environment-directory ~/templates
Using an Answers File
An answers file is a YAML format file that simplifies the inclusion of templates and environment files. The answers file uses the following parameters:
- templates
- The core Heat template collection to use. This acts as a substitute for the --templates command line option.
- environments
- A list of environment files to include. This acts as a substitute for the --environment-file (-e) command line option.
For example, an answers file might contain the following:
templates: /usr/share/openstack-tripleo-heat-templates/
environments:
  - ~/templates/00-node-info.yaml
  - ~/templates/10-network-isolation.yaml
  - ~/templates/20-network-environment.yaml
  - ~/templates/30-storage-environment.yaml
  - ~/templates/40-rhel-registration.yaml
Run the following deployment command to include the answers file:
(undercloud) $ openstack overcloud deploy --answers-file ~/answers.yaml
Guidelines for Overcloud Configuration and Environment File Management
Use the following guidelines to help you manage your environment files and overcloud configuration:
- Do not modify the core Heat templates directly, as this can lead to undesirable results and break your environment. Modify the overcloud configuration through environment files.
- Do not edit the overcloud configuration directly, because the director overrides such manual configuration when you update the overcloud stack. Modify the overcloud configuration through environment files and rerun your deployment command.
- Create a bash script that includes your deploy command, and use this script whenever you perform an update to the overcloud (see the example script after this list). This script helps you keep the exact options and environment files consistent when you rerun the openstack overcloud deploy command and helps you avoid breaking your overcloud.
- Maintain revisions of the directory holding your environment files to avoid unwanted changes and track the changes made in the past.
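For example, a minimal deployment script might look like the following sketch. The file name and the environment files it includes are illustrative; use the exact set of files from your own deployment:

#!/bin/bash
# deploy-overcloud.sh - rerun with the same options to update the overcloud
source ~/stackrc
openstack overcloud deploy --templates \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/templates/overcloud_images.yaml \
  --ntp-server pool.ntp.org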
6.13. Managing Overcloud Plans
As an alternative to using the openstack overcloud deploy command, the director can also manage imported plans.

To create a new plan, run the following command as the stack user:
(undercloud) $ openstack overcloud plan create --templates /usr/share/openstack-tripleo-heat-templates my-overcloud
This creates a plan from the core Heat template collection in /usr/share/openstack-tripleo-heat-templates. The director names the plan based on your input. In this example, it is my-overcloud. The director uses this name as a label for the object storage container, the workflow environment, and overcloud stack names.
Add parameters from environment files using the following command:
(undercloud) $ openstack overcloud parameters set my-overcloud ~/templates/my-environment.yaml
Deploy your plans using the following command:
(undercloud) $ openstack overcloud plan deploy my-overcloud
Delete existing plans using the following command:
(undercloud) $ openstack overcloud plan delete my-overcloud
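To check which plans the director currently stores, you can list them:

(undercloud) $ openstack overcloud plan list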
The openstack overcloud deploy command essentially uses all of these commands to remove the existing plan, upload a new plan with environment files, and deploy the plan.
6.14. Validating Overcloud Templates and Plans
Before executing an overcloud creation or stack update, validate your Heat templates and environment files for any errors.
Creating a Rendered Template
The core Heat templates for the overcloud are in a Jinja2 format. To validate your templates, render a version without Jinja2 formatting using the following commands:
(undercloud) $ openstack overcloud plan create --templates /usr/share/openstack-tripleo-heat-templates overcloud-validation
(undercloud) $ mkdir ~/overcloud-validation
(undercloud) $ cd ~/overcloud-validation
(undercloud) $ openstack container save overcloud-validation
Use the rendered template in ~/overcloud-validation for the validation tests that follow.
Validating Template Syntax
Use the following command to validate the template syntax:
(undercloud) $ openstack orchestration template validate --show-nested --template ~/overcloud-validation/overcloud.yaml -e ~/overcloud-validation/overcloud-resource-registry-puppet.yaml -e [ENVIRONMENT FILE] -e [ENVIRONMENT FILE]
The validation requires the overcloud-resource-registry-puppet.yaml environment file to include overcloud-specific resources. Add any additional environment files to this command with the -e option. Also include the --show-nested option to resolve parameters from nested templates.
This command identifies any syntax errors in the template. If the template syntax validates successfully, the output shows a preview of the resulting overcloud template.
6.15. Monitoring the Overcloud Creation
The overcloud creation process begins and the director provisions your nodes. This process takes some time to complete. To view the status of the overcloud creation, open a separate terminal as the stack user and run:
(undercloud) $ source ~/stackrc
(undercloud) $ openstack stack list --nested
The openstack stack list --nested command shows the current stage of the overcloud creation.

If the initial overcloud creation fails, you can delete the partially deployed overcloud with the openstack stack delete overcloud command and try again. Only run this command if the initial overcloud creation fails. Do not run this command on a fully deployed and operational overcloud or you will delete the entire overcloud.
6.16. Viewing the overcloud deployment output
After a successful overcloud deployment, the shell returns the following information that you can use to access your overcloud:
Overcloud configuration completed.
Overcloud Endpoint: http://192.168.24.113:5000
Overcloud Horizon Dashboard URL: http://192.168.24.113:80/dashboard
Overcloud rc file: /home/stack/overcloudrc
Overcloud Deployed
6.17. Accessing the Overcloud
The director generates a script to configure and help authenticate interactions with your overcloud from the director host. The director saves this file, overcloudrc, in your stack user’s home directory. Run the following command to use this file:
(undercloud) $ source ~/overcloudrc
This loads the necessary environment variables to interact with your overcloud from the director host’s CLI. The command prompt changes to indicate this:
(overcloud) $
To return to interacting with the director’s host, run the following command:
(overcloud) $ source ~/stackrc (undercloud) $
Each node in the overcloud also contains a user called heat-admin. The stack user has SSH access to this user on each node. To access a node over SSH, find the IP address of the desired node:
(undercloud) $ openstack server list
Then connect to the node using the heat-admin user and the node’s IP address:
(undercloud) $ ssh heat-admin@192.168.24.23
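Once connected, you can inspect the node. For example, because the overcloud services in this release run as docker containers, a command such as the following (the hostname in the prompt is illustrative) shows their status:

[heat-admin@overcloud-controller-0 ~]$ sudo docker ps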
6.18. Completing the Overcloud Creation
This concludes the creation of the overcloud using the command line tools. For post-creation functions, see Chapter 9, Performing Tasks after Overcloud Creation.