Chapter 7. Configuring a Basic Overcloud using Pre-Provisioned Nodes
This chapter contains basic configuration procedures for using pre-provisioned nodes to configure an OpenStack Platform environment. This scenario differs from the standard overcloud creation scenarios in several ways:
- You can provision nodes using an external tool and let the director control the overcloud configuration only.
- You can use nodes without relying on the director’s provisioning methods. This is useful if you want to create an overcloud without power management control or use networks with DHCP/PXE boot restrictions.
- The director does not use OpenStack Compute (nova), OpenStack Bare Metal (ironic), or OpenStack Image (glance) to manage nodes.
- Pre-provisioned nodes can use a custom partitioning layout that does not rely on the QCOW2 `overcloud-full` image.
This scenario includes only basic configuration with no custom features. However, you can add advanced configuration options to this basic overcloud and customize it to your specifications using the instructions in the Advanced Overcloud Customization guide.
Combining pre-provisioned nodes with director-provisioned nodes in an overcloud is not supported.
Requirements
- The director node created in Chapter 4, Installing director.
- A set of bare metal machines for your nodes. The number of nodes required depends on the type of overcloud you intend to create. These machines must comply with the requirements set for each node type. These nodes require Red Hat Enterprise Linux 7.6 or later installed as the host operating system. Red Hat recommends using the latest version available.
- One network connection for managing the pre-provisioned nodes. This scenario requires uninterrupted SSH access to the nodes for orchestration agent configuration.
- One network connection for the Control Plane network. There are two main scenarios for this network:
- Using the Provisioning Network as the Control Plane, which is the default scenario. This network is usually a layer-3 (L3) routable network connection from the pre-provisioned nodes to the director. The examples for this scenario use the following IP address assignments:
Table 7.1. Provisioning Network IP Assignments

Node Name | IP Address |
---|---|
Director | 192.168.24.1 |
Controller 0 | 192.168.24.2 |
Compute 0 | 192.168.24.3 |
- Using a separate network. In situations where the director’s Provisioning network is a private non-routable network, you can define IP addresses for nodes from any subnet and communicate with the director over the Public API endpoint. There are certain caveats to this scenario, which this chapter examines later in Section 7.5, “Using a Separate Network for Overcloud Nodes”.
- All other network types in this example also use the Control Plane network for OpenStack services. However, you can create additional networks for other network traffic types.
7.1. Creating a User for Configuring Nodes
When configuring an overcloud with pre-provisioned nodes, the director requires SSH access to the overcloud nodes as the `stack` user. To create the `stack` user, complete the following steps:

- On each overcloud node, create the `stack` user and set a password. For example, run the following commands on the Controller node:

  ```
  [root@controller-0 ~]# useradd stack
  [root@controller-0 ~]# passwd stack  # specify a password
  ```

- Disable password requirements for this user when using `sudo`:

  ```
  [root@controller-0 ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
  [root@controller-0 ~]# chmod 0440 /etc/sudoers.d/stack
  ```

- After creating and configuring the `stack` user on all pre-provisioned nodes, copy the `stack` user's public SSH key from the director node to each overcloud node. For example, to copy the director's public SSH key to the Controller node, run the following command:

  ```
  [stack@director ~]$ ssh-copy-id stack@192.168.24.2
  ```
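To repeat these steps across many nodes, you can drive them from the director in a loop. This is a hedged sketch, assuming root SSH access to the nodes is still available during initial setup and that the node IP addresses match Table 7.1; replace `<PASSWORD>` with a real password:

```
[stack@director ~]$ for node in 192.168.24.2 192.168.24.3; do
      ssh root@$node 'useradd stack && echo "stack:<PASSWORD>" | chpasswd && \
          echo "stack ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/stack && \
          chmod 0440 /etc/sudoers.d/stack'
      ssh-copy-id stack@$node
  done
```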
7.2. Registering the Operating System for Nodes
Each node requires access to a Red Hat subscription. Standalone Ceph nodes are an exception and do not require a Red Hat OpenStack Platform subscription. For standalone Ceph nodes, the director requires newer Ansible packages. Enable the `rhel-7-server-openstack-14-deployment-tools-rpms` repository on all Ceph nodes without active Red Hat OpenStack Platform subscriptions to obtain Red Hat OpenStack Platform-compatible deployment tools.
Complete the following steps on each node to register it with the Red Hat Content Delivery Network:
- Run the registration command and enter your Customer Portal user name and password when prompted:

  ```
  [root@controller-0 ~]# sudo subscription-manager register
  ```

- Find the entitlement pool for Red Hat OpenStack Platform 14:

  ```
  [root@controller-0 ~]# sudo subscription-manager list --available --all --matches="Red Hat OpenStack"
  ```

- Use the pool ID located in the previous step to attach the Red Hat OpenStack Platform 14 entitlements:

  ```
  [root@controller-0 ~]# sudo subscription-manager attach --pool=pool_id
  ```

- Disable all default repositories:

  ```
  [root@controller-0 ~]# sudo subscription-manager repos --disable=*
  ```

- Enable the required Red Hat Enterprise Linux repositories.

  For x86_64 systems, run:

  ```
  [root@controller-0 ~]# sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-ha-for-rhel-7-server-rpms --enable=rhel-7-server-openstack-14-rpms --enable=rhel-7-server-rhceph-3-osd-rpms --enable=rhel-7-server-rhceph-3-mon-rpms --enable=rhel-7-server-rhceph-3-tools-rpms
  ```

  For POWER systems, run:

  ```
  [root@controller-0 ~]# sudo subscription-manager repos --enable=rhel-7-for-power-le-rpms --enable=rhel-7-server-openstack-14-for-power-le-rpms
  ```

  Important: Enable only the repositories listed. Additional repositories can cause package and software conflicts. Do not enable any additional repositories.

- Update your system to ensure you have the latest base system packages:

  ```
  [root@controller-0 ~]# sudo yum update -y
  [root@controller-0 ~]# sudo reboot
  ```
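When registering many nodes, the pool lookup and attach steps can be scripted. This is a sketch, assuming the default `subscription-manager list` output format in which the pool appears on a `Pool ID:` line; verify the parsed value before attaching:

```
[root@controller-0 ~]# pool_id=$(sudo subscription-manager list --available --all \
      --matches="Red Hat OpenStack" | awk '/Pool ID/ {print $3; exit}')
[root@controller-0 ~]# sudo subscription-manager attach --pool="$pool_id"
```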
The node is now ready to use for your overcloud.
7.3. Configuring SSL/TLS Access to the Director
If the director uses SSL/TLS, the pre-provisioned nodes require the certificate authority file used to sign the director’s SSL/TLS certificates. If using your own certificate authority, perform the following actions on each overcloud node:
- Copy the certificate authority file to the `/etc/pki/ca-trust/source/anchors/` directory on each pre-provisioned node.
- Run the following command on each overcloud node:

  ```
  [root@controller-0 ~]# sudo update-ca-trust extract
  ```
These steps ensure the overcloud nodes can access the director’s Public API over SSL/TLS.
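For example, assuming a CA file named `ca.crt.pem` in the `stack` user's home directory on the director (a hypothetical file name), the copy and a quick verification might look like this. The `curl` check uses the director's Public API FQDN from this chapter's examples and assumes the SSL-enabled Identity service port; adjust both to your environment:

```
[stack@director ~]$ scp ca.crt.pem stack@192.168.24.2:/tmp/
[root@controller-0 ~]# sudo cp /tmp/ca.crt.pem /etc/pki/ca-trust/source/anchors/
[root@controller-0 ~]# sudo update-ca-trust extract
[root@controller-0 ~]# curl https://director.example.com:13000/v3
```

If the certificate authority is trusted, the `curl` command completes without certificate errors.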
7.4. Configuring Networking for the Control Plane
The pre-provisioned overcloud nodes obtain metadata from the director using standard HTTP requests. This means all overcloud nodes require L3 access to either:
- The director's Control Plane network, which is the subnet defined with the `network_cidr` parameter in your `undercloud.conf` file. The overcloud nodes require either direct access to this subnet or routable access to the subnet.
- The director's Public API endpoint, specified as the `undercloud_public_host` parameter in your `undercloud.conf` file. This option is available if you do not have an L3 route to the Control Plane or you aim to use SSL/TLS communication. See Section 7.5, “Using a Separate Network for Overcloud Nodes” for additional information about configuring your overcloud nodes to use the Public API endpoint.
The director uses the Control Plane network to manage and configure a standard overcloud. For an overcloud with pre-provisioned nodes, your network configuration might require some modification to accommodate communication between the director and the pre-provisioned nodes.
Using Network Isolation
You can use network isolation to group services to use specific networks, including the Control Plane. There are multiple network isolation strategies in the Advanced Overcloud Customization guide. You can also define specific IP addresses for nodes on the control plane. For more information about isolating networks and creating predictable node placement strategies, see the relevant sections in the Advanced Overcloud Customization guide.
If you use network isolation, ensure that your NIC templates do not include the NIC used for undercloud access. These templates can reconfigure the NIC, which introduces connectivity and configuration problems during deployment.
Assigning IP Addresses
If you do not use network isolation, you can use a single Control Plane network to manage all services. This requires manual configuration of the Control Plane NIC on each node to use an IP address within the Control Plane network range. If using the director's Provisioning network as the Control Plane, ensure the chosen overcloud IP addresses fall outside of the DHCP ranges for both provisioning (`dhcp_start` and `dhcp_end`) and introspection (`inspection_iprange`).
During standard overcloud creation, the director creates OpenStack Networking (neutron) ports and automatically assigns IP addresses to the overcloud nodes on the Provisioning / Control Plane network. However, this can cause the director to assign IP addresses different from the ones you configure manually for each node. In this situation, use a predictable IP address strategy to force the director to use the pre-provisioned IP assignments on the Control Plane.
For example, you can use an environment file `ctlplane-assignments.yaml` with the following IP assignments to implement a predictable IP strategy:
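A sketch of `ctlplane-assignments.yaml`, based on the attributes described in this section and the IP assignments in Table 7.1 (the `deployed-neutron-port.yaml` path assumes the default location of the core Heat template collection):

```yaml
resource_registry:
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml

parameter_defaults:
  DeployedServerPortMap:
    controller-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.2
      subnets:
        - cidr: 192.168.24.0/24
    compute-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.3
      subnets:
        - cidr: 192.168.24.0/24
```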
In this example, the `OS::TripleO::DeployedServer::ControlPlanePort` resource passes a set of parameters to the director and defines the IP assignments of the pre-provisioned nodes. The `DeployedServerPortMap` parameter defines the IP addresses and subnet CIDRs that correspond to each overcloud node. The mapping defines the following attributes:
- The name of the assignment, which follows the format `<node_hostname>-<network>`, where the `<node_hostname>` value matches the short hostname for the node and `<network>` matches the lowercase name of the network. For example: `controller-0-ctlplane` for `controller-0.example.com` and `compute-0-ctlplane` for `compute-0.example.com`.
- The IP assignments, which use the following parameter patterns:
  - `fixed_ips/ip_address` - Defines the fixed IP addresses for the control plane. Use multiple `ip_address` parameters in a list to define multiple IP addresses.
  - `subnets/cidr` - Defines the CIDR value for the subnet.
A later section in this chapter uses the resulting environment file (`ctlplane-assignments.yaml`) as part of the `openstack overcloud deploy` command.
7.5. Using a Separate Network for Overcloud Nodes
By default, the director uses the Provisioning network as the overcloud Control Plane. However, if this network is isolated and non-routable, nodes cannot communicate with the director’s Internal API during configuration. In this situation, you might need to define a separate network for the nodes and configure them to communicate with the director over the Public API.
There are several requirements for this scenario:
- The overcloud nodes must accommodate the basic network configuration from Section 7.4, “Configuring Networking for the Control Plane”.
- You must enable SSL/TLS on the director for Public API endpoint usage. For more information, see Section 4.2, “Director configuration parameters” and Appendix A, SSL/TLS Certificate Configuration.
- You must define an accessible fully qualified domain name (FQDN) for the director. This FQDN must resolve to a routable IP address for the director. Use the `undercloud_public_host` parameter in the `undercloud.conf` file to set this FQDN.
The examples in this section use IP address assignments that differ from the main scenario:
Node Name | IP Address or FQDN |
---|---|
Director (Internal API) | 192.168.24.1 (Provisioning Network and Control Plane) |
Director (Public API) | 10.1.1.1 / director.example.com |
Overcloud Virtual IP | 192.168.100.1 |
Controller 0 | 192.168.100.2 |
Compute 0 | 192.168.100.3 |
The following sections provide additional configuration for situations that require a separate network for overcloud nodes.
IP Address Assignments
The method for IP assignments is similar to Section 7.4, “Configuring Networking for the Control Plane”. However, since the Control Plane is not routable from the deployed servers, you must use the `DeployedServerPortMap` parameter to assign IP addresses from your chosen overcloud node subnet, including the virtual IP address to access the Control Plane. The following example is a modified version of the `ctlplane-assignments.yaml` environment file from Section 7.4, “Configuring Networking for the Control Plane” that accommodates this network architecture:
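A sketch of the modified file, using the separate-network IP assignments from the table above. The template paths and the `control_virtual_ip` entry name follow common TripleO conventions and should be verified against your template collection:

```yaml
resource_registry:
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::Network::Ports::ControlPlaneVipPort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  # Map RedisVipPort to noop.yaml to disable the default Control Plane VIP mapping
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

parameter_defaults:
  # Set to the Control Plane virtual IP so NIC validations have a pingable address
  EC2MetadataIp: 192.168.100.1
  ControlPlaneDefaultRoute: 192.168.100.1
  DeployedServerPortMap:
    control_virtual_ip:
      fixed_ips:
        - ip_address: 192.168.100.1
      subnets:
        - cidr: 192.168.100.0/24
    controller-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.100.2
      subnets:
        - cidr: 192.168.100.0/24
    compute-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.100.3
      subnets:
        - cidr: 192.168.100.0/24
```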
- The `RedisVipPort` resource is mapped to `network/ports/noop.yaml`. This mapping is necessary because the default Redis VIP address comes from the Control Plane. In this situation, use a `noop` to disable this Control Plane mapping.
- The `EC2MetadataIp` and `ControlPlaneDefaultRoute` parameters are set to the value of the Control Plane virtual IP address. The default NIC configuration templates require these parameters and you must set them to use a pingable IP address to pass the validations performed during deployment. Alternatively, customize the NIC configuration so that it does not require these parameters.
7.6. Mapping pre-provisioned node hostnames
When configuring pre-provisioned nodes, you must map Heat-based hostnames to their actual hostnames so that `ansible-playbook` can reach a resolvable host. Use the `HostnameMap` parameter to map these values.
Procedure
- Create an environment file, for example `hostname-map.yaml`, and include the `HostnameMap` parameter and the hostname mappings. Use the following syntax:

  ```
  parameter_defaults:
    HostnameMap:
      [HEAT HOSTNAME]: [ACTUAL HOSTNAME]
      [HEAT HOSTNAME]: [ACTUAL HOSTNAME]
  ```

  The `[HEAT HOSTNAME]` usually conforms to the following convention: `[STACK NAME]-[ROLE]-[INDEX]`.
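For example, for an overcloud stack named `overcloud`, mapping a Controller and a Compute node to hypothetical rack-based hostnames (`controller-00-rack01` and `compute-00-rack01` are placeholders) might look like this; verify the exact Heat hostnames for your roles against the convention above:

```yaml
parameter_defaults:
  HostnameMap:
    overcloud-controller-0: controller-00-rack01
    overcloud-compute-0: compute-00-rack01
```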
- Save the `hostname-map.yaml` file.
7.7. Configuring Ceph Storage for Pre-Provisioned Nodes
When using `ceph-ansible` and servers that are already deployed, you must run commands, such as the following, from the undercloud before deployment:

```
export OVERCLOUD_HOSTS="192.168.1.8 192.168.1.42"
bash /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/enable-ssh-admin.sh
```
Using the example `export` command, set the `OVERCLOUD_HOSTS` variable to a space-separated list of IP addresses of the overcloud hosts intended to be used as Ceph clients, such as the hosts running the Compute, Block Storage, Image, File System, and Telemetry services. The `enable-ssh-admin.sh` script configures a user on the overcloud nodes that Ansible uses to configure Ceph clients.
7.8. Creating the Overcloud with Pre-Provisioned Nodes
The overcloud deployment uses the standard CLI methods from Section 6.11, “Deployment command”. For pre-provisioned nodes, the deployment command requires some additional options and environment files from the core Heat template collection:
- `--disable-validations` - Disables basic CLI validations for services not used with pre-provisioned infrastructure. Otherwise the deployment fails.
- `environments/deployed-server-environment.yaml` - Primary environment file for creating and configuring pre-provisioned infrastructure. This environment file substitutes the `OS::Nova::Server` resources with `OS::Heat::DeployedServer` resources.
- `environments/deployed-server-bootstrap-environment-rhel.yaml` - Environment file to execute a bootstrap script on the pre-provisioned servers. This script installs additional packages and includes basic configuration for overcloud nodes.
- `environments/deployed-server-pacemaker-environment.yaml` - Environment file for Pacemaker configuration on pre-provisioned Controller nodes. The namespace for the resources registered in this file uses the Controller role name from `deployed-server/deployed-server-roles-data.yaml`, which is `ControllerDeployedServer` by default.
- `deployed-server/deployed-server-roles-data.yaml` - An example custom roles file. This file replicates the default `roles_data.yaml` but also includes the `disable_constraints: True` parameter for each role. This parameter disables orchestration constraints in the generated role templates. These constraints are for services that pre-provisioned infrastructure does not use. If you want to use a custom roles file, ensure you include the `disable_constraints: True` parameter for each role.
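A minimal sketch of such a role entry with the parameter set (the role name follows the default from `deployed-server-roles-data.yaml`; the service list is abbreviated, and a real roles file lists every service for the role):

```yaml
- name: ControllerDeployedServer
  disable_constraints: True
  ServicesDefault:
    - OS::TripleO::Services::CACerts
    # ... the full list of Controller services goes here
```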
The following command is an example overcloud deployment command with the environment files specific to the pre-provisioned architecture:
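A sketch of such a command, assembled from the options and environment files described above; `[OTHER OPTIONS]` stands for any additional options your deployment uses, and `ctlplane-assignments.yaml` is the predictable IP environment file from Section 7.4:

```
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy \
  --disable-validations \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-bootstrap-environment-rhel.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-pacemaker-environment.yaml \
  -r /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-server-roles-data.yaml \
  -e ctlplane-assignments.yaml \
  --overcloud-ssh-user stack \
  --overcloud-ssh-key ~/.ssh/id_rsa \
  [OTHER OPTIONS]
```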
The `--overcloud-ssh-user` and `--overcloud-ssh-key` options are used to SSH into each overcloud node during the configuration stage, create an initial `tripleo-admin` user, and inject an SSH key into `/home/tripleo-admin/.ssh/authorized_keys`. To inject the SSH key, specify the credentials for the initial SSH connection with `--overcloud-ssh-user` and `--overcloud-ssh-key` (defaults to `~/.ssh/id_rsa`). To limit exposure of the private key that you specify with the `--overcloud-ssh-key` option, the director never passes this key to any API service, such as Heat or Mistral; only the director's `openstack overcloud deploy` command uses this key to enable access for the `tripleo-admin` user.
7.9. Overcloud deployment output
Once the overcloud creation completes, the director provides a recap of the Ansible plays executed to configure the overcloud:
The director also provides details to access your overcloud.
7.10. Accessing the Overcloud
The director generates a script to configure and help authenticate interactions with your overcloud from the director host. The director saves this file, `overcloudrc`, in your `stack` user's home directory. Run the following command to use this file:
(undercloud) $ source ~/overcloudrc
This loads environment variables necessary to interact with your overcloud from the director host’s CLI. The command prompt changes to indicate this:
(overcloud) $
To return to interacting with the director’s host, run the following command:
(overcloud) $ source ~/stackrc
(undercloud) $
7.11. Scaling Pre-Provisioned Nodes
The process for scaling pre-provisioned nodes is similar to the standard scaling procedures in Chapter 10, Scaling overcloud nodes. However, the process for adding new pre-provisioned nodes differs since pre-provisioned nodes do not use the standard registration and management process from OpenStack Bare Metal (ironic) and OpenStack Compute (nova).
Scaling Up Pre-Provisioned Nodes
When scaling up the overcloud with pre-provisioned nodes, you must configure the orchestration agent on each node to correspond to the director’s node count.
Perform the following actions to scale up overcloud nodes:
- Prepare the new pre-provisioned nodes according to the Requirements.
- Scale up the nodes. See Chapter 10, Scaling overcloud nodes for these instructions.
- After executing the deployment command, wait until the director creates the new node resources and launches the configuration.
Scaling Down Pre-Provisioned Nodes
When scaling down the overcloud with pre-provisioned nodes, follow the scale down instructions as normal as shown in Chapter 10, Scaling overcloud nodes.
In most scaling operations, you must obtain the UUID value of the node you want to remove and pass this value to the `openstack overcloud node delete` command. To obtain this UUID, list the resources for the specific role:

```
$ openstack stack resource list overcloud -c physical_resource_id -c stack_name -n5 --filter type=OS::TripleO::<RoleName>Server
```
Replace `<RoleName>` with the actual name of the role that you want to scale down. For example, for the `ComputeDeployedServer` role, run the following command:

```
$ openstack stack resource list overcloud -c physical_resource_id -c stack_name -n5 --filter type=OS::TripleO::ComputeDeployedServerServer
```
Use the `stack_name` column in the command output to identify the UUID associated with each node. The `stack_name` includes the integer value of the index of the node in the Heat resource group.
The indices 0, 1, or 2 in the `stack_name` column correspond to the node order in the Heat resource group. Pass the corresponding UUID value from the `physical_resource_id` column to the `openstack overcloud node delete` command.
Once you have removed overcloud nodes from the stack, power off these nodes. In a standard deployment, the bare metal services on the director control this function. However, with pre-provisioned nodes, you must either manually shut down these nodes or use the power management control for each physical system. If you do not power off the nodes after removing them from the stack, they might remain operational and reconnect as part of the overcloud environment.
After powering off the removed nodes, reprovision them to a base operating system configuration so that they do not unintentionally rejoin the overcloud in the future.
Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The scale down process only removes the node from the overcloud stack and does not uninstall any packages.
7.12. Removing a Pre-Provisioned Overcloud
To remove an entire overcloud that uses pre-provisioned nodes, follow the same procedure as a standard overcloud. See Section 8.14, “Removing the Overcloud” for more details.
After removing the overcloud, power off all nodes and reprovision them to a base operating system configuration.
Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The removal process only deletes the overcloud stack and does not uninstall any packages.
7.13. Completing the Overcloud Creation
This concludes the creation of the overcloud using pre-provisioned nodes. For post-creation functions, see Chapter 8, Performing Tasks after Overcloud Creation.