Advanced Overcloud Customization
Methods for configuring advanced features using Red Hat OpenStack Platform director
Abstract
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Tell us how we can make it better.
Providing documentation feedback in Jira
Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback.
- Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback.
- Click the following link to open the Create Issue page: Create Issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
- Click Create.
Chapter 1. Introduction to overcloud configuration
Red Hat OpenStack Platform (RHOSP) director provides a set of tools that you can use to provision and create a fully featured OpenStack environment, also known as the overcloud. The Director Installation and Usage Guide covers the preparation and configuration of a basic overcloud. However, a production-level overcloud might require additional configuration:
- Basic network configuration to integrate the overcloud into your existing network infrastructure.
- Network traffic isolation on separate VLANs for certain OpenStack network traffic types.
- SSL configuration to secure communication on public endpoints.
- Storage options such as NFS, iSCSI, Red Hat Ceph Storage, and multiple third-party storage devices.
- Red Hat Content Delivery Network node registration, or registration with your internal Red Hat Satellite 5 or 6 server.
- Various system-level options.
- Various OpenStack service options.
The examples in this guide are optional steps to configure the overcloud. These steps are necessary only if you want to provide the overcloud with additional functionality. Use the steps that apply to the requirements of your environment.
Chapter 2. Understanding heat templates
The custom configurations in this guide use heat templates and environment files to define certain aspects of the overcloud. This chapter provides a basic introduction to heat templates so that you can understand the structure and format of these templates in the context of Red Hat OpenStack Platform director.
2.1. Heat templates
Director uses Heat Orchestration Templates (HOT) as the template format for the overcloud deployment plan. Templates in HOT format are usually expressed in YAML format. The purpose of a template is to define and create a stack, which is a collection of resources that OpenStack Orchestration (heat) creates, and the configuration of the resources. Resources are objects in Red Hat OpenStack Platform (RHOSP) and can include compute resources, network configuration, security groups, scaling rules, and custom resources.
A heat template has three main sections:
- parameters
- These are settings passed to heat, which provide a way to customize a stack, and any default values for parameters without passed values. These settings are defined in the parameters section of a template.
- resources
- Use the resources section to define the resources, such as compute instances, networks, and storage volumes, that you can create when you deploy a stack using this template. Red Hat OpenStack Platform (RHOSP) contains a set of core resources that span across all components. These are the specific objects to create and configure as part of a stack.
- outputs
- Use the outputs section to declare the output parameters that your cloud users can access after the stack is created. Your cloud users can use these parameters to request details about the stack, such as the IP addresses of deployed instances, or URLs of web applications deployed as part of the stack.
Example of a basic heat template:
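A minimal sketch of such a template follows; the parameter names and default values are illustrative:

heat_template_version: 2013-05-23

description: >
  A very basic heat template.

parameters:
  key_name:
    type: string
    default: lars
    description: Name of an existing key pair to use for the instance
  flavor:
    type: string
    description: Instance type for the instance to be created
    default: m1.small
  image:
    type: string
    default: cirros
    description: ID or name of the image to use for the instance

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: My Cirros Instance
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_name }

outputs:
  instance_name:
    description: Get the instance's name
    value: { get_attr: [ my_instance, name ] }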
This template uses the OS::Nova::Server resource type to create an instance called my_instance with a particular flavor, image, and key that the cloud user specifies. The stack can return the value of the instance_name output, which is set to My Cirros Instance.
When heat processes a template, it creates a stack for the template and a set of child stacks for resource templates. This creates a hierarchy of stacks that descend from the main stack that you define with your template. You can view the stack hierarchy with the following command:
$ openstack stack list --nested
2.2. Environment files
An environment file is a special type of template that you can use to customize your heat templates. You can include environment files in the deployment command, in addition to the core heat templates. An environment file contains three main sections:
- resource_registry
- This section defines custom resource names, linked to other heat templates. This provides a method to create custom resources that do not exist within the core resource collection.
- parameters
- These are common settings that you apply to the parameters of the top-level template. For example, if you have a template that deploys nested stacks, such as resource registry mappings, the parameters apply only to the top-level template and not to templates for the nested resources.
- parameter_defaults
- These parameters modify the default values for parameters in all templates. For example, if you have a heat template that deploys nested stacks, such as resource registry mappings, the parameter defaults apply to all templates.
Use parameter_defaults instead of parameters when you create custom environment files for your overcloud, so that your parameters apply to all stack templates for the overcloud.
Example of a basic environment file:
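A minimal sketch of such an environment file follows; the values are illustrative:

resource_registry:
  OS::Nova::Server::MyServer: myserver.yaml

parameter_defaults:
  NetworkName: my_network

parameters:
  MyIP: 192.168.0.1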
This environment file (my_env.yaml) might be included when creating a stack from a certain heat template (my_template.yaml). The my_env.yaml file creates a new resource type called OS::Nova::Server::MyServer. The myserver.yaml file is a heat template file that provides an implementation for this resource type that overrides any built-in ones. You can include the OS::Nova::Server::MyServer resource in your my_template.yaml file.
MyIP applies a parameter only to the main heat template that deploys with this environment file. In this example, MyIP applies only to the parameters in my_template.yaml.
NetworkName applies to both the main heat template, my_template.yaml, and the templates that are associated with the resources that are included in the main template, such as the OS::Nova::Server::MyServer resource and its myserver.yaml template in this example.
For RHOSP to use the heat template file as a custom template resource, the file extension must be either .yaml or .template.
2.3. Core overcloud heat templates
Director contains a core heat template collection and environment file collection for the overcloud. This collection is stored in /usr/share/openstack-tripleo-heat-templates.
The main files and directories in this template collection are:
- overcloud.j2.yaml
- This is the main template file that director uses to create the overcloud environment. This file uses Jinja2 syntax to iterate over certain sections in the template to create custom roles. The Jinja2 formatting is rendered into YAML during the overcloud deployment process.
- overcloud-resource-registry-puppet.j2.yaml
- This is the main environment file that director uses to create the overcloud environment. It provides a set of configurations for Puppet modules stored on the overcloud image. After director writes the overcloud image to each node, heat starts the Puppet configuration for each node by using the resources registered in this environment file. This file uses Jinja2 syntax to iterate over certain sections in the template to create custom roles. The Jinja2 formatting is rendered into YAML during the overcloud deployment process.
- roles_data.yaml
- This file contains the definitions of the roles in an overcloud and maps services to each role.
- network_data.yaml
- This file contains the definitions of the networks in an overcloud and their properties, such as subnets, allocation pools, and VIP status. The default network_data.yaml file contains the default networks: External, Internal Api, Storage, Storage Management, Tenant, and Management. You can create a custom network_data.yaml file and add it to your openstack overcloud deploy command with the -n option.
- plan-environment.yaml
- This file contains the definitions of the metadata for your overcloud plan. This includes the plan name, the main template to use, and the environment files to apply to the overcloud.
- capabilities-map.yaml
- This file contains a mapping of environment files for an overcloud plan.
- deployment
- This directory contains heat templates. The overcloud-resource-registry-puppet.j2.yaml environment file uses the files in this directory to drive the application of the Puppet configuration on each node.
- environments
- This directory contains additional heat environment files that you can use for your overcloud creation. These environment files enable extra functions for your resulting Red Hat OpenStack Platform (RHOSP) environment. For example, the directory contains an environment file to enable Cinder NetApp backend storage (cinder-netapp-config.yaml).
- network
- This directory contains a set of heat templates that you can use to create isolated networks and ports.
- puppet
- This directory contains templates that control Puppet configuration. The overcloud-resource-registry-puppet.j2.yaml environment file uses the files in this directory to drive the application of the Puppet configuration on each node.
- puppet/services
- This directory contains legacy heat templates for all service configuration. The templates in the deployment directory replace most of the templates in the puppet/services directory.
- extraconfig
- This directory contains templates that you can use to enable extra functionality.
- firstboot
- This directory contains example first_boot scripts that director uses when initially creating the nodes.
2.4. Plan environment metadata
You can define metadata for your overcloud plan in a plan environment metadata file. Director applies metadata during the overcloud creation, and when importing and exporting your overcloud plan.
Use plan environment files to define workflows that director can execute with the OpenStack Workflow (mistral) service. A plan environment metadata file includes the following parameters:
- version
- The version of the template.
- name
- The name of the overcloud plan and the container in OpenStack Object Storage (swift) that you want to use to store the plan files.
- template
- The core parent template that you want to use for the overcloud deployment. This is most often overcloud.yaml, which is the rendered version of the overcloud.j2.yaml template.
- environments
- Defines a list of environment files that you want to use. Specify the name and relative locations of each environment file with the path sub-parameter.
- parameter_defaults
- A set of parameters that you want to use in your overcloud. This functions in the same way as the parameter_defaults section in a standard environment file.
- passwords
- A set of parameters that you want to use for overcloud passwords. This functions in the same way as the parameter_defaults section in a standard environment file. Usually, director populates this section automatically with randomly generated passwords.
- workflow_parameters
- Use this parameter to provide a set of parameters to OpenStack Workflow (mistral) namespaces. You can use this to calculate and automatically generate certain overcloud parameters.
The following snippet is an example of the syntax of a plan environment file:
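A sketch of the syntax follows; the plan name, environment file paths, and parameter values are illustrative:

version: 1.0
name: myovercloud
description: 'My custom overcloud plan'
template: overcloud.yaml
environments:
- path: overcloud-resource-registry-puppet.yaml
- path: environments/containers-default-parameters.yaml
- path: user-environment.yaml
parameter_defaults:
  ControllerCount: 1
  ComputeCount: 1
workflow_parameters:
  tripleo.derive_params.v1.derive_parameters:
    num_phy_cores_per_numa_node_for_pmd: 2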
You can include the plan environment metadata file in the openstack overcloud deploy command with the -p option:
(undercloud) $ openstack overcloud deploy --templates \
-p /my-plan-environment.yaml \
[OTHER OPTIONS]
You can also view plan metadata for an existing overcloud plan with the following command:
(undercloud) $ openstack object save overcloud plan-environment.yaml --file -
2.5. Including environment files in overcloud creation
Include environment files in the deployment command with the -e option. You can include as many environment files as necessary. However, the order of the environment files is important because the parameters and resources that you define in subsequent environment files take precedence. For example, suppose you have two environment files that contain a common resource type, OS::TripleO::NodeExtraConfigPost, and a common parameter, TimeZone:
environment-file-1.yaml
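A sketch of the first file, consistent with the resulting configuration described after the deployment command; the TimeZone value is illustrative:

resource_registry:
  OS::TripleO::NodeExtraConfigPost: /home/stack/templates/template-1.yaml

parameter_defaults:
  TimeZone: 'Japan'
  RabbitFDLimit: 65536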
environment-file-2.yaml
resource_registry:
  OS::TripleO::NodeExtraConfigPost: /home/stack/templates/template-2.yaml

parameter_defaults:
  TimeZone: 'Hongkong'
You include both environment files in the deployment command:
$ openstack overcloud deploy --templates -e environment-file-1.yaml -e environment-file-2.yaml
The openstack overcloud deploy command runs through the following process:
- Loads the default configuration from the core heat template collection.
- Applies the configuration from environment-file-1.yaml, which overrides any common settings from the default configuration.
- Applies the configuration from environment-file-2.yaml, which overrides any common settings from the default configuration and environment-file-1.yaml.
This results in the following changes to the default configuration of the overcloud:
- The OS::TripleO::NodeExtraConfigPost resource is set to /home/stack/templates/template-2.yaml, as defined in environment-file-2.yaml.
- The TimeZone parameter is set to Hongkong, as defined in environment-file-2.yaml.
- The RabbitFDLimit parameter is set to 65536, as defined in environment-file-1.yaml. environment-file-2.yaml does not change this value.
You can use this mechanism to define custom configuration for your overcloud without values from multiple environment files conflicting.
2.6. Using customized core heat templates
When creating the overcloud, director uses a core set of heat templates located in /usr/share/openstack-tripleo-heat-templates. If you want to customize this core template collection, use the following Git workflows to manage your custom template collection:
Procedure
Create an initial Git repository that contains the heat template collection:
Copy the template collection to the /home/stack/templates directory:

$ cd ~/templates
$ cp -r /usr/share/openstack-tripleo-heat-templates .

Change to the custom template directory and initialize a Git repository:

$ cd ~/templates/openstack-tripleo-heat-templates
$ git init .

Configure your Git user name and email address:

$ git config --global user.name "<USER_NAME>"
$ git config --global user.email "<EMAIL_ADDRESS>"

Replace <USER_NAME> with the user name that you want to use. Replace <EMAIL_ADDRESS> with your email address.

Stage all templates for the initial commit:

$ git add *

Create an initial commit:

$ git commit -m "Initial creation of custom core heat templates"

This creates an initial master branch that contains the latest core template collection. Use this branch as the basis for your custom branch and merge new template versions to this branch.
Use a custom branch to store your changes to the core template collection. Use the following procedure to create a my-customizations branch and add customizations:

Create the my-customizations branch and switch to it:

$ git checkout -b my-customizations

- Edit the files in the custom branch.

Stage the changes in git:

$ git add [edited files]

Commit the changes to the custom branch:

$ git commit -m "[Commit message for custom changes]"

This adds your changes as commits to the my-customizations branch. When the master branch updates, you can rebase my-customizations off master, which causes git to add these commits on to the updated template collection. This helps track your customizations and replay them on future template updates.
When you update the undercloud, the openstack-tripleo-heat-templates package might also receive updates. When this occurs, you must also update your custom template collection:

Save the openstack-tripleo-heat-templates package version as an environment variable:

$ export PACKAGE=$(rpm -qv openstack-tripleo-heat-templates)

Change to your template collection directory and create a new branch for the updated templates:

$ cd ~/templates/openstack-tripleo-heat-templates
$ git checkout -b $PACKAGE

Remove all files in the branch and replace them with the new versions:

$ git rm -rf *
$ cp -r /usr/share/openstack-tripleo-heat-templates/* .

Add all templates for the commit:

$ git add *

Create a commit for the package update:

$ git commit -m "Updates for $PACKAGE"

Merge the branch into master. If you use a Git management system (such as GitLab), use the management workflow. If you use git locally, merge by switching to the master branch and running the git merge command:

$ git checkout master
$ git merge $PACKAGE
The master branch now contains the latest version of the core template collection. You can now rebase the my-customizations branch from this updated collection.
Update the my-customizations branch:

Change to the my-customizations branch:

$ git checkout my-customizations

Rebase the branch off master:

$ git rebase master

This updates the my-customizations branch and replays the custom commits made to this branch.
Resolve any conflicts that occur during the rebase:
Check which files contain the conflicts:

$ git status

- Resolve the conflicts in the identified template files.

Add the resolved files:

$ git add [resolved files]

Continue the rebase:

$ git rebase --continue
Deploy the custom template collection:
Ensure that you have switched to the my-customizations branch:

$ git checkout my-customizations

Run the openstack overcloud deploy command with the --templates option to specify your local template directory:

$ openstack overcloud deploy --templates /home/stack/templates/openstack-tripleo-heat-templates [OTHER OPTIONS]
Director uses the default template directory (/usr/share/openstack-tripleo-heat-templates) if you specify the --templates option without a directory.
Red Hat recommends using the methods in Chapter 4, Configuration hooks instead of modifying the heat template collection.
2.7. Jinja2 rendering
The core heat templates in /usr/share/openstack-tripleo-heat-templates contain a number of files that have the j2.yaml file extension. These files contain Jinja2 template syntax and director renders these files to their static heat template equivalents that have the .yaml extension. For example, the main overcloud.j2.yaml file renders into overcloud.yaml. Director uses the resulting overcloud.yaml file.
The Jinja2-enabled heat templates use Jinja2 syntax to create parameters and resources for iterative values. For example, the overcloud.j2.yaml file contains the following snippet:
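The relevant snippet looks similar to the following sketch:

parameters:
...
{% for role in roles %}
  ...
  {{role.name}}Count:
    description: Number of {{role.name}} nodes to deploy
    type: number
    default: {{role.CountDefault|default(0)}}
  ...
{% endfor %}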
When director renders the Jinja2 syntax, director iterates over the roles defined in the roles_data.yaml file and populates the {{role.name}}Count parameter with the name of the role. The default roles_data.yaml file contains five roles and results in the following parameters from our example:
- ControllerCount
- ComputeCount
- BlockStorageCount
- ObjectStorageCount
- CephStorageCount
An example rendered version of the parameter looks similar to the following snippet, where the default count is illustrative:
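parameters:
  ...
  ControllerCount:
    description: Number of Controller nodes to deploy
    type: number
    default: 1
  ...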
Director renders Jinja2-enabled templates and environment files only from within the directory of your core heat templates. The following use cases demonstrate the correct method to render the Jinja2 templates.
Use case 1: Default core templates
Template directory: /usr/share/openstack-tripleo-heat-templates/
Environment file: /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.j2.yaml
Director uses the default core template location (--templates) and renders the network-isolation.j2.yaml file into network-isolation.yaml. When you run the openstack overcloud deploy command, use the -e option to include the name of the rendered network-isolation.yaml file.
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  ...
Use case 2: Custom core templates
Template directory: /home/stack/tripleo-heat-templates
Environment file: /home/stack/tripleo-heat-templates/environments/network-isolation.j2.yaml
Director uses a custom core template location (--templates /home/stack/tripleo-heat-templates) and director renders the network-isolation.j2.yaml file within the custom core templates into network-isolation.yaml. When you run the openstack overcloud deploy command, use the -e option to include the name of the rendered network-isolation.yaml file.
$ openstack overcloud deploy --templates /home/stack/tripleo-heat-templates \
  -e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml \
  ...
Use case 3: Incorrect usage
Template directory: /usr/share/openstack-tripleo-heat-templates/
Environment file: /home/stack/tripleo-heat-templates/environments/network-isolation.j2.yaml
Director uses the default core template location (--templates /usr/share/openstack-tripleo-heat-templates). However, the chosen network-isolation.j2.yaml file is not located within that core template collection, so director does not render it into network-isolation.yaml. This causes the deployment to fail.
Processing Jinja2 syntax into static templates
Use the process-templates.py script to render the Jinja2 syntax of the openstack-tripleo-heat-templates into a set of static templates. To render a copy of the openstack-tripleo-heat-templates collection with the process-templates.py script, change to the openstack-tripleo-heat-templates directory:
$ cd /usr/share/openstack-tripleo-heat-templates
Run the process-templates.py script, which is located in the tools directory, along with the -o option to define a custom directory to save the static copy:
$ ./tools/process-templates.py -o ~/openstack-tripleo-heat-templates-rendered
This converts all Jinja2 templates to their rendered YAML versions and saves the results to ~/openstack-tripleo-heat-templates-rendered.
Chapter 3. Heat parameters
Each heat template in the director template collection contains a parameters section. This section contains definitions for all parameters specific to a particular overcloud service. This includes the following:
- overcloud.j2.yaml - Default base parameters
- roles_data.yaml - Default parameters for composable roles
- deployment/*.yaml - Default parameters for specific services
You can modify the values for these parameters using the following method:
- Create an environment file for your custom parameters.
- Include your custom parameters in the parameter_defaults section of the environment file.
- Include the environment file with the openstack overcloud deploy command.
3.1. Example 1: Configuring the time zone
The heat template for setting the time zone (puppet/services/time/timezone.yaml) contains a TimeZone parameter. If you leave the TimeZone parameter blank, the overcloud sets the time to UTC as a default.
To obtain a list of time zones, run the timedatectl list-timezones command. The following example command retrieves the time zones for Asia:
$ sudo timedatectl list-timezones | grep "Asia"
After you identify your timezone, set the TimeZone parameter in an environment file. The following example environment file sets the value of TimeZone to Asia/Tokyo:
parameter_defaults:
  TimeZone: 'Asia/Tokyo'
3.2. Example 2: Configuring RabbitMQ file descriptor limit
For certain configurations, you might need to increase the file descriptor limit for the RabbitMQ server. Use the deployment/rabbitmq/rabbitmq-container-puppet.yaml heat template to set a new limit in the RabbitFDLimit parameter. Add the following entry to an environment file:
parameter_defaults:
  RabbitFDLimit: 65536
3.3. Example 3: Enabling and disabling parameters
You might need to initially set a parameter during a deployment, then disable the parameter for a future deployment operation, such as updates or scaling operations. For example, to include a custom RPM during the overcloud creation, include the following entry in an environment file:
parameter_defaults:
  DeployArtifactURLs: ["http://www.example.com/myfile.rpm"]
To disable this parameter from a future deployment, it is not sufficient to remove the parameter. Instead, you must set the parameter to an empty value:
parameter_defaults:
  DeployArtifactURLs: []
This ensures that the parameter is no longer set for subsequent deployment operations.
3.4. Example 4: Role-based parameters
Use the [ROLE]Parameters parameters, replacing [ROLE] with a composable role, to set parameters for a specific role.
For example, director configures sshd on both Controller and Compute nodes. To set different sshd parameters for Controller and Compute nodes, create an environment file that contains both the ControllerParameters and ComputeParameters parameters, and set the sshd parameters for each specific role:
parameter_defaults:
  ControllerParameters:
    BannerText: "This is a Controller node"
  ComputeParameters:
    BannerText: "This is a Compute node"
3.5. Identifying parameters that you want to modify
Red Hat OpenStack Platform director provides many parameters for configuration. In some cases, you might experience difficulty identifying a certain option that you want to configure, and the corresponding director parameter. If there is an option that you want to configure with director, use the following workflow to identify and map the option to a specific overcloud parameter:
- Identify the option that you want to configure. Make a note of the service that uses the option.
- Check the corresponding Puppet module for this option. The Puppet modules for Red Hat OpenStack Platform are located under /etc/puppet/modules on the director node. Each module corresponds to a particular service. For example, the keystone module corresponds to OpenStack Identity (keystone).
  - If the Puppet module contains a variable that controls the chosen option, move to the next step.
  - If the Puppet module does not contain a variable that controls the chosen option, no hieradata exists for this option. If possible, you can set the option manually after the overcloud completes deployment.
- Check the core heat template collection for the Puppet variable in the form of hieradata. The templates in deployment/* usually correspond to the Puppet modules of the same services. For example, the deployment/keystone/keystone-container-puppet.yaml template provides hieradata to the keystone module.
  - If the heat template sets hieradata for the Puppet variable, the template should also disclose the director-based parameter that you can modify.
  - If the heat template does not set hieradata for the Puppet variable, use the configuration hooks to pass the hieradata using an environment file. See Section 4.5, "Puppet: Customizing hieradata for roles" for more information about customizing hieradata.
Procedure
To change the notification format for OpenStack Identity (keystone), use the workflow and complete the following steps:
- Identify the OpenStack parameter that you want to configure (notification_format).
- Search the keystone Puppet module for the notification_format setting:

$ grep notification_format /etc/puppet/modules/keystone/manifests/*

In this case, the keystone module manages this option using the keystone::notification_format variable.

- Search the keystone service template for this variable:

$ grep "keystone::notification_format" /usr/share/openstack-tripleo-heat-templates/deployment/keystone/keystone-container-puppet.yaml

The output shows that director uses the KeystoneNotificationFormat parameter to set the keystone::notification_format hieradata.
The following table shows the eventual mapping:
| Director parameter | Puppet hieradata | OpenStack Identity (keystone) option |
|---|---|---|
| KeystoneNotificationFormat | keystone::notification_format | notification_format |
You set the KeystoneNotificationFormat parameter in an overcloud environment file, which then sets the notification_format option in the keystone.conf file during the overcloud configuration.
Chapter 4. Configuration hooks
Use configuration hooks to inject your own custom configuration functions into the overcloud deployment process. You can create hooks to inject custom configuration before and after the main overcloud services configuration, and hooks for modifying and including Puppet-based configuration.
4.1. First boot: customizing first boot configuration
Director uses cloud-init to perform configuration on all nodes after the initial creation of the overcloud. You can use the following NodeUserData resource types to call cloud-init:
- OS::TripleO::NodeUserData
- cloud-init configuration to apply to all nodes.
- OS::TripleO::Controller::NodeUserData
- cloud-init configuration to apply to Controller nodes.
- OS::TripleO::Compute::NodeUserData
- cloud-init configuration to apply to Compute nodes.
- OS::TripleO::CephStorage::NodeUserData
- cloud-init configuration to apply to Ceph Storage nodes.
- OS::TripleO::ObjectStorage::NodeUserData
- cloud-init configuration to apply to Object Storage nodes.
- OS::TripleO::BlockStorage::NodeUserData
- cloud-init configuration to apply to Block Storage nodes.
- OS::TripleO::[ROLE]::NodeUserData
- cloud-init configuration to apply to custom nodes. Replace [ROLE] with the composable role name.
In this example, you update the resolv.conf file on all nodes with a custom nameserver IP address:
Procedure
Create a basic heat template ~/templates/nameserver.yaml that runs a script to append the resolv.conf file on each node with a specific nameserver. You can use the OS::Heat::MultipartMime resource type to send the configuration script.
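A sketch of this template, assuming the nameserver IP address 192.168.1.1:

heat_template_version: 2014-10-16

description: >
  Extra hostname configuration

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: nameserver_config}

  nameserver_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        echo "nameserver 192.168.1.1" >> /etc/resolv.conf

outputs:
  OS::stack_id:
    value: {get_resource: userdata}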
~/templates/firstboot.yamlthat registers your heat template as theOS::TripleO::NodeUserDataresource type.resource_registry: OS::TripleO::NodeUserData: /home/stack/templates/nameserver.yaml
resource_registry: OS::TripleO::NodeUserData: /home/stack/templates/nameserver.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow To add the first boot configuration to your overcloud, add the environment file to the stack, along with your other environment files:
openstack overcloud deploy --templates \ ...$ openstack overcloud deploy --templates \ ... -e /home/stack/templates/firstboot.yaml \ ...Copy to Clipboard Copied! Toggle word wrap Toggle overflow This adds the configuration to all nodes when they are first created and boot for the first time. Subsequent inclusion of these templates, such as updating the overcloud stack, does not run these scripts.
You can register the NodeUserData resources to only one heat template per resource. Subsequent registrations override the heat template to use.
4.2. Pre-configuration: customizing specific overcloud roles
The overcloud uses Puppet for the core configuration of OpenStack components. Director provides a set of hooks that you can use to perform custom configuration for specific node roles after the first boot completes and before the core configuration begins. These hooks include:
Previous versions of this document used the OS::TripleO::Tasks::*PreConfig resources to provide pre-configuration hooks on a per-role basis. The heat template collection requires dedicated use of these hooks, so you should not use them for custom purposes. Instead, use the OS::TripleO::*ExtraConfigPre hooks outlined here.
- OS::TripleO::ControllerExtraConfigPre
- Additional configuration applied to Controller nodes before the core Puppet configuration.
- OS::TripleO::ComputeExtraConfigPre
- Additional configuration applied to Compute nodes before the core Puppet configuration.
- OS::TripleO::CephStorageExtraConfigPre
- Additional configuration applied to Ceph Storage nodes before the core Puppet configuration.
- OS::TripleO::ObjectStorageExtraConfigPre
- Additional configuration applied to Object Storage nodes before the core Puppet configuration.
- OS::TripleO::BlockStorageExtraConfigPre
- Additional configuration applied to Block Storage nodes before the core Puppet configuration.
- OS::TripleO::[ROLE]ExtraConfigPre
- Additional configuration applied to custom nodes before the core Puppet configuration. Replace [ROLE] with the composable role name.
In this example, append the resolv.conf file on all nodes of a particular role with a variable nameserver:
Procedure
Create a basic heat template ~/templates/nameserver.yaml that runs a script to write a variable nameserver to the resolv.conf file of a node.
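A sketch of this template; the resource and parameter names match the explanation that follows:

heat_template_version: 2014-10-16

description: >
  Extra hostname configuration

parameters:
  server:
    type: json
  nameserver_ip:
    type: string
  DeployIdentifier:
    type: string

resources:
  CustomExtraConfigPre:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/sh
            echo "nameserver _NAMESERVER_IP_" > /etc/resolv.conf
          params:
            _NAMESERVER_IP_: {get_param: nameserver_ip}

  CustomExtraDeploymentPre:
    type: OS::Heat::SoftwareDeployment
    properties:
      server: {get_param: server}
      config: {get_resource: CustomExtraConfigPre}
      actions: ['CREATE']
      input_values:
        deploy_identifier: {get_param: DeployIdentifier}

outputs:
  deploy_stdout:
    description: Deployment reference, used to trigger pre-deploy on changes
    value: {get_attr: [CustomExtraDeploymentPre, deploy_stdout]}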
In this example, the resources section contains the following parameters:

- CustomExtraConfigPre
- This defines a software configuration. In this example, we define a Bash script and heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter.
- CustomExtraDeploymentPre
- This executes a software configuration, which is the software configuration from the CustomExtraConfigPre resource. Note the following:
  - The config parameter references the CustomExtraConfigPre resource so that heat knows which configuration to apply.
  - The server parameter retrieves a map of the overcloud nodes. This parameter is provided by the parent template and is mandatory in templates for this hook.
  - The actions parameter defines when to apply the configuration. Possible actions include CREATE and UPDATE, which are both set by default. In this case, you only apply the configuration when the overcloud is created.
  - input_values contains a parameter called deploy_identifier, which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update to ensure that the resource reapplies on subsequent overcloud updates.
Create an environment file ~/templates/pre_config.yaml that registers your heat template to the role-based resource type. For example, to apply the configuration only to Controller nodes, use the ControllerExtraConfigPre hook:

resource_registry:
  OS::TripleO::ControllerExtraConfigPre: /home/stack/templates/nameserver.yaml

parameter_defaults:
  nameserver_ip: 192.168.1.1

Add the environment file to the stack, along with your other environment files:

$ openstack overcloud deploy --templates \
    ...
    -e /home/stack/templates/pre_config.yaml \
    ...

This applies the configuration to all Controller nodes before the core configuration begins on either the initial overcloud creation or subsequent updates.
You can register each resource to only one heat template per hook. Subsequent registrations override the heat template to use.
4.3. Pre-configuration: customizing all overcloud roles
The overcloud uses Puppet for the core configuration of OpenStack components. Director provides a hook that you can use to configure all node types after the first boot completes and before the core configuration begins:
- OS::TripleO::NodeExtraConfig
- Additional configuration applied to all node roles before the core Puppet configuration.
In this example, append the resolv.conf file on each node with a variable nameserver:
Procedure
Create a basic heat template ~/templates/nameserver.yaml that runs a script to append the resolv.conf file of each node with a variable nameserver.
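A sketch of this template; the resource and parameter names match the explanation that follows:

heat_template_version: 2014-10-16

description: >
  Extra hostname configuration

parameters:
  server:
    type: json
  nameserver_ip:
    type: string
  DeployIdentifier:
    type: string

resources:
  CustomExtraConfigPre:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/sh
            echo "nameserver _NAMESERVER_IP_" >> /etc/resolv.conf
          params:
            _NAMESERVER_IP_: {get_param: nameserver_ip}

  CustomExtraDeploymentPre:
    type: OS::Heat::SoftwareDeployment
    properties:
      server: {get_param: server}
      config: {get_resource: CustomExtraConfigPre}
      actions: ['CREATE']
      input_values:
        deploy_identifier: {get_param: DeployIdentifier}

outputs:
  deploy_stdout:
    description: Deployment reference, used to trigger pre-deploy on changes
    value: {get_attr: [CustomExtraDeploymentPre, deploy_stdout]}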
In this example, the resources section contains the following parameters:

- CustomExtraConfigPre
- This parameter defines a software configuration. In this example, you define a Bash script and heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter.
- CustomExtraDeploymentPre
- This parameter executes a software configuration, which is the software configuration from the CustomExtraConfigPre resource. Note the following:
  - The config parameter references the CustomExtraConfigPre resource so that heat knows which configuration to apply.
  - The server parameter retrieves a map of the overcloud nodes. This parameter is provided by the parent template and is mandatory in templates for this hook.
  - The actions parameter defines when to apply the configuration. Possible actions include CREATE and UPDATE, which are both set by default. In this case, you only apply the configuration when the overcloud is created.
  - The input_values parameter contains a sub-parameter called deploy_identifier, which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update to ensure that the resource reapplies on subsequent overcloud updates.
Create an environment file ~/templates/pre_config.yaml that registers your heat template as the OS::TripleO::NodeExtraConfig resource type:

resource_registry:
  OS::TripleO::NodeExtraConfig: /home/stack/templates/nameserver.yaml

parameter_defaults:
  nameserver_ip: 192.168.1.1

Add the environment file to the stack, along with your other environment files:

$ openstack overcloud deploy --templates \
    ...
    -e /home/stack/templates/pre_config.yaml \
    ...

This applies the configuration to all nodes before the core configuration begins on either the initial overcloud creation or subsequent updates.
You can register the OS::TripleO::NodeExtraConfig to only one heat template. Subsequent registrations override the heat template to use.
4.4. Post-configuration: customizing all overcloud roles
Previous versions of this document used the OS::TripleO::Tasks::*PostConfig resources to provide post-configuration hooks on a per-role basis. The heat template collection requires dedicated use of these hooks, so you should not use them for custom purposes. Instead, use the OS::TripleO::NodeExtraConfigPost hook outlined here.
A situation might occur where you have completed the creation of your overcloud but you want to add additional configuration to all roles, either on initial creation or on a subsequent update of the overcloud. In this case, use the following post-configuration hook:
- OS::TripleO::NodeExtraConfigPost
- Additional configuration applied to all node roles after the core Puppet configuration.
In this example, append the resolv.conf file on each node with a variable nameserver:
Procedure
Create a basic heat template ~/templates/nameserver.yaml that runs a script to append the resolv.conf file of each node with a variable nameserver.
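A sketch of this template; the resource and parameter names match the explanation that follows:

heat_template_version: 2014-10-16

description: >
  Extra hostname configuration

parameters:
  servers:
    type: json
  nameserver_ip:
    type: string
  DeployIdentifier:
    type: string
  EndpointMap:
    default: {}
    type: json

resources:
  CustomExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/sh
            echo "nameserver _NAMESERVER_IP_" >> /etc/resolv.conf
          params:
            _NAMESERVER_IP_: {get_param: nameserver_ip}

  CustomExtraDeployments:
    type: OS::Heat::SoftwareDeploymentGroup
    properties:
      servers: {get_param: servers}
      config: {get_resource: CustomExtraConfig}
      actions: ['CREATE']
      input_values:
        deploy_identifier: {get_param: DeployIdentifier}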
In this example, the resources section contains the following parameters:

- CustomExtraConfig
- This defines a software configuration. In this example, you define a Bash script and heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter.
- CustomExtraDeployments
- This executes a software configuration, which is the software configuration from the CustomExtraConfig resource. Note the following:
  - The config parameter references the CustomExtraConfig resource so that heat knows which configuration to apply.
  - The servers parameter retrieves a map of the overcloud nodes. This parameter is provided by the parent template and is mandatory in templates for this hook.
  - The actions parameter defines when to apply the configuration. Possible actions include CREATE and UPDATE, which are both set by default. In this case, you only apply the configuration when the overcloud is created.
  - input_values contains a parameter called deploy_identifier, which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update to ensure that the resource reapplies on subsequent overcloud updates.
Create an environment file ~/templates/post_config.yaml that registers your heat template as the OS::TripleO::NodeExtraConfigPost resource type:

resource_registry:
  OS::TripleO::NodeExtraConfigPost: /home/stack/templates/nameserver.yaml

parameter_defaults:
  nameserver_ip: 192.168.1.1

Add the environment file to the stack, along with your other environment files:

$ openstack overcloud deploy --templates \
    ...
    -e /home/stack/templates/post_config.yaml \
    ...

This applies the configuration to all nodes after the core configuration completes on either initial overcloud creation or subsequent updates.
You can register the OS::TripleO::NodeExtraConfigPost to only one heat template. Subsequent registrations override the heat template to use.
4.5. Puppet: Customizing hieradata for roles
The heat template collection contains a set of parameters that you can use to pass extra configuration to certain node types. These parameters save the configuration as hieradata for the Puppet configuration on the node:
- ControllerExtraConfig
- Configuration to add to all Controller nodes.
- ComputeExtraConfig
- Configuration to add to all Compute nodes.
- BlockStorageExtraConfig
- Configuration to add to all Block Storage nodes.
- ObjectStorageExtraConfig
- Configuration to add to all Object Storage nodes.
- CephStorageExtraConfig
- Configuration to add to all Ceph Storage nodes.
- [ROLE]ExtraConfig
- Configuration to add to a composable role. Replace [ROLE] with the composable role name.
- ExtraConfig
- Configuration to add to all nodes.
Procedure
To add extra configuration to the post-deployment configuration process, create an environment file that contains these parameters in the parameter_defaults section. For example, to increase the reserved memory for Compute hosts to 1024 MB and set the VNC keymap to Japanese, use the following entries in the ComputeExtraConfig parameter:

parameter_defaults:
  ComputeExtraConfig:
    nova::compute::reserved_host_memory: 1024
    nova::compute::vnc_keymap: ja
Include this environment file in the openstack overcloud deploy command, along with any other environment files relevant to your deployment.
You can define each parameter only once. Subsequent usage overrides previous values.
4.6. Puppet: Customizing hieradata for individual nodes
You can set Puppet hieradata for individual nodes using the heat template collection:
Procedure
Identify the system UUID from the introspection data for a node:
$ openstack baremetal introspection data save 9dcc87ae-4c6d-4ede-81a5-9b20d7dc4a14 | jq .extra.system.product.uuid

This command returns a system UUID. For example:

"f5055c6c-477f-47fb-afe5-95c6928c407f"

Create an environment file to define node-specific hieradata and register the per_node.yaml template to a pre-configuration hook. Include the system UUID of the node that you want to configure in the NodeDataLookup parameter:

resource_registry:
  OS::TripleO::ComputeExtraConfigPre: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/per_node.yaml

parameter_defaults:
  NodeDataLookup: '{"f5055c6c-477f-47fb-afe5-95c6928c407f": {"nova::compute::vcpu_pin_set": [ "2", "3" ]}}'

Include this environment file in the openstack overcloud deploy command, along with any other environment files relevant to your deployment.
The per_node.yaml template generates a set of hieradata files on nodes that correspond to each system UUID, and these files contain the hieradata that you define. If a UUID is not defined, the resulting hieradata file is empty. In this example, the per_node.yaml template runs on all Compute nodes as defined by the OS::TripleO::ComputeExtraConfigPre hook, but only the Compute node with system UUID f5055c6c-477f-47fb-afe5-95c6928c407f receives hieradata.
You can use this mechanism to tailor each node according to specific requirements.
For more information about NodeDataLookup, see Altering the disk layout in Ceph Storage nodes in the Deploying an overcloud with containerized Red Hat Ceph guide.
4.7. Puppet: Applying custom manifests
In certain circumstances, you might want to install and configure some additional components on your overcloud nodes. You can achieve this with a custom Puppet manifest that applies to nodes after the main configuration completes. As a basic example, you might want to install motd on each node.
Procedure
Create a heat template ~/templates/custom_puppet_config.yaml that launches Puppet configuration.
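A sketch of this template, which applies the manifest with the puppet software configuration group:

heat_template_version: 2014-10-16

parameters:
  servers:
    type: json
  DeployIdentifier:
    type: string
  EndpointMap:
    default: {}
    type: json

resources:
  ExtraPuppetConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: puppet
      options:
        enable_hiera: True
        enable_facter: False
      config:
        get_file: /home/stack/templates/motd.pp

  ExtraPuppetDeployments:
    type: OS::Heat::SoftwareDeploymentGroup
    properties:
      config: {get_resource: ExtraPuppetConfig}
      servers: {get_param: servers}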
This example includes the /home/stack/templates/motd.pp file within the template and passes it to nodes for configuration. The motd.pp file contains the Puppet classes necessary to install and configure motd.

Create an environment file ~/templates/puppet_post_config.yaml that registers your heat template as the OS::TripleO::NodeExtraConfigPost resource type:

resource_registry:
  OS::TripleO::NodeExtraConfigPost: /home/stack/templates/custom_puppet_config.yaml

Include this environment file in the openstack overcloud deploy command, along with any other environment files relevant to your deployment:

$ openstack overcloud deploy --templates \
    ...
    -e /home/stack/templates/puppet_post_config.yaml \
    ...

This applies the configuration from motd.pp to all nodes in the overcloud.
Chapter 5. Ansible-based overcloud registration
Director uses Ansible-based methods to register overcloud nodes to the Red Hat Customer Portal or to a Red Hat Satellite Server.
If you used the rhel-registration method from previous Red Hat OpenStack Platform versions, you must disable it and switch to the Ansible-based method. For more information, see Switching to the rhsm composable service and RHEL-Registration to rhsm mappings.
In addition to the director-based registration method, you can also register manually after deployment. For more information, see Section 5.9, “Running Ansible-based registration manually”.
5.1. Red Hat Subscription Manager (RHSM) composable service
You can use the rhsm composable service to register overcloud nodes through Ansible. Each role in the default roles_data file contains an OS::TripleO::Services::Rhsm resource, which is disabled by default. To enable the service, register the resource to the rhsm composable service file:
resource_registry:
  OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml
The rhsm composable service accepts a RhsmVars parameter, which you can use to define multiple sub-parameters relevant to your registration:
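For example, a minimal sketch; the repository names, credentials, and organization ID are illustrative:

parameter_defaults:
  RhsmVars:
    rhsm_repos:
      - rhel-8-for-x86_64-baseos-eus-rpms
      - rhel-8-for-x86_64-appstream-eus-rpms
      - rhel-8-for-x86_64-highavailability-eus-rpms
    rhsm_username: "myusername"
    rhsm_password: "p@55w0rd!"
    rhsm_org_id: "1234567"
    rhsm_release: 8.4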
You can also use the RhsmVars parameter in combination with role-specific parameters, for example, ControllerParameters, to provide flexibility when enabling specific repositories for different nodes types.
5.2. RhsmVars sub-parameters
Use the following sub-parameters as part of the RhsmVars parameter when you configure the rhsm composable service. For more information about the Ansible parameters that are available, see the role documentation.
| rhsm | Description |
|---|---|
| rhsm_method | Choose the registration method. Either portal, satellite, or disable. |
| rhsm_org_id | The organization that you want to use for registration. To locate this ID, run sudo subscription-manager orgs from the undercloud node. |
| rhsm_pool_ids | The subscription pool ID that you want to use. Use this parameter if you do not want to auto-attach subscriptions. To locate this ID, run sudo subscription-manager list --available --all --matches="*OpenStack*" from the undercloud node, and use the resulting Pool ID value. |
| rhsm_activation_key | The activation key that you want to use for registration. |
| rhsm_autosubscribe | Use this parameter to attach compatible subscriptions to this system automatically. Set the value to true to enable this feature. |
| rhsm_baseurl | The base URL for obtaining content. The default URL is the Red Hat Content Delivery Network. If you use a Satellite server, change this value to the base URL of your Satellite server content repositories. |
| rhsm_server_hostname | The hostname of the subscription management service for registration. The default is the Red Hat Subscription Management hostname. If you use a Satellite server, change this value to your Satellite server hostname. |
| rhsm_repos | A list of repositories that you want to enable. |
| rhsm_username | The username for registration. If possible, use activation keys for registration. |
| rhsm_password | The password for registration. If possible, use activation keys for registration. |
| rhsm_release | Red Hat Enterprise Linux release for pinning the repositories. This is set to 8.4 for Red Hat OpenStack Platform 16.2. |
| rhsm_rhsm_proxy_hostname | The hostname for the HTTP proxy. For example: proxy.example.com. |
| rhsm_rhsm_proxy_port | The port for HTTP proxy communication. For example: 8080. |
| rhsm_rhsm_proxy_user | The username to access the HTTP proxy. |
| rhsm_rhsm_proxy_password | The password to access the HTTP proxy. |
You can use rhsm_activation_key and rhsm_repos together only if rhsm_method is set to portal. If rhsm_method is set to satellite, you can only use either rhsm_activation_key or rhsm_repos.
5.3. Registering the overcloud with the rhsm composable service
Create an environment file that enables and configures the rhsm composable service. Director uses this environment file to register and subscribe your nodes.
Procedure
- Create an environment file named templates/rhsm.yml to store the configuration.
- Include your configuration in the environment file.
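For example, a sketch of this configuration; the repository names, credentials, organization ID, and pool ID are illustrative:

resource_registry:
  OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml

parameter_defaults:
  RhsmVars:
    rhsm_repos:
      - rhel-8-for-x86_64-baseos-eus-rpms
      - rhel-8-for-x86_64-appstream-eus-rpms
      - rhel-8-for-x86_64-highavailability-eus-rpms
    rhsm_username: "myusername"
    rhsm_password: "p@55w0rd!"
    rhsm_org_id: "1234567"
    rhsm_pool_ids: "1a85f9223e3d5e43013e3d6e8ff506fd"
    rhsm_method: "portal"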
- The resource_registry section associates the rhsm composable service with the OS::TripleO::Services::Rhsm resource, which is available on each role.
- The RhsmVars variable passes parameters to Ansible for configuring your Red Hat registration.
- Save the environment file.
5.4. Applying the rhsm composable service to different roles
You can apply the rhsm composable service on a per-role basis. For example, you can apply different sets of configurations to Controller nodes, Compute nodes, and Ceph Storage nodes.
Procedure
- Create an environment file named templates/rhsm.yml to store the configuration.
- Include your configuration in the environment file.
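For example, a sketch of this configuration; the repository names, credentials, organization ID, and pool IDs are illustrative:

resource_registry:
  OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml

parameter_defaults:
  ControllerParameters:
    RhsmVars:
      rhsm_repos:
        - rhel-8-for-x86_64-baseos-eus-rpms
        - rhel-8-for-x86_64-appstream-eus-rpms
        - rhel-8-for-x86_64-highavailability-eus-rpms
      rhsm_username: "myusername"
      rhsm_password: "p@55w0rd!"
      rhsm_org_id: "1234567"
      rhsm_pool_ids: "55d251f1490556f3e75aa37e89e10ce5"
      rhsm_method: "portal"
  ComputeParameters:
    RhsmVars:
      rhsm_repos:
        - rhel-8-for-x86_64-baseos-eus-rpms
        - rhel-8-for-x86_64-appstream-eus-rpms
      rhsm_username: "myusername"
      rhsm_password: "p@55w0rd!"
      rhsm_org_id: "1234567"
      rhsm_pool_ids: "55d251f1490556f3e75aa37e89e10ce5"
      rhsm_method: "portal"
  CephStorageParameters:
    RhsmVars:
      rhsm_repos:
        - rhel-8-for-x86_64-baseos-rpms
        - rhel-8-for-x86_64-appstream-rpms
      rhsm_username: "myusername"
      rhsm_password: "p@55w0rd!"
      rhsm_org_id: "1234567"
      rhsm_pool_ids: "68790a7aa2dc9dc50a9bc39fabc55e0d"
      rhsm_method: "portal"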
- The resource_registry associates the rhsm composable service with the OS::TripleO::Services::Rhsm resource, which is available on each role.
- The ControllerParameters, ComputeParameters, and CephStorageParameters parameters each use a separate RhsmVars parameter to pass subscription details to their respective roles.

Note: Set the RhsmVars parameter within the CephStorageParameters parameter to use a Red Hat Ceph Storage subscription and repositories specific to Ceph Storage. Ensure that the rhsm_repos parameter contains the standard Red Hat Enterprise Linux repositories instead of the Extended Update Support (EUS) repositories that Controller and Compute nodes require.

- Save the environment file.
5.5. Registering the overcloud to Red Hat Satellite Server
Create an environment file that enables and configures the rhsm composable service to register nodes to Red Hat Satellite instead of the Red Hat Customer Portal.
Procedure
- Create an environment file named templates/rhsm.yml to store the configuration. Include your configuration in the environment file. For example:
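A sketch that assumes registration through a Satellite server with an activation key; every value here is a placeholder for your own environment:

resource_registry:
  OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml

parameter_defaults:
  RhsmVars:
    rhsm_activation_key: "myactivationkey"
    rhsm_method: "satellite"
    rhsm_org_id: "ACME"
    rhsm_server_hostname: "satellite.example.com"
    rhsm_baseurl: "https://satellite.example.com/pulp/repos"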
- The resource_registry associates the rhsm composable service with the OS::TripleO::Services::Rhsm resource, which is available on each role.
- The RhsmVars variable passes parameters to Ansible for configuring your Red Hat registration.
- Save the environment file.
5.6. Switching to the rhsm composable service
The previous rhel-registration method runs a bash script to handle the overcloud registration. The scripts and environment files for this method are located in the core heat template collection at /usr/share/openstack-tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration/.
Complete the following steps to switch from the rhel-registration method to the rhsm composable service.
Procedure
- Exclude the rhel-registration environment files from future deployment operations. In most cases, exclude the following files:
  - rhel-registration/environment-rhel-registration.yaml
  - rhel-registration/rhel-registration-resource-registry.yaml
- If you use a custom roles_data file, ensure that each role in your roles_data file contains the OS::TripleO::Services::Rhsm composable service. For example:
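A sketch of the relevant part of a role definition, with the rest of the service list abbreviated:

- name: Controller
  description: |
    Controller role that has all the controller services loaded
  ServicesDefault:
    - OS::TripleO::Services::Rhsm
    # ...the remaining services for this role...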
- Add the environment file for rhsm composable service parameters to future deployment operations.
This method replaces the rhel-registration parameters with the rhsm service parameters and changes the heat resource that enables the service from:
resource_registry:
  OS::TripleO::NodeExtraConfig: rhel-registration.yaml
To:
resource_registry:
  OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml
You can also include the /usr/share/openstack-tripleo-heat-templates/environments/rhsm.yaml environment file with your deployment to enable the service.
5.7. rhel-registration to rhsm mappings
To help transition your details from the rhel-registration method to the rhsm method, use the following table to map your parameters and values.
| rhel-registration | rhsm / RhsmVars |
|---|---|
| rhel_reg_method | rhsm_method |
| rhel_reg_org | rhsm_org_id |
| rhel_reg_pool_id | rhsm_pool_ids |
| rhel_reg_activation_key | rhsm_activation_key |
| rhel_reg_auto_attach | rhsm_autosubscribe |
| rhel_reg_sat_url | rhsm_satellite_url |
| rhel_reg_repos | rhsm_repos |
| rhel_reg_user | rhsm_username |
| rhel_reg_password | rhsm_password |
| rhel_reg_http_proxy_host | rhsm_rhsm_proxy_hostname |
| rhel_reg_http_proxy_port | rhsm_rhsm_proxy_port |
| rhel_reg_http_proxy_username | rhsm_rhsm_proxy_user |
| rhel_reg_http_proxy_password | rhsm_rhsm_proxy_password |
5.8. Deploying the overcloud with the rhsm composable service
Deploy the overcloud with the rhsm composable service so that Ansible controls the registration process for your overcloud nodes.
Procedure
- Include the rhsm.yml environment file with the openstack overcloud deploy command:

$ openstack overcloud deploy \
    <other cli args> \
    -e ~/templates/rhsm.yml

This enables the Ansible configuration of the overcloud and the Ansible-based registration.
- Wait until the overcloud deployment completes.
Check the subscription details on your overcloud nodes. For example, log in to a Controller node and run the following commands:
$ sudo subscription-manager status
$ sudo subscription-manager list --consumed
5.9. Running Ansible-based registration manually
You can perform manual Ansible-based registration on a deployed overcloud with the dynamic inventory script on the director node. Use this script to define node roles as host groups and then run a playbook against them with ansible-playbook. Use the following example playbook to register Controller nodes manually.
Procedure
- Create a playbook that uses the redhat_subscription module to register your nodes. For example, the following playbook applies to Controller nodes:
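A sketch of such a playbook, assuming Customer Portal credentials; the credentials and the repos list are placeholders that you replace with your own values:

---
- name: Register Controller nodes
  hosts: Controller
  become: true
  vars:
    repos:
      - rhel-8-for-x86_64-baseos-eus-rpms
      - rhel-8-for-x86_64-appstream-eus-rpms
      - openstack-16.2-for-rhel-8-x86_64-rpms
  tasks:
    # Task 1: register the node
    - name: Register the node
      redhat_subscription:
        username: myusername
        password: p@55w0rd!
        auto_attach: true
    # Task 2: disable any auto-enabled repositories
    - name: Disable all repositories
      rhsm_repository:
        name: '*'
        state: disabled
    # Task 3: enable only the Controller-specific repositories
    - name: Enable Controller repositories
      rhsm_repository:
        name: "{{ repos }}"
        state: enabled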
This play contains three tasks:
- Register the node.
- Disable any auto-enabled repositories.
- Enable only the repositories relevant to the Controller node. The repositories are listed with the repos variable.
After you deploy the overcloud, run the following command so that Ansible executes the playbook (ansible-osp-registration.yml) against your overcloud:

$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory ansible-osp-registration.yml

This command performs the following actions:
- Runs the dynamic inventory script to get a list of hosts and their groups.
- Applies the playbook tasks to the nodes in the group defined in the hosts parameter of the playbook, which in this case is the Controller group.
Chapter 6. Composable services and custom roles
The overcloud usually consists of nodes in predefined roles such as Controller nodes, Compute nodes, and different storage node types. Each of these default roles contains a set of services defined in the core heat template collection on the director node. However, you can also create custom roles that contain specific sets of services.
You can use this flexibility to create different combinations of services on different roles. This chapter explores the architecture of custom roles, composable services, and methods for using them.
6.1. Supported role architecture
The following architectures are available when you use custom roles and composable services:
- Default architecture: Uses the default roles_data file. All controller services are contained within one Controller role.
- Supported standalone roles: Use the predefined files in /usr/share/openstack-tripleo-heat-templates/roles to generate a custom roles_data file. For more information, see Section 6.4, “Supported custom roles”.
- Custom composable services: Create your own roles and use them to generate a custom roles_data file. Note that only a limited number of composable service combinations have been tested and verified, and Red Hat cannot support all composable service combinations.
6.2. Examining the roles_data file
The roles_data file contains a YAML-formatted list of the roles that director deploys onto nodes. Each role contains definitions of all of the services that comprise the role. Use the following example snippet to understand the roles_data syntax:
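A sketch of the syntax, abbreviated to the keys that matter here:

- name: Controller
  description: |
    Controller role that has all the controller services loaded and handles
    Database, Messaging and Network functions.
  ServicesDefault:
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::CACerts
    # ...
- name: Compute
  description: |
    Basic Compute Node role
  ServicesDefault:
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::CACerts
    # ...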
The core heat template collection contains a default roles_data file located at /usr/share/openstack-tripleo-heat-templates/roles_data.yaml. The default file contains definitions of the following role types:
- Controller
- Compute
- BlockStorage
- ObjectStorage
- CephStorage
The openstack overcloud deploy command includes the default roles_data.yaml file during deployment. However, you can use the -r argument to override this file with a custom roles_data file:
$ openstack overcloud deploy --templates -r ~/templates/roles_data-custom.yaml
6.3. Creating a roles_data file
Although you can create a custom roles_data file manually, you can also generate the file automatically using individual role templates. Director provides several commands to manage role templates and automatically generate a custom roles_data file.
Procedure
List the default role templates:
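The templates are listed with the openstack overcloud roles list command; the output below is a sketch of the first few entries:

$ openstack overcloud roles list
BlockStorage
CephStorage
Compute
ComputeHCI
ComputeOvsDpdk
Controller
...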
View the role definition in YAML format with the openstack overcloud roles show command:

$ openstack overcloud roles show Compute

Generate a custom roles_data file. Use the openstack overcloud roles generate command to join multiple predefined roles into a single file. For example, run the following command to generate a roles_data.yaml file that contains the Controller, Compute, and Networker roles:

$ openstack overcloud roles generate -o ~/roles_data.yaml Controller Compute Networker
Use the -o option to define the name of the output file.

This command creates a custom roles_data file. However, the previous example uses the Controller and Networker roles, which both contain the same networking agents. This means that the networking services scale from the Controller role to the Networker role and the overcloud balances the load for networking services between the Controller and Networker nodes.

To make this Networker role standalone, you can create your own custom Controller role, as well as any other role that you require. This allows you to generate a roles_data file from your own custom roles.

Copy the roles directory from the core heat template collection to the home directory of the stack user:

$ cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.
Add or modify the custom role files in this directory. Use the --roles-path option with any of the role sub-commands to use this directory as the source for your custom roles:

$ openstack overcloud roles generate -o my_roles_data.yaml \
    --roles-path ~/roles \
    Controller Compute Networker

This command generates a single my_roles_data.yaml file from the individual roles in the ~/roles directory.
The default roles collection also contains the ControllerOpenStack role, which does not include services for the Networker, Messaging, and Database roles. You can use the ControllerOpenStack role in combination with the standalone Networker, Messaging, and Database roles.
6.4. Supported custom roles
The following table contains information about the available custom roles. You can find custom role templates in the /usr/share/openstack-tripleo-heat-templates/roles directory.
| Role | Description | File |
|---|---|---|
| BlockStorage | OpenStack Block Storage (cinder) node. | BlockStorage.yaml |
| CephAll | Full standalone Ceph Storage node. Includes OSD, MON, Object Gateway (RGW), Object Operations (MDS), Manager (MGR), and RBD Mirroring. | CephAll.yaml |
| CephFile | Standalone scale-out Ceph Storage file role. Includes OSD and Object Operations (MDS). | CephFile.yaml |
| CephObject | Standalone scale-out Ceph Storage object role. Includes OSD and Object Gateway (RGW). | CephObject.yaml |
| CephStorage | Ceph Storage OSD node role. | CephStorage.yaml |
| ComputeAlt | Alternate Compute node role. | ComputeAlt.yaml |
| ComputeDVR | DVR enabled Compute node role. | ComputeDVR.yaml |
| ComputeHCI | Compute node with hyper-converged infrastructure. Includes Compute and Ceph OSD services. | ComputeHCI.yaml |
| ComputeInstanceHA | Compute Instance HA node role. Use in conjunction with the environments/compute-instanceha.yaml environment file. | ComputeInstanceHA.yaml |
| ComputeLiquidio | Compute node with Cavium Liquidio Smart NIC. | ComputeLiquidio.yaml |
| ComputeOvsDpdkRT | Compute OVS DPDK RealTime role. | ComputeOvsDpdkRT.yaml |
| ComputeOvsDpdk | Compute OVS DPDK role. | ComputeOvsDpdk.yaml |
| ComputePPC64LE | Compute role for ppc64le servers. | ComputePPC64LE.yaml |
| ComputeRealTime | Compute role optimized for real-time behaviour. When using this role, it is mandatory that an overcloud-realtime-compute image is available and the role specific parameters IsolCpusList, NovaComputeCpuDedicatedSet, and NovaComputeCpuSharedSet are set accordingly. | ComputeRealTime.yaml |
| ComputeSriovRT | Compute SR-IOV RealTime role. | ComputeSriovRT.yaml |
| ComputeSriov | Compute SR-IOV role. | ComputeSriov.yaml |
| Compute | Standard Compute node role. | Compute.yaml |
| ControllerAllNovaStandalone | Controller role that does not contain the database, messaging, networking, and OpenStack Compute (nova) control components. Use in combination with the Database, Messaging, Networker, and Novacontrol roles. | ControllerAllNovaStandalone.yaml |
| ControllerNoCeph | Controller role with core Controller services loaded but no Ceph Storage (MON) components. This role handles database, messaging, and network functions but not any Ceph Storage functions. | ControllerNoCeph.yaml |
| ControllerNovaStandalone | Controller role that does not contain the OpenStack Compute (nova) control component. Use in combination with the Novacontrol role. | ControllerNovaStandalone.yaml |
| ControllerOpenstack | Controller role that does not contain the database, messaging, and networking components. Use in combination with the Database, Messaging, and Networker roles. | ControllerOpenstack.yaml |
| ControllerStorageNfs | Controller role with all core services loaded and uses Ceph NFS. This role handles database, messaging, and network functions. | ControllerStorageNfs.yaml |
| Controller | Controller role with all core services loaded. This role handles database, messaging, and network functions. | Controller.yaml |
| ControllerSriov | Same as the normal Controller role but with the OVN Metadata agent deployed. | ControllerSriov.yaml |
| Database | Standalone database role. Database managed as a Galera cluster using Pacemaker. | Database.yaml |
| HciCephAll | Compute node with hyper-converged infrastructure and all Ceph Storage services. Includes OSD, MON, Object Gateway (RGW), Object Operations (MDS), Manager (MGR), and RBD Mirroring. | HciCephAll.yaml |
| HciCephFile | Compute node with hyper-converged infrastructure and Ceph Storage file services. Includes OSD and Object Operations (MDS). | HciCephFile.yaml |
| HciCephMon | Compute node with hyper-converged infrastructure and Ceph Storage block services. Includes OSD, MON, and Manager. | HciCephMon.yaml |
| HciCephObject | Compute node with hyper-converged infrastructure and Ceph Storage object services. Includes OSD and Object Gateway (RGW). | HciCephObject.yaml |
| IronicConductor | Ironic Conductor node role. | IronicConductor.yaml |
| Messaging | Standalone messaging role. RabbitMQ managed with Pacemaker. | Messaging.yaml |
| Networker | Standalone networking role. Runs OpenStack networking (neutron) agents on their own. If your deployment uses the ML2/OVN mechanism driver, see additional steps in Deploying a Custom Role with ML2/OVN in the Networking Guide. | Networker.yaml |
| NetworkerSriov | Same as the normal Networker role but with the OVN Metadata agent deployed. See additional steps in Deploying a Custom Role with ML2/OVN in the Networking Guide. | NetworkerSriov.yaml |
| Novacontrol | Standalone nova-control role to run OpenStack Compute (nova) control services on their own. | Novacontrol.yaml |
| ObjectStorage | Swift Object Storage node role. | ObjectStorage.yaml |
| Telemetry | Telemetry role with all the metrics and alarming services. | Telemetry.yaml |
6.5. Examining role parameters
Each role contains the following parameters:
- name: (Mandatory) The name of the role, which is a plain text name with no spaces or special characters. Check that the chosen name does not cause conflicts with other resources. For example, use Networker as a name instead of Network.
- description: (Optional) A plain text description for the role.
- tags: (Optional) A YAML list of tags that define role properties. Use this parameter to define the primary role with both the controller and primary tags together:
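A sketch of a primary Controller role definition with both tags:

- name: Controller
  tags:
    - primary
    - controller
  ...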
If you do not tag the primary role, the first role that you define becomes the primary role. Ensure that this role is the Controller role.
- networks
A YAML list or dictionary of networks that you want to configure on the role. If you use a YAML list, list each composable network:
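A sketch of the list form:

  networks:
    - External
    - InternalApi
    - Storage
    - StorageMgmt
    - Tenant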
If you use a dictionary, map each network to a specific subnet in your composable networks:
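A sketch of the dictionary form, assuming subnet names defined in your composable networks file:

  networks:
    External:
      subnet: external_subnet
    InternalApi:
      subnet: internal_api_subnet
    Storage:
      subnet: storage_subnet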
Default networks include External, InternalApi, Storage, StorageMgmt, Tenant, and Management.
- CountDefault
- (Optional) Defines the default number of nodes that you want to deploy for this role.
- HostnameFormatDefault
(Optional) Defines the default hostname format for the role. The default naming convention uses the following format:
[STACK NAME]-[ROLE NAME]-[NODE ID]

For example, the default Controller nodes are named:
overcloud-controller-0
overcloud-controller-1
overcloud-controller-2
...

- disable_constraints
- (Optional) Defines whether to disable OpenStack Compute (nova) and OpenStack Image Storage (glance) constraints when deploying with director. Use this parameter when you deploy an overcloud with pre-provisioned nodes. For more information, see Configuring a Basic Overcloud with Pre-Provisioned Nodes in the Director Installation and Usage guide.
- update_serial
(Optional) Defines how many nodes to update simultaneously during the OpenStack update operations. In the default roles_data.yaml file:
  - The default is 1 for Controller, Object Storage, and Ceph Storage nodes.
  - The default is 25 for Compute and Block Storage nodes.
If you omit this parameter from a custom role, the default is 1.
- ServicesDefault
- (Optional) Defines the default list of services to include on the node. For more information, see Section 6.8, “Examining composable service architecture”.
You can use these parameters to create new roles and also define which services to include in your roles.
The openstack overcloud deploy command integrates the parameters from the roles_data file into some of the Jinja2-based templates. For example, at certain points, the overcloud.j2.yaml heat template iterates over the list of roles from roles_data.yaml and creates parameters and resources specific to each respective role.
For example, the following snippet contains the resource definition for each role in the overcloud.j2.yaml heat template:
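A simplified sketch of that Jinja2 loop; the full template contains more properties than shown here:

{% for role in roles %}
  # Resources generated for the {{role.name}} role
  {{role.name}}:
    type: OS::Heat::ResourceGroup
    depends_on: Networks
    properties:
      count: {get_param: {{role.name}}Count}
      ...
{% endfor %}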
This snippet shows how the Jinja2-based template incorporates the {{role.name}} variable to define the name of each role as an OS::Heat::ResourceGroup resource. This in turn uses each name parameter from the roles_data file to name each respective OS::Heat::ResourceGroup resource.
6.6. Creating a new role
You can use the composable service architecture to create new roles according to the requirements of your deployment. For example, you might want to create a new Horizon role to host only the OpenStack Dashboard (horizon).
Role names must start with a letter, end with a letter or digit, and contain only letters, digits, and hyphens. Underscores must never be used in role names.
Procedure
- Create a custom copy of the default roles directory:

$ cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.

- Create a new file called ~/roles/Horizon.yaml and create a new Horizon role that contains base and core OpenStack Dashboard services:
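A sketch of the role definition; the exact set of base services can vary between versions, so treat this list as illustrative:

- name: Horizon
  CountDefault: 1
  HostnameFormatDefault: '%stackname%-horizon-%index%'
  ServicesDefault:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::Podman
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::Apache
    - OS::TripleO::Services::Horizon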
- Set the name parameter to the name of the custom role. Custom role names have a maximum length of 47 characters.
- Set the CountDefault parameter to 1 so that a default overcloud always includes the Horizon node.
- Optional: If you want to scale the services in an existing overcloud, retain the existing services on the Controller role. If you want to create a new overcloud and you want the OpenStack Dashboard to remain on the standalone role, remove the OpenStack Dashboard components from the Controller role definition.
- Generate the new roles_data-horizon.yaml file using the ~/roles directory as the source:

$ openstack overcloud roles generate -o roles_data-horizon.yaml \
    --roles-path ~/roles \
    Controller Compute Horizon

- Define a new flavor for this role so that you can tag specific nodes. For this example, create a horizon flavor:

(undercloud)$ openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 horizon

Note: These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size.
Tag each bare metal node that you want to designate for the Dashboard service (horizon) with a custom resource class:
(undercloud)$ openstack baremetal node set --resource-class baremetal.HORIZON <NODE>

Replace <NODE> with the ID of the bare metal node.

- Associate the horizon flavor with the custom resource class:

(undercloud)$ openstack flavor set --property resources:CUSTOM_BAREMETAL_HORIZON=1 horizon

To determine the name of a custom resource class that corresponds to a resource class of a bare metal node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix the value with CUSTOM_.

Note: A flavor can request only one instance of a bare metal resource class.
Set the following flavor properties to prevent the Compute scheduler from using the bare metal flavor properties for scheduling instances:
(undercloud)$ openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 horizon
Define the Horizon node count and flavor using the following environment file snippet:
parameter_defaults:
  OvercloudHorizonFlavor: horizon
  HorizonCount: 1

- Include the new roles_data-horizon.yaml file and environment file in the openstack overcloud deploy command, along with any other environment files relevant to your deployment:

$ openstack overcloud deploy --templates -r ~/templates/roles_data-horizon.yaml -e ~/templates/node-count-flavor.yaml

This configuration creates a three-node overcloud that consists of one Controller node, one Compute node, and one Horizon node. To view the list of nodes in your overcloud, run the following command:
$ openstack server list
6.7. Guidelines and limitations
Note the following guidelines and limitations for the composable role architecture.
For services not managed by Pacemaker:
- You can assign services to standalone custom roles.
- You can create additional custom roles after the initial deployment and deploy them to scale existing services.
For services managed by Pacemaker:
- You can assign Pacemaker-managed services to standalone custom roles.
- Pacemaker has a 16-node limit. If you assign the Pacemaker service (OS::TripleO::Services::Pacemaker) to 16 nodes, subsequent nodes must use the Pacemaker Remote service (OS::TripleO::Services::PacemakerRemote) instead. You cannot have the Pacemaker service and Pacemaker Remote service on the same role.
- Do not include the Pacemaker service (OS::TripleO::Services::Pacemaker) on roles that do not contain Pacemaker-managed services.
- You cannot scale up or scale down a custom role that contains the OS::TripleO::Services::Pacemaker or OS::TripleO::Services::PacemakerRemote services.
General limitations:
- You cannot change custom roles and composable services during a major version upgrade.
- You cannot modify the list of services for any role after deploying an overcloud. Modifying the service lists after overcloud deployment can cause deployment errors and leave orphaned services on nodes.
6.8. Examining composable service architecture
The core heat template collection contains two sets of composable service templates:
- deployment contains the templates for key OpenStack services.
- puppet/services contains legacy templates for configuring composable services. In some cases, the composable services use templates from this directory for compatibility. In most cases, the composable services use the templates in the deployment directory.
Each template contains a description that identifies its purpose. For example, the deployment/time/ntp-baremetal-puppet.yaml service template contains the following description:
description: >
NTP service deployment using puppet, this YAML file
creates the interface between the HOT template
and the puppet manifest that actually installs
and configure NTP.
These service templates are registered as resources specific to a Red Hat OpenStack Platform deployment. This means that you can call each resource using a unique heat resource namespace defined in the overcloud-resource-registry-puppet.j2.yaml file. All services use the OS::TripleO::Services namespace for their resource type.
Some resources use the base composable service templates directly:
resource_registry:
...
OS::TripleO::Services::Ntp: deployment/time/ntp-baremetal-puppet.yaml
...
However, core services require containers and use the containerized service templates. For example, the keystone containerized service uses the following resource:
resource_registry:
...
OS::TripleO::Services::Keystone: deployment/keystone/keystone-container-puppet.yaml
...
These containerized templates usually reference other templates to include dependencies. For example, the deployment/keystone/keystone-container-puppet.yaml template stores the output of the base template in the ContainersCommon resource:
resources:
ContainersCommon:
type: ../containers-common.yaml
The containerized template can then incorporate functions and data from the containers-common.yaml template.
The overcloud.j2.yaml heat template includes a section of Jinja2-based code to define a service list for each custom role in the roles_data.yaml file:
For the default roles, this creates the following service list parameters: ControllerServices, ComputeServices, BlockStorageServices, ObjectStorageServices, and CephStorageServices.
You define the default services for each custom role in the roles_data.yaml file. For example, the default Controller role contains the following content:
These services are then defined as the default list for the ControllerServices parameter.
You can also use an environment file to override the default list for the service parameters. For example, you can define ControllerServices as a parameter_default in an environment file to override the services list from the roles_data.yaml file.
6.9. Adding and removing services from roles
The basic method of adding or removing services involves creating a copy of the default service list for a node role and then adding or removing services. For example, you might want to remove OpenStack Orchestration (heat) from the Controller nodes.
Procedure
- Create a custom copy of the default roles directory:

$ cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.

- Edit the ~/roles/Controller.yaml file and modify the service list for the ServicesDefault parameter. Scroll to the OpenStack Orchestration services and remove them:
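A sketch of the relevant part of the ServicesDefault list, with the OpenStack Orchestration entries marked for removal; the surrounding service names are illustrative:

    ServicesDefault:
      # ...
      - OS::TripleO::Services::GlanceApi
      - OS::TripleO::Services::HAproxy
      # Remove the following OpenStack Orchestration (heat) services:
      # - OS::TripleO::Services::HeatApi
      # - OS::TripleO::Services::HeatApiCfn
      # - OS::TripleO::Services::HeatApiCloudwatch
      # - OS::TripleO::Services::HeatEngine
      - OS::TripleO::Services::Horizon
      # ...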
- Generate the new roles_data file:

$ openstack overcloud roles generate -o roles_data-no_heat.yaml \
    --roles-path ~/roles \
    Controller Compute Networker

- Include this new roles_data file when you run the openstack overcloud deploy command:

$ openstack overcloud deploy --templates -r ~/templates/roles_data-no_heat.yaml

This command deploys an overcloud without OpenStack Orchestration services installed on the Controller nodes.
You can also disable services in the roles_data file using a custom environment file. Redirect the services that you want to disable to the OS::Heat::None resource. For example:
resource_registry:
OS::TripleO::Services::HeatApi: OS::Heat::None
OS::TripleO::Services::HeatApiCfn: OS::Heat::None
OS::TripleO::Services::HeatApiCloudwatch: OS::Heat::None
OS::TripleO::Services::HeatEngine: OS::Heat::None
6.10. Enabling disabled services
Some services are disabled by default. These services are registered as null operations (OS::Heat::None) in the overcloud-resource-registry-puppet.j2.yaml file. For example, the Block Storage backup service (cinder-backup) is disabled:
  OS::TripleO::Services::CinderBackup: OS::Heat::None
To enable this service, include an environment file that links the resource to its respective heat templates in the puppet/services directory. Some services have predefined environment files in the environments directory. For example, the Block Storage backup service uses the environments/cinder-backup.yaml file, which contains the following entry:
Procedure
- Add an entry in an environment file that links the CinderBackup service to the heat template that contains the cinder-backup configuration:

resource_registry:
  OS::TripleO::Services::CinderBackup: ../podman/services/pacemaker/cinder-backup.yaml
  ...

This entry overrides the default null operation resource and enables the service.
- Include this environment file when you run the openstack overcloud deploy command:

$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml
6.11. Creating a generic node with no services
You can create generic Red Hat Enterprise Linux 8.4 nodes without any OpenStack services configured. This is useful when you need to host software outside of the core Red Hat OpenStack Platform (RHOSP) environment. For example, RHOSP provides integration with monitoring tools such as Kibana and Sensu. For more information, see the Monitoring Tools Configuration Guide. While Red Hat does not provide support for the monitoring tools themselves, director can create a generic Red Hat Enterprise Linux 8.4 node to host these tools.
The generic node still uses the base overcloud-full image rather than a base Red Hat Enterprise Linux 8 image. This means the node has some Red Hat OpenStack Platform software installed but not enabled or configured.
Procedure
- Create a generic role in your custom roles_data.yaml file that does not contain a ServicesDefault list:
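A sketch of such a roles_data.yaml file, with the Controller and Compute definitions abbreviated:

- name: Generic
- name: Controller
  description: |
    Controller role that has all the controller services loaded
  ServicesDefault:
    # ...
- name: Compute
  description: |
    Basic Compute Node role
  ServicesDefault:
    # ...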
Ensure that you retain the existing Controller and Compute roles.

- Create an environment file generic-node-params.yaml to specify how many generic Red Hat Enterprise Linux 8 nodes you require and the flavor when selecting nodes to provision:

parameter_defaults:
  OvercloudGenericFlavor: baremetal
  GenericCount: 1

- Include both the roles file and the environment file when you run the openstack overcloud deploy command:

$ openstack overcloud deploy --templates \
    -r ~/templates/roles_data_with_generic.yaml \
    -e ~/templates/generic-node-params.yaml

This configuration deploys a three-node environment with one Controller node, one Compute node, and one generic Red Hat Enterprise Linux 8 node.
Chapter 7. Containerized services
Director installs the core OpenStack Platform services as containers on the overcloud. This section provides some background information on how containerized services work.
7.1. Containerized service architecture
Director installs the core OpenStack Platform services as containers on the overcloud. The templates for the containerized services are located in the /usr/share/openstack-tripleo-heat-templates/deployment/ directory.
You must enable the OS::TripleO::Services::Podman service in the role for all nodes that use containerized services. When you create a roles_data.yaml file for your custom roles configuration, include the OS::TripleO::Services::Podman service along with the base composable services. For example, the IronicConductor role uses the following role definition:
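A sketch of that role definition, abbreviated; note the OS::TripleO::Services::Podman entry alongside the Ironic services, and treat the rest of the list as illustrative:

- name: IronicConductor
  description: |
    Ironic Conductor node role
  ServicesDefault:
    - OS::TripleO::Services::IronicConductor
    - OS::TripleO::Services::IronicPxe
    - OS::TripleO::Services::Podman
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::TripleoPackages
    # ...other base services...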
7.2. Containerized service parameters
Each containerized service template contains an outputs section that defines a data set passed to the OpenStack Orchestration (heat) service. In addition to the standard composable service parameters (see Section 6.5, “Examining role parameters”), the template contains a set of parameters specific to the container configuration.
- puppet_config: Data to pass to Puppet when configuring the service. In the initial overcloud deployment steps, director creates a set of containers used to configure the service before the actual containerized service runs. This parameter includes the following sub-parameters:
  - config_volume - The mounted volume that stores the configuration.
  - puppet_tags - Tags to pass to Puppet during configuration. OpenStack uses these tags to restrict the Puppet run to the configuration resources of a particular service. For example, the OpenStack Identity (keystone) containerized service uses the keystone_config tag so that the Puppet run applies only the keystone_config Puppet resources on the configuration container.
  - step_config - The configuration data passed to Puppet. This is usually inherited from the referenced composable service.
  - config_image - The container image used to configure the service.
- kolla_config: A set of container-specific data that defines configuration file locations, directory permissions, and the command to run on the container to launch the service.
- docker_config: Tasks to run on the configuration container for the service. All tasks are grouped into the following steps to help director perform a staged deployment:
- Step 1 - Load balancer configuration
- Step 2 - Core services (Database, Redis)
- Step 3 - Initial configuration of OpenStack Platform service
- Step 4 - General OpenStack Platform services configuration
- Step 5 - Service activation
- host_prep_tasks: Preparation tasks for the bare metal node to accommodate the containerized service.
7.3. Preparing container images
The overcloud installation requires an environment file to determine where to obtain container images and how to store them. Generate and customize this environment file that you can use to prepare your container images.
If you need to configure specific container image versions for your overcloud, you must pin the images to a specific version. For more information, see Pinning container images for the overcloud.
Procedure
- Log in to your undercloud host as the stack user.
- Generate the default container image preparation file:

$ sudo openstack tripleo container image prepare default \
    --local-push-destination \
    --output-env-file containers-prepare-parameter.yaml

This command includes the following additional options:
- --local-push-destination sets the registry on the undercloud as the location for container images. This means that director pulls the necessary images from the Red Hat Container Catalog and pushes them to the registry on the undercloud. Director uses this registry as the container image source. To pull directly from the Red Hat Container Catalog, omit this option.
- --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images. In this case, the name of the file is containers-prepare-parameter.yaml.

Note: You can use the same containers-prepare-parameter.yaml file to define a container image source for both the undercloud and the overcloud.

- Modify the containers-prepare-parameter.yaml file to suit your requirements.
7.4. Container image preparation parameters
The default file for preparing your containers (containers-prepare-parameter.yaml) contains the ContainerImagePrepare heat parameter. This parameter defines a list of strategies for preparing a set of images:
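The generated default is a single-strategy entry similar to the following sketch; treat the exact values as version-dependent:

parameter_defaults:
  ContainerImagePrepare:
    - push_destination: true
      set:
        name_prefix: openstack-
        name_suffix: ''
        namespace: registry.redhat.io/rhosp-rhel8
        neutron_driver: ovn
        tag: '16.2'
      tag_from_label: '{version}-{release}'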
Each strategy accepts a set of sub-parameters that defines which images to use and what to do with the images. The following table contains information about the sub-parameters that you can use with each ContainerImagePrepare strategy:
| Parameter | Description |
|---|---|
| excludes | List of regular expressions to exclude image names from a strategy. |
| includes | List of regular expressions to include in a strategy. At least one image name must match an existing image. All excludes are ignored if includes is specified. |
| modify_append_tag | String to append to the tag for the destination image. For example, if you pull an image with the tag 16.2.3-5.161 and set modify_append_tag to -hotfix, the director tags the final image as 16.2.3-5.161-hotfix. |
| modify_only_with_labels | A dictionary of image labels that filter the images that you want to modify. If an image matches the labels defined, the director includes the image in the modification process. |
| modify_role | String of ansible role names to run during upload but before pushing the image to the destination registry. |
| modify_vars | Dictionary of variables to pass to modify_role. |
| push_destination | Defines the namespace of the registry that you want to push images to during the upload process. If you set this parameter to true, the push_destination is set to the undercloud registry hostname, which is the recommended method. If the push_destination parameter is set to false or is not defined and the remote registry requires authentication, set the ContainerImageRegistryLogin parameter to true and provide credentials with the ContainerImageRegistryCredentials parameter. |
| pull_source | The source registry from where to pull the original container images. |
| set | A dictionary of key: value definitions that define where to obtain the initial images. |
| tag_from_label | Use the value of specified container image metadata labels to create a tag for every image and pull that tagged image. For example, if you set tag_from_label: {version}-{release}, director uses the version and release labels to construct a new tag. Director uses this parameter only if you have not defined tag in the set dictionary. |
When you push images to the undercloud, use push_destination: true instead of push_destination: UNDERCLOUD_IP:PORT. The push_destination: true method provides a level of consistency across both IPv4 and IPv6 addresses.
The set parameter accepts a set of key: value definitions:
| Key | Description |
|---|---|
| ceph_image | The name of the Ceph Storage container image. |
| ceph_namespace | The namespace of the Ceph Storage container image. |
| ceph_tag | The tag of the Ceph Storage container image. |
| ceph_alertmanager_image, ceph_alertmanager_namespace, ceph_alertmanager_tag | The name, namespace, and tag of the Ceph Storage Alert Manager container image. |
| ceph_grafana_image, ceph_grafana_namespace, ceph_grafana_tag | The name, namespace, and tag of the Ceph Storage Grafana container image. |
| ceph_node_exporter_image, ceph_node_exporter_namespace, ceph_node_exporter_tag | The name, namespace, and tag of the Ceph Storage Node Exporter container image. |
| ceph_prometheus_image, ceph_prometheus_namespace, ceph_prometheus_tag | The name, namespace, and tag of the Ceph Storage Prometheus container image. |
| name_prefix | A prefix for each OpenStack service image. |
| name_suffix | A suffix for each OpenStack service image. |
| namespace | The namespace for each OpenStack service image. |
| neutron_driver | The driver to use to determine which OpenStack Networking (neutron) container to use. Use a null value to set to the standard neutron-server container. Set to ovn to use OVN-based containers. |
| tag | Sets a specific tag for all images from the source. If not defined, director uses the Red Hat OpenStack Platform version number as the default value. This parameter takes precedence over the tag_from_label value. |
The container images use multi-stream tags based on the Red Hat OpenStack Platform version. This means that there is no longer a latest tag.
7.5. Guidelines for container image tagging
The Red Hat Container Registry uses a specific version format to tag all Red Hat OpenStack Platform container images. This format follows the label metadata for each container, which is version-release.
- version
- Corresponds to a major and minor version of Red Hat OpenStack Platform. These versions act as streams that contain one or more releases.
- release
- Corresponds to a release of a specific container image version within a version stream.
For example, if the latest version of Red Hat OpenStack Platform is 16.2.3 and the release for the container image is 5.161, then the resulting tag for the container image is 16.2.3-5.161.
The Red Hat Container Registry also uses a set of major and minor version tags that link to the latest release for that container image version. For example, both 16.2 and 16.2.3 link to the latest release in the 16.2.3 container stream. If a new minor release of 16.2 occurs, the 16.2 tag links to the latest release for the new minor release stream while the 16.2.3 tag continues to link to the latest release within the 16.2.3 stream.
The ContainerImagePrepare parameter contains two sub-parameters that you can use to determine which container image to download. These sub-parameters are the tag parameter within the set dictionary, and the tag_from_label parameter. Use the following guidelines to determine whether to use tag or tag_from_label.
- The default value for tag is the major version of your OpenStack Platform version. For this version, it is 16.2. This always corresponds to the latest minor version and release.
- To change to a specific minor version for OpenStack Platform container images, set the tag to a minor version. For example, to change to 16.2.2, set tag to 16.2.2.
- When you set tag, director always downloads the latest container image release for the version set in tag during installation and updates.
- If you do not set tag, director uses the value of tag_from_label in conjunction with the latest major version.

The tag_from_label parameter generates the tag from the label metadata of the latest container image release that it inspects from the Red Hat Container Registry. For example, the labels for a certain container might use the following version and release metadata:

"Labels": {
  "release": "5.161",
  "version": "16.2.3",
  ...
}

- The default value for tag_from_label is {version}-{release}, which corresponds to the version and release metadata labels for each container image. For example, if a container image has 16.2.3 set for version and 5.161 set for release, the resulting tag for the container image is 16.2.3-5.161.
- The tag parameter always takes precedence over the tag_from_label parameter. To use tag_from_label, omit the tag parameter from your container preparation configuration.
- A key difference between tag and tag_from_label is that director uses tag to pull an image based only on major or minor version tags, which the Red Hat Container Registry links to the latest image release within a version stream, while director uses tag_from_label to perform a metadata inspection of each container image so that director generates a tag and pulls the corresponding image.
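A sketch contrasting the two approaches within one strategy; use only one of them at a time:

parameter_defaults:
  ContainerImagePrepare:
    - push_destination: true
      set:
        namespace: registry.redhat.io/rhosp-rhel8
        tag: '16.2.2'            # pin to a minor version stream
      # To use label inspection instead, omit 'tag' above and keep:
      # tag_from_label: '{version}-{release}'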
7.6. Obtaining container images from private registries
The registry.redhat.io registry requires authentication to access and pull images. To authenticate with registry.redhat.io and other private registries, include the ContainerImageRegistryCredentials and ContainerImageRegistryLogin parameters in your containers-prepare-parameter.yaml file.
ContainerImageRegistryCredentials
Some container image registries require authentication to access images. In this situation, use the ContainerImageRegistryCredentials parameter in your containers-prepare-parameter.yaml environment file. The ContainerImageRegistryCredentials parameter uses a set of keys based on the private registry URL. Each private registry URL uses its own key and value pair to define the username (key) and password (value). This provides a method to specify credentials for multiple private registries.
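A sketch using placeholder credentials; the username is the key and the password is the value:

parameter_defaults:
  ContainerImagePrepare:
    - push_destination: true
      set:
        namespace: registry.redhat.io/rhosp-rhel8
        ...
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      my_username: my_password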
In the example, replace my_username and my_password with your authentication credentials. Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content.
To specify authentication details for multiple registries, set multiple key-pair values for each registry in ContainerImageRegistryCredentials:
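A sketch with placeholder values for three registries:

parameter_defaults:
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      myuser: 'p@55w0rd!'
    registry.internalsite.com:
      myuser2: '0th3rp@55w0rd!'
    '192.0.2.1:8787':
      myuser3: '@n0th3rp@55w0rd!'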
The default ContainerImagePrepare parameter pulls container images from registry.redhat.io, which requires authentication.
For more information, see Red Hat Container Registry Authentication.
ContainerImageRegistryLogin
Use the ContainerImageRegistryLogin parameter to control whether an overcloud node system needs to log in to the remote registry to fetch the container images. This situation occurs when you want the overcloud nodes to pull images directly, rather than use the undercloud to host the images.
You must set ContainerImageRegistryLogin to true if push_destination is set to false or not used for a given strategy.
However, if the overcloud nodes do not have network connectivity to the registry hosts defined in ContainerImageRegistryCredentials and you set ContainerImageRegistryLogin to true, the deployment might fail when trying to perform a login. If the overcloud nodes do not have network connectivity to the registry hosts defined in the ContainerImageRegistryCredentials, set push_destination to true and ContainerImageRegistryLogin to false so that the overcloud nodes pull images from the undercloud.
7.7. Layering image preparation entries
The value of the ContainerImagePrepare parameter is a YAML list. This means that you can specify multiple entries.
The following example demonstrates two entries where director uses the latest version of all images except for the nova-api image, which uses the version tagged with 16.2.1-hotfix:
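A sketch of those two entries; the registry namespace is illustrative:

parameter_defaults:
  ContainerImagePrepare:
    - tag_from_label: '{version}-{release}'
      push_destination: true
      excludes:
        - nova-api
      set:
        namespace: registry.redhat.io/rhosp-rhel8
        name_prefix: openstack-
        name_suffix: ''
    - push_destination: true
      includes:
        - nova-api
      set:
        namespace: registry.redhat.io/rhosp-rhel8
        tag: 16.2.1-hotfix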
The includes and excludes parameters use regular expressions to control image filtering for each entry. The images that match the includes strategy take precedence over excludes matches. The image name must match the includes or excludes regular expression value to be considered a match.
Use a similar technique if your Block Storage (cinder) driver requires a vendor-supplied cinder-volume image, known as a plugin. If your Block Storage driver requires a plugin, see Section 7.12, “Deploying a vendor plugin”.
7.8. Modifying images during preparation
It is possible to modify images during image preparation, and then immediately deploy the overcloud with modified images.
Red Hat OpenStack Platform (RHOSP) director supports modifying images during preparation for RHOSP containers, not for Ceph containers.
Scenarios for modifying images include:
- As part of a continuous integration pipeline where images are modified with the changes being tested before deployment.
- As part of a development workflow where local changes must be deployed for testing and development.
- When changes must be deployed but are not available through an image build pipeline. For example, adding proprietary add-ons or emergency fixes.
To modify an image during preparation, invoke an Ansible role on each image that you want to modify. The role takes a source image, makes the requested changes, and tags the result. The prepare command can push the image to the destination registry and set the heat parameters to refer to the modified image.
The Ansible role tripleo-modify-image conforms with the required role interface and provides the behaviour necessary for the modify use cases. Control the modification with the modify-specific keys in the ContainerImagePrepare parameter:
- modify_role specifies the Ansible role to invoke for each image to modify.
- modify_append_tag appends a string to the end of the source image tag. This makes it obvious that the resulting image has been modified. Use this parameter to skip modification if the push_destination registry already contains the modified image. Change modify_append_tag whenever you modify the image.
- modify_vars is a dictionary of Ansible variables to pass to the role.
To select a use case that the tripleo-modify-image role handles, set the tasks_from variable to the required file in that role.
While developing and testing the ContainerImagePrepare entries that modify images, run the image prepare command without any additional options to confirm that the image is modified as you expect:
sudo openstack tripleo container image prepare \
  -e ~/containers-prepare-parameter.yaml
To use the openstack tripleo container image prepare command, your undercloud must contain a running image-serve registry. As a result, you cannot run this command before a new undercloud installation because the image-serve registry will not be installed. You can run this command after a successful undercloud installation.
7.9. Updating existing packages on container images
Red Hat OpenStack Platform (RHOSP) director supports updating existing packages on container images for RHOSP containers, not for Ceph containers.
Procedure
The following example ContainerImagePrepare entry updates all packages on the container images by using the dnf repository configuration of the undercloud host:
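A sketch of such an entry, assuming the tripleo-modify-image role's yum_update.yml task file:

ContainerImagePrepare:
- push_destination: true
  ...
  modify_role: tripleo-modify-image
  modify_append_tag: "-updated"
  modify_vars:
    tasks_from: yum_update.yml
    compare_host_packages: true
    yum_repos_dir_path: /etc/yum.repos.d
  ...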
7.10. Installing additional RPM files to container images
You can install a directory of RPM files in your container images. This is useful for installing hotfixes, local package builds, or any package that is not available through a package repository.
Red Hat OpenStack Platform (RHOSP) director supports installing additional RPM files to container images for RHOSP containers, not for Ceph containers.
When you modify container images in existing deployments, you must then perform a minor update to apply the changes to your overcloud. For more information, see Keeping Red Hat OpenStack Platform Updated.
Procedure
The following example ContainerImagePrepare entry installs some hotfix packages on only the nova-compute image:
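A sketch of such an entry, assuming the RPM files are stored in /home/stack/nova-hotfix-pkgs on the undercloud:

ContainerImagePrepare:
- push_destination: true
  ...
  includes:
  - nova-compute
  modify_role: tripleo-modify-image
  modify_append_tag: "-hotfix"
  modify_vars:
    tasks_from: rpm_install.yml
    rpms_path: /home/stack/nova-hotfix-pkgs
  ...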
7.11. Modifying container images with a custom Dockerfile
You can specify a directory that contains a Dockerfile to make the required changes. When you invoke the tripleo-modify-image role, the role generates a Dockerfile.modified file that changes the FROM directive and adds extra LABEL directives.
Red Hat OpenStack Platform (RHOSP) director supports modifying container images with a custom Dockerfile for RHOSP containers, not for Ceph containers.
Procedure
The following example runs the custom Dockerfile on the nova-compute image:
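A sketch of the entry, assuming the Dockerfile lives in /home/stack/nova-custom:

ContainerImagePrepare:
- push_destination: true
  ...
  includes:
  - nova-compute
  modify_role: tripleo-modify-image
  modify_append_tag: "-hotfix"
  modify_vars:
    tasks_from: modify_image.yml
    modify_dir_path: /home/stack/nova-custom
  ...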
The following example shows the /home/stack/nova-custom/Dockerfile file. After you run any USER root directives, you must switch back to the original image default user:
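A sketch of what that Dockerfile might contain; the base image reference and the customize.sh script are placeholders:

FROM registry.redhat.io/rhosp-rhel8/openstack-nova-compute:latest
USER "root"
COPY customize.sh /tmp/
RUN /tmp/customize.sh
USER "nova"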
7.12. Deploying a vendor plugin
To use some third-party hardware as a Block Storage back end, you must deploy a vendor plugin. The following example demonstrates how to deploy a vendor plugin to use Dell EMC hardware as a Block Storage back end.
For more information about supported back end appliances and drivers, see Third-Party Storage Providers in the Storage Guide.
Procedure
Create a new container images file for your overcloud:
$ sudo openstack tripleo container image prepare default \
    --local-push-destination \
    --output-env-file containers-prepare-parameter-dellemc.yaml

- Edit the containers-prepare-parameter-dellemc.yaml file.
- Add an excludes parameter to the strategy for the main Red Hat OpenStack Platform container images. Use this parameter to exclude the container image that the vendor container image will replace. In this example, the container image is the cinder-volume image:
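A sketch of the strategy with the exclusion added; the namespace and tag are illustrative:

parameter_defaults:
  ContainerImagePrepare:
    - push_destination: true
      excludes:
        - cinder-volume
      set:
        namespace: registry.redhat.io/rhosp-rhel8
        name_prefix: openstack-
        name_suffix: ''
        tag: '16.2'
      tag_from_label: '{version}-{release}'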
- Add a new strategy to the ContainerImagePrepare parameter that includes the replacement container image for the vendor plugin:
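A sketch of an additional strategy, assuming a Dell EMC image published under registry.connect.redhat.com; the namespace, name suffix, and tag vary by vendor and release, so confirm them against the vendor's catalog entry:

    - push_destination: true
      includes:
        - cinder-volume
      set:
        namespace: registry.connect.redhat.com/dellemc
        name_prefix: openstack-
        name_suffix: -dellemc-rhosp16
        tag: 16.2-2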
- Add the authentication details for the registry.connect.redhat.com registry to the ContainerImageRegistryCredentials parameter:
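A sketch with placeholder credentials for both registries:

parameter_defaults:
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      myuser: 'p@55w0rd!'
    registry.connect.redhat.com:
      myuser: 'p@55w0rd!'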
- Save the containers-prepare-parameter-dellemc.yaml file.
- Include the containers-prepare-parameter-dellemc.yaml file with any deployment commands, such as openstack overcloud deploy:

$ openstack overcloud deploy --templates ... -e containers-prepare-parameter-dellemc.yaml ...

When director deploys the overcloud, the overcloud uses the vendor container image instead of the standard container image.
Important: The containers-prepare-parameter-dellemc.yaml file replaces the standard containers-prepare-parameter.yaml file in your overcloud deployment. Do not include the standard containers-prepare-parameter.yaml file in your overcloud deployment. Retain the standard containers-prepare-parameter.yaml file for your undercloud installation and updates.
Chapter 8. Basic network isolation
Configure the overcloud to use isolated networks so that you can host specific types of network traffic in isolation. Red Hat OpenStack Platform (RHOSP) includes a set of environment files that you can use to configure this network isolation. You might also require additional environment files to further customize your networking parameters:
- An environment file that you can use to enable network isolation (/usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml).

Note: Before you deploy RHOSP with director, the files network-isolation.yaml and network-environment.yaml are only in Jinja2 format and have a .j2.yaml extension. Director renders these files to .yaml versions during deployment.

- An environment file that you can use to configure network defaults (/usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml).
- A network_data file that you can use to define network settings such as IP ranges, subnets, and virtual IPs. This example shows you how to create a copy of the default and edit it to suit your own network.
- Templates that you can use to define your NIC layout for each node. The overcloud core template collection contains a set of defaults for different use cases.
- An environment file that you can use to enable NICs. This example uses a default file located in the environments directory.
8.1. Network isolation
The overcloud assigns services to the provisioning network by default. However, director can divide overcloud network traffic into isolated networks. To use isolated networks, the overcloud contains an environment file that enables this feature. The environments/network-isolation.j2.yaml file in the core heat templates is a Jinja2 file that defines all ports and VIPs for each network in your composable network file. When rendered, it results in a network-isolation.yaml file in the same location with the full resource registry:
The first section of this file has the resource registry declaration for the OS::TripleO::Network::* resources. By default, these resources use the OS::Heat::None resource type, which does not create any networks. By redirecting these resources to the YAML files for each network, you enable the creation of these networks.
The next several sections create the IP addresses for the nodes in each role. The controller nodes have IPs on each network. The compute and storage nodes each have IPs on a subset of the networks.
Other functions of overcloud networking, such as Chapter 9, Custom composable networks and Chapter 10, Custom network interface templates, rely on the network-isolation.yaml environment file. Therefore, you must include the rendered environment file in your deployment commands:
$ openstack overcloud deploy --templates \
    ...
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    ...
8.2. Modifying isolated network configuration
Copy the default network_data.yaml file and modify the copy to configure the default isolated networks.
Procedure
- Copy the default network_data.yaml file:

  $ cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml /home/stack/.

- Edit the local copy of the network_data.yaml file and modify the parameters to suit your networking requirements. For example, the Internal API network contains the following default network details:
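  For reference, the Internal API entry resembles the following sketch; verify the exact values in your own copy of the file:

  - name: InternalApi
    name_lower: internal_api
    vip: true
    vlan: 20
    ip_subnet: '172.16.2.0/24'
    allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]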
Edit the following values for each network:

- vlan defines the VLAN ID that you want to use for this network.
- ip_subnet and ip_allocation_pools set the default subnet and IP range for the network.
- gateway sets the gateway for the network. Use this value to define the default route for the External network, or for other networks if necessary.
Include the custom network_data.yaml file with your deployment using the -n option. Without the -n option, the deployment command uses the default network details.
8.3. Network interface templates
The overcloud network configuration requires a set of network interface templates. These templates are standard heat templates in YAML format. Each role requires a NIC template so that director can configure each node within that role correctly.
All NIC templates contain the same sections as standard heat templates:
- heat_template_version: The syntax version to use.
- description: A string description of the template.
- parameters: Network parameters to include in the template.
- resources: Takes parameters defined in parameters and applies them to a network configuration script.
- outputs: Renders the final script used for configuration.
The default NIC templates in /usr/share/openstack-tripleo-heat-templates/network/config use Jinja2 syntax to render the template. For example, the following snippet from the single-nic-vlans configuration renders a set of VLANs for each network:
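The pattern resembles the following abridged sketch; the exact Jinja2 markup in your version of the templates might differ:

{%- for network in networks if network.enabled|default(true) and network.name in role.networks %}
- type: vlan
  vlan_id:
    get_param: {{network.name}}NetworkVlanID
  addresses:
  - ip_netmask:
      get_param: {{network.name}}IpSubnet
{%- endfor %}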
For default Compute nodes, this renders only the network information for the Storage, Internal API, and Tenant networks:
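The rendered output for a Compute node therefore resembles the following sketch:

- type: vlan
  vlan_id:
    get_param: StorageNetworkVlanID
  addresses:
  - ip_netmask:
      get_param: StorageIpSubnet
- type: vlan
  vlan_id:
    get_param: InternalApiNetworkVlanID
  addresses:
  - ip_netmask:
      get_param: InternalApiIpSubnet
- type: vlan
  vlan_id:
    get_param: TenantNetworkVlanID
  addresses:
  - ip_netmask:
      get_param: TenantIpSubnet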
Chapter 10, Custom network interface templates explores how to render the default Jinja2-based templates to standard YAML versions, which you can use as a basis for customization.
8.4. Default network interface templates
Director contains templates in /usr/share/openstack-tripleo-heat-templates/network/config/ to suit most common network scenarios. The following table outlines each NIC template set and the respective environment file that you must use to enable the templates.
Each environment file for enabling NIC templates uses the suffix .j2.yaml. This is the unrendered Jinja2 version. Ensure that you include the rendered file name, which uses the .yaml suffix, in your deployment.
| NIC directory | Description | Environment file |
|---|---|---|
| single-nic-vlans | Single NIC (nic1) with control plane and VLANs attached to default Open vSwitch bridge | net-single-nic-with-vlans.j2.yaml |
| single-nic-linux-bridge-vlans | Single NIC (nic1) with control plane and VLANs attached to default Linux bridge | net-single-nic-linux-bridge-with-vlans.j2.yaml |
| bond-with-vlans | Control plane attached to nic1. Default Open vSwitch bridge with bonded NIC configuration (nic2 and nic3) and VLANs attached | net-bond-with-vlans.j2.yaml |
| multiple-nics | Control plane attached to nic1. Assigns each sequential NIC to each network defined in the network_data file | net-multiple-nics.j2.yaml |
Environment files exist for deploying the overcloud without an external network, for example, net-bond-with-vlans-no-external.yaml, and for IPv6 deployments, for example, net-bond-with-vlans-v6.yaml. These are provided for backwards compatibility and do not function with composable networks.
Each default NIC template set contains a role.role.j2.yaml template. This file uses Jinja2 to render additional files for each composable role. For example, if your overcloud uses Compute, Controller, and Ceph Storage roles, the deployment renders new templates based on role.role.j2.yaml, such as the following templates:

- compute.yaml
- controller.yaml
- ceph-storage.yaml
8.5. Enabling basic network isolation
Director includes templates that you can use to enable basic network isolation. These files are located in the /usr/share/openstack-tripleo-heat-templates/environments directory. For example, you can use the templates to deploy an overcloud on a single NIC with VLANs with basic network isolation. In this scenario, use the net-single-nic-with-vlans template.
Procedure
When you run the openstack overcloud deploy command, ensure that you include the following rendered environment files:

- The custom network_data.yaml file.
- The rendered file name of the default network isolation file.
- The rendered file name of the default network environment file.
- The rendered file name of the default network interface configuration file.
- Any additional environment files relevant to your configuration.
For example:
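A representative command follows; the file locations are assumptions based on the preceding steps:

$ openstack overcloud deploy --templates \
  -n /home/stack/network_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  ...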
Chapter 9. Custom composable networks
You can create custom composable networks if you want to host specific network traffic on different networks. To configure the overcloud with an additional composable network, you must configure the following files and templates:

- The environment file to enable network isolation (/usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml).
- The environment file to configure network defaults (/usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml).
- A custom network_data file to create additional networks outside of the defaults.
- A custom roles_data file to assign custom networks to roles.
- Templates to define your NIC layout for each node. The overcloud core template collection contains a set of defaults for different use cases.
- An environment file to enable NICs. This example uses a default file that is located in the environments directory.
- Any additional environment files to customize your networking parameters. This example uses an environment file to customize OpenStack service mappings to composable networks.
Some of the files in the previous list are Jinja2 format files and have a .j2.yaml extension. Director renders these files to .yaml versions during deployment.
9.1. Composable networks
The overcloud uses the following pre-defined set of network segments by default:
- Control Plane
- Internal API
- Storage
- Storage Management
- Tenant
- External
- Management (optional)
You can use composable networks to add networks for various services. For example, if you have a network that is dedicated to NFS traffic, you can present it to multiple roles.
Director supports the creation of custom networks during the deployment and update phases. You can use these additional networks for ironic bare metal nodes, system management, or to create separate networks for different roles. You can also use them to create multiple sets of networks for split deployments where traffic is routed between networks.
A single data file (network_data.yaml) manages the list of networks that you want to deploy. Include this file with your deployment command using the -n option. Without this option, the deployment uses the default /usr/share/openstack-tripleo-heat-templates/network_data.yaml file.
9.2. Adding a composable network
Use composable networks to add networks for various services. For example, if you have a network that is dedicated to storage backup traffic, you can present the network to multiple roles.
Procedure
- Copy the default network_data.yaml file:

  $ cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml /home/stack/.

- Edit the local copy of the network_data.yaml file and add a section for your new network:
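  A minimal sketch of such a section; the StorageBackup name and all address values are illustrative assumptions:

  - name: StorageBackup
    name_lower: storage_backup
    vip: true
    vlan: 21
    ip_subnet: '172.21.1.0/24'
    allocation_pools: [{'start': '172.21.1.4', 'end': '172.21.1.250'}]
    gateway_ip: '172.21.1.1'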
You can use the following parameters in your network_data.yaml file:

- name: Sets the human readable name of the network. This parameter is the only mandatory parameter. You can also use name_lower to normalize names for readability. For example, change InternalApi to internal_api.
- name_lower: Sets the lowercase version of the name, which director maps to respective networks assigned to roles in the roles_data.yaml file.
- vlan: Sets the VLAN that you want to use for this network.
- vip: true: Creates a virtual IP address (VIP) on the new network. This IP is used as the target IP for services listed in the service-to-network mapping parameter (ServiceNetMap). Note that VIPs are used only by roles that use Pacemaker. The overcloud load-balancing service redirects traffic from these IPs to their respective service endpoint.
- ip_subnet: Sets the default IPv4 subnet in CIDR format.
- allocation_pools: Sets the IP range for the IPv4 subnet.
- gateway_ip: Sets the gateway for the network.
- routes: Adds additional routes to the network. Uses a JSON list that contains each additional route. Each list item contains a dictionary value mapping. Use the following example syntax:

  routes: [{'destination':'10.0.0.0/16', 'nexthop':'10.0.2.254'}]

- subnets: Creates additional routed subnets that fall within this network. This parameter accepts a dict value that contains the lowercase name of the routed subnet as the key, and the vlan, ip_subnet, allocation_pools, and gateway_ip parameters as the value mapped to the subnet. This mapping is common in spine leaf deployments. For more information, see the Spine Leaf Networking guide.
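  The following sketch demonstrates this layout; the leaf name and address values are illustrative assumptions:

  - name: StorageBackup
    name_lower: storage_backup
    subnets:
      storage_backup_leaf1:
        vlan: 21
        ip_subnet: '172.21.2.0/24'
        allocation_pools: [{'start': '172.21.2.4', 'end': '172.21.2.250'}]
        gateway_ip: '172.21.2.1'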
- When you add an extra composable network that contains a virtual IP, and you want to map some API services to this network, use the CloudName{network.name} definition to set the DNS name for the API endpoint:

  CloudName{{network.name}}

  Here is an example:

  parameter_defaults:
    ...
    CloudNameOcProvisioning: baremetal-vip.example.com
- Include the custom network_data.yaml file in your deployment command using the -n option. Without the -n option, the deployment command uses the default set of networks.
- If you want a predictable virtual IP address (VIP), add a VirtualFixedIPs parameter for your custom network to the parameter_defaults section of a heat environment file, for example, my_network_vips.yaml:

  <% my_custom_network %>VirtualFixedIPs: [{'ip_address':'<% ip_address %>'}]

  Here is an example:

  parameter_defaults:
    ...
    # Predictable VIPs
    StorageBackupVirtualFixedIPs: [{'ip_address':'172.21.1.9'}]
- Include the heat environment file, my_network_vips.yaml, in your deployment command by using the -e option.
9.3. Including a composable network in a role
You can assign composable networks to the overcloud roles defined in your environment. For example, you might include a custom StorageBackup network with your Ceph Storage nodes.
Procedure
- If you do not already have a custom roles_data.yaml file, copy the default to your home directory:

  $ cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml /home/stack/.

- Edit the custom roles_data.yaml file. Include the network name in the networks list for the role that you want to add the network to. For example, to add the StorageBackup network to the Ceph Storage role, use the example snippet shown after this procedure.
- After you add custom networks to their respective roles, save the file.
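A minimal sketch of the networks entry for the Ceph Storage role; the other role attributes are omitted here:

- name: CephStorage
  ...
  networks:
    - Storage
    - StorageMgmt
    - StorageBackup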
When you run the openstack overcloud deploy command, include the custom roles_data.yaml file using the -r option. Without the -r option, the deployment command uses the default set of roles with their respective assigned networks.
9.4. Assigning OpenStack services to composable networks
Each OpenStack service is assigned to a default network type in the resource registry. These services are bound to IP addresses within the network type’s assigned network. Although the OpenStack services are divided among these networks, the number of actual physical networks can differ as defined in the network environment file. You can reassign OpenStack services to different network types by defining a new network map in an environment file, for example, /home/stack/templates/service-reassignments.yaml. The ServiceNetMap parameter determines the network types that you want to use for each service.
For example, you can reassign the Storage Management network services to the Storage Backup network by modifying the relevant ServiceNetMap entries:

parameter_defaults:
  ServiceNetMap:
    SwiftMgmtNetwork: storage_backup
    CephClusterNetwork: storage_backup
Changing these parameters to storage_backup places these services on the Storage Backup network instead of the Storage Management network. This means that you must define a set of parameter_defaults only for the Storage Backup network and not the Storage Management network.
Director merges your custom ServiceNetMap parameter definitions into a pre-defined list of defaults that it obtains from ServiceNetMapDefaults and overrides the defaults. Director returns the full list, including customizations, to ServiceNetMap, which is used to configure network assignments for various services.
Service mappings apply to networks that use vip: true in the network_data.yaml file for nodes that use Pacemaker. The overcloud load balancer redirects traffic from the VIPs to the specific service endpoints.
You can find a full list of default services in the ServiceNetMapDefaults parameter in the /usr/share/openstack-tripleo-heat-templates/network/service_net_map.j2.yaml file.
9.5. Enabling custom composable networks
Enable custom composable networks using one of the default NIC templates. In this example, use the Single NIC with VLANs template (net-single-nic-with-vlans).
Procedure
When you run the openstack overcloud deploy command, ensure that you include the following files:

- The custom network_data.yaml file.
- The custom roles_data.yaml file with network-to-role assignments.
- The rendered file name of the default network isolation file.
- The rendered file name of the default network environment file.
- The rendered file name of the default network interface configuration file.
- Any additional environment files related to your network, such as the service reassignments.
For example:
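A representative command follows; the file locations are assumptions based on the preceding steps:

$ openstack overcloud deploy --templates \
  -n /home/stack/network_data.yaml \
  -r /home/stack/roles_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/templates/service-reassignments.yaml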
This example command deploys the composable networks, including your additional custom networks, across nodes in your overcloud.
Remember that you must render the templates again if you are introducing a new custom network, such as a management network. Simply adding the network name to the roles_data.yaml file is not sufficient.
9.6. Renaming the default networks
You can use the network_data.yaml file to modify the user-visible names of the default networks:
- InternalApi
- External
- Storage
- StorageMgmt
- Tenant
To change these names, do not modify the name field. Instead, change the name_lower field to the new name for the network and update the ServiceNetMap with the new name.
Procedure
- In your network_data.yaml file, enter new names in the name_lower parameter for each network that you want to rename:

  - name: InternalApi
    name_lower: MyCustomInternalApi

- Include the default value of the name_lower parameter in the service_net_map_replace parameter:

  - name: InternalApi
    name_lower: MyCustomInternalApi
    service_net_map_replace: internal_api
Chapter 10. Custom network interface templates
After you complete the configuration in Chapter 8, Basic network isolation, you can create a set of custom network interface templates to suit the nodes in your environment. For example, you can include the following files:

- The environment file to enable network isolation (/usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml).
- The environment file to configure network defaults (/usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml).
- Templates to define your NIC layout for each node. The overcloud core template collection contains a set of defaults for different use cases. To create a custom NIC template, render a default Jinja2 template as the basis for your custom templates.
- A custom environment file to enable NICs. This example uses a custom environment file (/home/stack/templates/custom-network-configuration.yaml) that references your custom interface templates.
- Any additional environment files to customize your networking parameters.
- If you customize your networks, a custom network_data.yaml file.
- If you create additional or custom composable networks, a custom network_data.yaml file and a custom roles_data.yaml file.
Some of the files in the previous list are Jinja2 format files and have a .j2.yaml extension. Director renders these files to .yaml versions during deployment.
10.1. Custom network architecture
The default NIC templates might not suit a specific network configuration. For example, you might want to create your own custom NIC template that suits a specific network layout, or you might want to separate the control services and data services onto separate NICs. In this situation, you can map the services to NICs in the following way:
NIC1 (Provisioning)
- Provisioning / Control Plane
NIC2 (Control Group)
- Internal API
- Storage Management
- External (Public API)
NIC3 (Data Group)
- Tenant Network (VXLAN tunneling)
- Tenant VLANs / Provider VLANs
- Storage
- External VLANs (Floating IP/SNAT)
NIC4 (Management)
- Management
10.2. Rendering default network interface templates for customization
To simplify the configuration of custom interface templates, render the Jinja2 syntax of a default NIC template and use the rendered templates as the basis for your custom configuration.
Procedure
- Render a copy of the openstack-tripleo-heat-templates collection with the process-templates.py script:

  $ cd /usr/share/openstack-tripleo-heat-templates
  $ ./tools/process-templates.py -o ~/openstack-tripleo-heat-templates-rendered

  This converts all Jinja2 templates to their rendered YAML versions and saves the results to ~/openstack-tripleo-heat-templates-rendered.

  If you use a custom network file or custom roles file, you can include these files using the -n and -r options respectively:

  $ ./tools/process-templates.py -o ~/openstack-tripleo-heat-templates-rendered -n /home/stack/network_data.yaml -r /home/stack/roles_data.yaml

- Copy the multiple NIC example:

  $ cp -r ~/openstack-tripleo-heat-templates-rendered/network/config/multiple-nics/ ~/templates/custom-nics/

- Edit the template set in custom-nics to suit your own network configuration.
10.3. Network interface architecture
The custom NIC templates that you render in Section 10.2, “Rendering default network interface templates for customization” contain the parameters and resources sections.
Parameters
The parameters section contains all network configuration parameters for network interfaces. This includes information such as subnet ranges and VLAN IDs. This section should remain unchanged as the heat template inherits values from its parent template. However, you can use a network environment file to modify the values for some parameters.
Resources
The resources section is where the main network interface configuration occurs. In most cases, the resources section is the only one that requires modification. Each resources section begins with the following header:
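In the rendered templates, the header follows this general shape; treat this as a sketch and compare it with your own rendered copies:

resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
                # device configuration entries follow here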
This snippet runs a script (run-os-net-config.sh) that creates a configuration file for os-net-config to use to configure network properties on a node. The network_config section contains the custom network interface data sent to the run-os-net-config.sh script. You arrange this custom interface data in a sequence based on the type of device.
If you create custom NIC templates, you must set the run-os-net-config.sh script location to an absolute path for each NIC template. The script is located at /usr/share/openstack-tripleo-heat-templates/network/scripts/run-os-net-config.sh on the undercloud.
10.4. Network interface reference
Network interface configuration contains the following parameters:
interface
Defines a single network interface. The configuration defines each interface using either the actual interface name ("eth0", "eth1", "enp0s25") or a set of numbered interfaces ("nic1", "nic2", "nic3"):
- type: interface
  name: nic2
| Option | Default | Description |
|---|---|---|
| name | | Name of the interface. |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses | | A list of IP addresses assigned to the interface. |
| routes | | A list of routes assigned to the interface. For more information, see routes. |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| primary | False | Defines the interface as the primary interface. |
| defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the interface. |
| ethtool_opts | | Set this option to "rx-flow-hash udp4 sdfn" to improve throughput when you use VXLAN on bonded interfaces. |
vlan
Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section.
For example:
- type: vlan
  vlan_id: {get_param: ExternalNetworkVlanID}
  addresses:
  - ip_netmask: {get_param: ExternalIpSubnet}
| Option | Default | Description |
|---|---|---|
| vlan_id | | The VLAN ID. |
| device | | The parent device to attach the VLAN. Use this parameter when the VLAN is not a member of an OVS bridge. For example, use this parameter to attach the VLAN to a bonded interface device. |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses | | A list of IP addresses assigned to the VLAN. |
| routes | | A list of routes assigned to the VLAN. For more information, see routes. |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| primary | False | Defines the VLAN as the primary interface. |
| defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the VLAN. |
ovs_bond
Defines a bond in Open vSwitch to join two or more interfaces together. This helps with redundancy and increases bandwidth.
For example:
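A minimal sketch that bonds two numbered interfaces; the bond and interface names are illustrative:

- type: ovs_bond
  name: bond1
  ovs_options: {get_param: BondInterfaceOvsOptions}
  members:
    - type: interface
      name: nic2
    - type: interface
      name: nic3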
| Option | Default | Description |
|---|---|---|
| name | | Name of the bond. |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses | | A list of IP addresses assigned to the bond. |
| routes | | A list of routes assigned to the bond. For more information, see routes. |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| primary | False | Defines the interface as the primary interface. |
| members | | A sequence of interface objects that you want to use in the bond. |
| ovs_options | | A set of options to pass to OVS when creating the bond. |
| ovs_extra | | A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bond. |
| defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the bond. |
ovs_bridge
Defines a bridge in Open vSwitch, which connects multiple interface, ovs_bond, and vlan objects together.
The ovs_bridge network interface type takes a name parameter.
If you have multiple bridges, you must use distinct bridge names rather than accepting the default name of bridge_name. If you do not use distinct names, then during the converge phase, two network bonds are placed on the same bridge.
If you are defining an OVS bridge for the external tripleo network, then retain the values bridge_name and interface_name as your deployment framework automatically replaces these values with an external bridge name and an external interface name, respectively.
For example:
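A sketch of an OVS bridge that hosts a bond and a VLAN; the device names are illustrative, and the parameter names follow the conventions used elsewhere in this guide:

- type: ovs_bridge
  name: bridge_name
  members:
    - type: ovs_bond
      name: bond1
      ovs_options:
        get_param: BondInterfaceOvsOptions
      members:
        - type: interface
          name: nic2
          primary: true
        - type: interface
          name: nic3
    - type: vlan
      device: bond1
      vlan_id:
        get_param: ExternalNetworkVlanID
      addresses:
        - ip_netmask:
            get_param: ExternalIpSubnet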
The OVS bridge connects to the Networking service (neutron) server to obtain configuration data. If the OpenStack control traffic, typically the Control Plane and Internal API networks, is placed on an OVS bridge, then connectivity to the neutron server is lost whenever you upgrade OVS, or the OVS bridge is restarted by the admin user or process. This causes some downtime. If downtime is not acceptable in these circumstances, then you must place the Control group networks on a separate interface or bond rather than on an OVS bridge:
- You can achieve a minimal setting when you put the Internal API network on a VLAN on the provisioning interface and the OVS bridge on a second interface.
- To implement bonding, you need at least two bonds (four network interfaces). Place the control group on a Linux bond (Linux bridge). If the switch does not support LACP fallback to a single interface for PXE boot, then this solution requires at least five NICs.
| Option | Default | Description |
|---|---|---|
| name | | Name of the bridge. |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses | | A list of IP addresses assigned to the bridge. |
| routes | | A list of routes assigned to the bridge. For more information, see routes. |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| members | | A sequence of interface, VLAN, and bond objects that you want to use in the bridge. |
| ovs_options | | A set of options to pass to OVS when creating the bridge. |
| ovs_extra | | A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bridge. |
| defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the bridge. |
linux_bond
Defines a Linux bond that joins two or more interfaces together. This helps with redundancy and increases bandwidth. Ensure that you include the kernel-based bonding options in the bonding_options parameter.
For example:
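A minimal sketch; the interface names and bonding options are illustrative:

- type: linux_bond
  name: bond1
  bonding_options: "mode=802.3ad lacp_rate=1 updelay=1000 miimon=100"
  members:
    - type: interface
      name: nic2
      primary: true
    - type: interface
      name: nic3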
Note that nic2 uses primary: true to ensure that the bond uses the MAC address for nic2.
| Option | Default | Description |
|---|---|---|
| name | | Name of the bond. |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses | | A list of IP addresses assigned to the bond. |
| routes | | A list of routes assigned to the bond. See routes. |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| primary | False | Defines the interface as the primary interface. |
| members | | A sequence of interface objects that you want to use in the bond. |
| bonding_options | | A set of options when creating the bond. |
| defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the bond. |
linux_bridge
Defines a Linux bridge, which connects multiple interface, linux_bond, and vlan objects together. The external bridge also uses two special values for parameters:

- bridge_name, which is replaced with the external bridge name.
- interface_name, which is replaced with the external interface.
For example:
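A minimal sketch using the two special replacement values:

- type: linux_bridge
  name: bridge_name
  addresses:
    - ip_netmask:
        list_join:
          - /
          - - get_param: ControlPlaneIp
            - get_param: ControlPlaneSubnetCidr
  members:
    - type: interface
      name: interface_name
      primary: true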
| Option | Default | Description |
|---|---|---|
| name | | Name of the bridge. |
| use_dhcp | False | Use DHCP to get an IP address. |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address. |
| addresses | | A list of IP addresses assigned to the bridge. |
| routes | | A list of routes assigned to the bridge. For more information, see routes. |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection. |
| members | | A sequence of interface, VLAN, and bond objects that you want to use in the bridge. |
| defroute | True | Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6. |
| persist_mapping | False | Write the device alias configuration instead of the system names. |
| dhclient_args | None | Arguments that you want to pass to the DHCP client. |
| dns_servers | None | List of DNS servers that you want to use for the bridge. |
routes
Defines a list of routes to apply to a network interface, VLAN, bridge, or bond.
For example:
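A sketch that defines one static route and one default route; the addresses are illustrative:

routes:
  - ip_netmask: 10.1.2.0/24
    next_hop: 172.17.0.1
  - default: true
    next_hop: 10.1.1.1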
| Option | Default | Description |
|---|---|---|
| ip_netmask | None | IP and netmask of the destination network. |
| default | False | Sets this route to a default route. Equivalent to setting ip_netmask: 0.0.0.0/0. |
| next_hop | None | The IP address of the router used to reach the destination network. |
10.5. Example network interface layout
The following snippet for an example Controller node NIC template demonstrates how to configure the custom network scenario to keep the control group separate from the OVS bridge:
This template uses three network interfaces and assigns a number of tagged VLAN devices to the numbered interfaces, nic1 to nic3. On nic2 and nic3 this template creates the OVS bridge that hosts the Storage, Tenant, and External networks. As a result, it creates the following layout:
NIC1 (Provisioning)
- Provisioning / Control Plane
NIC2 and NIC3 (Management)
- Internal API
- Storage
- Storage Management
- Tenant Network (VXLAN tunneling)
- Tenant VLANs / Provider VLANs
- External (Public API)
- External VLANs (Floating IP/SNAT)
10.6. Network interface template considerations for custom networks
When you use composable networks, the process-templates.py script renders the static templates to include networks and roles that you define in your network_data.yaml and roles_data.yaml files. Ensure that your rendered NIC templates contain the following items:
- A static file for each role, including custom composable networks.
- The correct network definitions in the static file for each role.
Each static file requires all of the parameter definitions for any custom networks, even if the network is not used on the role. Ensure that the rendered templates contain these parameters. For example, if you add a StorageBackup network only to the Ceph nodes, you must also include this definition in the parameters section in the NIC configuration templates for all roles:
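Such a definition might look like the following sketch, assuming a StorageBackup network:

parameters:
  StorageBackupIpSubnet:
    default: ''
    description: IP address/subnet on the storage_backup network
    type: string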
You can also include the parameter definitions for VLAN IDs and gateway IPs, if necessary:
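For example, definitions of this kind; the default values are illustrative:

parameters:
  StorageBackupNetworkVlanID:
    default: 21
    description: VLAN ID for the storage_backup network traffic.
    type: number
  StorageBackupInterfaceDefaultRoute:
    default: ''
    description: Default route for the storage_backup network.
    type: string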
The IpSubnet parameter for the custom network appears in the parameter definitions for each role. However, since the Ceph role might be the only role that uses the StorageBackup network, only the NIC configuration template for the Ceph role uses the StorageBackup parameters in the network_config section of the template.
10.7. Custom network environment file
The custom network environment file (in this case, /home/stack/templates/custom-network-configuration.yaml) is a heat environment file that describes the overcloud network environment and points to the custom network interface configuration templates. You can define the subnets and VLANs for your network along with IP address ranges. You can then customize these values for the local environment.
The resource_registry section contains references to the custom network interface templates for each node role. Each resource registered uses the following format:

OS::TripleO::[ROLE]::Net::SoftwareConfig: [FILE]
[ROLE] is the role name and [FILE] is the respective network interface template for that particular role. For example:
resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/custom-nics/controller.yaml
The parameter_defaults section contains a list of parameters that define the network options for each network type.
10.8. Network environment parameters
The following table is a list of parameters that you can use in the parameter_defaults section of a network environment file to override the default parameter values in your NIC templates.
| Parameter | Description | Type |
|---|---|---|
| ControlPlaneDefaultRoute | The IP address of the router on the Control Plane, which is used as a default route for roles other than the Controller nodes. Set this value to the undercloud IP if you use IP masquerade instead of a router. | string |
| ControlPlaneSubnetCidr | The CIDR netmask of the IP network used on the Control Plane. If the Control Plane network uses 192.168.24.0/24, the CIDR is 24. | string (though is always a number) |
| <network>NetCidr | The full network and CIDR netmask for a particular network. The default is automatically set to the network ip_subnet setting in the network_data.yaml file. For example, InternalApiNetCidr. | string |
| <network>AllocationPools | The IP allocation range for a particular network. The default is automatically set to the network allocation_pools setting in the network_data.yaml file. For example, InternalApiAllocationPools. | hash |
| <network>NetworkVlanID | The VLAN ID for a node on a particular network. The default is set automatically to the network vlan setting in the network_data.yaml file. For example, InternalApiNetworkVlanID. | number |
| <network>InterfaceDefaultRoute | The router address for a particular network, which you can use as a default route for roles or for routes to other networks. The default is automatically set to the network gateway_ip setting in the network_data.yaml file. For example, InternalApiInterfaceDefaultRoute. | string |
| DnsServers | A list of DNS servers added to resolv.conf. Usually allows a maximum of 2 servers. | comma delimited list |
| BondInterfaceOvsOptions | The options for bonding interfaces. For example, bond_mode=balance-slb. | string |
| NeutronExternalNetworkBridge | Legacy value for the name of the external bridge that you want to use for OpenStack Networking (neutron). This value is empty by default, which means that you can define multiple physical bridges in the NeutronBridgeMappings parameter. | string |
| NeutronFlatNetworks | Defines the flat networks that you want to configure in neutron plugins. The default value is datacentre to permit external network creation. | string |
| NeutronBridgeMappings | The logical to physical bridge mappings that you want to use. The default value maps the external bridge on hosts (br-ex) to a physical name (datacentre). You can use this mapping for the default floating network. | string |
| NeutronPublicInterface | Defines the interface that you want to bridge onto br-ex for network nodes when you do not use network isolation. | string |
| NeutronNetworkType | The tenant network type for OpenStack Networking (neutron). To specify multiple values, use a comma separated list. The first type that you specify is used until all available networks are exhausted, then the next type is used. For example, vxlan,vlan. | string |
| NeutronTunnelTypes | The tunnel types for the neutron tenant network. To specify multiple values, use a comma separated string. For example, gre,vxlan. | string / comma separated list |
| NeutronTunnelIdRanges | Ranges of GRE tunnel IDs that you want to make available for tenant network allocation. For example, 1:1000. | string |
| NeutronVniRanges | Ranges of VXLAN VNI IDs that you want to make available for tenant network allocation. For example, 1:1000. | string |
| NeutronEnableTunnelling | Defines whether to enable or completely disable all tunnelled networks. Leave this enabled unless you are sure that you do not want to create tunnelled networks in future. The default value is true. | Boolean |
| NeutronNetworkVLANRanges | The ML2 and Open vSwitch VLAN mapping range that you want to support. Defaults to permitting any VLAN on the datacentre physical network. For example, datacentre:1:1000. | string |
| NeutronMechanismDrivers | The mechanism drivers for the neutron tenant network. The default value is ovn. To specify multiple values, use a comma separated string. | string / comma separated list |
10.9. Example custom network environment file
The following snippet is an example of an environment file that you can use to enable your NIC templates and set custom parameters.
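A sketch of such an environment file; the template paths and parameter values are illustrative assumptions:

resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/custom-nics/controller.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/custom-nics/compute.yaml

parameter_defaults:
  InternalApiNetworkVlanID: 201
  StorageNetworkVlanID: 202
  ExternalNetworkVlanID: 100
  ExternalAllocationPools: [{'start': '10.1.1.10', 'end': '10.1.1.50'}]
  ExternalInterfaceDefaultRoute: 10.1.1.1
  DnsServers: ["8.8.8.8", "8.8.4.4"]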
10.10. Enabling network isolation with custom NICs
To deploy the overcloud with network isolation and custom NIC templates, include all of the relevant networking environment files in the overcloud deployment command.
Procedure
When you run the openstack overcloud deploy command, include the following files:

- The custom network_data.yaml file.
- The rendered file name of the default network isolation file.
- The rendered file name of the default network environment file.
- The custom environment network configuration file that includes resource references to your custom NIC templates.
- Any additional environment files relevant to your configuration.
For example:
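A representative command follows; the file locations are assumptions based on the preceding steps:

$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml \
  -e /home/stack/templates/custom-network-configuration.yaml \
  -n /home/stack/network_data.yaml \
  -r /home/stack/roles_data.yaml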
- Include the network-isolation.yaml file first, then the network-environment.yaml file. The subsequent custom-network-configuration.yaml file overrides the OS::TripleO::[ROLE]::Net::SoftwareConfig resources from the previous two files.
- If you use composable networks, include the network_data.yaml and roles_data.yaml files with this command.
Chapter 11. Additional network configuration
This chapter follows on from the concepts and procedures outlined in Chapter 10, Custom network interface templates and provides some additional information to help configure parts of your overcloud network.
11.1. Configuring custom interfaces
Individual interfaces might require modification. The following example shows the modifications that are necessary to use a second NIC to connect to an infrastructure network with DHCP addresses, and to use a third and fourth NIC for the bond:
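A sketch of the relevant network_config entries; the interface numbering follows the description above, and the bond name is an assumption:

# Second NIC: infrastructure network with DHCP addresses
- type: interface
  name: nic2
  use_dhcp: true
  defroute: false
# Third and fourth NICs: members of the bond
- type: ovs_bond
  name: bond1
  members:
    - type: interface
      name: nic3
    - type: interface
      name: nic4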
The network interface template uses either the actual interface name (eth0, eth1, enp0s25) or a set of numbered interfaces (nic1, nic2, nic3). The network interfaces of hosts within a role do not have to be exactly the same when you use numbered interfaces (nic1, nic2, etc.) instead of named interfaces (eth0, eno2, etc.). For example, one host might have interfaces em1 and em2, while another has eno1 and eno2, but you can refer to the NICs of both hosts as nic1 and nic2.
The order of numbered interfaces corresponds to the order of named network interface types:
- ethX interfaces, such as eth0, eth1, etc. These are usually onboard interfaces.
- enoX interfaces, such as eno0, eno1, etc. These are usually onboard interfaces.
- enX interfaces, sorted alphanumerically, such as enp3s0, enp3s1, ens3, etc. These are usually add-on interfaces.
The numbered NIC scheme includes only live interfaces, for example, if the interfaces have a cable attached to the switch. If you have some hosts with four interfaces and some with six interfaces, use nic1 to nic4 and attach only four cables on each host.
You can configure os-net-config mappings for specific nodes, and assign aliases to the physical interfaces on each node to pre-determine which physical NIC maps to specific aliases, such as nic1 or nic2. You can also map a MAC address to a specified alias. You map interfaces to aliases in an environment file. You can map specific nodes by using the MAC address or DMI keyword, or you can map a group of nodes by using a DMI keyword. The following example configures three nodes and two node groups with aliases to the physical interfaces. The resulting configuration is applied by os-net-config. On each node, you can see the applied configuration in the interface_mapping section of the /etc/os-net-config/mapping.yaml file.
Example os-net-config-mappings.yaml
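A sketch of this mapping file; the MAC addresses, product names, and interface names are illustrative assumptions, and the numbered comments correspond to the callouts that follow:

NetConfigDataLookup:
  node1:  # (1)
    nic1: "00:c8:7c:e6:f0:2e"
  node2:
    nic1: "00:18:7d:99:0c:b6"
  node3:  # (2)
    dmiString: "system-uuid"  # (3)
    id: 'A8C85861-1B16-4803-8689-AFC62984F8F6'
    nic1: em3
  nodegroup1:  # (4)
    dmiString: "system-product-name"
    id: "PowerEdge R630"
    nic1: em1
    nic2: em2
    nic3: em3
  nodegroup2:
    dmiString: "system-product-name"
    id: "UCSB-B200-M4"
    nic1: enp7s0
    nic2: enp6s0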
1. Maps node1 to the specified MAC address, and assigns nic1 as the alias for the MAC address on this node.
2. Maps node3 to the node with the system UUID "A8C85861-1B16-4803-8689-AFC62984F8F6", and assigns nic1 as the alias for the em3 interface on this node.
3. The dmiString parameter must be set to a valid string keyword. For a list of the valid string keywords, see the DMIDECODE(8) man page.
4. Maps all the nodes in nodegroup1 to nodes with the product name "PowerEdge R630", and assigns nic1, nic2, and nic3 as the aliases for the named interfaces on these nodes.
- If you want to use the NetConfigDataLookup configuration, you must also include the os-net-config-mappings.yaml file in the NodeUserData resource registry.
- Normally, os-net-config registers only the interfaces that are already connected in an UP state. However, if you hardcode interfaces with a custom mapping file, the interface is registered even if it is in a DOWN state.
11.2. Configuring routes and default routes
You can set the default route of a host in one of two ways. If the interface uses DHCP and the DHCP server offers a gateway address, the system uses a default route for that gateway. Otherwise, you can set a default route on an interface with a static IP.
Although the Linux kernel supports multiple default gateways, it uses only the gateway with the lowest metric. If there are multiple DHCP interfaces, this can result in an unpredictable default gateway. In this case, it is recommended to set defroute: false for interfaces other than the interface that uses the default route.
For example, you might want a DHCP interface (nic3) to be the default route. Use the following YAML snippet to disable the default route on another DHCP interface (nic2):
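A sketch of this configuration:

# nic2 ignores the DHCP-provided default route
- type: interface
  name: nic2
  use_dhcp: true
  defroute: false
# nic3 keeps its DHCP-provided default route
- type: interface
  name: nic3
  use_dhcp: true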
The defroute parameter applies only to routes obtained through DHCP.
To set a static route on an interface with a static IP, specify a route to the subnet. For example, you can set a route to the 10.1.2.0/24 subnet through the gateway at 172.17.0.1 on the Internal API network:
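A sketch of this route, assuming the Internal API network runs on a VLAN:

- type: vlan
  vlan_id:
    get_param: InternalApiNetworkVlanID
  addresses:
    - ip_netmask:
        get_param: InternalApiIpSubnet
  routes:
    - ip_netmask: 10.1.2.0/24
      next_hop: 172.17.0.1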
11.3. Configuring policy-based routing
On Controller nodes, to configure unlimited access from different networks, configure policy-based routing. Policy-based routing uses route tables where, on a host with multiple interfaces, you can send traffic through a particular interface depending on the source address. You can route packets that come from different sources to different networks, even if the destinations are the same.
For example, you can configure a route to send traffic to the Internal API network, based on the source address of the packet, even when the default route is for the External network. You can also define specific route rules for each interface.
Red Hat OpenStack Platform uses the os-net-config tool to configure network properties for your overcloud nodes. The os-net-config tool manages the following network routing on Controller nodes:
- Routing tables in the /etc/iproute2/rt_tables file
- IPv4 rules in the /etc/sysconfig/network-scripts/rule-{ifname} file
- IPv6 rules in the /etc/sysconfig/network-scripts/rule6-{ifname} file
- Routing table specific routes in the /etc/sysconfig/network-scripts/route-{ifname} file
Prerequisites
- You have installed the undercloud successfully. For more information, see Installing director in the Director Installation and Usage guide.
- You have rendered the default .j2 network interface templates from the openstack-tripleo-heat-templates directory. For more information, see Section 10.2, "Rendering default network interface templates for customization".
Procedure
- Create route_table and interface entries in a custom NIC template from the ~/templates/custom-nics directory, define a route for the interface, and define rules that are relevant to your deployment:
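  A sketch of such entries; the table ID, interface name, and addresses are illustrative assumptions:

  network_config:
    # Custom routing table for traffic arriving on em1
    - type: route_table
      name: custom
      table_id: 200
    - type: interface
      name: em1
      use_dhcp: false
      addresses:
        - ip_netmask: 192.0.2.1/24
      routes:
        - ip_netmask: 10.1.3.0/24
          next_hop: 192.0.2.5
          table: 200
      rules:
        - rule: "iif em1 table 200"
          comment: "Route incoming traffic for em1 with table 200"
        - rule: "from 192.0.2.0/24 table 200"
          comment: "Route all traffic from 192.0.2.0/24 with table 200"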
- Set the run-os-net-config.sh script location to an absolute path in each custom NIC template that you create. The script is located in the /usr/share/openstack-tripleo-heat-templates/network/scripts/ directory on the undercloud.
- Include your custom NIC configuration and network environment files in the deployment command, along with any other environment files relevant to your deployment:
  $ openstack overcloud deploy --templates \
    -e ~/templates/<custom-nic-template> \
    -e <OTHER_ENVIRONMENT_FILES>
Verification
Enter the following commands on a Controller node to verify that the routing configuration is functioning correctly:
$ cat /etc/iproute2/rt_tables
$ ip route
$ ip rule
11.4. Configuring jumbo frames
The Maximum Transmission Unit (MTU) setting determines the maximum amount of data transmitted with a single Ethernet frame. Using a larger value results in less overhead because each frame adds data in the form of a header. The default value is 1500 and using a higher value requires the configuration of the switch port to support jumbo frames. Most switches support an MTU of at least 9000, but many are configured for 1500 by default.
The MTU of a VLAN cannot exceed the MTU of the physical interface. Ensure that you include the MTU value on the bond or interface.
The Storage, Storage Management, Internal API, and Tenant networks all benefit from jumbo frames.
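For example, a jumbo frame configuration sets mtu on both the bond and each VLAN that runs over it; the following is a sketch with illustrative device names:

- type: ovs_bond
  name: bond1
  mtu: 9000
  members:
    - type: interface
      name: nic3
      mtu: 9000
      primary: true
    - type: interface
      name: nic4
      mtu: 9000
- type: vlan
  device: bond1
  mtu: 9000
  vlan_id:
    get_param: StorageNetworkVlanID
  addresses:
    - ip_netmask:
        get_param: StorageIpSubnet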
Routers typically cannot forward jumbo frames across Layer 3 boundaries. To avoid connectivity issues, do not change the default MTU for the Provisioning interface, External interface, and any floating IP interfaces.
11.5. Configuring ML2/OVN northbound path MTU discovery for jumbo frame fragmentation
If a VM on your internal network sends jumbo frames to an external network, and the maximum transmission unit (MTU) of the internal network exceeds the MTU of the external network, a northbound frame can easily exceed the capacity of the external network.
ML2/OVS automatically handles this oversized packet issue, and ML2/OVN handles it automatically for TCP packets.
But to ensure proper handling of oversized northbound UDP packets in a deployment that uses the ML2/OVN mechanism driver, you need to perform additional configuration steps.
These steps configure ML2/OVN routers to return ICMP "fragmentation needed" packets to the sending VM, where the sending application can break the payload into smaller packets.
In east/west traffic, a RHOSP ML2/OVN deployment does not support fragmentation of packets that are larger than the smallest MTU on the east/west path. For example:
- VM1 is on Network1 with an MTU of 1300.
- VM2 is on Network2 with an MTU of 1200.
A ping in either direction between VM1 and VM2 with a size of 1171 or less succeeds. A ping with a size greater than 1171 results in 100 percent packet loss.
With no identified customer requirements for this type of fragmentation, Red Hat has no plans to add support.
Prerequisites
- RHEL 8.2.0.4 or later with kernel-4.18.0-193.20.1.el8_2 or later.
Procedure
Check the kernel version.
ovs-appctl -t ovs-vswitchd dpif/show-dp-features br-int

- If the output includes Check pkt length action: No, or if there is no Check pkt length action string in the output, upgrade to RHEL 8.2.0.4 or later, or do not send jumbo frames to an external network that has a smaller MTU.
- If the output includes Check pkt length action: Yes, set the following value in the [ovn] section of the ml2_conf.ini file:

  ovn_emit_need_to_frag = True
11.6. Configuring the native VLAN on a trunked interface
If a trunked interface or bond has a network on the native VLAN, the IP addresses are assigned directly to the bridge and there is no VLAN interface.
For example, if the External network is on the native VLAN, a bonded configuration looks like this:
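A sketch of that configuration, with the External network addresses assigned directly to the bridge; the device names are illustrative:

- type: ovs_bridge
  name: bridge_name
  dns_servers:
    get_param: DnsServers
  addresses:
    - ip_netmask:
        get_param: ExternalIpSubnet
  routes:
    - default: true
      next_hop:
        get_param: ExternalInterfaceDefaultRoute
  members:
    - type: ovs_bond
      name: bond1
      ovs_options:
        get_param: BondInterfaceOvsOptions
      members:
        - type: interface
          name: nic3
          primary: true
        - type: interface
          name: nic4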
When you move the address or route statements onto the bridge, remove the corresponding VLAN interface from the bridge. Make the changes to all applicable roles. The External network is only on the controllers, so only the controller template requires a change. The Storage network is attached to all roles, so if the Storage network is on the default VLAN, all roles require modifications.
11.7. Increasing the maximum number of connections that netfilter tracks
The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) uses netfilter connection tracking to build stateful firewalls and to provide network address translation (NAT) on virtual networks. There are some situations that can cause the kernel space to reach the maximum connection limit and result in errors such as nf_conntrack: table full, dropping packet. You can increase the limit for connection tracking (conntrack) and avoid these types of errors. You can increase the conntrack limit for one or more roles, or across all the nodes, in your RHOSP deployment.
Prerequisites
- A successful RHOSP undercloud installation.
Procedure
- Log in to the undercloud host as the stack user.
- Source the undercloud credentials file:
  $ source ~/stackrc

- Create a custom YAML environment file.
Example
  $ vi /home/stack/templates/my-environment.yaml

- Your environment file must contain the keywords parameter_defaults and ExtraSysctlSettings. Enter a new value for the maximum number of connections that netfilter can track in the variable, net.nf_conntrack_max.

Example
In this example, you can set the conntrack limit across all hosts in your RHOSP deployment:
parameter_defaults:
  ExtraSysctlSettings:
    net.nf_conntrack_max:
      value: 500000
<role>Parameterparameter to set the conntrack limit for a specific role:parameter_defaults: <role>Parameters: ExtraSysctlSettings: net.nf_conntrack_max: value: <simultaneous_connections>parameter_defaults: <role>Parameters: ExtraSysctlSettings: net.nf_conntrack_max: value: <simultaneous_connections>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Replace
  Replace <role> with the name of the role.

  For example, use ControllerParameters to set the conntrack limit for the Controller role, or ComputeParameters to set the conntrack limit for the Compute role.

  Replace <simultaneous_connections> with the quantity of simultaneous connections that you want to allow.

Example
In this example, you can set the conntrack limit for only the Controller role in your RHOSP deployment:
parameter_defaults:
  ControllerParameters:
    ExtraSysctlSettings:
      net.nf_conntrack_max:
        value: 500000

Note: The default value for net.nf_conntrack_max is 500000 connections. The maximum value is 4294967295.
Run the deployment command and include the core heat templates, environment files, and this new custom environment file.
Important: The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.
Example
$ openstack overcloud deploy --templates \
  -e /home/stack/templates/my-environment.yaml
Chapter 12. Network interface bonding
You can use various bonding options in your custom network configuration.
12.1. Network interface bonding for overcloud nodes
You can bundle multiple physical NICs together to form a single logical channel known as a bond. You can configure bonds to provide redundancy for high availability systems or increased throughput.
Red Hat OpenStack Platform supports Open vSwitch (OVS) kernel bonds, OVS-DPDK bonds, and Linux kernel bonds.
| Bond type | Type value | Allowed bridge types | Allowed members |
|---|---|---|---|
| OVS kernel bonds | ovs_bond | ovs_bridge | interface |
| OVS-DPDK bonds | ovs_dpdk_bond | ovs_user_bridge | ovs_dpdk_port |
| Linux kernel bonds | linux_bond | ovs_bridge or linux_bridge | interface |
Do not combine ovs_bridge and ovs_user_bridge on the same node.
12.2. Creating Open vSwitch (OVS) bonds
You create OVS bonds in your network interface templates. For example, you can create a bond as part of an OVS user space bridge:
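A sketch of a DPDK bond inside an OVS user space bridge; the bridge, port, and NIC names are illustrative:

- type: ovs_user_bridge
  name: br-dpdk0
  members:
    - type: ovs_dpdk_bond
      name: dpdkbond0
      members:
        - type: ovs_dpdk_port
          name: dpdk0
          members:
            - type: interface
              name: nic4
        - type: ovs_dpdk_port
          name: dpdk1
          members:
            - type: interface
              name: nic5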
In this example, you create the bond from two DPDK ports.
The ovs_options parameter contains the bonding options. You can configure bonding options in a network environment file with the BondInterfaceOvsOptions parameter:

parameter_defaults:
  BondInterfaceOvsOptions: "bond_mode=balance-slb"
12.3. Open vSwitch (OVS) bonding options
You can set various Open vSwitch (OVS) bonding options with the ovs_options heat parameter in your NIC template files. The active-backup, balance-tlb, balance-alb and balance-slb modes do not require any specific configuration of the switch.
bond_mode=balance-slb-
Source load balancing (slb) balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change. When you configure a bond with the
balance-slbbonding option, there is no configuration required on the remote switch. The Networking service (neutron) assigns each source MAC and VLAN pair to a link and transmits all packets from that MAC and VLAN through that link. A simple hashing algorithm based on source MAC address and VLAN number is used, with periodic rebalancing as traffic patterns change. Thebalance-slbmode is similar to mode 2 bonds used by the Linux bonding driver, although unlike mode 2,balance-slbdoes not require any specific configuration of the swtich. You can use thebalance-slbmode to provide load balancing even when the switch is not configured to use LACP. bond_mode=active-backup-
When you configure a bond using
active-backupbond mode, the Networking service keeps one NIC in standby. The standby NIC resumes network operations when the active connection fails. Only one MAC address is presented to the physical switch. This mode does not require switch configuration, and works when the links are connected to separate switches. This mode does not provide load balancing. lacp=[active | passive | off]-
Controls the Link Aggregation Control Protocol (LACP) behavior. Only certain switches support LACP. If your switch does not support LACP, use
bond_mode=balance-slborbond_mode=active-backup. other-config:lacp-fallback-ab=true- Set active-backup as the bond mode if LACP fails.
other_config:lacp-time=[fast | slow]- Set the LACP heartbeat to one second (fast) or 30 seconds (slow). The default is slow.
other_config:bond-detect-mode=[miimon | carrier]- Set the link detection to use miimon heartbeats (miimon) or monitor carrier (carrier). The default is carrier.
other_config:bond-miimon-interval=100- If using miimon, set the heartbeat interval (milliseconds).
bond_updelay=1000- Set the interval (milliseconds) that a link must be up before it is activated, to prevent flapping.

other_config:bond-rebalance-interval=10000- Set the interval (milliseconds) at which flows are rebalanced between bond members. Set this value to zero to disable flow rebalancing between bond members.
12.4. Using Link Aggregation Control Protocol (LACP) with Open vSwitch (OVS) bonding modes
You can use bonds with the optional Link Aggregation Control Protocol (LACP). LACP is a negotiation protocol that creates a dynamic bond for load balancing and fault tolerance.
Use the following table to understand support compatibility for OVS kernel and OVS-DPDK bonded interfaces in conjunction with LACP options.
The OVS/OVS-DPDK balance-tcp mode is available as a technology preview only.
On control and storage networks, Red Hat recommends that you use Linux bonds with VLAN and LACP, because OVS bonds carry the potential for control plane disruption that can occur when OVS or the neutron agent is restarted for updates, hot fixes, and other events. The Linux bond/LACP/VLAN configuration provides NIC management without the OVS disruption potential.
| Objective | OVS bond mode | Compatible LACP options | Notes |
|---|---|---|---|
| High availability (active-passive) | active-backup | active, passive, or off | This mode does not provide load balancing. |
| Increased throughput (active-active) | balance-slb | active, passive, or off | Provides load balancing without requiring LACP on the switch. |
| Increased throughput (active-active) | balance-tcp | active or passive | Available as a technology preview only; requires LACP negotiation. |
12.5. Creating Linux bonds
You create Linux bonds in your network interface templates. For example, you can create a Linux bond that bonds two interfaces:
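A minimal sketch, assuming illustrative interface names nic2 and nic3:

- type: linux_bond
  name: bond1
  mtu: 1500
  bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100"
  members:
  - type: interface
    name: nic2
    primary: true
  - type: interface
    name: nic3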
The bonding_options parameter sets the specific bonding options for the Linux bond.
mode-
Sets the bonding mode, which in the example is
802.3ador LACP mode. For more information about Linux bonding modes, see "Upstream Switch Configuration Depending on the Bonding Modes" in the Red Hat Enterprise Linux 8 Configuring and Managing Networking guide. lacp_rate- Defines whether LACP packets are sent every 1 second, or every 30 seconds.
updelay- Defines the minimum amount of time that an interface must be active before it is used for traffic. This minimum configuration helps to mitigate port flapping outages.
miimon- The interval in milliseconds that is used for monitoring the port state using the MIIMON functionality of the driver.
Use the following additional examples as guides to configure your own Linux bonds:
Linux bond set to active-backup mode with one VLAN:
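A sketch of this configuration, assuming illustrative interface names and the InternalApi network:

- type: linux_bond
  name: bond_api
  bonding_options: "mode=active-backup"
  use_dhcp: false
  members:
  - type: interface
    name: nic3
    primary: true
  - type: interface
    name: nic4
- type: vlan
  vlan_id:
    get_param: InternalApiNetworkVlanID
  device: bond_api
  addresses:
  - ip_netmask:
      get_param: InternalApiIpSubnet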
Linux bond on OVS bridge. Bond set to 802.3ad LACP mode with one VLAN:
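A sketch of this configuration, assuming an illustrative bridge name and the Tenant network:

- type: ovs_bridge
  name: br-tenant
  use_dhcp: false
  members:
  - type: linux_bond
    name: bond_tenant
    bonding_options: "mode=802.3ad updelay=1000 miimon=100"
    members:
    - type: interface
      name: p1p1
      primary: true
    - type: interface
      name: p1p2
  - type: vlan
    device: bond_tenant
    vlan_id:
      get_param: TenantNetworkVlanID
    addresses:
    - ip_netmask:
        get_param: TenantIpSubnet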
Chapter 13. Controlling node placement
By default, director selects nodes for each role randomly, usually according to the profile tag of the node. However, you can also define specific node placement. This is useful in the following scenarios:
- Assign specific node IDs, for example, controller-0, controller-1
- Assign custom host names
- Assign specific IP addresses
- Assign specific Virtual IP addresses
Manually setting predictable IP addresses, virtual IP addresses, and ports for a network alleviates the need for allocation pools. However, it is recommended to retain allocation pools for each network to ease scaling with new nodes. Ensure that any statically defined IP addresses fall outside the allocation pools.
13.1. Assigning specific node IDs
You can assign node IDs to specific nodes, for example, controller-0, controller-1, compute-0, and compute-1.
Procedure
Assign the ID as a per-node capability that the Compute scheduler matches on deployment:
openstack baremetal node set --property capabilities='node:controller-0,boot_option:local' <id>

This command assigns the capability node:controller-0 to the node. Repeat this pattern using a unique continuous index, starting from 0, for all nodes. Ensure that all nodes for a given role (Controller, Compute, or each of the storage roles) are tagged in the same way, or the Compute scheduler cannot match the capabilities correctly.

Create a heat environment file (for example, scheduler_hints_env.yaml) that uses scheduler hints to match the capabilities for each node:

parameter_defaults:
  ControllerSchedulerHints:
    'capabilities:node': 'controller-%index%'

Use the following parameters to configure scheduler hints for other role types:
- ControllerSchedulerHints for Controller nodes.
- ComputeSchedulerHints for Compute nodes.
- BlockStorageSchedulerHints for Block Storage nodes.
- ObjectStorageSchedulerHints for Object Storage nodes.
- CephStorageSchedulerHints for Ceph Storage nodes.
- [ROLE]SchedulerHints for custom roles. Replace [ROLE] with the role name.
- Include the scheduler_hints_env.yaml environment file in the overcloud deploy command.
Node placement takes priority over profile matching. To avoid scheduling failures, use the default baremetal flavor for deployment and not the flavors that are designed for profile matching (compute, control). Set the respective flavor parameters to baremetal in an environment file:
parameter_defaults:
  OvercloudControllerFlavor: baremetal
  OvercloudComputeFlavor: baremetal
13.2. Assigning custom host names
In combination with the node ID configuration in Section 13.1, “Assigning specific node IDs”, director can also assign a specific custom host name to each node. This is useful when you need to define where a system is located (for example, rack2-row12), match an inventory identifier, or other situations where a custom hostname is desirable.
Do not rename a node after it has been deployed. Renaming a node after deployment creates issues with instance management.
Procedure
Use the
HostnameMap parameter in an environment file, such as the scheduler_hints_env.yaml file from Section 13.1, “Assigning specific node IDs”.

Define the HostnameMap in the parameter_defaults section. Set each mapping from the original hostname that heat defines with the HostnameFormat parameters (for example, overcloud-controller-0) to the desired custom hostname for that node (for example, overcloud-controller-prod-123-0).
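For example (the custom names beyond the first mapping are illustrative):

parameter_defaults:
  HostnameMap:
    overcloud-controller-0: overcloud-controller-prod-123-0
    overcloud-controller-1: overcloud-controller-prod-456-0
    overcloud-controller-2: overcloud-controller-prod-789-0
    overcloud-compute-0: overcloud-compute-prod-abc-0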
Use this method in combination with the node ID placement to ensure that each node has a custom hostname.
13.3. Assigning predictable IPs
For further control over the resulting environment, director can assign specific IP addresses to overcloud nodes on each network.
Procedure
Create an environment file to define the predictive IP addressing:
$ touch ~/templates/predictive_ips.yaml
parameter_defaultssection in the~/templates/predictive_ips.yamlfile and use the following syntax to define predictive IP addressing for each node on each network:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Each node role has a unique parameter. Replace
<role_name>IPswith the relevant parameter:-
- ControllerIPs for Controller nodes.
- ComputeIPs for Compute nodes.
- CephStorageIPs for Ceph Storage nodes.
- BlockStorageIPs for Block Storage nodes.
- SwiftStorageIPs for Object Storage nodes.
- [ROLE]IPs for custom roles. Replace [ROLE] with the role name.

Each parameter is a map of network names to a list of addresses. Each network type must have at least as many addresses as there will be nodes on that network. Director assigns addresses in order. The first node of each type receives the first address on each respective list, the second node receives the second address on each respective list, and so forth.
For example, use the following syntax if you want to deploy three Ceph Storage nodes in your overcloud with predictive IP addresses:
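A sketch of this assignment, assuming the two networks involved are storage and storage_mgmt (the network names are assumptions; the addresses match the description that follows):

parameter_defaults:
  CephStorageIPs:
    storage:
    - 172.16.1.100
    - 172.16.1.101
    - 172.16.1.102
    storage_mgmt:
    - 172.16.3.100
    - 172.16.3.101
    - 172.16.3.102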
The first Ceph Storage node receives two addresses: 172.16.1.100 and 172.16.3.100. The second receives 172.16.1.101 and 172.16.3.101, and the third receives 172.16.1.102 and 172.16.3.102. The same pattern applies to the other node types.
To configure predictable IP addresses on the control plane, copy the
/usr/share/openstack-tripleo-heat-templates/environments/ips-from-pool-ctlplane.yaml file to the templates directory of the stack user:

$ cp /usr/share/openstack-tripleo-heat-templates/environments/ips-from-pool-ctlplane.yaml ~/templates/.

Configure the new
ips-from-pool-ctlplane.yaml file with the following parameter example. You can combine the control plane IP address declarations with the IP address declarations for other networks and use only one file to declare the IP addresses for all networks on all roles. You can also use predictable IP addresses for spine/leaf. Each node must have IP addresses from the correct subnet.
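A sketch of such a declaration, assuming three Controller nodes and illustrative ctlplane addresses:

parameter_defaults:
  ControllerIPs:
    ctlplane:
    - 192.168.24.251
    - 192.168.24.252
    - 192.168.24.253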
Ensure that the IP addresses that you choose fall outside the allocation pools for each network that you define in your network environment file. For example, ensure that the internal_api assignments fall outside of the InternalApiAllocationPools range to avoid conflicts with any IPs chosen automatically. Also ensure that the IP assignments do not conflict with the VIP configuration, either for standard predictable VIP placement (see Section 13.4, “Assigning predictable Virtual IPs”) or external load balancing (see Section 21.4, “Configuring external load balancing”).

Important: If an overcloud node is deleted, do not remove its entries in the IP lists. The IP list is based on the underlying heat indices, which do not change even if you delete nodes. To indicate a given entry in the list is no longer used, replace the IP value with a value such as
DELETED or UNUSED. Entries should never be removed from the IP lists, only changed or added.
- To apply this configuration during a deployment, include the predictive_ips.yaml environment file with the openstack overcloud deploy command.

  Important: If you use network isolation, include the predictive_ips.yaml file after the network-isolation.yaml file:

  $ openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e ~/templates/predictive_ips.yaml \
    [OTHER OPTIONS]
13.4. Assigning predictable Virtual IPs
In addition to defining predictable IP addresses for each node, you can also define predictable Virtual IPs (VIPs) for clustered services.
Procedure
Edit the network environment file and add the VIP parameters in the
parameter_defaults section:
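A sketch of these parameters, assuming the standard predictable VIP parameter names and illustrative addresses:

parameter_defaults:
  ControlFixedIPs: [{'ip_address':'192.168.201.101'}]
  InternalApiVirtualFixedIPs: [{'ip_address':'172.16.0.9'}]
  PublicVirtualFixedIPs: [{'ip_address':'10.1.1.9'}]
  StorageVirtualFixedIPs: [{'ip_address':'172.18.0.9'}]
  StorageMgmtVirtualFixedIPs: [{'ip_address':'172.19.0.9'}]
  RedisVirtualFixedIPs: [{'ip_address':'172.16.0.8'}]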
Select these IPs from outside of their respective allocation pool ranges. For example, select an IP address for InternalApiVirtualFixedIPs that is not within the InternalApiAllocationPools range.
This step is only for overclouds that use the default internal load balancing configuration. If you want to assign VIPs with an external load balancer, use the procedure in the dedicated External Load Balancing for the Overcloud guide.
Chapter 14. Enabling SSL/TLS on overcloud public endpoints
By default, the overcloud uses unencrypted endpoints for the overcloud services. To enable SSL/TLS in your overcloud, Red Hat recommends that you use a certificate authority (CA) solution.
When you use a certificate authority (CA) solution, you have production-ready features such as certificate renewals, certificate revocation lists (CRLs), and industry-accepted cryptography. For information on using Red Hat Identity Manager (IdM) as a CA, see Implementing TLS-e with Ansible.
You can use the following manual process to enable SSL/TLS for Public API endpoints only; the Internal and Admin APIs remain unencrypted. You must also manually update SSL/TLS certificates if you do not use a CA. For more information, see Manually updating SSL/TLS certificates.
Prerequisites
- Network isolation to define the endpoints for the Public API.
- The openssl-perl package is installed.
- You have an SSL/TLS certificate. For more information, see Configuring custom SSL/TLS certificates.
14.1. Initializing the signing host
The signing host is the host that generates and signs new certificates with a certificate authority. If you have never created SSL certificates on the chosen signing host, you might need to initialize the host so that it can sign new certificates.
Procedure
The
/etc/pki/CA/index.txt file contains records of all signed certificates. Ensure that the filesystem path and index.txt file are present:

$ sudo mkdir -p /etc/pki/CA
$ sudo touch /etc/pki/CA/index.txt
/etc/pki/CA/serial file identifies the next serial number to use for the next certificate to sign. Check if this file exists. If the file does not exist, create a new file with a new starting value:

$ echo '1000' | sudo tee /etc/pki/CA/serial
14.2. Creating a certificate authority
Normally you sign your SSL/TLS certificates with an external certificate authority. In some situations, you might want to use your own certificate authority. For example, you might want to have an internal-only certificate authority.
Procedure
Generate a key and certificate pair to act as the certificate authority:
$ openssl genrsa -out ca.key.pem 4096
$ openssl req -key ca.key.pem -new -x509 -days 7300 -extensions v3_ca -out ca.crt.pem
The
openssl reqcommand requests certain details about your authority. Enter these details at the prompt. These commands create a certificate authority file calledca.crt.pem. Set the certificate location as the value for the
PublicTLSCAFile parameter in the enable-tls.yaml file. When you set the certificate location as the value for the PublicTLSCAFile parameter, you ensure that the CA certificate path is added to the clouds.yaml authentication file.

parameter_defaults:
  PublicTLSCAFile: /etc/pki/ca-trust/source/anchors/cacert.pem
14.3. Adding the certificate authority to clients
For any external clients aiming to communicate using SSL/TLS, copy the certificate authority file to each client that requires access to your Red Hat OpenStack Platform environment.
Procedure
Copy the certificate authority to the client system:
$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract
14.4. Creating an SSL/TLS key
Enabling SSL/TLS on an OpenStack environment requires an SSL/TLS key to generate your certificates.
Procedure
Run the following command to generate the SSL/TLS key (
server.key.pem):

$ openssl genrsa -out server.key.pem 2048
14.5. Creating an SSL/TLS certificate signing request
Complete the following steps to create a certificate signing request.
Procedure
Copy the default OpenSSL configuration file:
$ cp /etc/pki/tls/openssl.cnf .

Edit the new
openssl.cnf file and configure the SSL parameters that you want to use for director. The following is an example of the types of parameters to modify:
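A sketch of the relevant sections, with illustrative values:

[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req

[req_distinguished_name]
countryName = Country Name (2 letter code)
countryName_default = AU
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Queensland
localityName = Locality Name (eg, city)
localityName_default = Brisbane
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_default = Red Hat
commonName = Common Name
commonName_default = 192.168.0.1
commonName_max = 64

[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
IP.1 = 192.168.0.1
DNS.1 = instack.localdomain
DNS.2 = vip.localdomain
DNS.3 = 192.168.0.1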
Set the commonName_default to one of the following entries:
If you are using an IP address to access director over SSL/TLS, use the
undercloud_public_hostparameter in theundercloud.conffile. - If you are using a fully qualified domain name to access director over SSL/TLS, use the domain name.
Edit the
alt_namessection to include the following entries:-
IP- A list of IP addresses that clients use to access director over SSL. -
DNS- A list of domain names that clients use to access director over SSL. Also include the Public API IP address as a DNS entry at the end of thealt_namessection.
Note: For more information about openssl.cnf, run the man openssl.cnf command.
Run the following command to generate a certificate signing request (
server.csr.pem):

$ openssl req -config openssl.cnf -key server.key.pem -new -out server.csr.pem

Ensure that you include your OpenStack SSL/TLS key with the -key option.
This command generates a server.csr.pem file, which is the certificate signing request. Use this file to create your OpenStack SSL/TLS certificate.
14.6. Creating the SSL/TLS certificate
To generate the SSL/TLS certificate for your OpenStack environment, the following files must be present:
openssl.cnf- The customized configuration file that specifies the v3 extensions.
server.csr.pem- The certificate signing request to generate and sign the certificate with a certificate authority.
ca.crt.pem- The certificate authority, which signs the certificate.
ca.key.pem- The certificate authority private key.
Procedure
Create the
newcerts directory if it does not already exist:

$ sudo mkdir -p /etc/pki/CA/newcerts

Run the following command to create a certificate for your undercloud or overcloud:

$ sudo openssl ca -config openssl.cnf -extensions v3_req -days 3650 -in server.csr.pem -out server.crt.pem -cert ca.crt.pem -keyfile ca.key.pem

This command uses the following options:
-config-
Use a custom configuration file, which is the
openssl.cnf file with v3 extensions.

-extensions v3_req- Enables v3 extensions.
-days- Defines how long in days until the certificate expires.
-in- The certificate signing request.
-out- The resulting signed certificate.
-cert- The certificate authority file.
-keyfile- The certificate authority private key.
This command creates a new certificate named server.crt.pem. Use this certificate in conjunction with your OpenStack SSL/TLS key.
14.7. Enabling SSL/TLS

To enable SSL/TLS in your overcloud, you must create an environment file that contains parameters for your SSL/TLS certificates and private key.
Procedure
Copy the
enable-tls.yaml environment file from the heat template collection:

$ cp -r /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml ~/templates/.

Edit this file and make the following changes for these parameters:
- SSLCertificate
Copy the contents of the certificate file (
server.crt.pem) into the SSLCertificate parameter:

Important: The certificate contents require the same indentation level for all new lines.
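For example (the certificate body is a placeholder; the SSLIntermediateCertificate and SSLKey parameters that follow use the same pattern):

parameter_defaults:
  SSLCertificate: |
    -----BEGIN CERTIFICATE-----
    MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQEBCwUAMFgxCzAJBg
    ...
    sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvp
    -----END CERTIFICATE-----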
- SSLIntermediateCertificate
If you have an intermediate certificate, copy the contents of the intermediate certificate into the
SSLIntermediateCertificate parameter:

Important: The certificate contents require the same indentation level for all new lines.
- SSLKey
Copy the contents of the private key (
server.key.pem) into the SSLKey parameter:

Important: The private key contents require the same indentation level for all new lines.
14.8. Injecting a root certificate
If the certificate signer is not in the default trust store on the overcloud image, you must inject the certificate authority into the overcloud image.
Procedure
Copy the
inject-trust-anchor-hiera.yaml environment file from the heat template collection:

$ cp -r /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor-hiera.yaml ~/templates/.
Edit this file and make the following changes for these parameters:
- CAMap
Lists each certificate authority content (CA) to inject into the overcloud. The overcloud requires the CA files used to sign the certificates for both the undercloud and the overcloud. Copy the contents of the root certificate authority file (
ca.crt.pem) into an entry. For example, yourCAMapparameter might look like the following:Copy to Clipboard Copied! Toggle word wrap Toggle overflow ImportantThe certificate authority contents require the same indentation level for all new lines.
You can also inject additional CAs with the CAMap parameter.
14.9. Configuring DNS endpoints
If you use a DNS hostname to access the overcloud through SSL/TLS, copy the /usr/share/openstack-tripleo-heat-templates/environments/predictable-placement/custom-domain.yaml file into the /home/stack/templates directory.
It is not possible to redeploy with a TLS-everywhere architecture if this environment file is not included in the initial deployment.
Configure the host and domain names for all fields, adding parameters for custom networks if needed:
- CloudDomain
- The DNS domain for hosts.
- CloudName
- The DNS hostname of the overcloud endpoints.
- CloudNameCtlplane
- The DNS name of the provisioning network endpoint.
- CloudNameInternal
- The DNS name of the Internal API endpoint.
- CloudNameStorage
- The DNS name of the storage endpoint.
- CloudNameStorageManagement
- The DNS name of the storage management endpoint.
- DnsServers
-
A list of DNS servers that you want to use. The configured DNS servers must contain an entry for the configured
CloudNamethat matches the IP address of the Public API.
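For example, a sketch of the custom-domain.yaml settings, assuming the illustrative domain localdomain:

parameter_defaults:
  CloudDomain: localdomain
  CloudName: overcloud.localdomain
  CloudNameCtlplane: overcloud.ctlplane.localdomain
  CloudNameInternal: overcloud.internalapi.localdomain
  CloudNameStorage: overcloud.storage.localdomain
  CloudNameStorageManagement: overcloud.storagemgmt.localdomain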
Procedure
Add a list of DNS servers to use under parameter defaults, in either a new or existing environment file:
parameter_defaults:
  DnsServers: ["10.0.0.254"]
  ....

Tip: You can use the CloudName{network.name} definition to set the DNS name for an API endpoint on a composable network that uses a virtual IP. For more information, see Adding a composable network.
14.10. Adding environment files during overcloud creation
Use the -e option with the deployment command openstack overcloud deploy to include environment files in the deployment process. Add the environment files from this section in the following order:
-
The environment file to enable SSL/TLS (
enable-tls.yaml) -
The environment file to set the DNS hostname (
custom-domain.yaml) -
The environment file to inject the root certificate authority (
inject-trust-anchor-hiera.yaml) The environment file to set the public endpoint mapping:
-
If you use a DNS name for accessing the public endpoints, use
/usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml -
If you use an IP address for accessing the public endpoints, use
/usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml
Procedure
- Use the following deployment command snippet as an example of how to include your SSL/TLS environment files:
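A sketch of such a command, assuming the DNS-based endpoint mapping and the file locations used earlier in this chapter:

$ openstack overcloud deploy --templates \
  [...]
  -e ~/templates/enable-tls.yaml \
  -e ~/templates/custom-domain.yaml \
  -e ~/templates/inject-trust-anchor-hiera.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml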
14.11. Manually updating SSL/TLS certificates
Complete the following steps if you are using your own SSL/TLS certificates that are not auto-generated from the TLS everywhere (TLS-e) process.
Procedure
Edit your heat templates with the following content:
- Edit the enable-tls.yaml file and update the SSLCertificate, SSLKey, and SSLIntermediateCertificate parameters.
- If your certificate authority has changed, edit the inject-trust-anchor-hiera.yaml file and update the CAMap parameter.

Rerun the deployment command:
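A sketch of the rerun, assuming the same environment files as the original deployment:

$ openstack overcloud deploy --templates \
  [...]
  -e ~/templates/enable-tls.yaml \
  -e ~/templates/inject-trust-anchor-hiera.yaml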
Chapter 15. Enabling SSL/TLS on internal and public endpoints with Identity Management
You can enable SSL/TLS on certain overcloud endpoints. Due to the number of certificates required, director integrates with a Red Hat Identity Management (IdM) server to act as a certificate authority and manage the overcloud certificates.
To check the status of TLS support across the OpenStack components, refer to the TLS Enablement status matrix.
15.1. Identity Management (IdM) server recommendations for OpenStack
Red Hat provides the following information to help you integrate your IdM server and OpenStack environment.
For information on preparing Red Hat Enterprise Linux for an IdM installation, see Installing Identity Management.
Run the ipa-server-install command to install and configure IdM. You can use command parameters to skip interactive prompts. Use the following recommendations so that your IdM server can integrate with your Red Hat OpenStack Platform environment:
| Option | Recommendation |
|---|---|
| --admin-password | Note the value you provide. You will need this password when configuring Red Hat OpenStack Platform to work with IdM. |
| --ip-address | Note the value you provide. The undercloud and overcloud nodes require network access to this IP address. |
| --setup-dns | Use this option to install an integrated DNS service on the IdM server. The undercloud and overcloud nodes use the IdM server for domain name resolution. |
| --auto-forwarders | Use this option to use the addresses in /etc/resolv.conf as DNS forwarders. |
| --auto-reverse | Use this option to resolve reverse records and zones for the IdM server IP addresses. If neither reverse records nor zones are resolvable, IdM creates the reverse zones. This simplifies the IdM deployment. |
| --ntp-server, --ntp-pool | You can use both or either of these options to configure your NTP source. Both the IdM server and your OpenStack environment must have correct and synchronized time. |
You must open the firewall ports required by IdM to enable communication with Red Hat OpenStack Platform nodes. For more information, see Opening the ports required by IdM.
Additional resources
15.2. Implementing TLS-e with Ansible
You can use the new tripleo-ipa method to enable SSL/TLS on overcloud endpoints, called TLS everywhere (TLS-e). Due to the number of certificates required, Red Hat OpenStack Platform integrates with Red Hat Identity management (IdM). When you use tripleo-ipa to configure TLS-e, IdM is the certificate authority.
Prerequisites
Ensure that all configuration steps for the undercloud, such as the creation of the stack user, are complete. For more details, see Director Installation and Usage.
Procedure
Use the following procedure to implement TLS-e on a new installation of Red Hat OpenStack Platform, or an existing deployment that you want to configure with TLS-e. You must use this method if you deploy Red Hat OpenStack Platform with TLS-e on pre-provisioned nodes.
If you are implementing TLS-e for an existing environment, you are required to run commands such as openstack undercloud install, and openstack overcloud deploy. These procedures are idempotent and only adjust your existing deployment configuration to match updated templates and configuration files.
Configure the
/etc/resolv.conffile:Set the appropriate search domains and the nameserver on the undercloud in
/etc/resolv.conf. For example, if the deployment domain is example.com, and the domain of the FreeIPA server is bigcorp.com, then add the following lines to /etc/resolv.conf:

search example.com bigcorp.com
nameserver $IDM_SERVER_IP_ADDR

Install required software:
$ sudo dnf install -y python3-ipalib python3-ipaclient krb5-devel

Export environment variables with values specific to your environment:
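A sketch of these exports, assuming the variable names that the tripleo-ipa playbook expects (verify them against your version; the values are illustrative):

export IPA_DOMAIN=bigcorp.com
export IPA_REALM=BIGCORP.COM
export IPA_ADMIN_USER=$IPA_USER
export IPA_ADMIN_PASSWORD=$IPA_PASSWORD
export IPA_SERVER_HOSTNAME=ipa.bigcorp.com
export UNDERCLOUD_FQDN=undercloud.example.com
export USER=stack
export CLOUD_DOMAIN=example.com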
Run the
undercloud-ipa-install.yaml ansible playbook on the undercloud:

ansible-playbook \
  --ssh-extra-args "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" \
  /usr/share/ansible/tripleo-playbooks/undercloud-ipa-install.yaml

Add the following parameters to undercloud.conf:
undercloud_nameservers = $IDM_SERVER_IP_ADDR
overcloud_domain_name = example.com

[Optional] If your IPA realm does not match your IPA domain, set the value of the
certmonger_krb_realmparameter:Set the value of the
certmonger_krb_realm in /home/stack/hiera_override.yaml:

parameter_defaults:
  certmonger_krb_realm: EXAMPLE.COMPANY.COM

Set the value of the custom_env_files parameter in undercloud.conf to /home/stack/hiera_override.yaml:

custom_env_files = /home/stack/hiera_override.yaml
Deploy the undercloud:
$ openstack undercloud install
Verification
Verify that the undercloud was enrolled correctly by completing the following steps:
List the hosts in IdM:
$ kinit admin
$ ipa host-find
/etc/novajoin/krb5.keytab exists on the undercloud.

$ ls /etc/novajoin/krb5.keytab
The novajoin directory name is for legacy naming purposes only.
Configuring TLS-e on the overcloud
When you deploy the overcloud with TLS everywhere (TLS-e), IP addresses from the undercloud and overcloud are automatically registered with IdM.
Before deploying the overcloud, create a YAML file
tls-parameters.yaml with contents similar to the following. The values you select will be specific for your environment:
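A sketch of the file, with illustrative values:

parameter_defaults:
  DnsSearchDomains: ["example.com"]
  DnsServers: ["192.168.1.13"]
  CloudDomain: example.com
  CloudName: overcloud.example.com
  CloudNameInternal: overcloud.internalapi.example.com
  CloudNameStorage: overcloud.storage.example.com
  CloudNameStorageManagement: overcloud.storagemgmt.example.com
  CloudNameCtlplane: overcloud.ctlplane.example.com
  IdMServer: freeipa-0.example.com
  IdMDomain: example.com
  IdMInstallClientPackages: False

resource_registry:
  OS::TripleO::Services::IpaClient: /usr/share/openstack-tripleo-heat-templates/deployment/ipa/ipaservices-baremetal-ansible.yaml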
The shown value of the
OS::TripleO::Services::IpaClientparameter overrides the default setting in theenable-internal-tls.yamlfile. You must ensure thetls-parameters.yamlfile followsenable-internal-tls.yamlin theopenstack overcloud deploycommand.
[Optional] If your IPA realm does not match your IPA domain, you must also include the value of the
CertmongerKerberosRealm parameter in the tls-parameters.yaml file:

CertmongerKerberosRealm: EXAMPLE.COMPANY.COM

Deploy the overcloud. You must include the tls-parameters.yaml file in the deployment command:
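A sketch of the deployment command, assuming the default template locations:

DEFAULT_TEMPLATES=/usr/share/openstack-tripleo-heat-templates/
CUSTOM_TEMPLATES=/home/stack/templates

openstack overcloud deploy \
  -e ${DEFAULT_TEMPLATES}/environments/ssl/tls-everywhere-endpoints-dns.yaml \
  -e ${DEFAULT_TEMPLATES}/environments/services/haproxy-public-tls-certmonger.yaml \
  -e ${DEFAULT_TEMPLATES}/environments/ssl/enable-internal-tls.yaml \
  -e ${CUSTOM_TEMPLATES}/tls-parameters.yaml \
  ...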
Confirm each endpoint is using HTTPS by querying keystone for a list of endpoints:
$ openstack endpoint list
15.3. Enrolling nodes in Red Hat Identity Manager (IdM) with novajoin
Novajoin is the default tool that you use to enroll your nodes with Red Hat Identity Manager (IdM) as part of the deployment process. Red Hat recommends the new ansible-based tripleo-ipa solution over the default novajoin solution to configure your undercloud and overcloud with TLS-e. For more information see Implementing TLS-e with Ansible.
You must perform the enrollment process before you proceed with the rest of the IdM integration. The enrollment process includes the following steps:
- Adding the undercloud node to the certificate authority (CA)
- Adding the undercloud node to IdM
- Optional: Setting the IdM server as the DNS server for the overcloud
- Preparing the environment files and deploying the overcloud
- Testing the overcloud enrollment in IdM and in RHOSP
- Optional: Adding DNS entries for novajoin in IdM
IdM enrollment with novajoin is currently only available for the undercloud and overcloud nodes. Novajoin integration for overcloud instances is expected to be supported in a later release.
15.4. Adding the undercloud node to the certificate authority
Before you deploy the overcloud, add the undercloud to the certificate authority (CA) by installing the python3-novajoin package on the undercloud node and running the novajoin-ipa-setup script.
Procedure
On the undercloud node, install the
python3-novajoin package:

$ sudo dnf install python3-novajoin

On the undercloud node, run the
novajoin-ipa-setup script, and adjust the values to suit your deployment. Use the resulting One-Time Password (OTP) to enroll the undercloud.
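A sketch of the invocation:

$ sudo /usr/libexec/novajoin-ipa-setup \
    --principal admin \
    --password <IdM admin password> \
    --server <IdM server hostname> \
    --realm <realm> \
    --domain <overcloud cloud domain> \
    --hostname <undercloud hostname> \
    --precreate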
15.5. Adding the undercloud node to Red Hat Identity Manager (IdM)
After you add the undercloud node to the certificate authority (CA), register the undercloud with IdM and configure novajoin. Configure the following settings in the [DEFAULT] section of the undercloud.conf file.
Procedure
Enable the
novajoin service:

[DEFAULT]
enable_novajoin = true

Set a One-Time Password (OTP) so that you can register the undercloud node with IdM:

ipa_otp = <otp>

Set the overcloud’s domain name to be served by neutron’s DHCP server:

overcloud_domain_name = <domain>

Set the hostname for the undercloud:

undercloud_hostname = <undercloud FQDN>

Set IdM as the nameserver for the undercloud:

undercloud_nameservers = <IdM IP>

For larger environments, review the novajoin connection timeout values. In the undercloud.conf file, add a reference to a new file called undercloud-timeout.yaml:

hieradata_override = /home/stack/undercloud-timeout.yaml

Add the following options to undercloud-timeout.yaml. You can specify the timeout value in seconds, for example, 5:

nova::api::vendordata_dynamic_connect_timeout: <timeout value>
nova::api::vendordata_dynamic_read_timeout: <timeout value>

Optional: If you want the local openSSL certificate authority to generate the SSL certificates for the public endpoints in director, set the generate_service_certificate parameter to true:

generate_service_certificate = true

Save the undercloud.conf file.

Run the undercloud deployment command to apply the changes to your existing undercloud:

$ openstack undercloud install
Verification
Verify that the undercloud was enrolled correctly by completing the following steps:
List the hosts in IdM:
$ kinit admin
$ ipa host-find
/etc/novajoin/krb5.keytab exists on the undercloud.

$ ls /etc/novajoin/krb5.keytab
15.6. Setting Red Hat Identity Manager (IdM) as the DNS server for the overcloud
To enable automatic detection of your IdM environment and easier enrollment, set IdM as your DNS server. This procedure is optional but recommended.
Procedure
Connect to your undercloud:
$ source ~/stackrc

Configure the control plane subnet to use IdM as the DNS name server:
$ openstack subnet set ctlplane-subnet --dns-nameserver <idm_server_address>

Set the
DnsServers parameter in an environment file to use your IdM server:

parameter_defaults:
  DnsServers: ["<idm_server_address>"]

This parameter is usually defined in a custom
network-environment.yamlfile.
15.7. Preparing environment files and deploying the overcloud with novajoin enrollment
To deploy the overcloud with IdM integration, you create and edit environment files to configure the overcloud to use the custom domain parameters CloudDomain and CloudName based on the domains that you define in the overcloud. You then deploy the overcloud with all the environment files and any additional environment files that you need for the deployment.
Procedure
Create a copy of the
/usr/share/openstack-tripleo-heat-templates/environments/predictable-placement/custom-domain.yamlenvironment file:cp /usr/share/openstack-tripleo-heat-templates/environments/predictable-placement/custom-domain.yaml \ /home/stack/templates/custom-domain.yaml
$ cp /usr/share/openstack-tripleo-heat-templates/environments/predictable-placement/custom-domain.yaml \ /home/stack/templates/custom-domain.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Edit the
/home/stack/templates/custom-domain.yaml environment file and set the CloudDomain and CloudName* values to suit your deployment.

Choose the implementation of TLS appropriate for your environment:
Use the
enable-tls.yamlenvironment file to protect external endpoints with your custom certificate:-
Copy
/usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yamlto/home/stack/templates. -
Modify the
/home/stack/templates/enable-tls.yaml environment file to include your custom certificate and key.
- enable-internal-tls.yaml
- tls-everywhere-endpoints-dns.yaml
- custom-domain.yaml
- enable-tls.yaml
Use the
haproxy-public-tls-certmonger.yaml environment file to protect external endpoints with an IdM issued certificate. For this implementation, you must create DNS entries for the VIP endpoints that novajoin uses:

Identify the overcloud networks located in your custom network-environment.yaml file in /home/stack/templates:
/home/stack/public_vip.yaml.Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add DNS entries to the IdM for each of the VIPs, and zones as needed:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Include the following environment files in your deployment to protect internal and external endpoints:
- enable-internal-tls.yaml
- tls-everywhere-endpoints-dns.yaml
- haproxy-public-tls-certmonger.yaml
- custom-domain.yaml
- public_vip.yaml
You cannot use novajoin to implement TLS everywhere (TLS-e) on a pre-existing deployment.
Chapter 17. Storage configuration
This chapter outlines several methods that you can use to configure storage options for your overcloud.
The overcloud uses local ephemeral storage and Logical Volume Manager (LVM) storage for the default storage options. Local ephemeral storage is supported in production environments, but LVM storage is not.
17.1. Configuring NFS storage
You can configure the overcloud to use shared NFS storage.
17.1.1. Supported configurations and limitations
Supported NFS storage
- Red Hat recommends that you use a certified storage back end and driver. Red Hat does not recommend that you use NFS storage that comes from the generic NFS back end, because its capabilities are limited compared to a certified storage back end and driver. For example, the generic NFS back end does not support features such as volume encryption and volume multi-attach. For information about supported drivers, see the Red Hat Ecosystem Catalog.
- For Block Storage (cinder) and Compute (nova) services, you must use NFS version 4.0 or later. Red Hat OpenStack Platform (RHOSP) does not support earlier versions of NFS.
Unsupported NFS configuration
RHOSP does not support the NetApp feature NAS secure, because it interferes with normal volume operations. Director disables the feature by default. Therefore, do not edit the following heat parameters that control whether an NFS back end or a NetApp NFS Block Storage back end supports NAS secure:
- CinderNetappNasSecureFileOperations
- CinderNetappNasSecureFilePermissions
- CinderNasSecureFileOperations
- CinderNasSecureFilePermissions
Limitations when using NFS shares
- Instances that have a swap disk cannot be resized or rebuilt when the back end is an NFS share.
17.1.2. Configuring NFS storage
You can configure the overcloud to use shared NFS storage.
Procedure
- Create an environment file to configure your NFS storage, for example, nfs_storage.yaml.

Add the following parameters to your new environment file to configure NFS storage:
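A sketch of these parameters, with an illustrative NFS server address:

parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableNfsBackend: true
  GlanceNfsEnabled: true
  CinderNfsServers: '192.0.2.230:/cinder'
  GlanceNfsShare: '192.0.2.230:/glance'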
Note: Do not configure the CinderNfsMountOptions and GlanceNfsOptions parameters, as their default values enable NFS mount options that are suitable for most Red Hat OpenStack Platform (RHOSP) environments. You can see the value of the GlanceNfsOptions parameter in the environments/storage/glance-nfs.yaml file. If you experience issues when you configure multiple services to share the same NFS server, contact Red Hat Support.

Add your NFS storage environment file to the stack with your other environment files and deploy the overcloud:

(undercloud)$ openstack overcloud deploy --templates \
  -e [your environment files] \
  -e /home/stack/templates/nfs_storage.yaml
17.2. Configuring Ceph Storage
Use one of the following methods to integrate Red Hat Ceph Storage into a Red Hat OpenStack Platform overcloud.
- Creating an overcloud with its own Ceph Storage cluster
- You can create a Ceph Storage cluster during the creation of the overcloud. Director creates a set of Ceph Storage nodes that use the Ceph OSD to store data. Director also installs the Ceph Monitor service on the overcloud Controller nodes. This means that if an organization creates an overcloud with three highly available Controller nodes, the Ceph Monitor also becomes a highly available service. For more information, see Deploying an Overcloud with Containerized Red Hat Ceph.
- Integrating an existing Ceph Storage cluster into an overcloud
- If you have an existing Ceph Storage cluster, you can integrate this cluster into a Red Hat OpenStack Platform overcloud during deployment. This means that you can manage and scale the cluster outside of the overcloud configuration. For more information, see Integrating an Overcloud with an Existing Red Hat Ceph Cluster.
17.3. Using an external Object Storage cluster
You can reuse an external OpenStack Object Storage (swift) cluster by disabling the default Object Storage service deployment on your Controller nodes. This disables both the proxy and storage services for Object Storage and configures haproxy and OpenStack Identity (keystone) to use the given external Object Storage endpoint.
You must manage user accounts on the external Object Storage (swift) cluster manually.
Prerequisites
-
You need the endpoint IP address of the external Object Storage cluster as well as the
authtokenpassword from the external Object Storageproxy-server.conffile. You can find this information by using theopenstack endpoint listcommand.
Procedure
Create a new file named swift-external-params.yaml with the following content:

- Replace EXTERNAL.IP:PORT with the IP address and port of the external proxy.
- Replace AUTHTOKEN with the authtoken password for the external proxy on the SwiftPassword line.
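A sketch of the file, assuming the parameter names used by the swift-external environment (treat the names as assumptions to verify against your templates):

parameter_defaults:
  ExternalSwiftPublicUrl: 'http://EXTERNAL.IP:PORT/v1/AUTH_%(tenant_id)s'
  ExternalSwiftInternalUrl: 'http://192.168.24.9:8080/v1/AUTH_%(tenant_id)s'
  ExternalSwiftAdminUrl: 'http://192.168.24.9:8080'
  ExternalSwiftUserTenant: 'service'
  SwiftPassword: AUTHTOKEN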
- Save this file as swift-external-params.yaml.

Deploy the overcloud with the following external Object Storage service environment files, as well as any other environment files that are relevant to your deployment:
openstack overcloud deploy --templates \
  -e [your environment files] \
  -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
  -e swift-external-params.yaml
17.4. Configuring Ceph Object Store to use external Ceph Object Gateway
Red Hat OpenStack Platform (RHOSP) director supports configuring an external Ceph Object Gateway (RGW) as an Object Store service. To authenticate with the external RGW service, you must configure RGW to verify users and their roles in the Identity service (keystone).
For more information about how to configure an external Ceph Object Gateway, see Configuring the Ceph Object Gateway to use Keystone authentication in the Using Keystone with the Ceph Object Gateway Guide.
Procedure
Add the following
parameter_defaults to a custom environment file, for example, swift-external-params.yaml, and adjust the values to suit your deployment:

Note: The example code snippet contains parameter values that might differ from values that you use in your environment:
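For reference, a sketch of such a file, assuming the same external swift parameter names (the endpoint and password values are placeholders):

parameter_defaults:
  ExternalSwiftPublicUrl: 'http://<rgw_endpoint>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftInternalUrl: 'http://<rgw_endpoint>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftAdminUrl: 'http://<rgw_endpoint>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftUserTenant: 'service'
  SwiftPassword: '<swift_password>'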
-
The default port where the remote RGW instance listens is
8080. The port might be different depending on how the external RGW is configured. -
The
swift user created in the overcloud uses the password defined by the SwiftPassword parameter. You must configure the external RGW instance to use the same password to authenticate with the Identity service by using the rgw_keystone_admin_password setting.
Add the following code to the Ceph config file to configure RGW to use the Identity service. Replace the variable values to suit your environment:
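A sketch of the RGW Identity service settings (treat the exact option set as an assumption to verify against your Ceph release):

rgw_keystone_api_version = 3
rgw_keystone_url = http://<public_keystone_endpoint>:5000/
rgw_keystone_accepted_roles = member, Member, admin
rgw_keystone_accepted_admin_roles = ResellerAdmin, swiftoperator
rgw_keystone_admin_domain = default
rgw_keystone_admin_project = service
rgw_keystone_admin_user = swift
rgw_keystone_admin_password = <password_as_defined_in_the_SwiftPassword_parameter>
rgw_keystone_implicit_tenants = true
rgw_s3_auth_use_keystone = true
rgw_swift_account_in_url = true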
Note: Director creates the following roles and users in the Identity service by default:
- rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator
- rgw_keystone_admin_domain: default
- rgw_keystone_admin_project: service
- rgw_keystone_admin_user: swift
Deploy the overcloud with the additional environment files, along with any other environment files that are relevant to your deployment:
openstack overcloud deploy --templates \
  -e <your_environment_files> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
  -e swift-external-params.yaml
Verification
-
Log in to the undercloud as the
stack user.

Source the overcloudrc file:

$ source ~/overcloudrc
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a test container:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a configuration file to confirm that you can upload data to the container:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the test container:
$ openstack container delete -r <testcontainer>
17.5. Configuring cinder back end for the Image service
Use the GlanceBackend parameter to set the back end that the Image service uses to store images.
The default maximum number of volumes you can create for a project is 10.
Procedure
To configure
cinder as the Image service back end, add the following lines to an environment file:

parameter_defaults:
  GlanceBackend: cinder

If the
cinder back end is enabled, the following parameters and values are set by default:

cinder_store_auth_address = http://172.17.1.19:5000/v3
cinder_store_project_name = service
cinder_store_user_name = glance
cinder_store_password = ****secret****

To use a custom user name, or any custom value for the cinder_store_ parameters, add the ExtraConfig parameter to parameter_defaults and include your custom values:
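A sketch of such an override, assuming the glance::config::api_config hieradata interface (the exact keys are assumptions to verify against your version):

parameter_defaults:
  ExtraConfig:
    glance::config::api_config:
      glance_store/cinder_store_user_name:
        value: <custom_user_name>
      glance_store/cinder_store_password:
        value: <custom_password>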
17.6. Configuring the maximum number of storage devices to attach to one instance
By default, you can attach an unlimited number of storage devices to a single instance. To limit the maximum number of devices, add the max_disk_devices_to_attach parameter to your Compute environment file. Use the following example to change the value of max_disk_devices_to_attach to "30":
parameter_defaults:
  ComputeExtraConfig:
    nova::config::nova_config:
      compute/max_disk_devices_to_attach:
        value: '30'
Guidelines and considerations
- The number of storage disks supported by an instance depends on the bus that the disk uses. For example, the IDE disk bus is limited to 4 attached devices.
-
Changing the
max_disk_devices_to_attachon a Compute node with active instances can cause rebuilds to fail if the maximum number is lower than the number of devices already attached to instances. For example, if instance A has 26 devices attached and you changemax_disk_devices_to_attachto 20, a request to rebuild instance A will fail. - During cold migration, the configured maximum number of storage devices is enforced only on the source for the instance that you want to migrate. The destination is not checked before the move. This means that if Compute node A has 26 attached disk devices, and Compute node B has a configured maximum of 20 attached disk devices, a cold migration of an instance with 26 attached devices from Compute node A to Compute node B succeeds. However, a subsequent request to rebuild the instance in Compute node B fails because 26 devices are already attached which exceeds the configured maximum of 20.
- The configured maximum is not enforced on shelved offloaded instances, as they have no Compute node.
- Attaching a large number of disk devices to instances can degrade performance on the instance. Tune the maximum number based on the boundaries of what your environment can support.
- Instances with machine type Q35 can attach a maximum of 500 disk devices.
17.7. Improving scalability with Image service caching
Use the glance-api caching mechanism to store copies of images on Image service (glance) API servers and retrieve them automatically to improve scalability. With Image service caching, glance-api can run on multiple hosts. This means that it does not need to retrieve the same image from back end storage multiple times. Image service caching does not affect any Image service operations.
Configure Image service caching with the Red Hat OpenStack Platform director (tripleo) heat templates:
Procedure
In an environment file, set the value of the
GlanceCacheEnabled parameter to true, which automatically sets the flavor value to keystone+cachemanagement in the glance-api.conf heat template:

parameter_defaults:
  GlanceCacheEnabled: true
Include the environment file in the
openstack overcloud deploycommand when you redeploy the overcloud. Optional: Tune the
glance_cache_pruner to an alternative frequency when you redeploy the overcloud. The following example shows a frequency of 5 minutes:

parameter_defaults:
  ControllerExtraConfig:
    glance::cache::pruner::minute: '*/5'

Adjust the frequency according to your needs to avoid file system full scenarios. Include the following elements when you choose an alternative frequency:
- The size of the files that you want to cache in your environment.
- The amount of available file system space.
- The frequency at which the environment caches images.
17.8. Configuring third party storage
The following environment files are present in the core heat template collection /usr/share/openstack-tripleo-heat-templates.
- Dell EMC Storage Center
Deploys a single Dell EMC Storage Center back end for the Block Storage (cinder) service.
The environment file is located at
/usr/share/openstack-tripleo-heat-templates/environments/cinder-dellsc-config.yaml.- Dell EMC PS Series
Deploys a single Dell EMC PS Series back end for the Block Storage (cinder) service.
The environment file is located at
/usr/share/openstack-tripleo-heat-templates/environments/cinder-dellps-config.yaml.- NetApp Block Storage
Deploys a NetApp storage appliance as a back end for the Block Storage (cinder) service.
The environment file is located at
/usr/share/openstack-tripleo-heat-templates/environments/storage/cinder-netapp-config.yaml.
Chapter 18. Security enhancements
The following sections provide some suggestions to harden the security of your overcloud.
18.1. Using secure root user access
The overcloud image automatically contains hardened security for the root user. For example, each deployed overcloud node automatically disables direct SSH access to the root user. You can still access the root user on overcloud nodes.
Procedure
- Log in to the undercloud node as the stack user.
- Each overcloud node has a heat-admin user account. This user account contains the undercloud public SSH key, which provides SSH access without a password from the undercloud to the overcloud node. On the undercloud node, log in to an overcloud node through SSH as the heat-admin user.
- Switch to the root user with sudo -i.
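For example, replacing <IP-address> with the IP address of an overcloud node:
$ ssh heat-admin@<IP-address>
$ sudo -i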
18.2. Managing the overcloud firewall
Each core OpenStack Platform service contains firewall rules in its respective composable service template. This automatically creates a default set of firewall rules for each overcloud node.
The overcloud heat templates contain a set of parameters that can help with additional firewall management:
- ManageFirewall
-
Defines whether to automatically manage the firewall rules. Set this parameter to true to allow Puppet to automatically configure the firewall on each node. Set to false if you want to manually manage the firewall. The default is true.
- PurgeFirewallRules
-
Defines whether to purge the default Linux firewall rules before configuring new ones. The default is false.
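For example, to let Puppet manage the firewall without purging the default Linux rules, set both parameters in an environment file (the values shown are the defaults):
parameter_defaults:
  ManageFirewall: true
  PurgeFirewallRules: false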
If you set the ManageFirewall parameter to true, you can create additional firewall rules on deployment. Set the tripleo::firewall::firewall_rules hieradata using a configuration hook (see Section 4.5, “Puppet: Customizing hieradata for roles”) in an environment file for your overcloud. This hieradata is a hash of firewall rule names, where the value of each rule is a hash of the following parameters, all of which are optional:
- port
- The port associated with the rule.
- dport
- The destination port associated with the rule.
- sport
- The source port associated with the rule.
- proto
- The protocol associated with the rule. Defaults to tcp.
- action
- The action policy associated with the rule. Defaults to accept.
- jump
- The chain to jump to. If present, it overrides action.
- state
- An array of states associated with the rule. Defaults to ['NEW'].
- source
- The source IP address associated with the rule.
- iniface
- The network interface associated with the rule.
- chain
- The chain associated with the rule. Defaults to INPUT.
- destination
- The destination CIDR associated with the rule.
The following example demonstrates the syntax of the firewall rule format; the rule names and port values are illustrative:
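parameter_defaults:
  ExtraConfig:
    tripleo::firewall::firewall_rules:
      '300 allow custom application 1':
        port: 999
        proto: udp
        action: accept
      '301 allow custom application 2':
        port: 8081
        proto: tcp
        action: accept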
This applies two additional firewall rules to all nodes through ExtraConfig.
Each rule name becomes the comment for the respective iptables rule. Each rule name starts with a three-digit prefix to help Puppet order all defined rules in the final iptables file. The default Red Hat OpenStack Platform rules use prefixes in the 000 to 200 range.
18.3. Changing the Simple Network Management Protocol (SNMP) strings
Director provides a default read-only SNMP configuration for your overcloud. It is advisable to change the SNMP strings to mitigate the risk of unauthorized users learning about your network devices.
When you configure the ExtraConfig interface with a string parameter, you must use the following syntax to ensure that heat and Hiera do not interpret the string as a Boolean value: '"<VALUE>"'.
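For example, a string value that Hiera might otherwise interpret as a Boolean; the community string shown is hypothetical:
parameter_defaults:
  ExtraConfig:
    # Without the extra quoting, this value could be coerced into a Boolean
    snmp::ro_community: '"off"'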
Set the following hieradata using the ExtraConfig hook in an environment file for your overcloud:
SNMP traditional access control settings
- snmp::ro_community
-
IPv4 read-only SNMP community string. The default value is public.
- snmp::ro_community6
-
IPv6 read-only SNMP community string. The default value is public.
- snmp::ro_network
-
Network that is allowed to query the daemon with read-only (RO) access. This value can be a string or an array. The default value is 127.0.0.1.
- snmp::ro_network6
-
Network that is allowed to query the daemon with read-only (RO) access over IPv6. This value can be a string or an array. The default value is ::1/128.
- tripleo::profile::base::snmp::snmpd_config
-
Array of lines to add to the snmpd.conf file as a safety valve. The default value is []. See the SNMP Configuration File web page for all available options.
For example:
parameter_defaults:
  ExtraConfig:
    snmp::ro_community: mysecurestring
    snmp::ro_community6: myv6securestring
This changes the read-only SNMP community string on all nodes.
SNMP view-based access control settings (VACM)
- snmp::com2sec
- An array of VACM com2sec mappings. Must provide SECNAME, SOURCE and COMMUNITY.
- snmp::com2sec6
- An array of VACM com2sec6 mappings. Must provide SECNAME, SOURCE and COMMUNITY.
For example:
parameter_defaults:
  ExtraConfig:
    snmp::com2sec: ["notConfigUser default mysecurestring"]
    snmp::com2sec6: ["notConfigUser default myv6securestring"]
This configures VACM com2sec mappings, which changes the read-only SNMP community strings on all nodes.
For more information, see the snmpd.conf man page.
18.4. Changing the SSL/TLS cipher and rules for HAProxy
If you enabled SSL/TLS in the overcloud, consider hardening the SSL/TLS ciphers and rules that are used with the HAProxy configuration. By hardening the SSL/TLS ciphers, you help avoid SSL/TLS vulnerabilities, such as the POODLE vulnerability.
Create a heat template environment file called
tls-ciphers.yaml:
$ touch ~/templates/tls-ciphers.yaml
Use the ExtraConfig hook in the environment file to apply values to the tripleo::haproxy::ssl_cipher_suite and tripleo::haproxy::ssl_options hieradata:
parameter_defaults:
  ExtraConfig:
    tripleo::haproxy::ssl_cipher_suite: 'DHE-RSA-AES128-CCM:DHE-RSA-AES256-CCM:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-CCM:ECDHE-ECDSA-AES256-CCM:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305'
    tripleo::haproxy::ssl_options: 'no-sslv3 no-tls-tickets'
Note: The cipher collection is one continuous line.
Include the
tls-ciphers.yaml environment file with the overcloud deploy command when deploying the overcloud:
$ openstack overcloud deploy --templates \
...
-e /home/stack/templates/tls-ciphers.yaml
...
18.5. Using the Open vSwitch firewall
You can configure security groups to use the Open vSwitch (OVS) firewall driver in Red Hat OpenStack Platform director. Use the NeutronOVSFirewallDriver parameter to specify the firewall driver that you want to use:
-
iptables_hybrid - Configures the Networking service (neutron) to use the iptables/hybrid-based implementation.
- openvswitch - Configures the Networking service to use the OVS firewall flow-based driver.
The openvswitch firewall driver provides higher performance and reduces the number of interfaces and bridges used to connect guests to the project network.
Multicast traffic is handled differently by the Open vSwitch (OVS) firewall driver than by the iptables firewall driver. With iptables, by default, VRRP traffic is denied, and you must enable VRRP in the security group rules for any VRRP traffic to reach an endpoint. With OVS, all ports share the same OpenFlow context, and multicast traffic cannot be processed individually per port. Because security groups do not apply to all ports (for example, the ports on a router), OVS uses the NORMAL action and forwards multicast traffic to all ports as specified by RFC 4541.
The iptables_hybrid option is not compatible with OVS-DPDK. The openvswitch option is not compatible with OVS Hardware Offload.
Configure the NeutronOVSFirewallDriver parameter in the network-environment.yaml file:
NeutronOVSFirewallDriver: openvswitch
-
NeutronOVSFirewallDriver: Configures the name of the firewall driver that you want to use when you implement security groups. Possible values depend on your system configuration. Some examples are noop, openvswitch, and iptables_hybrid. The default value of an empty string results in a supported configuration.
Chapter 19. Configuring network plugins
Director includes environment files that you can use when you configure third-party network plugins:
19.1. Fujitsu Converged Fabric (C-Fabric)
You can enable the Fujitsu Converged Fabric (C-Fabric) plugin by using the environment file located at /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-fujitsu-cfab.yaml.
Procedure
Copy the environment file to your
templates subdirectory:
$ cp /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-fujitsu-cfab.yaml /home/stack/templates/
Edit the resource_registry to use an absolute path:
resource_registry:
  OS::TripleO::Services::NeutronML2FujitsuCfab: /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-plugin-ml2-fujitsu-cfab.yaml
Review the parameter_defaults in /home/stack/templates/neutron-ml2-fujitsu-cfab.yaml:
-
NeutronFujitsuCfabAddress - The telnet IP address of the C-Fabric. (string)
- NeutronFujitsuCfabUserName - The C-Fabric username to use. (string)
- NeutronFujitsuCfabPassword - The password of the C-Fabric user account. (string)
- NeutronFujitsuCfabPhysicalNetworks - List of <physical_network>:<vfab_id> tuples that specify physical_network names and their corresponding vfab IDs. (comma_delimited_list)
- NeutronFujitsuCfabSharePprofile - Determines whether to share a C-Fabric pprofile among neutron ports that use the same VLAN ID. (boolean)
- NeutronFujitsuCfabPprofilePrefix - The prefix string for the pprofile name. (string)
- NeutronFujitsuCfabSaveConfig - Determines whether to save the configuration. (boolean)
-
To apply the template to your deployment, include the environment file in the
openstack overcloud deploy command:
$ openstack overcloud deploy --templates -e /home/stack/templates/neutron-ml2-fujitsu-cfab.yaml [OTHER OPTIONS] ...
19.2. Fujitsu FOS Switch
You can enable the Fujitsu FOS Switch plugin by using the environment file located at /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-fujitsu-fossw.yaml.
Procedure
Copy the environment file to your
templates subdirectory:
$ cp /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-fujitsu-fossw.yaml /home/stack/templates/
Edit the resource_registry to use an absolute path:
resource_registry:
  OS::TripleO::Services::NeutronML2FujitsuFossw: /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-plugin-ml2-fujitsu-fossw.yaml
Review the parameter_defaults in /home/stack/templates/neutron-ml2-fujitsu-fossw.yaml:
-
NeutronFujitsuFosswIps - The IP addresses of all FOS switches. (comma_delimited_list)
- NeutronFujitsuFosswUserName - The FOS username to use. (string)
- NeutronFujitsuFosswPassword - The password of the FOS user account. (string)
- NeutronFujitsuFosswPort - The port number to use for the SSH connection. (number)
- NeutronFujitsuFosswTimeout - The timeout period of the SSH connection. (number)
- NeutronFujitsuFosswUdpDestPort - The port number of the VXLAN UDP destination on the FOS switches. (number)
- NeutronFujitsuFosswOvsdbVlanidRangeMin - The minimum VLAN ID in the range that is used for binding VNI and physical port. (number)
- NeutronFujitsuFosswOvsdbPort - The port number for the OVSDB server on the FOS switches. (number)
-
To apply the template to your deployment, include the environment file in the
openstack overcloud deploy command:
$ openstack overcloud deploy --templates -e /home/stack/templates/neutron-ml2-fujitsu-fossw.yaml [OTHER OPTIONS] ...
Chapter 20. Configuring Identity
Director includes parameters to help configure Identity Service (keystone) settings:
20.1. Region name
By default, your overcloud region is named regionOne. You can change this by adding a KeystoneRegion entry to your environment file. You cannot modify this value after you deploy the overcloud.
parameter_defaults:
  KeystoneRegion: 'SampleRegion'
Chapter 21. Miscellaneous overcloud configuration
Use the following procedures to configure miscellaneous features in the overcloud.
21.1. Debug modes
You can enable and disable the DEBUG level logging mode for certain services in the overcloud.
To configure debug mode for a service, set the respective debug parameter. For example, OpenStack Identity (keystone) uses the KeystoneDebug parameter.
Procedure
Set the parameter in the
parameter_defaults section of an environment file:
parameter_defaults:
  KeystoneDebug: True
After you have set the KeystoneDebug parameter to True, the /var/log/containers/keystone/keystone.log standard keystone log file is updated with DEBUG level logs.
For a full list of debug parameters, see "Debug Parameters" in the Overcloud Parameters guide.
21.2. Configuring the kernel on overcloud nodes
Red Hat OpenStack Platform director includes parameters that configure the kernel on overcloud nodes.
- ExtraKernelModules
Kernel modules to load. The module names are listed as a hash key with an empty value:
ExtraKernelModules:
  <MODULE_NAME>: {}
- ExtraKernelPackages
Kernel-related packages to install prior to loading the kernel modules from ExtraKernelModules. The package names are listed as a hash key with an empty value:
ExtraKernelPackages:
  <PACKAGE_NAME>: {}
- ExtraSysctlSettings
Hash of sysctl settings to apply. Set the value of each parameter using the value key:
ExtraSysctlSettings:
  <KERNEL_PARAMETER>:
    value: <VALUE>
This example shows the syntax of these parameters in an environment file; the module, package, and sysctl names are illustrative:
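parameter_defaults:
  ExtraKernelModules:
    iscsi_target_mod: {}
  ExtraKernelPackages:
    iscsi-initiator-utils: {}
  ExtraSysctlSettings:
    dev.scsi.logging_level:
      value: 1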
21.3. Configuring the server console
Console output from overcloud nodes is not always sent to the server console. If you want to view this output in the server console, you must configure the overcloud to use the correct console for your hardware. Use one of the following methods to perform this configuration:
-
Modify the
KernelArgs heat parameter for each overcloud role.
- Customize the overcloud-full.qcow2 image that director uses to provision the overcloud nodes.
Prerequisites
- A successful undercloud installation. For more information, see the Director Installation and Usage guide.
- Overcloud nodes ready for deployment.
Modifying KernelArgs with heat during deployment
-
Log in to the undercloud host as the
stack user.
Source the stackrc credentials file:
$ source stackrc
Create an environment file overcloud-console.yaml with the following content:
parameter_defaults:
  <role>Parameters:
    KernelArgs: "console=<console-name>"
Replace <role> with the name of the overcloud role that you want to configure, and replace <console-name> with the ID of the console that you want to use. For example, use a snippet similar to the following to configure all overcloud nodes in the default roles to use tty0 (a sketch that assumes the default role names):
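parameter_defaults:
  ControllerParameters:
    KernelArgs: "console=tty0"
  ComputeParameters:
    KernelArgs: "console=tty0"
  BlockStorageParameters:
    KernelArgs: "console=tty0"
  ObjectStorageParameters:
    KernelArgs: "console=tty0"
  CephStorageParameters:
    KernelArgs: "console=tty0"
-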
Include the
overcloud-console.yaml file in your deployment command with the -e option.
Modifying the overcloud-full.qcow2 image
-
Log in to the undercloud host as the
stack user.
Source the stackrc credentials file:
$ source stackrc
Modify the kernel arguments in the overcloud-full.qcow2 image to set the correct console for your hardware. For example, set the console to tty0:
$ virt-customize --selinux-relabel -a overcloud-full.qcow2 --run-command 'grubby --update-kernel=ALL --args="console=tty0"'
Import the image into director:
$ openstack overcloud image upload --image-path /home/stack/images/overcloud-full.qcow2
- Deploy the overcloud.
Verification
Log in to an overcloud node from the undercloud:
$ ssh heat-admin@<IP-address>
Replace <IP-address> with the IP address of an overcloud node.
Inspect the contents of the /proc/cmdline file and ensure that the console= parameter is set to the value of the console that you want to use:
[heat-admin@controller-0 ~]$ cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos2)/boot/vmlinuz-4.18.0-193.29.1.el8_2.x86_64 root=UUID=0ec3dea5-f293-4729-b676-5d38a611ce81 ro console=tty0 console=ttyS0,115200n81 no_timer_check crashkernel=auto rhgb quiet
21.4. Configuring external load balancing
An overcloud uses multiple Controllers together as a high availability cluster, which ensures maximum operational performance for your OpenStack services. In addition, the cluster provides load balancing for access to the OpenStack services, which evenly distributes traffic to the Controller nodes and reduces server overload for each node. You can also use an external load balancer to perform this distribution. For example, you can use your own hardware-based load balancer to handle traffic distribution to the Controller nodes.
For more information about configuring external load balancing, see the dedicated External Load Balancing for the Overcloud guide.
21.5. Configuring IPv6 networking
This section examines the network configuration for the overcloud, including isolating OpenStack service traffic on specific networks and configuring the overcloud with IPv6 options.