Chapter 6. Composable services and custom roles
The overcloud usually consists of nodes in predefined roles such as Controller nodes, Compute nodes, and different storage node types. Each of these default roles contains a set of services defined in the core heat template collection on the director node. However, you can also create custom roles that contain specific sets of services.
You can use this flexibility to create different combinations of services on different roles. This chapter explores the architecture of custom roles, composable services, and methods for using them.
6.1. Supported role architecture
The following architectures are available when you use custom roles and composable services:
- Default architecture: Uses the default roles_data files. All controller services are contained within one Controller role.
- Supported standalone roles: Use the predefined files in /usr/share/openstack-tripleo-heat-templates/roles to generate a custom roles_data file. For more information, see Section 6.4, “Supported custom roles”.
- Custom composable services: Create your own roles and use them to generate a custom roles_data file. Note that only a limited number of composable service combinations have been tested and verified, and Red Hat cannot support all composable service combinations.
6.2. Examining the roles_data file
The roles_data file contains a YAML-formatted list of the roles that director deploys onto nodes. Each role contains definitions of all of the services that comprise the role. Use the following example snippet to understand the roles_data syntax:
- name: Controller
  description: |
    Controller role that has all the controller services loaded
    and handles Database, Messaging and Network functions.
  ServicesDefault:
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephClient
    ...
- name: Compute
  description: |
    Basic Compute Node role
  ServicesDefault:
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephClient
    ...
The core heat template collection contains a default roles_data file located at /usr/share/openstack-tripleo-heat-templates/roles_data.yaml. The default file contains definitions of the following role types:

- Controller
- Compute
- BlockStorage
- ObjectStorage
- CephStorage
The openstack overcloud deploy command includes the default roles_data.yaml file during deployment. However, you can use the -r argument to override this file with a custom roles_data file:
$ openstack overcloud deploy --templates -r ~/templates/roles_data-custom.yaml
6.3. Creating a roles_data file
Although you can create a custom roles_data file manually, you can also generate the file automatically using individual role templates. Director provides several commands to manage role templates and automatically generate a custom roles_data file.
Procedure
List the default role templates:
$ openstack overcloud roles list
BlockStorage
CephStorage
Compute
ComputeHCI
ComputeOvsDpdk
Controller
...
View the role definition in YAML format with the openstack overcloud roles show command:

$ openstack overcloud roles show Compute
Generate a custom roles_data file. Use the openstack overcloud roles generate command to join multiple predefined roles into a single file. For example, run the following command to generate a roles_data.yaml file that contains the Controller, Compute, and Networker roles:

$ openstack overcloud roles generate -o ~/roles_data.yaml Controller Compute Networker
Use the -o option to define the name of the output file.

This command creates a custom roles_data file. However, the previous example uses the Controller and Networker roles, which both contain the same networking agents. This means that the networking services scale from the Controller role to the Networker role, and the overcloud balances the load for networking services between the Controller and Networker nodes.

To make this Networker role standalone, you can create your own custom Controller role, as well as any other role that you require. This allows you to generate a roles_data file from your own custom roles.

Copy the roles directory from the core heat template collection to the home directory of the stack user:

$ cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.
Add or modify the custom role files in this directory. Use the --roles-path option with any of the role sub-commands to use this directory as the source for your custom roles:

$ openstack overcloud roles generate -o my_roles_data.yaml \
  --roles-path ~/roles \
  Controller Compute Networker

This command generates a single my_roles_data.yaml file from the individual roles in the ~/roles directory.
The default roles collection also contains the ControllerOpenStack role, which does not include services for the Networker, Messaging, and Database roles. You can use the ControllerOpenStack role in combination with the standalone Networker, Messaging, and Database roles.
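For example, to generate a roles_data file for this split-controller layout, you might run a command like the following sketch. The role name may appear as ControllerOpenstack in the roles directory on your system; verify the exact names with openstack overcloud roles list:

$ openstack overcloud roles generate -o ~/roles_data.yaml \
  ControllerOpenstack Database Messaging Networker Compute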
6.4. Supported custom roles
The following table contains information about the available custom roles. You can find custom role templates in the /usr/share/openstack-tripleo-heat-templates/roles directory.
Role | Description | File
---|---|---
BlockStorage | OpenStack Block Storage (cinder) node. | BlockStorage.yaml
CephAll | Full standalone Ceph Storage node. Includes OSD, MON, Object Gateway (RGW), Object Operations (MDS), Manager (MGR), and RBD Mirroring. | CephAll.yaml
CephFile | Standalone scale-out Ceph Storage file role. Includes OSD and Object Operations (MDS). | CephFile.yaml
CephObject | Standalone scale-out Ceph Storage object role. Includes OSD and Object Gateway (RGW). | CephObject.yaml
CephStorage | Ceph Storage OSD node role. | CephStorage.yaml
ComputeAlt | Alternate Compute node role. | ComputeAlt.yaml
ComputeDVR | DVR enabled Compute node role. | ComputeDVR.yaml
ComputeHCI | Compute node with hyper-converged infrastructure. Includes Compute and Ceph OSD services. | ComputeHCI.yaml
ComputeInstanceHA | Compute Instance HA node role. Use in conjunction with the environments/compute-instanceha.yaml environment file. | ComputeInstanceHA.yaml
ComputeLiquidio | Compute node with Cavium Liquidio Smart NIC. | ComputeLiquidio.yaml
ComputeOvsDpdkRT | Compute OVS DPDK RealTime role. | ComputeOvsDpdkRT.yaml
ComputeOvsDpdk | Compute OVS DPDK role. | ComputeOvsDpdk.yaml
ComputePPC64LE | Compute role for ppc64le servers. | ComputePPC64LE.yaml
ComputeRealTime | Compute role optimized for real-time behaviour. When you use this role, an overcloud-realtime image is mandatory, and you must set the role-specific parameters IsolCpusList, NovaComputeCpuDedicatedSet, and NovaComputeCpuSharedSet accordingly. | ComputeRealTime.yaml
ComputeSriovRT | Compute SR-IOV RealTime role. | ComputeSriovRT.yaml
ComputeSriov | Compute SR-IOV role. | ComputeSriov.yaml
Compute | Standard Compute node role. | Compute.yaml
ControllerAllNovaStandalone | Controller role that does not contain the database, messaging, networking, and OpenStack Compute (nova) control components. Use in combination with the Database, Messaging, Networker, and Novacontrol roles. | ControllerAllNovaStandalone.yaml
ControllerNoCeph | Controller role with core Controller services loaded but no Ceph Storage (MON) components. This role handles database, messaging, and network functions but not any Ceph Storage functions. | ControllerNoCeph.yaml
ControllerNovaStandalone | Controller role that does not contain the OpenStack Compute (nova) control component. Use in combination with the Novacontrol role. | ControllerNovaStandalone.yaml
ControllerOpenstack | Controller role that does not contain the database, messaging, and networking components. Use in combination with the Database, Messaging, and Networker roles. | ControllerOpenstack.yaml
ControllerStorageNfs | Controller role with all core services loaded and uses Ceph NFS. This role handles database, messaging, and network functions. | ControllerStorageNfs.yaml
Controller | Controller role with all core services loaded. This role handles database, messaging, and network functions. | Controller.yaml
ControllerSriov | Same as the normal Controller role but with the OVN Metadata agent deployed. | ControllerSriov.yaml
Database | Standalone database role. Database managed as a Galera cluster using Pacemaker. | Database.yaml
HciCephAll | Compute node with hyper-converged infrastructure and all Ceph Storage services. Includes OSD, MON, Object Gateway (RGW), Object Operations (MDS), Manager (MGR), and RBD Mirroring. | HciCephAll.yaml
HciCephFile | Compute node with hyper-converged infrastructure and Ceph Storage file services. Includes OSD and Object Operations (MDS). | HciCephFile.yaml
HciCephMon | Compute node with hyper-converged infrastructure and Ceph Storage block services. Includes OSD, MON, and Manager. | HciCephMon.yaml
HciCephObject | Compute node with hyper-converged infrastructure and Ceph Storage object services. Includes OSD and Object Gateway (RGW). | HciCephObject.yaml
IronicConductor | Ironic Conductor node role. | IronicConductor.yaml
Messaging | Standalone messaging role. RabbitMQ managed with Pacemaker. | Messaging.yaml
Networker | Standalone networking role. Runs OpenStack networking (neutron) agents on their own. If your deployment uses the ML2/OVN mechanism driver, see additional steps in Deploying a Custom Role with ML2/OVN in the Networking Guide. | Networker.yaml
NetworkerSriov | Same as the normal Networker role but with the OVN Metadata agent deployed. See additional steps in Deploying a Custom Role with ML2/OVN in the Networking Guide. | NetworkerSriov.yaml
Novacontrol | Standalone nova-control role to run OpenStack Compute (nova) control agents on their own. | Novacontrol.yaml
ObjectStorage | Swift Object Storage node role. | ObjectStorage.yaml
Telemetry | Telemetry role with all the metrics and alarming services. | Telemetry.yaml
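For example, to build a roles_data file that uses one of these supported roles, you can pass the role name directly to the generate command. The following sketch combines the standard Controller role with the hyper-converged ComputeHCI role; verify the role names against the files in the roles directory:

$ openstack overcloud roles generate -o ~/roles_data.yaml Controller ComputeHCI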
6.5. Examining role parameters
Each role contains the following parameters:
- name: (Mandatory) The name of the role, which is a plain text name with no spaces or special characters. Check that the chosen name does not cause conflicts with other resources. For example, use Networker as a name instead of Network.
- description: (Optional) A plain text description for the role.
- tags: (Optional) A YAML list of tags that define role properties. Use this parameter to define the primary role with both the controller and primary tags together:

- name: Controller
  ...
  tags:
    - primary
    - controller
  ...
If you do not tag the primary role, the first role that you define becomes the primary role. Ensure that this role is the Controller role.
- networks: A YAML list or dictionary of networks that you want to configure on the role. If you use a YAML list, list each composable network:

networks:
  - External
  - InternalApi
  - Storage
  - StorageMgmt
  - Tenant

If you use a dictionary, map each network to a specific subnet in your composable networks:

networks:
  External:
    subnet: external_subnet
  InternalApi:
    subnet: internal_api_subnet
  Storage:
    subnet: storage_subnet
  StorageMgmt:
    subnet: storage_mgmt_subnet
  Tenant:
    subnet: tenant_subnet

Default networks include External, InternalApi, Storage, StorageMgmt, Tenant, and Management.
- CountDefault: (Optional) Defines the default number of nodes that you want to deploy for this role.
- HostnameFormatDefault: (Optional) Defines the default hostname format for the role. The default naming convention uses the following format:

[STACK NAME]-[ROLE NAME]-[NODE ID]

For example, the default Controller nodes are named:

overcloud-controller-0
overcloud-controller-1
overcloud-controller-2
...
- disable_constraints: (Optional) Defines whether to disable OpenStack Compute (nova) and OpenStack Image Storage (glance) constraints when deploying with director. Use this parameter when you deploy an overcloud with pre-provisioned nodes. For more information, see Configuring a Basic Overcloud with Pre-Provisioned Nodes in the Director Installation and Usage guide.
- update_serial: (Optional) Defines how many nodes to update simultaneously during OpenStack update operations. In the default roles_data.yaml file:

  - The default is 1 for Controller, Object Storage, and Ceph Storage nodes.
  - The default is 25 for Compute and Block Storage nodes.

  If you omit this parameter from a custom role, the default is 1.
- ServicesDefault: (Optional) Defines the default list of services to include on the node. For more information, see Section 6.8, “Examining composable service architecture”.
You can use these parameters to create new roles and also define which services to include in your roles.
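To see how these parameters fit together, the following is a minimal sketch of a custom role definition that uses most of them. The role name, hostname format, networks, and service list are illustrative only and must match your deployment:

- name: Networker
  description: |
    Standalone networking role.
  CountDefault: 1
  HostnameFormatDefault: '%stackname%-networker-%index%'
  update_serial: 1
  networks:
    InternalApi:
      subnet: internal_api_subnet
    Tenant:
      subnet: tenant_subnet
  ServicesDefault:
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::NeutronDhcpAgent
    - OS::TripleO::Services::NeutronL3Agent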
The openstack overcloud deploy command integrates the parameters from the roles_data file into some of the Jinja2-based templates. For example, at certain points, the overcloud.j2.yaml heat template iterates over the list of roles from roles_data.yaml and creates parameters and resources specific to each respective role.

For example, the following snippet contains the resource definition for each role in the overcloud.j2.yaml heat template:
{{role.name}}:
  type: OS::Heat::ResourceGroup
  depends_on: Networks
  properties:
    count: {get_param: {{role.name}}Count}
    removal_policies: {get_param: {{role.name}}RemovalPolicies}
    resource_def:
      type: OS::TripleO::{{role.name}}
      properties:
        CloudDomain: {get_param: CloudDomain}
        ServiceNetMap: {get_attr: [ServiceNetMap, service_net_map]}
        EndpointMap: {get_attr: [EndpointMap, endpoint_map]}
        ...
This snippet shows how the Jinja2-based template incorporates the {{role.name}} variable to define the name of each role as an OS::Heat::ResourceGroup resource. This in turn uses each name parameter from the roles_data file to name each respective OS::Heat::ResourceGroup resource.
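For illustration, if roles_data.yaml defines a role named Networker, the rendered template would contain a resource definition along these lines. This is a sketch of the expansion, not the exact generated output:

Networker:
  type: OS::Heat::ResourceGroup
  depends_on: Networks
  properties:
    count: {get_param: NetworkerCount}
    removal_policies: {get_param: NetworkerRemovalPolicies}
    resource_def:
      type: OS::TripleO::Networker
      ...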
6.6. Creating a new role
You can use the composable service architecture to create new roles according to the requirements of your deployment. For example, you might want to create a new Horizon role to host only the OpenStack Dashboard (horizon).
Role names must start with a letter, end with a letter or digit, and contain only letters, digits, and hyphens. Underscores must never be used in role names.
Procedure
Create a custom copy of the default roles directory:

$ cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.
Create a new file called ~/roles/Horizon.yaml and create a new Horizon role that contains base and core OpenStack Dashboard services:

- name: Horizon
  CountDefault: 1
  HostnameFormatDefault: '%stackname%-horizon-%index%'
  ServicesDefault:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::SensuClient
    - OS::TripleO::Services::FluentdClient
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::Collectd
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::Apache
    - OS::TripleO::Services::Horizon
- Set the name parameter to the name of the custom role. Custom role names have a maximum length of 47 characters.
- Set the CountDefault parameter to 1 so that a default overcloud always includes the Horizon node.
Optional: If you want to scale the services in an existing overcloud, retain the existing services on the Controller role. If you want to create a new overcloud and you want the OpenStack Dashboard to remain on the standalone role, remove the OpenStack Dashboard components from the Controller role definition:

- name: Controller
  CountDefault: 1
  ServicesDefault:
    ...
    - OS::TripleO::Services::GnocchiMetricd
    - OS::TripleO::Services::GnocchiStatsd
    - OS::TripleO::Services::HAproxy
    - OS::TripleO::Services::HeatApi
    - OS::TripleO::Services::HeatApiCfn
    - OS::TripleO::Services::HeatApiCloudwatch
    - OS::TripleO::Services::HeatEngine
    # - OS::TripleO::Services::Horizon   # Remove this service
    - OS::TripleO::Services::IronicApi
    - OS::TripleO::Services::IronicConductor
    - OS::TripleO::Services::Iscsid
    - OS::TripleO::Services::Keepalived
    ...
Generate the new roles_data-horizon.yaml file using the ~/roles directory as the source:

$ openstack overcloud roles generate -o roles_data-horizon.yaml \
  --roles-path ~/roles \
  Controller Compute Horizon
Define a new flavor for this role so that you can tag specific nodes. For this example, use the following commands to create a horizon flavor.

Create a horizon flavor:

(undercloud)$ openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 horizon
Note: These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size.
Tag each bare metal node that you want to designate for the Dashboard service (horizon) with a custom resource class:
(undercloud)$ openstack baremetal node set --resource-class baremetal.HORIZON <NODE>
Replace <NODE> with the ID of the bare metal node.

Associate the horizon flavor with the custom resource class:

(undercloud)$ openstack flavor set --property resources:CUSTOM_BAREMETAL_HORIZON=1 horizon
To determine the name of a custom resource class that corresponds to a resource class of a bare metal node, convert the resource class to uppercase, replace punctuation with an underscore, and prefix the value with CUSTOM_.

Note: A flavor can request only one instance of a bare metal resource class.
Set the following flavor properties to prevent the Compute scheduler from using the bare metal flavor properties for scheduling instances:
(undercloud)$ openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 horizon
Define the Horizon node count and flavor using the following environment file snippet:

parameter_defaults:
  OvercloudHorizonFlavor: horizon
  HorizonCount: 1
Include the new roles_data-horizon.yaml file and environment file in the openstack overcloud deploy command, along with any other environment files relevant to your deployment:

$ openstack overcloud deploy --templates -r ~/templates/roles_data-horizon.yaml -e ~/templates/node-count-flavor.yaml
This configuration creates a three-node overcloud that consists of one Controller node, one Compute node, and one Horizon node. To view the list of nodes in your overcloud, run the following command:
$ openstack server list
6.7. Guidelines and limitations
Note the following guidelines and limitations for the composable role architecture.
For services not managed by Pacemaker:
- You can assign services to standalone custom roles.
- You can create additional custom roles after the initial deployment and deploy them to scale existing services.
For services managed by Pacemaker:
- You can assign Pacemaker-managed services to standalone custom roles.
- Pacemaker has a 16 node limit. If you assign the Pacemaker service (OS::TripleO::Services::Pacemaker) to 16 nodes, subsequent nodes must use the Pacemaker Remote service (OS::TripleO::Services::PacemakerRemote) instead. You cannot have the Pacemaker service and Pacemaker Remote service on the same role.
- Do not include the Pacemaker service (OS::TripleO::Services::Pacemaker) on roles that do not contain Pacemaker-managed services.
- You cannot scale up or scale down a custom role that contains OS::TripleO::Services::Pacemaker or OS::TripleO::Services::PacemakerRemote services.
General limitations:
- You cannot change custom roles and composable services during a major version upgrade.
- You cannot modify the list of services for any role after deploying an overcloud. Modifying the service lists after overcloud deployment can cause deployment errors and leave orphaned services on nodes.
6.8. Examining composable service architecture
The core heat template collection contains two sets of composable service templates:
- deployment contains the templates for key OpenStack services.
- puppet/services contains legacy templates for configuring composable services. In some cases, the composable services use templates from this directory for compatibility. In most cases, the composable services use the templates in the deployment directory.
Each template contains a description that identifies its purpose. For example, the deployment/time/ntp-baremetal-puppet.yaml service template contains the following description:
description: >
  NTP service deployment using puppet, this YAML file
  creates the interface between the HOT template
  and the puppet manifest that actually installs
  and configure NTP.
These service templates are registered as resources specific to a Red Hat OpenStack Platform deployment. This means that you can call each resource using a unique heat resource namespace defined in the overcloud-resource-registry-puppet.j2.yaml file. All services use the OS::TripleO::Services namespace for their resource type.
Some resources use the base composable service templates directly:
resource_registry:
  ...
  OS::TripleO::Services::Ntp: deployment/time/ntp-baremetal-puppet.yaml
  ...
However, core services require containers and use the containerized service templates. For example, the keystone containerized service uses the following resource:
resource_registry:
  ...
  OS::TripleO::Services::Keystone: deployment/keystone/keystone-container-puppet.yaml
  ...
These containerized templates usually reference other templates to include dependencies. For example, the deployment/keystone/keystone-container-puppet.yaml template stores the output of the base template in the ContainersCommon resource:
resources:
  ContainersCommon:
    type: ../containers-common.yaml
The containerized template can then incorporate functions and data from the containers-common.yaml template.
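For example, a containerized service template can merge common container settings into its own configuration with get_attr. The following is a sketch of the pattern only; the volume entry shown is purely illustrative and not taken from the actual template:

# Sketch: merge the common container volume list (an output of
# containers-common.yaml) with a service-specific bind mount.
volumes:
  list_concat:
    - {get_attr: [ContainersCommon, volumes]}
    - - /var/log/containers/keystone:/var/log/keystone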
The overcloud.j2.yaml heat template includes a section of Jinja2-based code to define a service list for each custom role in the roles_data.yaml file:
{{role.name}}Services:
  description: A list of service resources (configured in the heat
               resource_registry) which represent nested stacks
               for each service that should get installed on the {{role.name}} role.
  type: comma_delimited_list
  default: {{role.ServicesDefault|default([])}}
For the default roles, this creates the following service list parameters: ControllerServices, ComputeServices, BlockStorageServices, ObjectStorageServices, and CephStorageServices.
You define the default services for each custom role in the roles_data.yaml file. For example, the default Controller role contains the following content:
- name: Controller
  CountDefault: 1
  ServicesDefault:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephMon
    - OS::TripleO::Services::CephExternal
    - OS::TripleO::Services::CephRgw
    - OS::TripleO::Services::CinderApi
    - OS::TripleO::Services::CinderBackup
    - OS::TripleO::Services::CinderScheduler
    - OS::TripleO::Services::CinderVolume
    - OS::TripleO::Services::Core
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::GlanceApi
    - OS::TripleO::Services::GlanceRegistry
    ...
These services are then defined as the default list for the ControllerServices parameter.
You can also use an environment file to override the default list for the service parameters. For example, you can define ControllerServices in the parameter_defaults section of an environment file to override the services list from the roles_data.yaml file.
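For example, an environment file along the following lines would replace the Controller service list entirely. This is a truncated sketch; a real override must list every service that the role requires:

parameter_defaults:
  ControllerServices:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::Keystone
    # ...the remaining services for the role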
6.9. Adding and removing services from roles
The basic method of adding or removing services involves creating a copy of the default service list for a node role and then adding or removing services. For example, you might want to remove OpenStack Orchestration (heat) from the Controller nodes.
Procedure
Create a custom copy of the default roles directory:

$ cp -r /usr/share/openstack-tripleo-heat-templates/roles ~/.
Edit the ~/roles/Controller.yaml file and modify the service list for the ServicesDefault parameter. Scroll to the OpenStack Orchestration services and remove them:

- OS::TripleO::Services::GlanceApi
- OS::TripleO::Services::GlanceRegistry
- OS::TripleO::Services::HeatApi            # Remove this service
- OS::TripleO::Services::HeatApiCfn         # Remove this service
- OS::TripleO::Services::HeatApiCloudwatch  # Remove this service
- OS::TripleO::Services::HeatEngine         # Remove this service
- OS::TripleO::Services::MySQL
- OS::TripleO::Services::NeutronDhcpAgent
Generate the new roles_data file:

$ openstack overcloud roles generate -o roles_data-no_heat.yaml \
  --roles-path ~/roles \
  Controller Compute Networker
Include this new roles_data file when you run the openstack overcloud deploy command:

$ openstack overcloud deploy --templates -r ~/templates/roles_data-no_heat.yaml
This command deploys an overcloud without OpenStack Orchestration services installed on the Controller nodes.
You can also disable services in the roles_data file using a custom environment file. Redirect the services that you want to disable to the OS::Heat::None resource. For example:
resource_registry:
  OS::TripleO::Services::HeatApi: OS::Heat::None
  OS::TripleO::Services::HeatApiCfn: OS::Heat::None
  OS::TripleO::Services::HeatApiCloudwatch: OS::Heat::None
  OS::TripleO::Services::HeatEngine: OS::Heat::None
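Then include that environment file when you run the openstack overcloud deploy command. The file name disable-heat.yaml here is only an example:

$ openstack overcloud deploy --templates -e ~/templates/disable-heat.yaml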
6.10. Enabling disabled services
Some services are disabled by default. These services are registered as null operations (OS::Heat::None) in the overcloud-resource-registry-puppet.j2.yaml file. For example, the Block Storage backup service (cinder-backup) is disabled:
OS::TripleO::Services::CinderBackup: OS::Heat::None
To enable this service, include an environment file that links the resource to its respective heat templates in the puppet/services directory. Some services have predefined environment files in the environments directory. For example, the Block Storage backup service uses the environments/cinder-backup.yaml file.
Procedure
Add an entry in an environment file that links the CinderBackup service to the heat template that contains the cinder-backup configuration:

resource_registry:
  OS::TripleO::Services::CinderBackup: ../podman/services/pacemaker/cinder-backup.yaml
  ...
This entry overrides the default null operation resource and enables the service.
Include this environment file when you run the openstack overcloud deploy command:

$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml
6.11. Creating a generic node with no services
You can create generic Red Hat Enterprise Linux 8.4 nodes without any OpenStack services configured. This is useful when you need to host software outside of the core Red Hat OpenStack Platform (RHOSP) environment. For example, RHOSP provides integration with monitoring tools such as Kibana and Sensu. For more information, see the Monitoring Tools Configuration Guide. While Red Hat does not provide support for the monitoring tools themselves, director can create a generic Red Hat Enterprise Linux 8.4 node to host these tools.
The generic node still uses the base overcloud-full image rather than a base Red Hat Enterprise Linux 8 image. This means the node has some Red Hat OpenStack Platform software installed but not enabled or configured.
Procedure
Create a generic role in your custom roles_data.yaml file that does not contain a ServicesDefault list:

- name: Generic
- name: Controller
  CountDefault: 1
  ServicesDefault:
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephClient
    ...
- name: Compute
  CountDefault: 1
  ServicesDefault:
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephClient
    ...
Ensure that you retain the existing Controller and Compute roles.

Create an environment file generic-node-params.yaml to specify how many generic Red Hat Enterprise Linux 8 nodes you require and the flavor to use when selecting nodes to provision:

parameter_defaults:
  OvercloudGenericFlavor: baremetal
  GenericCount: 1
Include both the roles file and the environment file when you run the openstack overcloud deploy command:

$ openstack overcloud deploy --templates \
  -r ~/templates/roles_data_with_generic.yaml \
  -e ~/templates/generic-node-params.yaml
This configuration deploys a three-node environment with one Controller node, one Compute node, and one generic Red Hat Enterprise Linux 8 node.