Chapter 4. Configuring the Overcloud

This section runs through the process of creating an Overcloud that uses the external load balancer. This includes registering the nodes, configuring the network setup, and configuring the options required for the Overcloud creation command.

4.1. Setting up your Environment

This section uses a condensed version of the process from the Director Installation and Usage guide.

Use the following workflow to set up your environment:

  • Create a node definition template and register blank nodes in the director.
  • Inspect the hardware of all nodes.
  • Manually tag nodes into roles.
  • Create flavors and tag them into roles.

4.1.1. Initializing the Stack User

Log into the director host as the stack user and run the following command to initialize your director configuration:

$ source ~/stackrc

This sets up environment variables containing authentication details to access the director’s CLI tools.
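
If you want to confirm that the variables are loaded, you can list them. The exact set of variables depends on your director version:

$ env | grep -E '^OS_'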

4.1.2. Registering Nodes

A node definition template (instackenv.json) is a JSON file that contains the hardware and power management details for registering nodes. For example:

{
    "nodes":[
        {
            "mac":[
                "bb:bb:bb:bb:bb:bb"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.205"
        },
        {
            "mac":[
                "cc:cc:cc:cc:cc:cc"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.206"
        },
        {
            "mac":[
                "dd:dd:dd:dd:dd:dd"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.207"
        },
        {
            "mac":[
                "ee:ee:ee:ee:ee:ee"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.208"
        }
    ]
}

After creating the template, save the file to the stack user’s home directory (/home/stack/instackenv.json), then import it into the director. Use the following command to accomplish this:

$ openstack overcloud node import ~/instackenv.json

This imports the template and registers each node from the template into the director.

Assign the kernel and ramdisk images to all nodes:

$ openstack overcloud node configure

The nodes are now registered and configured in the director.
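
To verify the registration, list the nodes and check that each one appears with a power and provisioning state:

$ openstack baremetal node list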

4.1.3. Inspecting the Hardware of Nodes

After registering the nodes, inspect the hardware attributes of each node. Run the following command:

$ openstack overcloud node introspect --all-manageable
Important

The nodes must be in the manageable state. Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes.
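
If a node is not in the manageable state, you can move it there manually before starting introspection. A sketch, using one of the node UUIDs from the next section:

$ openstack baremetal node manage 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13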

4.1.4. Manually Tagging the Nodes

After registering and inspecting the hardware of each node, tag them into specific profiles. These profile tags match our nodes to flavors, and in turn the flavors are assigned to a deployment role.

Retrieve a list of your nodes to identify their UUIDs:

$ openstack baremetal node list

To manually tag a node to a specific profile, add a profile option to the properties/capabilities parameter for each node. For example, to tag three nodes to use a controller profile and one node to use a compute profile, use the following commands:

$ openstack baremetal node set 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 --property capabilities="profile:control,boot_option:local"
$ openstack baremetal node set 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a --property capabilities="profile:control,boot_option:local"
$ openstack baremetal node set 5e3b2f50-fcd9-4404-b0a2-59d79924b38e --property capabilities="profile:control,boot_option:local"
$ openstack baremetal node set 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 --property capabilities="profile:compute,boot_option:local"

The addition of the profile:compute and profile:control options tags the nodes into their respective profiles.
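
The workflow in Section 4.1 also calls for creating flavors and tagging them into roles. The following is a minimal sketch for the control role; the RAM, disk, and vCPU values are examples and must match your node hardware:

$ openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 control
$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="control" control

Repeat both commands for a compute flavor tagged with the profile:compute capability.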

4.2. Configuring the Network

This section examines the network configuration for the Overcloud. This includes isolating our services to use specific networks and configuring the Overcloud with our load balancing options.

4.2.1. Isolating the Network

The director provides methods to configure isolated overcloud networks. This means the Overcloud environment separates network traffic types into different networks, which in turn assigns network traffic to specific network interfaces or bonds. After configuring isolated networks, the director configures the OpenStack services to use the isolated networks. If no isolated networks are configured, all services run on the Provisioning network.

First, the Overcloud requires a set of network interface templates. You customize these templates to configure the node interfaces on a per role basis. These templates are standard Heat templates in YAML format. The director contains a set of example templates to get you started:

  • /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans - Directory containing templates for single NIC with VLANs configuration on a per role basis.
  • /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans - Directory containing templates for bonded NIC configuration on a per role basis.

For more information on network interface configuration, see the Director Installation and Usage guide.
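
If you plan to customize these templates, one approach is to copy the Heat template collection to a local directory and edit the copies there. The /home/stack/templates/my-overcloud path referenced in the environment file below assumes a local copy such as:

$ cp -r /usr/share/openstack-tripleo-heat-templates ~/templates/my-overcloud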

Next, create a network environment file. This file is a Heat environment file that describes the Overcloud’s network environment and points to the network interface configuration templates. This file also defines the subnets and VLANs for our network along with IP address ranges. You customize these values for the local environment.

This scenario uses the following network environment file saved as /home/stack/network-environment.yaml:

resource_registry:
  OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/templates/my-overcloud/network/config/bond-with-vlans/cinder-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/my-overcloud/network/config/bond-with-vlans/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/my-overcloud/network/config/bond-with-vlans/controller.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/templates/my-overcloud/network/config/bond-with-vlans/swift-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/my-overcloud/network/config/bond-with-vlans/ceph-storage.yaml

parameter_defaults:
  InternalApiNetCidr: 172.16.20.0/24
  TenantNetCidr: 172.16.22.0/24
  StorageNetCidr: 172.16.21.0/24
  StorageMgmtNetCidr: 172.16.19.0/24
  ExternalNetCidr: 172.16.23.0/24
  InternalApiAllocationPools: [{'start': '172.16.20.10', 'end': '172.16.20.200'}]
  TenantAllocationPools: [{'start': '172.16.22.10', 'end': '172.16.22.200'}]
  StorageAllocationPools: [{'start': '172.16.21.10', 'end': '172.16.21.200'}]
  StorageMgmtAllocationPools: [{'start': '172.16.19.10', 'end': '172.16.19.200'}]
  # Leave room for floating IPs in the External allocation pool
  ExternalAllocationPools: [{'start': '172.16.23.10', 'end': '172.16.23.60'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 172.16.23.1
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.0.2.254
  # The IP address of the EC2 metadata server. Generally the IP of the Undercloud
  EC2MetadataIp: 192.0.2.1
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["8.8.8.8","8.8.4.4"]
  InternalApiNetworkVlanID: 201
  StorageNetworkVlanID: 202
  StorageMgmtNetworkVlanID: 203
  TenantNetworkVlanID: 204
  ExternalNetworkVlanID: 100
  # Set to "br-ex" if using floating IPs on native VLAN on bridge br-ex
  NeutronExternalNetworkBridge: "''"
  # Customize bonding options if required
  BondInterfaceOvsOptions:
    "bond_mode=balance-tcp lacp=active other-config:lacp-fallback-ab=true"

For more information about network environment configuration, see the Director Installation and Usage guide.

Ensure the director host has access to the Internal API network so that it can connect to the keystone_admin_ssh VIP.
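
A simple way to test this access from the director host, assuming the Internal API VIP defined in the next section (172.16.20.250), is a basic reachability check:

$ ping -c 3 172.16.20.250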

4.2.2. Configuring Load Balancing Options

The director provides a method for creating an Overcloud where an external load balancer hosts the virtual IPs instead of HAProxy managing them internally. This configuration assumes that a number of virtual IPs are configured on the external load balancer, one per isolated network plus one for the Redis service, before the Overcloud deployment starts. Some of the virtual IPs can be identical if the Overcloud node NIC configuration permits it.

You configured the external load balancer with these settings in the previous chapter. The settings include the IPs that the director assigns to the Overcloud nodes and uses for service configuration.

The following is an example Heat environment file (external-lb.yaml) that contains the Overcloud configuration for using the external load balancer:

parameter_defaults:
  ControlFixedIPs: [{'ip_address':'192.0.2.250'}]
  PublicVirtualFixedIPs: [{'ip_address':'172.16.23.250'}]
  InternalApiVirtualFixedIPs: [{'ip_address':'172.16.20.250'}]
  StorageVirtualFixedIPs: [{'ip_address':'172.16.21.250'}]
  StorageMgmtVirtualFixedIPs: [{'ip_address':'172.16.19.250'}]
  RedisVirtualFixedIPs: [{'ip_address':'172.16.20.249'}]
  # IP assignments for the Overcloud Controller nodes. Ensure these IPs are from the respective allocation pools defined in the network environment file.
  ControllerIPs:
    external:
    - 172.16.23.150
    - 172.16.23.151
    - 172.16.23.152
    internal_api:
    - 172.16.20.150
    - 172.16.20.151
    - 172.16.20.152
    storage:
    - 172.16.21.150
    - 172.16.21.151
    - 172.16.21.152
    storage_mgmt:
    - 172.16.19.150
    - 172.16.19.151
    - 172.16.19.152
    tenant:
    - 172.16.22.150
    - 172.16.22.151
    - 172.16.22.152
    # CIDRs
    external_cidr: "24"
    internal_api_cidr: "24"
    storage_cidr: "24"
    storage_mgmt_cidr: "24"
    tenant_cidr: "24"
  RedisPassword: p@55w0rd!
  ServiceNetMap:
    NeutronTenantNetwork: tenant
    CeilometerApiNetwork: internal_api
    AodhApiNetwork: internal_api
    GnocchiApiNetwork: internal_api
    MongoDbNetwork: internal_api
    CinderApiNetwork: internal_api
    CinderIscsiNetwork: storage
    GlanceApiNetwork: storage
    GlanceRegistryNetwork: internal_api
    KeystoneAdminApiNetwork: internal_api
    KeystonePublicApiNetwork: internal_api
    NeutronApiNetwork: internal_api
    HeatApiNetwork: internal_api
    NovaApiNetwork: internal_api
    NovaMetadataNetwork: internal_api
    NovaVncProxyNetwork: internal_api
    SwiftMgmtNetwork: storage_mgmt
    SwiftProxyNetwork: storage
    HorizonNetwork: internal_api
    MemcachedNetwork: internal_api
    RabbitMqNetwork: internal_api
    RedisNetwork: internal_api
    MysqlNetwork: internal_api
    CephClusterNetwork: storage_mgmt
    CephPublicNetwork: storage
    ControllerHostnameResolveNetwork: internal_api
    ComputeHostnameResolveNetwork: internal_api
    BlockStorageHostnameResolveNetwork: internal_api
    ObjectStorageHostnameResolveNetwork: internal_api
    CephStorageHostnameResolveNetwork: storage

The parameter_defaults section contains the VIP and IP assignments for each network in OpenStack. These settings must match the IP configuration for each service on the load balancer. This section also defines an administrative password for the Redis service (RedisPassword) and contains the ServiceNetMap parameter, which maps each OpenStack service to a specific network. The load balancing configuration requires this service remapping.

4.3. Configuring SSL for Load Balancing

The Overcloud uses unencrypted endpoints for its services by default. This means the Overcloud configuration requires an additional environment file to enable SSL/TLS for its endpoints.

Note

Ensure your external load balancer has a copy of your SSL certificate and key installed.
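
One way to confirm that a certificate and key pair match before installing them, assuming PEM files with hypothetical names, is to compare their modulus hashes. Both commands should print the same value:

$ openssl x509 -noout -modulus -in overcloud.crt | openssl md5
$ openssl rsa -noout -modulus -in overcloud.key | openssl md5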

An overcloud with an internal load balancer uses the enable-tls.yaml environment file from the Heat template collection to install the key and certificate pair. An external load balancer does not require this file, but the overcloud still requires a list of SSL/TLS endpoints. Depending on whether you are using an IP address or a domain name to access the public endpoints, use one of the following environment files:

  • If using a DNS name for accessing the public endpoints, use /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-dns.yaml
  • If using an IP address for accessing the public endpoints, use /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-ip.yaml

If using a self-signed certificate or the certificate signer is not in the default trust store on the Overcloud image, inject the certificate into the Overcloud image. Copy the inject-trust-anchor.yaml environment file from the Heat template collection:

$ cp -r /usr/share/openstack-tripleo-heat-templates/environments/inject-trust-anchor.yaml ~/templates/.

Edit this file and make the following changes for these parameters:

SSLRootCertificate

Copy the contents of the root certificate authority file into the SSLRootCertificate parameter. For example:

parameter_defaults:
  SSLRootCertificate: |
    -----BEGIN CERTIFICATE-----
    MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQEBCwUAMFgxCzAJBgNV
    ...
    sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvp
    -----END CERTIFICATE-----
Important

The certificate authority contents require the same indentation level for all new lines.
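
A quick way to produce correctly indented contents, assuming the certificate authority file is saved as ca.crt (a hypothetical path), is to prefix every line with four spaces and paste the output under the parameter:

$ sed -e 's/^/    /' ca.crt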

OS::TripleO::NodeTLSCAData

Change the resource location for OS::TripleO::NodeTLSCAData to an absolute path:

resource_registry:
  OS::TripleO::NodeTLSCAData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/ca-inject.yaml

If using a DNS hostname to access the Overcloud through SSL/TLS, create a new environment file (~/templates/cloudname.yaml) to define the hostname of the Overcloud’s endpoints. Use the following parameters:

CloudName
The DNS hostname for the Overcloud endpoints.
DnsServers
A list of DNS servers to use. The configured DNS servers must contain an entry for the configured CloudName that matches the IP address of the Public API.

The following is an example of the contents for this file:

parameter_defaults:
  CloudName: overcloud.example.com
  DnsServers: ["10.0.0.1"]

The deployment command (openstack overcloud deploy) in Section 4.4, “Creating the Overcloud” uses the -e option to add environment files. Add the environment files from this section in the following order:

  • The environment file with the SSL/TLS endpoints (tls-endpoints-public-dns.yaml or tls-endpoints-public-ip.yaml)
  • The environment file to set the DNS hostname (cloudname.yaml)
  • The environment file to inject the root certificate authority (inject-trust-anchor.yaml)

For example:

$ openstack overcloud deploy --templates [...] -e /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-dns.yaml -e ~/templates/cloudname.yaml -e ~/templates/inject-trust-anchor.yaml

4.4. Creating the Overcloud

The creation of an Overcloud that uses an external load balancer requires additional arguments to the openstack overcloud deploy command. For example:

$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/network-environment.yaml  -e /usr/share/openstack-tripleo-heat-templates/environments/external-loadbalancer-vip.yaml -e ~/external-lb.yaml --control-scale 3 --compute-scale 1 --control-flavor control --compute-flavor compute [ADDITIONAL OPTIONS]

The above command uses the following options:

  • --templates - Creates the Overcloud from the default Heat template collection.
  • -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml - Adds an additional environment file to the Overcloud deployment. In this case, it is an environment file that initializes the network isolation configuration.
  • -e ~/network-environment.yaml - Adds an additional environment file to the Overcloud deployment. In this case, it is the network environment file created previously.
  • -e /usr/share/openstack-tripleo-heat-templates/environments/external-loadbalancer-vip.yaml - Adds an additional environment file to the Overcloud deployment. In this case, it is an environment file that initializes the external load balancing configuration. Note that you should include this environment file after the network configuration files.
  • -e ~/external-lb.yaml - Adds an additional environment file to the Overcloud deployment. In this case, it is the environment file containing our external load balancer configuration. Note that you should include this environment file after the network configuration files.
  • --control-scale 3 - Scale the Controller nodes to three.
  • --compute-scale 1 - Scale the Compute nodes to one.
  • --control-flavor control - Use a specific flavor for the Controller nodes.
  • --compute-flavor compute - Use a specific flavor for the Compute nodes.
Note

For a full list of options, run:

$ openstack help overcloud deploy

See also the Director Installation and Usage guide for parameter examples.

The Overcloud creation process begins and the director provisions your nodes. This process takes some time to complete. To view the status of the Overcloud creation, open a separate terminal as the stack user and run:

$ source ~/stackrc
$ heat stack-list --show-nested
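
Depending on your client version, the unified OpenStack client offers an equivalent listing:

$ openstack stack list --nested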

4.5. Accessing the Overcloud

The director generates a script to configure and help authenticate interactions with your Overcloud from the director host. The director saves this file, overcloudrc, in your stack user’s home directory. Run the following command to use this file:

$ source ~/overcloudrc

This loads the necessary environment variables to interact with your Overcloud from the director host's CLI.
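
As a quick check that the credentials work, you can list the Overcloud's service endpoints (assuming the deployment has completed):

$ openstack endpoint list

To return to interacting with the director, run the following command: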

$ source ~/stackrc

4.6. Completing the Overcloud Configuration

This concludes the creation of the Advanced Overcloud.

For fencing the high availability cluster, see the Director Installation and Usage guide.

For post-creation functions, see the Director Installation and Usage guide.
