
Chapter 5. Configuring Basic Overcloud Requirements with the CLI Tools


This chapter provides the basic configuration steps for an OpenStack Platform environment using the CLI tools. An overcloud with a basic configuration contains no custom features. However, you can add advanced configuration options to this basic overcloud and customize it to your specifications using the instructions in the Advanced Overcloud Customization guide.

For the examples in this chapter, all nodes are bare metal systems that use IPMI for power management. For more supported power management types and their options, see Appendix B, Power Management Drivers.

Workflow

  1. Create a node definition template and register blank nodes in the director.
  2. Inspect hardware of all nodes.
  3. Tag nodes into roles.
  4. Define additional node properties.

Requirements

  • The director node created in Chapter 4, Installing the Undercloud
  • A set of bare metal machines for your nodes. The number of nodes required depends on the type of overcloud you intend to create (see Section 3.1, “Planning Node Deployment Roles” for information on overcloud roles). These machines must also comply with the requirements set for each node type. For these requirements, see Section 2.4, “Overcloud Requirements”. These nodes do not require an operating system. The director copies a Red Hat Enterprise Linux 7 image to each node.
  • One network connection for the Provisioning network, which is configured as a native VLAN. All nodes must connect to this network and comply with the requirements set in Section 2.3, “Networking Requirements”. The examples in this chapter use 192.0.2.0/24 as the Provisioning subnet with the following IP address assignments:

    Table 5.1. Provisioning Network IP Assignments

    | Node Name  | IP Address   | MAC Address       | IPMI IP Address |
    |------------|--------------|-------------------|-----------------|
    | Director   | 192.0.2.1    | aa:aa:aa:aa:aa:aa | None required   |
    | Controller | DHCP defined | bb:bb:bb:bb:bb:bb | 192.0.2.205     |
    | Compute    | DHCP defined | cc:cc:cc:cc:cc:cc | 192.0.2.206     |

  • All other network types use the Provisioning network for OpenStack services. However, you can create additional networks for other network traffic types.

5.1. Registering Nodes for the Overcloud

The director requires a node definition template, which you create manually. This file (instackenv.json) uses the JSON format and contains the hardware and power management details for your nodes. For example, a template for registering two nodes might look like this:

{
    "nodes":[
        {
            "mac":[
                "bb:bb:bb:bb:bb:bb"
            ],
            "name":"node01",
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.205"
        },
        {
            "mac":[
                "cc:cc:cc:cc:cc:cc"
            ],
            "name":"node02",
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.0.2.206"
        }
    ]
}

This template uses the following attributes:

name
The logical name for the node.
pm_type
The power management driver to use. This example uses the IPMI driver (pxe_ipmitool), which is the preferred driver for power management.
pm_user; pm_password
The IPMI username and password.
pm_addr
The IP address of the IPMI device.
mac
(Optional) A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.
cpu
(Optional) The number of CPUs on the node.
memory
(Optional) The amount of memory in MB.
disk
(Optional) The size of the hard disk in GB.
arch
(Optional) The system architecture.
Note

IPMI is the preferred supported power management driver. For more supported power management types and their options, see Appendix B, Power Management Drivers. If these power management drivers do not work as expected, use IPMI for your power management.

After creating the template, run the following command to verify the formatting and syntax:

$ openstack baremetal instackenv validate --file instackenv.json

Save the file to the stack user’s home directory (/home/stack/instackenv.json), then run the following command to import the template to the director:

$ openstack overcloud node import ~/instackenv.json

This imports the template and registers each node from the template into the director.

After the node registration and configuration completes, view a list of these nodes in the CLI:

$ openstack baremetal node list
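The output resembles the following (an illustrative example; your UUIDs and states will differ):

+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 | node01 | None          | power off   | available          | False       |
| 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 | node02 | None          | power off   | available          | False       |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+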

5.2. Inspecting the Hardware of Nodes

The director can run an introspection process on each node. This process causes each node to boot an introspection agent over PXE. This agent collects hardware data from the node and sends it back to the director. The director then stores this introspection data in the OpenStack Object Storage (swift) service running on the director. The director uses hardware information for various purposes such as profile tagging, benchmarking, and manual root disk assignment.

Note

You can also create policy files to automatically tag nodes into profiles immediately after introspection. For more information on creating policy files and including them in the introspection process, see Appendix E, Automatic Profile Tagging. Alternatively, you can manually tag nodes into profiles as per the instructions in Section 5.3, “Tagging Nodes into Profiles”.

Set all nodes to a managed state:

$ for node in $(openstack baremetal node list -c UUID -f value) ; do openstack baremetal node manage $node ; done

Run the following command to inspect the hardware attributes of each node:

$ openstack overcloud node introspect --all-manageable --provide
  • The --all-manageable option introspects only nodes in a managed state. In this example, that is all of them.
  • The --provide option resets all nodes to an available state after introspection.

Monitor the progress of the introspection using the following command in a separate terminal window:

$ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -u openstack-ironic-conductor -f
Important

Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes.

Alternatively, perform a single introspection on each node individually. Set the node to management mode, perform the introspection, then move the node out of management mode:

$ openstack baremetal node manage [NODE UUID]
$ openstack overcloud node introspect [NODE UUID] --provide
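You can also check the result of an individual node's introspection with the inspector client (an optional check; substitute the node's UUID):

$ openstack baremetal introspection status [NODE UUID]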

5.3. Tagging Nodes into Profiles

After registering and inspecting the hardware of each node, tag the nodes into specific profiles. These profile tags match your nodes to flavors, which in turn are assigned to deployment roles. The following example shows the relationship across roles, flavors, profiles, and nodes for Controller nodes:

Role
The Controller role defines how to configure controller nodes.
Flavor
The control flavor defines the hardware profile for nodes to use as controllers. You assign this flavor to the Controller role so the director can decide which nodes to use.
Profile
The control profile is a tag you apply to the control flavor. This defines the nodes that belong to the flavor.
Node
You also apply the control profile tag to individual nodes, which groups them to the control flavor and, as a result, the director configures them using the Controller role.

Default profile flavors compute, control, swift-storage, ceph-storage, and block-storage are created during undercloud installation and are usable without modification in most environments.

Note

For a large number of nodes, use automatic profile tagging. See Appendix E, Automatic Profile Tagging for more details.

To tag a node into a specific profile, add a profile option to the properties/capabilities parameter for each node. For example, to tag your nodes to use Controller and Compute profiles respectively, use the following commands:

$ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13
$ openstack baremetal node set --property capabilities='profile:control,boot_option:local' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0

The addition of the profile:compute and profile:control options tags the two nodes into their respective profiles.

These commands also set the boot_option:local parameter, which defines how each node boots. Depending on your hardware, you might also need to set the boot_mode parameter to uefi so that nodes boot using UEFI instead of the default BIOS mode. For more information, see Section D.2, “UEFI Boot Mode”.
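For larger environments, you can script the tagging with a loop similar to the one used to set nodes to a managed state. The following sketch assumes your node names contain a role hint such as "compute"; adapt the match to your own naming scheme:

$ # Illustrative sketch: tag every node whose name contains "compute" into the compute profile
$ for node in $(openstack baremetal node list -c Name -f value | grep compute) ; do openstack baremetal node set --property capabilities='profile:compute,boot_option:local' $node ; done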

After completing node tagging, check the assigned profiles or possible profiles:

$ openstack overcloud profiles list
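For the two nodes tagged above, the output resembles the following (illustrative):

+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 | node01    | available       | compute         |                   |
| 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 | node02    | available       | control         |                   |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+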

Custom Role Profiles

If using custom roles, you might need to create additional flavors and profiles to accommodate these new roles. For example, to create a new flavor for a Networker role, run the following commands:

$ openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 networker
$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="networker" networker
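To confirm the flavor's properties before assigning nodes, you can inspect it (an optional check):

$ openstack flavor show networker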

Assign nodes with this new profile:

$ openstack baremetal node set --property capabilities='profile:networker,boot_option:local' dad05b82-0c74-40bf-9d12-193184bfc72d

5.4. Defining the Root Disk for Nodes

Some nodes might use multiple disks. This means the director needs to identify the disk to use for the root disk during provisioning. There are several properties you can use to help the director identify the root disk:

  • model (String): Device identifier.
  • vendor (String): Device vendor.
  • serial (String): Disk serial number.
  • hctl (String): Host:Channel:Target:Lun for SCSI.
  • size (Integer): Size of the device in GB.
  • wwn (String): Unique storage identifier.
  • wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
  • wwn_vendor_extension (String): Unique vendor storage identifier.
  • rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
  • name (String): The name of the device, for example: /dev/sdb1. Use this only for devices with persistent names.

This example shows how to use a disk’s serial number to specify the root disk to deploy the overcloud image.

Check the disk information from each node’s hardware introspection. The following command displays the disk information from a node:

$ openstack baremetal introspection data save 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 | jq ".inventory.disks"

For example, the data for one node might show three disks:

[
  {
    "size": 299439751168,
    "rotational": true,
    "vendor": "DELL",
    "name": "/dev/sda",
    "wwn_vendor_extension": "0x1ea4dcc412a9632b",
    "wwn_with_extension": "0x61866da04f3807001ea4dcc412a9632b",
    "model": "PERC H330 Mini",
    "wwn": "0x61866da04f380700",
    "serial": "61866da04f3807001ea4dcc412a9632b"
  },
  {
    "size": 299439751168,
    "rotational": true,
    "vendor": "DELL",
    "name": "/dev/sdb",
    "wwn_vendor_extension": "0x1ea4e13c12e36ad6",
    "wwn_with_extension": "0x61866da04f380d001ea4e13c12e36ad6",
    "model": "PERC H330 Mini",
    "wwn": "0x61866da04f380d00",
    "serial": "61866da04f380d001ea4e13c12e36ad6"
  },
  {
    "size": 299439751168,
    "rotational": true,
    "vendor": "DELL",
    "name": "/dev/sdc",
    "wwn_vendor_extension": "0x1ea4e31e121cfb45",
    "wwn_with_extension": "0x61866da04f37fc001ea4e31e121cfb45",
    "model": "PERC H330 Mini",
    "wwn": "0x61866da04f37fc00",
    "serial": "61866da04f37fc001ea4e31e121cfb45"
  }
]
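To narrow the output to just the fields relevant for root disk selection, you can extend the jq filter (a convenience sketch, not required by the procedure):

$ openstack baremetal introspection data save 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 | jq '.inventory.disks[] | {name, serial, size}'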

For this example, set the root device to disk 2, which has 61866da04f380d001ea4e13c12e36ad6 as the serial number. This requires a change to the root_device parameter for the node definition:

$ openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0

This helps the director identify the specific disk to use as the root disk. When you initiate the overcloud creation, the director provisions this node and writes the overcloud image to this disk.
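To confirm the change, you can display the node's properties, which should now include the root_device hint (an optional check):

$ openstack baremetal node show 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 --fields properties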

Note

Make sure to configure the BIOS of each node to include booting from the chosen root disk. The recommended boot order is network boot, then root disk boot.

Important

Do not use name to set the root disk as this value can change when the node boots.

5.5. Customizing the Overcloud

The undercloud includes a set of Heat templates that acts as a plan for your overcloud creation. You can customize advanced features for your overcloud using the Advanced Overcloud Customization guide.

Otherwise, you can continue and deploy a basic overcloud. See Section 5.6, “Creating the Overcloud with the CLI Tools” for more information.

Important

A basic overcloud uses local LVM storage for block storage, which is not a supported configuration. It is recommended to use an external storage solution for block storage.

5.6. Creating the Overcloud with the CLI Tools

The final stage in creating your OpenStack environment is to run the openstack overcloud deploy command. Before running this command, familiarize yourself with key options and how to include custom environment files. The following section discusses the openstack overcloud deploy command and its associated options.

Warning

Do not run openstack overcloud deploy as a background process. The overcloud creation might hang in mid-deployment if started as a background process.

Setting Overcloud Parameters

The following table lists the additional parameters when using the openstack overcloud deploy command.

Table 5.2. Deployment Parameters

--templates [TEMPLATES]
The directory containing the Heat templates to deploy. If blank, the command uses the default template location at /usr/share/openstack-tripleo-heat-templates/
--stack STACK
The name of the stack to create or update
-t [TIMEOUT], --timeout [TIMEOUT]
Deployment timeout in minutes
--libvirt-type [LIBVIRT_TYPE]
Virtualization type to use for hypervisors
--ntp-server [NTP_SERVER]
Network Time Protocol (NTP) server to use to synchronize time. You can also specify multiple NTP servers in a comma-separated list, for example: --ntp-server 0.centos.pool.org,1.centos.pool.org. For a high availability cluster deployment, it is essential that your controllers consistently refer to the same time source. Note that a typical environment might already have a designated NTP time source with established practices.
--no-proxy [NO_PROXY]
Defines custom values for the environment variable no_proxy, which excludes certain domain extensions from proxy communication.
--overcloud-ssh-user OVERCLOUD_SSH_USER
Defines the SSH user to access the overcloud nodes. Normally SSH access occurs through the heat-admin user.
-e [ENVIRONMENT FILE], --environment-file [ENVIRONMENT FILE]
Extra environment files to pass to the overcloud deployment. Can be specified more than once. Note that the order of environment files passed to the openstack overcloud deploy command is important. Parameters from each sequential environment file override the same parameters from earlier environment files.
--environment-directory
The directory containing environment files to include in deployment. The command processes these environment files in numerical, then alphabetical order.
--validation-errors-nonfatal
The overcloud creation process performs a set of pre-deployment checks. With this option, the deployment continues if only non-fatal errors occur in the pre-deployment checks. Note that any errors can still cause your deployment to fail.
--validation-warnings-fatal
The overcloud creation process performs a set of pre-deployment checks. This option exits if any non-critical warnings occur in the pre-deployment checks.
--dry-run
Performs a validation check on the overcloud but does not actually create the overcloud.
--skip-postconfig
Skip the overcloud post-deployment configuration.
--force-postconfig
Force the overcloud post-deployment configuration.
--answers-file ANSWERS_FILE
Path to a YAML file with arguments and parameters.
--rhel-reg
Register overcloud nodes to the Customer Portal or Satellite 6.
--reg-method
Registration method to use for the overcloud nodes: satellite for Red Hat Satellite 6 or Red Hat Satellite 5, portal for Customer Portal.
--reg-org [REG_ORG]
Organization to use for registration.
--reg-force
Register the system even if it is already registered.
--reg-sat-url [REG_SAT_URL]
The base URL of the Satellite server to register overcloud nodes. Use the Satellite’s HTTP URL and not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not https://satellite.example.com. The overcloud creation process uses this URL to determine whether the server is a Red Hat Satellite 5 or Red Hat Satellite 6 server. If a Red Hat Satellite 6 server, the overcloud obtains the katello-ca-consumer-latest.noarch.rpm file, registers with subscription-manager, and installs katello-agent. If a Red Hat Satellite 5 server, the overcloud obtains the RHN-ORG-TRUSTED-SSL-CERT file and registers with rhnreg_ks.
--reg-activation-key [REG_ACTIVATION_KEY]
Activation key to use for registration.

Some command line parameters are outdated or deprecated in favor of Heat template parameters, which you include in the parameter_defaults section of an environment file. The following table maps deprecated parameters to their Heat template equivalents.

Table 5.3. Mapping Deprecated CLI Parameters to Heat Template Parameters

--control-scale
The number of Controller nodes to scale out. Heat template parameter: ControllerCount
--compute-scale
The number of Compute nodes to scale out. Heat template parameter: ComputeCount
--ceph-storage-scale
The number of Ceph Storage nodes to scale out. Heat template parameter: CephStorageCount
--block-storage-scale
The number of Cinder nodes to scale out. Heat template parameter: BlockStorageCount
--swift-storage-scale
The number of Swift nodes to scale out. Heat template parameter: ObjectStorageCount
--control-flavor
The flavor to use for Controller nodes. Heat template parameter: OvercloudControlFlavor
--compute-flavor
The flavor to use for Compute nodes. Heat template parameter: OvercloudComputeFlavor
--ceph-storage-flavor
The flavor to use for Ceph Storage nodes. Heat template parameter: OvercloudCephStorageFlavor
--block-storage-flavor
The flavor to use for Cinder nodes. Heat template parameter: OvercloudBlockStorageFlavor
--swift-storage-flavor
The flavor to use for Swift storage nodes. Heat template parameter: OvercloudSwiftStorageFlavor
--neutron-flat-networks
Defines the flat networks to configure in neutron plugins. Defaults to "datacentre" to permit external network creation. Heat template parameter: NeutronFlatNetworks
--neutron-physical-bridge
An Open vSwitch bridge to create on each hypervisor. This defaults to "br-ex". Typically, this should not need to be changed. Heat template parameter: HypervisorNeutronPhysicalBridge
--neutron-bridge-mappings
The logical to physical bridge mappings to use. Defaults to mapping the external bridge on hosts (br-ex) to a physical name (datacentre). You would use this for the default floating network. Heat template parameter: NeutronBridgeMappings
--neutron-public-interface
Defines the interface to bridge onto br-ex for network nodes. Heat template parameter: NeutronPublicInterface
--neutron-network-type
The tenant network type for Neutron. Heat template parameter: NeutronNetworkType
--neutron-tunnel-types
The tunnel types for the Neutron tenant network. To specify multiple values, use a comma-separated string. Heat template parameter: NeutronTunnelTypes
--neutron-tunnel-id-ranges
Ranges of GRE tunnel IDs to make available for tenant network allocation. Heat template parameter: NeutronTunnelIdRanges
--neutron-vni-ranges
Ranges of VXLAN VNI IDs to make available for tenant network allocation. Heat template parameter: NeutronVniRanges
--neutron-network-vlan-ranges
The Neutron ML2 and Open vSwitch VLAN mapping range to support. Defaults to permitting any VLAN on the datacentre physical network. Heat template parameter: NeutronNetworkVLANRanges
--neutron-mechanism-drivers
The mechanism drivers for the neutron tenant network. Defaults to "openvswitch". To specify multiple values, use a comma-separated string. Heat template parameter: NeutronMechanismDrivers
--neutron-disable-tunneling
Disables tunneling if you aim to use a VLAN segmented network or flat network with Neutron. No Heat template parameter mapping.
--validation-errors-fatal
The overcloud creation process performs a set of pre-deployment checks. This option exits if any fatal errors occur in the pre-deployment checks. It is advisable to use this option because any errors can cause your deployment to fail. No Heat template parameter mapping.

These parameters are scheduled for removal in a future version of Red Hat OpenStack Platform.

Note

Run the following command for a full list of options:

$ openstack help overcloud deploy

5.7. Including Environment Files in Overcloud Creation

The -e option includes an environment file to customize your overcloud. You can include as many environment files as necessary. However, the order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Use the following list as an example of the environment file order:

  • The number of nodes for each role and their flavors. It is vital to include this information for overcloud creation.
  • Any network isolation files, starting with the initialization file (environments/network-isolation.yaml) from the heat template collection, then your custom NIC configuration file, and finally any additional network configurations.
  • Any external load balancing environment files.
  • Any storage environment files such as Ceph Storage, NFS, iSCSI, etc.
  • Any environment files for Red Hat CDN or Satellite registration.
  • Any other custom environment files.

Any environment files added to the overcloud using the -e option become part of your overcloud’s stack definition. The following command is an example of how to start the overcloud creation with custom environment files included:

$ openstack overcloud deploy --templates \
  -e ~/templates/node-info.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  -e ~/templates/storage-environment.yaml \
  --ntp-server pool.ntp.org

This command contains the following additional options:

  • --templates - Creates the overcloud using the Heat template collection in /usr/share/openstack-tripleo-heat-templates.
  • -e ~/templates/node-info.yaml - The -e option adds an additional environment file to the overcloud deployment. In this case, it is a custom environment file that defines how many nodes and which flavors to use for each role. For example:

    parameter_defaults:
      OvercloudControlFlavor: control
      OvercloudComputeFlavor: compute
      OvercloudCephStorageFlavor: ceph-storage
      ControllerCount: 3
      ComputeCount: 3
      CephStorageCount: 3
  • -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml - The -e option adds an additional environment file to the overcloud deployment. In this case, it is an environment file that initializes network isolation configuration.
  • -e ~/templates/network-environment.yaml - The -e option adds an additional environment file to the overcloud deployment. In this case, it is a network environment file that contains customization for network isolation.
  • -e ~/templates/storage-environment.yaml - The -e option adds an additional environment file to the overcloud deployment. In this case, it is a custom environment file that initializes our storage configuration.
  • --ntp-server pool.ntp.org - Use an NTP server for time synchronization. This is useful for keeping the Controller node cluster in synchronization.

The director requires these environment files for re-deployment and post-deployment functions in Chapter 7, Performing Tasks after Overcloud Creation. Failure to include these files can result in damage to your overcloud.

If you aim to later modify the overcloud configuration, you should:

  1. Modify parameters in the custom environment files and Heat templates
  2. Run the openstack overcloud deploy command again with the same environment files

Do not edit the overcloud configuration directly, because the director overrides any manual configuration when it updates the overcloud stack.

Including an Environment File Directory

You can add a whole directory containing environment files using the --environment-directory option. The deployment command processes the environment files in this directory in numerical, then alphabetical order. If using this method, it is recommended to use filenames with a numerical prefix to order how they are processed. For example:

$ ls -1 ~/templates
00-node-info.yaml
10-network-isolation.yaml
20-network-environment.yaml
30-storage-environment.yaml
40-rhel-registration.yaml

Run the following deployment command to include the directory:

$ openstack overcloud deploy --templates --environment-directory ~/templates

Using an Answers File

An answers file is a YAML format file that simplifies the inclusion of templates and environment files. The answers file uses the following parameters:

templates
The core Heat template collection to use. This acts as a substitute for the --templates command line option.
environments
A list of environment files to include. This acts as a substitute for the --environment-file (-e) command line option.

For example, an answers file might contain the following:

templates: /usr/share/openstack-tripleo-heat-templates/
environments:
  - ~/templates/00-node-info.yaml
  - ~/templates/10-network-isolation.yaml
  - ~/templates/20-network-environment.yaml
  - ~/templates/30-storage-environment.yaml
  - ~/templates/40-rhel-registration.yaml

Run the following deployment command to include the answers file:

$ openstack overcloud deploy --answers-file ~/answers.yaml

5.8. Managing Overcloud Plans

As an alternative to using the openstack overcloud deploy command, the director can also manage imported plans.

To create a new plan, run the following command as the stack user:

$ openstack overcloud plan create --templates /usr/share/openstack-tripleo-heat-templates my-overcloud

This creates a plan from the core Heat template collection in /usr/share/openstack-tripleo-heat-templates. The director names the plan based on your input. In this example, it is my-overcloud. The director uses this name as a label for the object storage container, the workflow environment, and overcloud stack names.
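To see the plans the director currently manages, list them:

$ openstack overcloud plan list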

Add parameters from environment files using the following command:

$ openstack overcloud parameters set my-overcloud ~/templates/my-environment.yaml

Deploy your plans using the following command:

$ openstack overcloud plan deploy my-overcloud

Delete existing plans using the following command:

$ openstack overcloud plan delete my-overcloud
Note

The openstack overcloud deploy command essentially uses all of these commands to remove the existing plan, upload a new plan with environment files, and deploy the plan.

5.9. Validating Overcloud Templates and Plans

Before executing an overcloud creation or stack update, validate your Heat templates and environment files for any errors.

Creating a Rendered Template

The core Heat templates for the overcloud are in a Jinja2 format. To validate your templates, render a version without Jinja2 formatting using the following commands:

$ openstack overcloud plan create --templates /usr/share/openstack-tripleo-heat-templates overcloud-validation
$ mkdir ~/overcloud-validation
$ cd ~/overcloud-validation
$ swift download overcloud-validation

Use the rendered template in ~/overcloud-validation for the validation tests that follow.

Validating Template Syntax

Use the following command to validate the template syntax:

$ openstack orchestration template validate --show-nested --template ~/overcloud-validation/overcloud.yaml -e ~/overcloud-validation/overcloud-resource-registry-puppet.yaml -e [ENVIRONMENT FILE] -e [ENVIRONMENT FILE]
Note

The validation requires the overcloud-resource-registry-puppet.yaml environment file to include overcloud-specific resources. Add any additional environment files to this command with the -e option. Also include the --show-nested option to resolve parameters from nested templates.

This command identifies any syntax errors in the template. If the template syntax validates successfully, the output shows a preview of the resulting overcloud template.

5.10. Monitoring the Overcloud Creation

The overcloud creation process begins and the director provisions your nodes. This process takes some time to complete. To view the status of the overcloud creation, open a separate terminal as the stack user and run:

$ source ~/stackrc
$ openstack stack list --nested

The openstack stack list --nested command shows the current stage of the overcloud creation.
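If the creation stalls or fails, you can filter the same listing to show only failed nested stacks (a troubleshooting example; the status value is illustrative):

$ openstack stack list --nested --property "stack_status=CREATE_FAILED"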

5.11. Accessing the Overcloud

The director generates a script to configure and help authenticate interactions with your overcloud from the director host. The director saves this file, overcloudrc, in your stack user’s home directory. Run the following command to use this file:

$ source ~/overcloudrc

This loads the necessary environment variables to interact with your overcloud from the director host’s CLI. To return to interacting with the director’s host, run the following command:

$ source ~/stackrc

Each node in the overcloud also contains a user called heat-admin. The stack user has SSH access to this user on each node. To access a node over SSH, find the IP address of the desired node:

$ nova list

Then connect to the node using the heat-admin user and the node’s IP address:

$ ssh heat-admin@192.0.2.23

5.12. Completing the Overcloud Creation

This concludes the creation of the overcloud using the command line tools. For post-creation functions, see Chapter 7, Performing Tasks after Overcloud Creation.
