Chapter 5. Administering bare metal nodes


After you deploy an overcloud that includes the Bare Metal Provisioning service (ironic), you can provision a physical machine on an enrolled bare metal node and launch bare metal instances in your overcloud.

5.1. Launching bare metal instances

You can launch instances either from the command line or from the OpenStack dashboard.

5.1.1. Launching instances with the command line interface

You can create a bare-metal instance by using the OpenStack Client CLI.

Procedure

  1. Configure the shell to access the Identity service (keystone) as the administrative user:

    $ source ~/overcloudrc
  2. Create your bare-metal instance:

    $ openstack server create \
     --nic net-id=<network_uuid> \
     --flavor baremetal \
     --image <image_uuid> \
     myBareMetalInstance
    • Replace <network_uuid> with the unique identifier for the network that you created to use with the Bare Metal Provisioning service.
    • Replace <image_uuid> with the unique identifier for the image that has the software profile that your instance requires.
  3. Check the status of the instance:

    $ openstack server list --name myBareMetalInstance
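
Optionally, you can also follow the deployment from the Bare Metal Provisioning side by watching the provision state of the node. This example is illustrative; the fields shown are standard Bare Metal Provisioning service node fields:

    $ openstack baremetal node list --fields name provision_state power_state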

5.1.2. Launching instances with the dashboard

Use the dashboard graphical user interface to deploy a bare metal instance.

Procedure

  1. Log in to the dashboard at http[s]://DASHBOARD_IP/dashboard.
  2. Click Project > Compute > Instances.
  3. Click Launch Instance.

    • In the Details tab, specify the Instance Name and select 1 for Count.
    • In the Source tab, select an Image from Select Boot Source, then click the + (plus) symbol to select an operating system disk image. The image that you choose moves to Allocated.
    • In the Flavor tab, select baremetal.
    • In the Networks tab, use the + (plus) and - (minus) buttons to move required networks from Available to Allocated. Ensure that the shared network that you created for the Bare Metal Provisioning service is selected here.
    • If you want to assign the instance to a security group, in the Security Groups tab, use the arrow to move the group to Allocated.
  4. Click Launch Instance.

5.2. Configuring port groups in the Bare Metal Provisioning service

Note

Port group functionality for bare metal nodes is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should be used only for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

Port groups (bonds) provide a method to aggregate multiple network interfaces into a single ‘bonded’ interface. Port group configuration always takes precedence over an individual port configuration.

If a port group has a physical network, then all the ports in that port group must have the same physical network. The Bare Metal Provisioning service uses configdrive to support configuration of port groups in the instances.
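
For illustration only, the network metadata that configdrive passes to the instance represents a port group as a bond link in network_data.json. All identifiers, MAC addresses, and parameter values in this sketch are examples:

    {
      "links": [
        {
          "id": "port1",
          "type": "phy",
          "ethernet_mac_address": "52:54:00:aa:bb:01"
        },
        {
          "id": "port2",
          "type": "phy",
          "ethernet_mac_address": "52:54:00:aa:bb:02"
        },
        {
          "id": "bond0",
          "type": "bond",
          "bond_links": ["port1", "port2"],
          "bond_mode": "802.3ad",
          "bond_miimon": 100,
          "bond_xmit_hash_policy": "layer2+3",
          "ethernet_mac_address": "52:54:00:aa:bb:01"
        }
      ]
    }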

Note

Bare Metal Provisioning service API version 1.26 supports port group configuration.

5.2.1. Configuring port groups on switches manually

To configure port groups in a bare metal deployment, you must configure the port groups on the switches manually. Ensure that the mode and properties on the switch correspond to the mode and properties on the bare metal side, because the naming can vary on the switch.

Note

You cannot use port groups for provisioning and cleaning if you need to boot a deployment using iPXE.

With port group fallback, all the ports in a port group can fall back to individual switch ports when a connection fails. Depending on whether the switch supports port group fallback, use the --support-standalone-ports or --unsupport-standalone-ports option when you create a port group.
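
The switch-side syntax varies by vendor. As an illustrative sketch only, an 802.3ad (LACP) port group on a Cisco IOS-style switch might look like the following; the interface names and channel-group number are examples, and other bond modes require different channel-group settings:

    interface Port-channel1
     description ironic-port-group
    !
    interface range GigabitEthernet0/1 - 2
     channel-group 1 mode active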

5.2.2. Configuring port groups in the Bare Metal Provisioning service

Create a port group to aggregate multiple network interfaces into a single bonded interface.

Procedure

  1. Create a port group by specifying the node to which it belongs, its name, address, mode, properties, and whether it supports fallback to standalone ports:

    # openstack baremetal port group create \
      --node NODE_UUID \
      --name NAME \
      --address MAC_ADDRESS \
      --mode MODE \
      --property miimon=100 \
      --property xmit_hash_policy="layer2+3" \
      --support-standalone-ports

    You can also use the openstack baremetal port group set command to update a port group.

    If you do not specify an address, the deployed instance port group address is the same as the address of the OpenStack Networking (neutron) port. If you do not attach the neutron port, the port group configuration fails.

    During interface attachment, port groups have a higher priority than the ports, so they are used first. Currently, it is not possible to specify whether a port group or a port is desired in an interface attachment request. Port groups that do not have any ports are ignored.

    Note

    You must configure port groups manually in standalone mode, either in the image or by generating the configdrive and adding it to the node’s instance_info. Ensure that you have cloud-init version 0.7.7 or later for the port group configuration to work. An example cloud-init network configuration is shown after this procedure.

  2. Associate a port with a port group:

    • During port creation:

      # openstack baremetal port create --node NODE_UUID --address MAC_ADDRESS --port-group PORT_GROUP_NAME
    • During port update:

      # openstack baremetal port set PORT_UUID --port-group PORT_GROUP_UUID
  3. Boot an instance by providing an image that has cloud-init or supports bonding.

    To check if the port group is configured properly, run the following command:

    # cat /proc/net/bonding/bondX

    Here, X is a number that cloud-init generates automatically for each configured port group, starting from 0 and incrementing by 1 for each additional port group.
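
For standalone mode, a cloud-init network configuration (version 1 format) that creates the bond manually might look like the following sketch. The interface names, MAC addresses, and bond parameters are examples; match them to your port group configuration:

    network:
      version: 1
      config:
        # Physical interfaces that are members of the port group (example MACs)
        - type: physical
          name: ens1
          mac_address: "52:54:00:aa:bb:01"
        - type: physical
          name: ens2
          mac_address: "52:54:00:aa:bb:02"
        # Bond that mirrors the port group mode and properties
        - type: bond
          name: bond0
          bond_interfaces:
            - ens1
            - ens2
          params:
            bond-mode: 802.3ad
            bond-miimon: 100
            bond-xmit-hash-policy: layer2+3
          subnets:
            - type: dhcp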

5.3. Determining the host to IP address mapping

Use the following commands to determine which IP addresses are assigned to which host and bare metal node. With these commands, you can view the host to IP mapping from the undercloud without accessing the hosts directly.

Procedure

  1. Run the following command to display the IP address for each host:

    (undercloud) [stack@host01 ~]$ openstack stack output show overcloud HostsEntry --max-width 80
    
    +--------------+---------------------------------------------------------------+
    | Field        | Value                                                         |
    +--------------+---------------------------------------------------------------+
    | description  | The content that should be appended to your /etc/hosts if you |
    |              | want to get                                                   |
    |              | hostname-based access to the deployed nodes (useful for       |
    |              | testing without                                               |
    |              | setting up a DNS).                                            |
    |              |                                                               |
    | output_key   | HostsEntry                                                    |
    | output_value | 172.17.0.10 overcloud-controller-0.localdomain overcloud-     |
    |              | controller-0                                                  |
    |              | 10.8.145.18 overcloud-controller-0.external.localdomain       |
    |              | overcloud-controller-0.external                               |
    |              | 172.17.0.10 overcloud-controller-0.internalapi.localdomain    |
    |              | overcloud-controller-0.internalapi                            |
    |              | 172.18.0.15 overcloud-controller-0.storage.localdomain        |
    |              | overcloud-controller-0.storage                                |
    |              | 172.21.2.12 overcloud-controller-0.storagemgmt.localdomain    |
    |              | overcloud-controller-0.storagemgmt                            |
    |              | 172.16.0.15 overcloud-controller-0.tenant.localdomain         |
    |              | overcloud-controller-0.tenant                                 |
    |              | 10.8.146.13 overcloud-controller-0.management.localdomain     |
    |              | overcloud-controller-0.management                             |
    |              | 10.8.146.13 overcloud-controller-0.ctlplane.localdomain       |
    |              | overcloud-controller-0.ctlplane                               |
    |              |                                                               |
    |              | 172.17.0.21 overcloud-compute-0.localdomain overcloud-        |
    |              | compute-0                                                     |
    |              | 10.8.146.12 overcloud-compute-0.external.localdomain          |
    |              | overcloud-compute-0.external                                  |
    |              | 172.17.0.21 overcloud-compute-0.internalapi.localdomain       |
    |              | overcloud-compute-0.internalapi                               |
    |              | 172.18.0.20 overcloud-compute-0.storage.localdomain           |
    |              | overcloud-compute-0.storage                                   |
    |              | 10.8.146.12 overcloud-compute-0.storagemgmt.localdomain       |
    |              | overcloud-compute-0.storagemgmt                               |
    |              | 172.16.0.16 overcloud-compute-0.tenant.localdomain overcloud- |
    |              | compute-0.tenant                                              |
    |              | 10.8.146.12 overcloud-compute-0.management.localdomain        |
    |              | overcloud-compute-0.management                                |
    |              | 10.8.146.12 overcloud-compute-0.ctlplane.localdomain          |
    |              | overcloud-compute-0.ctlplane                                  |
    |              |                                                               |
    |              |                                                               |
    |              |                                                               |
    |              |                                                               |
    |              | 10.8.145.16  overcloud.localdomain                            |
    |              | 10.8.146.7  overcloud.ctlplane.localdomain                    |
    |              | 172.17.0.19  overcloud.internalapi.localdomain                |
    |              | 172.18.0.19  overcloud.storage.localdomain                    |
    |              | 172.21.2.16  overcloud.storagemgmt.localdomain                |
    +--------------+---------------------------------------------------------------+
  2. To filter a particular host, run the following command:

    (undercloud) [stack@host01 ~]$ openstack stack output show overcloud HostsEntry -c output_value -f value | grep overcloud-controller-0
    
    172.17.0.12 overcloud-controller-0.localdomain overcloud-controller-0
    10.8.145.18 overcloud-controller-0.external.localdomain overcloud-controller-0.external
    172.17.0.12 overcloud-controller-0.internalapi.localdomain overcloud-controller-0.internalapi
    172.18.0.12 overcloud-controller-0.storage.localdomain overcloud-controller-0.storage
    172.21.2.13 overcloud-controller-0.storagemgmt.localdomain overcloud-controller-0.storagemgmt
    172.16.0.19 overcloud-controller-0.tenant.localdomain overcloud-controller-0.tenant
    10.8.146.13 overcloud-controller-0.management.localdomain overcloud-controller-0.management
    10.8.146.13 overcloud-controller-0.ctlplane.localdomain overcloud-controller-0.ctlplane
  3. To map the hosts to bare metal nodes, run the following command:

    (undercloud) [stack@host01 ~]$ openstack baremetal node list --fields uuid name instance_info -f json
    [
      {
        "UUID": "c0d2568e-1825-4d34-96ec-f08bbf0ba7ae",
        "Instance Info": {
          "root_gb": "40",
          "display_name": "overcloud-compute-0",
          "image_source": "24a33990-e65a-4235-9620-9243bcff67a2",
          "capabilities": "{\"boot_option\": \"local\"}",
          "memory_mb": "4096",
          "vcpus": "1",
          "local_gb": "557",
          "configdrive": "******",
          "swap_mb": "0",
          "nova_host_id": "host01.lab.local"
        },
        "Name": "host2"
      },
      {
        "UUID": "8c3faec8-bc05-401c-8956-99c40cdea97d",
        "Instance Info": {
          "root_gb": "40",
          "display_name": "overcloud-controller-0",
          "image_source": "24a33990-e65a-4235-9620-9243bcff67a2",
          "capabilities": "{\"boot_option\": \"local\"}",
          "memory_mb": "4096",
          "vcpus": "1",
          "local_gb": "557",
          "configdrive": "******",
          "swap_mb": "0",
          "nova_host_id": "host01.lab.local"
        },
        "Name": "host3"
      }
    ]
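
Because the HostsEntry output is formatted for /etc/hosts, you can optionally append it on the undercloud for hostname-based access. If the jq utility is installed, you can also reduce the node list to a simple node-to-host mapping. Both commands are illustrative:

    (undercloud) [stack@host01 ~]$ openstack stack output show overcloud HostsEntry -c output_value -f value | sudo tee -a /etc/hosts
    (undercloud) [stack@host01 ~]$ openstack baremetal node list --fields uuid name instance_info -f json | jq -r '.[] | .Name + " -> " + .["Instance Info"].display_name'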

5.4. Attaching and detaching virtual network interfaces

The Bare Metal Provisioning service has an API that you can use to manage the mapping between virtual network interfaces, for example the interfaces in the OpenStack Networking service, and your physical interfaces (NICs). You can configure these interfaces for each Bare Metal Provisioning node to set the virtual network interface (VIF) to physical network interface (PIF) mapping logic. To configure the interfaces, use the openstack baremetal node vif* commands.
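
The available subcommands are vif list, vif attach, and vif detach, shown here with placeholder arguments:

    $ openstack baremetal node vif list <node>
    $ openstack baremetal node vif attach <node> <vif_id>
    $ openstack baremetal node vif detach <node> <vif_id>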

Procedure

  1. List the VIF IDs currently connected to the bare metal node:

    $ openstack baremetal node vif list baremetal-0
    +--------------------------------------+
    | ID                                   |
    +--------------------------------------+
    | 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16 |
    +--------------------------------------+
  2. After the VIF is attached, the Bare Metal Provisioning service updates the virtual port in the OpenStack Networking service with the actual MAC address of the physical port. Check this port address:

    $ openstack port show 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16 -c mac_address -c fixed_ips
    +-------------+-----------------------------------------------------------------------------+
    | Field       | Value                                                                       |
    +-------------+-----------------------------------------------------------------------------+
    | fixed_ips   | ip_address='192.168.24.9', subnet_id='1d11c677-5946-4733-87c3-23a9e06077aa' |
    | mac_address | 00:2d:28:2f:8d:95                                                           |
    +-------------+-----------------------------------------------------------------------------+
  3. Create a new port on the network where you created the baremetal-0 node:

    $ openstack port create --network baremetal --fixed-ip ip-address=192.168.24.24 baremetal-0-extra
  4. Remove a port from the instance:

    $ openstack server remove port overcloud-baremetal-0 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16
  5. Check that the IP address no longer exists on the list:

    $ openstack server list
  6. Check if there are VIFs attached to the node:

    $ openstack baremetal node vif list baremetal-0
    $ openstack port list
  7. Add the newly created port:

    $ openstack server add port overcloud-baremetal-0 baremetal-0-extra
  8. Verify that the new IP address shows the new port:

    $ openstack server list
    +--------------------------------------+-------------------------+--------+------------------------+----------------+---------+
    | ID                                   | Name                    | Status | Networks               | Image          | Flavor  |
    +--------------------------------------+-------------------------+--------+------------------------+----------------+---------+
    | 53095a64-1646-4dd1-bbf3-b51cbcc38789 | overcloud-controller-2  | ACTIVE | ctlplane=192.168.24.7  | overcloud-full | control |
    | 3a1bc89c-5d0d-44c7-a569-f2a3b4c73d65 | overcloud-controller-0  | ACTIVE | ctlplane=192.168.24.8  | overcloud-full | control |
    | 6b01531a-f55d-40e9-b3a2-6d02be0b915b | overcloud-controller-1  | ACTIVE | ctlplane=192.168.24.16 | overcloud-full | control |
    | c61cc52b-cc48-4903-a971-073c60f53091 | overcloud-baremetal-0   | ACTIVE | ctlplane=192.168.24.24 | overcloud-full | compute |
    +--------------------------------------+-------------------------+--------+------------------------+----------------+---------+
  9. Check if the VIF ID is the UUID of the new port:

    $ openstack baremetal node vif list baremetal-0
    +--------------------------------------+
    | ID                                   |
    +--------------------------------------+
    | 6181c089-7e33-4f1c-b8fe-2523ff431ffc |
    +--------------------------------------+
  10. Check if the OpenStack Networking port MAC address is updated and matches one of the Bare Metal Provisioning service ports:

    $ openstack port show 6181c089-7e33-4f1c-b8fe-2523ff431ffc -c mac_address -c fixed_ips
    +-------------+------------------------------------------------------------------------------+
    | Field       | Value                                                                        |
    +-------------+------------------------------------------------------------------------------+
    | fixed_ips   | ip_address='192.168.24.24', subnet_id='1d11c677-5946-4733-87c3-23a9e06077aa' |
    | mac_address | 00:2d:28:2f:8d:95                                                            |
    +-------------+------------------------------------------------------------------------------+
  11. Reboot the bare metal node so that it recognizes the new IP address:

    $ openstack server reboot overcloud-baremetal-0

    After you detach or attach interfaces, the bare metal OS removes, adds, or modifies the network interfaces that have changed. When you replace a port, a DHCP request obtains the new IP address, but this might take some time because the old DHCP lease is still valid. To initiate these changes immediately, reboot the bare metal host.

5.5. Configuring notifications for the Bare Metal Provisioning service

You can configure the Bare Metal Provisioning service (ironic) to display notifications for different events that occur within the service. External services can use these notifications for billing purposes, monitoring a data store, and other purposes. To enable notifications for the Bare Metal Provisioning service, you must set the following options in your ironic.conf configuration file.

Procedure

  • The notification_level option in the [DEFAULT] section determines the minimum priority level for which notifications are sent. You can set this option to debug, info, warning, error, or critical. If you set the option to warning, all notifications with priority level warning, error, or critical are sent, but not notifications with priority level debug or info. If you do not set this option, no notifications are sent.
  • The transport_url option in the [oslo_messaging_notifications] section determines the message bus used when sending notifications. If this is not set, the default transport used for RPC is used.

All notifications are emitted on the ironic_versioned_notifications topic in the message bus. Generally, each type of message that traverses the message bus is associated with a topic that describes the contents of the message.
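
For example, a minimal ironic.conf excerpt that enables notifications at the info level might look like the following. The transport URL is illustrative; replace it with the endpoint of your own message bus:

    [DEFAULT]
    notification_level = info

    [oslo_messaging_notifications]
    transport_url = rabbit://<user>:<password>@<host>:5672/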

5.6. Configuring automatic power fault recovery

The Bare Metal Provisioning service (ironic) has a string field fault that records power, cleaning, and rescue abort failures for nodes.

Table 5.1. Ironic node faults

+----------------------+------------------------------------------------------------------+
| Fault                | Description                                                      |
+----------------------+------------------------------------------------------------------+
| power failure        | The node is in maintenance mode due to power sync failures that  |
|                      | exceed the maximum number of retries.                            |
| clean failure        | The node is in maintenance mode due to the failure of a cleaning |
|                      | operation.                                                       |
| rescue abort failure | The node is in maintenance mode due to the failure of a cleaning |
|                      | operation during rescue abort.                                   |
| none                 | There is no fault present.                                       |
+----------------------+------------------------------------------------------------------+

The conductor checks the value of this field periodically. If the conductor detects a power failure state and can successfully restore power to the node, the node is removed from maintenance mode and restored to operation.

Note

If the operator places a node in maintenance mode manually, the conductor does not automatically remove the node from maintenance mode.

The default interval is 300 seconds; however, you can configure this interval with director by using hieradata.

Procedure

  • Include the following hieradata to configure a custom recovery interval:

    ironic::conductor::power_failure_recovery_interval

    To disable automatic power fault recovery, set the value to 0.
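
For example, to set the recovery interval to 180 seconds, you can pass the hieradata through a custom environment file that you include in your deployment. This sketch assumes that the Bare Metal Provisioning conductor runs on Controller nodes:

    parameter_defaults:
      ControllerExtraConfig:
        # Check for recoverable power faults every 180 seconds (example value)
        ironic::conductor::power_failure_recovery_interval: 180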

5.7. Introspecting overcloud nodes

Perform introspection of overcloud nodes to identify and store the specification of your nodes in the Bare Metal Provisioning service.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the overcloudrc credentials file:

    $ source ~/overcloudrc
  3. Run the introspection command:

    $ openstack baremetal introspection start [--wait] <NODENAME>

    Replace <NODENAME> with the name or UUID of the node that you want to inspect.

  4. Check the introspection status:

    $ openstack baremetal introspection status <NODENAME>

    Replace <NODENAME> with the name or UUID of the node.

Next steps

  • Extract introspection data:

    $ openstack baremetal introspection data save <NODENAME>

    Replace <NODENAME> with the name or UUID of the node.
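
The saved data is JSON. If the jq utility is installed, you can inspect specific parts of the hardware inventory, for example the CPU and disk information. The key names shown are typical of the inventory that the introspection ramdisk collects:

    $ openstack baremetal introspection data save <NODENAME> | jq '.inventory.cpu'
    $ openstack baremetal introspection data save <NODENAME> | jq '.inventory.disks'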
