Chapter 5. Administering Bare Metal Nodes

This chapter describes how to provision a physical machine on an enrolled bare metal node. Instances can be launched either from the command line or from the OpenStack dashboard.

5.1. Launching an Instance Using the Command Line Interface

Use the openstack command line interface to deploy a bare metal instance.

Deploying an Instance on the Command Line

  1. Configure the shell to access Identity as the administrative user:

    $ source ~/overcloudrc
  2. Deploy the instance:

    $ openstack server create \
      --nic net-id=NETWORK_UUID \
      --flavor baremetal \
      --image IMAGE_UUID \
      INSTANCE_NAME

    Replace the following values:

    • Replace NETWORK_UUID with the unique identifier for the network that was created for use with the Bare Metal service.
    • Replace IMAGE_UUID with the unique identifier for the disk image that was uploaded to the Image service.
    • Replace INSTANCE_NAME with a name for the bare metal instance.

    To assign the instance to a security group, include --security-group SECURITY_GROUP, replacing SECURITY_GROUP with the name of the security group. Repeat this option to add the instance to multiple groups. For more information on security group management, see the Users and Identity Management Guide.
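
    For example, a create command that places the instance in two security groups; the group names default and baremetal-ssh here are placeholders:

    $ openstack server create \
      --nic net-id=NETWORK_UUID \
      --flavor baremetal \
      --image IMAGE_UUID \
      --security-group default \
      --security-group baremetal-ssh \
      INSTANCE_NAME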

  3. Check the status of the instance:

    $ openstack server list --name INSTANCE_NAME

5.2. Launch an Instance Using the Dashboard

Use the dashboard graphical user interface to deploy a bare metal instance.

Deploying an Instance in the Dashboard

  1. Log in to the dashboard at http[s]://DASHBOARD_IP/dashboard.
  2. Click Project > Compute > Instances.
  3. Click Launch Instance.

    • In the Details tab, specify the Instance Name and select 1 for Count.
    • In the Source tab, select Image from the Select Boot Source list, then click the up arrow to select an operating system disk image. The chosen image moves to Allocated.
    • In the Flavor tab, select baremetal.
    • In the Networks tab, use the + (plus) and - (minus) buttons to move required networks from Available to Allocated. Ensure that the shared network created for the Bare Metal service is selected here.
    • If you want to assign the instance to a security group, in the Security Groups tab, use the arrow to move the group to Allocated.
  4. Click Launch Instance.

5.3. Configure Port Groups in the Bare Metal Provisioning Service

Note

Port group functionality for bare metal nodes is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should be used only for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

Port groups (bonds) provide a method to aggregate multiple network interfaces into a single ‘bonded’ interface. Port group configuration always takes precedence over an individual port configuration.

If a port group has a physical network, all the ports in that port group must have the same physical network. The Bare Metal Provisioning service supports configuration of port groups in instances by using configdrive.

Note

Bare Metal Provisioning service API version 1.26 supports port group configuration.
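
For example, you can select this API version for subsequent client commands by setting an environment variable in your shell; this is standard python-ironicclient behavior:

$ export OS_BAREMETAL_API_VERSION=1.26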

5.3.1. Configure the Switches

To configure port groups in a Bare Metal Provisioning deployment, you must configure the port groups on the switches manually. Ensure that the mode and properties on the switch correspond to the mode and properties on the bare metal side, because the naming can vary on the switch.
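
For illustration only, on a switch with a Cisco IOS-style configuration, an LACP (802.3ad) port channel that bonds two node NICs might look like the following sketch; the interface names and channel number are assumptions, and other vendors use different syntax:

switch(config)# interface range GigabitEthernet1/0/1 - 2
switch(config-if-range)# channel-group 10 mode active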

Note

You cannot use port groups for provisioning and cleaning if you need to boot a deployment using iPXE.

Port group fallback allows all the ports in a port group to fall back to individual switch ports when a connection fails. Depending on whether a switch supports port group fallback, you can use the --support-standalone-ports or --unsupport-standalone-ports option.

5.3.2. Configure Port Groups in the Bare Metal Provisioning Service

  1. Create a port group by specifying the node to which it belongs, its name, address, mode, properties, and whether it supports fallback to standalone ports.

    # openstack baremetal port group create --node NODE_UUID \
      --name NAME --address MAC_ADDRESS --mode MODE \
      --property miimon=100 --property xmit_hash_policy="layer2+3" \
      --support-standalone-ports

    You can also update a port group using the openstack baremetal port group set command.
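
    For example, a hypothetical update that raises the MII monitoring interval on an existing port group:

    # openstack baremetal port group set PORT_GROUP_UUID --property miimon=150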

    If you do not specify an address, the port group address on the deployed instance is the same as the address of the OpenStack Networking port. If the neutron port is not attached, the port group is not configured.

    During interface attachment, port groups take priority over individual ports, so they are used first. Currently, you cannot specify whether a port group or a port is preferred in an interface attachment request. Port groups that do not contain any ports are ignored.

    Note

    Port groups must be configured manually in standalone mode either in the image or by generating the configdrive and adding it to the node’s instance_info. Ensure that you have cloud-init version 0.7.7 or later for the port group configuration to work.
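
    As an illustration of this standalone configuration, the following is a minimal sketch of a cloud-init network configuration (version 1 schema) baked into the image, defining an 802.3ad bond over two interfaces. The file path, interface names, and MAC addresses are assumptions:

    # cat > /etc/cloud/cloud.cfg.d/99-bond.cfg <<'EOF'
    network:
      version: 1
      config:
        - type: physical
          name: eth0
          mac_address: "52:54:00:12:34:00"
        - type: physical
          name: eth1
          mac_address: "52:54:00:12:34:01"
        - type: bond
          name: bond0
          bond_interfaces: [eth0, eth1]
          params:
            bond-mode: 802.3ad
            bond-miimon: 100
            bond-xmit-hash-policy: layer2+3
          subnets:
            - type: dhcp
    EOF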

  2. Associate a port with a port group:

    • During port creation:

      # openstack baremetal port create --node NODE_UUID --address MAC_ADDRESS --port-group PORT_GROUP_NAME
    • During port update:

      # openstack baremetal port set PORT_UUID --port-group PORT_GROUP_UUID
  3. Boot an instance by providing an image that has cloud-init or supports bonding.

    To check if the port group has been configured properly, run the following command:

    # cat /proc/net/bonding/bondX

    Here, X is a number that cloud-init generates automatically for each configured port group, starting at 0 and incrementing by 1.
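
    For example, for the first port group, hypothetical output begins as follows; the driver version and counters vary by system:

    # cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

    Bonding Mode: IEEE 802.3ad Dynamic link aggregation
    Transmit Hash Policy: layer2+3 (2)
    MII Status: up
    MII Polling Interval (ms): 100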

5.4. Determining the Host to IP Address Mapping

Use the following commands to determine which IP addresses are assigned to which host, and to which bare metal node.

This allows you to view the host-to-IP mapping from the undercloud without needing to access the hosts directly.

(undercloud) [stack@host01 ~]$ openstack stack output show overcloud HostsEntry --max-width 80

+--------------+---------------------------------------------------------------+
| Field        | Value                                                         |
+--------------+---------------------------------------------------------------+
| description  | The content that should be appended to your /etc/hosts if you |
|              | want to get                                                   |
|              | hostname-based access to the deployed nodes (useful for       |
|              | testing without                                               |
|              | setting up a DNS).                                            |
|              |                                                               |
| output_key   | HostsEntry                                                    |
| output_value | 172.17.0.10 overcloud-controller-0.localdomain overcloud-     |
|              | controller-0                                                  |
|              | 10.8.145.18 overcloud-controller-0.external.localdomain       |
|              | overcloud-controller-0.external                               |
|              | 172.17.0.10 overcloud-controller-0.internalapi.localdomain    |
|              | overcloud-controller-0.internalapi                            |
|              | 172.18.0.15 overcloud-controller-0.storage.localdomain        |
|              | overcloud-controller-0.storage                                |
|              | 172.21.2.12 overcloud-controller-0.storagemgmt.localdomain    |
|              | overcloud-controller-0.storagemgmt                            |
|              | 172.16.0.15 overcloud-controller-0.tenant.localdomain         |
|              | overcloud-controller-0.tenant                                 |
|              | 10.8.146.13 overcloud-controller-0.management.localdomain     |
|              | overcloud-controller-0.management                             |
|              | 10.8.146.13 overcloud-controller-0.ctlplane.localdomain       |
|              | overcloud-controller-0.ctlplane                               |
|              |                                                               |
|              | 172.17.0.21 overcloud-compute-0.localdomain overcloud-        |
|              | compute-0                                                     |
|              | 10.8.146.12 overcloud-compute-0.external.localdomain          |
|              | overcloud-compute-0.external                                  |
|              | 172.17.0.21 overcloud-compute-0.internalapi.localdomain       |
|              | overcloud-compute-0.internalapi                               |
|              | 172.18.0.20 overcloud-compute-0.storage.localdomain           |
|              | overcloud-compute-0.storage                                   |
|              | 10.8.146.12 overcloud-compute-0.storagemgmt.localdomain       |
|              | overcloud-compute-0.storagemgmt                               |
|              | 172.16.0.16 overcloud-compute-0.tenant.localdomain overcloud- |
|              | compute-0.tenant                                              |
|              | 10.8.146.12 overcloud-compute-0.management.localdomain        |
|              | overcloud-compute-0.management                                |
|              | 10.8.146.12 overcloud-compute-0.ctlplane.localdomain          |
|              | overcloud-compute-0.ctlplane                                  |
|              |                                                               |
|              |                                                               |
|              |                                                               |
|              |                                                               |
|              | 10.8.145.16  overcloud.localdomain                            |
|              | 10.8.146.7  overcloud.ctlplane.localdomain                    |
|              | 172.17.0.19  overcloud.internalapi.localdomain                |
|              | 172.18.0.19  overcloud.storage.localdomain                    |
|              | 172.21.2.16  overcloud.storagemgmt.localdomain                |
+--------------+---------------------------------------------------------------+

To filter a particular host, run the following command:

(undercloud) [stack@host01 ~]$ openstack stack output show overcloud HostsEntry -c output_value -f value | grep overcloud-controller-0

172.17.0.12 overcloud-controller-0.localdomain overcloud-controller-0
10.8.145.18 overcloud-controller-0.external.localdomain overcloud-controller-0.external
172.17.0.12 overcloud-controller-0.internalapi.localdomain overcloud-controller-0.internalapi
172.18.0.12 overcloud-controller-0.storage.localdomain overcloud-controller-0.storage
172.21.2.13 overcloud-controller-0.storagemgmt.localdomain overcloud-controller-0.storagemgmt
172.16.0.19 overcloud-controller-0.tenant.localdomain overcloud-controller-0.tenant
10.8.146.13 overcloud-controller-0.management.localdomain overcloud-controller-0.management
10.8.146.13 overcloud-controller-0.ctlplane.localdomain overcloud-controller-0.ctlplane

To map the hosts to bare metal nodes, run the following command:

(undercloud) [stack@host01 ~]$ openstack baremetal node list --fields uuid name instance_info -f json
[
  {
    "UUID": "c0d2568e-1825-4d34-96ec-f08bbf0ba7ae",
    "Instance Info": {
      "root_gb": "40",
      "display_name": "overcloud-compute-0",
      "image_source": "24a33990-e65a-4235-9620-9243bcff67a2",
      "capabilities": "{\"boot_option\": \"local\"}",
      "memory_mb": "4096",
      "vcpus": "1",
      "local_gb": "557",
      "configdrive": "******",
      "swap_mb": "0",
      "nova_host_id": "host01.lab.local"
    },
    "Name": "host2"
  },
  {
    "UUID": "8c3faec8-bc05-401c-8956-99c40cdea97d",
    "Instance Info": {
      "root_gb": "40",
      "display_name": "overcloud-controller-0",
      "image_source": "24a33990-e65a-4235-9620-9243bcff67a2",
      "capabilities": "{\"boot_option\": \"local\"}",
      "memory_mb": "4096",
      "vcpus": "1",
      "local_gb": "557",
      "configdrive": "******",
      "swap_mb": "0",
      "nova_host_id": "host01.lab.local"
    },
    "Name": "host3"
  }
]
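
For example, assuming the jq utility is installed on the undercloud, you can reduce this output to a simple node-to-instance mapping:

(undercloud) [stack@host01 ~]$ openstack baremetal node list --fields uuid name instance_info -f json | jq -r '.[] | .Name + " " + .["Instance Info"].display_name'
host2 overcloud-compute-0
host3 overcloud-controller-0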

5.5. Attaching and Detaching a Virtual Network Interface

The Bare Metal Provisioning service has an API to manage the mapping between virtual network interfaces (VIFs), for example those used in the OpenStack Networking service, and physical network interfaces (PIFs, the node NICs). These interfaces are configurable for each Bare Metal Provisioning node, allowing you to set the VIF-to-PIF mapping logic by using the openstack baremetal node vif* commands.

The following example procedure describes the steps to attach and detach VIFs.

  1. List the VIF IDs currently connected to the bare metal node:

    $ openstack baremetal node vif list baremetal-0
    +--------------------------------------+
    | ID                                   |
    +--------------------------------------+
    | 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16 |
    +--------------------------------------+
  2. After a VIF is attached, the Bare Metal service updates the virtual port in the OpenStack Networking service with the actual MAC address of the physical port.

    Check this by using the following command:

    $ openstack port show 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16 -c mac_address -c fixed_ips
    +-------------+-----------------------------------------------------------------------------+
    | Field       | Value                                                                       |
    +-------------+-----------------------------------------------------------------------------+
    | fixed_ips   | ip_address='192.168.24.9', subnet_id='1d11c677-5946-4733-87c3-23a9e06077aa' |
    | mac_address | 00:2d:28:2f:8d:95                                                           |
    +-------------+-----------------------------------------------------------------------------+
  3. Create a new port on the network where you have created the baremetal-0 node:

    $ openstack port create --network baremetal --fixed-ip ip-address=192.168.24.24 baremetal-0-extra
  4. Remove a port from the instance:

    $ openstack server remove port overcloud-baremetal-0 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16
  5. Check that the IP address is no longer listed:

    $ openstack server list
  6. Check if there are VIFs attached to the node:

    $ openstack baremetal node vif list baremetal-0
    $ openstack port list
  7. Add the newly created port:

    $ openstack server add port overcloud-baremetal-0 baremetal-0-extra
  8. Verify that the instance list shows the new IP address on the new port:

    $ openstack server list
    +--------------------------------------+-------------------------+--------+------------------------+----------------+---------+
    | ID                                   | Name                    | Status | Networks               | Image          | Flavor  |
    +--------------------------------------+-------------------------+--------+------------------------+----------------+---------+
    | 53095a64-1646-4dd1-bbf3-b51cbcc38789 | overcloud-controller-2  | ACTIVE | ctlplane=192.168.24.7  | overcloud-full | control |
    | 3a1bc89c-5d0d-44c7-a569-f2a3b4c73d65 | overcloud-controller-0  | ACTIVE | ctlplane=192.168.24.8  | overcloud-full | control |
    | 6b01531a-f55d-40e9-b3a2-6d02be0b915b | overcloud-controller-1  | ACTIVE | ctlplane=192.168.24.16 | overcloud-full | control |
    | c61cc52b-cc48-4903-a971-073c60f53091 | overcloud-baremetal-0   | ACTIVE | ctlplane=192.168.24.24 | overcloud-full | compute |
    +--------------------------------------+-------------------------+--------+------------------------+----------------+---------+
  9. Check if the VIF ID is the UUID of the new port:

    $ openstack baremetal node vif list baremetal-0
    +--------------------------------------+
    | ID                                   |
    +--------------------------------------+
    | 6181c089-7e33-4f1c-b8fe-2523ff431ffc |
    +--------------------------------------+
  10. Check if the OpenStack Networking port MAC address is updated and matches one of the Bare Metal service ports:

    $ openstack port show 6181c089-7e33-4f1c-b8fe-2523ff431ffc -c mac_address -c fixed_ips
    +-------------+------------------------------------------------------------------------------+
    | Field       | Value                                                                        |
    +-------------+------------------------------------------------------------------------------+
    | fixed_ips   | ip_address='192.168.24.24', subnet_id='1d11c677-5946-4733-87c3-23a9e06077aa' |
    | mac_address | 00:2d:28:2f:8d:95                                                            |
    +-------------+------------------------------------------------------------------------------+
  11. Reboot the bare metal node so that it recognizes the new IP address:

    $ openstack server reboot overcloud-baremetal-0

    After you detach or attach interfaces, the bare metal operating system removes, adds, or modifies the network interfaces that changed. When you replace a port, a DHCP request obtains the new IP address, but this can take some time because the old DHCP lease is still valid. The simplest way to apply these changes immediately is to reboot the bare metal host.
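
    Alternatively, if you have console or SSH access to the node, you can renew the DHCP lease directly instead of rebooting; a sketch, assuming the affected interface is eth0 and the image uses dhclient:

    $ sudo dhclient -r eth0 && sudo dhclient eth0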

5.6. Configuring Notifications for the Bare Metal Service

You can configure the Bare Metal service to emit notifications for different events that occur within the service. External services can use these notifications for billing purposes, monitoring a data store, and so on. This section describes how to enable these notifications.

To enable notifications for the Bare Metal service, set the following options in your ironic.conf configuration file:

  • The notification_level option in the [DEFAULT] section determines the minimum priority level for which notifications are sent. Valid values are debug, info, warning, error, and critical. If the option is set to warning, all notifications with priority level warning, error, or critical are sent, but not notifications with priority level debug or info. If this option is not set, no notifications are sent. Each available notification has a defined priority level.
  • The transport_url option in the [oslo_messaging_notifications] section determines the message bus used when sending notifications. If this is not set, the default transport used for RPC is used.
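
For example, a minimal ironic.conf fragment that sends notifications of info priority and above over a dedicated message bus; the transport URL is a placeholder:

[DEFAULT]
notification_level = info

[oslo_messaging_notifications]
transport_url = rabbit://USER:PASSWORD@MESSAGE_BUS_HOST:5672/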

All notifications are emitted on the ironic_versioned_notifications topic in the message bus. Generally, each type of message that traverses the message bus is associated with a topic that describes the contents of the message.

Note

Notifications can be lost, and there is no guarantee that a notification will make it across the message bus to the end user.

5.7. Configuring Automatic Power Fault Recovery

The Bare Metal Provisioning service (ironic) maintains a string field, fault, on each node that records power, cleaning, and rescue abort failures.

Table 5.1. Ironic node faults

  • power failure: The node is in maintenance mode due to power sync failures that exceed the maximum number of retries.
  • clean failure: The node is in maintenance mode due to the failure of a cleaning operation.
  • rescue abort failure: The node is in maintenance mode due to the failure of a cleaning operation during rescue abort.
  • none: There is no fault present.

The conductor checks the value of this field periodically. If the conductor detects a power failure state and can successfully restore power to the node, the conductor removes the node from maintenance mode and restores it to operation.

Note

If the operator places a node in maintenance mode manually, the conductor does not automatically remove the node from maintenance mode.

The default interval is 300 seconds; however, you can configure this interval with director by using the following hieradata:

ironic::conductor::power_failure_recovery_interval

To disable automatic power fault recovery, set the value to 0.
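
For example, a sketch of a director environment file that sets the interval to 120 seconds by using ExtraConfig hieradata; pass the file to the deployment command with -e:

parameter_defaults:
  ExtraConfig:
    ironic::conductor::power_failure_recovery_interval: 120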

5.8. Introspecting Overcloud Nodes

You can perform introspection of overcloud nodes to identify and record the specifications of the nodes.

  1. On the undercloud, source the stackrc file:

    $ source ~/stackrc
  2. Run the introspection command:

    $ openstack baremetal introspection start [--wait] <NODENAME>

    Replace <NODENAME> with the name of the node that you want to inspect.

  3. Check the introspection status:

    $ openstack baremetal introspection status <NODENAME>

    Replace <NODENAME> with the name of the node.
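
To review the collected specification after introspection completes, you can print the stored introspection data; a sketch, assuming your client includes the ironic-inspector plug-in:

$ openstack baremetal introspection data save <NODENAME>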
