Chapter 4. Adding physical machines as bare-metal nodes


Use one of the following methods to enroll a bare-metal node:

  • Prepare an inventory file with the node details, import the file into the Bare Metal Provisioning service, and make the nodes available.
  • Register a physical machine as a bare-metal node, and then manually add its hardware details and create ports for each of its Ethernet MAC addresses.

4.1. Prerequisites

  • The RHOSO environment includes the Bare Metal Provisioning service. For more information, see Enabling the Bare Metal Provisioning service (ironic).
  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
  • The oc command line tool is installed on the workstation.

4.2. Enrolling bare-metal nodes with an inventory file

You can create an inventory file that defines the details of each bare-metal node. You import the file into the Bare Metal Provisioning service (ironic) to enroll the bare-metal nodes, and then make each node available.

Note

Some drivers might require specific configuration. For more information, see Bare metal drivers.

Procedure

  1. Create an inventory file to define the details of each node, for example, ironic-nodes.yaml.
  2. For each node, define the node name and the address and credentials for the bare-metal driver. For details on the available properties for your enabled driver, see Bare metal drivers.

    nodes:
      - name: <node>
        driver: <driver>
        driver_info:
          <driver>_address: <ip>
          <driver>_username: <user>
          <driver>_password: <password>
          [<property>: <value>]
    • Replace <node> with the name of the node.
    • Replace <driver> with a supported bare-metal driver, for example, redfish.
    • Replace <ip> with the IP address of the Bare Metal controller.
    • Replace <user> with your username.
    • Replace <password> with your password.
    • Optional: Replace <property> with a driver property that you want to configure, and replace <value> with the value of the property. For information on the available properties, see Bare metal drivers.
  3. Define the node properties and ports:

    nodes:
      - name: <node>
        ...
        properties:
          cpus: <cpu_count>
          cpu_arch: <cpu_arch>
          memory_mb: <memory>
          local_gb: <root_disk>
          root_device:
            serial: <serial>
          network_interface: <interface_type>
        ports:
          - address: <mac_address>
    • Replace <cpu_count> with the number of CPUs.
    • Replace <cpu_arch> with the type of architecture of the CPUs.
    • Replace <memory> with the amount of memory in MiB.
    • Replace <root_disk> with the size of the root disk in GiB.
    • Replace <serial> with the serial number of the disk that you want to use for deployment. Only required when the machine has multiple disks.
    • Optional: Include the network_interface property if you want to override the default network type of flat. You can change the network type to one of the following valid values:

      • neutron: Use to provide tenant-defined networking through the Networking service, where tenant networks are separated from each other and from the provisioning and cleaning provider networks. Required to create a provisioning network with IPv6.
      • noop: Use for standalone deployments where network switching is not required.
    • Replace <mac_address> with the MAC address of the NIC used to PXE boot.
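
    For example, a complete ironic-nodes.yaml for a single node managed with the redfish driver might look like the following sketch. All names, credentials, and hardware values are illustrative:

    nodes:
      - name: node-01
        driver: redfish
        driver_info:
          redfish_address: 192.168.24.50
          redfish_username: admin
          redfish_password: p@55w0rd
          # Depending on your BMC, the redfish driver might also require
          # additional properties such as redfish_system_id. For details,
          # see Bare metal drivers.
          redfish_system_id: /redfish/v1/Systems/1
        properties:
          cpus: 16
          cpu_arch: x86_64
          memory_mb: 65536
          local_gb: 480
          root_device:
            serial: S3Z8NB0K123456
        ports:
          - address: aa:bb:cc:dd:ee:01
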
  4. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  5. Import the inventory file into the Bare Metal Provisioning service:

    $ openstack baremetal create ironic-nodes.yaml

    The nodes are now in the enroll state.

  6. Wait for the extra network interface port configuration data to populate the Networking service (neutron). This process takes at least 60 seconds.
  7. Set the provisioning state of each node to available:

    $ openstack baremetal node manage <node>
    $ openstack baremetal node provide <node>

    The Bare Metal Provisioning service cleans the node if you enabled node cleaning.

  8. Check that the nodes are enrolled:

    $ openstack baremetal node list

    There might be a delay between enrolling a node and its state being shown.
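
    If the state is not yet updated, you can filter the list by provisioning state and rerun the command until the nodes appear, for example:

    $ openstack baremetal node list --provision-state available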

  9. Exit the openstackclient pod:

    $ exit

4.3. Enrolling a bare-metal node manually

Register a physical machine as a bare-metal node, then manually add its hardware details and create ports for each of its Ethernet MAC addresses.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Add a new node:

    $ openstack baremetal node create --driver <driver_name> --name <node_name>
    • Replace <driver_name> with the name of the driver, for example, redfish.
    • Replace <node_name> with the name of your new bare-metal node.
  3. Note the UUID assigned to the node when it is created.
  4. Update the node properties to match the hardware specifications on the node:

    $ openstack baremetal node set <node> \
      --property cpus=<cpu> \
      --property memory_mb=<ram> \
      --property local_gb=<disk> \
      --property cpu_arch=<arch>
    • Replace <node> with the ID of the bare metal node.
    • Replace <cpu> with the number of CPUs.
    • Replace <ram> with the RAM in MB.
    • Replace <disk> with the disk size in GB.
    • Replace <arch> with the architecture type.
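
    For example, the following command describes a node that has 16 CPUs, 65536 MB of RAM, and a 480 GB disk. The node name and all values are illustrative:

    $ openstack baremetal node set node-01 \
      --property cpus=16 \
      --property memory_mb=65536 \
      --property local_gb=480 \
      --property cpu_arch=x86_64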
  5. Optional: Set the network_interface property to override the default network type of flat:

    $ openstack baremetal node set <node> --network-interface <network_interface>
    • Replace <network_interface> with one of the following valid network types:

      • neutron: Use to provide tenant-defined networking through the Networking service, where tenant networks are separated from each other and from the provisioning and cleaning provider networks. Required to create a provisioning network with IPv6.
      • noop: Use for standalone deployments where network switching is not required.
  6. Optional: If you have multiple disks, set the root device hints to inform the deploy ramdisk which disk to use for deployment:

    $ openstack baremetal node set <node> \
      --property root_device='{"<property>": "<value>"}'
    • Replace <node> with the ID of the bare metal node.
    • Replace <property> and <value> with details about the disk that you want to use for deployment, for example, root_device='{"size": "128"}'.

      RHOSP supports the following properties:

      • model (String): Device identifier.
      • vendor (String): Device vendor.
      • serial (String): Disk serial number.
      • hctl (String): Host:Channel:Target:Lun for SCSI.
      • size (Integer): Size of the device in GB.
      • wwn (String): Unique storage identifier.
      • wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
      • wwn_vendor_extension (String): Unique vendor storage identifier.
      • rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
      • name (String): The name of the device, for example: /dev/sdb1. Use this property only for devices with persistent names.

        Note

        If you specify more than one property, the device must match all of those properties.
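
    For example, the following command requires the deployment disk to be non-rotational and to have a specific WWN. Both hint values are illustrative, and the device must match both hints:

    $ openstack baremetal node set <node> \
      --property root_device='{"wwn": "0x50014ee2b5264e49", "rotational": false}'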

  7. Inform the Bare Metal Provisioning service of the node network card by creating a port with the MAC address of the NIC on the provisioning network:

    $ openstack baremetal port create --node <node_uuid> <mac_address>
    • Replace <node_uuid> with the unique ID of the bare metal node.
    • Replace <mac_address> with the MAC address of the NIC used to PXE boot.
  8. Validate the configuration of the node:

    $ openstack baremetal node validate <node>
    +------------+--------+---------------------------------------------+
    | Interface  | Result | Reason                                      |
    +------------+--------+---------------------------------------------+
    | boot       | False  | Cannot validate image information for node  |
    |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
    |            |        | because one or more parameters are missing  |
    |            |        | from its instance_info. Missing are:        |
    |            |        | ['ramdisk', 'kernel', 'image_source']       |
    | console    | None   | not supported                               |
    | deploy     | False  | Cannot validate image information for node  |
    |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
    |            |        | because one or more parameters are missing  |
    |            |        | from its instance_info. Missing are:        |
    |            |        | ['ramdisk', 'kernel', 'image_source']       |
    | inspect    | None   | not supported                               |
    | management | True   |                                             |
    | network    | True   |                                             |
    | power      | True   |                                             |
    | raid       | True   |                                             |
    | storage    | True   |                                             |
    +------------+--------+---------------------------------------------+

    The validation output Result indicates the following:

    • False: The interface has failed validation. If the reason includes the missing instance_info parameters ['ramdisk', 'kernel', 'image_source'], this might be because the Compute service populates those parameters at the beginning of the deployment process, so they have not been set at this point. If you are using a whole disk image, you might need to set only image_source to pass the validation.
    • True: The interface has passed validation.
    • None: The interface is not supported for your driver.
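
    For example, if you provision whole disk images outside of the Compute service, setting only the image_source parameter might be enough for the boot and deploy interfaces to pass validation. The image reference is a placeholder:

    $ openstack baremetal node set <node> --instance-info image_source=<image_uuid>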
  9. Exit the openstackclient pod:

    $ exit

4.4. Configuring Redfish virtual media boot

You can use Redfish virtual media boot to supply a boot image to the Baseboard Management Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. The node can then boot from the virtual drive into the operating system that exists in the image.

Redfish hardware types support booting deploy, rescue, and user images over virtual media. The Bare Metal Provisioning service (ironic) uses kernel and ramdisk images associated with a node to build bootable ISO images for UEFI or BIOS boot modes at the moment of node deployment. The major advantage of virtual media boot is that you can eliminate the TFTP image transfer phase of PXE and use HTTP GET, or other methods, instead.

To launch bare-metal instances with the redfish hardware type over virtual media, set the boot interface of each bare-metal node to redfish-virtual-media and, for UEFI nodes, define the EFI System Partition (ESP) image. Then configure an enrolled node to use Redfish virtual media boot.

Prerequisites

  • The bare-metal node is registered and enrolled.
  • The IPA and instance images are available in the Image Service (glance).
  • For UEFI nodes, an EFI system partition image (ESP) is available in the Image Service (glance).

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Set the Bare Metal service boot interface to redfish-virtual-media:

    $ openstack baremetal node set --boot-interface redfish-virtual-media <node_name>
    • Replace <node_name> with the name of the node.
  3. For UEFI nodes, define the EFI System Partition (ESP) image:

    $ openstack baremetal node set --driver-info bootloader=<esp_image> <node>
    • Replace <esp_image> with the image UUID or URL for the ESP image.
    • Replace <node> with the name of the node.
    Note

    For BIOS nodes, do not complete this step.

  4. Create a port on the bare-metal node and associate the port with the MAC address of the NIC on the bare metal node:

    $ openstack baremetal port create --pxe-enabled True --node <node_uuid> <mac_address>
    • Replace <node_uuid> with the UUID of the bare-metal node.
    • Replace <mac_address> with the MAC address of the NIC on the bare-metal node.
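
    To confirm the new boot configuration, you can review the boot interface and driver settings for the node, for example:

    $ openstack baremetal node show <node> -c boot_interface -c driver_info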
  5. Exit the openstackclient pod:

    $ exit

4.5. Creating flavors for launching bare-metal instances

You must create flavors that your cloud users can use to request bare-metal instances. You can specify which bare-metal nodes should be used for bare-metal instances launched with a particular flavor by using a resource class. You can tag bare-metal nodes with resource classes that identify the hardware resources on the node, for example, GPUs. The cloud user can select a flavor with the GPU resource class to create an instance for a vGPU workload. The Compute scheduler uses the resource class to identify suitable host bare-metal nodes for instances.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Retrieve a list of your nodes to identify their UUIDs:

    $ openstack baremetal node list
  3. Tag each bare-metal node with a custom bare-metal resource class:

    $ openstack baremetal node set \
     --resource-class baremetal.<CUSTOM> <node>
    • Replace <CUSTOM> with a string that identifies the purpose of the resource class. For example, specify GPU to create a custom GPU resource class that you can use to tag bare-metal nodes that you want to designate for GPU workloads.
    • Replace <node> with the ID of the bare metal node.
  4. Create a flavor for bare-metal instances:

    $ openstack flavor create --id auto \
     --ram <ram_size_mb> --disk <disk_size_gb> \
     --vcpus <no_vcpus> baremetal
    • Replace <ram_size_mb> with the RAM of the bare metal node, in MB.
    • Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB.
    • Replace <no_vcpus> with the number of CPUs on the bare metal node.

      Note

      These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size.

  5. Associate the flavor for bare-metal instances with the custom resource class:

    $ openstack flavor set \
     --property resources:CUSTOM_BAREMETAL_<CUSTOM>=1 \
     baremetal

    To determine the name of a custom resource class that corresponds to a resource class of a bare-metal node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix with CUSTOM_.
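
    For example, a node tagged in step 3 with the resource class baremetal.GPU corresponds to the custom resource class CUSTOM_BAREMETAL_GPU:

    $ openstack flavor set \
     --property resources:CUSTOM_BAREMETAL_GPU=1 \
     baremetal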

    Note

    A flavor can request only one instance of a bare-metal resource class.

  6. Set the following flavor properties to prevent the Compute scheduler from using the bare-metal flavor properties to schedule instances:

    $ openstack flavor set \
     --property resources:VCPU=0 \
     --property resources:MEMORY_MB=0 \
     --property resources:DISK_GB=0 baremetal
  7. Verify that the new flavor has the correct values:

    $ openstack flavor list
  8. Exit the openstackclient pod:

    $ exit

4.6. Bare-metal node provisioning states

A bare-metal node transitions through several provisioning states during its lifetime. API requests and conductor events performed on the node initiate the transitions. There are two categories of provisioning states: "stable" and "in transition".

Use the following table to understand the node provisioning states and the actions you can perform to transition a node from one state to another.
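
To check the current provisioning state of a node at any time, you can query the node directly, for example:

    $ openstack baremetal node show <node> -f value -c provision_state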

Table 4.1. Provisioning states (State / Category / Description)

enroll

Stable

The initial state of each node. For information on enrolling a node, see Adding physical machines as bare metal nodes.

verifying

In transition

The Bare Metal Provisioning service validates that it can manage the node by using the driver_info configuration provided during the node enrollment.

manageable

Stable

The node is transitioned to the manageable state when the Bare Metal Provisioning service has verified that it can manage the node. You can transition the node from the manageable state to one of the following states by using the following commands:

  • openstack baremetal node adopt: adopting → active
  • openstack baremetal node provide: cleaning → available
  • openstack baremetal node clean: cleaning → available
  • openstack baremetal node inspect: inspecting → manageable

You must move a node to the manageable state after it is transitioned to one of the following failed states:

  • adopt failed
  • clean failed
  • inspect failed

Move a node into the manageable state when you need to update the node.

inspecting

In transition

The Bare Metal Provisioning service uses node introspection to update the hardware-derived node properties to reflect the current state of the hardware. The node transitions back to the manageable state when inspection is synchronous, or to the inspect wait state while an asynchronous inspection is in progress. The node transitions to inspect failed if an error occurs.

inspect wait

In transition

The provision state that indicates that an asynchronous inspection is in progress. If the node inspection is successful, the node transitions to the manageable state.

inspect failed

Stable

The provisioning state that indicates that the node inspection failed. You can transition the node from the inspect failed state to one of the following states by using the following commands:

  • openstack baremetal node inspect: inspecting → manageable
  • openstack baremetal node manage: manageable

cleaning

In transition

Nodes in the cleaning state are being scrubbed and reprogrammed into a known configuration. When a node is in the cleaning state, depending on the network management, the conductor performs the following tasks:

  • Out-of-band: The conductor performs the clean step.
  • In-band: The conductor prepares the environment to boot the ramdisk for running the in-band clean steps. The preparation tasks include building the PXE configuration files, and configuring the DHCP.

clean wait

In transition

Nodes in the clean wait state are being scrubbed and reprogrammed into a known configuration. This state is similar to the cleaning state except that in the clean wait state, the conductor is waiting for the ramdisk to boot or the clean step to finish.

You can interrupt the cleaning process of a node in the clean wait state by running openstack baremetal node abort.

available

Stable

After nodes have been successfully preconfigured and cleaned, they are moved into the available state and are ready to be provisioned. You can transition the node from the available state to one of the following states by using the following commands:

  • openstack baremetal node deploy: deploying → active
  • openstack baremetal node manage: manageable

deploying

In transition

Nodes in the deploying state are being prepared for a workload, which involves performing the following tasks:

  • Setting appropriate BIOS options for the node deployment.
  • Partitioning drives and creating file systems.
  • Creating any additional resources that might be required by additional subsystems, such as the node-specific network configuration, and a configuration drive partition.

wait call-back

In transition

Nodes in the wait call-back state are being prepared for a workload. This state is similar to the deploying state except that in the wait call-back state, the conductor is waiting for a task to complete before preparing the node. For example, the following tasks must be completed before the conductor can prepare the node:

  • The ramdisk has booted.
  • The bootloader is installed.
  • The image is written to the disk.

You can interrupt the deployment of a node in the wait call-back state by running openstack baremetal node delete or openstack baremetal node undeploy.

deploy failed

Stable

The provisioning state that indicates that the node deployment failed. You can transition the node from the deploy failed state to one of the following states by using the following commands:

  • openstack baremetal node deploy: deploying → active
  • openstack baremetal node rebuild: deploying → active
  • openstack baremetal node delete: deleting → cleaning → clean wait → cleaning → available
  • openstack baremetal node undeploy: deleting → cleaning → clean wait → cleaning → available

active

Stable

Nodes in the active state have a workload running on them. The Bare Metal Provisioning service might regularly collect out-of-band sensor information, including the power state. You can transition the node from the active state to one of the following states by using the following commands:

  • openstack baremetal node delete: deleting → available
  • openstack baremetal node undeploy: cleaning → available
  • openstack baremetal node rebuild: deploying → active
  • openstack baremetal node rescue: rescuing → rescue

deleting

In transition

When a node is in the deleting state, the Bare Metal Provisioning service disassembles the active workload and removes any configuration and resources it added to the node during the node deployment or rescue. Nodes transition quickly from the deleting state to the cleaning state, and then to the clean wait state.

error

Stable

If a node deletion is unsuccessful, the node is moved into the error state. You can transition the node from the error state to one of the following states by using the following commands:

  • openstack baremetal node delete: deleting → available
  • openstack baremetal node undeploy: cleaning → available

adopting

In transition

You can use the openstack baremetal node adopt command to transition a node with an existing workload directly from the manageable state to the active state without first cleaning and deploying the node. When a node is in the adopting state, the Bare Metal Provisioning service has taken over management of the node with its existing workload.

rescuing

In transition

Nodes in the rescuing state are being prepared to perform the following rescue operations:

  • Setting appropriate BIOS options for the node deployment.
  • Creating any additional resources that might be required by additional subsystems, such as node-specific network configurations.

rescue wait

In transition

Nodes in the rescue wait state are being rescued. This state is similar to the rescuing state except that in the rescue wait state, the conductor is waiting for the ramdisk to boot, or for the parts of the rescue operation that need to run in-band on the node to complete, such as setting the password for the user named rescue.

You can interrupt the rescue operation of a node in the rescue wait state by running openstack baremetal node abort.

rescue failed

Stable

The provisioning state that indicates that the node rescue failed. You can transition the node from the rescue failed state to one of the following states by using the following commands:

  • openstack baremetal node rescue: rescuing → rescue
  • openstack baremetal node unrescue: unrescuing → active
  • openstack baremetal node delete: deleting → available

rescue

Stable

Nodes in the rescue state are running a rescue ramdisk. The Bare Metal Provisioning service might regularly collect out-of-band sensor information, including the power state. You can transition the node from the rescue state to one of the following states by using the following commands:

  • openstack baremetal node unrescue: unrescuing → active
  • openstack baremetal node delete: deleting → available

unrescuing

In transition

Nodes in the unrescuing state are being prepared to transition from the rescue state to the active state.

unrescue failed

Stable

The provisioning state that indicates that the node unrescue operation failed. You can transition the node from the unrescue failed state to one of the following states by using the following commands:

  • openstack baremetal node rescue rescuing rescue
  • openstack baremetal node unrescue unrescuing active
  • openstack baremetal node delete deleting available