
Chapter 4. Configuring the Bare Metal Provisioning service after deployment


When you have deployed your overcloud with the Bare Metal Provisioning service (ironic), you must prepare the overcloud for bare-metal workloads. To prepare the overcloud and enable your cloud users to create bare-metal instances, complete the following tasks:

  • Configure the Networking service (neutron) to integrate with the Bare Metal Provisioning service.
  • Configure node cleaning.
  • Create the bare-metal flavor and resource class.
  • Optional: Create the bare-metal images.
  • Add physical machines as bare-metal nodes.
  • Optional: Configure Redfish virtual media boot.
  • Optional: Create host aggregates to separate physical and virtual machine provisioning.

4.1. Configuring the Networking service for bare metal provisioning

You can configure the Networking service (neutron) to integrate with the Bare Metal Provisioning service (ironic). You can configure the bare-metal network by using one of the following methods:

  • Create a single flat bare-metal network for the Bare Metal Provisioning conductor services, ironic-conductor. This network must route to the Bare Metal Provisioning services on the control plane network.
  • Create a custom composable network to implement Bare Metal Provisioning services in the overcloud.

4.1.1. Configuring the Networking service to integrate with the Bare Metal Provisioning service on a flat network

You can configure the Networking service (neutron) to integrate with the Bare Metal Provisioning service (ironic) by creating a single flat bare-metal network for the Bare Metal Provisioning conductor services, ironic-conductor. This network must route to the Bare Metal Provisioning services on the control plane network.

Procedure

  1. Log in to the node that hosts the Networking service (neutron) as the root user.
  2. Source your overcloud credentials file:

    # source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  3. Create the flat network over which to provision bare-metal instances:

    # openstack network create \
      --provider-network-type flat \
      --provider-physical-network <provider_physical_network> \
      --share <network_name>
    • Replace <provider_physical_network> with the name of the physical network over which you implement the virtual network, which is configured with the parameter NeutronBridgeMappings in your network-environment.yaml file.
    • Replace <network_name> with a name for this network.
  4. Create the subnet on the flat network:

    # openstack subnet create \
      --network <network_name> \
      --subnet-range <network_cidr> \
      --ip-version 4 \
      --gateway <gateway_ip> \
      --allocation-pool start=<start_ip>,end=<end_ip> \
      --dhcp <subnet_name>
    • Replace <network_name> with the name of the provisioning network that you created in the previous step.
    • Replace <network_cidr> with the Classless Inter-Domain Routing (CIDR) representation of the block of IP addresses that the subnet represents. The block of IP addresses that you specify in the range starting with <start_ip> and ending with <end_ip> must be within the block of IP addresses specified by <network_cidr>.
    • Replace <gateway_ip> with the IP address or host name of the router interface that acts as the gateway for the new subnet. This address must be within the block of IP addresses specified by <network_cidr>, but outside of the block of IP addresses specified by the range that starts with <start_ip> and ends with <end_ip>.
    • Replace <start_ip> with the IP address that denotes the start of the range of IP addresses within the new subnet from which floating IP addresses are allocated.
    • Replace <end_ip> with the IP address that denotes the end of the range of IP addresses within the new subnet from which floating IP addresses are allocated.
    • Replace <subnet_name> with a name for the subnet.
  5. Create a router for the network and subnet to ensure that the Networking service serves metadata requests:

    # openstack router create <router_name>
    • Replace <router_name> with a name for the router.
  6. Attach the subnet to the new router to enable the metadata requests from cloud-init to be served and the node to be configured:

    # openstack router add subnet <router_name> <subnet>
    • Replace <router_name> with the name of your router.
    • Replace <subnet> with the ID or name of the bare-metal subnet that you created in step 4.
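
For example, a minimal end-to-end sketch of this procedure with hypothetical values, assuming that the physical network is mapped as datacentre in NeutronBridgeMappings and that 192.0.2.0/24 is the provisioning range. Substitute values that match your environment:

# openstack network create \
  --provider-network-type flat \
  --provider-physical-network datacentre \
  --share baremetal-net
# openstack subnet create \
  --network baremetal-net \
  --subnet-range 192.0.2.0/24 \
  --ip-version 4 \
  --gateway 192.0.2.1 \
  --allocation-pool start=192.0.2.100,end=192.0.2.200 \
  --dhcp baremetal-subnet
# openstack router create baremetal-router
# openstack router add subnet baremetal-router baremetal-subnet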

4.1.2. Configuring the Networking service to integrate with the Bare Metal Provisioning service on a custom composable network

You can configure the Networking service (neutron) to integrate with the Bare Metal Provisioning service (ironic) by creating a custom composable network to implement Bare Metal Provisioning services in the overcloud.

Procedure

  1. Log in to the undercloud host.
  2. Source your overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  3. Retrieve the UUID for the provider network that hosts the Bare Metal Provisioning service:

    (overcloud)$ openstack network show <network_name> -f value -c id
    • Replace <network_name> with the name of the provider network that you want to use for the bare-metal instance provisioning network.
  4. Open your local environment file that configures the Bare Metal Provisioning service for your deployment, for example, ironic-overrides.yaml.
  5. Configure the network to use as the bare-metal instance provisioning network:

    parameter_defaults:
      IronicProvisioningNetwork: <network_uuid>
    • Replace <network_uuid> with the UUID of the provider network retrieved in step 3.
  6. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  7. To apply the bare-metal instance provisioning network configuration, add your Bare Metal Provisioning environment files to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
      -e [your environment files] \
      -e /home/stack/templates/node-info.yaml \
      -r /home/stack/templates/roles_data.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/network-environment.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/services/<default_ironic_template> \
      -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \
      -e /home/stack/templates/network_environment_overrides.yaml \
      -n /home/stack/templates/network_data.yaml \
      -e /home/stack/templates/ironic-overrides.yaml
    • Replace <default_ironic_template> with either ironic.yaml or ironic-overcloud.yaml, depending on the Networking service mechanism driver for your deployment.
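
For example, a sketch that captures the network UUID in a shell variable and writes the override file, assuming a hypothetical provider network named provisioning. If ironic-overrides.yaml already contains other parameters, edit the file instead of overwriting it:

(overcloud)$ NETWORK_UUID=$(openstack network show provisioning -f value -c id)
(overcloud)$ cat > /home/stack/templates/ironic-overrides.yaml <<EOF
parameter_defaults:
  IronicProvisioningNetwork: ${NETWORK_UUID}
EOF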

4.2. Cleaning bare-metal nodes

The Bare Metal Provisioning service cleans nodes to prepare them for provisioning. You can clean bare-metal nodes by using one of the following methods:

  • Automatic: You can configure your overcloud to automatically perform node cleaning when you unprovision a node.
  • Manual: You can manually clean individual nodes when required.

4.2.1. Configuring automatic node cleaning

Automatic bare-metal node cleaning runs after you enroll a node, and before the node reaches the available provisioning state. Automatic cleaning is run each time the node is unprovisioned.

By default, the Bare Metal Provisioning service uses a network named provisioning for node cleaning. However, network names are not unique in the Networking service (neutron), so it is possible for a project to create a network with the same name, which causes a conflict with the Bare Metal Provisioning service. To avoid the conflict, use the network UUID to configure the node cleaning network.

Procedure

  1. Log in to the undercloud host.
  2. Source your overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  3. Retrieve the UUID for the provider network that hosts the Bare Metal Provisioning service:

    (overcloud)$ openstack network show <network_name> -f value -c id
    • Replace <network_name> with the name of the network that you want to use for the bare-metal node cleaning network.
  4. Open your local environment file that configures the Bare Metal Provisioning service for your deployment, for example, ironic-overrides.yaml.
  5. Configure the network to use as the node cleaning network:

    parameter_defaults:
      IronicCleaningNetwork: <network_uuid>
    • Replace <network_uuid> with the UUID of the provider network that you retrieved in step 3.
  6. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  7. To apply the node cleaning network configuration, add your Bare Metal Provisioning environment files to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
      -e [your environment files] \
      -e /home/stack/templates/node-info.yaml \
      -r /home/stack/templates/roles_data.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/network-environment.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/services/<default_ironic_template> \
      -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \
      -e /home/stack/templates/network_environment_overrides.yaml \
      -n /home/stack/templates/network_data.yaml \
      -e /home/stack/templates/ironic-overrides.yaml
    • Replace <default_ironic_template> with either ironic.yaml or ironic-overcloud.yaml, depending on the Networking service mechanism driver for your deployment.

4.2.2. Cleaning nodes manually

You can clean specific nodes manually as required. Node cleaning has two modes:

  • Metadata only clean: Removes partitions from all disks on the node. The metadata only mode of cleaning is faster than a full clean, but less secure because it erases only partition tables. Use this mode only in trusted tenant environments.
  • Full clean: Removes all data from all disks, using either ATA secure erase or by shredding. A full clean can take several hours to complete.

Procedure

  1. Source your overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. Check the current state of the node:

    $ openstack baremetal node show \
     -f value -c provision_state <node>
    • Replace <node> with the name or UUID of the node to clean.
  3. If the node is not in the manageable state, then set it to manageable:

    $ openstack baremetal node manage <node>
  4. Clean the node:

    $ openstack baremetal node clean <node> \
      --clean-steps '[{"interface": "deploy", "step": "<clean_mode>"}]'
    • Replace <node> with the name or UUID of the node to clean.
    • Replace <clean_mode> with the type of cleaning to perform on the node:

      • erase_devices: Performs a full clean.
      • erase_devices_metadata: Performs a metadata only clean.
  5. Wait for the clean to complete, then check the status of the node:

    • manageable: The clean was successful, and the node is ready to provision.
    • clean failed: The clean was unsuccessful. Inspect the last_error field for the cause of failure.
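
For example, a sketch that runs a metadata-only clean on a hypothetical node named node0 and then watches the provisioning state until it reaches manageable or clean failed:

$ openstack baremetal node manage node0
$ openstack baremetal node clean node0 \
  --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'
$ watch -n 30 'openstack baremetal node show node0 -f value -c provision_state -c last_error'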

4.3. Creating flavors for launching bare-metal instances

You must create flavors that your cloud users can use to request bare-metal instances. You can specify which bare-metal nodes should be used for bare-metal instances launched with a particular flavor by using a resource class. You can tag bare-metal nodes with resource classes that identify the hardware resources on the node, for example, GPUs. The cloud user can select a flavor with the GPU resource class to create an instance for a vGPU workload. The Compute scheduler uses the resource class to identify suitable host bare-metal nodes for instances.

Procedure

  1. Source the overcloud credentials file:

    $ source ~/overcloudrc
  2. Create a flavor for bare-metal instances:

    (overcloud)$ openstack flavor create --id auto \
     --ram <ram_size_mb> --disk <disk_size_gb> \
     --vcpus <no_vcpus> baremetal
    • Replace <ram_size_mb> with the RAM of the bare metal node, in MB.
    • Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB.
    • Replace <no_vcpus> with the number of CPUs on the bare metal node.

      Note

      These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size.

  3. Retrieve a list of your nodes to identify their UUIDs:

    (overcloud)$ openstack baremetal node list
  4. Tag each bare-metal node with a custom bare-metal resource class:

    (overcloud)$ openstack baremetal node set \
     --resource-class baremetal.<CUSTOM> <node>
    • Replace <CUSTOM> with a string that identifies the purpose of the resource class. For example, set to GPU to create a custom GPU resource class that you can use to tag bare metal nodes that you want to designate for GPU workloads.
    • Replace <node> with the ID of the bare metal node.
  5. Associate the flavor for bare-metal instances with the custom resource class:

    (overcloud)$ openstack flavor set \
     --property resources:CUSTOM_BAREMETAL_<CUSTOM>=1 \
     baremetal

    To determine the name of a custom resource class that corresponds to a resource class of a bare-metal node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix the result with CUSTOM_. For example, the resource class baremetal.GPU corresponds to the flavor property resources:CUSTOM_BAREMETAL_GPU=1.

    Note

    A flavor can request only one instance of a bare-metal resource class.

  6. Set the following flavor properties to prevent the Compute scheduler from using the bare-metal flavor properties to schedule instances:

    (overcloud)$ openstack flavor set \
     --property resources:VCPU=0 \
     --property resources:MEMORY_MB=0 \
     --property resources:DISK_GB=0 baremetal
  7. Verify that the new flavor has the correct values:

    (overcloud)$ openstack flavor list
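
For example, a sketch that designates nodes for GPU workloads by using a custom GPU resource class. The flavor name baremetal.gpu and the hardware values of 65536 MB RAM, a 500 GB disk, and 16 CPUs are hypothetical:

(overcloud)$ openstack flavor create --id auto \
 --ram 65536 --disk 500 --vcpus 16 baremetal.gpu
(overcloud)$ openstack baremetal node set --resource-class baremetal.GPU <node>
(overcloud)$ openstack flavor set \
 --property resources:CUSTOM_BAREMETAL_GPU=1 \
 --property resources:VCPU=0 \
 --property resources:MEMORY_MB=0 \
 --property resources:DISK_GB=0 baremetal.gpu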

4.4. Creating images for launching bare-metal instances

An overcloud that includes the Bare Metal Provisioning service (ironic) requires two sets of images:

  • Deploy images: The deploy images are the agent.ramdisk and agent.kernel images that the Bare Metal Provisioning agent (ironic-python-agent) requires to boot the RAM disk over the network and copy the user image for the overcloud nodes to the disk. You install the deploy images as part of the undercloud installation. For more information, see Obtaining images for overcloud nodes.
  • User images: The images the cloud user uses to provision their bare-metal instances. The user image consists of a kernel image, a ramdisk image, and a main image. The main image is either a root partition, or a whole-disk image:

    • Whole-disk image: An image that contains the partition table and boot loader.
    • Root partition image: Contains only the root partition of the operating system.

Compatible whole-disk RHEL guest images should work without modification. To create your own custom disk image, see Creating images in the Creating and Managing Images guide.
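
For example, a sketch that uploads a hypothetical whole-disk image file named rhel-guest.qcow2 as a user image; the file name and image name are assumptions:

$ openstack image create \
  --disk-format qcow2 \
  --container-format bare \
  --public \
  --file ./rhel-guest.qcow2 user-image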

4.4.1. Uploading the deploy images to the Image service

You must upload the deploy images installed by director to the Image service. The deploy image consists of the following two images:

  • The kernel image: /tftpboot/agent.kernel
  • The ramdisk image: /tftpboot/agent.ramdisk

These images are installed in the home directory. For more information on how the deploy images were installed, see Obtaining images for overcloud nodes.

Procedure

  • Extract the images and upload them to the Image service:
$ openstack image create \
  --container-format aki \
  --disk-format aki \
  --public \
  --file ./tftpboot/agent.kernel bm-deploy-kernel
$ openstack image create \
  --container-format ari \
  --disk-format ari \
  --public \
  --file ./tftpboot/agent.ramdisk bm-deploy-ramdisk

4.5. Configuring deploy interfaces

When you provision bare metal nodes, the Bare Metal Provisioning service (ironic) on the overcloud writes a base operating system image to the disk on the bare metal node. By default, the deploy interface copies the image to the disk on each node over an iSCSI mount. Alternatively, you can use the direct deploy interface, which writes disk images from an HTTP location directly to the disk on bare metal nodes.

Note

Support for the iSCSI deploy interface will be deprecated in Red Hat OpenStack Platform (RHOSP) version 17.0, and will be removed in RHOSP 18.0. Direct deploy will be the default deploy interface from RHOSP 17.0.

Deploy interfaces play a critical role in the provisioning process: they orchestrate the deployment and define the mechanism for transferring the image to the target disk.

Prerequisites

  • Dependent packages configured on the bare metal service nodes that run ironic-conductor.
  • OpenStack Compute (nova) configured to use the Bare Metal Provisioning service endpoint.
  • Flavors created for the available hardware, so that nova can boot the new node with the correct flavor.
  • Images must be available in the Image service (glance):

    • bm-deploy-kernel
    • bm-deploy-ramdisk
    • user-image
    • user-image-vmlinuz
    • user-image-initrd
  • Hardware to enroll with the Ironic API service.

Workflow

Use the following example workflow to understand the standard deploy process. Depending on the ironic driver interfaces that you use, some of the steps might differ:

  1. The Nova scheduler receives a boot instance request from the Nova API.
  2. The Nova scheduler identifies the relevant hypervisor and identifies the target physical node.
  3. The Nova compute manager claims the resources of the selected hypervisor.
  4. The Nova compute manager creates unbound tenant virtual interfaces (VIFs) in the Networking service according to the network interfaces that the nova boot request specifies.
  5. Nova compute invokes driver.spawn from the Nova compute virt layer to create a spawn task that contains all of the necessary information. During the spawn process, the virt driver completes the following steps.

    1. Updates the target ironic node with information about the deploy image, instance UUID, requested capabilities, and flavor properties.
    2. Calls the ironic API to validate the power and deploy interfaces of the target node.
    3. Attaches the VIFs to the node. Each neutron port can be attached to any ironic port or group. Port groups have higher priority than ports.
    4. Generates config drive.
  6. The Nova ironic virt driver issues a deploy request with the Ironic API to the Ironic conductor that services the bare metal node.
  7. Virtual interfaces are plugged in and the Neutron API updates DHCP to configure PXE/TFTP options.
  8. The ironic node boot interface prepares (i)PXE configuration and caches the deploy kernel and ramdisk.
  9. The ironic node management interface issues commands to enable network boot of the node.
  10. The ironic node deploy interface caches the instance image, kernel, and ramdisk, if necessary.
  11. The ironic node power interface instructs the node to power on.
  12. The node boots the deploy ramdisk.
  13. With iSCSI deployment, the conductor copies the image over iSCSI to the physical node. With direct deployment, the deploy ramdisk downloads the image from a temporary URL, which must be a Swift API-compatible object store URL or an HTTP URL.
  14. The node boot interface switches PXE configuration to refer to instance images and instructs the ramdisk agent to soft power off the node. If the soft power off fails, the bare metal node is powered off with IPMI/BMC.
  15. The deploy interface instructs the network interface to remove any provisioning ports, binds the tenant ports to the node, and powers the node on.

The provisioning state of the new bare metal node is now active.
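
To observe these transitions while an instance deploys, you can poll the provisioning and power states of the node; for example:

$ watch -n 10 'openstack baremetal node show <node> -f value -c provision_state -c power_state'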

4.5.1. Configuring the direct deploy interface on the overcloud

The iSCSI deploy interface is the default deploy interface. However, you can enable the direct deploy interface to download an image from an HTTP location to the target disk.

Note

Support for the iSCSI deploy interface will be deprecated in Red Hat OpenStack Platform (RHOSP) version 17.0, and will be removed in RHOSP 18.0. Direct deploy will be the default deploy interface from RHOSP 17.0.

Prerequisites

  • The tmpfs on each overcloud node must have at least 8 GB of RAM.

Procedure

  1. Create or modify a custom environment file, /home/stack/templates/direct_deploy.yaml, and specify the IronicEnabledDeployInterfaces and IronicDefaultDeployInterface parameters:

    parameter_defaults:
      IronicEnabledDeployInterfaces: direct
      IronicDefaultDeployInterface: direct

    If you register your nodes with iSCSI, retain the iscsi value in the IronicEnabledDeployInterfaces parameter:

    parameter_defaults:
      IronicEnabledDeployInterfaces: direct,iscsi
      IronicDefaultDeployInterface: direct
  2. By default, the Bare Metal Provisioning service (ironic) agent on each node obtains the image stored in the Object Storage service (swift) through an HTTP link. Alternatively, ironic can stream this image directly to the node through the ironic-conductor HTTP server. To change the service that provides the image, set the IronicImageDownloadSource parameter to http in the /home/stack/templates/direct_deploy.yaml file:

    parameter_defaults:
      IronicEnabledDeployInterfaces: direct
      IronicDefaultDeployInterface: direct
      IronicImageDownloadSource: http
  3. Include the custom environment with your overcloud deployment:

    $ openstack overcloud deploy \
      --templates \
      ...
      -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic.yaml \
      -e /home/stack/templates/direct_deploy.yaml \
      ...

    Wait until deployment completes.

Note

If you did not specify IronicDefaultDeployInterface or want to use a different deploy interface, specify the deploy interface when you create or update a node:

$ openstack baremetal node create --driver ipmi --deploy-interface direct
$ openstack baremetal node set <NODE> --deploy-interface direct
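
After the node is created or updated, you can confirm which deploy interface it uses; for example:

$ openstack baremetal node show <NODE> -f value -c deploy_interface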

4.6. Adding physical machines as bare metal nodes

Use one of the following methods to enroll a bare metal node:

  • Prepare an inventory file with the node details, import the file into the Bare Metal Provisioning service, and make the nodes available.
  • Register a physical machine as a bare metal node, and then manually add its hardware details and create ports for each of its Ethernet MAC addresses. You can perform these steps on any node that has your overcloudrc file.

4.6.1. Enrolling a bare metal node with an inventory file

Prepare an inventory file with the node details, import the file into the Bare Metal Provisioning service (ironic), and make the nodes available.

Prerequisites

Procedure

  1. Create an inventory file, overcloud-nodes.yaml, that includes the node details. You can enroll multiple nodes with one file.

    nodes:
        - name: node0
          driver: ipmi
          driver_info:
            ipmi_address: <ipmi_ip>
            ipmi_username: <user>
            ipmi_password: <password>
            [<property>: <value>]
          properties:
            cpus: <cpu_count>
            cpu_arch: <cpu_arch>
            memory_mb: <memory>
            local_gb: <root_disk>
            root_device:
                serial: <serial>
          ports:
            - address: <mac_address>
    • Replace <ipmi_ip> with the address of the Bare Metal controller.
    • Replace <user> with your username.
    • Replace <password> with your password.
    • Optional: Replace <property>: <value> with an IPMI property that you want to configure, and the property value. For information on the available properties, see Intelligent Platform Management Interface (IPMI) power management driver.
    • Replace <cpu_count> with the number of CPUs.
    • Replace <cpu_arch> with the type of architecture of the CPUs.
    • Replace <memory> with the amount of memory in MiB.
    • Replace <root_disk> with the size of the root disk in GiB. Only required when the machine has multiple disks.
    • Replace <serial> with the serial number of the disk that you want to use for deployment.
    • Replace <mac_address> with the MAC address of the NIC used to PXE boot.
  2. Source the overcloudrc file:

    $ source ~/overcloudrc
  3. Import the inventory file into the Bare Metal Provisioning service:

    $ openstack baremetal create overcloud-nodes.yaml

    The nodes are now in the enroll state.

  4. Specify the deploy kernel and deploy ramdisk on each node:

    $ openstack baremetal node set <node> \
      --driver-info deploy_kernel=<kernel_file> \
      --driver-info deploy_ramdisk=<initramfs_file>
    • Replace <node> with the name or ID of the node.
    • Replace <kernel_file> with the path to the .kernel image, for example, file:///var/lib/ironic/httpboot/agent.kernel.
    • Replace <initramfs_file> with the path to the .initramfs image, for example, file:///var/lib/ironic/httpboot/agent.ramdisk.
  5. Optional: Specify the IPMI cipher suite for each node:

    $ openstack baremetal node set <node> \
     --driver-info ipmi_cipher_suite=<version>
    • Replace <node> with the name or ID of the node.
    • Replace <version> with the cipher suite version to use on the node. Set to one of the following valid values:

      • 3 - The node uses the AES-128 with SHA1 cipher suite.
      • 17 - The node uses the AES-128 with SHA256 cipher suite.
  6. Set the provisioning state of the node to available:

    $ openstack baremetal node manage <node>
    $ openstack baremetal node provide <node>

    The Bare Metal Provisioning service cleans the node if you enabled node cleaning.

  7. Set the local boot option on the node:

    $ openstack baremetal node set <node> --property capabilities="boot_option:local"
  8. Check that the nodes are enrolled:

    $ openstack baremetal node list

    There might be a delay between enrolling a node and its state being shown.
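
For example, a filled-in overcloud-nodes.yaml sketch for a single node; the IPMI credentials, hardware values, serial number, and MAC address are hypothetical, so substitute details that match your hardware:

nodes:
    - name: node0
      driver: ipmi
      driver_info:
        ipmi_address: 192.0.2.205
        ipmi_username: admin
        ipmi_password: examplepassword
      properties:
        cpus: 4
        cpu_arch: x86_64
        memory_mb: 65536
        local_gb: 500
        root_device:
            serial: 0000000012345678
      ports:
        - address: 52:54:00:6c:3c:01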

4.6.2. Enrolling a bare-metal node manually

Register a physical machine as a bare metal node, then manually add its hardware details and create ports for each of its Ethernet MAC addresses. You can perform these steps on any node that has your overcloudrc file.

Prerequisites

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the overcloud credentials file:

    (undercloud)$ source ~/overcloudrc
  3. Add a new node:

    $ openstack baremetal node create --driver <driver_name> --name <node_name>
    • Replace <driver_name> with the name of the driver, for example, ipmi.
    • Replace <node_name> with the name of your new bare-metal node.
  4. Note the UUID assigned to the node when it is created.
  5. Set the boot option to local for each registered node:

    $ openstack baremetal node set \
      --property capabilities="boot_option:local" <node>

    Replace <node> with the UUID of the bare metal node.

  6. Specify the deploy kernel and deploy ramdisk for the node driver:

    $ openstack baremetal node set <node> \
      --driver-info deploy_kernel=<kernel_file> \
      --driver-info deploy_ramdisk=<initramfs_file>
    • Replace <node> with the ID of the bare metal node.
    • Replace <kernel_file> with the path to the .kernel image, for example, file:///var/lib/ironic/httpboot/agent.kernel.
    • Replace <initramfs_file> with the path to the .initramfs image, for example, file:///var/lib/ironic/httpboot/agent.ramdisk.
  7. Update the node properties to match the hardware specifications on the node:

    $ openstack baremetal node set <node> \
      --property cpus=<cpu> \
      --property memory_mb=<ram> \
      --property local_gb=<disk> \
      --property cpu_arch=<arch>
    • Replace <node> with the ID of the bare metal node.
    • Replace <cpu> with the number of CPUs.
    • Replace <ram> with the RAM in MB.
    • Replace <disk> with the disk size in GB.
    • Replace <arch> with the architecture type.
  8. Optional: Specify the IPMI cipher suite for each node:

    $ openstack baremetal node set <node> \
     --driver-info ipmi_cipher_suite=<version>
    • Replace <node> with the ID of the bare metal node.
    • Replace <version> with the cipher suite version to use on the node. Set to one of the following valid values:

      • 3 - The node uses the AES-128 with SHA1 cipher suite.
      • 17 - The node uses the AES-128 with SHA256 cipher suite.
  9. Optional: Specify the IPMI details for each node:

    $ openstack baremetal node set <node> \
     --driver-info <property>=<value>
  10. Optional: If you have multiple disks, set the root device hints to inform the deploy ramdisk which disk to use for deployment:

    $ openstack baremetal node set <node> \
      --property root_device='{"<property>": "<value>"}'
    • Replace <node> with the ID of the bare metal node.
    • Replace <property> and <value> with details about the disk that you want to use for deployment, for example, root_device='{"size": "128"}'.

      RHOSP supports the following properties:

      • model (String): Device identifier.
      • vendor (String): Device vendor.
      • serial (String): Disk serial number.
      • hctl (String): Host:Channel:Target:Lun for SCSI.
      • size (Integer): Size of the device in GB.
      • wwn (String): Unique storage identifier.
      • wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
      • wwn_vendor_extension (String): Unique vendor storage identifier.
      • rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
      • name (String): The name of the device, for example, /dev/sdb1. Use this property only for devices with persistent names.

        Note

        If you specify more than one property, the device must match all of those properties.

  11. Inform the Bare Metal Provisioning service of the node network card by creating a port with the MAC address of the NIC on the provisioning network:

    $ openstack baremetal port create --node <node_uuid> <mac_address>
    • Replace <node_uuid> with the unique ID of the bare metal node.
    • Replace <mac_address> with the MAC address of the NIC used to PXE boot.
  12. Validate the configuration of the node:

    $ openstack baremetal node validate <node>
    +------------+--------+---------------------------------------------+
    | Interface  | Result | Reason                                      |
    +------------+--------+---------------------------------------------+
    | boot       | False  | Cannot validate image information for node  |
    |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
    |            |        | because one or more parameters are missing  |
    |            |        | from its instance_info. Missing are:        |
    |            |        | ['ramdisk', 'kernel', 'image_source']       |
    | console    | None   | not supported                               |
    | deploy     | False  | Cannot validate image information for node  |
    |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
    |            |        | because one or more parameters are missing  |
    |            |        | from its instance_info. Missing are:        |
    |            |        | ['ramdisk', 'kernel', 'image_source']       |
    | inspect    | None   | not supported                               |
    | management | True   |                                             |
    | network    | True   |                                             |
    | power      | True   |                                             |
    | raid       | True   |                                             |
    | storage    | True   |                                             |
    +------------+--------+---------------------------------------------+

    The validation output Result indicates the following:

    • False: The interface failed validation. If the reason provided includes missing instance_info parameters ['ramdisk', 'kernel', and 'image_source'], this might be because the Compute service populates those parameters at the beginning of the deployment process, so they are not set at this point. If you are using a whole-disk image, you might need to set only image_source to pass the validation.
    • True: The interface has passed validation.
    • None: The interface is not supported for your driver.
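
If you use a whole-disk image and want the boot and deploy interfaces to pass validation before you provision the node, you can set image_source on the node yourself; for example, a sketch that assumes a hypothetical Image service image named user-image:

$ openstack baremetal node set <node> \
  --instance-info image_source=$(openstack image show user-image -f value -c id)
$ openstack baremetal node validate <node>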

4.6.3. Bare-metal node provisioning states

A bare-metal node transitions through several provisioning states during its lifetime. API requests and conductor events performed on the node initiate the transitions. There are two categories of provisioning states: "stable" and "in transition".

Use the following table to understand the provisioning states a node can be in, and the actions that are available for you to use to transition the node from one provisioning state to another.

Table 4.1. Provisioning states
State | Category | Description

enroll

Stable

The initial state of each node. For information on enrolling a node, see Adding physical machines as bare metal nodes.

verifying

In transition

The Bare Metal Provisioning service validates that it can manage the node by using the driver_info configuration provided during the node enrollment.

manageable

Stable

The node is transitioned to the manageable state when the Bare Metal Provisioning service has verified that it can manage the node. You can transition the node from the manageable state to one of the following states by using the following commands:

  • openstack baremetal node adopt (adopting → active)
  • openstack baremetal node provide (cleaning → available)
  • openstack baremetal node clean (cleaning → available)
  • openstack baremetal node inspect (inspecting → manageable)

You must move a node to the manageable state after it is transitioned to one of the following failed states:

  • adopt failed
  • clean failed
  • inspect failed

Move a node into the manageable state when you need to update the node.

inspecting

In transition

The Bare Metal Provisioning service uses node introspection to update the hardware-derived node properties to reflect the current state of the hardware. The node transitions to manageable for synchronous inspection, and inspect wait for asynchronous inspection. The node transitions to inspect failed if an error occurs.

inspect wait

In transition

The provision state that indicates that an asynchronous inspection is in progress. If the node inspection is successful, the node transitions to the manageable state.

inspect failed

Stable

The provisioning state that indicates that the node inspection failed. You can transition the node from the inspect failed state to one of the following states by using the following commands:

  • openstack baremetal node inspect (inspecting → manageable)
  • openstack baremetal node manage (manageable)

cleaning

In transition

Nodes in the cleaning state are being scrubbed and reprogrammed into a known configuration. When a node is in the cleaning state, depending on the network management, the conductor performs the following tasks:

  • Out-of-band: The conductor performs the clean step.
  • In-band: The conductor prepares the environment to boot the ramdisk for running the in-band clean steps. The preparation tasks include building the PXE configuration files, and configuring the DHCP.

clean wait

In transition

Nodes in the clean wait state are being scrubbed and reprogrammed into a known configuration. This state is similar to the cleaning state except that in the clean wait state, the conductor is waiting for the ramdisk to boot or the clean step to finish.

You can interrupt the cleaning process of a node in the clean wait state by running openstack baremetal node abort.

available

Stable

After nodes have been successfully preconfigured and cleaned, they are moved into the available state and are ready to be provisioned. You can transition the node from the available state to one of the following states by using the following commands:

  • openstack baremetal node deploy (deploying → active)
  • openstack baremetal node manage (manageable)

deploying

In transition

Nodes in the deploying state are being prepared for a workload, which involves performing the following tasks:

  • Setting appropriate BIOS options for the node deployment.
  • Partitioning drives and creating file systems.
  • Creating any additional resources that might be required by additional subsystems, such as the node-specific network configuration and a configuration drive partition.

wait call-back

In transition

Nodes in the wait call-back state are being prepared for a workload. This state is similar to the deploying state except that in the wait call-back state, the conductor is waiting for a task to complete before preparing the node. For example, the following tasks must be completed before the conductor can prepare the node:

  • The ramdisk has booted.
  • The bootloader is installed.
  • The image is written to the disk.

You can interrupt the deployment of a node in the wait call-back state by running openstack baremetal node delete or openstack baremetal node undeploy.

deploy failed

Stable

The provisioning state that indicates that the node deployment failed. You can transition the node from the deploy failed state to one of the following states by using the following commands:

  • openstack baremetal node deploy (deploying → active)
  • openstack baremetal node rebuild (deploying → active)
  • openstack baremetal node delete (deleting → cleaning → clean wait → cleaning → available)
  • openstack baremetal node undeploy (deleting → cleaning → clean wait → cleaning → available)

active

Stable

Nodes in the active state have a workload running on them. The Bare Metal Provisioning service may regularly collect out-of-band sensor information, including the power state. You can transition the node from the active state to one of the following states by using the following commands:

  • openstack baremetal node delete (deleting → available)
  • openstack baremetal node undeploy (cleaning → available)
  • openstack baremetal node rebuild (deploying → active)
  • openstack baremetal node rescue (rescuing → rescue)

deleting

In transition

When a node is in the deleting state, the Bare Metal Provisioning service disassembles the active workload and removes any configuration and resources it added to the node during the node deployment or rescue. Nodes transition quickly from the deleting state to the cleaning state, and then to the clean wait state.

error

Stable

If a node deletion is unsuccessful, the node is moved into the error state. You can transition the node from the error state to one of the following states by using the following commands:

  • openstack baremetal node delete (deleting → available)
  • openstack baremetal node undeploy (cleaning → available)

adopting

In transition

You can use the openstack baremetal node adopt command to transition a node with an existing workload directly from the manageable state to the active state without first cleaning and deploying the node. When a node is in the adopting state, the Bare Metal Provisioning service has taken over management of the node with its existing workload.

rescuing

In transition

Nodes in the rescuing state are being prepared to perform the following rescue operations:

  • Setting appropriate BIOS options for the node deployment.
  • Creating any additional resources that may be required by additional subsystems, such as node-specific network configurations.

rescue wait

In transition

Nodes in the rescue wait state are being rescued. This state is similar to the rescuing state except that in the rescue wait state, the conductor is waiting for the ramdisk to boot, or for the parts of the rescue that must run in-band on the node to complete, such as setting the password for the user named rescue.

You can interrupt the rescue operation of a node in the rescue wait state by running openstack baremetal node abort.

rescue failed

Stable

The provisioning state that indicates that the node rescue failed. You can transition the node from the rescue failed state to one of the following states by using the following commands:

  • openstack baremetal node rescue (rescuing → rescue)
  • openstack baremetal node unrescue (unrescuing → active)
  • openstack baremetal node delete (deleting → available)

rescue

Stable

Nodes in the rescue state are running a rescue ramdisk. The Bare Metal Provisioning service may regularly collect out-of-band sensor information, including the power state. You can transition the node from the rescue state to one of the following states by using the following commands:

  • openstack baremetal node unrescue (unrescuing → active)
  • openstack baremetal node delete (deleting → available)

unrescuing

In transition

Nodes in the unrescuing state are being prepared to transition from the rescue state to the active state.

unrescue failed

Stable

The provisioning state that indicates that the node unrescue operation failed. You can transition the node from the unrescue failed state to one of the following states by using the following commands:

  • openstack baremetal node rescue (rescuing → rescue)
  • openstack baremetal node unrescue (unrescuing → active)
  • openstack baremetal node delete (deleting → available)
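
To check the current provisioning state of your nodes against this table, list the nodes with the relevant columns; for example:

$ openstack baremetal node list -c UUID -c Name -c "Provisioning State"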

4.7. Configuring Redfish virtual media boot

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

You can use Redfish virtual media boot to supply a boot image to the Baseboard Management Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. The node can then boot from the virtual drive into the operating system that exists in the image.

Redfish hardware types support booting deploy, rescue, and user images over virtual media. The Bare Metal Provisioning service (ironic) uses kernel and ramdisk images associated with a node to build bootable ISO images for UEFI or BIOS boot modes at the moment of node deployment. The major advantage of virtual media boot is that you can eliminate the TFTP image transfer phase of PXE and use HTTP GET, or other methods, instead.

4.7.1. Deploying a bare metal server with Redfish virtual media boot

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

To boot a node with the redfish hardware type over virtual media, set the boot interface to redfish-virtual-media and, for UEFI nodes, define the EFI System Partition (ESP) image. Then configure an enrolled node to use Redfish virtual media boot.

Prerequisites

  • Redfish driver enabled in the enabled_hardware_types parameter in the undercloud.conf file.
  • A bare metal node registered and enrolled.
  • IPA and instance images in the Image Service (glance).
  • For UEFI nodes, you must also have an EFI system partition image (ESP) available in the Image Service (glance).
  • A bare metal flavor.
  • A network for cleaning and provisioning.
  • Sushy library installed:

    $ sudo yum install sushy

Procedure

  1. Set the Bare Metal service (ironic) boot interface to redfish-virtual-media:

    $ openstack baremetal node set --boot-interface redfish-virtual-media $NODE_NAME

    Replace $NODE_NAME with the name of the node.

  2. For UEFI nodes, set the boot mode to uefi:

    $ openstack baremetal node set --property capabilities="boot_mode:uefi" $NODE_NAME

    Replace $NODE_NAME with the name of the node.

    Note

    For BIOS nodes, do not complete this step.

  3. For UEFI nodes, define the EFI System Partition (ESP) image:

    $ openstack baremetal node set --driver-info bootloader=$ESP $NODE_NAME

    Replace $ESP with the glance image UUID or URL for the ESP image, and replace $NODE_NAME with the name of the node.

    Note

    For BIOS nodes, do not complete this step.

  4. Create a port on the bare metal node and associate the port with the MAC address of the NIC on the bare metal node:

    $ openstack baremetal port create --pxe-enabled True --node $UUID $MAC_ADDRESS

    Replace $UUID with the UUID of the bare metal node, and replace $MAC_ADDRESS with the MAC address of the NIC on the bare metal node.

  5. Create the new bare metal server:

    $ openstack server create \
        --flavor baremetal \
        --image $IMAGE \
        --network $NETWORK \
        test_instance

    Replace $IMAGE and $NETWORK with the names of the image and network that you want to use.

4.8. Using host aggregates to separate physical and virtual machine provisioning

OpenStack Compute uses host aggregates to partition availability zones, and group together nodes that have specific shared properties. When an instance is provisioned, the Compute scheduler compares properties on the flavor with the properties assigned to host aggregates, and ensures that the instance is provisioned in the correct aggregate and on the correct host: either on a physical machine or as a virtual machine.

Complete the steps in this section to perform the following operations:

  • Add the property baremetal to your flavors and set it to either true or false.
  • Create separate host aggregates for bare metal hosts and compute nodes with a matching baremetal property. Nodes grouped into an aggregate inherit this property.

Prerequisites

Procedure

  1. Set the baremetal property to true on the baremetal flavor:

    $ openstack flavor set baremetal --property baremetal=true
  2. Set the baremetal property to false on the flavors that virtual instances use:

    $ openstack flavor set FLAVOR_NAME --property baremetal=false
  3. Create a host aggregate called baremetal-hosts:

    $ openstack aggregate create --property baremetal=true baremetal-hosts
  4. Add each Controller node to the baremetal-hosts aggregate:

    $ openstack aggregate add host baremetal-hosts HOSTNAME
    Note

    If you have created a composable role with the NovaIronic service, add all the nodes with this service to the baremetal-hosts aggregate. By default, only the Controller nodes have the NovaIronic service.

  5. Create a host aggregate called virtual-hosts:

    $ openstack aggregate create --property baremetal=false virtual-hosts
  6. Add each Compute node to the virtual-hosts aggregate:

    $ openstack aggregate add host virtual-hosts HOSTNAME
  7. If you did not add the following Compute scheduler filter when you deployed the overcloud, add it now to the existing list under scheduler_default_filters in the /etc/nova/nova.conf file:

    AggregateInstanceExtraSpecsFilter
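
For example, a sketch of the resulting scheduler_default_filters entry in /etc/nova/nova.conf. The other filters shown here are only illustrative; keep the filters that your deployment already uses and append AggregateInstanceExtraSpecsFilter:

    scheduler_default_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateInstanceExtraSpecsFilter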