Configuring the Bare Metal Provisioning service


Red Hat OpenStack Services on OpenShift 18.0

Enabling and configuring the Bare Metal Provisioning service (ironic) for Bare Metal as a Service (BMaaS)

OpenStack Documentation Team

Abstract

Learn how to enable and configure the Bare Metal Provisioning service (ironic) on the control plane of a Red Hat OpenStack Services on OpenShift (RHOSO) deployment to provision and manage physical machines for cloud users. Also learn how to add physical machines as bare-metal nodes and perform resource management tasks for bare-metal instances.

Providing feedback on Red Hat documentation

We appreciate your feedback. Tell us how we can improve the documentation.

To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.

Procedure

  1. Log in to the Red Hat Atlassian Jira.
  2. Click the following link to open a Create Issue page: Create issue
  3. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
  4. Click Create.
  5. Review the details of the bug you created.

You use the Bare Metal Provisioning service (ironic) components to provision and manage physical machines as bare-metal instances for your cloud users. To provision and manage bare-metal instances, the Bare Metal Provisioning service interacts with the following Red Hat OpenStack Services on OpenShift (RHOSO) services:

  • The Compute service (nova) provides scheduling, tenant quotas, and a user-facing API for virtual machine instance management.
  • The Identity service (keystone) provides request authentication and assists the Bare Metal Provisioning service to locate other RHOSO services.
  • The Image service (glance) manages disk and instance images and image metadata.
  • The Networking service (neutron) provides DHCP and network configuration, and provisions the virtual or physical networks that instances connect to on boot.
  • The Object Storage service (swift) exposes temporary image URLs for some drivers.

Bare Metal Provisioning service components

The Bare Metal Provisioning service consists of services named ironic-*. The following services are the core Bare Metal Provisioning services:

Bare Metal Provisioning API (ironic-api)
This service provides the external REST API to users. The API sends application requests to the Bare Metal Provisioning conductor over remote procedure call (RPC).
Bare Metal Provisioning conductor (ironic-conductor)

This service uses drivers to perform the following bare-metal node management tasks:

  • Adds, edits, and deletes bare-metal nodes.
  • Powers bare-metal nodes on and off with IPMI, Redfish, or another vendor-specific protocol.
  • Provisions, deploys, and cleans bare-metal nodes.
Bare Metal Provisioning inspector (ironic-inspector)
This service discovers the hardware properties of a bare-metal node that are required for scheduling bare-metal instances, and creates the Bare Metal Provisioning service ports for the discovered Ethernet MAC addresses.
Bare Metal Provisioning database
This database tracks hardware information and state.
Bare Metal Provisioning agent (ironic-python-agent)
This service runs in a temporary ramdisk to provide ironic-conductor and ironic-inspector services with remote access, in-band hardware control, and hardware introspection.

Provisioning a bare-metal instance

You can configure the Bare Metal Provisioning service to use PXE, iPXE, or virtual media to provision physical machines as bare-metal instances:

  • PXE or iPXE: The Bare Metal Provisioning service provisions the bare-metal instances by using network boot.
  • Virtual media: The Bare Metal Provisioning service provisions the bare-metal instances by creating a temporary ISO image and requesting the Baseboard Management Controller (BMC) to attach and boot to that image.

To enable cloud users to launch bare-metal instances, your Red Hat OpenStack Services on OpenShift (RHOSO) environment must have the required hardware and network configuration.

2.1. Hardware requirements

The hardware requirements for the bare-metal machines that you want to make available to your cloud users for provisioning depend on the operating system. For information about the hardware requirements for Red Hat Enterprise Linux installations, see the Product Documentation for Red Hat Enterprise Linux.

All bare-metal machines that you want to make available to your cloud users for provisioning must have the following capabilities:

  • A NIC to connect to the bare-metal network.
  • The Redfish power management type, which is connected to a network that is reachable from the ironic-conductor container.

    Note

    Do not use the IPMI power management type due to security concerns. Use Redfish as the power management type to optimize the performance of the Bare Metal Provisioning service.

  • If the Bare Metal Provisioning service is configured to use PXE or iPXE for provisioning, then PXE boot must be enabled on the network interface that is attached to the bare-metal network, and disabled on all other network interfaces for that bare-metal node. This is not a requirement if the Bare Metal Provisioning service is configured to use virtual media for provisioning.
  • If the Bare Metal Provisioning service is configured to use virtual media for provisioning, through Redfish or a vendor-specific boot interface on each node, then the bare-metal nodes must be able to reach cluster resources for virtual media disks or other disk images.

2.2. Networking requirements

The cloud operator must create a private bare-metal network for the Bare Metal Provisioning service to use for the following operations:

  • The provisioning and management of the bare-metal nodes that host the bare-metal instances.
  • Cleaning bare-metal nodes when a node is unprovisioned.
  • Project access to the bare-metal nodes.

In order for the Bare Metal Provisioning service to serve PXE boot and DHCP requests, the bare-metal node must be attached either to a port that does not use a VLAN, or to a port that is a VLAN trunk where the native VLAN is the bare-metal network.

The Bare Metal Provisioning service is designed for a trusted tenant environment because the bare-metal nodes have direct access to the control plane network of your Red Hat OpenStack Services on OpenShift (RHOSO) environment.

Cloud users have direct access to the public OpenStack APIs, and to the bare-metal network. A flat bare-metal network can introduce security concerns because cloud users have indirect access to the control plane network. To mitigate this risk, you can configure an isolated bare metal provisioning network for the Bare Metal Provisioning service that does not have access to the control plane.

The bare-metal network must be untagged for provisioning, and must also have access to the Bare Metal Provisioning API.

You must provide access to the bare-metal network for the following:

  • The control plane that hosts the Bare Metal Provisioning service.
  • The NIC from which the bare-metal machine is configured to PXE-boot.

If you want your cloud users to be able to launch bare-metal instances, you must perform the following tasks:

  • Prepare Red Hat OpenShift Container Platform (RHOCP) for bare-metal networks by creating an isolated bare metal provisioning network on the RHOCP cluster.
  • Create the Networking service (neutron) networks that the Bare Metal Provisioning service (ironic) uses for provisioning, cleaning, and rescuing bare-metal nodes.
  • Add the Bare Metal Provisioning service (ironic) to your Red Hat OpenStack Services on OpenShift (RHOSO) control plane.
  • Configure the Bare Metal Provisioning service as required for your environment.

3.1. Prerequisites

  • The RHOSO environment is deployed on a RHOCP cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.

3.2. Preparing RHOCP for bare-metal networks

The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes.

Create an isolated network for the Bare Metal Provisioning service (ironic) that the ironic service pod attaches to. The following procedures create an isolated network named baremetal.

For more information about how to create an isolated network, see Preparing RHOCP for RHOSO networks in Deploying Red Hat OpenStack Services on OpenShift.

Note

The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack (IPv4 and IPv6) is available only on tenant networks. For information about how to configure IPv6 addresses, see the RHOCP Networking guide.

Create a NodeNetworkConfigurationPolicy (nncp) CR to configure the interface for the isolated bare-metal network on each worker node in the Red Hat OpenShift Container Platform (RHOCP) cluster.

Procedure

  1. Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation to configure the interface for the isolated bare-metal network on each worker node in the RHOCP cluster, for example, baremetal-nncp.yaml.
  2. Retrieve the names of the worker nodes in the RHOCP cluster:

    $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"
  3. Discover the network configuration:

    $ oc get nns/<worker_node> -o yaml | more
    • Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
  4. In the nncp CR file, configure the interface for the isolated bare-metal network on each worker node in the RHOCP cluster, and configure the virtual routing and forwarding (VRF) to avoid asymmetric routing. In the following example, the nncp CR configures the baremetal interface for worker node 1, osp-enp6s0-worker-1, to use a bridge on the enp8s0 interface with IPv4 addresses for network isolation:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
     name: osp-enp6s0-worker-1
    spec:
     desiredState:
       interfaces:
       ....
       - description: Ironic bridge
         name: baremetal
         type: linux-bridge
         mtu: 1500
         bridge:
           options:
             stp:
               enabled: false
           port:
           - name: enp8s0
         ipv4:
           address:
           - ip: 172.17.0.10
             prefix-length: 24
           enabled: true
         ipv6:
           enabled: false
       - description: Ironic VRF
         name: ironicvrf
         state: up
         type: vrf
         vrf:
           port:
           - baremetal
           route-table-id: 10
       route-rules:
         config: []
       routes:
         config:
         - destination: 0.0.0.0/0
           metric: 150
           next-hop-address: 172.17.0.1
           next-hop-interface: baremetal
           table-id: 10
         - destination: 172.17.0.0/24
           metric: 150
           next-hop-address: 192.168.122.1
           next-hop-interface: ospbr
  5. Create the nncp CR in the cluster:

    $ oc apply -f baremetal-nncp.yaml
  6. Verify that the nncp CR is created:

    $ oc get nncp -w
    NAME                        STATUS        REASON
    osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
    osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
    osp-enp6s0-worker-1   Available     SuccessfullyConfigured
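
Optionally, you can confirm on a worker node that the baremetal bridge is up and has the address that you configured. The following command is a sketch that assumes you can use the RHOCP node debug shell:

    $ oc debug node/<worker_node> -- chroot /host ip -brief addr show baremetal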

Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.

Procedure

  1. Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation for the bare-metal network to attach the ironic service pod to the network, for example, baremetal-net-attach-def.yaml.
  2. In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for the baremetal network to attach the ironic service deployment pod to the network:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: baremetal
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "baremetal",
          "type": "bridge",
          "master": "baremetal", 1
          "ipam": {              2
            "type": "whereabouts",
            "range": "172.17.0.0/24",
            "range_start": "172.17.0.30", 3
            "range_end": "172.17.0.70"
          }
        }
    1 The node interface name associated with the network, as defined in the nncp CR.
    2 The whereabouts CNI IPAM plugin to assign IPs to the created pods from the range .30 - .70.
    3 The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
  3. Create the NetworkAttachmentDefinition CR in the cluster:

    $ oc apply -f baremetal-net-attach-def.yaml
  4. Verify that the NetworkAttachmentDefinition CR is created:

    $ oc get net-attach-def -n openstack

3.2.3. Preparing RHOCP for baremetal network VIPs

The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.

Procedure

  1. Create an IPAddressPool CR file on your workstation to configure which IPs can be used as VIPs, for example, baremetal-ipaddresspools.yaml.
  2. In the IPAddressPool CR file, configure an IPAddressPool resource on the baremetal network to specify the IP address ranges over which MetalLB has authority:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: baremetal
      namespace: metallb-system
    spec:
      addresses:
        - 172.17.0.80-172.17.0.90 1
      autoAssign: true
      avoidBuggyIPs: false
    1 The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.

    For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.

  3. Create the IPAddressPool CR in the cluster:

    $ oc apply -f baremetal-ipaddresspools.yaml
  4. Verify that the IPAddressPool CR is created:

    $ oc describe -n metallb-system IPAddressPool
  5. Create a L2Advertisement CR file on your workstation to define how the Virtual IPs (VIPs) are announced, for example, baremetal-l2advertisement.yaml.
  6. In the L2Advertisement CR file, configure an L2Advertisement CR to define which node advertises the ironic service to the local network:

    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: baremetal
      namespace: metallb-system
    spec:
      ipAddressPools:
      - baremetal
      interfaces:
      - baremetal 1
    1 The interface where the VIPs requested from the VLAN address pool are announced.

    For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide.

  7. Create the L2Advertisement CR in the cluster:

    $ oc apply -f baremetal-l2advertisement.yaml
  8. Verify that the L2Advertisement CR is created:

    $ oc get -n metallb-system L2Advertisement
    NAME        IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
    baremetal   ["baremetal"]                              ["baremetal"]

3.3. Creating the bare-metal networks

You use the Networking service (neutron) to create the networks that the Bare Metal Provisioning service (ironic) uses for provisioning, cleaning, inspecting, and rescuing bare-metal nodes. The following procedure creates a provisioning network. Repeat the procedure for each Bare Metal Provisioning network that you require. A worked example with sample values follows the procedure.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Create the network over which to provision bare-metal instances:

    $ openstack network create \
      --provider-network-type <network_type> \
      [--provider-segment <vlan_id>] \
      --provider-physical-network <provider_physical_network> \
      --share <network_name>
    • Replace <network_type> with the type of network, either flat or vlan.
    • Optional: If your network type is vlan, specify the --provider-segment option with the VLAN ID.
    • Replace <provider_physical_network> with the name of the physical network over which you implement the virtual network, which is the bridge mapping configured for the OVN service on the control plane.
    • Replace <network_name> with a name for this network.
  3. Create the subnet on the network:

    $ openstack subnet create \
      --network <network_name> \
      --subnet-range <network_cidr> \
      --ip-version 4 \
      --gateway <gateway_ip> \
      --allocation-pool start=<start_ip>,end=<end_ip> \
      --dhcp \
      --dns-nameserver <dns_ip> \
      <subnet_name>
    • Replace <network_name> with the name of the provisioning network that you created in the previous step.
    • Replace <network_cidr> with the CIDR representation of the block of IP addresses that the subnet represents. The block of IP addresses that you specify in the range starting with <start_ip> and ending with <end_ip> must be within the block of IP addresses specified by <network_cidr>.
    • Replace <gateway_ip> with the IP address or host name of the router interface that acts as the gateway for the new subnet. This address must be within the block of IP addresses specified by <network_cidr>, but outside of the block of IP addresses specified by the range that starts with <start_ip> and ends with <end_ip>.
    • Replace <start_ip> with the IP address that denotes the start of the range of IP addresses within the new subnet from which IP addresses are allocated.
    • Replace <end_ip> with the IP address that denotes the end of the range of IP addresses within the new subnet from which IP addresses are allocated.
    • Replace <subnet_name> with a name for the subnet.
    • Replace <dns_ip> with the IP address of the load balancer configured for the DNS service on the control plane.
  4. Create a router for the network and subnet to ensure that the Networking service serves metadata requests:

    $ openstack router create <router_name>
    • Replace <router_name> with a name for the router.
  5. Attach the subnet to the new router to enable the metadata requests from cloud-init to be served and the node to be configured:

    $ openstack router add subnet <router_name> <subnet>
    • Replace <router_name> with the name of your router.
    • Replace <subnet> with the ID or name of the bare-metal subnet that you created in step 3.
  6. Exit the openstackclient pod:

    $ exit
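
For example, the following commands create a flat provisioning network, subnet, and router with sample values from within the openstackclient pod. The network, subnet, and router names, the address ranges, and the DNS address are illustrative; the physical network name must match a key that you define later in the nicMappings of the OVN configuration:

    $ openstack network create --provider-network-type flat \
      --provider-physical-network baremetal --share provisioning
    $ openstack subnet create --network provisioning \
      --subnet-range 172.20.1.0/24 --ip-version 4 \
      --gateway 172.20.1.254 \
      --allocation-pool start=172.20.1.100,end=172.20.1.200 \
      --dhcp --dns-nameserver 192.168.122.10 provisioning-subnet
    $ openstack router create provisioning-router
    $ openstack router add subnet provisioning-router provisioning-subnet
    $ openstack network show provisioning -f value -c id

Record the network ID that the last command returns. You can use it for the cleaning_network, provisioning_network, inspection_network, and rescuing_network options when you configure the ironicConductors template.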

To enable the Bare Metal Provisioning service (ironic) on your Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you must add the ironic service to the control plane and configure it as required.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Add the following cellTemplates configuration to the nova service configuration:

      nova:
        apiOverride:
          route: {}
        template:
          ...
          secret: osp-secret
          cellTemplates:
            cell0:
              cellDatabaseAccount: nova-cell0
              hasAPIAccess: true
            cell1:
              cellDatabaseAccount: nova-cell1
              cellDatabaseInstance: openstack-cell1
              cellMessageBusInstance: rabbitmq-cell1
              hasAPIAccess: true
              novaComputeTemplates:
                compute-ironic: 1
                  computeDriver: ironic.IronicDriver
    1 The name of the Compute service. The name has a limit of 20 characters, and must contain only lowercase alphanumeric characters and the - symbol.
  3. Enable the ironic service and specify the networks to connect to:

    spec:
      ...
      ironic:
        enabled: true
        template:
          rpcTransport: oslo
          databaseInstance: openstack
          ironicAPI:
            replicas: 1
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: ctlplane
                      metallb.universe.tf/allow-shared-ip: ctlplane
                      metallb.universe.tf/loadBalancerIPs: 192.168.122.80
                  spec:
                    type: LoadBalancer
          ironicConductors:
          - replicas: 1
            storageRequest: 10G
            networkAttachments:
            - baremetal 1
            provisionNetwork: baremetal 2
          ironicInspector:
            replicas: 0 3
            networkAttachments:
            - baremetal 4
            inspectionNetwork: baremetal 5
          ironicNeutronAgent:
            replicas: 1
          secret: osp-secret
    1 The name of the NetworkAttachmentDefinition CR you created for your isolated bare-metal network in Preparing RHOCP for bare-metal networks to use for the ironicConductor pods.
    2 The name of the Networking service (neutron) network you created for use as the provisioning network in Creating the bare-metal networks.
    3 You can deploy the Bare Metal Provisioning service without the ironicInspector service. To deploy the service, set the number of replicas to 1.
    4 The name of the NetworkAttachmentDefinition CR you created for your isolated bare-metal network in Preparing RHOCP for bare-metal networks to use for the ironicInspector pod.
    5 The name of the Networking service (neutron) network you created for use as the inspection network in Creating the bare-metal networks. The Ironic Inspector API listens on port 5050.
  4. Specify the networks the Bare Metal Provisioning service uses for provisioning, cleaning, inspection, and rescuing bare-metal nodes:

    spec:
      ...
      ironic:
        ...
          ironicConductors:
          - replicas: 1
            storageRequest: 10G
            networkAttachments:
            - baremetal
            provisionNetwork: baremetal
            customServiceConfig: |
              [neutron]
              cleaning_network = <network_UUID>
              provisioning_network = <network_UUID>
              inspection_network = <network_UUID>
              rescuing_network = <network_UUID>
  5. Configure the OVN mappings:

      ovn:
        template:
          ovnController:
            ...
            nicMappings: 1
              datacentre: ocpbr
              baremetal: baremetal
    1 List of key-value pairs that map the physical network provider to the interface name defined in the NodeNetworkConfigurationPolicy (nncp) CR.
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  8. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.

Verification

  1. Open a remote shell connection to the OpenStackClient pod:

    $ oc rsh -n openstack openstackclient
  2. Confirm that the internal service endpoints are registered with each service:

    $ openstack endpoint list -c 'Service Name' -c Interface -c URL --service ironic
    +--------------+-----------+---------------------------------------------------------------+
    | Service Name | Interface | URL                                                           |
    +--------------+-----------+---------------------------------------------------------------+
    | ironic       | internal  | http://ironic-internal.openstack.svc:6385                     |
    | ironic       | public    | http://ironic-public-openstack.apps.ostest.test.metalkube.org |
    +--------------+-----------+---------------------------------------------------------------+
  3. Exit the openstackclient pod:

    $ exit
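
You can also confirm that the Bare Metal Provisioning service is operational by listing the registered conductors and the enabled drivers from the openstackclient pod. These are standard bare-metal CLI commands; the output depends on your deployment:

    $ openstack baremetal conductor list
    $ openstack baremetal driver list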

3.5. Configuring node event history records

The Bare Metal Provisioning service (ironic) records node event history by default. You can configure how the node event history records are managed.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Add the following configuration options to the customServiceConfig parameter in the ironicConductors template to configure how node event history records are managed:

    spec:
      ...
      ironic:
        enabled: true
        template:
          rpcTransport: oslo
          databaseInstance: openstack
          ironicAPI:
            ...
          ironicConductors:
          - replicas: 1
            storageRequest: 10G
            networkAttachments:
            - baremetal
            provisionNetwork: baremetal
            customServiceConfig: |
              ...
              [conductor]
              node_history_max_entries=<max_entries>
              node_history_cleanup_interval=<clean_interval>
              node_history_cleanup_batch_count=<max_purge>
              node_history_minimum_days=<min_days>
          ...
          secret: osp-secret
    • Optional: Replace <max_entries> with the maximum number of event records that the Bare Metal Provisioning service records. The oldest recorded events are removed when the maximum number of entries is reached. By default, a maximum of 300 events are recorded. The minimum valid value is 0.
    • Optional: Replace <clean_interval> with the interval in seconds between scheduled cleanup of the node event history entries. By default, the cleanup is scheduled every 86400 seconds, which is once daily. Set to 0 to disable node event history cleanup.
    • Optional: Replace <max_purge> with the maximum number of entries to purge during each clean up operation. Defaults to 1000.
    • Optional: Replace <min_days> with the minimum number of days to explicitly keep the database history entries for nodes. Defaults to 0.
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  5. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.
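
After the updated control plane is deployed, you can optionally confirm that node event history is recorded as configured by listing the events for a node from the openstackclient pod. The following command is a sketch that assumes your bare-metal CLI supports the node history API:

    $ openstack baremetal node history list <node>

    • Replace <node> with the name or UUID of a bare-metal node.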

Use one of the following methods to enroll a bare-metal node:

  • Prepare an inventory file with the node details, import the file into the Bare Metal Provisioning service, and make the nodes available.
  • Register a physical machine as a bare-metal node, and then manually add its hardware details and create ports for each of its Ethernet MAC addresses.

4.1. Prerequisites

  • The RHOSO environment includes the Bare Metal Provisioning service. For more information, see Enabling the Bare Metal Provisioning service (ironic).
  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
  • The oc command line tool is installed on the workstation.

You can create an inventory file that defines the details of each bare-metal node. You import the file into the Bare Metal Provisioning service (ironic) to enroll the bare-metal nodes, and then make each node available.

Note

Some drivers might require specific configuration. For more information, see Bare metal drivers.

Procedure

  1. Create an inventory file to define the details of each node, for example, ironic-nodes.yaml. A filled-in sample inventory file follows this procedure.
  2. For each node, define the node name and the address and credentials for the bare-metal driver. For details on the available properties for your enabled driver, see Bare metal drivers.

    nodes:
      - name: <node>
        driver: <driver>
        driver_info:
          <driver>_address: <ip>
          <driver>_username: <user>
          <driver>_password: <password>
          [<property>: <value>]
    • Replace <node> with the name of the node.
    • Replace <driver> with a supported bare-metal driver, for example, redfish.
    • Replace <ip> with the IP address of the Bare Metal controller.
    • Replace <user> with your username.
    • Replace <password> with your password.
    • Optional: Replace <property> with a driver property that you want to configure, and replace <value> with the value of the property. For information on the available properties, see Bare metal drivers.
  3. Define the node properties and ports:

    nodes:
      - name: <node>
        ...
        properties:
          cpus: <cpu_count>
          cpu_arch: <cpu_arch>
          memory_mb: <memory>
          local_gb: <root_disk>
          root_device:
            serial: <serial>
        network_interface: <interface_type>
        ports:
          - address: <mac_address>
    • Replace <cpu_count> with the number of CPUs.
    • Replace <cpu_arch> with the type of architecture of the CPUs.
    • Replace <memory> with the amount of memory in MiB.
    • Replace <root_disk> with the size of the root disk in GiB. Only required when the machine has multiple disks.
    • Replace <serial> with the serial number of the disk that you want to use for deployment.
    • Optional: Include the network_interface property if you want to override the default network type of flat. You can change the network type to one of the following valid values:

      • neutron: Use to provide tenant-defined networking through the Networking service, where tenant networks are separated from each other and from the provisioning and cleaning provider networks. Required to create a provisioning network with IPv6.
      • noop: Use for standalone deployments where network switching is not required.
    • Replace <mac_address> with the MAC address of the NIC used to PXE boot.
  4. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  5. Import the inventory file into the Bare Metal Provisioning service:

    $ openstack baremetal create ironic-nodes.yaml

    The nodes are now in the enroll state.

  6. Wait for the extra network interface port configuration data to populate the Networking service (neutron). This process takes at least 60 seconds.
  7. Set the provisioning state of each node to available:

    $ openstack baremetal node manage <node>
    $ openstack baremetal node provide <node>

    The Bare Metal Provisioning service cleans the node if you enabled node cleaning.

  8. Check that the nodes are enrolled:

    $ openstack baremetal node list

    There might be a delay between enrolling a node and its state being shown.

  9. Exit the openstackclient pod:

    $ exit
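
The following is a filled-in sample inventory file for a single node that uses the redfish driver. All values, including the BMC address, the Redfish system path, the credentials, and the hardware details, are illustrative and must be replaced with the details of your environment:

    nodes:
      - name: node-0
        driver: redfish
        driver_info:
          redfish_address: https://192.0.2.50
          redfish_system_id: /redfish/v1/Systems/1
          redfish_username: admin
          redfish_password: <password>
        properties:
          cpus: 16
          cpu_arch: x86_64
          memory_mb: 65536
          local_gb: 446
          root_device:
            serial: "61866da04f380d001ea4e13c12e36ad6"
        ports:
          - address: 52:54:00:6c:55:3f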

4.3. Enrolling a bare-metal node manually

Register a physical machine as a bare-metal node, then manually add its hardware details and create ports for each of its Ethernet MAC addresses.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Add a new node:

    $ openstack baremetal node create --driver <driver_name> --name <node_name>
    • Replace <driver_name> with the name of the driver, for example, redfish.
    • Replace <node_name> with the name of your new bare-metal node.
  3. Note the UUID assigned to the node when it is created.
  4. Update the node properties to match the hardware specifications on the node:

    $ openstack baremetal node set <node> \
      --property cpus=<cpu> \
      --property memory_mb=<ram> \
      --property local_gb=<disk> \
      --property cpu_arch=<arch>
    • Replace <node> with the ID of the bare metal node.
    • Replace <cpu> with the number of CPUs.
    • Replace <ram> with the RAM in MB.
    • Replace <disk> with the disk size in GB.
    • Replace <arch> with the architecture type.
  5. Optional: Set the network_interface property to override the default network type of flat:

    $ openstack baremetal node set <node> --network-interface <network_interface>
    • Replace <network_interface> with one of the following valid network types:

      • neutron: Use to provide tenant-defined networking through the Networking service, where tenant networks are separated from each other and from the provisioning and cleaning provider networks. Required to create a provisioning network with IPv6.
      • noop: Use for standalone deployments where network switching is not required.
  6. Optional: If you have multiple disks, set the root device hints to inform the deploy ramdisk which disk to use for deployment:

    $ openstack baremetal node set <node> \
      --property root_device='{"<property>": "<value>"}'
    • Replace <node> with the ID of the bare metal node.
    • Replace <property> and <value> with details about the disk that you want to use for deployment, for example, root_device='{"size": 128}'.

      RHOSO supports the following properties:

      • model (String): Device identifier.
      • vendor (String): Device vendor.
      • serial (String): Disk serial number.
      • hctl (String): Host:Channel:Target:Lun for SCSI.
      • size (Integer): Size of the device in GB.
      • wwn (String): Unique storage identifier.
      • wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
      • wwn_vendor_extension (String): Unique vendor storage identifier.
      • rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
      • name (String): The name of the device, for example, /dev/sdb1. Use this property only for devices with persistent names.

        Note

        If you specify more than one property, the device must match all of those properties.

  7. Inform the Bare Metal Provisioning service of the node network card by creating a port with the MAC address of the NIC on the provisioning network:

    $ openstack baremetal port create --node <node_uuid> <mac_address>
    • Replace <node_uuid> with the unique ID of the bare-metal node.
    • Replace <mac_address> with the MAC address of the NIC used to PXE boot.
  8. Validate the configuration of the node:

    $ openstack baremetal node validate <node>
    +------------+--------+---------------------------------------------+
    | Interface  | Result | Reason                                      |
    +------------+--------+---------------------------------------------+
    | boot       | False  | Cannot validate image information for node  |
    |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
    |            |        | because one or more parameters are missing  |
    |            |        | from its instance_info. Missing are:        |
    |            |        | ['ramdisk', 'kernel', 'image_source']       |
    | console    | None   | not supported                               |
    | deploy     | False  | Cannot validate image information for node  |
    |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
    |            |        | because one or more parameters are missing  |
    |            |        | from its instance_info. Missing are:        |
    |            |        | ['ramdisk', 'kernel', 'image_source']       |
    | inspect    | None   | not supported                               |
    | management | True   |                                             |
    | network    | True   |                                             |
    | power      | True   |                                             |
    | raid       | True   |                                             |
    | storage    | True   |                                             |
    +------------+--------+---------------------------------------------+

    The validation output Result indicates the following:

    • False: The interface has failed validation. If the reason provided includes missing instance_info parameters ['ramdisk', 'kernel', 'image_source'], this might be because the Compute service populates those parameters at the beginning of the deployment process, so they have not been set at this point. If you are using a whole-disk image, you might only need to set image_source to pass the validation.
    • True: The interface has passed validation.
    • None: The interface is not supported for your driver.
  9. Exit the openstackclient pod:

    $ exit
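
The management and power interfaces pass validation in step 8 only when the driver_info of the node contains the BMC connection details. If you did not set them when you created the node, you can add them with additional --driver-info properties. The redfish_* keys shown here are the standard connection fields for the redfish driver; the values are illustrative:

    $ openstack baremetal node set <node> \
      --driver-info redfish_address=https://192.0.2.50 \
      --driver-info redfish_system_id=/redfish/v1/Systems/1 \
      --driver-info redfish_username=admin \
      --driver-info redfish_password=<password>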

You can use Redfish virtual media boot to supply a boot image to the Baseboard Management Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. The node can then boot from the virtual drive into the operating system that exists in the image.

Redfish hardware types support booting deploy, rescue, and user images over virtual media. The Bare Metal Provisioning service (ironic) uses kernel and ramdisk images associated with a node to build bootable ISO images for UEFI or BIOS boot modes at the moment of node deployment. The major advantage of virtual media boot is that you can eliminate the TFTP image transfer phase of PXE and use HTTP GET, or other methods, instead.

To launch bare-metal instances with the redfish hardware type over virtual media, set the boot interface of each bare-metal node to redfish-virtual-media and, for UEFI nodes, define the EFI System Partition (ESP) image. Then configure an enrolled node to use Redfish virtual media boot.

Prerequisites

  • The bare-metal node is registered and enrolled.
  • The ironic-python-agent (IPA) deploy images and the instance images are available in the Image service (glance).
  • For UEFI nodes, an EFI System Partition (ESP) image is available in the Image service (glance).

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Set the Bare Metal service boot interface to redfish-virtual-media:

    $ openstack baremetal node set --boot-interface redfish-virtual-media <node_name>
    • Replace <node_name> with the name of the node.
  3. For UEFI nodes, define the EFI System Partition (ESP) image:

    $ openstack baremetal node set --driver-info bootloader=<esp_image> <node>
    • Replace <esp_image> with the image UUID or URL for the ESP image.
    • Replace <node> with the name of the node.
    Note

    For BIOS nodes, do not complete this step.

  4. Create a port on the bare-metal node and associate the port with the MAC address of the NIC on the bare metal node:

    $ openstack baremetal port create --pxe-enabled True --node <node_uuid> <mac_address>
    • Replace <node_uuid> with the UUID of the bare-metal node.
    • Replace <mac_address> with the MAC address of the NIC on the bare-metal node.
  5. Exit the openstackclient pod:

    $ exit

You must create flavors that your cloud users can use to request bare-metal instances. You can specify which bare-metal nodes should be used for bare-metal instances launched with a particular flavor by using a resource class. You can tag bare-metal nodes with resource classes that identify the hardware resources on the node, for example, GPUs. The cloud user can select a flavor with the GPU resource class to create an instance for a vGPU workload. The Compute scheduler uses the resource class to identify suitable host bare-metal nodes for instances.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Retrieve a list of your nodes to identify their UUIDs:

    $ openstack baremetal node list
  3. Tag each bare-metal node with a custom bare-metal resource class:

    $ openstack baremetal node set \
     --resource-class baremetal.<CUSTOM> <node>
    • Replace <CUSTOM> with a string that identifies the purpose of the resource class. For example, set to GPU to create a custom GPU resource class that you can use to tag bare metal nodes that you want to designate for GPU workloads.
    • Replace <node> with the ID of the bare metal node.
  4. Create a flavor for bare-metal instances:

    $ openstack flavor create --id auto \
     --ram <ram_size_mb> --disk <disk_size_gb> \
     --vcpus <no_vcpus> baremetal
    • Replace <ram_size_mb> with the RAM of the bare metal node, in MB.
    • Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB.
    • Replace <no_vcpus> with the number of CPUs on the bare metal node.

      Note

      These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size.

  5. Associate the flavor for bare-metal instances with the custom resource class:

    $ openstack flavor set \
     --property resources:CUSTOM_BAREMETAL_<CUSTOM>=1 \
     baremetal

    To determine the name of a custom resource class that corresponds to a resource class of a bare-metal node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix with CUSTOM_. A worked example follows this procedure.

    Note

    A flavor can request only one instance of a bare-metal resource class.

  6. Set the following flavor properties to prevent the Compute scheduler from using the bare-metal flavor properties to schedule instances:

    $ openstack flavor set \
     --property resources:VCPU=0 \
     --property resources:MEMORY_MB=0 \
     --property resources:DISK_GB=0 baremetal
  7. Verify that the new flavor has the correct values:

    $ openstack flavor list
  8. Exit the openstackclient pod:

    $ exit
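
For example, if you tag nodes with the resource class baremetal.GPU, the corresponding custom resource class property is CUSTOM_BAREMETAL_GPU. The following commands show the complete sequence; the flavor name and the RAM, disk, and vCPU sizes are illustrative:

    $ openstack baremetal node set --resource-class baremetal.GPU <node>
    $ openstack flavor create --id auto --ram 65536 --disk 400 --vcpus 16 baremetal-gpu
    $ openstack flavor set \
     --property resources:CUSTOM_BAREMETAL_GPU=1 \
     --property resources:VCPU=0 \
     --property resources:MEMORY_MB=0 \
     --property resources:DISK_GB=0 baremetal-gpu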

4.6. Bare-metal node provisioning states

A bare-metal node transitions through several provisioning states during its lifetime. API requests and conductor events performed on the node initiate the transitions. There are two categories of provisioning states: "stable" and "in transition".

Use the following table to understand the node provisioning states and the actions you can perform to transition a node from one state to another.

Table 4.1. Provisioning states
State | Category | Description

enroll

Stable

The initial state of each node. For information on enrolling a node, see Adding physical machines as bare metal nodes.

verifying

In transition

The Bare Metal Provisioning service validates that it can manage the node by using the driver_info configuration provided during the node enrollment.

manageable

Stable

The node is transitioned to the manageable state when the Bare Metal Provisioning service has verified that it can manage the node. You can transition the node from the manageable state to one of the following states by using the following commands:

  • openstack baremetal node adopt → adopting → active
  • openstack baremetal node provide → cleaning → available
  • openstack baremetal node clean → cleaning → available
  • openstack baremetal node inspect → inspecting → manageable

You must move a node to the manageable state after it is transitioned to one of the following failed states:

  • adopt failed
  • clean failed
  • inspect failed

Move a node into the manageable state when you need to update the node.

inspecting

In transition

The Bare Metal Provisioning service uses node introspection to update the hardware-derived node properties to reflect the current state of the hardware. The node transitions to manageable for synchronous inspection, and inspect wait for asynchronous inspection. The node transitions to inspect failed if an error occurs.

inspect wait

In transition

The provision state that indicates that an asynchronous inspection is in progress. If the node inspection is successful, the node transitions to the manageable state.

inspect failed

Stable

The provisioning state that indicates that the node inspection failed. You can transition the node from the inspect failed state to one of the following states by using the following commands:

  • openstack baremetal node inspect → inspecting → manageable
  • openstack baremetal node manage → manageable

cleaning

In transition

Nodes in the cleaning state are being scrubbed and reprogrammed into a known configuration. When a node is in the cleaning state, depending on the network management, the conductor performs the following tasks:

  • Out-of-band: The conductor performs the clean step.
  • In-band: The conductor prepares the environment to boot the ramdisk for running the in-band clean steps. The preparation tasks include building the PXE configuration files, and configuring the DHCP.

clean wait

In transition

Nodes in the clean wait state are being scrubbed and reprogrammed into a known configuration. This state is similar to the cleaning state except that in the clean wait state, the conductor is waiting for the ramdisk to boot or the clean step to finish.

You can interrupt the cleaning process of a node in the clean wait state by running openstack baremetal node abort.

available

Stable

After nodes have been successfully preconfigured and cleaned, they are moved into the available state and are ready to be provisioned. You can transition the node from the available state to one of the following states by using the following commands:

  • openstack baremetal node deploy → deploying → active
  • openstack baremetal node manage → manageable

deploying

In transition

Nodes in the deploying state are being prepared for a workload, which involves performing the following tasks:

  • Setting appropriate BIOS options for the node deployment.
  • Partitioning drives and creating file systems.
  • Creating any additional resources that might be required by additional subsystems, such as the node-specific network configuration, and a configuration drive partition.

wait call-back

In transition

Nodes in the wait call-back state are being prepared for a workload. This state is similar to the deploying state except that in the wait call-back state, the conductor is waiting for a task to complete before preparing the node. For example, the following tasks must be completed before the conductor can prepare the node:

  • The ramdisk has booted.
  • The bootloader is installed.
  • The image is written to the disk.

You can interrupt the deployment of a node in the wait call-back state by running openstack baremetal node delete or openstack baremetal node undeploy.

deploy failed

Stable

The provisioning state that indicates that the node deployment failed. You can transition the node from the deploy failed state to one of the following states by using the following commands:

  • openstack baremetal node deploy → deploying → active
  • openstack baremetal node rebuild → deploying → active
  • openstack baremetal node delete → deleting → cleaning → clean wait → cleaning → available
  • openstack baremetal node undeploy → deleting → cleaning → clean wait → cleaning → available

active

Stable

Nodes in the active state have a workload running on them. The Bare Metal Provisioning service might regularly collect out-of-band sensor information, including the power state. You can transition the node from the active state to one of the following states by using the following commands:

  • openstack baremetal node delete → deleting → available
  • openstack baremetal node undeploy → cleaning → available
  • openstack baremetal node rebuild → deploying → active
  • openstack baremetal node rescue → rescuing → rescue

deleting

In transition

When a node is in the deleting state, the Bare Metal Provisioning service disassembles the active workload and removes any configuration and resources it added to the node during the node deployment or rescue. Nodes transition quickly from the deleting state to the cleaning state, and then to the clean wait state.

error

Stable

If a node deletion is unsuccessful, the node is moved into the error state. You can transition the node from the error state to one of the following states by using the following commands:

  • openstack baremetal node delete → deleting → available
  • openstack baremetal node undeploy → cleaning → available

adopting

In transition

You can use the openstack baremetal node adopt command to transition a node with an existing workload directly from the manageable state to the active state without first cleaning and deploying the node. When a node is in the adopting state, the Bare Metal Provisioning service has taken over management of the node with its existing workload.

rescuing

In transition

Nodes in the rescuing state are being prepared to perform the following rescue operations:

  • Setting appropriate BIOS options for the node deployment.
  • Creating any additional resources that might be required by additional subsystems, such as node-specific network configurations.

rescue wait

In transition

Nodes in the rescue wait state are being rescued. This state is similar to the rescuing state except that in the rescue wait state, the conductor is waiting for the ramdisk to boot, or to execute the parts of the rescue that must run in-band on the node, such as setting the password for the user named rescue.

You can interrupt the rescue operation of a node in the rescue wait state by running openstack baremetal node abort.

rescue failed

Stable

The provisioning state that indicates that the node rescue failed. You can transition the node from the rescue failed state to one of the following states by using the following commands:

  • openstack baremetal node rescue → rescuing → rescue
  • openstack baremetal node unrescue → unrescuing → active
  • openstack baremetal node delete → deleting → available

rescue

Stable

Nodes in the rescue state are running a rescue ramdisk. The Bare Metal Provisioning service might regularly collect out-of-band sensor information, including the power state. You can transition the node from the rescue state to one of the following states by using the following commands:

  • openstack baremetal node unrescue → unrescuing → active
  • openstack baremetal node delete → deleting → available

unrescuing

In transition

Nodes in the unrescuing state are being prepared to transition from the rescue state to the active state.

unrescue failed

Stable

The provisioning state that indicates that the node unrescue operation failed. You can transition the node from the unrescue failed state to one of the following states by using the following commands:

  • openstack baremetal node rescue → rescuing → rescue
  • openstack baremetal node unrescue → unrescuing → active
  • openstack baremetal node delete → deleting → available

As a cloud operator you can create and manage resources for bare-metal workloads and enable your cloud users to create bare-metal instances.

You can create the following resources for bare-metal workloads:

  • Bare-metal instances
  • Images for bare-metal instances
  • Virtual network interfaces (VIFs) for bare-metal nodes
  • Port groups

You can perform the following resource management tasks:

  • Manual node cleaning
  • Attach a virtual network interface (VIF) to a bare-metal instance

5.1. Prerequisites

  • The RHOSO environment includes the Bare Metal Provisioning service. For more information, see Enabling the Bare Metal Provisioning service (ironic).
  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
  • The oc command line tool is installed on the workstation.

5.2. Launching bare-metal instances

You can launch a bare-metal instance by using the OpenStack Client CLI.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Create the bare-metal instance:

    $ openstack server create \
     --nic net-id=<network_uuid> \
     --flavor baremetal \
     --image <image_uuid> \
     myBareMetalInstance
    • Replace <network_uuid> with the unique identifier for the network that you created to use with the Bare Metal Provisioning service.
    • Replace <image_uuid> with the unique identifier for the image that has the software profile that your instance requires.
  3. Check the status of the instance:

    $ openstack server list --name myBareMetalInstance
  4. Exit the openstackclient pod:

    $ exit

5.3. Images for launching bare-metal instances

A Red Hat OpenStack Services on OpenShift (RHOSO) environment that includes the Bare Metal Provisioning service (ironic) requires two sets of images:

  • Deploy images: The deploy images are the agent.ramdisk and agent.kernel images that the Bare Metal Provisioning agent (ironic-python-agent) requires to boot the RAM disk over the network and copy the user image to the disk.
  • User images: The images the cloud user uses to provision their bare-metal instances. The user image consists of a kernel image, a ramdisk image, and a main image. The main image is either a root partition, or a whole-disk image:

    • Whole-disk image: An image that contains the partition table and boot loader.
    • Root partition image: Contains only the root partition of the operating system.

Compatible whole-disk RHEL guest images should work without modification. To create your own custom disk image, see Creating operating system images for instances in Performing storage operations.
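
For example, you can upload the deploy images and a whole-disk user image to the Image service (glance) from the openstackclient pod. The image and file names are illustrative:

    $ openstack image create deploy-kernel --public \
      --disk-format aki --container-format aki --file agent.kernel
    $ openstack image create deploy-ramdisk --public \
      --disk-format ari --container-format ari --file agent.ramdisk
    $ openstack image create rhel-9-whole-disk --public \
      --disk-format qcow2 --container-format bare --file rhel-9-whole-disk.qcow2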

You can boot a bare-metal instance from a RAM disk or an ISO image if you want to boot an instance with PXE, iPXE, or Virtual Media, and use the instance memory for local storage. This is useful for advanced scientific and ephemeral workloads where writing an image to the local storage is not required or desired.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Specify ramdisk as the deploy interface for the bare-metal node that boots from an ISO image:

    $ openstack baremetal node set <node_UUID> --deploy-interface ramdisk
    Tip

    You can configure the deploy interface when you create the bare-metal node by adding --deploy-interface ramdisk to the openstack baremetal node create command. For information on how to create a bare-metal node, see Enrolling a bare-metal node manually.

  3. Update the bare-metal node to boot an ISO image:

    $ openstack baremetal node set <node_UUID> \
        --instance-info boot_iso=<boot_iso_url>
    • Replace <node_UUID> with the UUID of the bare-metal node that you want to boot from an ISO image.
    • Replace <boot_iso_url> with the URL of the boot ISO file. You can specify the boot ISO file URL by using one of the following methods:

      • HTTP or HTTPS URL
      • File path URL
      • Image service (glance) object UUID
  4. Deploy the bare-metal node as an ISO image:

    $ openstack baremetal node deploy <node_UUID>
  5. Exit the openstackclient pod:

    $ exit
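
If you want to reference the boot ISO by its Image service (glance) object UUID, upload the ISO to the Image service first and use the returned ID as the boot_iso value. The image and file names are illustrative:

    $ openstack image create boot-iso --disk-format iso \
      --container-format bare --file ./boot.iso -f value -c id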

Cloud users can attach their bare-metal instances to the network interfaces you create for the bare-metal workloads. You must create the virtual network interfaces (VIFs) for the cloud user to select for attachment.

The Bare Metal Provisioning service (ironic) uses the Networking service (neutron) to manage the attachment state of the virtual network interfaces (VIFs). A VIF is a Networking service port, referenced by its port ID, which is a UUID value. A VIF might be available on only a limited number of physical networks, depending on the configuration and operational constraints of the cloud.

The Bare Metal Provisioning service can also attach the bare-metal instance to a separate provider network to improve the overall operational security.

Each VIF must be attached to a port or port group, therefore the maximum number of VIFs is determined by the number of configured and available ports represented in the Bare Metal Provisioning service.

The network interface is one of the driver interfaces that manages the network switching for bare-metal instances. The type of network interface you create influences the operation of your bare-metal workloads. The following network interfaces are available to use with the Bare Metal Provisioning service:

  • noop: Used for standalone deployments, and does not perform any network switching.
  • flat: Places all nodes into a single provider network that is pre-configured on the Networking service and physical equipment. Nodes remain physically connected to this network during their entire life cycle. The supplied VIF attachment record is updated with new DHCP records as needed. When using this network interface, the VIF needs to be created on the same network that the bare-metal node is physically attached to.
  • neutron: Provides tenant-defined networking through the Networking service, separating tenant networks from each other and from the provisioning and cleaning provider networks. Nodes move between these networks during their life cycle. This interface requires Networking service support for the switches attached to the bare-metal instances so they can be programmed. This interface requires the ML2 plugin OVN mechanism driver or other SDN integrations to facilitate port configuration on the network. Use the neutron interface when your environment uses IPv6.
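
For example, to select the neutron network interface for an existing bare-metal node, you can run a command similar to the following; the node name node-01 is illustrative:

$ openstack baremetal node set node-01 --network-interface neutron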

By default, when provisioning, the Bare Metal Provisioning service (ironic) attempts to attach all PXE-enabled ports to the provisioning network. If the neutron.add_all_ports option is enabled, the Bare Metal Provisioning service attempts to bind all ports to the required service network, not only the Bare Metal Provisioning service ports that have pxe_enabled set to True.

After the bare-metal nodes are provisioned, and before the bare-metal nodes are moved to the ACTIVE provisioning state, the previously attached ports are unbound. The process for unbinding is dependent on the network interface:

  • flat: All the requested VIFs with all binding configurations in all states are unbound.
  • neutron: The VIFs that the cloud user requested are attached to the bare-metal node for the first time, because the VIFs that the Bare Metal Provisioning service created for provisioning are deleted during the provisioning process.

The same flow and logic applies to the cleaning, service, and rescue processes.

Use the Networking service (neutron) to create the port that serves as the virtual network interface (VIF). If you are using the neutron network interface, then you must also create a physical connection to the underlying physical network by creating a Bare Metal Provisioning service (ironic) port with a binding profile. The binding profile is required by the Networking service’s ML2 mechanism driver when a VIF is attached to a bare-metal instance. The binding profile includes the VNIC_BAREMETAL port type, the bare-metal node UUID, and local link connection information that identifies the tenant network that the ML2 mechanism driver must attach to the physical bare-metal port.

The binding profile information is populated through the introspection process by using LLDP data that is broadcast from the switches, therefore the switches must have LLDP enabled. You need to manually set or update the binding profile when there is a physical networking change, for example, when a bare-metal port’s cable has been moved to a different port on a switch, or the switch has been replaced.

Note

Decoding LLDP data is performed as a best-effort action. Differences between switch vendors, or changes in switch vendor firmware, might impact field decoding.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Create the virtual network interface (VIF):

    $ openstack port create --network <network> <name>
  3. If you are using the neutron network interface, then create a Bare Metal Provisioning service port with the binding profile information:

    $ openstack baremetal port create <physical_mac_address> --node <node_uuid> \
         --local-link-connection switch_id=<switch_mac_address> \
         --local-link-connection switch_info=<switch_hostname> \
         --local-link-connection port_id=<switch_port_for_connection> \
         --pxe-enabled true \
         --physical-network <phys_net>
    • Replace <switch_mac_address> with the MAC address or OpenFlow-based datapath_id of the switch.
    • Replace <switch_hostname> with the hostname of the switch.
    • Replace <switch_port_for_connection> with the port ID on the switch, for example, Gig0/1, or rep0-0.
    • Replace <phys_net> with the name of the physical network you want to associate with the bare-metal port. The Bare Metal Provisioning service uses the physical network to map the Networking service virtual ports to physical ports and port groups. If you do not set a physical network, any VIF can be mapped to the port when no bare-metal port with a suitable physical network assignment exists.
  4. Exit the openstackclient pod:

    $ exit
Note

Port group functionality for bare-metal nodes is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should be used only for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

Port groups (bonds) provide a method to aggregate multiple network interfaces into a single "bonded" interface. Port group configuration always takes precedence over an individual port configuration. During interface attachment, port groups have a higher priority than the ports, so they are used first. Currently, it is not possible to specify preference for port or port group in an interface attachment request. If a port group is available, the interface attachment will use it. Port groups that do not have any ports are ignored.

If a port group has a physical network, then all the ports in that port group must have the same physical network. The Bare Metal Provisioning service uses configdrive to support configuration of port groups in the instances.

Note

Bare Metal Provisioning service API version 1.26 and later supports port group configuration.

To configure port groups in a bare metal deployment, you must configure the port groups on the switches manually. You must ensure that the mode and properties on the switch correspond to the mode and properties on the bare metal side as the naming can vary on the switch.

Note

You cannot use port groups for provisioning and cleaning if you need to boot a deployment using iPXE.

With port group fallback, all the ports in a port group can fallback to individual switch ports when a connection fails. Based on whether a switch supports port group fallback or not, you can use the --support-standalone-ports and --unsupport-standalone-ports options.
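
For example, to mark an existing port group as supporting fallback to standalone ports, you can run a command similar to the following; the port group name bond0-group is illustrative:

$ openstack baremetal port group set bond0-group --support-standalone-ports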

5.6.1. Prerequisites

  • The RHOSO environment includes the Bare Metal Provisioning service. For more information, see Enabling the Bare Metal Provisioning service (ironic).
  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
  • The oc command line tool is installed on the workstation.

Create a port group to aggregate multiple network interfaces into a single bonded interface.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Create a port group:

    $ openstack baremetal port group create \
     --node <node_uuid> --name <group_name> \
     [--address <mac_address>] [--mode <mode>] \
     --property miimon=100 --property xmit_hash_policy="layer2+3" \
     [--support-standalone-ports]
    • Replace <node_uuid> with the UUID of the node that this port group belongs to.
    • Replace <group_name> with the name for this port group.
    • Optional: Replace <mac_address> with the MAC address for the port group. If you do not specify an address, the deployed instance port group address is the same as the Networking service port. If you do not attach the Networking service port, the port group configuration fails.
    • Optional: Replace <mode> with mode of the port group.
    • Specify if the group supports fallback to standalone ports.
    Note

    You must configure port groups manually in standalone mode either in the image or by generating the configdrive and adding it to the node’s instance_info. Ensure that you have cloud-init version 0.7.7 or later for the port group configuration to work.

  3. Associate a port with a port group:

    • During port creation:

      $ openstack baremetal port create --node <node_uuid> --address <mac_address> --port-group <group_name>
    • During port update:

      $ openstack baremetal port set <port_uuid> --port-group <group_uuid>
  4. Boot an instance by providing an image that has cloud-init or supports bonding.

    To check if the port group is configured properly, run the following command:

    # cat /proc/net/bonding/bondX

    Here, X is a number that cloud-init generates automatically for each configured port group, starting at 0 and incrementing by one for each additional port group.

  5. Exit the openstackclient pod:

    $ exit

5.7. Cleaning nodes manually

The Bare Metal Provisioning service (ironic) cleans nodes automatically when they are unprovisioned to prepare them for provisioning. You can perform manual cleaning on specific nodes as required. Node cleaning has two modes:

  • Metadata only clean: Removes partitions from all disks on the node. The metadata only mode of cleaning is faster than a full clean, but less secure because it erases only partition tables. Use this mode only in trusted tenant environments.
  • Full clean: Removes all data from all disks, using either ATA secure erase or by shredding. A full clean can take several hours to complete.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Check the current state of the node:

    $ openstack baremetal node show \
     -f value -c provision_state <node>
    • Replace <node> with the name or UUID of the node to clean.
  3. If the node is not in the manageable state, then set it to manageable:

    $ openstack baremetal node manage <node>
  4. Clean the node:

    $ openstack baremetal node clean <node> \
      --clean-steps '[{"interface": "deploy", "step": "<clean_mode>"}]'
    • Replace <node> with the name or UUID of the node to clean.
    • Replace <clean_mode> with the type of cleaning to perform on the node:

      • erase_devices: Performs a full clean.
      • erase_devices_metadata: Performs a metadata only clean.
  5. Wait for the clean to complete, then check the status of the node:

    • manageable: The clean was successful, and the node is ready to provision.
    • clean failed: The clean was unsuccessful. Inspect the last_error field for the cause of failure.
  6. Exit the openstackclient pod:

    $ exit
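
The following sequence illustrates a metadata only clean of a node named node-01; the node name is illustrative:

$ openstack baremetal node manage node-01
$ openstack baremetal node clean node-01 \
  --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'
$ openstack baremetal node show node-01 -f value -c provision_state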

To attach a bare-metal instance to the bare-metal network interface, the cloud user can use the Compute service (nova) or the Bare Metal Provisioning service (ironic).

  • Compute service: Cloud users use the openstack server add network command. For more information, see Attaching a network to an instance.

    Note

  • When using the Compute service, you must explicitly declare the port when creating the instance, as shown in the example after this list. When the Compute service makes a request to the Bare Metal Provisioning service to create an instance, the Compute service attempts to record all the VIFs that the user requested to be attached in the Bare Metal Provisioning service to generate the metadata.
  • You cannot specify which physical port to attach a VIF to when using the Compute service. If you want to explicitly declare which port to map to, use the Bare Metal Provisioning service to create the attachment instead.
  • Bare Metal Provisioning service: Cloud users use the openstack baremetal node vif attach command to attach a VIF to a bare-metal instance. For more information about virtual network interfaces (VIFs), see Bare Metal Provisioning service virtual network interfaces (VIFs).
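
For example, when you use the Compute service, a cloud user can declare the port explicitly at instance creation with a command similar to the following; the flavor, image, port, and instance name values are placeholders:

$ openstack server create --flavor <flavor> --image <image> \
  --nic port-id=<port_uuid> <instance_name>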

The following procedure uses the Bare Metal Provisioning service to attach a bare-metal instance to a network. The Bare Metal Provisioning service creates the VIF attachment by using the UUID of the port that you created with the Networking service.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Retrieve the UUID of the bare-metal instance you want to attach the VIF to:

    $ openstack server list
  3. Retrieve the UUID of the VIF you want to attach to your node:

    $ openstack port list
  4. Optional: Retrieve the UUID of the bare-metal port you want to map the VIF to:

    $ openstack baremetal port list
  5. Attach the VIF to your bare-metal instance:

    $ openstack baremetal node vif attach [--port-uuid <port_uuid>] \
      <node> <vif_id>
    • Optional: Replace <port_uuid> with the UUID of the bare-metal port to attach the VIF to.
    • Replace <node> with the name or UUID of the bare-metal instance you want to attach the VIF to.
    • Replace <vif_id> with the name or UUID of the VIF to attach to the bare-metal instance.
  6. Exit the openstackclient pod:

    $ exit

When a cloud user requests that a virtual network interface (VIF) is attached to their bare-metal instance by using the openstack baremetal node vif attach command without a declared port or port group preference, the Bare Metal Provisioning service (ironic) selects a suitable unattached port or port group by evaluating the following criteria in order:

  1. The port or port group either does not have a physical network, or has a physical network that matches one of the VIF’s available physical networks.
  2. Prefer ports and port groups that have a physical network to ports and port groups that do not have a physical network.
  3. Prefer port groups to ports.
  4. Prefer ports with PXE enabled.

When the Bare Metal Provisioning service attaches any VIF to a bare-metal instance, it explicitly sets the MAC address for the physical port to which the VIF is bound. If a node is already in an ACTIVE state, then the Networking service (neutron) updates the VIF attachment.

When the Bare Metal Provisioning service unbinds the VIF, it makes a request to the Networking service to reset the assigned MAC address to avoid conflicts with the Networking service’s unique hardware MAC address requirement.

The Bare Metal Provisioning service has an API that you can use to manage the mapping between virtual network interfaces, for example the interfaces in the Networking service (neutron), and your physical interfaces (NICs). You can configure these interfaces for each bare-metal node to set the virtual network interface (VIF) to physical network interface (PIF) mapping logic.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. List the VIF IDs that are connected to the bare-metal node:

    $ openstack baremetal node vif list <node>
    +--------------------------------------+
    | ID                                   |
    +--------------------------------------+
    | 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16 |
    +--------------------------------------+
    • Replace <node> with the name or UUID of the bare-metal node.
  3. After the VIF is attached, the Bare Metal Provisioning service updates the virtual port in the Networking service with the MAC address of the physical port. Check this port address:

    $ openstack port show 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16 -c mac_address -c fixed_ips
    +-------------+-----------------------------------------------------------------------------+
    | Field       | Value                                                                       |
    +-------------+-----------------------------------------------------------------------------+
    | fixed_ips   | ip_address='192.168.24.9', subnet_id='1d11c677-5946-4733-87c3-23a9e06077aa' |
    | mac_address | 00:2d:28:2f:8d:95                                                           |
    +-------------+-----------------------------------------------------------------------------+
  4. Create a new port on the network where you created the bare-metal node:

    $ openstack port create --network baremetal --fixed-ip ip-address=192.168.24.24 <port_name>
  5. Remove the port from the bare-metal instance it was attached to:

    $ openstack server remove port <instance_name> 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16
  6. Check that the IP address no longer exists on the list:

    $ openstack server list
  7. Check if there are VIFs attached to the node:

    $ openstack baremetal node vif list <node>
    $ openstack port list
  8. Add the newly created port:

    $ openstack server add port <instance_name> <port_name>
  9. Verify that the new IP address shows the new port:

    $ openstack server list
  10. Check if the VIF ID is the UUID of the new port:

    $ openstack baremetal node vif list <node>
    +--------------------------------------+
    | ID                                   |
    +--------------------------------------+
    | 6181c089-7e33-4f1c-b8fe-2523ff431ffc |
    +--------------------------------------+
  11. Check if the Networking service port MAC address is updated and matches one of the Bare Metal Provisioning service ports:

    $ openstack port show 6181c089-7e33-4f1c-b8fe-2523ff431ffc -c mac_address -c fixed_ips
    +-------------+------------------------------------------------------------------------------+
    | Field       | Value                                                                        |
    +-------------+------------------------------------------------------------------------------+
    | fixed_ips   | ip_address='192.168.24.24', subnet_id='1d11c677-5946-4733-87c3-23a9e06077aa' |
    | mac_address | 00:2d:28:2f:8d:95                                                            |
    +-------------+------------------------------------------------------------------------------+
  12. Reboot the bare-metal node so that it recognizes the new IP address:

    $ openstack server reboot overcloud-baremetal-0

    After you detach or attach interfaces, the bare-metal OS removes, adds, or modifies the network interfaces that have changed. When you replace a port, a DHCP request obtains the new IP address, but this might take some time because the old DHCP lease is still valid. To initiate these changes immediately, reboot the bare-metal node.

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

You can enable tenant-defined networking on the cloud, with tenant networks isolated from each other and from the provisioning and cleaning provider networks. To enable tenant-defined networking, you must use the Networking Generic Switch ML2 plugin to configure the physical network switches attached to the bare-metal nodes on the Networking service.

You can configure multiple physical network switches. You must configure and add each switch to the control plane individually. To configure the switches on the control plane, perform the following tasks:

  1. Create a generic switch configuration file.
  2. Create a Secret custom resource (CR) that contains the generic switch configuration file.
  3. Mount the generic switch configuration file on the control plane through the neutron service.

6.1. Limitations

  • Routed spine-leaf networks are not supported.
  • Static provisioning network interfaces are not supported.
  • Contact Red Hat Support if you need to use a networking-generic-switch plugin with port groups, such as bonded ports or port channels.

6.2. Prerequisites

  • A user account with privileges to SSH into the switch by using the management IP address, and to execute sudo and configuration commands to pre-configure the switch. For more information about how to authenticate the user account for vendor-specific switches and what switch pre-configuration is required, see Preparing vendor-specific switches.
  • Inter-switch links must be pre-configured as VLAN trunk ports.
  • Ports for workloads must be in Layer-2 mode.

6.3. Preparing vendor-specific switches

The Networking Generic Switch driver uses the ngs_trunk_ports configuration option to tag switch ports as permitted when creating and deleting attachments. You might need to perform additional trunk configuration.

Dell Force10 switch running OS10 (netmiko_dell_os10)

If the SSH server is not already enabled, use the following command to enable it:

$ ip ssh server enable

If password authentication is not already enabled, use the following command to enable it:

$ ip ssh server password-authentication

Switches running SONiC

Links for connected physical hosts must be in Layer-2 mode. Use the following commands to set the host to Layer-2 mode:

$ sudo config interface ip remove $INTERFACE $IP_ADDRESS/$CIDR
$ sudo config switchport mode access $INTERFACE

6.4. Configuring the physical switches

Create a configuration file that configures the physical network switches.

Procedure

  1. Create a configuration file for the physical switches named 03-ml2-genericswitch.conf.
  2. Specify the location of the session log file that captures the SSH session commands and responses:

    [ngs]
    session_log_file = /var/log/neutron/ngs.log
  3. Add VLAN to the list of supported tenant network types:

    [ml2]
    tenant_network_types = geneve,vlan
  4. Use a comma-separated list to map the physical networks to the segmentation ranges for the tenant networks:

    [ml2_type_vlan]
    network_vlan_ranges = <network_name>:<range_start>:<range_end>,<network_name>:<range_start>:<range_end>
    • Replace <network_name> with the name of the physical network.
    • Replace <range_start> with the VLAN ID for the start of the VLAN range.
    • Replace <range_end> with the VLAN ID for the end of the VLAN range.
    Tip

    You can apply multiple VLAN ranges to a single physical network by repeating the physical network name multiple times.

  5. Configure each physical network switch with the type of switch device and details of how to connect to the switch device. The parameters required depend on the switch device.

    [genericswitch:<switch_name>]
    device_type = <device_type>
    ngs_mac_address = <mac_address>
    <parameter> = <parameter_value>
    • Replace <switch_name> with the name of the physical network switch.
    • Replace <device_type> with the networking-generic-switch driver to use for the device. For example:

      • For switches that run SONiC, set to netmiko_sonic.
      • For the Cisco Nexus switch, set to netmiko_cisco_nxos.
      • For the Dell Force10 switch running OS10, set to netmiko_dell_os10.
    • Replace <mac_address> with the MAC address of the switch device.
    • Optional: Replace <parameter> and <parameter_value> with any other configurations required for the switch device. For more information on the available configuration options, see Physical network switch configuration options.
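
For reference, a complete 03-ml2-genericswitch.conf file that combines these sections might look similar to the following example. The switch name, device type, MAC address, IP address, credentials, physical network name, and VLAN range are illustrative values only:

[ngs]
session_log_file = /var/log/neutron/ngs.log

[ml2]
tenant_network_types = geneve,vlan

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199

[genericswitch:leaf-switch-01]
device_type = netmiko_cisco_nxos
ngs_mac_address = aa:bb:cc:dd:ee:ff
ip = 192.168.122.10
username = admin
password = <switch_password>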

Use the following parameters as required to configure each physical network switch.


device_type

(Mandatory) The networking-generic-switch driver to use for the device, for example, netmiko_cisco_ios.

ngs_mac_address

(Mandatory) The MAC address of the switch bridge that manages the switch device. The MAC address is used to match the switch_id field in the local_link_connection information that is set on the bare-metal node port. If the MAC address is not set, the switch is selected by either the local_link_connection or switch_info configuration.

ngs_allowed_ports

A comma-separated list of allowed ports for the switch. If not set, all ports are allowed.

ip

The management IP address that connects to the SSH server on the switch and enables switch management.

username

The username for the switch device.

password

The password that authenticates access to the switch device when not using key-based authentication.

use_keys

Set use_keys to "True" if the switch requires a key.

key_file

The path to the private key file that authenticates access to the switch device, for example, /etc/neutron/<key_file_name>. Required if use_keys is set to "True".

secret

The secret password required on some switch chassis and in specific configurations.

ngs_port_default_vlan

The default VLAN to revert port configurations to when ports are detached.

ngs_trunk_ports

The list of trunk ports that you must configure with VLAN networks, so that a tenant VLAN is available on the network. This setting is referenced when any network is created or removed from the environment and can also be required by the network architecture and switch configuration.

ngs_physical_networks

A comma-separated list of physical networks that are available on this switch device. This setting is optional and is useful when you have distinct physical networks in your Networking service (neutron) configuration.

You must create a Secret custom resource (CR) that includes the 03-ml2-genericswitch.conf configuration file for the physical network switches. If you are using key-based authentication, then also include the authentication keys in the Secret CR.

Note

If you need to update the Secret CR after you deployed the control plane, then you must create a new Secret CR and update the control plane with the new Secret CR. Updating the existing Secret CR does not automatically update or restart the neutron service on the control plane.

Procedure

  • Create the Secret CR for the physical network switches and apply it to the cluster:

    $ oc create secret generic neutron-switch-config \
    --save-config --dry-run=client \
    --from-file=03-ml2-genericswitch.conf \
    [--from-file=<key_file_name>] -n openstack \
    -o yaml | oc apply -f -
    • Optional: Replace <key_file_name> with the name and location of your SSH private key file. The switch needs to be configured with the corresponding public key file.
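
Optionally, you can verify that the Secret CR exists before you update the control plane:

$ oc get secret neutron-switch-config -n openstack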

To add the switch configuration to the control plane, you mount both the switch 03-ml2-genericswitch.conf configuration file and the Secret custom resource (CR) through the neutron service configuration in the OpenStackControlPlane CR.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, on your workstation.
  2. Add the following configuration to the ironic service to change the network interface to neutron:

    spec:
      ...
      ironic:
        template:
          customServiceConfig: |
            [DEFAULT]
            default_network_interface=neutron
  3. Add the generic switch as an ML2 mechanism driver to the neutron service specification:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      secret: osp-secret
      ...
      neutron:
      ...
        template:
          ml2MechanismDrivers:
          - genericswitch
          - ovn
  4. Add the extraMounts parameter to the neutron service specification to mount the configuration of the physical network switches:

      neutron:
      ...
        template:
          ...
          extraMounts:
          - name: switchConf
            extraVol:
            - volumes:
              - name: neutron-switch-config
                secret:
                  secretName: neutron-switch-config
              mounts:
              - name: neutron-switch-config
                mountPath: /etc/neutron/neutron.conf.d/03-ml2-genericswitch.conf
                subPath: 03-ml2-genericswitch.conf
                readOnly: true
  5. If you must assign authentication keys to the physical network switches, then add the private key files to the mounts:

              mounts:
              - name: neutron-switch-config
                mountPath: /etc/neutron/neutron.conf.d/03-ml2-genericswitch.conf
                subPath: 03-ml2-genericswitch.conf
                readOnly: true
              - name: neutron-switch-config
                mountPath: /etc/neutron/<key_file_name>
                subPath: <key_file_name>
                readOnly: true

    The mountPath value must match the path to the key file that is defined for the switch device in the 03-ml2-genericswitch.conf file.

  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
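
Optionally, you can confirm that the control plane resource is being reconciled after the update; the resource name matches the metadata.name value in your OpenStackControlPlane CR:

$ oc get openstackcontrolplane openstack-control-plane -n openstack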

Chapter 7. Troubleshooting the Bare Metal Provisioning service (ironic)

Use the following procedures to diagnose issues in a Red Hat OpenStack Services on OpenShift (RHOSO) environment that includes the Bare Metal Provisioning service (ironic).

7.1. Querying node event history records

You can query the node event history records to identify issues with bare-metal nodes when an operation fails.

Procedure

  1. Open a remote shell connection to the OpenStackClient pod:

    $ oc rsh -n openstack openstackclient
  2. View the event history for a particular node:

    $ openstack baremetal node history list <node_id>

    This command returns a list of the error events and node state transitions that occurred on the node. Each event is identified with an event UUID.

  3. View the details of a particular event that occurred on the node:

    $ openstack baremetal node history get <node_id> <event_uuid>
  4. Exit the openstackclient pod:

    $ exit

Chapter 8. Bare metal drivers

You can configure bare metal nodes to use one of the drivers that are enabled in the Bare Metal Provisioning service. Each driver includes a provisioning method and a power management type. Some drivers require additional configuration. Each driver described in this section uses PXE for provisioning. Drivers are listed by their power management type.

You can add drivers by configuring the enabled_hardware_types parameter in the customServiceConfig of the ironic service in your OpenStackControlPlane CR. By default, ipmi and redfish are enabled.

For the full list of supported plug-ins and drivers, see Component, Plug-In, and Driver Support in Red Hat OpenStack Platform.

8.1. Intelligent Platform Management Interface (IPMI)

IPMI is an interface that provides out-of-band remote management features, including power management and server monitoring. To use this power management type, all Bare Metal Provisioning service nodes require an IPMI that is connected to the shared Bare Metal network. The IPMI power manager driver uses the ipmitool utility to remotely manage hardware. You can use the following driver_info properties to configure the IPMI power manager driver for a node:

Table 8.1. IPMI driver_info properties. Each entry lists the property, its description, and the equivalent ipmitool option.

ipmi_address

(Mandatory) The IP address or hostname of the node.

-H

ipmi_username

The IPMI user name.

-U

ipmi_password

The IPMI password. The password is written to a temporary file. You pass the filename to the ipmitool by using the -f option.

-f

ipmi_hex_kg_key

The hexadecimal Kg key for IPMIv2 authentication.

-y

ipmi_port

The remote IPMI RMCP port.

-p

ipmi_priv_level

IPMI privilege level. Set to one of the following valid values:

  • ADMINISTRATOR (default)
  • CALLBACK
  • OPERATOR
  • USER

-L

ipmi_protocol_version

The version of the IPMI protocol. Set to one of the following valid values:

  • 1.5 for lan
  • 2.0 for lanplus (default)

-I

ipmi_bridging

The type of bridging. Use with nested chassis management controllers (CMCs). Set to one of the following valid values:

  • single
  • dual
  • no (default)

n/a

ipmi_target_channel

Destination channel for a bridged request. Required only if ipmi_bridging is set to single or dual.

-b

ipmi_target_address

Destination address for a bridged request. Required only if ipmi_bridging is set to single or dual.

-t

ipmi_transit_channel

Transit channel for a bridged request. Required only if ipmi_bridging is set to dual.

-B

ipmi_transit_address

Transit address for bridged request. Required only if ipmi_bridging is set to dual.

-T

ipmi_local_address

Local IPMB address for bridged requests. Use only if ipmi_bridging is set to single or dual.

-m

ipmi_force_boot_device

Set to true if you want the Bare Metal Provisioning service to specify the boot device to the BMC each time the server is powered on. Use this option when the BMC cannot remember the selected boot device across power cycles. Disabled by default.

n/a

ipmi_disable_boot_timeout

Set to false if you do not want to send a raw IPMI command that disables the 60 second timeout for booting on the node.

n/a

ipmi_cipher_suite

The IPMI cipher suite version to use on the node. Set to one of the following valid values:

  • 3 for AES-128 with SHA1
  • 17 for AES-128 with SHA256

n/a
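
For example, to enroll a node that uses the ipmi driver with the mandatory address property and basic credentials, you can run a command similar to the following; the address, credentials, and node name are illustrative values:

$ openstack baremetal node create --driver ipmi \
  --driver-info ipmi_address=192.168.24.50 \
  --driver-info ipmi_username=admin \
  --driver-info ipmi_password=<password> \
  --name baremetal-node-01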

The following support is provided for additional Bare Metal Provisioning service (ironic) interfaces:

  • The Console interface is available for Serial over LAN (IPMI SOL) support. End console access through the Compute service (nova) is not available.
  • The BIOS Settings management interface is not available for IPMI users because of limited IPMI support.
  • The RAID interface available for IPMI drivers is an agent software RAID interface. This interface is only supported under the support exception process.

8.2. Redfish

Redfish is a standard RESTful API for IT infrastructure developed by the Distributed Management Task Force (DMTF). You can use the following driver_info properties to configure the Bare Metal Provisioning service (ironic) connection to Redfish:

Table 8.2. Redfish driver_info properties

redfish_address

(Mandatory) The IP address of the Redfish controller. The address must include the authority portion of the URL. If you do not include the scheme it defaults to https.

redfish_system_id

The canonical path to the system resource the Redfish driver interacts with. The path must include the root service, version, and the unique path to the system within the same authority as the redfish_address property. For example: /redfish/v1/Systems/CX34R87. This property is only required if the target BMC manages more than one resource.

redfish_username

The Redfish username.

redfish_password

The Redfish password.

redfish_verify_ca

Either a Boolean value, a path to a CA_BUNDLE file, or a directory with certificates of trusted CAs. If you set this value to True, the driver verifies the host certificates. If you set this value to False, the driver does not verify the SSL certificate. If you set this value to a path, the driver uses the specified certificate or one of the certificates in the directory. The default is True.

redfish_auth_type

The Redfish HTTP client authentication method. Set to one of the following valid values:

  • basic
  • session (recommended)
  • auto (default) - Uses the session authentication method when available, and the basic authentication method when the session method is not available.
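
For example, to configure an existing node to use the redfish driver, you can run a command similar to the following; the address and credentials are illustrative values, and the system path shown is the example path from the redfish_system_id description:

$ openstack baremetal node set <node> --driver redfish \
  --driver-info redfish_address=https://192.168.24.60 \
  --driver-info redfish_system_id=/redfish/v1/Systems/CX34R87 \
  --driver-info redfish_username=admin \
  --driver-info redfish_password=<password>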

The following support is provided for additional Bare Metal Provisioning service (ironic) interfaces:

  • The Console interface is not available in the Redfish driver provided with Red Hat OpenStack Services on OpenShift (RHOSO) 18.0.
  • The BIOS Settings interface is available but not supported by Red Hat. The driver attempts to surface the hardware vendor-specific settings available through the Baseboard Management Controller as exposed through the standardized Redfish interfaces. Contents, values, and the ability to change values are dependent on the hardware vendor’s Redfish implementation.
  • The RAID interface can be set to agent or redfish. Support of the RAID interface is limited to the support exception process.

8.3. Dell Remote Access Controller (DRAC)

DRAC is an interface that provides out-of-band remote management features, including power management and server monitoring. To use this power management type, all Bare Metal Provisioning service nodes require a DRAC that is connected to the shared Bare Metal Provisioning network. Enable the idrac driver, and set the following information in the driver_info of the node:

  • drac_address - The IP address of the DRAC NIC.
  • drac_username - The DRAC user name.
  • drac_password - The DRAC password.
  • Optional: drac_port - The port to use for the WS-Management endpoint. The default is port 443.
  • Optional: drac_path - The path to use for the WS-Management endpoint. The default path is /wsman.
  • Optional: drac_protocol - The protocol to use for the WS-Management endpoint. Valid values: http, https. The default protocol is https.
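
For example, to configure an existing node to use the idrac driver with the mandatory properties, you can run a command similar to the following; the address and credentials are illustrative values:

$ openstack baremetal node set <node> --driver idrac \
  --driver-info drac_address=192.168.24.70 \
  --driver-info drac_username=root \
  --driver-info drac_password=<password>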

8.4. iRMC

iRMC from Fujitsu is an interface that provides out-of-band remote management features, including power management and server monitoring. To use this power management type on a Bare Metal Provisioning service node, the node requires an iRMC interface that is connected to the shared Bare Metal network.

Note

To use the iRMC driver, iRMC S4 or higher is required.

You can use the following driver_info properties to configure the iRMC driver for a node:

Table 8.3. iRMC driver_info properties

irmc_address

The IP address of the iRMC interface NIC.

irmc_username

The iRMC user name.

irmc_password

The iRMC password.

irmc_snmp_version

Set to v3. Required if FIPS security is enabled in your RHOSO environment.

irmc_snmp_user

Set to the SNMPv3 User-based Security Model (USM) username for the iRMC firmware that runs on the target bare-metal node. Must be set for each bare-metal node. The SNMP username cannot be strings of digits (0-9).

Required if FIPS security is enabled in your RHOSO environment.

irmc_snmp_auth_password

Set to the SNMPv3 message authentication key for the SNMPv3 username. The minimum length of the SNMP password must be 8 characters.

Required if FIPS security is enabled in your RHOSO environment.

irmc_snmp_priv_password

Set to the SNMPv3 message privacy key for the SNMPv3 username. The minimum length of the SNMP password must be 8 characters.

Required if FIPS security is enabled in your RHOSO environment.

irmc_snmp_auth_proto

Set to one of the following values, depending on the version of iRMC firmware that runs on your Fujitsu server:

  • Earlier than "iRMC S6": sha
  • "iRMC S6": sha256, sha384, or sha512

Required if FIPS security is enabled in your RHOSO environment.

irmc_snmp_priv_proto

Set to aes. Required if FIPS security is enabled in your RHOSO environment.

To use IPMI to set the boot mode or SCCI to get sensor data, you must complete the following steps:

  1. Enable the sensor method in the ironic.conf file:

    $ openstack-config --set /etc/ironic/ironic.conf \
       irmc sensor_method <method>
    • Replace <method> with scci or ipmitool.
  2. If you enabled SCCI, install the python-scciclient package:

    # dnf install python-scciclient
  3. Restart the Bare Metal conductor service:

    # systemctl restart openstack-ironic-conductor.service

8.5. Integrated Lights-Out (iLO)

iLO from Hewlett-Packard is an interface that provides out-of-band remote management features including power management and server monitoring. To use this power management type, all Bare Metal nodes require an iLO interface that is connected to the shared Bare Metal network. Enable the ilo driver, and set the following information in the driver_info of the node:

  • ilo_address - The IP address of the iLO interface NIC.
  • ilo_username - The iLO user name.
  • ilo_password - The iLO password.

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution–Share Alike 3.0 Unported license . If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, LLC. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.