Chapter 3. Enabling the Bare Metal Provisioning service (ironic)


If you want your cloud users to be able to launch bare-metal instances, you must perform the following tasks:

  • Prepare Red Hat OpenShift Container Platform (RHOCP) for bare-metal networks by creating an isolated bare metal provisioning network on the RHOCP cluster.
  • Create the Networking service (neutron) networks that the Bare Metal Provisioning service (ironic) uses for provisioning, cleaning, and rescuing bare-metal nodes.
  • Add the Bare Metal Provisioning service (ironic) to your Red Hat OpenStack Services on OpenShift (RHOSO) control plane.
  • Configure the Bare Metal Provisioning service as required for your environment.

3.1. Prerequisites

  • The RHOSO environment is deployed on a RHOCP cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
  • You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.

3.2. Preparing RHOCP for bare-metal networks

The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes.

Create an isolated network for the Bare Metal Provisioning service (ironic) that the ironic service pod attaches to. The following procedures create an isolated network named baremetal.

For more information about how to create an isolated network, see Preparing RHOCP for RHOSO networks in Deploying Red Hat OpenStack Services on OpenShift.

Note

The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual-stack IPv4/IPv6 is not available. For information about how to configure IPv6 addresses, see the RHOCP Networking guide.

3.2.1. Preparing RHOCP with isolated network interfaces

Create a NodeNetworkConfigurationPolicy (nncp) CR to configure the interface for the isolated bare-metal network on each worker node in the Red Hat OpenShift Container Platform (RHOCP) cluster.

Procedure

  1. Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation to configure the interface for the isolated bare-metal network on each worker node in the RHOCP cluster, for example, baremetal-nncp.yaml.
  2. Retrieve the names of the worker nodes in the RHOCP cluster:

    $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"
  3. Discover the network configuration:

    $ oc get nns/<worker_node> -o yaml | more
    • Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
  4. In the nncp CR file, configure the interface for the isolated bare-metal network on each worker node in the RHOCP cluster, and configure virtual routing and forwarding (VRF) to avoid asymmetric routing. In the following example, the nncp CR named osp-enp6s0-worker-1 configures the baremetal interface for worker node 1 as a bridge over the enp8s0 interface, with IPv4 addresses for network isolation:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
     name: osp-enp6s0-worker-1
    spec:
     desiredState:
       interfaces:
        ...
       - description: Ironic bridge
         name: baremetal
         type: linux-bridge
         mtu: 1500
         bridge:
           options:
             stp:
               enabled: false
           port:
           - name: enp8s0
         ipv4:
           address:
           - ip: 172.17.0.10
              prefix-length: 24
           enabled: true
         ipv6:
           enabled: false
       - description: Ironic VRF
         name: ironicvrf
         state: up
         type: vrf
         vrf:
           port:
           - baremetal
           route-table-id: 10
       route-rules:
         config: []
       routes:
         config:
         - destination: 0.0.0.0/0
           metric: 150
           next-hop-address: 172.17.0.1
           next-hop-interface: baremetal
           table-id: 10
         - destination: 172.17.0.0/24
           metric: 150
           next-hop-address: 192.168.122.1
           next-hop-interface: ospbr
  5. Create the nncp CR in the cluster:

    $ oc apply -f baremetal-nncp.yaml
  6. Verify that the nncp CR is created:

    $ oc get nncp -w
    NAME                  STATUS        REASON
    osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
    osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
    osp-enp6s0-worker-1   Available     SuccessfullyConfigured
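
    Alternatively, instead of watching the status, you can block until the policy converges. This is a minimal check, assuming the nncp name from the example above:

    $ oc wait nncp/osp-enp6s0-worker-1 --for=condition=Available --timeout=300s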

3.2.2. Attaching service pods to the isolated networks

Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.

Procedure

  1. Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation for the bare-metal network to attach the ironic service pod to the network, for example, baremetal-net-attach-def.yaml.
  2. In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for the baremetal network to attach the ironic service deployment pod to the network:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: baremetal
      namespace: openstack
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "baremetal",
          "type": "bridge",
          "master": "baremetal",
          "ipam": {
            "type": "whereabouts",
            "range": "172.17.0.0/24",
            "range_start": "172.17.0.30",
            "range_end": "172.17.0.70"
          }
        }
    • master: The node interface name associated with the network, as defined in the nncp CR.
    • ipam: The whereabouts CNI IPAM plugin assigns IPs to the created pods from the range .30 - .70.
    • range_start/range_end: The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange.
  3. Create the NetworkAttachmentDefinition CR in the cluster:

    $ oc apply -f baremetal-net-attach-def.yaml
  4. Verify that the NetworkAttachmentDefinition CR is created:

    $ oc get net-attach-def -n openstack
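
    Optionally, inspect the rendered CNI configuration to confirm the IPAM range; the jsonpath expression prints the embedded JSON config:

    $ oc get net-attach-def baremetal -n openstack -o jsonpath='{.spec.config}'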

3.2.3. Preparing RHOCP for baremetal network VIPs

The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.

Procedure

  1. Create an IPAddressPool CR file on your workstation to configure which IPs can be used as VIPs, for example, baremetal-ipaddresspools.yaml.
  2. In the IPAddressPool CR file, configure an IPAddressPool resource on the baremetal network to specify the IP address ranges over which MetalLB has authority:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: baremetal
      namespace: metallb-system
    spec:
      addresses:
        - 172.17.0.80-172.17.0.90
      autoAssign: true
      avoidBuggyIPs: false
    • addresses: The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.

    For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.

  3. Create the IPAddressPool CR in the cluster:

    $ oc apply -f baremetal-ipaddresspools.yaml
  4. Verify that the IPAddressPool CR is created:

    $ oc describe -n metallb-system IPAddressPool
  5. Create a L2Advertisement CR file on your workstation to define how the Virtual IPs (VIPs) are announced, for example, baremetal-l2advertisement.yaml.
  6. In the L2Advertisement CR file, configure an L2Advertisement CR to define which node advertises the ironic service to the local network:

    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: baremetal
      namespace: metallb-system
    spec:
      ipAddressPools:
      - baremetal
      interfaces:
      - baremetal
    • interfaces: The interface where the VIPs requested from the baremetal address pool are announced.

    For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide.

  7. Create the L2Advertisement CR in the cluster:

    $ oc apply -f baremetal-l2advertisement.yaml
  8. Verify that the L2Advertisement CR is created:

    $ oc get -n metallb-system L2Advertisement
    NAME        IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
    baremetal   ["baremetal"]                              ["baremetal"]

3.3. Creating the bare-metal networks

You use the Networking service (neutron) to create the networks that the Bare Metal Provisioning service (ironic) uses for provisioning, cleaning, inspecting, and rescuing bare-metal nodes. The following procedure creates a provisioning network. Repeat the procedure for each Bare Metal Provisioning network you require.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Create the network over which to provision bare-metal instances:

    $ openstack network create \
      --provider-network-type <network_type> \
      [--provider-segment <vlan_id>] \
      --provider-physical-network <provider_physical_network> \
      --share <network_name>
    • Replace <network_type> with the type of network, either flat or vlan.
    • Optional: If your network type is vlan, specify the VLAN ID with --provider-segment <vlan_id>.
    • Replace <provider_physical_network> with the name of the physical network over which you implement the virtual network, which is the bridge mapping configured for the OVN service on the control plane.
    • Replace <network_name> with a name for this network.
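
    For example, to create a shared flat provisioning network on a physical network named baremetal (the network name and bridge mapping here are illustrative):

    $ openstack network create \
      --provider-network-type flat \
      --provider-physical-network baremetal \
      --share provisioning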
  3. Create the subnet on the network:

    $ openstack subnet create \
      --network <network_name> \
      --subnet-range <network_cidr> \
      --ip-version 4 \
      --gateway <gateway_ip> \
      --allocation-pool start=<start_ip>,end=<end_ip> \
      --dhcp \
      --dns-nameserver <dns_ip> \
      <subnet_name>
    • Replace <network_name> with the name of the provisioning network that you created in the previous step.
    • Replace <network_cidr> with the CIDR representation of the block of IP addresses that the subnet represents. The block of IP addresses that you specify in the range starting with <start_ip> and ending with <end_ip> must be within the block of IP addresses specified by <network_cidr>.
    • Replace <gateway_ip> with the IP address or host name of the router interface that acts as the gateway for the new subnet. This address must be within the block of IP addresses specified by <network_cidr>, but outside of the block of IP addresses specified by the range that starts with <start_ip> and ends with <end_ip>.
    • Replace <start_ip> with the IP address that denotes the start of the range of IP addresses within the new subnet from which IP addresses are allocated.
    • Replace <end_ip> with the IP address that denotes the end of the range of IP addresses within the new subnet from which IP addresses are allocated.
    • Replace <subnet_name> with a name for the subnet.
    • Replace <dns_ip> with the IP address of the load balancer configured for the DNS service on the control plane.
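
    For example, with illustrative values for the provisioning network created in the previous step:

    $ openstack subnet create \
      --network provisioning \
      --subnet-range 172.20.0.0/24 \
      --ip-version 4 \
      --gateway 172.20.0.1 \
      --allocation-pool start=172.20.0.100,end=172.20.0.200 \
      --dhcp \
      --dns-nameserver 192.168.122.80 \
      provisioning-subnet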
  4. Create a router for the network and subnet to ensure that the Networking service serves metadata requests:

    $ openstack router create <router_name>
    • Replace <router_name> with a name for the router.
  5. Attach the subnet to the new router to enable the metadata requests from cloud-init to be served and the node to be configured:

    $ openstack router add subnet <router_name> <subnet>
    • Replace <router_name> with the name of your router.
    • Replace <subnet> with the ID or name of the bare-metal subnet that you created in step 3.
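
    You need the network UUIDs later when you configure the conductor networks in the control plane CR. Before you exit the pod, you can record each UUID, for example:

    $ openstack network show <network_name> -f value -c id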
  6. Exit the openstackclient pod:

    $ exit

3.4. Adding the Bare Metal Provisioning service (ironic) to the control plane

To enable the Bare Metal Provisioning service (ironic) on your Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you must add the ironic service to the control plane and configure it as required.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Add the following cellTemplates configuration to the nova service configuration:

      nova:
        apiOverride:
          route: {}
        template:
          ...
          secret: osp-secret
          cellTemplates:
            cell0:
              cellDatabaseAccount: nova-cell0
              hasAPIAccess: true
            cell1:
              cellDatabaseAccount: nova-cell1
              cellDatabaseInstance: openstack-cell1
              cellMessageBusInstance: rabbitmq-cell1
              hasAPIAccess: true
              novaComputeTemplates:
                compute-ironic:
                  computeDriver: ironic.IronicDriver
    • compute-ironic: The name of the Compute service. The name has a limit of 20 characters, and must contain only lowercase alphanumeric characters and the - symbol.
  3. Enable the ironic service and specify the networks to connect to:

    spec:
      ...
      ironic:
        enabled: true
        template:
          rpcTransport: oslo
          databaseInstance: openstack
          ironicAPI:
            replicas: 1
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: ctlplane
                      metallb.universe.tf/allow-shared-ip: ctlplane
                      metallb.universe.tf/loadBalancerIPs: 192.168.122.80
                  spec:
                    type: LoadBalancer
          ironicConductors:
          - replicas: 1
            storageRequest: 10G
            networkAttachments:
            - baremetal
            provisionNetwork: baremetal
          ironicInspector:
            replicas: 0
            networkAttachments:
            - baremetal
            inspectionNetwork: baremetal
          ironicNeutronAgent:
            replicas: 1
          secret: osp-secret
    • ironicConductors.networkAttachments: The name of the NetworkAttachmentDefinition CR you created for your isolated bare-metal network in Preparing RHOCP for bare-metal networks, used by the ironicConductor pods.
    • provisionNetwork: The name of the Networking service (neutron) network you created for use as the provisioning network in Creating the bare-metal networks.
    • ironicInspector.replicas: You can deploy the Bare Metal Provisioning service without the ironicInspector service. To deploy the service, set the number of replicas to 1.
    • ironicInspector.networkAttachments: The name of the NetworkAttachmentDefinition CR you created for your isolated bare-metal network in Preparing RHOCP for bare-metal networks, used by the ironicInspector pod.
    • inspectionNetwork: The name of the Networking service (neutron) network you created for use as the inspection network in Creating the bare-metal networks. The Ironic Inspector API listens on port 5050.
  4. Specify the networks the Bare Metal Provisioning service uses for provisioning, cleaning, inspection, and rescuing bare-metal nodes:

    spec:
      ...
      ironic:
        ...
          ironicConductors:
          - replicas: 1
            storageRequest: 10G
            networkAttachments:
            - baremetal
            provisionNetwork: baremetal
            customServiceConfig: |
              [neutron]
              cleaning_network = <network_UUID>
              provisioning_network = <network_UUID>
              inspection_network = <network_UUID>
              rescuing_network = <network_UUID>
    • Replace each <network_UUID> with the UUID of the Networking service (neutron) network you created for that function in Creating the bare-metal networks.
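
    If you did not record the network UUIDs when you created the networks, you can retrieve them from your workstation; the network name provisioning here is illustrative:

    $ oc rsh -n openstack openstackclient openstack network show provisioning -f value -c id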
  5. Configure the OVN mappings:

      ovn:
        template:
          ovnController:
            ...
            nicMappings:
              datacentre: ocpbr
              baremetal: baremetal
    • nicMappings: List of key-value pairs that map the physical network provider to the interface name defined in the NodeNetworkConfigurationPolicy (nncp) CR.
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  8. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.
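
    To focus on the Bare Metal Provisioning service pods, you can filter the output, for example:

    $ oc get pods -n openstack | grep ironic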

Verification

  1. Open a remote shell connection to the OpenStackClient pod:

    $ oc rsh -n openstack openstackclient
  2. Confirm that the internal service endpoints are registered with each service:

    $ openstack endpoint list -c 'Service Name' -c Interface -c URL --service ironic
    +--------------+-----------+----------------------------------------------------------------+
    | Service Name | Interface | URL                                                            |
    +--------------+-----------+----------------------------------------------------------------+
    | ironic       | internal  | http://ironic-internal.openstack.svc:6385                      |
    | ironic       | public    | http://ironic-public-openstack.apps.ostest.test.metalkube.org  |
    +--------------+-----------+----------------------------------------------------------------+
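
    You can also confirm that an ironic conductor has registered, assuming the bare-metal client is available in the openstackclient pod:

    $ openstack baremetal conductor list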
  3. Exit the openstackclient pod:

    $ exit

3.5. Configuring node event history records

The Bare Metal Provisioning service (ironic) records node event history by default. You can configure how the node event history records are managed.

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Add the following configuration options to the customServiceConfig parameter in the ironicConductors template to configure how node event history records are managed:

    spec:
      ...
      ironic:
        enabled: true
        template:
          rpcTransport: oslo
          databaseInstance: openstack
          ironicAPI:
            ...
          ironicConductors:
          - replicas: 1
            storageRequest: 10G
            networkAttachments:
            - baremetal
            provisionNetwork: baremetal
            customServiceConfig: |
              ...
              [conductor]
              node_history_max_entries=<max_entries>
              node_history_cleanup_interval=<clean_interval>
              node_history_cleanup_batch_count=<max_purge>
              node_history_minimum_days=<min_days>
          ...
          secret: osp-secret
    • Optional: Replace <max_entries> with the maximum number of event records that the Bare Metal Provisioning service records. The oldest recorded events are removed when the maximum number of entries is reached. By default, a maximum of 300 events are recorded. The minimum valid value is 0.
    • Optional: Replace <clean_interval> with the interval in seconds between scheduled cleanup of the node event history entries. By default, the cleanup is scheduled every 86400 seconds, which is once daily. Set to 0 to disable node event history cleanup.
    • Optional: Replace <max_purge> with the maximum number of entries to purge during each clean up operation. Defaults to 1000.
    • Optional: Replace <min_days> with the minimum number of days to explicitly keep the database history entries for nodes. Defaults to 0.
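
    For example, the following illustrative settings keep up to 500 events for each node, run the cleanup daily, purge up to 1000 entries per run, and retain at least 7 days of history:

    [conductor]
    node_history_max_entries=500
    node_history_cleanup_interval=86400
    node_history_cleanup_batch_count=1000
    node_history_minimum_days=7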
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    NAME                      STATUS    MESSAGE
    openstack-control-plane   Unknown   Setup started

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  5. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.
