Chapter 3. Enabling the Bare Metal Provisioning service (ironic)
If you want your cloud users to be able to launch bare-metal instances, you must perform the following tasks:
- Prepare Red Hat OpenShift Container Platform (RHOCP) for bare-metal networks by creating an isolated bare metal provisioning network on the RHOCP cluster.
- Create the Networking service (neutron) networks that the Bare Metal Provisioning service (ironic) uses for provisioning, cleaning, and rescuing bare-metal nodes.
- Add the Bare Metal Provisioning service (ironic) to your Red Hat OpenStack Services on OpenShift (RHOSO) control plane.
- Configure the Bare Metal Provisioning service as required for your environment.
3.1. Prerequisites
- The RHOSO environment is deployed on a RHOCP cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
3.2. Preparing RHOCP for bare-metal networks
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes.
Create an isolated network for the Bare Metal Provisioning service (ironic) that the ironic service pod attaches to. The following procedures create an isolated network named baremetal.
For more information about how to create an isolated network, see Preparing RHOCP for RHOSO networks in Deploying Red Hat OpenStack Services on OpenShift.
The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual-stack IPv4/IPv6 is not available. For information about how to configure IPv6 addresses, see the RHOCP Networking guide.
3.2.1. Preparing RHOCP with an isolated network interface for the Bare Metal Provisioning service
Create a NodeNetworkConfigurationPolicy (nncp) CR to configure the interface for the isolated bare-metal network on each worker node in the Red Hat OpenShift Container Platform (RHOCP) cluster.
Procedure
- Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation to configure the interface for the isolated bare-metal network on each worker node in the RHOCP cluster, for example, baremetal-nncp.yaml.
- Retrieve the names of the worker nodes in the RHOCP cluster:

  $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"
- Discover the network configuration:

  $ oc get nns/<worker_node> -o yaml | more

  Replace <worker_node> with the name of a worker node retrieved in the previous step, for example, worker-1. Repeat this step for each worker node.
- In the nncp CR file, configure the interface for the isolated bare-metal network on each worker node in the RHOCP cluster, and configure the virtual routing and forwarding (VRF) to avoid asymmetric routing. In the following example, the nncp CR configures the baremetal interface for worker node 1, osp-enp6s0-worker-1, to use a bridge on the enp8s0 interface with IPv4 addresses for network isolation:
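  The following example is a representative sketch only; the IP address, prefix length, VRF name, and route table ID are placeholder values that you must adapt to your environment:

  apiVersion: nmstate.io/v1
  kind: NodeNetworkConfigurationPolicy
  metadata:
    name: osp-enp6s0-worker-1
  spec:
    nodeSelector:
      kubernetes.io/hostname: worker-1
    desiredState:
      interfaces:
      # Bridge for the isolated bare-metal network, with enp8s0 as its port
      - name: baremetal
        type: linux-bridge
        state: up
        ipv4:
          enabled: true
          dhcp: false
          address:
          - ip: 172.20.1.10          # placeholder address on the bare-metal network
            prefix-length: 24
        ipv6:
          enabled: false
        bridge:
          options:
            stp:
              enabled: false
          port:
          - name: enp8s0
      # VRF that encloses the bare-metal bridge to avoid asymmetric routing
      - name: vrf-baremetal          # placeholder VRF name
        type: vrf
        state: up
        vrf:
          port:
          - baremetal
          route-table-id: 10         # placeholder route table ID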
- Create the nncp CR in the cluster:

  $ oc apply -f baremetal-nncp.yaml
- Verify that the nncp CR is created:

  $ oc get nncp -w
  NAME                  STATUS        REASON
  osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
  osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
  osp-enp6s0-worker-1   Available     SuccessfullyConfigured
3.2.2. Attaching the ironic service pod to the baremetal network
Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.
Procedure
- Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation for the bare-metal network to attach the ironic service pod to the network, for example, baremetal-net-attach-def.yaml.
- In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for the baremetal network to attach the ironic service deployment pod to the network:
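  The following example is a sketch that assumes a macvlan attachment on the baremetal bridge with whereabouts IPAM; the CNI type and the address ranges are placeholders that you must adapt to your bare-metal network:

  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: baremetal
    namespace: openstack
  spec:
    config: |
      {
        "cniVersion": "0.3.1",
        "name": "baremetal",
        "type": "macvlan",
        "master": "baremetal",
        "ipam": {
          "type": "whereabouts",
          "range": "172.20.1.0/24",
          "range_start": "172.20.1.30",
          "range_end": "172.20.1.70"
        }
      }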
- Create the NetworkAttachmentDefinition CR in the cluster:

  $ oc apply -f baremetal-net-attach-def.yaml
- Verify that the NetworkAttachmentDefinition CR is created:

  $ oc get net-attach-def -n openstack
3.2.3. Preparing RHOCP for baremetal network VIPs
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.
Procedure
- Create an IPAddressPool CR file on your workstation to configure which IPs can be used as VIPs, for example, baremetal-ipaddresspools.yaml.
- In the IPAddressPool CR file, configure an IPAddressPool resource on the baremetal network to specify the IP address ranges over which MetalLB has authority:
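  The following example is representative; the address range is a placeholder that you must adapt to your bare-metal network:

  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: baremetal
    namespace: metallb-system
  spec:
    addresses:
    - 172.20.1.80-172.20.1.90   # 1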
  1 The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.

  For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.
- Create the IPAddressPool CR in the cluster:

  $ oc apply -f baremetal-ipaddresspools.yaml
- Verify that the IPAddressPool CR is created:

  $ oc describe -n metallb-system IPAddressPool
- Create an L2Advertisement CR file on your workstation to define how the Virtual IPs (VIPs) are announced, for example, baremetal-l2advertisement.yaml.
- In the L2Advertisement CR file, configure an L2Advertisement resource to define which node advertises the ironic service to the local network:
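  The following example is representative; it assumes the IPAddressPool named baremetal created in the previous steps and the enp6s0 interface shown in the verification output:

  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: baremetal
    namespace: metallb-system
  spec:
    ipAddressPools:
    - baremetal
    interfaces:
    - enp6s0   # 1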
  1 The interface where the VIPs requested from the VLAN address pool are announced.

  For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with an L2 advertisement and label in the RHOCP Networking guide.
- Create the L2Advertisement CR in the cluster:

  $ oc apply -f baremetal-l2advertisement.yaml
- Verify that the L2Advertisement CR is created:

  $ oc get -n metallb-system L2Advertisement
  NAME        IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
  baremetal   ["baremetal"]                              ["enp6s0"]
3.3. Creating the bare-metal networks
You use the Networking service (neutron) to create the networks that the Bare Metal Provisioning service (ironic) uses for provisioning, cleaning, inspecting, and rescuing bare-metal nodes. The following procedure creates a provisioning network. Repeat the procedure for each Bare Metal Provisioning network you require.
Procedure
- Access the remote shell for the OpenStackClient pod from your workstation:

  $ oc rsh -n openstack openstackclient
- Create the network over which to provision bare-metal instances:

  $ openstack network create \
    --provider-network-type <network_type> \
    [--provider-segment <vlan_id>] \
    --provider-physical-network <provider_physical_network> \
    --share <network_name>
  - Replace <network_type> with the type of network, either flat or vlan.
  - Optional: If your network type is vlan, specify the --provider-segment option with the VLAN ID, <vlan_id>.
  - Replace <provider_physical_network> with the name of the physical network over which you implement the virtual network, which is the bridge mapping configured for the OVN service on the control plane.
  - Replace <network_name> with a name for this network.
- Create the subnet on the network:
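  For example, you can assemble the parameters described below with the standard openstack subnet create options:

  $ openstack subnet create \
    --network <network_name> \
    --subnet-range <network_cidr> \
    --gateway <gateway_ip> \
    --allocation-pool start=<start_ip>,end=<end_ip> \
    --dns-nameserver <dns_ip> \
    <subnet_name>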
Replace
<network_name>with the name of the provisioning network that you created in the previous step. -
Replace
<network_cidr>with the CIDR representation of the block of IP addresses that the subnet represents. The block of IP addresses that you specify in the range starting with<start_ip>and ending with<end_ip>must be within the block of IP addresses specified by<network_cidr>. -
Replace
<gateway_ip>with the IP address or host name of the router interface that acts as the gateway for the new subnet. This address must be within the block of IP addresses specified by<network_cidr>, but outside of the block of IP addresses specified by the range that starts with<start_ip>and ends with<end_ip>. -
Replace
<start_ip>with the IP address that denotes the start of the range of IP addresses within the new subnet from which floating IP addresses are allocated. -
Replace
<end_ip>with the IP address that denotes the end of the range of IP addresses within the new subnet from which floating IP addresses are allocated. -
Replace
<subnet_name>with a name for the subnet. -
Replace
<dns_ip>with the IP address of the load balancer configured for the DNS service on the control plane.
- Create a router for the network and subnet to ensure that the Networking service serves metadata requests:

  $ openstack router create <router_name>

  Replace <router_name> with a name for the router.
- Attach the subnet to the new router to enable the metadata requests from cloud-init to be served and the node to be configured:

  $ openstack router add subnet <router_name> <subnet>

  - Replace <router_name> with the name of your router.
  - Replace <subnet> with the ID or name of the bare-metal subnet that you created earlier in this procedure.
- Exit the openstackclient pod:

  $ exit
3.4. Adding the Bare Metal Provisioning service (ironic) to the control plane
To enable the Bare Metal Provisioning service (ironic) on your Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you must add the ironic service to the control plane and configure it as required.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Add the following cellTemplates configuration to the nova service configuration:
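  The following snippet is a sketch that assumes the Compute cell is the default cell1; the compute node set name compute-ironic is an example:

  spec:
    nova:
      template:
        cellTemplates:
          cell1:
            novaComputeTemplates:
              compute-ironic:   # 1
                computeDriver: ironic.IronicDriver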
  1 The name of the Compute service. The name has a limit of 20 characters, and must contain only lowercase alphanumeric characters and the - symbol.
- Enable the ironic service and specify the networks to connect to:
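  The following example is a representative sketch; the replica counts are examples, and the network names assume the baremetal net-attach-def and the provisioning Networking service network that you created earlier:

  spec:
    ironic:
      enabled: true
      template:
        ironicConductors:
        - replicas: 1
          networkAttachments:
          - baremetal                      # 1
          provisionNetwork: provisioning   # 2
        ironicInspector:
          replicas: 1                      # 3
          networkAttachments:
          - baremetal                      # 4
          inspectionNetwork: provisioning  # 5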
  1 The name of the NetworkAttachmentDefinition CR you created for your isolated bare-metal network in Preparing RHOCP for bare-metal networks, to use for the ironicConductor pods.
  2 The name of the Networking service (neutron) network you created for use as the provisioning network in Creating the bare-metal networks.
  3 You can deploy the Bare Metal Provisioning service without the ironicInspector service. To deploy the service, set the number of replicas to 1.
  4 The name of the NetworkAttachmentDefinition CR you created for your isolated bare-metal network in Preparing RHOCP for bare-metal networks, to use for the ironicInspector pod.
  5 The name of the Networking service (neutron) network you created for use as the inspection network in Creating the bare-metal networks. The Ironic Inspector API listens on port 5050.
- Specify the networks that the Bare Metal Provisioning service uses for provisioning, cleaning, inspecting, and rescuing bare-metal nodes:
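  For example, you can set the standard ironic [neutron] network options in the ironicConductors customServiceConfig:

  spec:
    ironic:
      template:
        ironicConductors:
        - customServiceConfig: |
            [neutron]
            cleaning_network = <network_UUID>
            provisioning_network = <network_UUID>
            rescuing_network = <network_UUID>
            inspection_network = <network_UUID>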
Replace
<network_UUID>with the UUID of the network you created in Creating the bare-metal network for the function.
-
Replace
Configure the OVN mappings:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
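  The following snippet is a sketch; it assumes that both the physical network provider and the interface defined in the nncp CR are named baremetal:

  spec:
    ovn:
      template:
        ovnController:
          nicMappings:   # 1
            baremetal: baremetal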
  1 A list of key-value pairs that map the physical network provider to the interface name defined in the NodeNetworkConfigurationPolicy (nncp) CR.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack
- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
- Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

  The control plane is deployed when all the pods are either completed or running.
Verification
- Open a remote shell connection to the OpenStackClient pod:

  $ oc rsh -n openstack openstackclient
- Confirm that the internal service endpoints are registered with each service:
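  For example, you can list the registered endpoints; the exact output depends on your deployment:

  $ openstack endpoint list -c "Service Name" -c Interface -c URL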
- Exit the openstackclient pod:

  $ exit
3.5. Configuring node event history records
The Bare Metal Provisioning service (ironic) records node event history by default. You can configure how the node event history records are managed.
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Add the following configuration options to the customServiceConfig parameter in the ironicConductors template to configure how node event history records are managed:
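  The following snippet is a sketch; the [conductor] option names are the standard ironic settings that correspond to the values described below:

  spec:
    ironic:
      template:
        ironicConductors:
        - customServiceConfig: |
            [conductor]
            node_history_max_entries = <max_entries>
            node_history_cleanup_interval = <clean_interval>
            node_history_cleanup_batch_size = <max_purge>
            node_history_minimum_days = <min_days>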
  - Optional: Replace <max_entries> with the maximum number of event records that the Bare Metal Provisioning service records. The oldest recorded events are removed when the maximum number of entries is reached. By default, a maximum of 300 events are recorded. The minimum valid value is 0.
  - Optional: Replace <clean_interval> with the interval in seconds between scheduled cleanups of the node event history entries. By default, the cleanup is scheduled every 86400 seconds, which is once daily. Set to 0 to disable node event history cleanup.
  - Optional: Replace <max_purge> with the maximum number of entries to purge during each cleanup operation. Defaults to 1000.
  - Optional: Replace <min_days> with the minimum number of days to keep the database history entries for nodes. Defaults to 0.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack
- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack
  NAME                      STATUS    MESSAGE
  openstack-control-plane   Unknown   Setup started

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
- Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

  $ oc get pods -n openstack

  The control plane is deployed when all the pods are either completed or running.