Configuring the Bare Metal Provisioning service
Enabling and configuring the Bare Metal Provisioning service (ironic) for Bare Metal as a Service (BMaaS)
Abstract
Providing feedback on Red Hat documentation
We appreciate your feedback. Tell us how we can improve the documentation.
To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.
Procedure
- Log in to the Red Hat Atlassian Jira.
- Click the following link to open a Create Issue page: Create issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
- Click Create.
- Review the details of the bug you created.
Chapter 1. Bare Metal Provisioning service (ironic) functionality
You use the Bare Metal Provisioning service (ironic) components to provision and manage physical machines as bare-metal instances for your cloud users. To provision and manage bare-metal instances, the Bare Metal Provisioning service interacts with the following Red Hat OpenStack Services on OpenShift (RHOSO) services:
- The Compute service (nova) provides scheduling, tenant quotas, and a user-facing API for virtual machine instance management.
- The Identity service (keystone) provides request authentication and assists the Bare Metal Provisioning service to locate other RHOSO services.
- The Image service (glance) manages disk and instance images and image metadata.
- The Networking service (neutron) provides DHCP and network configuration, and provisions the virtual or physical networks that instances connect to on boot.
- The Object Storage service (swift) exposes temporary image URLs for some drivers.
Bare Metal Provisioning service components
The Bare Metal Provisioning service consists of services named ironic-*. The following services are the core Bare Metal Provisioning services:
- Bare Metal Provisioning API (ironic-api): This service provides the external REST API to users. The API sends application requests to the Bare Metal Provisioning conductor over remote procedure call (RPC).
- Bare Metal Provisioning conductor (ironic-conductor): This service uses drivers to perform the following bare-metal node management tasks:
  - Adds, edits, and deletes bare-metal nodes.
  - Powers bare-metal nodes on and off with IPMI, Redfish, or another vendor-specific protocol.
  - Provisions, deploys, and cleans bare-metal nodes.
- Bare Metal Provisioning inspector (ironic-inspector): This service discovers the hardware properties of a bare-metal node that are required for scheduling bare-metal instances, and creates the Bare Metal Provisioning service ports for the discovered Ethernet MACs.
- Bare Metal Provisioning database: This database tracks hardware information and state.
- Bare Metal Provisioning agent (ironic-python-agent): This service runs in a temporary ramdisk to provide the ironic-conductor and ironic-inspector services with remote access, in-band hardware control, and hardware introspection.
Provisioning a bare-metal instance
You can configure the Bare Metal Provisioning service to use PXE, iPXE, or virtual media to provision physical machines as bare-metal instances:
- PXE or iPXE: The Bare Metal Provisioning service provisions the bare-metal instances by using network boot.
- Virtual media: The Bare Metal Provisioning service provisions the bare-metal instances by creating a temporary ISO image and requesting the Baseboard Management Controller (BMC) to attach and boot to that image.
Chapter 2. Requirements for bare metal provisioning
To enable cloud users to launch bare-metal instances, your Red Hat OpenStack Services on OpenShift (RHOSO) environment must have the required hardware and network configuration.
2.1. Hardware requirements
The hardware requirements for the bare-metal machines that you want to make available to your cloud users for provisioning depend on the operating system. For information about the hardware requirements for Red Hat Enterprise Linux installations, see the Product Documentation for Red Hat Enterprise Linux.
All bare-metal machines that you want to make available to your cloud users for provisioning must have the following capabilities:
- A NIC to connect to the bare-metal network.
- The Redfish power management type, which is connected to a network that is reachable from the ironic-conductor container.
  Note: Do not use the IPMI power management type due to security concerns. Use Redfish as the power management type to optimize the performance of the Bare Metal Provisioning service.
- If the Bare Metal Provisioning service is configured to use PXE or iPXE for provisioning, then PXE boot must be enabled on the network interface that is attached to the bare-metal network, and disabled on all other network interfaces for that bare-metal node. This is not a requirement if the Bare Metal Provisioning service is configured to use virtual media for provisioning.
- If the Bare Metal Provisioning service is configured to use virtual media for provisioning, through Redfish or a vendor-specific boot interface on each node, then the bare-metal nodes must be able to reach cluster resources for virtual media disks or other disk images.
2.2. Networking requirements
The cloud operator must create a private bare-metal network for the Bare Metal Provisioning service to use for the following operations:
- The provisioning and management of the bare-metal nodes that host the bare-metal instances.
- Cleaning bare-metal nodes when a node is unprovisioned.
- Project access to the bare-metal nodes.
For the Bare Metal Provisioning service to serve PXE boot and DHCP requests, the bare-metal node must be attached either to a port that does not use a VLAN, or to a VLAN trunk port where the native VLAN is the bare-metal network.
The Bare Metal Provisioning service is designed for a trusted tenant environment because the bare-metal nodes have direct access to the control plane network of your Red Hat OpenStack Services on OpenShift (RHOSO) environment.
Cloud users have direct access to the public OpenStack APIs, and to the bare-metal network. A flat bare-metal network can introduce security concerns because cloud users have indirect access to the control plane network. To mitigate this risk, you can configure an isolated bare metal provisioning network for the Bare Metal Provisioning service that does not have access to the control plane.
The bare-metal network must be untagged for provisioning, and must also have access to the Bare Metal Provisioning API.
You must provide access to the bare-metal network for the following:
- The control plane that hosts the Bare Metal Provisioning service.
- The NIC from which the bare-metal machine is configured to PXE-boot.
Chapter 3. Enabling the Bare Metal Provisioning service (ironic)
If you want your cloud users to be able to launch bare-metal instances, you must perform the following tasks:
- Prepare Red Hat OpenShift Container Platform (RHOCP) for bare-metal networks by creating an isolated bare metal provisioning network on the RHOCP cluster.
- Create the Networking service (neutron) networks that the Bare Metal Provisioning service (ironic) uses for provisioning, cleaning, and rescuing bare-metal nodes.
- Add the Bare Metal Provisioning service (ironic) to your Red Hat OpenStack Services on OpenShift (RHOSO) control plane.
- Configure the Bare Metal Provisioning service as required for your environment.
3.1. Prerequisites
- The RHOSO environment is deployed on a RHOCP cluster. For more information, see Deploying Red Hat OpenStack Services on OpenShift.
- You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
3.2. Preparing RHOCP for bare-metal networks
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes.
Create an isolated network for the Bare Metal Provisioning service (ironic) that the ironic service pod attaches to. The following procedures create an isolated network named baremetal.
For more information about how to create an isolated network, see Preparing RHOCP for RHOSO networks in Deploying Red Hat OpenStack Services on OpenShift.
The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack (IPv4 and IPv6) is available only on tenant networks. For information about how to configure IPv6 addresses, see the following resources in the RHOCP Networking guide:
3.2.1. Preparing RHOCP with an isolated network interface for the Bare Metal Provisioning service
Create a NodeNetworkConfigurationPolicy (nncp) CR to configure the interface for the isolated bare-metal network on each worker node in the Red Hat OpenShift Container Platform (RHOCP) cluster.
Procedure
1. Create a NodeNetworkConfigurationPolicy (nncp) CR file on your workstation to configure the interface for the isolated bare-metal network on each worker node in the RHOCP cluster, for example, baremetal-nncp.yaml.
2. Retrieve the names of the worker nodes in the RHOCP cluster:

   $ oc get nodes -l node-role.kubernetes.io/worker -o jsonpath="{.items[*].metadata.name}"

3. Discover the network configuration:

   $ oc get nns/<worker_node> -o yaml | more

   - Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1. Repeat this step for each worker node.
4. In the nncp CR file, configure the interface for the isolated bare-metal network on each worker node in the RHOCP cluster, and configure the virtual routing and forwarding (VRF) to avoid asymmetric routing. In the following example, the nncp CR configures the baremetal interface for worker node 1, osp-enp6s0-worker-1, to use a bridge on the enp8s0 interface with IPv4 addresses for network isolation:

   apiVersion: nmstate.io/v1
   kind: NodeNetworkConfigurationPolicy
   metadata:
     name: osp-enp6s0-worker-1
   spec:
     desiredState:
       interfaces:
       ...
       - description: Ironic bridge
         name: baremetal
         type: linux-bridge
         mtu: 1500
         bridge:
           options:
             stp:
               enabled: false
           port:
           - name: enp8s0
         ipv4:
           address:
           - ip: 172.17.0.10
             prefix-length: "24"
           enabled: true
         ipv6:
           enabled: false
       - description: Ironic VRF
         name: ironicvrf
         state: up
         type: vrf
         vrf:
           port:
           - baremetal
           route-table-id: 10
       route-rules:
         config: []
       routes:
         config:
         - destination: 0.0.0.0/0
           metric: 150
           next-hop-address: 172.17.0.1
           next-hop-interface: baremetal
           table-id: 10
         - destination: 172.17.0.0/24
           metric: 150
           next-hop-address: 192.168.122.1
           next-hop-interface: ospbr

5. Create the nncp CR in the cluster:

   $ oc apply -f baremetal-nncp.yaml

6. Verify that the nncp CR is created:

   $ oc get nncp -w
   NAME                  STATUS        REASON
   osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
   osp-enp6s0-worker-1   Progressing   ConfigurationProgressing
   osp-enp6s0-worker-1   Available     SuccessfullyConfigured
3.2.2. Attaching the ironic service pod to the baremetal network
Create a NetworkAttachmentDefinition (net-attach-def) custom resource (CR) for each isolated network to attach the service pods to the networks.
Procedure
1. Create a NetworkAttachmentDefinition (net-attach-def) CR file on your workstation for the bare-metal network to attach the ironic service pod to the network, for example, baremetal-net-attach-def.yaml.
2. In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for the baremetal network to attach the ironic service deployment pod to the network:

   apiVersion: k8s.cni.cncf.io/v1
   kind: NetworkAttachmentDefinition
   metadata:
     name: baremetal
     namespace: openstack
   spec:
     config: |
       {
         "cniVersion": "0.3.1",
         "name": "baremetal",
         "type": "bridge",
         "master": "baremetal",
         "ipam": {
           "type": "whereabouts",
           "range": "172.17.0.0/24",
           "range_start": "172.17.0.30",
           "range_end": "172.17.0.70"
         }
       }

3. Create the NetworkAttachmentDefinition CR in the cluster:

   $ oc apply -f baremetal-net-attach-def.yaml

4. Verify that the NetworkAttachmentDefinition CR is created:

   $ oc get net-attach-def -n openstack
3.2.3. Preparing RHOCP for baremetal network VIPs
The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network.
Procedure
1. Create an IPAddressPool CR file on your workstation to configure which IPs can be used as VIPs, for example, baremetal-ipaddresspools.yaml.
2. In the IPAddressPool CR file, configure an IPAddressPool resource on the baremetal network to specify the IP address ranges over which MetalLB has authority:

   apiVersion: metallb.io/v1beta1
   kind: IPAddressPool
   metadata:
     name: baremetal
     namespace: metallb-system
   spec:
     addresses:
     - 172.17.0.80-172.17.0.90 # 1
     autoAssign: true
     avoidBuggyIPs: false

   1. The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange.

   For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide.
3. Create the IPAddressPool CR in the cluster:

   $ oc apply -f baremetal-ipaddresspools.yaml

4. Verify that the IPAddressPool CR is created:

   $ oc describe -n metallb-system IPAddressPool

5. Create an L2Advertisement CR file on your workstation to define how the Virtual IPs (VIPs) are announced, for example, baremetal-l2advertisement.yaml.
6. In the L2Advertisement CR file, configure an L2Advertisement CR to define which node advertises the ironic service to the local network:

   apiVersion: metallb.io/v1beta1
   kind: L2Advertisement
   metadata:
     name: baremetal
     namespace: metallb-system
   spec:
     ipAddressPools:
     - baremetal
     interfaces:
     - baremetal # 1

   1. The interface where the VIPs requested from the address pool are announced.

   For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide.
7. Create the L2Advertisement CR in the cluster:

   $ oc apply -f baremetal-l2advertisement.yaml

8. Verify that the L2Advertisement CR is created:

   $ oc get -n metallb-system L2Advertisement
   NAME        IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
   baremetal   ["baremetal"]                              ["baremetal"]
3.3. Creating the bare-metal networks
You use the Networking service (neutron) to create the networks that the Bare Metal Provisioning service (ironic) uses for provisioning, cleaning, inspecting, and rescuing bare-metal nodes. The following procedure creates a provisioning network. Repeat the procedure for each Bare Metal Provisioning network you require.
Procedure
1. Access the remote shell for the OpenStackClient pod from your workstation:

   $ oc rsh -n openstack openstackclient

2. Create the network over which to provision bare-metal instances:

   $ openstack network create \
     --provider-network-type <network_type> \
     [--provider-segment <vlan_id>] \
     --provider-physical-network <provider_physical_network> \
     --share <network_name>

   - Replace <network_type> with the type of network, either flat or vlan.
   - Optional: If your network type is vlan, specify the --provider-segment.
   - Replace <provider_physical_network> with the name of the physical network over which you implement the virtual network, which is the bridge mapping configured for the OVN service on the control plane.
   - Replace <network_name> with a name for this network.

3. Create the subnet on the network:

   $ openstack subnet create \
     --network <network_name> \
     --subnet-range <network_cidr> \
     --ip-version 4 \
     --gateway <gateway_ip> \
     --allocation-pool start=<start_ip>,end=<end_ip> \
     --dhcp <subnet_name> --dns-nameserver <dns_ip>

   - Replace <network_name> with the name of the provisioning network that you created in the previous step.
   - Replace <network_cidr> with the CIDR representation of the block of IP addresses that the subnet represents. The block of IP addresses that you specify in the range starting with <start_ip> and ending with <end_ip> must be within the block of IP addresses specified by <network_cidr>.
   - Replace <gateway_ip> with the IP address or host name of the router interface that acts as the gateway for the new subnet. This address must be within the block of IP addresses specified by <network_cidr>, but outside of the block of IP addresses specified by the range that starts with <start_ip> and ends with <end_ip>.
   - Replace <start_ip> with the IP address that denotes the start of the range of IP addresses within the new subnet from which floating IP addresses are allocated.
   - Replace <end_ip> with the IP address that denotes the end of the range of IP addresses within the new subnet from which floating IP addresses are allocated.
   - Replace <subnet_name> with a name for the subnet.
   - Replace <dns_ip> with the IP address of the load balancer configured for the DNS service on the control plane.

4. Create a router for the network and subnet to ensure that the Networking service serves metadata requests:

   $ openstack router create <router_name>

   - Replace <router_name> with a name for the router.

5. Attach the subnet to the new router to enable the metadata requests from cloud-init to be served and the node to be configured:

   $ openstack router add subnet <router_name> <subnet>

   - Replace <router_name> with the name of your router.
   - Replace <subnet> with the ID or name of the bare-metal subnet that you created in step 3.

6. Exit the openstackclient pod:

   $ exit
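The subnet step above carries implicit addressing constraints: the allocation pool and the gateway must both lie inside <network_cidr>, and the gateway must sit outside the allocation pool. A minimal sketch of those checks, using hypothetical example values (192.168.25.0/24 and so on) rather than anything from your environment:

```python
import ipaddress

def check_subnet_plan(cidr: str, gateway: str, start: str, end: str) -> None:
    """Raise ValueError if the planned subnet layout is inconsistent."""
    net = ipaddress.ip_network(cidr)
    gw = ipaddress.ip_address(gateway)
    lo = ipaddress.ip_address(start)
    hi = ipaddress.ip_address(end)
    if lo > hi:
        raise ValueError("allocation pool start is after its end")
    if lo not in net or hi not in net:
        raise ValueError("allocation pool is not inside the subnet CIDR")
    if gw not in net:
        raise ValueError("gateway is not inside the subnet CIDR")
    if lo <= gw <= hi:
        raise ValueError("gateway falls inside the allocation pool")

# Hypothetical provisioning subnet plan
check_subnet_plan("192.168.25.0/24", "192.168.25.1",
                  "192.168.25.100", "192.168.25.200")
print("subnet plan is consistent")
```

Running a check like this before `openstack subnet create` catches layout mistakes that the Networking service would otherwise reject at creation time.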
3.4. Adding the Bare Metal Provisioning service (ironic) to the control plane
To enable the Bare Metal Provisioning service (ironic) on your Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you must add the ironic service to the control plane and configure it as required.
Procedure
1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
2. Add the following cellTemplates configuration to the nova service configuration:

   nova:
     apiOverride:
       route: {}
     template:
       ...
       secret: osp-secret
       cellTemplates:
         cell0:
           cellDatabaseAccount: nova-cell0
           hasAPIAccess: true
         cell1:
           cellDatabaseAccount: nova-cell1
           cellDatabaseInstance: openstack-cell1
           cellMessageBusInstance: rabbitmq-cell1
           hasAPIAccess: true
           novaComputeTemplates:
             compute-ironic: # 1
               computeDriver: ironic.IronicDriver

   1. The name of the Compute service. The name has a limit of 20 characters, and must contain only lowercase alphanumeric characters and the - symbol.

3. Enable the ironic service and specify the networks to connect to:

   spec:
     ...
     ironic:
       enabled: true
       template:
         rpcTransport: oslo
         databaseInstance: openstack
         ironicAPI:
           replicas: 1
           override:
             service:
               internal:
                 metadata:
                   annotations:
                     metallb.universe.tf/address-pool: ctlplane
                     metallb.universe.tf/allow-shared-ip: ctlplane
                     metallb.universe.tf/loadBalancerIPs: 192.168.122.80
                 spec:
                   type: LoadBalancer
         ironicConductors:
         - replicas: 1
           storageRequest: 10G
           networkAttachments:
           - baremetal # 1
           provisionNetwork: baremetal # 2
         ironicInspector:
           replicas: 0 # 3
           networkAttachments:
           - baremetal # 4
           inspectionNetwork: baremetal # 5
         ironicNeutronAgent:
           replicas: 1
         secret: osp-secret

   1. The name of the NetworkAttachmentDefinition CR you created for your isolated bare-metal network in Preparing RHOCP for bare-metal networks to use for the ironicConductor pods.
   2. The name of the Networking service (neutron) network you created for use as the provisioning network in Creating the bare-metal networks.
   3. You can deploy the Bare Metal Provisioning service without the ironicInspector service. To deploy the service, set the number of replicas to 1.
   4. The name of the NetworkAttachmentDefinition CR you created for your isolated bare-metal network in Preparing RHOCP for bare-metal networks to use for the ironicInspector pod.
   5. The name of the Networking service (neutron) network you created for use as the inspection network in Creating the bare-metal networks. The Ironic Inspector API listens on port 5050.

4. Specify the networks the Bare Metal Provisioning service uses for provisioning, cleaning, inspecting, and rescuing bare-metal nodes:

   spec:
     ...
     ironic:
       ...
       ironicConductors:
       - replicas: 1
         storageRequest: 10G
         networkAttachments:
         - baremetal
         provisionNetwork: baremetal
         customServiceConfig: |
           [neutron]
           cleaning_network = <network_UUID>
           provisioning_network = <network_UUID>
           inspection_network = <network_UUID>
           rescuing_network = <network_UUID>

   - Replace each <network_UUID> with the UUID of the network you created in Creating the bare-metal networks for that function.

5. Configure the OVN mappings:

   ovn:
     template:
       ovnController:
         ...
         nicMappings: # 1
           datacentre: ocpbr
           baremetal: baremetal

   1. List of key-value pairs that map the physical network provider to the interface name defined in the NodeNetworkConfigurationPolicy (nncp) CR.

6. Update the control plane:

   $ oc apply -f openstack_control_plane.yaml -n openstack

7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

   $ oc get openstackcontrolplane -n openstack
   NAME                      STATUS    MESSAGE
   openstack-control-plane   Unknown   Setup started

   The OpenStackControlPlane resources are created when the status is "Setup complete".
   Tip: Append the -w option to the end of the get command to track deployment progress.

8. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

   $ oc get pods -n openstack

   The control plane is deployed when all the pods are either completed or running.

Verification
1. Open a remote shell connection to the OpenStackClient pod:

   $ oc rsh -n openstack openstackclient

2. Confirm that the internal service endpoints are registered with each service:

   $ openstack endpoint list -c 'Service Name' -c Interface -c URL --service ironic
   +--------------+-----------+---------------------------------------------------------------+
   | Service Name | Interface | URL                                                           |
   +--------------+-----------+---------------------------------------------------------------+
   | ironic       | internal  | http://ironic-internal.openstack.svc:6385                     |
   | ironic       | public    | http://ironic-public-openstack.apps.ostest.test.metalkube.org |
   +--------------+-----------+---------------------------------------------------------------+

3. Exit the openstackclient pod:

   $ exit
3.5. Configuring node event history records
The Bare Metal Provisioning service (ironic) records node event history by default. You can configure how the node event history records are managed.
Procedure
1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
2. Add the following configuration options to the customServiceConfig parameter in the ironicConductors template to configure how node event history records are managed:

   spec:
     ...
     ironic:
       enabled: true
       template:
         rpcTransport: oslo
         databaseInstance: openstack
         ironicAPI:
           ...
         ironicConductors:
         - replicas: 1
           storageRequest: 10G
           networkAttachments:
           - baremetal
           provisionNetwork: baremetal
           customServiceConfig: |
             ...
             [conductor]
             node_history_max_entries=<max_entries>
             node_history_cleanup_interval=<clean_interval>
             node_history_cleanup_batch_count=<max_purge>
             node_history_minimum_days=<min_days>
             ...
         secret: osp-secret

   - Optional: Replace <max_entries> with the maximum number of event records that the Bare Metal Provisioning service records. The oldest recorded events are removed when the maximum number of entries is reached. By default, a maximum of 300 events are recorded. The minimum valid value is 0.
   - Optional: Replace <clean_interval> with the interval in seconds between scheduled cleanups of the node event history entries. By default, the cleanup is scheduled every 86400 seconds, which is once daily. Set to 0 to disable node event history cleanup.
   - Optional: Replace <max_purge> with the maximum number of entries to purge during each cleanup operation. Defaults to 1000.
   - Optional: Replace <min_days> with the minimum number of days to keep the database history entries for nodes. Defaults to 0.

3. Update the control plane:

   $ oc apply -f openstack_control_plane.yaml -n openstack

4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

   $ oc get openstackcontrolplane -n openstack
   NAME                      STATUS    MESSAGE
   openstack-control-plane   Unknown   Setup started

   The OpenStackControlPlane resources are created when the status is "Setup complete".
   Tip: Append the -w option to the end of the get command to track deployment progress.

5. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

   $ oc get pods -n openstack

   The control plane is deployed when all the pods are either completed or running.
Chapter 4. Adding physical machines as bare-metal nodes
Use one of the following methods to enroll a bare-metal node:
- Prepare an inventory file with the node details, import the file into the Bare Metal Provisioning service, and make the nodes available.
- Register a physical machine as a bare-metal node, and then manually add its hardware details and create ports for each of its Ethernet MAC addresses.
4.1. Prerequisites
- The RHOSO environment includes the Bare Metal Provisioning service. For more information, see Enabling the Bare Metal Provisioning service (ironic).
- You are logged on to a workstation that has access to the RHOCP cluster as a user with cluster-admin privileges.
- The oc command line tool is installed on the workstation.
4.2. Enrolling bare-metal nodes with an inventory file
You can create an inventory file that defines the details of each bare-metal node. You import the file into the Bare Metal Provisioning service (ironic) to enroll the bare-metal nodes, and then make each node available.
Some drivers might require specific configuration. For more information, see Bare metal drivers.
Procedure
1. Create an inventory file to define the details of each node, for example, ironic-nodes.yaml. For each node, define the node name and the address and credentials for the bare-metal driver. For details on the available properties for your enabled driver, see Bare metal drivers.

   nodes:
   - name: <node>
     driver: <driver>
     driver_info:
       <driver>_address: <ip>
       <driver>_username: <user>
       <driver>_password: <password>
       [<property>: <value>]

   - Replace <node> with the name of the node.
   - Replace <driver> with a supported bare-metal driver, for example, redfish.
   - Replace <ip> with the IP address of the Bare Metal controller.
   - Replace <user> with your username.
   - Replace <password> with your password.
   - Optional: Replace <property> with a driver property that you want to configure, and replace <value> with the value of the property. For information on the available properties, see Bare metal drivers.

2. Define the node properties and ports:

   nodes:
   - name: <node>
     ...
     properties:
       cpus: <cpu_count>
       cpu_arch: <cpu_arch>
       memory_mb: <memory>
       local_gb: <root_disk>
       root_device:
         serial: <serial>
     network_interface: <interface_type>
     ports:
     - address: <mac_address>

   - Replace <cpu_count> with the number of CPUs.
   - Replace <cpu_arch> with the type of architecture of the CPUs.
   - Replace <memory> with the amount of memory in MiB.
   - Replace <root_disk> with the size of the root disk in GiB. Only required when the machine has multiple disks.
   - Replace <serial> with the serial number of the disk that you want to use for deployment.
   - Optional: Include the network_interface property if you want to override the default network type of flat. You can change the network type to one of the following valid values:
     - neutron: Use to provide tenant-defined networking through the Networking service, where tenant networks are separated from each other and from the provisioning and cleaning provider networks. Required to create a provisioning network with IPv6.
     - noop: Use for standalone deployments where network switching is not required.
   - Replace <mac_address> with the MAC address of the NIC used to PXE boot.

3. Access the remote shell for the OpenStackClient pod from your workstation:

   $ oc rsh -n openstack openstackclient

4. Import the inventory file into the Bare Metal Provisioning service:

   $ openstack baremetal create ironic-nodes.yaml

   The nodes are now in the enroll state.

5. Wait for the extra network interface port configuration data to populate the Networking service (neutron). This process takes at least 60 seconds.
6. Set the provisioning state of each node to available:

   $ openstack baremetal node manage <node>
   $ openstack baremetal node provide <node>

   The Bare Metal Provisioning service cleans the node if you enabled node cleaning.

7. Check that the nodes are enrolled:

   $ openstack baremetal node list

   There might be a delay between enrolling a node and its state being shown.

8. Exit the openstackclient pod:

   $ exit
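As an illustration of the inventory format above, a completed file for a single Redfish-managed node might look like the following sketch. All values (BMC address, credentials, system path, MAC) are placeholders for your environment, and the redfish_system_id path should be verified against your BMC:

```yaml
nodes:
- name: node-0
  driver: redfish
  driver_info:
    redfish_address: https://192.0.2.50        # BMC address (placeholder)
    redfish_username: admin                    # placeholder credentials
    redfish_password: secret
    redfish_system_id: /redfish/v1/Systems/1   # verify this path on your BMC
  properties:
    cpus: 4
    cpu_arch: x86_64
    memory_mb: 32768
    local_gb: 100
  ports:
  - address: "52:54:00:aa:bb:cc"               # MAC of the NIC used to PXE boot (placeholder)
```

Because the file lists multiple nodes under the top-level nodes key, you can enroll a whole rack in one `openstack baremetal create` invocation.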
4.3. Enrolling a bare-metal node manually
Register a physical machine as a bare-metal node, then manually add its hardware details and create ports for each of its Ethernet MAC addresses.
Procedure
Access the remote shell for the
OpenStackClientpod from your workstation:$ oc rsh -n openstack openstackclientAdd a new node:
$ openstack baremetal node create --driver <driver_name> --name <node_name>-
Replace
<driver_name>with the name of the driver, for example,redfish. -
Replace
<node_name>with the name of your new bare-metal node.
-
Replace
- Note the UUID assigned to the node when it is created.
Update the node properties to match the hardware specifications on the node:
$ openstack baremetal node set <node> \ --property cpus=<cpu> \ --property memory_mb=<ram> \ --property local_gb=<disk> \ --property cpu_arch=<arch>-
Replace
<node>with the ID of the bare metal node. -
Replace
<cpu>with the number of CPUs. -
Replace
<ram>with the RAM in MB. -
Replace
<disk>with the disk size in GB. -
Replace
<arch>with the architecture type.
-
Replace
Optional: Set the
network_interfaceproperty to override the default network type offlat:$ openstack baremetal node set <node> --network-interace <network_interface>Replace
<network_interface>with one of the following valid network types:-
neutron: Use to provide tenant-defined networking through the Networking service, where tenant networks are separated from each other and from the provisioning and cleaning provider networks. Required to create a provisioning network with IPv6. -
noop: Use for standalone deployments where network switching is not required.
-
Optional: If you have multiple disks, set the root device hints to inform the deploy ramdisk which disk to use for deployment:
$ openstack baremetal node set <node> \ --property root_device='{"<property>": "<value>"}'-
Replace
<node>with the ID of the bare metal node. Replace
<property>and<value>with details about the disk that you want to use for deployment, for exampleroot_device='{"size": "128"}'RHOSP supports the following properties:
-
- model (String): Device identifier.
- vendor (String): Device vendor.
- serial (String): Disk serial number.
- hctl (String): Host:Channel:Target:Lun for SCSI.
- size (Integer): Size of the device in GB.
- wwn (String): Unique storage identifier.
- wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
- wwn_vendor_extension (String): Unique vendor storage identifier.
- rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
- name (String): The name of the device, for example, /dev/sdb1. Use this property only for devices with persistent names.
Note: If you specify more than one property, the device must match all of those properties.
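The all-properties-must-match rule in the note can be illustrated with a minimal sketch. The matches_hints helper and the device dict shape are hypothetical, and real root device hint matching in ironic also supports comparison operators that this sketch omits:

```python
# Sketch: a disk is selected only if it matches every specified hint.
# (Hypothetical helper; values are compared as strings for illustration.)
def matches_hints(device: dict, hints: dict) -> bool:
    return all(str(device.get(key)) == str(value) for key, value in hints.items())

disks = [
    {"name": "/dev/sda", "size": 256, "rotational": False},
    {"name": "/dev/sdb", "size": 128, "rotational": True},
]
# With root_device='{"size": "128"}', only /dev/sdb matches.
chosen = [d for d in disks if matches_hints(d, {"size": "128"})]
```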
Inform the Bare Metal Provisioning service of the node network card by creating a port with the MAC address of the NIC on the provisioning network:
$ openstack baremetal port create --node <node_uuid> <mac_address>
- Replace <node_uuid> with the unique ID of the bare metal node.
- Replace <mac_address> with the MAC address of the NIC used to PXE boot.
Validate the configuration of the node:
$ openstack baremetal node validate <node>
+------------+--------+---------------------------------------------+
| Interface  | Result | Reason                                      |
+------------+--------+---------------------------------------------+
| boot       | False  | Cannot validate image information for node  |
|            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
|            |        | because one or more parameters are missing  |
|            |        | from its instance_info. Missing are:        |
|            |        | ['ramdisk', 'kernel', 'image_source']       |
| console    | None   | not supported                               |
| deploy     | False  | Cannot validate image information for node  |
|            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
|            |        | because one or more parameters are missing  |
|            |        | from its instance_info. Missing are:        |
|            |        | ['ramdisk', 'kernel', 'image_source']       |
| inspect    | None   | not supported                               |
| management | True   |                                             |
| network    | True   |                                             |
| power      | True   |                                             |
| raid       | True   |                                             |
| storage    | True   |                                             |
+------------+--------+---------------------------------------------+
The validation output Result indicates the following:
- False: The interface has failed validation. If the reason provided includes the missing instance_info parameters ['ramdisk', 'kernel', 'image_source'], this might be because the Compute service populates those missing parameters at the beginning of the deployment process, therefore they have not been set at this point. If you are using a whole disk image, then you might only need to set image_source to pass the validation.
- True: The interface has passed validation.
- None: The interface is not supported for your driver.
Exit the openstackclient pod:
$ exit
4.4. Deploying a bare-metal node with Redfish virtual media boot
You can use Redfish virtual media boot to supply a boot image to the Baseboard Management Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. The node can then boot from the virtual drive into the operating system that exists in the image.
Redfish hardware types support booting deploy, rescue, and user images over virtual media. The Bare Metal Provisioning service (ironic) uses kernel and ramdisk images associated with a node to build bootable ISO images for UEFI or BIOS boot modes at the moment of node deployment. The major advantage of virtual media boot is that you can eliminate the TFTP image transfer phase of PXE and use HTTP GET, or other methods, instead.
To launch bare-metal instances with the redfish hardware type over virtual media, set the boot interface of each bare-metal node to redfish-virtual-media and, for UEFI nodes, define the EFI System Partition (ESP) image. Then configure an enrolled node to use Redfish virtual media boot.
Prerequisites
- The bare-metal node is registered and enrolled.
- The IPA and instance images are available in the Image Service (glance).
- For UEFI nodes, an EFI system partition image (ESP) is available in the Image Service (glance).
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient
Set the Bare Metal service boot interface to redfish-virtual-media:
$ openstack baremetal node set --boot-interface redfish-virtual-media <node_name>
- Replace <node_name> with the name of the node.
For UEFI nodes, define the EFI System Partition (ESP) image:
$ openstack baremetal node set --driver-info bootloader=<esp_image> <node>
- Replace <esp_image> with the image UUID or URL for the ESP image.
- Replace <node> with the name of the node.
Note: For BIOS nodes, do not complete this step.
Create a port on the bare-metal node and associate the port with the MAC address of the NIC on the bare metal node:
$ openstack baremetal port create --pxe-enabled True --node <node_uuid> <mac_address>
- Replace <node_uuid> with the UUID of the bare-metal node.
- Replace <mac_address> with the MAC address of the NIC on the bare-metal node.
Exit the openstackclient pod:
$ exit
4.5. Creating flavors for launching bare-metal instances
You must create flavors that your cloud users can use to request bare-metal instances. You can specify which bare-metal nodes should be used for bare-metal instances launched with a particular flavor by using a resource class. You can tag bare-metal nodes with resource classes that identify the hardware resources on the node, for example, GPUs. The cloud user can select a flavor with the GPU resource class to create an instance for a vGPU workload. The Compute scheduler uses the resource class to identify suitable host bare-metal nodes for instances.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient
Retrieve a list of your nodes to identify their UUIDs:
$ openstack baremetal node list
Tag each bare-metal node with a custom bare-metal resource class:
$ openstack baremetal node set \ --resource-class baremetal.<CUSTOM> <node>
- Replace <CUSTOM> with a string that identifies the purpose of the resource class. For example, set to GPU to create a custom GPU resource class that you can use to tag bare metal nodes that you want to designate for GPU workloads.
- Replace <node> with the ID of the bare metal node.
Create a flavor for bare-metal instances:
$ openstack flavor create --id auto \ --ram <ram_size_mb> --disk <disk_size_gb> \ --vcpus <no_vcpus> baremetal
- Replace <ram_size_mb> with the RAM of the bare metal node, in MB.
- Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB.
- Replace <no_vcpus> with the number of CPUs on the bare metal node.
Note: These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size.
Associate the flavor for bare-metal instances with the custom resource class:
$ openstack flavor set \ --property resources:CUSTOM_BAREMETAL_<CUSTOM>=1 \ baremetal
To determine the name of a custom resource class that corresponds to a resource class of a bare-metal node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix it with CUSTOM_.
Note: A flavor can request only one instance of a bare-metal resource class.
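The naming rule described above can be expressed as a short helper; the function name is illustrative, and this is a sketch of the documented conversion rather than ironic's own code:

```python
import re

def custom_resource_class(resource_class: str) -> str:
    """Convert a bare-metal resource class to a placement custom resource
    name: uppercase, punctuation replaced with underscores, CUSTOM_ prefix."""
    return "CUSTOM_" + re.sub(r"[^A-Z0-9]", "_", resource_class.upper())

# The resource class set with --resource-class baremetal.GPU becomes:
print(custom_resource_class("baremetal.GPU"))  # CUSTOM_BAREMETAL_GPU
```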
Set the following flavor properties to prevent the Compute scheduler from using the bare-metal flavor properties to schedule instances:
$ openstack flavor set \ --property resources:VCPU=0 \ --property resources:MEMORY_MB=0 \ --property resources:DISK_GB=0 baremetal
Verify that the new flavor has the correct values:
$ openstack flavor list
Exit the openstackclient pod:
$ exit
4.6. Bare-metal node provisioning states
A bare-metal node transitions through several provisioning states during its lifetime. API requests and conductor events performed on the node initiate the transitions. There are two categories of provisioning states: "stable" and "in transition".
Use the following table to understand the node provisioning states and the actions you can perform to transition a node from one state to another.
| State | Category | Description |
|---|---|---|
| enroll
| Stable | The initial state of each node. For information on enrolling a node, see Adding physical machines as bare metal nodes. |
| verifying
| In transition |
The Bare Metal Provisioning service validates that it can manage the node by using the |
| manageable
| Stable |
The node is transitioned to the manageable state when the Bare Metal Provisioning service has verified that it can manage the node. You can transition the node from the
You must move a node to the
Move a node into the |
| inspecting
| In transition |
The Bare Metal Provisioning service uses node introspection to update the hardware-derived node properties to reflect the current state of the hardware. The node transitions to |
| inspect wait
| In transition |
The provision state that indicates that an asynchronous inspection is in progress. If the node inspection is successful, the node transitions to the |
| inspect failed
| Stable |
The provisioning state that indicates that the node inspection failed. You can transition the node from the
|
| cleaning
| In transition |
Nodes in the
|
| clean wait
| In transition |
Nodes in the
You can interrupt the cleaning process of a node in the |
| available
| Stable |
After nodes have been successfully preconfigured and cleaned, they are moved into the
|
| deploying
| In transition |
Nodes in the
|
| wait call-back
| In transition |
Nodes in the
You can interrupt the deployment of a node in the |
| deploy failed
| Stable |
The provisioning state that indicates that the node deployment failed. You can transition the node from the
|
| active
| Stable |
Nodes in the
|
| deleting
| In transition |
When a node is in the |
| error
| Stable |
If a node deletion is unsuccessful, the node is moved into the
|
| adopting
| In transition |
You can use the |
| rescuing
| In transition |
Nodes in the
|
| rescue wait
| In transition |
Nodes in the
You can interrupt the rescue operation of a node in the |
| rescue failed
| Stable |
The provisioning state that indicates that the node rescue failed. You can transition the node from the
|
| rescue
| Stable |
Nodes in the
|
| unrescuing
| In transition |
Nodes in the |
| unrescue failed
| Stable |
The provisioning state that indicates that the node unrescue operation failed. You can transition the node from the
|
Chapter 5. Creating and managing resources for bare-metal instances
As a cloud operator, you can create and manage resources for bare-metal workloads and enable your cloud users to create bare-metal instances.
You can create the following resources for bare-metal workloads:
- Bare-metal instances
- Images for bare-metal instances
- Virtual network interfaces (VIFs) for bare-metal nodes
- Port groups
You can perform the following resource management tasks:
- Manual node cleaning
- Attach a virtual network interface (VIF) to a bare-metal instance
5.1. Prerequisites
- The RHOSO environment includes the Bare Metal Provisioning service. For more information, see Enabling the Bare Metal Provisioning service (ironic).
- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
- The oc command line tool is installed on the workstation.
5.2. Launching bare-metal instances
You can launch a bare-metal instance by using the OpenStack Client CLI.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient
Create the bare-metal instance:
$ openstack server create \ --nic net-id=<network_uuid> \ --flavor baremetal \ --image <image_uuid> \ myBareMetalInstance
- Replace <network_uuid> with the unique identifier for the network that you created to use with the Bare Metal Provisioning service.
- Replace <image_uuid> with the unique identifier for the image that has the software profile that your instance requires.
Check the status of the instance:
$ openstack server list --name myBareMetalInstance
Exit the openstackclient pod:
$ exit
5.3. Images for launching bare-metal instances
A Red Hat OpenStack Services on OpenShift (RHOSO) environment that includes the Bare Metal Provisioning service (ironic) requires two sets of images:
- Deploy images: The deploy images are the agent.ramdisk and agent.kernel images that the Bare Metal Provisioning agent (ironic-python-agent) requires to boot the RAM disk over the network and copy the user image to the disk.
- User images: The images the cloud user uses to provision their bare-metal instances. The user image consists of a kernel image, a ramdisk image, and a main image. The main image is either a root partition, or a whole-disk image:
- Whole-disk image: An image that contains the partition table and boot loader.
- Root partition image: Contains only the root partition of the operating system.
Compatible whole-disk RHEL guest images should work without modification. To create your own custom disk image, see Creating operating system images for instances in Performing storage operations.
5.4. Booting an ISO image directly for use as a RAM disk
You can boot a bare-metal instance from a RAM disk or an ISO image if you want to boot an instance with PXE, iPXE, or Virtual Media, and use the instance memory for local storage. This is useful for advanced scientific and ephemeral workloads where writing an image to the local storage is not required or desired.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient
Specify ramdisk as the deploy interface for the bare-metal node that boots from an ISO image:
$ openstack baremetal node set <node_UUID> --deploy-interface ramdisk
Tip: You can configure the deploy interface when you create the bare-metal node by adding --deploy-interface ramdisk to the openstack baremetal node create command. For information on how to create a bare-metal node, see Enrolling a bare-metal node manually.
Update the bare-metal node to boot an ISO image:
$ openstack baremetal node set <node_UUID> \ --instance-info boot_iso=<boot_iso_url>
- Replace <node_UUID> with the UUID of the bare-metal node that you want to boot from an ISO image.
- Replace <boot_iso_url> with the URL of the boot ISO file. You can specify the boot ISO file URL by using one of the following methods:
  - HTTP or HTTPS URL
  - File path URL
  - Image service (glance) object UUID
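The three accepted boot_iso reference forms can be sketched as a small validation helper; the classify_boot_iso name and its return labels are illustrative assumptions, not part of the CLI:

```python
import re
from urllib.parse import urlparse

# Hypothetical helper: classify which of the three supported boot_iso
# reference forms a value uses before passing it to the node.
def classify_boot_iso(value: str) -> str:
    if urlparse(value).scheme in ("http", "https"):
        return "web URL"
    if value.startswith("file://"):
        return "file path URL"
    # Glance object UUIDs have the canonical 8-4-4-4-12 hex form.
    if re.fullmatch(
        r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", value
    ):
        return "glance UUID"
    raise ValueError(f"unrecognized boot_iso reference: {value}")
```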
Deploy the bare-metal node as an ISO image:
$ openstack baremetal node deploy <node_UUID>
Exit the openstackclient pod:
$ exit
5.5. Creating the virtual network interfaces (VIFs) for bare-metal instances
Cloud users can attach their bare-metal instances to the network interfaces you create for the bare-metal workloads. You must create the virtual network interfaces (VIFs) for the cloud user to select for attachment.
5.5.1. Bare Metal Provisioning service virtual network interfaces (VIFs)
The Bare Metal Provisioning service (ironic) uses the Networking service (neutron) to manage the attachment state of the virtual network interfaces (VIFs). A VIF is a Networking service port, referred to by the port ID, which is a UUID value. A VIF can be available across a limited number of physical networks, dependent upon the cloud’s operating configuration and operating constraints.
The Bare Metal Provisioning service can also attach the bare-metal instance to a separate provider network to improve the overall operational security.
Each VIF must be attached to a port or port group, therefore the maximum number of VIFs is determined by the number of configured and available ports represented in the Bare Metal Provisioning service.
The network interface is one of the driver interfaces that manages the network switching for bare-metal instances. The type of network interface you create influences the operation of your bare-metal workloads. The following network interfaces are available to use with the Bare Metal Provisioning service:
- noop: Used for standalone deployments, and does not perform any network switching.
- flat: Places all nodes into a single provider network that is pre-configured on the Networking service and physical equipment. Nodes remain physically connected to this network during their entire life cycle. The supplied VIF attachment record is updated with new DHCP records as needed. When using this network interface, the VIF needs to be created on the same network that the bare-metal node is physically attached to.
- neutron: Provides tenant-defined networking through the Networking service, separating tenant networks from each other and from the provisioning and cleaning provider networks. Nodes move between these networks during their life cycle. This interface requires Networking service support for the switches attached to the bare-metal instances so they can be programmed. This interface requires the ML2 plugin OVN mechanism driver or other SDN integrations to facilitate port configuration on the network. Use the neutron interface when your environment uses IPv6.
5.5.2. How the Bare Metal Provisioning service manages VIFs when provisioning a bare-metal node
When provisioning, by default the Bare Metal Provisioning service (ironic) attempts to attach all PXE-enabled ports to the provisioning network. If you have neutron.add_all_ports enabled, then the Bare Metal Provisioning service attempts to bind all ports to the required service network, not only the ports with pxe_enabled set to True.
After the bare-metal nodes are provisioned, and before the bare-metal nodes are moved to the ACTIVE provisioning state, the previously attached ports are unbound. The process for unbinding is dependent on the network interface:
- flat: All the requested VIFs with all binding configurations in all states are unbound.
- neutron: The VIFs requested by the cloud user are attached to the bare-metal node for the first time, because the VIFs that the Bare Metal Provisioning service created were being deleted during the provisioning process.
The same flow and logic applies to the cleaning, service, and rescue processes.
5.5.3. Creating a virtual network interface (VIF) for bare-metal nodes
Use the Networking service (neutron) to create the port that serves as the virtual network interface (VIF). If you are using the neutron network interface, then you must also create a physical connection to the underlying physical network by creating a Bare Metal Provisioning service (ironic) port with a binding profile. The binding profile is required by the Networking service’s ML2 mechanism driver when a VIF is attached to a bare-metal instance. The binding profile includes the VNIC_BAREMETAL port type, the bare-metal node UUID, and local link connection information that identifies the tenant network that the ML2 mechanism driver must attach to the physical bare-metal port.
The binding profile information is populated through the introspection process by using LLDP data that is broadcast from the switches, therefore the switches must have LLDP enabled. You need to manually set or update the binding profile when there is a physical networking change, for example, when a bare-metal port’s cable has been moved to a different port on a switch, or the switch has been replaced.
Decoding LLDP data is performed as a best effort action. Some switch vendors, or changes in switch vendor firmware might impact field decoding.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient
Create the virtual network interface (VIF):
$ openstack port create --network <network> <name>
If you are using the neutron network interface, then create a Bare Metal Provisioning service port with the binding profile information:
$ openstack baremetal port create <physical_mac_address> --node <node_uuid> \ --local-link-connection switch_id=<switch_mac_address> \ --local-link-connection switch_info=<switch_hostname> \ --local-link-connection port_id=<switch_port_for_connection> \ --pxe-enabled true \ --physical-network <phys_net>
- Replace <switch_mac_address> with the MAC address or OpenFlow-based datapath_id of the switch.
- Replace <switch_hostname> with the hostname of the switch.
- Replace <switch_port_for_connection> with the port ID on the switch, for example, Gig0/1, or rep0-0.
- Replace <phys_net> with the name of the physical network you want to associate with the bare-metal port. The Bare Metal Provisioning service uses the physical network to map the Networking service virtual ports to physical ports and port groups. If not set, then any VIF can be mapped to that port when no bare-metal port with a suitable physical network assignment exists.
Exit the openstackclient pod:
$ exit
5.6. Configuring port groups in the Bare Metal Provisioning service
Port group functionality for bare-metal nodes is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should be used only for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
Port groups (bonds) provide a method to aggregate multiple network interfaces into a single "bonded" interface. Port group configuration always takes precedence over an individual port configuration. During interface attachment, port groups have a higher priority than the ports, so they are used first. Currently, it is not possible to specify preference for port or port group in an interface attachment request. If a port group is available, the interface attachment will use it. Port groups that do not have any ports are ignored.
If a port group has a physical network, then all the ports in that port group must have the same physical network. The Bare Metal Provisioning service uses configdrive to support configuration of port groups in the instances.
Bare Metal Provisioning service API version 1.26 and later supports port group configuration.
To configure port groups in a bare metal deployment, you must configure the port groups on the switches manually. You must ensure that the mode and properties on the switch correspond to the mode and properties on the bare metal side as the naming can vary on the switch.
You cannot use port groups for provisioning and cleaning if you need to boot a deployment using iPXE.
With port group fallback, all the ports in a port group can fallback to individual switch ports when a connection fails. Based on whether a switch supports port group fallback or not, you can use the --support-standalone-ports and --unsupport-standalone-ports options.
5.6.1. Prerequisites
- The RHOSO environment includes the Bare Metal Provisioning service. For more information, see Enabling the Bare Metal Provisioning service (ironic).
- You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges.
- The oc command line tool is installed on the workstation.
5.6.2. Configuring port groups in the Bare Metal Provisioning service
Create a port group to aggregate multiple network interfaces into a single bonded interface.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient
Create a port group:
$ openstack baremetal port group create \ --node <node_uuid> --name <group_name> \ [--address <mac_address>] [--mode <mode>] \ --property miimon=100 --property xmit_hash_policy="layer2+3" [--support-standalone-ports]
- Replace <node_uuid> with the UUID of the node that this port group belongs to.
- Replace <group_name> with the name for this port group.
- Optional: Replace <mac_address> with the MAC address for the port group. If you do not specify an address, the deployed instance port group address is the same as the Networking service port. If you do not attach the Networking service port, the port group configuration fails.
- Optional: Replace <mode> with the mode of the port group.
- Specify if the group supports fallback to standalone ports.
Note: You must configure port groups manually in standalone mode either in the image or by generating the configdrive and adding it to the node's instance_info. Ensure that you have cloud-init version 0.7.7 or later for the port group configuration to work.
Associate a port with a port group:
During port creation:
$ openstack baremetal port create --node <node_uuid> --address <mac_address> --port-group <group_name>
During port update:
$ openstack baremetal port set <port_uuid> --port-group <group_uuid>
Boot an instance by providing an image that has cloud-init or supports bonding.
To check if the port group is configured properly, run the following command:
# cat /proc/net/bonding/bondX
Here, X is a number that cloud-init generates automatically for each configured port group, starting with 0 and incremented by one for each configured port group.
Exit the openstackclient pod:
$ exit
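A quick way to check the bond status programmatically is to parse the /proc/net/bonding/bondX file. This parser is a minimal sketch that assumes the human-readable output format of the Linux bonding driver; the function name is illustrative:

```python
# Sketch: extract the bonding mode and slave interfaces from the text of a
# /proc/net/bonding/bondX status file (assumed Linux bonding driver format).
def parse_bond_status(text: str) -> dict:
    info = {"mode": None, "slaves": []}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Bonding Mode:"):
            info["mode"] = line.split(":", 1)[1].strip()
        elif line.startswith("Slave Interface:"):
            info["slaves"].append(line.split(":", 1)[1].strip())
    return info

sample = """\
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Slave Interface: eth0
Slave Interface: eth1
"""
status = parse_bond_status(sample)
```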
5.7. Cleaning nodes manually
The Bare Metal Provisioning service (ironic) cleans nodes automatically when they are unprovisioned to prepare them for provisioning. You can perform manual cleaning on specific nodes as required. Node cleaning has two modes:
- Metadata only clean: Removes partitions from all disks on the node. The metadata only mode of cleaning is faster than a full clean, but less secure because it erases only partition tables. Use this mode only on trusted tenant environments.
- Full clean: Removes all data from all disks, using either ATA secure erase or by shredding. A full clean can take several hours to complete.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient
Check the current state of the node:
$ openstack baremetal node show \ -f value -c provision_state <node>
- Replace <node> with the name or UUID of the node to clean.
If the node is not in the manageable state, then set it to manageable:
$ openstack baremetal node manage <node>
Clean the node:
$ openstack baremetal node clean <node> \ --clean-steps '[{"interface": "deploy", "step": "<clean_mode>"}]'
- Replace <node> with the name or UUID of the node to clean.
- Replace <clean_mode> with the type of cleaning to perform on the node:
  - erase_devices: Performs a full clean.
  - erase_devices_metadata: Performs a metadata only clean.
Wait for the clean to complete, then check the status of the node:
- manageable: The clean was successful, and the node is ready to provision.
- clean failed: The clean was unsuccessful. Inspect the last_error field for the cause of failure.
Exit the openstackclient pod:
$ exit
5.8. Attaching a virtual network interface (VIF) to a bare-metal instance
To attach a bare-metal instance to the bare-metal network interface, the cloud user can use the Compute service (nova) or the Bare Metal Provisioning service (ironic).
- Compute service: Cloud users use the openstack server add network command. For more information, see Attaching a network to an instance.
Note:
- When using the Compute service you must explicitly declare the port when creating the instance. When the Compute service makes a request to the Bare Metal Provisioning service to create an instance, the Compute service attempts to record all the VIFs the user requested to be attached in the Bare Metal Provisioning service to generate the metadata.
- You cannot specify which physical port to attach a VIF to when using the Compute service. If you want to explicitly declare which port to map to, then instead use the Bare Metal Provisioning service to create the attachment.
- Bare Metal Provisioning service: Cloud users use the openstack baremetal node vif attach command to attach a VIF to a bare-metal instance. For more information about virtual network interfaces (VIFs), see Bare Metal Provisioning service virtual network interfaces (VIFs).
The following procedure uses the Bare Metal Provisioning service to attach a bare-metal instance to a network. The Bare Metal Provisioning service creates the VIF attachment by using the UUID of the port you created with the Networking service.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient
Retrieve the UUID of the bare-metal instance you want to attach the VIF to:
$ openstack server list
Retrieve the UUID of the VIF you want to attach to your node:
$ openstack port list
Optional: Retrieve the UUID of the bare-metal port you want to map the VIF to:
$ openstack baremetal port list
Attach the VIF to your bare-metal instance:
$ openstack baremetal node vif attach [--port-uuid <port_uuid>] \ <node> <vif_id>
- Optional: Replace <port_uuid> with the UUID of the bare-metal port to attach the VIF to.
- Replace <node> with the name or UUID of the bare-metal instance you want to attach the VIF to.
- Replace <vif_id> with the name or UUID of the VIF to attach to the bare-metal instance.
Exit the openstackclient pod:
$ exit
5.8.1. How the Bare Metal Provisioning service attaches the VIF to a bare-metal instance
When a cloud user requests that a virtual network interface (VIF) is attached to their bare-metal instance by using the openstack baremetal node vif attach command without a declared port or port group preference, the Bare Metal Provisioning service (ironic) selects a suitable unattached port or port group by evaluating the following criteria in order:
- Ports or port groups do not have a physical network or have a physical network that matches one of the VIF’s available physical networks.
- Prefer ports and port groups that have a physical network to ports and port groups that do not have a physical network.
- Prefer port groups to ports.
- Prefer ports with PXE enabled.
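The selection order above can be sketched as follows. The candidate dict shape and the pick_attachment name are illustrative assumptions, not ironic's actual implementation:

```python
# Sketch: pick an unattached port or port group for a VIF using the
# documented priority: physical-network match filter first, then prefer
# candidates with a physical network, then port groups, then PXE-enabled.
def pick_attachment(candidates, vif_physnets):
    eligible = [
        c for c in candidates
        if c["physical_network"] is None
        or c["physical_network"] in vif_physnets
    ]
    eligible.sort(
        key=lambda c: (
            c["physical_network"] is not None,  # criterion 2
            c["is_port_group"],                 # criterion 3
            c["pxe_enabled"],                   # criterion 4
        ),
        reverse=True,
    )
    return eligible[0] if eligible else None

candidates = [
    {"name": "port1", "is_port_group": False,
     "physical_network": None, "pxe_enabled": True},
    {"name": "pg1", "is_port_group": True,
     "physical_network": "physnet1", "pxe_enabled": False},
]
best = pick_attachment(candidates, {"physnet1"})
```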
When the Bare Metal Provisioning service attaches any VIF to a bare-metal instance it explicitly sets the MAC address for the physical port to which the VIF is bound. If a node is already in an ACTIVE state, then the Networking service (neutron) updates the VIF attachment.
When the Bare Metal Provisioning service unbinds the VIF, it makes a request to the Networking service to reset the assigned MAC address to avoid conflicts with the Networking service’s unique hardware MAC address requirement.
5.8.2. Attaching and detaching virtual network interfaces
The Bare Metal Provisioning service has an API that you can use to manage the mapping between virtual network interfaces, for example the interfaces in the Networking service (neutron), and your physical interfaces (NICs). You can configure these interfaces for each bare-metal node to set the virtual network interface (VIF) to physical network interface (PIF) mapping logic.
Procedure
Access the remote shell for the OpenStackClient pod from your workstation:
$ oc rsh -n openstack openstackclient
List the VIF IDs that are connected to the bare-metal node:
$ openstack baremetal node vif list <node>
+--------------------------------------+
| ID                                   |
+--------------------------------------+
| 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16 |
+--------------------------------------+
- Replace <node> with the name or UUID of the bare-metal node.
After the VIF is attached, the Bare Metal Provisioning service updates the virtual port in the Networking service with the MAC address of the physical port. Check this port address:

$ openstack port show 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16 -c mac_address -c fixed_ips
+-------------+-----------------------------------------------------------------------------+
| Field       | Value                                                                       |
+-------------+-----------------------------------------------------------------------------+
| fixed_ips   | ip_address='192.168.24.9', subnet_id='1d11c677-5946-4733-87c3-23a9e06077aa' |
| mac_address | 00:2d:28:2f:8d:95                                                           |
+-------------+-----------------------------------------------------------------------------+

Create a new port on the network where you created the bare-metal node:

$ openstack port create --network baremetal --fixed-ip ip-address=192.168.24.24 <port_name>

Remove the port from the bare-metal instance it was attached to:

$ openstack server remove port <instance_name> 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16

Check that the IP address no longer exists on the list:

$ openstack server list

Check if there are VIFs attached to the node:

$ openstack baremetal node vif list <node>
$ openstack port list

Add the newly created port:

$ openstack server add port <instance_name> <port_name>

Verify that the new IP address shows the new port:

$ openstack server list

Check if the VIF ID is the UUID of the new port:

$ openstack baremetal node vif list <node>
+--------------------------------------+
| ID                                   |
+--------------------------------------+
| 6181c089-7e33-4f1c-b8fe-2523ff431ffc |
+--------------------------------------+

Check if the Networking service port MAC address is updated and matches one of the Bare Metal Provisioning service ports:

$ openstack port show 6181c089-7e33-4f1c-b8fe-2523ff431ffc -c mac_address -c fixed_ips
+-------------+------------------------------------------------------------------------------+
| Field       | Value                                                                        |
+-------------+------------------------------------------------------------------------------+
| fixed_ips   | ip_address='192.168.24.24', subnet_id='1d11c677-5946-4733-87c3-23a9e06077aa' |
| mac_address | 00:2d:28:2f:8d:95                                                            |
+-------------+------------------------------------------------------------------------------+

Reboot the bare-metal node so that it recognizes the new IP address:

$ openstack server reboot overcloud-baremetal-0

After you detach or attach interfaces, the bare-metal OS removes, adds, or modifies the network interfaces that have changed. When you replace a port, a DHCP request obtains the new IP address, but this might take some time because the old DHCP lease is still valid. To initiate these changes immediately, reboot the bare-metal node.
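If you script this procedure, you can capture the attached VIF IDs in machine-readable form. The following Python sketch is illustrative only: it parses JSON output as produced by `openstack baremetal node vif list <node> -f json`, and the sample data and the <instance_name> placeholder are assumptions, not real command output.

```python
# Illustrative sketch: parse the JSON output of
# `openstack baremetal node vif list <node> -f json` and print the
# corresponding detach commands. The sample output below is hypothetical.
import json

vif_list_json = '[{"ID": "4475bc5a-6f6e-466d-bcb6-6c2dce0fba16"}]'

vif_ids = [row["ID"] for row in json.loads(vif_list_json)]
for vif_id in vif_ids:
    # <instance_name> is a placeholder for the bare-metal instance name.
    print(f"openstack server remove port <instance_name> {vif_id}")
```

You can pipe the printed commands to a shell, or substitute direct API calls, once you confirm the output format against your deployment.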
Chapter 6. Enabling tenant-defined networking for BMaaS workloads
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
You can enable tenant-defined networking on the cloud, with tenant networks isolated from each other and from the provisioning and cleaning provider networks. To enable tenant-defined networking, you must use the Networking Generic Switch ML2 plugin to configure the physical network switches attached to the bare-metal nodes on the Networking service.
You can configure multiple physical network switches. You must configure and add each switch to the control plane individually. To configure the switches on the control plane, perform the following tasks:
- Create a generic switch configuration file.
- Create a Secret custom resource (CR) that contains the generic switch configuration file.
- Mount the generic switch configuration file on the control plane through the neutron service.
6.1. Limitations
- Routed spine-leaf networks are not supported.
- Static provisioning network interfaces are not supported.
- Contact Red Hat Support if you need to use a networking-generic-switch plugin with port groups, such as bonded ports or port channels.
6.2. Prerequisites
- A user account with privileges to SSH into the switch by using the management IP address, run sudo, and run the configuration commands required to pre-configure the switch. For more information about how to authenticate the user account for vendor-specific switches and what switch pre-configuration is required, see Preparing vendor-specific switches.
- Inter-switch links must be pre-configured as VLAN trunk ports.
- Ports for workloads must be in Layer-2 mode.
6.3. Preparing vendor-specific switches
The Networking Generic Switch driver uses the ngs_trunk_ports configuration option to tag switch ports as permitted when creating and deleting attachments. You might need to perform additional trunk configuration.
Dell Force10 switch running OS10 (netmiko_dell_os10)
If the SSH server is not already enabled, use the following command to enable it:
$ ip ssh server enable
If password authentication is not already enabled, use the following command to enable it:
$ ip ssh server password-authentication
Switches running SONiC
Links for connected physical hosts must be in Layer-2 mode. Use the following commands to set the host to Layer-2 mode:
$ sudo config interface ip remove $INTERFACE $IP_ADDRESS/$CIDR
$ sudo config switchport mode access $INTERFACE
6.4. Configuring the physical switches
Create a configuration file that configures the physical network switches.
Procedure
Create a configuration file for the physical switches named 03-ml2-genericswitch.conf. Specify the location of the session log file that captures the SSH session commands and responses:
[ngs]
session_log_file = /var/log/neutron/ngs.log

Add VLAN to the list of supported tenant network types:

[ml2]
tenant_network_types = geneve,vlan

Use a comma-separated list to map the physical networks to the segmentation ranges for the tenant networks:

[ml2_type_vlan]
network_vlan_ranges = <network_name>:<range_start>:<range_end>,<network_name>:<range_start>:<range_end>
- Replace <network_name> with the name of the physical network.
- Replace <range_start> with the VLAN ID for the start of the VLAN range.
- Replace <range_end> with the VLAN ID for the end of the VLAN range.

Tip: You can apply multiple VLAN ranges to a single physical network by repeating the physical network name multiple times.
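The network_vlan_ranges syntax can be validated before you deploy. The following Python sketch is an illustration, not part of the Networking service: it parses a network_vlan_ranges value and checks that each range is a valid 802.1Q VLAN ID span.

```python
# Illustrative sketch: parse a network_vlan_ranges value such as
# "physnet1:100:199,physnet1:300:399" and validate each VLAN range.
def parse_vlan_ranges(value):
    """Return a dict mapping physical network name -> list of (start, end)."""
    ranges = {}
    for entry in value.split(","):
        name, start, end = entry.split(":")
        start, end = int(start), int(end)
        # Valid 802.1Q VLAN IDs are 1-4094, and the range must not be inverted.
        if not 1 <= start <= end <= 4094:
            raise ValueError(f"invalid VLAN range {start}:{end} for {name}")
        # Repeating a physical network name applies multiple ranges to it.
        ranges.setdefault(name, []).append((start, end))
    return ranges

print(parse_vlan_ranges("physnet1:100:199,physnet1:300:399,physnet2:500:599"))
```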
Configure each physical network switch with the type of switch device and details of how to connect to the switch device. The parameters required depend on the switch device.
[genericswitch:<switch_name>]
device_type = <device_type>
ngs_mac_address = <mac_address>
<parameter> = <parameter_value>

- Replace <switch_name> with the name of the physical network switch.
- Replace <device_type> with the networking-generic-switch driver to use for the device. For example:
  - For switches that run SONiC, set to netmiko_sonic.
  - For the Cisco Nexus switch, set to netmiko_cisco_nxos.
  - For the Dell Force 10 switch running OS10, set to netmiko_dell_force10.
- Replace <mac_address> with the MAC address of the switch device.
- Optional: Replace <parameter> and <parameter_value> with any other configurations required for the switch device. For more information on the available configuration options, see Physical network switch configuration options.
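Because the file uses standard INI syntax, you can generate or check it programmatically. The following Python sketch assembles a minimal 03-ml2-genericswitch.conf with configparser; the switch name, MAC address, IP address, and credentials are placeholder values, not real device details.

```python
# Illustrative sketch: build a minimal 03-ml2-genericswitch.conf in memory.
# All device values below are placeholders, not real switch details.
import configparser
import io

conf = configparser.ConfigParser()
conf["ngs"] = {"session_log_file": "/var/log/neutron/ngs.log"}
conf["ml2"] = {"tenant_network_types": "geneve,vlan"}
conf["ml2_type_vlan"] = {"network_vlan_ranges": "physnet1:100:199"}
conf["genericswitch:leaf0"] = {
    "device_type": "netmiko_sonic",          # a switch that runs SONiC
    "ngs_mac_address": "aa:bb:cc:dd:ee:ff",  # placeholder switch MAC
    "ip": "192.0.2.10",                      # placeholder management IP
    "username": "admin",
}

buf = io.StringIO()
conf.write(buf)
print(buf.getvalue())
```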
6.4.1. Physical network switch configuration options
Use the following parameters as required to configure each physical network switch.
| Parameter | Description |
|---|---|
|
|
(Mandatory) The networking-generic-switch driver to use for the device, for example, |
|
|
(Mandatory) The MAC address of the switch bridge that manages the switch device. The MAC address is used to match the |
|
| A comma-separated list of allowed ports for the switch. If not set, all ports are allowed. |
|
| The management IP address that connects to the SSH server on the switch and enables switch management. |
|
| The username for the switch device. |
|
| The password that authenticates access to the switch device when not using key-based authentication. |
|
|
Set |
|
|
If |
|
| The secret password required on some switch chassis and in specific configurations. |
|
| The default VLAN to revert port configurations to when ports are detached. |
|
| The list of trunk ports that you must configure with VLAN networks, so that a tenant VLAN is available on the network. This setting is referenced when any network is created or removed from the environment and can also be required by the network architecture and switch configuration. |
|
| A comma-separated list of physical networks that are available on this switch device. This setting is optional and is useful when you have distinct physical networks in your Neutron configuration. |
6.5. Creating the Secret CR for the physical switches
You must create a Secret custom resource (CR) that includes the 03-ml2-genericswitch.conf configuration file for the physical network switches. If you are using key-based authentication, then also include the authentication keys in the Secret CR.
If you need to update the Secret CR after you deployed the control plane, then you must create a new Secret CR and update the control plane with the new Secret CR. Updating the existing Secret CR does not automatically update or restart the neutron service on the control plane.
Procedure
Create the Secret CR for the physical network switches and apply it to the cluster:

$ oc create secret generic neutron-switch-config \
  --save-config --from-file=03-ml2-genericswitch.conf \
  [--from-file=<key_file_name>] -n openstack \
  -o yaml | oc apply -f -
Optional: Replace <key_file_name> with the name and location of your SSH private key file. The switch needs to be configured with the corresponding public key file.
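When you manage several switch configuration files, it can help to assemble the oc command programmatically. This Python sketch builds the same oc create secret command line from a list of files; the file names are placeholders.

```python
# Illustrative sketch: build the `oc create secret` command line shown above
# from a list of files to embed in the Secret CR. File names are placeholders.
import shlex

def build_secret_cmd(name, files, namespace="openstack"):
    cmd = ["oc", "create", "secret", "generic", name, "--save-config"]
    for path in files:
        cmd.append(f"--from-file={path}")
    cmd += ["-n", namespace, "-o", "yaml"]
    return shlex.join(cmd)

print(build_secret_cmd("neutron-switch-config",
                       ["03-ml2-genericswitch.conf", "switch_key"]))
```

Using shlex.join keeps the command safely quoted if a file name contains spaces.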
6.6. Adding the switch configuration to the control plane
To add the switch configuration to the control plane, you mount the 03-ml2-genericswitch.conf switch configuration file from the Secret custom resource (CR) through the neutron service configuration in the OpenStackControlPlane CR.
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, on your workstation. Add the following configuration to the ironic service to change the network interface to neutron:

spec:
  ...
  ironic:
    template:
      customServiceConfig: |
        [DEFAULT]
        default_network_interface=neutron

Add the generic switch as an ML2 mechanism driver to the neutron service specification:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  secret: osp-secret
  ...
  neutron:
    ...
    template:
      ml2MechanismDrivers:
        - genericswitch
        - ovn

Add the extraMounts parameter to the neutron service specification to mount the configuration of the physical network switches:

neutron:
  ...
  template:
    ...
    extraMounts:
      - name: switchConf
        extraVol:
          - volumes:
              - name: neutron-switch-config
                secret:
                  secretName: neutron-switch-config
            mounts:
              - name: neutron-switch-config
                mountPath: /etc/neutron/neutron.conf.d/03-ml2-genericswitch.conf
                subPath: 03-ml2-genericswitch.conf
                readOnly: true

If you must assign authentication keys to the physical network switches, then add the private key files to the mounts:

mounts:
  - name: neutron-switch-config
    mountPath: /etc/neutron/neutron.conf.d/03-ml2-genericswitch.conf
    subPath: 03-ml2-genericswitch.conf
    readOnly: true
  - name: neutron-switch-config
    mountPath: /etc/neutron/<key_file_name>
    subPath: <key_file_name>
    readOnly: true

The mountPath should match the path to the key file defined for the switch device in the 03-ml2-genericswitch.conf file.

Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Chapter 7. Troubleshooting the Bare Metal Provisioning service
Use the following procedures to diagnose issues in a Red Hat OpenStack Services on OpenShift (RHOSO) environment that includes the Bare Metal Provisioning service (ironic).
7.1. Querying node event history records
You can query the node event history records to identify issues with bare-metal nodes when an operation fails.
Procedure
Open a remote shell connection to the OpenStackClient pod:

$ oc rsh -n openstack openstackclient

View the event history for a particular node:

$ openstack baremetal node history list <node_id>

This command returns a list of the error events and node state transitions that occurred on the node. Each event is identified with an event UUID.

View the details of a particular event that occurred on the node:

$ openstack baremetal node history get <node_id> <event_uuid>

Exit the openstackclient pod:

$ exit
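When a node has a long history, you can filter the records by severity. The following Python sketch is illustrative only: it assumes JSON output such as `openstack baremetal node history list <node_id> -f json` produces, and the field names in the sample are assumptions rather than a guaranteed API schema.

```python
# Illustrative sketch: filter node history records by severity. The sample
# JSON and its field names ("UUID", "Severity", "Description") are
# hypothetical stand-ins for real command output.
import json

history_json = """[
  {"UUID": "event-1", "Severity": "ERROR", "Description": "deploy failed"},
  {"UUID": "event-2", "Severity": "INFO",  "Description": "provision state changed"}
]"""

events = json.loads(history_json)
errors = [e for e in events if e["Severity"] == "ERROR"]
for e in errors:
    print(e["UUID"], e["Description"])
```

Verify the actual column names against your deployment's output before relying on this in a script.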
Chapter 8. Bare metal drivers
You can configure bare metal nodes to use one of the drivers that are enabled in the Bare Metal Provisioning service. Each driver includes a provisioning method and a power management type. Some drivers require additional configuration. Each driver described in this section uses PXE for provisioning. Drivers are listed by their power management type.
You can add drivers by configuring the IronicEnabledHardwareTypes parameter in your ironic.yaml file. By default, ipmi and redfish are enabled.
For the full list of supported plug-ins and drivers, see Component, Plug-In, and Driver Support in Red Hat OpenStack Platform.
8.1. Intelligent Platform Management Interface (IPMI) power management driver
IPMI is an interface that provides out-of-band remote management features, including power management and server monitoring. To use this power management type, all Bare Metal Provisioning service nodes require an IPMI interface that is connected to the shared Bare Metal network. The IPMI power management driver uses the ipmitool utility to remotely manage hardware. You can use the following driver_info properties to configure the IPMI power management driver for a node:
| Property | Description | Equivalent ipmitool option |
|---|---|---|
|
| (Mandatory) The IP address or hostname of the node. |
|
|
| The IPMI user name. |
|
|
|
The IPMI password. The password is written to a temporary file. You pass the filename to the |
|
|
| The hexadecimal Kg key for IPMIv2 authentication. |
|
|
| The remote IPMI RMCP port. |
|
|
| IPMI privilege level. Set to one of the following valid values:
|
|
|
| The version of the IPMI protocol. Set to one of the following valid values:
|
|
|
| The type of bridging. Use with nested chassis management controllers (CMCs). Set to one of the following valid values:
| n/a |
|
|
Destination channel for a bridged request. Required only if |
|
|
|
Destination address for a bridged request. Required only if |
|
|
|
Transit channel for a bridged request. Required only if |
|
|
|
Transit address for bridged request. Required only if |
|
|
|
Local IPMB address for bridged requests. Use only if |
|
|
|
Set to | n/a |
|
|
Set to | n/a |
|
| The IPMI cipher suite version to use on the node. Set to one of the following valid values:
| n/a |
The following support is provided for additional Bare Metal Provisioning service (ironic) interfaces:
- The Console interface is available for Serial over LAN (IPMI SOL) support. End console access through the Compute service (nova) is not available.
- The BIOS Settings management interface is not available for IPMI users because of limited IPMI support.
- The RAID interface available for IPMI drivers is an agent software RAID interface. This interface is only supported under the support exception process.
8.2. Redfish
Redfish is a standard RESTful API for IT infrastructure developed by the Distributed Management Task Force (DMTF). You can use the following driver_info properties to configure the Bare Metal Provisioning service (ironic) connection to Redfish:
| Property | Description |
|---|---|
|
|
(Mandatory) The IP address of the Redfish controller. The address must include the authority portion of the URL. If you do not include the scheme it defaults to |
|
|
The canonical path to the system resource the Redfish driver interacts with. The path must include the root service, version, and the unique path to the system within the same authority as the |
|
| The Redfish username. |
|
| The Redfish password. |
|
|
Either a Boolean value, a path to a CA_BUNDLE file, or a directory with certificates of trusted CAs. If you set this value to |
|
| The Redfish HTTP client authentication method. Set to one of the following valid values:
|
The following support is provided for additional Bare Metal Provisioning service (ironic) interfaces:
- The Console interface is not available in the Redfish driver provided with Red Hat OpenStack Services on OpenShift (RHOSO) 18.0.
- The BIOS Settings interface is available but not supported by Red Hat. The driver attempts to surface the hardware vendor-specific settings available through the Baseboard Management Controller as exposed through the standardized Redfish interfaces. Contents, values, and the ability to change values are dependent on the hardware vendor's Redfish implementation.
- The RAID interface can be set to agent or redfish. Support of the RAID interface is limited to the support exception process.
8.3. Dell Remote Access Controller (DRAC)
DRAC is an interface that provides out-of-band remote management features, including power management and server monitoring. To use this power management type, all Bare Metal Provisioning service nodes require a DRAC that is connected to the shared Bare Metal Provisioning network. Enable the idrac driver, and set the following information in the driver_info of the node:
- drac_address - The IP address of the DRAC NIC.
- drac_username - The DRAC user name.
- drac_password - The DRAC password.
- Optional: drac_port - The port to use for the WS-Management endpoint. The default is port 443.
- Optional: drac_path - The path to use for the WS-Management endpoint. The default path is /wsman.
- Optional: drac_protocol - The protocol to use for the WS-Management endpoint. Valid values: http, https. The default protocol is https.
8.4. Integrated Remote Management Controller (iRMC)
iRMC from Fujitsu is an interface that provides out-of-band remote management features, including power management and server monitoring. To use this power management type on a Bare Metal Provisioning service node, the node requires an iRMC interface that is connected to the shared Bare Metal network.
To use the iRMC driver, iRMC S4 or higher is required.
You can use the following driver_info properties to configure the iRMC driver for a node:
| Property | Description |
|---|---|
|
| The IP address of the iRMC interface NIC. |
|
| The iRMC user name. |
|
| The iRMC password. |
|
|
Set to |
|
| Set to the SNMPv3 User-based Security Model (USM) username for the iRMC firmware that runs on the target bare-metal node. Must be set for each bare-metal node. The SNMP username cannot be strings of digits (0-9). Required if FIPS security is enabled in your RHOSP environment. |
|
| Set to the SNMPv3 message authentication key for the SNMPv3 username. The minimum length of the SNMP password must be 8 characters. Required if FIPS security is enabled in your RHOSP environment. |
|
| Set to the SNMPv3 message privacy key for the SNMPv3 username. The minimum length of the SNMP password must be 8 characters. Required if FIPS security is enabled in your RHOSP environment. |
|
| Set to one of the following values, depending on the version of iRMC firmware that runs on your Fujitsu server:
Required if FIPS security is enabled in your RHOSP environment. |
|
|
Set to |
To use IPMI to set the boot mode or SCCI to get sensor data, you must complete the following steps:
Enable the sensor method in the ironic.conf file:

$ openstack-config --set /etc/ironic/ironic.conf \
  irmc sensor_method <method>

Replace <method> with scci or ipmitool.

If you enabled SCCI, install the python-scciclient package:

# dnf install python-scciclient

Restart the Bare Metal conductor service:

# systemctl restart openstack-ironic-conductor.service
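The openstack-config call above edits an INI file. If openstack-config is not available, an equivalent edit can be sketched with Python's configparser; the file path used here is a local placeholder for this example, not the real /etc/ironic/ironic.conf.

```python
# Illustrative sketch: set irmc.sensor_method in an ironic.conf-style INI
# file. "ironic.conf" is a local placeholder path for this example.
import configparser

conf = configparser.ConfigParser()
conf.read("ironic.conf")  # silently skips the file if it does not exist
if not conf.has_section("irmc"):
    conf.add_section("irmc")
conf.set("irmc", "sensor_method", "scci")  # or "ipmitool"

with open("ironic.conf", "w") as f:
    conf.write(f)
```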
8.5. Integrated Lights-Out (iLO)
iLO from Hewlett-Packard is an interface that provides out-of-band remote management features, including power management and server monitoring. To use this power management type, all Bare Metal nodes require an iLO interface that is connected to the shared Bare Metal network. Enable the ilo driver, and set the following information in the driver_info of the node:
- ilo_address - The IP address of the iLO interface NIC.
- ilo_username - The iLO user name.
- ilo_password - The iLO password.