Chapter 4. Configuring the Bare Metal Provisioning service after deployment
After you deploy your overcloud with the Bare Metal Provisioning service (ironic), you must prepare the overcloud for bare-metal workloads. To prepare the overcloud and enable your cloud users to create bare-metal instances, complete the following tasks:
- Configure the Networking service (neutron) to integrate with the Bare Metal Provisioning service.
- Configure node cleaning.
- Create the bare metal flavor and resource class.
- Optional: Create the bare metal images.
- Add physical machines as bare-metal nodes.
- Optional: Configure Redfish virtual media boot.
- Optional: Create host aggregates to separate physical and virtual machine provisioning.
4.1. Configuring the Networking service for bare metal provisioning
You can configure the Networking service (neutron) to integrate with the Bare Metal Provisioning service (ironic). You can configure the bare-metal network by using one of the following methods:
- Create a single flat bare-metal network for the Bare Metal Provisioning conductor services, ironic-conductor. This network must route to the Bare Metal Provisioning services on the control plane network.
- Create a custom composable network to implement Bare Metal Provisioning services in the overcloud.
4.1.1. Configuring the Networking service to integrate with the Bare Metal Provisioning service on a flat network
You can configure the Networking service (neutron) to integrate with the Bare Metal Provisioning service (ironic) by creating a single flat bare-metal network for the Bare Metal Provisioning conductor services, ironic-conductor. This network must route to the Bare Metal Provisioning services on the control plane network.
Procedure
- Log in to the node that hosts the Networking service (neutron) as the root user.
- Source your overcloud credentials file:
# source ~/<credentials_file>
  - Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
- Create the flat network over which to provision bare-metal instances:
# openstack network create \
  --provider-network-type flat \
  --provider-physical-network <provider_physical_network> \
  --share <network_name>
  - Replace <provider_physical_network> with the name of the physical network over which you implement the virtual network, which is configured with the parameter NeutronBridgeMappings in your network-environment.yaml file.
  - Replace <network_name> with a name for this network.
- Create the subnet on the flat network:
# openstack subnet create \
  --network <network_name> \
  --subnet-range <network_cidr> \
  --ip-version 4 \
  --gateway <gateway_ip> \
  --allocation-pool start=<start_ip>,end=<end_ip> \
  --dhcp <subnet_name>
  - Replace <network_name> with the name of the provisioning network that you created in the previous step.
  - Replace <network_cidr> with the Classless Inter-Domain Routing (CIDR) representation of the block of IP addresses that the subnet represents. The block of IP addresses that you specify in the range starting with <start_ip> and ending with <end_ip> must be within the block of IP addresses specified by <network_cidr>.
  - Replace <gateway_ip> with the IP address or host name of the router interface that acts as the gateway for the new subnet. This address must be within the block of IP addresses specified by <network_cidr>, but outside of the block of IP addresses specified by the range that starts with <start_ip> and ends with <end_ip>.
  - Replace <start_ip> with the IP address that denotes the start of the range of IP addresses within the new subnet from which floating IP addresses are allocated.
  - Replace <end_ip> with the IP address that denotes the end of the range of IP addresses within the new subnet from which floating IP addresses are allocated.
  - Replace <subnet_name> with a name for the subnet.
- Create a router for the network and subnet to ensure that the Networking service serves metadata requests:
# openstack router create <router_name>
  - Replace <router_name> with a name for the router.
- Attach the subnet to the new router to enable the metadata requests from cloud-init to be served and the node to be configured:
# openstack router add subnet <router_name> <subnet>
  - Replace <router_name> with the name of your router.
  - Replace <subnet> with the ID or name of the bare-metal subnet that you created in step 4.
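For illustration, the complete sequence with sample values might look like the following. The network name, CIDR, addresses, and the datacentre physical network name are hypothetical; substitute the values configured in your own network-environment.yaml file:
# openstack network create \
  --provider-network-type flat \
  --provider-physical-network datacentre \
  --share baremetal-net
# openstack subnet create \
  --network baremetal-net \
  --subnet-range 192.168.100.0/24 \
  --ip-version 4 \
  --gateway 192.168.100.1 \
  --allocation-pool start=192.168.100.20,end=192.168.100.100 \
  --dhcp baremetal-subnet
# openstack router create baremetal-router
# openstack router add subnet baremetal-router baremetal-subnet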
4.1.2. Configuring the Networking service to integrate with the Bare Metal Provisioning service on a custom composable network
You can configure the Networking service (neutron) to integrate with the Bare Metal Provisioning service (ironic) by creating a custom composable network to implement Bare Metal Provisioning services in the overcloud.
Procedure
- Log in to the undercloud host.
- Source your overcloud credentials file:
$ source ~/<credentials_file>
  - Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
- Retrieve the UUID for the provider network that hosts the Bare Metal Provisioning service:
(overcloud)$ openstack network show <network_name> -f value -c id
  - Replace <network_name> with the name of the provider network that you want to use for the bare-metal instance provisioning network.
- Open your local environment file that configures the Bare Metal Provisioning service for your deployment, for example, ironic-overrides.yaml.
- Configure the network to use as the bare-metal instance provisioning network:
parameter_defaults:
  IronicProvisioningNetwork: <network_uuid>
  - Replace <network_uuid> with the UUID of the provider network that you retrieved in step 3.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
- To apply the bare-metal instance provisioning network configuration, add your Bare Metal Provisioning environment files to the stack with your other environment files and deploy the overcloud:
(undercloud)$ openstack overcloud deploy --templates \
  -e [your environment files] \
  -e /home/stack/templates/node-info.yaml \
  -r /home/stack/templates/roles_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/<default_ironic_template> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \
  -e /home/stack/templates/network_environment_overrides.yaml \
  -n /home/stack/templates/network_data.yaml \
  -e /home/stack/templates/ironic-overrides.yaml
  - Replace <default_ironic_template> with either ironic.yaml or ironic-overcloud.yaml, depending on the Networking service mechanism driver for your deployment.
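As a quick illustration, you can retrieve the network UUID and write the override file in one step. A minimal sketch, assuming a provider network named provisioning and the file path used in this procedure:
(overcloud)$ NETWORK_UUID=$(openstack network show provisioning -f value -c id)
(overcloud)$ cat > /home/stack/templates/ironic-overrides.yaml <<EOF
parameter_defaults:
  IronicProvisioningNetwork: ${NETWORK_UUID}
EOF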
4.2. Cleaning bare-metal nodes
The Bare Metal Provisioning service cleans nodes to prepare them for provisioning. You can clean bare-metal nodes by using one of the following methods:
- Automatic: You can configure your overcloud to automatically perform node cleaning when you unprovision a node.
- Manual: You can manually clean individual nodes when required.
4.2.1. Configuring automatic node cleaning
Automatic bare-metal node cleaning runs after you enroll a node, and before the node reaches the available provisioning state. Automatic cleaning runs each time the node is unprovisioned.
By default, the Bare Metal Provisioning service uses a network named provisioning for node cleaning. However, network names are not unique in the Networking service (neutron), so it is possible for a project to create a network with the same name, which causes a conflict with the Bare Metal Provisioning service. To avoid the conflict, use the network UUID to configure the node cleaning network.
Procedure
- Log in to the undercloud host.
- Source your overcloud credentials file:
$ source ~/<credentials_file>
  - Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
- Retrieve the UUID for the provider network that hosts the Bare Metal Provisioning service:
(overcloud)$ openstack network show <network_name> -f value -c id
  - Replace <network_name> with the name of the network that you want to use for the bare-metal node cleaning network.
- Open your local environment file that configures the Bare Metal Provisioning service for your deployment, for example, ironic-overrides.yaml.
- Configure the network to use as the node cleaning network:
parameter_defaults:
  IronicCleaningNetwork: <network_uuid>
  - Replace <network_uuid> with the UUID of the provider network that you retrieved in step 3.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
- To apply the node cleaning network configuration, add your Bare Metal Provisioning environment files to the stack with your other environment files and deploy the overcloud:
(undercloud)$ openstack overcloud deploy --templates \
  -e [your environment files] \
  -e /home/stack/templates/node-info.yaml \
  -r /home/stack/templates/roles_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/<default_ironic_template> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \
  -e /home/stack/templates/network_environment_overrides.yaml \
  -n /home/stack/templates/network_data.yaml \
  -e /home/stack/templates/ironic-overrides.yaml
  - Replace <default_ironic_template> with either ironic.yaml or ironic-overcloud.yaml, depending on the Networking service mechanism driver for your deployment.
4.2.2. Cleaning nodes manually
You can clean specific nodes manually as required. Node cleaning has two modes:
- Metadata only clean: Removes partitions from all disks on the node. The metadata only mode of cleaning is faster than a full clean, but less secure because it erases only partition tables. Use this mode only in trusted tenant environments.
- Full clean: Removes all data from all disks, by using either ATA secure erase or shredding. A full clean can take several hours to complete.
Procedure
- Source your overcloud credentials file:
$ source ~/<credentials_file>
  - Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
- Check the current state of the node:
$ openstack baremetal node show \
  -f value -c provision_state <node>
  - Replace <node> with the name or UUID of the node to clean.
- If the node is not in the manageable state, set it to manageable:
$ openstack baremetal node manage <node>
- Clean the node:
$ openstack baremetal node clean <node> \
  --clean-steps '[{"interface": "deploy", "step": "<clean_mode>"}]'
  - Replace <node> with the name or UUID of the node to clean.
  - Replace <clean_mode> with the type of cleaning to perform on the node:
    - erase_devices: Performs a full clean.
    - erase_devices_metadata: Performs a metadata only clean.
- Wait for the clean to complete, then check the status of the node:
  - manageable: The clean was successful, and the node is ready to provision.
  - clean failed: The clean was unsuccessful. Inspect the last_error field for the cause of failure.
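For example, a metadata only clean of a node named node-01 (a hypothetical name) might look like the following. Repeat the final command until it reports manageable or clean failed:
$ openstack baremetal node manage node-01
$ openstack baremetal node clean node-01 \
  --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'
$ openstack baremetal node show node-01 -f value -c provision_state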
4.3. Creating the bare metal flavor and resource class
You must create a flavor and a resource class that you use to tag bare metal nodes for a particular workload.
Procedure
- Source the overcloud credentials file:
$ source ~/overcloudrc
- Create a new instance flavor for bare metal nodes:
(overcloud)$ openstack flavor create --id auto \
  --ram <ram_size_mb> --disk <disk_size_gb> \
  --vcpus <no_vcpus> baremetal
  - Replace <ram_size_mb> with the RAM of the bare metal node, in MB.
  - Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB.
  - Replace <no_vcpus> with the number of CPUs on the bare metal node.
Note: These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size.
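For illustration, the previous command with sample values for a node that has 16 GB of RAM, a 500 GB disk, and four CPUs (all values hypothetical):
(overcloud)$ openstack flavor create --id auto \
  --ram 16384 --disk 500 \
  --vcpus 4 baremetal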
- Retrieve a list of your nodes to identify their UUIDs:
(overcloud)$ openstack baremetal node list
- Tag each bare metal node with a custom bare metal resource class:
(overcloud)$ openstack baremetal node set \
  --resource-class baremetal.<CUSTOM> <node>
  - Replace <CUSTOM> with a string that identifies the purpose of the resource class. For example, set to GPU to create a custom GPU resource class that you can use to tag bare metal nodes that you want to designate for GPU workloads.
  - Replace <node> with the ID of the bare metal node.
- Associate the new instance flavor for bare metal nodes with the custom resource class:
(overcloud)$ openstack flavor set \
  --property resources:CUSTOM_BAREMETAL_<CUSTOM>=1 \
  baremetal
To determine the name of a custom resource class that corresponds to a resource class of a Bare Metal service node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix it with CUSTOM_.
Note: A flavor can request only one instance of a bare metal resource class.
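For example, with the GPU resource class from the previous step, the node resource class baremetal.GPU converts to CUSTOM_BAREMETAL_GPU, so the flavor association becomes:
(overcloud)$ openstack flavor set \
  --property resources:CUSTOM_BAREMETAL_GPU=1 \
  baremetal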
- Set the following flavor properties to prevent the Compute scheduler from using the bare metal flavor properties to schedule instances:
(overcloud)$ openstack flavor set \
  --property resources:VCPU=0 \
  --property resources:MEMORY_MB=0 \
  --property resources:DISK_GB=0 baremetal
- Verify that the new flavor has the correct values:
(overcloud)$ openstack flavor list
4.4. Creating the bare metal images
An overcloud that includes the Bare Metal Provisioning service (ironic) requires two sets of images. During deployment, the Bare Metal Provisioning service boots bare metal nodes from the deploy image, and copies the user image onto nodes.
- The deploy image: The Bare Metal Provisioning service uses the deploy image to boot the bare metal node and copy a user image onto the bare metal node. The deploy image consists of the kernel image and the ramdisk image.
- The user image: The user image is the image that you deploy onto the bare metal node. The user image also has a kernel image and a ramdisk image, but additionally, the user image contains a main image. The main image is either a root partition, or a whole-disk image:
  - A whole-disk image is an image that contains the partition table and boot loader. The Bare Metal Provisioning service does not control the subsequent reboot of a node deployed with a whole-disk image because the node supports local boot.
  - A root partition image contains only the root partition of the operating system. If you use a root partition, after the deploy image is loaded into the Image service, you can set the deploy image as the node boot image in the node properties. A subsequent reboot of the node uses netboot to pull down the user image.
The examples in this section use a root partition image to provision bare metal nodes.
4.4.1. Preparing the deploy images
You do not have to create the deploy image because it was already created when the overcloud was deployed by the undercloud. The deploy image consists of two images - the kernel image and the ramdisk image:
/tftpboot/agent.kernel
/tftpboot/agent.ramdisk
These images are often in the home directory, unless you have deleted them, or unpacked them elsewhere. If they are not in the home directory, and you still have the rhosp-director-images-ipa package installed, these images are in the /usr/share/rhosp-director-images/ironic-python-agent*.tar file.
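If you need to unpack the archive, a minimal sketch, assuming the default package location (the exact archive name depends on the installed package version):
$ cd ~
$ tar -xf /usr/share/rhosp-director-images/ironic-python-agent*.tar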
Prerequisites
- A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service.
Procedure
- Extract the images and upload them to the Image service:
$ openstack image create \
  --container-format aki \
  --disk-format aki \
  --public \
  --file ./tftpboot/agent.kernel bm-deploy-kernel
$ openstack image create \
  --container-format ari \
  --disk-format ari \
  --public \
  --file ./tftpboot/agent.ramdisk bm-deploy-ramdisk
4.4.2. Preparing the user image
The final image that you need is the user image that you deploy onto the bare metal node. User images also have a kernel and a ramdisk, along with a main image. Before you build and install these images, you must first configure the whole disk image environment variables to suit your requirements.
4.4.3. Installing the user image
Configure the user image and then upload the image to the Image service (glance).
Prerequisites
- A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service.
Procedure
- Download the Red Hat Enterprise Linux KVM guest image from the Customer Portal.
- Define DIB_LOCAL_IMAGE as the downloaded image:
$ export DIB_LOCAL_IMAGE=rhel-8.0-x86_64-kvm.qcow2
- Set your registration information. If you use the Red Hat Customer Portal, you must configure the following information:
$ export REG_USER='USER_NAME'
$ export REG_PASSWORD='PASSWORD'
$ export REG_AUTO_ATTACH=true
$ export REG_METHOD=portal
$ export https_proxy='IP_address:port' (if applicable)
$ export http_proxy='IP_address:port' (if applicable)
- If you use Red Hat Satellite, you must configure the following information:
$ export REG_USER='USER_NAME'
$ export REG_PASSWORD='PASSWORD'
$ export REG_SAT_URL='<SATELLITE URL>'
$ export REG_ORG='<SATELLITE ORG>'
$ export REG_ENV='<SATELLITE ENV>'
$ export REG_METHOD=<METHOD>
- If you have any offline repositories, you can define DIB_YUM_REPO_CONF as the local repository configuration:
$ export DIB_YUM_REPO_CONF=<path-to-local-repository-config-file>
- Create the user images with the diskimage-builder tool:
$ export DIB_RELEASE=8
$ disk-image-create rhel baremetal -o rhel-image
This command extracts the kernel as rhel-image.vmlinuz and the initial ramdisk as rhel-image.initrd.
- Upload the images to the Image service:
$ KERNEL_ID=$(openstack image create \
  --file rhel-image.vmlinuz --public \
  --container-format aki --disk-format aki \
  -f value -c id rhel-image.vmlinuz)
$ RAMDISK_ID=$(openstack image create \
  --file rhel-image.initrd --public \
  --container-format ari --disk-format ari \
  -f value -c id rhel-image.initrd)
$ openstack image create \
  --file rhel-image.qcow2 --public \
  --container-format bare \
  --disk-format qcow2 \
  --property kernel_id=$KERNEL_ID \
  --property ramdisk_id=$RAMDISK_ID \
  rhel-image
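You can verify the upload by listing the images and checking that rhel-image, rhel-image.vmlinuz, and rhel-image.initrd appear with the active status:
$ openstack image list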
4.5. Adding physical machines as bare metal nodes
Use one of the following methods to enroll a bare metal node:
- Prepare an inventory file with the node details, import the file into the Bare Metal Provisioning service, and make the nodes available.
- Register a physical machine as a bare metal node, and then manually add its hardware details and create ports for each of its Ethernet MAC addresses. You can perform these steps on any node that has your overcloudrc file.
4.5.1. Enrolling a bare metal node with an inventory file
Prepare an inventory file with the node details, import the file into the Bare Metal Provisioning service (ironic), and make the nodes available.
Prerequisites
- An overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service.
Procedure
- Create an inventory file, overcloud-nodes.yaml, that includes the node details. You can enroll multiple nodes with one file.
nodes:
  - name: node0
    driver: ipmi
    driver_info:
      ipmi_address: <ipmi_ip>
      ipmi_username: <user>
      ipmi_password: <password>
      [<property>: <value>]
    properties:
      cpus: <cpu_count>
      cpu_arch: <cpu_arch>
      memory_mb: <memory>
      local_gb: <root_disk>
      root_device:
        serial: <serial>
    ports:
      - address: <mac_address>
  - Replace <ipmi_ip> with the address of the Bare Metal controller.
  - Replace <user> with your username.
  - Replace <password> with your password.
  - Optional: Replace <property>: <value> with an IPMI property that you want to configure, and the property value. For information on the available properties, see Intelligent Platform Management Interface (IPMI) power management driver.
  - Replace <cpu_count> with the number of CPUs.
  - Replace <cpu_arch> with the type of architecture of the CPUs.
  - Replace <memory> with the amount of memory in MiB.
  - Replace <root_disk> with the size of the root disk in GiB. Only required when the machine has multiple disks.
  - Replace <serial> with the serial number of the disk that you want to use for deployment.
  - Replace <mac_address> with the MAC address of the NIC used to PXE boot.
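For illustration, a filled-in inventory for a single IPMI-managed node might look like the following; every address, credential, and hardware value here is hypothetical:
nodes:
  - name: node0
    driver: ipmi
    driver_info:
      ipmi_address: 192.168.24.30
      ipmi_username: admin
      ipmi_password: examplepassword
    properties:
      cpus: 4
      cpu_arch: x86_64
      memory_mb: 16384
      local_gb: 500
      root_device:
        serial: 61866da04f380d001ea4e13c12e36ad6
    ports:
      - address: 52:54:00:6c:12:aa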
- Source the overcloudrc file:
$ source ~/overcloudrc
- Import the inventory file into the Bare Metal Provisioning service:
$ openstack baremetal create overcloud-nodes.yaml
The nodes are now in the enroll state.
- Specify the deploy kernel and deploy ramdisk on each node:
$ openstack baremetal node set <node> \
  --driver-info deploy_kernel=<kernel_file> \
  --driver-info deploy_ramdisk=<initramfs_file>
  - Replace <node> with the name or ID of the node.
  - Replace <kernel_file> with the path to the .kernel image, for example, file:///var/lib/ironic/httpboot/agent.kernel.
  - Replace <initramfs_file> with the path to the .initramfs image, for example, file:///var/lib/ironic/httpboot/agent.ramdisk.
- Optional: Specify the IPMI cipher suite for each node:
$ openstack baremetal node set <node> \
  --driver-info ipmi_cipher_suite=<version>
  - Replace <node> with the name or ID of the node.
  - Replace <version> with the cipher suite version to use on the node. Set to one of the following valid values:
    - 3 - The node uses the AES-128 with SHA1 cipher suite.
    - 17 - The node uses the AES-128 with SHA256 cipher suite.
- Set the provisioning state of the node to available:
$ openstack baremetal node manage <node>
$ openstack baremetal node provide <node>
The Bare Metal Provisioning service cleans the node if you enabled node cleaning.
- Set the local boot option on the node:
$ openstack baremetal node set <node> --property capabilities="boot_option:local"
- Check that the nodes are enrolled:
$ openstack baremetal node list
There might be a delay between enrolling a node and its state being shown.
4.5.2. Enrolling a bare-metal node manually
Register a physical machine as a bare metal node, then manually add its hardware details and create ports for each of its Ethernet MAC addresses. You can perform these steps on any node that has your overcloudrc file.
Prerequisites
- An overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service.
- The driver for the new node must be enabled by using the IronicEnabledHardwareTypes parameter. For more information about supported drivers, see Bare metal drivers.
Procedure
- Log in to the undercloud host as the stack user.
- Source the overcloud credentials file:
(undercloud)$ source ~/overcloudrc
- Add a new node:
$ openstack baremetal node create --driver <driver_name> --name <node_name>
  - Replace <driver_name> with the name of the driver, for example, ipmi.
  - Replace <node_name> with the name of your new bare-metal node.
- Note the UUID assigned to the node when it is created.
- Set the boot option to local for each registered node:
$ openstack baremetal node set \
  --property capabilities="boot_option:local" <node>
  - Replace <node> with the UUID of the bare metal node.
- Specify the deploy kernel and deploy ramdisk for the node driver:
$ openstack baremetal node set <node> \
  --driver-info deploy_kernel=<kernel_file> \
  --driver-info deploy_ramdisk=<initramfs_file>
  - Replace <node> with the ID of the bare metal node.
  - Replace <kernel_file> with the path to the .kernel image, for example, file:///var/lib/ironic/httpboot/agent.kernel.
  - Replace <initramfs_file> with the path to the .initramfs image, for example, file:///var/lib/ironic/httpboot/agent.ramdisk.
- Update the node properties to match the hardware specifications on the node:
$ openstack baremetal node set <node> \
  --property cpus=<cpu> \
  --property memory_mb=<ram> \
  --property local_gb=<disk> \
  --property cpu_arch=<arch>
  - Replace <node> with the ID of the bare metal node.
  - Replace <cpu> with the number of CPUs.
  - Replace <ram> with the RAM in MB.
  - Replace <disk> with the disk size in GB.
  - Replace <arch> with the architecture type.
- Optional: Specify the IPMI cipher suite for each node:
$ openstack baremetal node set <node> \
  --driver-info ipmi_cipher_suite=<version>
  - Replace <node> with the ID of the bare metal node.
  - Replace <version> with the cipher suite version to use on the node. Set to one of the following valid values:
    - 3 - The node uses the AES-128 with SHA1 cipher suite.
    - 17 - The node uses the AES-128 with SHA256 cipher suite.
- Optional: Specify the IPMI details for each node:
$ openstack baremetal node set <node> \
  --driver-info <property>=<value>
  - Replace <node> with the ID of the bare metal node.
  - Replace <property> with the IPMI property that you want to configure. For information on the available properties, see Intelligent Platform Management Interface (IPMI) power management driver.
  - Replace <value> with the property value.
- Optional: If you have multiple disks, set the root device hints to inform the deploy ramdisk which disk to use for deployment:
$ openstack baremetal node set <node> \
  --property root_device='{"<property>": "<value>"}'
  - Replace <node> with the ID of the bare metal node.
  - Replace <property> and <value> with details about the disk that you want to use for deployment, for example, root_device='{"size": "128"}'. RHOSP supports the following properties:
    - model (String): Device identifier.
    - vendor (String): Device vendor.
    - serial (String): Disk serial number.
    - hctl (String): Host:Channel:Target:Lun for SCSI.
    - size (Integer): Size of the device in GB.
    - wwn (String): Unique storage identifier.
    - wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
    - wwn_vendor_extension (String): Unique vendor storage identifier.
    - rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
    - name (String): The name of the device, for example: /dev/sdb1. Use this property only for devices with persistent names.
Note: If you specify more than one property, the device must match all of those properties.
- Inform the Bare Metal Provisioning service of the node network card by creating a port with the MAC address of the NIC on the provisioning network:
$ openstack baremetal port create --node <node_uuid> <mac_address>
  - Replace <node_uuid> with the unique ID of the bare metal node.
  - Replace <mac_address> with the MAC address of the NIC used to PXE boot.
- Validate the configuration of the node:
$ openstack baremetal node validate <node>
+------------+--------+---------------------------------------------+
| Interface  | Result | Reason                                      |
+------------+--------+---------------------------------------------+
| boot       | False  | Cannot validate image information for node  |
|            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
|            |        | because one or more parameters are missing  |
|            |        | from its instance_info. Missing are:        |
|            |        | ['ramdisk', 'kernel', 'image_source']       |
| console    | None   | not supported                               |
| deploy     | False  | Cannot validate image information for node  |
|            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
|            |        | because one or more parameters are missing  |
|            |        | from its instance_info. Missing are:        |
|            |        | ['ramdisk', 'kernel', 'image_source']       |
| inspect    | None   | not supported                               |
| management | True   |                                             |
| network    | True   |                                             |
| power      | True   |                                             |
| raid       | True   |                                             |
| storage    | True   |                                             |
+------------+--------+---------------------------------------------+
The validation output Result indicates the following:
  - False: The interface failed validation. If the reason provided includes the missing instance_info parameters ['ramdisk', 'kernel', 'image_source'], this might be because the Compute service populates those missing parameters at the beginning of the deployment process, therefore they have not been set at this point. If you are using a whole disk image, you might need to set only image_source to pass the validation, as shown in the example after this list.
  - True: The interface passed validation.
  - None: The interface is not supported for your driver.
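For example, if you deploy a whole-disk image, you can set image_source on the node and rerun the validation; the image reference here is a hypothetical placeholder:
$ openstack baremetal node set <node> \
  --instance-info image_source=<whole_disk_image_uuid>
$ openstack baremetal node validate <node>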
4.5.3. Bare-metal node provisioning states
A bare-metal node transitions through several provisioning states during its lifetime. API requests and conductor events performed on the node initiate the transitions. There are two categories of provisioning states: "stable" and "in transition".
Use the following table to understand the provisioning states a node can be in, and the actions that are available for you to use to transition the node from one provisioning state to another.
State | Category | Description
---|---|---
enroll | Stable | The initial state of each node. For information on enrolling a node, see Adding physical machines as bare metal nodes.
verifying | In transition | The Bare Metal Provisioning service validates that it can manage the node by using the driver_info credentials that you supplied when you enrolled the node.
manageable | Stable | The node is transitioned to the manageable state when the Bare Metal Provisioning service has verified that it can manage the node. You can transition the node from the manageable state to the available state by providing or cleaning the node, or to the active state by adopting the node. You must move a node to the manageable state to recover it from a failed state, such as clean failed or inspect failed. Move a node into the manageable state when you need to update the node.
inspecting | In transition | The Bare Metal Provisioning service uses node introspection to update the hardware-derived node properties to reflect the current state of the hardware. The node transitions to manageable when the introspection completes.
inspect wait | In transition | The provision state that indicates that an asynchronous inspection is in progress. If the node inspection is successful, the node transitions to the manageable state.
inspect failed | Stable | The provisioning state that indicates that the node inspection failed. You can transition the node from the inspect failed state by inspecting the node again or moving it back to the manageable state.
cleaning | In transition | Nodes in the cleaning state are being scrubbed and reprogrammed into a known configuration. When the cleaning completes, the node transitions to available, or to manageable if you started the cleaning manually.
clean wait | In transition | Nodes in the clean wait state are waiting for an asynchronous or out-of-band clean step to complete. You can interrupt the cleaning process of a node in the clean wait state by aborting the cleaning.
available | Stable | After nodes have been successfully preconfigured and cleaned, they are moved into the available state and are ready to be provisioned.
deploying | In transition | Nodes in the deploying state are being prepared for a workload, which involves operations such as writing the user image to the disk and configuring the boot device.
wait call-back | In transition | Nodes in the wait call-back state are waiting for an asynchronous deployment operation, such as the boot of the deploy ramdisk, to complete. You can interrupt the deployment of a node in the wait call-back state by deleting the instance.
deploy failed | Stable | The provisioning state that indicates that the node deployment failed. You can transition the node from the deploy failed state by deploying the node again or deleting the instance.
active | Stable | Nodes in the active state have a workload running on them.
deleting | In transition | When a node is in the deleting state, the Bare Metal Provisioning service tears down the workload and releases the node. The node then transitions to the cleaning state.
error | Stable | If a node deletion is unsuccessful, the node is moved into the error state. You can retry the deletion to recover the node.
adopting | In transition | You can use the adopting state to transition a node with an existing workload directly from the manageable state to the active state without deploying it.
rescuing | In transition | Nodes in the rescuing state are being prepared to boot a rescue ramdisk. When the preparation completes, the node transitions to the rescue state.
rescue wait | In transition | Nodes in the rescue wait state are waiting for the rescue ramdisk to boot and report back. You can interrupt the rescue operation of a node in the rescue wait state by aborting the rescue.
rescue failed | Stable | The provisioning state that indicates that the node rescue failed. You can transition the node from the rescue failed state by rescuing the node again, unrescuing it, or deleting the instance.
rescue | Stable | Nodes in the rescue state are running a rescue ramdisk, so that you can access the node for recovery operations.
unrescuing | In transition | Nodes in the unrescuing state are being transitioned from the rescue state back to the active state.
unrescue failed | Stable | The provisioning state that indicates that the node unrescue operation failed. You can transition the node from the unrescue failed state by rescuing the node again, unrescuing it, or deleting the instance.
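To check where a node currently is in this state machine, and to move a node in a failed state back to manageable, you can use commands that appear earlier in this chapter:
$ openstack baremetal node show <node> -f value -c provision_state
$ openstack baremetal node manage <node>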
4.6. Configuring Redfish virtual media boot
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
You can use Redfish virtual media boot to supply a boot image to the Baseboard Management Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. The node can then boot from the virtual drive into the operating system that exists in the image.
Redfish hardware types support booting deploy, rescue, and user images over virtual media. The Bare Metal Provisioning service (ironic) uses kernel and ramdisk images associated with a node to build bootable ISO images for UEFI or BIOS boot modes at the moment of node deployment. The major advantage of virtual media boot is that you can eliminate the TFTP image transfer phase of PXE and use HTTP GET, or other methods, instead.
4.6.1. Deploying a bare metal server with Redfish virtual media boot
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
To boot a node with the redfish hardware type over virtual media, set the boot interface to redfish-virtual-media and, for UEFI nodes, define the EFI System Partition (ESP) image. Then configure an enrolled node to use Redfish virtual media boot.
Prerequisites
- Redfish driver enabled in the enabled_hardware_types parameter in the undercloud.conf file.
- A bare metal node registered and enrolled.
- IPA and instance images in the Image Service (glance).
- For UEFI nodes, you must also have an EFI system partition image (ESP) available in the Image Service (glance).
- A bare metal flavor.
- A network for cleaning and provisioning.
- Sushy library installed:
$ sudo yum install sushy
Procedure
- Set the Bare Metal Provisioning service (ironic) boot interface to redfish-virtual-media:
$ openstack baremetal node set --boot-interface redfish-virtual-media $NODE_NAME
  - Replace $NODE_NAME with the name of the node.
- For UEFI nodes, set the boot mode to uefi:
$ openstack baremetal node set --property capabilities="boot_mode:uefi" $NODE_NAME
  - Replace $NODE_NAME with the name of the node.
Note: For BIOS nodes, do not complete this step.
- For UEFI nodes, define the EFI System Partition (ESP) image:
$ openstack baremetal node set --driver-info bootloader=$ESP $NODE_NAME
  - Replace $ESP with the glance image UUID or URL for the ESP image, and replace $NODE_NAME with the name of the node.
Note: For BIOS nodes, do not complete this step.
- Create a port on the bare metal node and associate the port with the MAC address of the NIC on the bare metal node:
$ openstack baremetal port create --pxe-enabled True --node $UUID $MAC_ADDRESS
  - Replace $UUID with the UUID of the bare metal node, and replace $MAC_ADDRESS with the MAC address of the NIC on the bare metal node.
- Create the new bare metal server:
$ openstack server create \
  --flavor baremetal \
  --image $IMAGE \
  --network $NETWORK \
  test_instance
  - Replace $IMAGE and $NETWORK with the names of the image and network that you want to use.
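You can then watch the node move through the deploying and wait call-back provisioning states until the server reaches the ACTIVE status:
$ openstack baremetal node show $NODE_NAME -f value -c provision_state
$ openstack server show test_instance -f value -c status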
4.7. Using host aggregates to separate physical and virtual machine provisioning
OpenStack Compute uses host aggregates to partition availability zones, and group together nodes that have specific shared properties. When an instance is provisioned, the Compute scheduler compares properties on the flavor with the properties assigned to host aggregates, and ensures that the instance is provisioned in the correct aggregate and on the correct host: either on a physical machine or as a virtual machine.
Complete the steps in this section to perform the following operations:
- Add the property baremetal to your flavors and set it to either true or false.
. -
Create separate host aggregates for bare metal hosts and compute nodes with a matching
baremetal
property. Nodes grouped into an aggregate inherit this property.
Prerequisites
- A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service.
Procedure
- Set the baremetal property to true on the baremetal flavor:
$ openstack flavor set baremetal --property baremetal=true
- Set the baremetal property to false on the flavors that virtual instances use:
$ openstack flavor set FLAVOR_NAME --property baremetal=false
- Create a host aggregate called baremetal-hosts:
$ openstack aggregate create --property baremetal=true baremetal-hosts
- Add each Controller node to the baremetal-hosts aggregate:
$ openstack aggregate add host baremetal-hosts HOSTNAME
Note: If you have created a composable role with the NovaIronic service, add all the nodes with this service to the baremetal-hosts aggregate. By default, only the Controller nodes have the NovaIronic service.
- Create a host aggregate called virtual-hosts:
$ openstack aggregate create --property baremetal=false virtual-hosts
- Add each Compute node to the virtual-hosts aggregate:
$ openstack aggregate add host virtual-hosts HOSTNAME
- If you did not add the following Compute filter scheduler when you deployed the overcloud, add it now to the existing list under scheduler_default_filters in the /etc/nova/nova.conf file:
AggregateInstanceExtraSpecsFilter
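A minimal sketch of the resulting nova.conf entry; the other filters in this list are illustrative defaults only, so keep whatever filter list your deployment already uses and append the new filter to it:
[DEFAULT]
scheduler_default_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateInstanceExtraSpecsFilter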