Chapter 4. Adding physical machines as bare-metal nodes
Use one of the following methods to enroll a bare-metal node:
- Prepare an inventory file with the node details, import the file into the Bare Metal Provisioning service, and make the nodes available.
- Register a physical machine as a bare-metal node, and then manually add its hardware details and create ports for each of its Ethernet MAC addresses.
4.1. Prerequisites
- The RHOSO environment includes the Bare Metal Provisioning service. For more information, see Enabling the Bare Metal Provisioning service (ironic).
- You are logged on to a workstation that has access to the RHOCP cluster as a user with `cluster-admin` privileges.
- The `oc` command line tool is installed on the workstation.
4.2. Enrolling bare-metal nodes with an inventory file
You can create an inventory file that defines the details of each bare-metal node. You import the file into the Bare Metal Provisioning service (ironic) to enroll the bare-metal nodes, and then make each node available.
Some drivers might require specific configuration. For more information, see Bare metal drivers.
Procedure
- Create an inventory file to define the details of each node, for example, `ironic-nodes.yaml`. For each node, define the node name and the address and credentials for the bare-metal driver. For details on the available properties for your enabled driver, see Bare metal drivers.

  ```yaml
  nodes:
  - name: <node>
    driver: <driver>
    driver_info:
      <driver>_address: <ip>
      <driver>_username: <user>
      <driver>_password: <password>
      [<property>: <value>]
  ```
  - Replace `<node>` with the name of the node.
  - Replace `<driver>` with a supported bare-metal driver, for example, `redfish`.
  - Replace `<ip>` with the IP address of the Bare Metal controller.
  - Replace `<user>` with your username.
  - Replace `<password>` with your password.
  - Optional: Replace `<property>` with a driver property that you want to configure, and replace `<value>` with the value of the property. For information on the available properties, see Bare metal drivers.
- Define the node properties and ports:

  ```yaml
  nodes:
  - name: <node>
    ...
    properties:
      cpus: <cpu_count>
      cpu_arch: <cpu_arch>
      memory_mb: <memory>
      local_gb: <root_disk>
      root_device:
        serial: <serial>
    network_interface: <interface_type>
    ports:
    - address: <mac_address>
  ```
  - Replace `<cpu_count>` with the number of CPUs.
  - Replace `<cpu_arch>` with the architecture type of the CPUs.
  - Replace `<memory>` with the amount of memory in MiB.
  - Replace `<root_disk>` with the size of the root disk in GiB. Required only when the machine has multiple disks.
  - Replace `<serial>` with the serial number of the disk that you want to use for deployment.
  - Optional: Include the `network_interface` property if you want to override the default network type of `flat`. You can change the network type to one of the following valid values:
    - `neutron`: Use to provide tenant-defined networking through the Networking service, where tenant networks are separated from each other and from the provisioning and cleaning provider networks. Required to create a provisioning network with IPv6.
    - `noop`: Use for standalone deployments where network switching is not required.
  - Replace `<mac_address>` with the MAC address of the NIC used to PXE boot.
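As an illustration, the two fragments above can be combined into one completed inventory file. Every value below (node name, driver address, credentials, serial number, MAC address) is a made-up example, not a value from this document:

```shell
# Write a sample inventory file for a single node that uses the redfish driver.
# All values are illustrative placeholders; substitute your own hardware details.
cat <<'EOF' > ironic-nodes.yaml
nodes:
- name: node-0
  driver: redfish
  driver_info:
    redfish_address: 192.0.2.10
    redfish_username: admin
    redfish_password: secret
  properties:
    cpus: 8
    cpu_arch: x86_64
    memory_mb: 32768
    local_gb: 128
    root_device:
      serial: VB1234-5678
  ports:
  - address: 52:54:00:aa:bb:cc
EOF
echo "wrote ironic-nodes.yaml"
```

You would then import this file in the next step of the procedure.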
- Access the remote shell for the `OpenStackClient` pod from your workstation:

  ```
  $ oc rsh -n openstack openstackclient
  ```

- Import the inventory file into the Bare Metal Provisioning service:

  ```
  $ openstack baremetal create ironic-nodes.yaml
  ```

  The nodes are now in the `enroll` state.

- Wait for the extra network interface port configuration data to populate the Networking service (neutron). This process takes at least 60 seconds.
- Set the provisioning state of each node to `available`:

  ```
  $ openstack baremetal node manage <node>
  $ openstack baremetal node provide <node>
  ```

  The Bare Metal Provisioning service cleans the node if you enabled node cleaning.
- Check that the nodes are enrolled:

  ```
  $ openstack baremetal node list
  ```

  There might be a delay between enrolling a node and its state being shown.
- Exit the `openstackclient` pod:

  ```
  $ exit
  ```
4.3. Enrolling a bare-metal node manually
Register a physical machine as a bare-metal node, then manually add its hardware details and create ports for each of its Ethernet MAC addresses.
Procedure
- Access the remote shell for the `OpenStackClient` pod from your workstation:

  ```
  $ oc rsh -n openstack openstackclient
  ```

- Add a new node:

  ```
  $ openstack baremetal node create --driver <driver_name> --name <node_name>
  ```

  - Replace `<driver_name>` with the name of the driver, for example, `redfish`.
  - Replace `<node_name>` with the name of your new bare-metal node.
- Note the UUID assigned to the node when it is created.
- Update the node properties to match the hardware specifications on the node:

  ```
  $ openstack baremetal node set <node> \
    --property cpus=<cpu> \
    --property memory_mb=<ram> \
    --property local_gb=<disk> \
    --property cpu_arch=<arch>
  ```

  - Replace `<node>` with the ID of the bare-metal node.
  - Replace `<cpu>` with the number of CPUs.
  - Replace `<ram>` with the RAM in MB.
  - Replace `<disk>` with the disk size in GB.
  - Replace `<arch>` with the architecture type.
- Optional: Set the `network_interface` property to override the default network type of `flat`:

  ```
  $ openstack baremetal node set <node> --network-interface <network_interface>
  ```

  Replace `<network_interface>` with one of the following valid network types:

  - `neutron`: Use to provide tenant-defined networking through the Networking service, where tenant networks are separated from each other and from the provisioning and cleaning provider networks. Required to create a provisioning network with IPv6.
  - `noop`: Use for standalone deployments where network switching is not required.
- Optional: If you have multiple disks, set the root device hints to inform the deploy ramdisk which disk to use for deployment:

  ```
  $ openstack baremetal node set <node> \
    --property root_device='{"<property>": "<value>"}'
  ```

  - Replace `<node>` with the ID of the bare-metal node.
  - Replace `<property>` and `<value>` with details about the disk that you want to use for deployment, for example, `root_device='{"size": "128"}'`. RHOSP supports the following properties:

    - `model` (String): Device identifier.
    - `vendor` (String): Device vendor.
    - `serial` (String): Disk serial number.
    - `hctl` (String): Host:Channel:Target:Lun for SCSI.
    - `size` (Integer): Size of the device in GB.
    - `wwn` (String): Unique storage identifier.
    - `wwn_with_extension` (String): Unique storage identifier with the vendor extension appended.
    - `wwn_vendor_extension` (String): Unique vendor storage identifier.
    - `rotational` (Boolean): True for a rotational device (HDD), otherwise false (SSD).
    - `name` (String): The name of the device, for example: `/dev/sdb1`. Use this property only for devices with persistent names.

    Note: If you specify more than one property, the device must match all of those properties.
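The all-properties-must-match rule in the note above can be illustrated with a trivial check. The hint values and device attributes below are made up for the example:

```shell
# Sketch: root device hints must ALL match for a disk to be selected.
# Hint values and device attributes are illustrative, not from a real node.
hint_size=128
hint_rotational=true
dev_size=128
dev_rotational=true
if [ "$dev_size" -eq "$hint_size" ] && [ "$dev_rotational" = "$hint_rotational" ]; then
  echo "device satisfies all root device hints"
else
  echo "device rejected"
fi
```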
- Inform the Bare Metal Provisioning service of the node network card by creating a port with the MAC address of the NIC on the provisioning network:

  ```
  $ openstack baremetal port create --node <node_uuid> <mac_address>
  ```

  - Replace `<node_uuid>` with the unique ID of the bare-metal node.
  - Replace `<mac_address>` with the MAC address of the NIC used to PXE boot.
- Validate the configuration of the node:

  ```
  $ openstack baremetal node validate <node>
  +------------+--------+---------------------------------------------+
  | Interface  | Result | Reason                                      |
  +------------+--------+---------------------------------------------+
  | boot       | False  | Cannot validate image information for node  |
  |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
  |            |        | because one or more parameters are missing  |
  |            |        | from its instance_info. Missing are:        |
  |            |        | ['ramdisk', 'kernel', 'image_source']       |
  | console    | None   | not supported                               |
  | deploy     | False  | Cannot validate image information for node  |
  |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
  |            |        | because one or more parameters are missing  |
  |            |        | from its instance_info. Missing are:        |
  |            |        | ['ramdisk', 'kernel', 'image_source']       |
  | inspect    | None   | not supported                               |
  | management | True   |                                             |
  | network    | True   |                                             |
  | power      | True   |                                             |
  | raid       | True   |                                             |
  | storage    | True   |                                             |
  +------------+--------+---------------------------------------------+
  ```

  The validation output `Result` indicates the following:

  - `False`: The interface failed validation. If the reason includes the missing `instance_info` parameters `['ramdisk', 'kernel', 'image_source']`, this might be because the Compute service populates those parameters at the beginning of the deployment process, so they have not been set at this point. If you are using a whole disk image, you might only need to set `image_source` to pass the validation.
  - `True`: The interface passed validation.
  - `None`: The interface is not supported for your driver.
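If you save the validation output to a file, standard text tools can pull out the interfaces that failed. The sample table below is an abbreviated, made-up copy of the output format, not real node data:

```shell
# Write an abbreviated sample of `openstack baremetal node validate` output,
# then print the names of the interfaces whose Result column is False.
cat <<'EOF' > validate.out
| Interface  | Result | Reason        |
| boot       | False  | missing info  |
| console    | None   | not supported |
| deploy     | False  | missing info  |
| management | True   |               |
EOF
awk -F'|' '$3 ~ /False/ {gsub(/ /, "", $2); print $2}' validate.out
```

With this sample input, the script prints `boot` and `deploy`.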
- Exit the `openstackclient` pod:

  ```
  $ exit
  ```
4.4. Deploying a bare-metal node with Redfish virtual media boot
You can use Redfish virtual media boot to supply a boot image to the Baseboard Management Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. The node can then boot from the virtual drive into the operating system that exists in the image.
Redfish hardware types support booting deploy, rescue, and user images over virtual media. The Bare Metal Provisioning service (ironic) uses kernel and ramdisk images associated with a node to build bootable ISO images for UEFI or BIOS boot modes at the moment of node deployment. The major advantage of virtual media boot is that you can eliminate the TFTP image transfer phase of PXE and use HTTP GET, or other methods, instead.
To launch bare-metal instances with the redfish hardware type over virtual media, set the boot interface of each bare-metal node to redfish-virtual-media and, for UEFI nodes, define the EFI System Partition (ESP) image. Then configure an enrolled node to use Redfish virtual media boot.
Prerequisites
- The bare-metal node is registered and enrolled.
- The IPA and instance images are available in the Image Service (glance).
- For UEFI nodes, an EFI system partition image (ESP) is available in the Image Service (glance).
Procedure
- Access the remote shell for the `OpenStackClient` pod from your workstation:

  ```
  $ oc rsh -n openstack openstackclient
  ```

- Set the Bare Metal Provisioning service boot interface to `redfish-virtual-media`:

  ```
  $ openstack baremetal node set --boot-interface redfish-virtual-media <node_name>
  ```

  Replace `<node_name>` with the name of the node.
- For UEFI nodes, define the EFI System Partition (ESP) image:

  ```
  $ openstack baremetal node set --driver-info bootloader=<esp_image> <node>
  ```

  - Replace `<esp_image>` with the image UUID or URL for the ESP image.
  - Replace `<node>` with the name of the node.

  Note: For BIOS nodes, do not complete this step.
- Create a port on the bare-metal node and associate the port with the MAC address of the NIC on the bare-metal node:

  ```
  $ openstack baremetal port create --pxe-enabled True --node <node_uuid> <mac_address>
  ```

  - Replace `<node_uuid>` with the UUID of the bare-metal node.
  - Replace `<mac_address>` with the MAC address of the NIC on the bare-metal node.
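Mistyped MAC addresses are a common cause of port-creation errors, so it can help to sanity-check the value before running the command. The regular expression below is a simple illustration, not an exhaustive validation, and the MAC value is made up:

```shell
# Sketch: verify a MAC address has the aa:bb:cc:dd:ee:ff shape before use.
mac="52:54:00:aa:bb:cc"   # illustrative value
if printf '%s\n' "$mac" | grep -Eq '^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$'; then
  echo "valid MAC"
else
  echo "invalid MAC"
fi
```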
- Exit the `openstackclient` pod:

  ```
  $ exit
  ```
4.5. Creating flavors for launching bare-metal instances
You must create flavors that your cloud users can use to request bare-metal instances. You can specify which bare-metal nodes should be used for bare-metal instances launched with a particular flavor by using a resource class. You can tag bare-metal nodes with resource classes that identify the hardware resources on the node, for example, GPUs. The cloud user can select a flavor with the GPU resource class to create an instance for a vGPU workload. The Compute scheduler uses the resource class to identify suitable host bare-metal nodes for instances.
Procedure
- Access the remote shell for the `OpenStackClient` pod from your workstation:

  ```
  $ oc rsh -n openstack openstackclient
  ```

- Retrieve a list of your nodes to identify their UUIDs:

  ```
  $ openstack baremetal node list
  ```

- Tag each bare-metal node with a custom bare-metal resource class:

  ```
  $ openstack baremetal node set \
    --resource-class baremetal.<CUSTOM> <node>
  ```

  - Replace `<CUSTOM>` with a string that identifies the purpose of the resource class. For example, set to `GPU` to create a custom GPU resource class that you can use to tag bare-metal nodes that you want to designate for GPU workloads.
  - Replace `<node>` with the ID of the bare-metal node.
- Create a flavor for bare-metal instances:

  ```
  $ openstack flavor create --id auto \
    --ram <ram_size_mb> --disk <disk_size_gb> \
    --vcpus <no_vcpus> baremetal
  ```

  - Replace `<ram_size_mb>` with the RAM of the bare-metal node, in MB.
  - Replace `<disk_size_gb>` with the size of the disk on the bare-metal node, in GB.
  - Replace `<no_vcpus>` with the number of CPUs on the bare-metal node.

  Note: These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size.
- Associate the flavor for bare-metal instances with the custom resource class:

  ```
  $ openstack flavor set \
    --property resources:CUSTOM_BAREMETAL_<CUSTOM>=1 \
    baremetal
  ```

  To determine the name of a custom resource class that corresponds to a resource class of a bare-metal node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix it with `CUSTOM_`.

  Note: A flavor can request only one instance of a bare-metal resource class.
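The conversion rule above can be sketched as a small shell helper. The function name `custom_rc_name` is my own, not part of any OpenStack tooling:

```shell
# Convert a bare-metal resource class to its placement custom resource class
# name: uppercase, punctuation replaced with underscores, CUSTOM_ prefix.
custom_rc_name() {
  printf 'CUSTOM_%s\n' "$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]' | tr -c 'A-Z0-9' '_')"
}

custom_rc_name baremetal.GPU   # prints CUSTOM_BAREMETAL_GPU
```

So a node tagged with `baremetal.GPU` is requested through the flavor property `resources:CUSTOM_BAREMETAL_GPU=1`.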
- Set the following flavor properties to prevent the Compute scheduler from using the bare-metal flavor properties to schedule instances:

  ```
  $ openstack flavor set \
    --property resources:VCPU=0 \
    --property resources:MEMORY_MB=0 \
    --property resources:DISK_GB=0 baremetal
  ```

- Verify that the new flavor has the correct values:

  ```
  $ openstack flavor list
  ```

- Exit the `openstackclient` pod:

  ```
  $ exit
  ```
4.6. Bare-metal node provisioning states
A bare-metal node transitions through several provisioning states during its lifetime. API requests and conductor events performed on the node initiate the transitions. There are two categories of provisioning states: "stable" and "in transition".
Use the following table to understand the node provisioning states and the actions you can perform to transition a node from one state to another.
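The main "happy path" through the state machine can be summarized before reading the full table. The sketch below only prints the transitions and the action that triggers each one; the node name `node-0` is hypothetical:

```shell
# Print the main provisioning-state transitions and their triggering actions.
# node-0 is an illustrative node name; these lines are printed, not executed.
for transition in \
  "enroll -> verifying -> manageable : openstack baremetal node manage node-0" \
  "manageable -> cleaning -> available : openstack baremetal node provide node-0" \
  "available -> deploying -> active : instance creation through the Compute service" \
  "active -> deleting -> cleaning -> available : instance deletion"
do
  echo "$transition"
done
```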
| State | Category | Description |
|---|---|---|
| `enroll` | Stable | The initial state of each node. For information on enrolling a node, see Adding physical machines as bare-metal nodes. |
| `verifying` | In transition | The Bare Metal Provisioning service validates that it can manage the node by using the `driver_info` credentials that you provided during enrollment. If validation succeeds, the node transitions to the `manageable` state. |
| `manageable` | Stable | The node is transitioned to the `manageable` state when the Bare Metal Provisioning service has verified that it can manage the node. You can transition the node from the `manageable` state to the `available` state with `openstack baremetal node provide`, or directly to the `active` state with the `adopt` action. You must move a node to the `manageable` state to perform administrative operations on it, such as cleaning or inspection. Move a node into the `manageable` state with the `openstack baremetal node manage` command. |
| `inspecting` | In transition | The Bare Metal Provisioning service uses node introspection to update the hardware-derived node properties to reflect the current state of the hardware. The node transitions to `manageable` when inspection is synchronous, or to `inspect wait` when inspection is asynchronous. |
| `inspect wait` | In transition | The provision state that indicates that an asynchronous inspection is in progress. If the node inspection is successful, the node transitions to the `manageable` state. |
| `inspect failed` | Stable | The provisioning state that indicates that the node inspection failed. You can transition the node from the `inspect failed` state by retrying the inspection with `openstack baremetal node inspect`, or by returning it to the `manageable` state with `openstack baremetal node manage`. |
| `cleaning` | In transition | Nodes in the `cleaning` state are being scrubbed and reprogrammed into a known configuration. When cleaning completes, the node transitions to the `available` state after automated cleaning, or to the `manageable` state after manual cleaning. |
| `clean wait` | In transition | Nodes in the `clean wait` state are waiting for an asynchronous part of cleaning to complete, for example, for the cleaning ramdisk to boot and run its clean steps. You can interrupt the cleaning process of a node in the `clean wait` state by running `openstack baremetal node abort`. |
| `available` | Stable | After nodes have been successfully preconfigured and cleaned, they are moved into the `available` state and are ready to be provisioned. |
| `deploying` | In transition | Nodes in the `deploying` state are being prepared for a workload. This process includes powering on the node, writing the instance image to disk, and configuring the node to boot from it. |
| `wait call-back` | In transition | Nodes in the `wait call-back` state are waiting for an asynchronous part of the deployment to complete, for example, for the deploy ramdisk to boot and contact the Bare Metal Provisioning service. You can interrupt the deployment of a node in the `wait call-back` state by running `openstack baremetal node delete` or `openstack baremetal node undeploy`. |
| `deploy failed` | Stable | The provisioning state that indicates that the node deployment failed. You can transition the node from the `deploy failed` state by redeploying it with `openstack baremetal node rebuild`, or by undeploying it with `openstack baremetal node undeploy`. |
| `active` | Stable | Nodes in the `active` state have a workload running on them. From the `active` state, you can undeploy the node, which moves it to the `deleting` state, or start a rescue operation, which moves it to the `rescuing` state. |
| `deleting` | In transition | When a node is in the `deleting` state, the Bare Metal Provisioning service tears down the active workload and removes instance information from the node. The node then transitions to the `cleaning` state. |
| `error` | Stable | If a node deletion is unsuccessful, the node is moved into the `error` state. You can retry the deletion from this state. |
| `adopting` | In transition | You can use the `adopt` action to transition a node with an existing workload directly from the `manageable` state to the `active` state without deploying the node. |
| `rescuing` | In transition | Nodes in the `rescuing` state are being prepared for a rescue operation, which boots the node into a rescue ramdisk. |
| `rescue wait` | In transition | Nodes in the `rescue wait` state are waiting for the rescue ramdisk to boot and report that it is ready. You can interrupt the rescue operation of a node in the `rescue wait` state by running `openstack baremetal node abort`. |
| `rescue failed` | Stable | The provisioning state that indicates that the node rescue failed. You can transition the node from the `rescue failed` state by retrying the rescue with `openstack baremetal node rescue`, by returning the node to its workload with `openstack baremetal node unrescue`, or by deleting the node. |
| `rescued` | Stable | Nodes in the `rescued` state are running a rescue ramdisk that you can use to access and repair the node. |
| `unrescuing` | In transition | Nodes in the `unrescuing` state are being restored from the rescue ramdisk to boot back into the previous workload. |
| `unrescue failed` | Stable | The provisioning state that indicates that the node unrescue operation failed. You can transition the node from the `unrescue failed` state by retrying the unrescue with `openstack baremetal node unrescue`, by rescuing the node again, or by deleting the node. |