Chapter 2. Configure Bare Metal Deployment
Configure Bare Metal Provisioning, the Image service, and Compute to enable bare metal deployment in the OpenStack environment. The following sections outline the additional configuration steps required to successfully deploy a bare metal node.
2.1. Create OpenStack Configurations for Bare Metal Provisioning Service
2.1.1. Configure OpenStack Networking
Configure OpenStack Networking to communicate with Bare Metal Provisioning for DHCP, PXE boot, and other requirements. The procedure below configures OpenStack Networking for a single, flat network use case for provisioning onto bare metal. The configuration uses the ML2 plug-in and the Open vSwitch agent.
Ensure that the network interface used for provisioning is not the same network interface that is used for remote connectivity on the OpenStack Networking node. This procedure creates a bridge using the Bare Metal Provisioning Network interface, and drops any remote connections.
All steps in the following procedure must be performed on the server hosting OpenStack Networking, while logged in as the root user.
Configuring OpenStack Networking to Communicate with Bare Metal Provisioning
Set up the shell to access Identity as the administrative user:
# source ~stack/overcloudrc
Create the flat network over which to provision bare metal instances:
# neutron net-create --tenant-id TENANT_ID sharednet1 --shared \
 --provider:network_type flat --provider:physical_network PHYSNET
Replace TENANT_ID with the unique identifier of the tenant on which to create the network. Replace PHYSNET with the name of the physical network. An example with concrete values follows.
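For example, assuming the network is owned by the service tenant and the physical network is named physnet1 (both illustrative values):
# TENANT_ID=$(openstack project show service -f value -c id)
# neutron net-create --tenant-id $TENANT_ID sharednet1 --shared \
 --provider:network_type flat --provider:physical_network physnet1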
Create the subnet on the flat network:
# neutron subnet-create sharednet1 NETWORK_CIDR --name SUBNET_NAME \
 --ip-version 4 --gateway GATEWAY_IP --allocation-pool \
 start=START_IP,end=END_IP --enable-dhcp
Replace the following values:
- Replace NETWORK_CIDR with the Classless Inter-Domain Routing (CIDR) representation of the block of IP addresses the subnet represents. The range from START_IP to END_IP must fall within the block of IP addresses specified by NETWORK_CIDR.
- Replace SUBNET_NAME with a name for the subnet.
- Replace GATEWAY_IP with the IP address or host name of the system that will act as the gateway for the new subnet. This address must be within the block of IP addresses specified by NETWORK_CIDR, but outside of the range from START_IP to END_IP.
- Replace START_IP with the IP address that denotes the start of the range within the new subnet from which floating IP addresses will be allocated.
- Replace END_IP with the IP address that denotes the end of that range. An example with concrete values follows this list.
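For example, with hypothetical addressing on a 192.168.100.0/24 provisioning network:
# neutron subnet-create sharednet1 192.168.100.0/24 --name baremetal-subnet \
 --ip-version 4 --gateway 192.168.100.1 --allocation-pool \
 start=192.168.100.20,end=192.168.100.100 --enable-dhcp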
Attach the network and subnet to the router to ensure the metadata requests are served by the OpenStack Networking service.
# neutron router-create ROUTER_NAME
Replace ROUTER_NAME with a name for the router.
Add the Bare Metal subnet as an interface on this router:
# neutron router-interface-add ROUTER_NAME BAREMETAL_SUBNET
Replace ROUTER_NAME with the name of your router and BAREMETAL_SUBNET with the ID or name of the subnet that you previously created. This allows the metadata requests from cloud-init to be served and the node to be configured.
Update the /etc/ironic/ironic.conf file on the Compute node running the Bare Metal Provisioning service to use the same network for the cleaning service. Log in to the Compute node where the Bare Metal Provisioning service is running and execute the following as the root user:
# openstack-config --set /etc/ironic/ironic.conf neutron cleaning_network_uuid NETWORK_UUID
Replace NETWORK_UUID with the ID of the Bare Metal Provisioning Network created in the previous steps.
Restart the Bare Metal Provisioning service:
# systemctl restart openstack-ironic-conductor.service
2.1.2. Create the Bare Metal Provisioning Flavor
You need to create a flavor to use as part of the deployment. The flavor specifications (memory, CPU, and disk) must be equal to or less than what your bare metal node provides.
Set up the shell to access Identity as the administrative user:
# source ~stack/overcloudrc
List existing flavors:
# openstack flavor list
Create a new flavor for the Bare Metal Provisioning service:
# openstack flavor create --id auto --ram RAM --vcpus VCPU --disk DISK --public baremetal
Replace RAM with the amount of RAM in MB, VCPU with the number of vCPUs, and DISK with the disk storage value in GB, as in the example below.
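For example, to match a node that provides 4096 MB of RAM, 2 CPUs, and a 40 GB disk (hypothetical hardware values):
# openstack flavor create --id auto --ram 4096 --vcpus 2 --disk 40 --public baremetal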
Verify that the new flavor is created with the respective values:
# openstack flavor list
2.1.3. Create the Bare Metal Images
Bare Metal Provisioning supports deploying whole-disk images or root partition images. A whole-disk image contains the partition table, kernel image, and final user image. A root partition image contains only the root partition of the operating system, and requires a kernel image and ramdisk image that the bare metal node uses to boot the final user image. All supported bare metal agent drivers can deploy whole-disk or root partition images.
A whole-disk image requires one image that contains the partition table, boot loader, and user image. Bare Metal Provisioning does not control the subsequent reboot of a node deployed with a whole-disk image, because the node supports local boot.
A root partition deployment requires two sets of images: a deploy image and a user image. Bare Metal Provisioning uses the deploy image to boot the node and copy the user image onto the bare metal node. After the deploy image is loaded into the Image service, you can use the bare metal node’s properties to associate the deploy image with the bare metal node, setting it as the boot image. A subsequent reboot of the node uses net-boot to pull down the user image.
This section uses a root partition image to provision bare metal nodes in the examples. For a whole-disk image deployment, see Section 3.3, “Create a Whole Windows Image”. For a partition-based deployment, you do not have to create the deploy image, because it was already used when the overcloud was deployed by the undercloud. The deploy image consists of two images, the kernel image and the ramdisk image:
ironic-python-agent.kernel
ironic-python-agent.initramfs
These images are in the /usr/share/rhosp-director-images/ironic-python-agent*.el7ost.tar file if you have the rhosp-director-images-ipa package installed.
Extract the images and load them to the Image service:
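For example, extract the archive first (the exact archive file name varies by version, so the glob below is illustrative), then load the two images with the commands that follow:
# tar -xf /usr/share/rhosp-director-images/ironic-python-agent*.el7ost.tar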
# openstack image create --container-format aki --disk-format aki --public --file ./ironic-python-agent.kernel bm-deploy-kernel
# openstack image create --container-format ari --disk-format ari --public --file ./ironic-python-agent.initramfs bm-deploy-ramdisk
The final image that you need is the actual image that will be deployed on the Bare Metal Provisioning node. For example, you can download a Red Hat Enterprise Linux KVM image, since it already includes cloud-init.
Load the image to the Image service:
# openstack image create --container-format bare --disk-format qcow2 --property kernel_id=DEPLOY_KERNEL_ID \
 --property ramdisk_id=DEPLOY_RAMDISK_ID --public --file ./IMAGE_FILE rhel
Where DEPLOY_KERNEL_ID is the UUID associated with the deploy-kernel image uploaded to the Image service, and DEPLOY_RAMDISK_ID is the UUID associated with the deploy-ramdisk image. Use openstack image list to find these UUIDs.
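For example, you can capture both UUIDs in shell variables, assuming the bm-deploy-kernel and bm-deploy-ramdisk image names used above:
# DEPLOY_KERNEL_ID=$(openstack image show bm-deploy-kernel -f value -c id)
# DEPLOY_RAMDISK_ID=$(openstack image show bm-deploy-ramdisk -f value -c id)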
2.1.4. Add the Bare Metal Provisioning Node to the Bare Metal Provisioning Service
To add the Bare Metal Provisioning node to the Bare Metal Provisioning service, copy the section of the instackenv.json file that was used to instantiate the cloud and modify it according to your needs.
Source the overcloudrc file and import the .json file:
# source ~stack/overcloudrc
# openstack baremetal import --json ./baremetal.json
Update the bare metal node in the Bare Metal Provisioning service to use the deployed images as the initial boot image by specifying deploy_kernel and deploy_ramdisk in the driver_info section of the node:
# ironic node-update NODE_UUID add driver_info/deploy_kernel=DEPLOY_KERNEL_ID driver_info/deploy_ramdisk=DEPLOY_RAMDISK_ID
Replace NODE_UUID with the UUID of the bare metal node. You can get this value by executing the ironic node-list command on the director node. Replace DEPLOY_KERNEL_ID with the ID of the deploy kernel image, and DEPLOY_RAMDISK_ID with the ID of the deploy ramdisk image. You can get both values by executing the glance image-list command on the director node.
2.1.5. Configure Proxy Services For Image Deployment
You can optionally configure a bare metal node to use Object Storage with HTTP or HTTPS proxy services to download images to the bare metal node. This allows you to cache images in the same physical network segments as the bare metal nodes to reduce overall network traffic and deployment time.
Before you Begin
Configure the proxy server with the following additional considerations:
- Use content caching, even for queries contained in the requested URL.
- Raise the maximum cache size to accommodate your image sizes.
Only configure the proxy server to store images as unencrypted if the images do not contain sensitive information.
Configure Image Proxy
Set up the shell to access Identity as the administrative user:
# source ~stack/overcloudrc
Configure the bare metal node driver to use HTTP or HTTPS:
# openstack baremetal node set NODE_UUID \
 --driver-info image_https_proxy=https://PROXYIP:PORT
This example uses the HTTPS protocol. Set the driver_info/image_http_proxy parameter instead if you want to use HTTP.
Set Bare Metal Provisioning to reuse cached Object Storage temporary URLs when an image is requested:
# openstack-config --set /etc/ironic/ironic.conf glance swift_temp_url_cache_enabled=true
Reuse is necessary because each newly generated temporary URL contains query parameters that change with every request; when the query part of the URL differs each time, the proxy server cannot create a cache entry for the same image.
Set the duration (in seconds) that the generated temporary URL remains valid:
# openstack-config --set /etc/ironic/ironic.conf glance swift_temp_url_duration=DURATION
Only non-expired links to images are returned from the Object Storage service temporary URL cache. For example, if swift_temp_url_duration=1200, the proxy server caches a new copy of the image after 20 minutes. The value of this option must be greater than or equal to swift_temp_url_expected_download_start_delay.
Set the download start delay (in seconds) for your hardware:
# openstack-config --set /etc/ironic/ironic.conf glance swift_temp_url_expected_download_start_delay=DELAY
Set DELAY to cover the time between when the deploy request is made (when the temporary URL is generated) and when the URL is used to download an image to the bare metal node. This delay allows time for the ramdisk to boot and begin the image download, and determines whether a cached entry is still valid when the download starts.
2.1.6. Deploy the Bare Metal Provisioning Node
Deploy the Bare Metal Provisioning node using the nova boot command:
# nova boot --image BAREMETAL_USER_IMAGE --flavor BAREMETAL_FLAVOR --nic net-id=IRONIC_NETWORK_ID --key default MACHINE_HOSTNAME
Replace BAREMETAL_USER_IMAGE with the image that was loaded to the Image service, BAREMETAL_FLAVOR with the flavor for the bare metal deployment, IRONIC_NETWORK_ID with the ID of the Bare Metal Provisioning Network in the OpenStack Networking service, and MACHINE_HOSTNAME with the hostname to assign to the machine after it is deployed. An example follows.
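For example, using the rhel image, the baremetal flavor, and the sharednet1 network created earlier in this chapter (the key pair name default and the hostname bm-node-01 are assumptions):
# NET_ID=$(openstack network show sharednet1 -f value -c id)
# nova boot --image rhel --flavor baremetal --nic net-id=$NET_ID --key default bm-node-01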
2.2. Configure Hardware Inspection
Hardware inspection allows Bare Metal Provisioning to discover hardware information on a node. Inspection also creates ports for the discovered Ethernet MAC addresses. Alternatively, you can manually add hardware details to each node; see Section 2.3.2, “Add a Node Manually” for more information. All steps in the following procedure must be performed on the server hosting the Bare Metal Provisioning conductor service, while logged in as the root user.
Hardware inspection is supported in-band using the following drivers:
- pxe_drac
- pxe_ipmitool
- pxe_ssh
- pxe_amt
Configuring Hardware Inspection
- Obtain the Ironic Python Agent kernel and ramdisk images used for bare metal system discovery over PXE boot. These images are available in a TAR archive labeled Ironic Python Agent Image for RHOSP director 9.0 at https://access.redhat.com/downloads/content/191/ver=9/rhel---7/9/x86_64/product-software. Download the TAR archive, extract the image files (ironic-python-agent.kernel and ironic-python-agent.initramfs) from it, and copy them to the /tftpboot directory on the TFTP server.
On the server that will host the hardware inspection service, enable the Red Hat OpenStack Platform 9 director for RHEL 7 (RPMs) channel:
# subscription-manager repos --enable=rhel-7-server-openstack-9-director-rpms
Install the openstack-ironic-inspector package:
# yum install openstack-ironic-inspector
Enable inspection in the ironic.conf file:
# openstack-config --set /etc/ironic/ironic.conf \
 inspector enabled True
If the hardware inspection service is hosted on a separate server, set its URL on the server hosting the conductor service:
# openstack-config --set /etc/ironic/ironic.conf \
 inspector service_url http://INSPECTOR_IP:5050
Replace INSPECTOR_IP with the IP address or host name of the server hosting the hardware inspection service.
Provide the hardware inspection service with authentication credentials:
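A minimal sketch, assuming the ironic section option names (os_auth_url, os_username, os_password, os_tenant_name) that ironic-inspector uses in this release:
# openstack-config --set /etc/ironic-inspector/inspector.conf \
 ironic os_auth_url http://IDENTITY_IP:5000/v2.0
# openstack-config --set /etc/ironic-inspector/inspector.conf \
 ironic os_username ironic
# openstack-config --set /etc/ironic-inspector/inspector.conf \
 ironic os_password PASSWORD
# openstack-config --set /etc/ironic-inspector/inspector.conf \
 ironic os_tenant_name service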
Replace the following values:
- Replace IDENTITY_IP with the IP address or host name of the Identity server.
- Replace PASSWORD with the password that Bare Metal Provisioning uses to authenticate with Identity.
Optionally, set the hardware inspection service to store logs for the ramdisk:
# openstack-config --set /etc/ironic-inspector/inspector.conf \
 processing ramdisk_logs_dir /var/log/ironic-inspector/ramdisk
Optionally, enable an additional data processing plug-in that gathers block devices on bare metal machines with multiple local disks and exposes root devices. ramdisk_error, root_disk_selection, scheduler, and validate_interfaces are enabled by default and should not be disabled. The following command adds root_device_hint to the list:
# openstack-config --set /etc/ironic-inspector/inspector.conf \
 processing processing_hooks '$default_processing_hooks,root_device_hint'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Generate the initial
ironic inspector
database:ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
# ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Update the inspector database file to be owned by ironic-inspector:
# chown ironic-inspector /var/lib/ironic-inspector/inspector.sqlite
Open the /etc/ironic-inspector/dnsmasq.conf file in a text editor, and configure the following PXE boot settings for the openstack-ironic-inspector-dnsmasq service:
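A minimal sketch, assuming TFTP is served from /tftpboot with pxelinux.0 as the boot file (adjust to your environment):
port=0
interface=INTERFACE
bind-interfaces
dhcp-range=START_IP,END_IP
enable-tftp
tftp-root=/tftpboot
dhcp-boot=pxelinux.0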
Replace the following values:
- Replace INTERFACE with the name of the Bare Metal Provisioning Network interface.
- Replace START_IP with the IP address that denotes the start of the range of IP addresses from which floating IP addresses will be allocated.
- Replace END_IP with the IP address that denotes the end of the range of IP addresses from which floating IP addresses will be allocated.
Copy the syslinux bootloader to the tftp directory:
# cp /usr/share/syslinux/pxelinux.0 /tftpboot/pxelinux.0
Optionally, you can configure the hardware inspection service to store metadata, using the swift section of the /etc/ironic-inspector/inspector.conf file:
[swift]
username = ironic
password = PASSWORD
tenant_name = service
os_auth_url = http://IDENTITY_IP:5000/v2.0
Replace the following values:
- Replace IDENTITY_IP with the IP address or host name of the Identity server.
- Replace PASSWORD with the password that Bare Metal Provisioning uses to authenticate with Identity.
Open the /tftpboot/pxelinux.cfg/default file in a text editor, and configure the following options:
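A sketch of the expected contents, assuming the Ironic Python Agent images copied to /tftpboot earlier and the inspector's /v1/continue callback endpoint:
default discover

label discover
kernel ironic-python-agent.kernel
append initrd=ironic-python-agent.initramfs \
ipa-inspection-callback-url=http://INSPECTOR_IP:5050/v1/continue

ipappend 3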
Replace INSPECTOR_IP with the IP address or host name of the server hosting the hardware inspection service. Note that the text from append to /continue must be on a single line, as indicated by the \ in the block above.
Reset the security context for the /tftpboot/ directory and its files:
# restorecon -R /tftpboot/
This step ensures that the directory has the correct SELinux security labels, and that the dnsmasq service is able to access the directory.
Start the hardware inspection service and the dnsmasq service, and configure them to start at boot time:
# systemctl start openstack-ironic-inspector.service
# systemctl enable openstack-ironic-inspector.service
# systemctl start openstack-ironic-inspector-dnsmasq.service
# systemctl enable openstack-ironic-inspector-dnsmasq.service
Hardware inspection can be used on nodes after they have been registered with Bare Metal Provisioning.
2.3. Add Physical Machines as Bare Metal Nodes
Add as nodes the physical machines onto which you will provision instances, and confirm that Compute can see the available hardware. Compute is not immediately notified of new resources, because its resource tracker synchronizes periodically; changes become visible after the next periodic task runs. The period, scheduler_driver_task_period, can be updated in /etc/nova/nova.conf. The default period is 60 seconds.
After systems are registered as bare metal nodes, hardware details can be discovered using hardware inspection, or added manually.
2.3.1. Add a Node with Hardware Inspection
Register a physical machine as a bare metal node, then use openstack-ironic-inspector to detect the node’s hardware details and create ports for each of its Ethernet MAC addresses. All steps in the following procedure must be performed on the server hosting the Bare Metal Provisioning conductor service, while logged in as the root user.
Adding a Node with Hardware Inspection
Set up the shell to use Identity as the administrative user:
# source ~/keystonerc_admin
Add a new node:
# ironic node-create -d DRIVER_NAME
Replace DRIVER_NAME with the name of the driver that Bare Metal Provisioning will use to provision this node. You must have enabled this driver in the /etc/ironic/ironic.conf file. To create a node, you must, at a minimum, specify the driver name.
Important: Note the unique identifier for the node.
You can refer to a node by a logical name or by its UUID. Optionally assign a logical name to the node:
# ironic node-update NODE_UUID add name=NAME
Replace NODE_UUID with the unique identifier for the node. Replace NAME with a logical name for the node.
Determine the node information that is required by the driver, then update the node driver information to allow Bare Metal Provisioning to manage the node:
# ironic driver-properties DRIVER_NAME
# ironic node-update NODE_UUID add \
 driver_info/PROPERTY=VALUE \
 driver_info/PROPERTY=VALUE
Replace the following values:
- Replace DRIVER_NAME with the name of the driver for which to show properties. The information is not returned unless the driver has been enabled in the /etc/ironic/ironic.conf file.
- Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
- Replace PROPERTY with a required property returned by the ironic driver-properties command.
- Replace VALUE with a valid value for that property. An example for the pxe_ipmitool driver follows this list.
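For example, for the pxe_ipmitool driver (the IPMI address and credentials shown are hypothetical):
# ironic node-update NODE_UUID add \
 driver_info/ipmi_address=192.168.100.201 \
 driver_info/ipmi_username=admin \
 driver_info/ipmi_password=IPMI_PASSWORD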
Specify the deploy kernel and deploy ramdisk for the node driver:
# ironic node-update NODE_UUID add \
 driver_info/deploy_kernel=KERNEL_UUID \
 driver_info/deploy_ramdisk=INITRAMFS_UUID
Replace the following values:
- Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
- Replace KERNEL_UUID with the unique identifier for the .kernel image that was uploaded to the Image service.
- Replace INITRAMFS_UUID with the unique identifier for the .initramfs image that was uploaded to the Image service.
Configure the node to reboot after initial deployment from a local boot loader installed on the node’s disk, instead of via PXE or virtual media. The local boot capability must also be set on the flavor used to provision the node. To enable local boot, the image used to deploy the node must contain grub2. Configure local boot:
# ironic node-update NODE_UUID add \
 properties/capabilities="boot_option:local"
Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
Move the bare metal node to the manageable state:
# ironic node-set-provision-state NODE_UUID manage
Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
Start inspection:
# openstack baremetal introspection start NODE_UUID --discoverd-url http://OVERCLOUD_IP:5050
- Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name. The node discovery and inspection process must run to completion before the node can be provisioned. To check the status of node inspection, run ironic node-list and look for Provision State. Nodes will be in the available state after successful inspection.
- Replace OVERCLOUD_IP with the service_url value that was previously set in ironic.conf.
Validate the node’s setup:
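For example, using the ironic node-validate command:
# ironic node-validate NODE_UUID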
Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name. The output of the command should report either True or None for each interface. Interfaces marked None are those that you have not configured, or those that are not supported for your driver.
2.3.2. Add a Node Manually
Register a physical machine as a bare metal node, then manually add its hardware details and create ports for each of its Ethernet MAC addresses. All steps in the following procedure must be performed on the server hosting the Bare Metal Provisioning conductor service, while logged in as the root user.
Adding a Node without Hardware Inspection
Set up the shell to use Identity as the administrative user:
# source ~/keystonerc_admin
Add a new node:
# ironic node-create -d DRIVER_NAME
Replace DRIVER_NAME with the name of the driver that Bare Metal Provisioning will use to provision this node. You must have enabled this driver in the /etc/ironic/ironic.conf file. To create a node, you must, at a minimum, specify the driver name.
Important: Note the unique identifier for the node.
You can refer to a node by a logical name or by its UUID. Optionally assign a logical name to the node:
# ironic node-update NODE_UUID add name=NAME
Replace NODE_UUID with the unique identifier for the node. Replace NAME with a logical name for the node.
Determine the node information that is required by the driver, then update the node driver information to allow Bare Metal Provisioning to manage the node:
# ironic driver-properties DRIVER_NAME
# ironic node-update NODE_UUID add \
 driver_info/PROPERTY=VALUE \
 driver_info/PROPERTY=VALUE
Replace the following values:
- Replace DRIVER_NAME with the name of the driver for which to show properties. The information is not returned unless the driver has been enabled in the /etc/ironic/ironic.conf file.
- Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
- Replace PROPERTY with a required property returned by the ironic driver-properties command.
- Replace VALUE with a valid value for that property.
Specify the deploy kernel and deploy ramdisk for the node driver:
# ironic node-update NODE_UUID add \
 driver_info/deploy_kernel=KERNEL_UUID \
 driver_info/deploy_ramdisk=INITRAMFS_UUID
Replace the following values:
- Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
- Replace KERNEL_UUID with the unique identifier for the .kernel image that was uploaded to the Image service.
- Replace INITRAMFS_UUID with the unique identifier for the .initramfs image that was uploaded to the Image service.
Update the node’s properties to match the hardware specifications on the node:
# ironic node-update NODE_UUID add \
 properties/cpus=CPU \
 properties/memory_mb=RAM_MB \
 properties/local_gb=DISK_GB \
 properties/cpu_arch=ARCH
Replace the following values:
- Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
- Replace CPU with the number of CPUs to use.
- Replace RAM_MB with the RAM (in MB) to use.
- Replace DISK_GB with the disk size (in GB) to use.
- Replace ARCH with the architecture type to use. An example with concrete values follows this list.
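For example, for a node with 4 CPUs, 8192 MB of RAM, a 120 GB disk, and the x86_64 architecture (hypothetical hardware values):
# ironic node-update NODE_UUID add \
 properties/cpus=4 \
 properties/memory_mb=8192 \
 properties/local_gb=120 \
 properties/cpu_arch=x86_64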
Configure the node to reboot after initial deployment from a local boot loader installed on the node’s disk, instead of via PXE or virtual media. The local boot capability must also be set on the flavor used to provision the node. To enable local boot, the image used to deploy the node must contain grub2. Configure local boot:
# ironic node-update NODE_UUID add \
 properties/capabilities="boot_option:local"
Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
Inform Bare Metal Provisioning of the network interface cards on the node. Create a port with each NIC’s MAC address:
# ironic port-create -n NODE_UUID -a MAC_ADDRESS
Replace NODE_UUID with the unique identifier for the node. Replace MAC_ADDRESS with the MAC address for a NIC on the node.
Validate the node’s setup:
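For example, using the ironic node-validate command:
# ironic node-validate NODE_UUID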
Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name. The output of the command should report either True or None for each interface. Interfaces marked None are those that you have not configured, or those that are not supported for your driver.
2.3.3. Configure Manual Node Cleaning
When a bare metal server is initially provisioned, or reprovisioned after being freed from a workload, Bare Metal Provisioning can automatically clean the server to ensure that it is ready for another workload. You can also initiate a manual cleaning cycle when the server is in the manageable state. Manual cleaning cycles are useful for long-running or destructive tasks. You can configure the specific cleaning steps for the bare metal server.
Configure a Cleaning Network
Bare Metal Provisioning uses the cleaning network to provide in-band cleaning steps for the bare metal server. You can create a separate network for this cleaning network or use the provisioning network.
To configure the Bare Metal Provisioning service cleaning network, follow these steps:
Set up the shell to access Identity as the administrative user:
# source ~stack/overcloudrc
Find the network UUID for the network you want Bare Metal Provisioning to use for cleaning bare metal servers:
# openstack network list
Select the UUID from the id field in the openstack network list output.
Set cleaning_network_uuid in the /etc/ironic/ironic.conf file to the cleaning network UUID:
# openstack-config --set /etc/ironic/ironic.conf neutron cleaning_network_uuid CLEANING_NETWORK_UUID
Replace CLEANING_NETWORK_UUID with the network ID retrieved in the earlier step.
Restart the Bare Metal Provisioning service:
# systemctl restart openstack-ironic-conductor
Configure Manual Cleaning
Ensure that the bare metal server is in the manageable state.
# ironic node-set-provision-state NODE_ID manage
Replace NODE_ID with the bare metal server UUID or node name.
Set the bare metal server to the cleaning state and provide the cleaning steps:
# ironic node-set-provision-state NODE_ID clean --clean-steps CLEAN_STEPS
Replace NODE_ID with the bare metal server UUID or node name. Replace CLEAN_STEPS with the cleaning steps in JSON format, a path to a file that contains the cleaning steps, or a dash (-) to read them from standard input. The following is an example of cleaning steps in JSON format:
'[{"interface": "deploy", "step": "erase_devices"}]'
See OpenStack - Node Cleaning for more details.
2.3.4. Specify the Preferred Root Disk on a Bare Metal Node
When the deploy ramdisk boots on a bare metal node, the first disk that Bare Metal Provisioning discovers becomes the root device (the device where the image is saved). If the bare metal node has more than one SATA, SCSI, or IDE disk controller, the order in which their corresponding disk devices are added is arbitrary and may change at each reboot. For example, devices such as /dev/sda and /dev/sdb may switch on each boot, which would result in Bare Metal Provisioning selecting a different disk each time the bare metal node is deployed.
With disk hints, you can pass hints to the deploy ramdisk to specify which disk device Bare Metal Provisioning should deploy the image onto. The following table describes the hints you can use to select a preferred root disk on a bare metal node.
Hint | Type | Description
---|---|---
model | (STRING) | Disk device identifier.
vendor | (STRING) | Disk device vendor.
serial | (STRING) | Disk device serial number.
size | (INT) | Size of the disk device in GB.
wwn | (STRING) | Unique storage identifier.
wwn_with_extension | (STRING) | Unique storage identifier with the vendor extension appended.
wwn_vendor_extension | (STRING) | Unique vendor storage identifier.
name | (STRING) | The device name, for example /dev/md0. WARNING: The root device hint name should only be used for devices with constant names (for example, RAID volumes). Do not use this hint for SATA, SCSI, and IDE disk controllers because the order in which the device nodes are added in Linux is arbitrary, resulting in devices such as /dev/sda and /dev/sdb switching at boot time. See Persistent Naming for details.
To associate one or more disk device hints with a bare metal node, update the node’s properties with a root_device key, for example:
# ironic node-update <node-uuid> add properties/root_device='{"wwn": "0x4000cca77fc4dba1"}'
This example guarantees that Bare Metal Provisioning picks the disk device that has the wwn equal to the specified WWN value, or fails the deployment if no disk device on that node has the specified WWN value.
The hints can have an operator at the beginning of the value string. If no operator is specified, the default is == (for numerical values) and s== (for string values).
Type | Operator | Description
---|---|---
numerical | = | equal to or greater than (equivalent to >=; kept for legacy reasons)
numerical | == | equal to
numerical | != | not equal to
numerical | >= | greater than or equal to
numerical | > | greater than
numerical | <= | less than or equal to
numerical | < | less than
string (python comparisons) | s== | equal to
string (python comparisons) | s!= | not equal to
string (python comparisons) | s>= | greater than or equal to
string (python comparisons) | s> | greater than
string (python comparisons) | s<= | less than or equal to
string (python comparisons) | s< | less than
string (python comparisons) | <in> | substring
collections | <all-in> | all elements contained in collection
collections | <or> | find one of these
The following examples show how to update bare metal node properties to select a particular disk:
- Find a non-rotational (SSD) disk greater than or equal to 60 GB:
# ironic node-update <node-uuid> add properties/root_device='{"size": ">= 60", "rotational": false}'
- Find a Samsung or Winsys disk:
# ironic node-update <node-uuid> add properties/root_device='{"vendor": "<or> samsung <or> winsys"}'
If multiple hints are specified, a disk device must satisfy all the hints.
2.4. Use Host Aggregates to Separate Physical and Virtual Machine Provisioning
Host aggregates are used by OpenStack Compute to partition availability zones and to group nodes with specific shared properties together. Key-value pairs are set both on the host aggregate and on instance flavors to define these properties. When an instance is provisioned, Compute’s scheduler compares the key-value pairs on the flavor with the key-value pairs assigned to host aggregates, and ensures that the instance is provisioned in the correct aggregate and on the correct host: either on a physical machine or as a virtual machine on an openstack-nova-compute node.
If your Red Hat OpenStack Platform environment is set up to provision both bare metal machines and virtual machines, use host aggregates to direct instances to spawn as either physical machines or virtual machines. The procedure below creates a host aggregate for bare metal hosts, and adds a key-value pair specifying that the host type is baremetal. Any bare metal node grouped in this aggregate inherits this key-value pair. The same key-value pair is then added to the flavor that will be used to provision the instance.
If the image or images you will use to provision bare metal machines were uploaded to the Image service with the hypervisor_type=ironic property set, the scheduler will also use that key-value pair in its scheduling decision. To ensure effective scheduling in situations where image properties may not apply, set up host aggregates in addition to setting image properties. See Section 2.1.3, “Create the Bare Metal Images” for more information on building and uploading images.
Creating a Host Aggregate for Bare Metal Provisioning
Create the host aggregate for baremetal in the default nova availability zone:
# nova aggregate-create baremetal nova
Set metadata on the baremetal aggregate that will assign the hypervisor_type=ironic property to hosts added to the aggregate:
# nova aggregate-set-metadata baremetal hypervisor_type=ironic
Add the openstack-nova-compute node with Bare Metal Provisioning drivers to the baremetal aggregate:
# nova aggregate-add-host baremetal COMPUTE_HOSTNAME
Replace COMPUTE_HOSTNAME with the host name of the system hosting the openstack-nova-compute service. A single, dedicated compute host should be used to handle all Bare Metal Provisioning requests.
Add the ironic hypervisor property to the flavor or flavors that you have created for provisioning bare metal nodes:
# nova flavor-key FLAVOR_NAME set hypervisor_type="ironic"
Replace FLAVOR_NAME with the name of the flavor.
Add the following Compute filter scheduler to the existing list under scheduler_default_filters in /etc/nova/nova.conf:
AggregateInstanceExtraSpecsFilter
This filter ensures that the Compute scheduler processes the key-value pairs assigned to host aggregates. An example follows.
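For example, with openstack-config (the filter names shown are common defaults for this release and are assumptions; preserve your existing list and append the new filter):
# openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters \
 RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateInstanceExtraSpecsFilter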
2.5. Example: Test Bare Metal Provisioning with SSH and Virsh
Test the Bare Metal Provisioning setup by deploying instances on two virtual machines acting as bare metal nodes on a single physical host. Both virtual machines are virtualized using libvirt and virsh.
The SSH driver is for testing and evaluation purposes only. It is not recommended for Red Hat OpenStack Platform enterprise environments.
This scenario requires the following resources:
- A Red Hat OpenStack Platform environment with Bare Metal Provisioning services configured on an overcloud node. You must have completed all steps in this guide.
- One bare metal machine with Red Hat Enterprise Linux 7.2 and libvirt virtualization tools installed. This system acts as the host containing the virtualized bare metal nodes.
- One network connection between the Bare Metal Provisioning node and the host containing the virtualized bare metal nodes. This network acts as the Bare Metal Provisioning Network.
2.5.1. Create the Virtualized Bare Metal Nodes
Create two virtual machines that will act as the bare metal nodes in the test scenario. The nodes will be referred to as Node1 and Node2.
Creating Virtualized Bare Metal Nodes
- Access the Virtual Machine Manager from the libvirt host.
Create two virtual machines with the following configuration:
- 1 vCPU
- 2048 MB of memory
- Network Boot (PXE)
- 20 GB storage
- Network source: Host device eth0: macvtap, with Source mode: Bridge. Selecting macvtap sets the virtual machines to share the host’s Ethernet network interface, so the Bare Metal Provisioning node has direct access to the virtualized nodes.
- Shut down both virtual machines.
2.5.2. Create an SSH Key Pair
Create an SSH key pair that will allow the Bare Metal Provisioning node to connect to the libvirt host.
Creating an SSH Key Pair
On the Bare Metal Provisioning node, create a new SSH key:
# ssh-keygen -t rsa -b 2048 -C "user@domain.com" -f ./virtkey
Replace user@domain.com with an email address or other comment that identifies this key. When the command prompts you for a passphrase, press Enter to proceed without a passphrase. The command creates two files: the private key (virtkey) and the public key (virtkey.pub).
Copy the contents of the public key into the /root/.ssh/authorized_keys file of the libvirt host’s root user:
# ssh-copy-id -i virtkey root@LIBVIRT_HOST
Replace LIBVIRT_HOST with the IP address or host name of the libvirt host.
The private key (virtkey) is used when the nodes are registered.
2.5.3. Add the Virtualized Nodes as Bare Metal Nodes
Add as nodes the virtual machines onto which you will provision instances. In this example, the driver details are provided manually and the node details are discovered using hardware inspection. Node details can also be added manually on a node-by-node basis. See Section 2.3.2, “Add a Node Manually” for more information.
Adding Virtualized Nodes as Bare Metal Nodes
On the Bare Metal Provisioning conductor service node, enable the pxe_ssh driver:
# openstack-config --set /etc/ironic/ironic.conf \
 DEFAULT enabled_drivers pxe_ssh
If you are adding pxe_ssh to a list of existing drivers, open the file and add the driver to the list in enabled_drivers, separated by a comma.
Set up the shell to use Identity as the administrative user:
# source ~/keystonerc_admin
Add the first node, and register the SSH details for the libvirt host:
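A sketch of the registration command, assuming the pxe_ssh driver_info keys and virsh as the SSH virtualization type:
# ironic node-create -d pxe_ssh -n Node1 \
 -i ssh_virt_type=virsh \
 -i ssh_address=LIBVIRT_HOST_IP \
 -i ssh_username=root \
 -i ssh_key_filename=VIRTKEY_FILE_PATH \
 -i deploy_kernel=KERNEL_UUID \
 -i deploy_ramdisk=INITRAMFS_UUID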
Replace the following values:
- Replace VIRTKEY_FILE_PATH with the absolute file path of the virtkey SSH private key file.
- Replace LIBVIRT_HOST_IP with the IP address or host name of the libvirt host.
- Replace KERNEL_UUID with the unique identifier for the .kernel image that was uploaded to the Image service.
- Replace INITRAMFS_UUID with the unique identifier for the .initramfs image that was uploaded to the Image service.
- Add a second node, using the same command as above, replacing Node1 with Node2.
Configure the nodes to reboot after initial deployment from a local boot loader installed on the node’s disk, instead of via PXE or virtual media. The local boot capability must also have been set on the flavor you will use to provision the nodes. To enable local boot, the image used to deploy the nodes must contain grub2. Configure local boot:
# ironic node-update Node1 add \
 properties/capabilities="boot_option:local"
# ironic node-update Node2 add \
 properties/capabilities="boot_option:local"
Move the nodes to the manageable state:
# ironic node-set-provision-state Node1 manage
# ironic node-set-provision-state Node2 manage
Start inspection on the nodes:
# ironic node-set-provision-state Node1 inspect
# ironic node-set-provision-state Node2 inspect
The node discovery and inspection process must run to completion before the nodes can be provisioned. To check the status of node inspection, run ironic node-list and look for Provision State. Nodes will be in the available state after successful inspection.
Validate the nodes’ setup:
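For example, using the ironic node-validate command:
# ironic node-validate Node1
# ironic node-validate Node2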
The output of the command should report either True or None for each interface. Interfaces marked None are those that you have not configured, or those that are not supported for your driver.
- When the nodes have been successfully added, launch two instances using Chapter 3, Launch Bare Metal Instances.