Chapter 3. Deploying an Overcloud with the Bare Metal Service


For full details about overcloud deployment with the director, see Director Installation and Usage. This chapter only covers deployment steps specific to ironic.

3.1. Creating the Ironic Template

Use an environment file to deploy the overcloud with the Bare Metal service enabled. A template is located on the director node at /usr/share/openstack-tripleo-heat-templates/environments/services-docker/ironic.yaml.

Filling in the template

Additional configuration can be specified either in the provided template or in an additional yaml file, for example ~/templates/ironic.yaml.

  • For a hybrid deployment with both bare metal and virtual instances, you must add AggregateInstanceExtraSpecsFilter to the list of NovaSchedulerDefaultFilters. If you have not set NovaSchedulerDefaultFilters anywhere, you can do so in ironic.yaml. For an example, see Section 3.3, “Example Templates”.

    Note

    If you are using SR-IOV, NovaSchedulerDefaultFilters is already set in tripleo-heat-templates/environments/neutron-sriov.yaml. Append AggregateInstanceExtraSpecsFilter to this list.

  • The type of cleaning that occurs before and between deployments is set by IronicCleaningDiskErase. By default, this is set to full by puppet/services/ironic-conductor.yaml. Setting it to metadata can substantially speed up cleaning, because only the partition table is erased. However, because this makes deployments less secure in a multi-tenant environment, set it to metadata only in a trusted tenant environment.
  • You can add drivers with the IronicEnabledDrivers parameter. By default, pxe_ipmitool, pxe_drac, and pxe_ilo are enabled. A short example follows this list.
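
For example, the following is a minimal sketch that sets IronicEnabledDrivers explicitly to the default list in ~/templates/ironic.yaml. To enable an additional driver, append it to this list (see Appendix A, Bare Metal Drivers for the supported drivers):

~/templates/ironic.yaml

parameter_defaults:
  IronicEnabledDrivers:
    - pxe_ipmitool
    - pxe_drac
    - pxe_ilo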

For a full list of configuration parameters, see the section Bare Metal in the Overcloud Parameters guide.

3.2. Network Configuration

Create a bridge called br-baremetal for ironic to use. You can specify this in an additional template:

~/templates/network-environment.yaml

parameter_defaults:
  NeutronBridgeMappings: datacentre:br-ex,baremetal:br-baremetal
  NeutronFlatNetworks: datacentre,baremetal

You can either configure this bridge in the provisioning network (control plane) of the controllers, so that you can reuse this network as the bare metal network, or add a dedicated network. The configuration requirements are the same in both cases. However, the bare metal network cannot be VLAN-tagged, because it is used for provisioning.

~/templates/nic-configs/controller.yaml

network_config:
    -
      type: ovs_bridge
      name: br-baremetal
      use_dhcp: false
      members:
        -
          type: interface
          name: eth1

3.3. Example Templates

The following is an example template file. This file may not meet the requirements of your environment. Before using this example, make sure it does not interfere with any existing configuration in your environment.

~/templates/ironic.yaml

parameter_defaults:

    NovaSchedulerDefaultFilters:
        - RetryFilter
        - AggregateInstanceExtraSpecsFilter
        - AvailabilityZoneFilter
        - RamFilter
        - DiskFilter
        - ComputeFilter
        - ComputeCapabilitiesFilter
        - ImagePropertiesFilter

    IronicCleaningDiskErase: metadata

In this example:

  • The AggregateInstanceExtraSpecsFilter allows both virtual and bare metal instances, for a hybrid deployment.
  • Disk cleaning that is done before and between deployments only erases the partition table (metadata).

3.4. Deploying the Overcloud

To enable the Bare Metal service, include your ironic environment files with -e when deploying or redeploying the overcloud, along with the rest of your overcloud configuration.

For example:

$ openstack overcloud deploy \
  --templates \
  -e ~/templates/node-info.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/ironic.yaml \
  -e ~/templates/ironic.yaml

For more information about deploying the overcloud, see Creating the Overcloud with the CLI Tools and Including Environment Files in Overcloud Creation.

3.5. Testing the Bare Metal Service

You can use the OpenStack Integration Test Suite to validate your Red Hat OpenStack deployment. For more information, see the OpenStack Integration Test Suite Guide.

Additional Ways to Verify the Bare Metal Service:

  1. Set up the shell to access Identity as the administrative user:

    $ source ~/overcloudrc
  2. Check that the nova-compute service is running on the controller nodes:

    $ openstack compute service list -c Binary -c Host -c Status
  3. If you have changed the default ironic drivers, make sure the required drivers are enabled:

    $ openstack baremetal driver list
  4. Ensure that the ironic endpoints are listed:

    $ openstack catalog list
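
     To show only the Bare Metal service endpoints, you can filter on the service type (a sketch; this assumes the Bare Metal service is registered with the baremetal service type):

    $ openstack catalog show baremetal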

Configuring the Bare Metal Service After Deployment

This section describes the steps necessary to configure your overcloud after deployment.

3.6. Configuring OpenStack Networking

Configure OpenStack Networking to communicate with the Bare Metal service for DHCP, PXE boot, and other requirements. The procedure below configures OpenStack Networking for a single, flat network use case for provisioning onto bare metal. The configuration uses the ML2 plug-in and the Open vSwitch agent. Only flat networks are supported.

This procedure creates a bridge using the bare metal network interface, and drops any remote connections.

All steps in the following procedure must be performed on the server hosting OpenStack Networking, while logged in as the root user.

Configuring OpenStack Networking to Communicate with the Bare Metal Service

  1. Set up the shell to access Identity as the administrative user:

    $ source ~/overcloudrc
  2. Create the flat network over which to provision bare metal instances:

    $ openstack network create \
      --provider-network-type flat \
      --provider-physical-network baremetal \
      --share NETWORK_NAME

     Replace NETWORK_NAME with a name for this network. The name of the physical network over which the virtual network is implemented (in this case, baremetal) was set earlier in ~/templates/network-environment.yaml with the NeutronBridgeMappings parameter.

  3. Create the subnet on the flat network:

    $ openstack subnet create \
      --network NETWORK_NAME \
      --subnet-range NETWORK_CIDR \
      --ip-version 4 \
      --gateway GATEWAY_IP \
      --allocation-pool start=START_IP,end=END_IP \
      --dhcp SUBNET_NAME

    Replace the following values:

    • Replace SUBNET_NAME with a name for the subnet.
    • Replace NETWORK_NAME with the name of the provisioning network you created in the previous step.
    • Replace NETWORK_CIDR with the Classless Inter-Domain Routing (CIDR) representation of the block of IP addresses the subnet represents. The block of IP addresses specified by the range started by START_IP and ended by END_IP must fall within the block of IP addresses specified by NETWORK_CIDR.
    • Replace GATEWAY_IP with the IP address or host name of the router interface that will act as the gateway for the new subnet. This address must be within the block of IP addresses specified by NETWORK_CIDR, but outside of the block of IP addresses specified by the range started by START_IP and ended by END_IP.
    • Replace START_IP with the IP address that denotes the start of the range of IP addresses within the new subnet from which addresses are allocated.
    • Replace END_IP with the IP address that denotes the end of the range of IP addresses within the new subnet from which addresses are allocated.
  4. Create a router so that you can attach the network and subnet to it. This ensures that metadata requests are served by the OpenStack Networking service.

    $ openstack router create ROUTER_NAME

    Replace ROUTER_NAME with a name for the router.

  5. Add the Bare Metal subnet to this router:

    $ openstack router add subnet ROUTER_NAME BAREMETAL_SUBNET

     Replace ROUTER_NAME with the name of your router and BAREMETAL_SUBNET with the ID or name of the subnet that you created previously. This allows the metadata requests from cloud-init to be served and the node to be configured. A worked example with sample values follows this procedure.
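
For example, the following is a minimal sketch of this procedure with sample values. The network name, subnet name, router name, CIDR, gateway, and allocation pool used here are placeholders; adjust them to match your environment:

$ openstack network create \
  --provider-network-type flat \
  --provider-physical-network baremetal \
  --share baremetal-net
$ openstack subnet create \
  --network baremetal-net \
  --subnet-range 192.168.100.0/24 \
  --ip-version 4 \
  --gateway 192.168.100.1 \
  --allocation-pool start=192.168.100.20,end=192.168.100.100 \
  --dhcp baremetal-subnet
$ openstack router create baremetal-router
$ openstack router add subnet baremetal-router baremetal-subnet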

3.7. Configuring Node Cleaning

By default, the Bare Metal service is set to use a network named provisioning for node cleaning. However, network names are not unique in OpenStack Networking, so it is possible for a tenant to create a network with the same name, causing a conflict with the Bare Metal service. Therefore, it is recommended to use the network UUID instead.

  1. Configure cleaning by providing the provider network UUID on the controller running the Bare Metal Service:

    ~/templates/ironic.yaml

    parameter_defaults:
        IronicCleaningNetwork: UUID

    Replace UUID with the UUID of the bare metal network created in the previous steps.

    You can find the UUID using openstack network show:

    $ openstack network show NETWORK_NAME -f value -c id
    Note

    This configuration must be done after the initial overcloud deployment, because the UUID for the network isn’t available beforehand.

  2. Apply the changes by redeploying the overcloud with the openstack overcloud deploy command as described in Section 3.4, “Deploying the Overcloud”.

    Note

    It is possible to avoid redeploying the overcloud by updating the ironic.conf file on each controller. However, manually updating the ironic.conf file of an OpenStack director installation is not supported. These instructions are only provided for convenience.

    1. Uncomment the following line and replace <None> with the UUID of the bare metal network:

      cleaning_network = <None>
    2. Restart the Bare Metal service:

      # systemctl restart openstack-ironic-conductor.service

    Redeploying the overcloud with openstack overcloud deploy will revert any manual changes, so make sure you have added the cleaning configuration to ~/templates/ironic.yaml (described in the previous step) before the next time you use openstack overcloud deploy.

3.7.1. Manual Node Cleaning

To manually initiate node cleaning, the node must be in the manageable state.
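
If the node is currently in the available state, you can move it to the manageable state first. This is a minimal sketch; replace UUID with the UUID of your node:

$ openstack baremetal node manage UUID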

Node cleaning has two modes:

Metadata-only clean - Removes partitions from all disks on a given node. This is a faster clean cycle, but less secure because it erases only partition tables. Use this mode only in trusted tenant environments.

Full clean - Removes all data from all disks, using either ATA secure erase or shredding. This can take several hours to complete.

To initiate a metadata clean:

$ openstack baremetal node clean UUID \
    --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'

To initiate a full clean:

$ openstack baremetal node clean UUID \
    --clean-steps '[{"interface": "deploy", "step": "erase_devices"}]'

Replace UUID with the UUID of the node you would like cleaned.

After a successful cleaning, the node state returns to manageable. If the state is clean failed, check the last_error field for the cause of failure.
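
To check the provision state and, if cleaning failed, the reason, you can use the following sketch:

$ openstack baremetal node show UUID -c provision_state -c last_error

After a successful clean, you can make the node available for deployment again with openstack baremetal node provide UUID.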

3.8. Creating the Bare Metal Flavor

You need to create a flavor to use as a part of the deployment. The specifications (memory, CPU, and disk) of this flavor must be equal to or less than what your bare metal node provides.

  1. Set up the shell to access Identity as the administrative user:

    $ source ~/overcloudrc
  2. List existing flavors:

    $ openstack flavor list
  3. Create a new flavor for the Bare Metal service:

    $ openstack flavor create \
      --id auto --ram RAM \
      --vcpus VCPU --disk DISK \
      --property baremetal=true \
      --public baremetal

     Replace RAM with the amount of memory in MB, VCPU with the number of vCPUs, and DISK with the disk storage value in GB. The property baremetal is used to distinguish bare metal from virtual instances. A concrete example with sample values follows this procedure.

  4. Verify that the new flavor is created with the respective values:

    $ openstack flavor list
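
For example, the following is a minimal sketch that creates a bare metal flavor with sample values (4096 MB of RAM, 4 vCPUs, and a 40 GB disk); replace these values with ones that match your bare metal nodes:

$ openstack flavor create \
  --id auto --ram 4096 \
  --vcpus 4 --disk 40 \
  --property baremetal=true \
  --public baremetal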

3.9. Creating the Bare Metal Images

The deployment requires two sets of images:

  • The deploy image is used by the Bare Metal service to boot the bare metal node and copy a user image onto the bare metal node. The deploy image consists of the kernel image and the ramdisk image.
  • The user image is the image deployed onto the bare metal node. The user image also has a kernel image and ramdisk image, but additionally, the user image contains a main image. The main image is either a root partition, or a whole-disk image.

    • A whole-disk image is an image that contains the partition table and boot loader. The Bare Metal service does not control the subsequent reboot of a node deployed with a whole-disk image, because the node supports local booting.
    • A root partition image only contains the root partition of the operating system. If using a root partition, after the deploy image is loaded into the Image service, you can set the deploy image as the node’s boot image in the node’s properties. A subsequent reboot of the node uses netboot to pull down the user image.

The examples in this section use a root partition image to provision bare metal nodes.

3.9.1. Preparing the Deploy Images

You do not have to create the deploy image, because it was already used when the undercloud deployed the overcloud. The deploy image consists of two images, the kernel image and the ramdisk image, as follows:

ironic-python-agent.kernel
ironic-python-agent.initramfs

These images are usually in the home directory, unless you have deleted them or unpacked them elsewhere. If they are not in the home directory, and you still have the rhosp-director-images-ipa package installed, they are in the /usr/share/rhosp-director-images/ironic-python-agent*.tar file.
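
If you need to extract them from that archive first, the following is a minimal sketch, assuming a single matching tar file:

$ tar -xf /usr/share/rhosp-director-images/ironic-python-agent*.tar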

Extract the images and upload them to the Image service:

$ openstack image create \
  --container-format aki \
  --disk-format aki \
  --public \
  --file ./ironic-python-agent.kernel bm-deploy-kernel
$ openstack image create \
  --container-format ari \
  --disk-format ari \
  --public \
  --file ./ironic-python-agent.initramfs bm-deploy-ramdisk

3.9.2. Preparing the User Image

The final image that you need is the user image that will be deployed on the bare metal node. User images also have a kernel and ramdisk, along with a main image.

  1. Download the Red Hat Enterprise Linux KVM guest image from the Customer Portal (requires login).
  2. Define DIB_LOCAL_IMAGE as the downloaded image:

    $ export DIB_LOCAL_IMAGE=rhel-server-7.4-x86_64-kvm.qcow2
  3. Set your registration information. If you use the Red Hat Customer Portal, you must configure the following information:

    $ export REG_USER='USER_NAME'
    $ export REG_PASSWORD='PASSWORD'
    $ export REG_AUTO_ATTACH=true
    $ export REG_METHOD=portal
    $ export https_proxy='IP_address:port'   # if applicable
    $ export http_proxy='IP_address:port'    # if applicable

    If you use Red Hat Satellite, you must configure the following information:

    $ export REG_USER='USER_NAME'
    $ export REG_PASSWORD='PASSWORD'
    $ export REG_SAT_URL='<SATELLITE URL>'
    $ export REG_ORG='<SATELLITE ORG>'
    $ export REG_ENV='<SATELLITE ENV>'
    $ export REG_METHOD=<METHOD>

    If you have any offline repositories, you can define DIB_YUM_REPO_CONF as local repository configuration:

    $ export DIB_YUM_REPO_CONF=<path-to-local-repository-config-file>
  4. Create the user images using the diskimage-builder tool:

    $ disk-image-create rhel7 baremetal -o rhel-image

     This command creates the main image as rhel-image.qcow2, and extracts the kernel as rhel-image.vmlinuz and the initial ramdisk as rhel-image.initrd.

  5. Upload the images to the Image service:

    $ KERNEL_ID=$(openstack image create \
      --file rhel-image.vmlinuz --public \
      --container-format aki --disk-format aki \
      -f value -c id rhel-image.vmlinuz)
    $ RAMDISK_ID=$(openstack image create \
      --file rhel-image.initrd --public \
      --container-format ari --disk-format ari \
      -f value -c id rhel-image.initrd)
    $ openstack image create \
      --file rhel-image.qcow2   --public \
      --container-format bare \
      --disk-format qcow2 \
      --property kernel_id=$KERNEL_ID \
      --property ramdisk_id=$RAMDISK_ID \
      rhel-image

3.10. Adding Physical Machines as Bare Metal Nodes

There are two methods to enroll a bare metal node:

  1. Prepare an inventory file with the node details, import the file into the Bare Metal service, then make the nodes available.
  2. Register a physical machine as a bare metal node, then manually add its hardware details and create ports for each of its Ethernet MAC addresses. These steps can be performed on any node which has your overcloudrc file.

Both methods are detailed in this section.

After you enroll the physical machines, Compute is not immediately notified of the new resources, because Compute's resource tracker synchronizes periodically. Changes become visible after the next periodic task runs. The period is controlled by the scheduler_driver_task_period option in /etc/nova/nova.conf; the default is 60 seconds.
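
For example, to change the synchronization period, edit the option in /etc/nova/nova.conf (a sketch; 120 seconds is an arbitrary value):

/etc/nova/nova.conf

[DEFAULT]
scheduler_driver_task_period = 120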

3.10.1. Enrolling a Bare Metal Node with an Inventory File

  1. Create a file overcloud-nodes.yaml that includes the node details. You can enroll multiple nodes with one file.

    nodes:
        - name: node0
          driver: pxe_ipmitool
          driver_info:
            ipmi_address: <IPMI_IP>
            ipmi_username: <USER>
            ipmi_password: <PASSWORD>
          properties:
            cpus: <CPU_COUNT>
            cpu_arch: <CPU_ARCHITECTURE>
            memory_mb: <MEMORY>
            local_gb: <ROOT_DISK>
            root_device:
                serial: <SERIAL>
          ports:
            - address: <PXE_NIC_MAC>

    Replace the following values:

    • <IPMI_IP> with the address of the Bare Metal controller.
    • <USER> with your username.
    • <PASSWORD> with your password.
    • <CPU_COUNT> with the number of CPUs.
    • <CPU_ARCHITECTURE> with the type of architecture of the CPUs.
    • <MEMORY> with the amount of memory in MiB.
    • <ROOT_DISK> with the size of the root disk in GiB.
    • <PXE_NIC_MAC> with the MAC address of the NIC used to PXE boot.

      You only need to include root_device if the machine has multiple disks. Replace <SERIAL> with the serial number of the disk you would like used for deployment.

  2. Set up the shell to use Identity as the administrative user:

    $ source ~/overcloudrc
  3. Import the inventory file into ironic:

    $ openstack baremetal create overcloud-nodes.yaml
  4. The nodes are now in the enroll state. Make them available by specifying the deploy kernel and deploy ramdisk on each node:

    $ openstack baremetal node set NODE_UUID \
      --driver-info deploy_kernel=KERNEL_UUID \
      --driver-info deploy_ramdisk=INITRAMFS_UUID

    Replace the following values:

    • Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
    • Replace KERNEL_UUID with the unique identifier for the kernel deploy image that was uploaded to the Image service. Find this value with:

      $ openstack image show bm-deploy-kernel -f value -c id
    • Replace INITRAMFS_UUID with the unique identifier for the ramdisk image that was uploaded to the Image service. Find this value with:

      $ openstack image show bm-deploy-ramdisk -f value -c id
  5. Check that the nodes were successfully enrolled:

    $ openstack baremetal node list

    There may be a delay between enrolling a node and its state being shown.

3.10.2. Enrolling a Bare Metal Node Manually

  1. Set up the shell to use Identity as the administrative user:

    $ source ~/overcloudrc
  2. Add a new node:

    $ openstack baremetal node create --driver pxe_ipmitool --name NAME

     To create a node, you must specify the driver name. This example uses pxe_ipmitool. To use a different driver, you must enable it by setting the IronicEnabledDrivers parameter. For more information about supported drivers, see Appendix A, Bare Metal Drivers.

    Important

    Note the unique identifier for the node.

  3. Update the node driver information to allow the Bare Metal service to manage the node:

    $ openstack baremetal node set NODE_UUID \
      --driver-info PROPERTY=VALUE \
      --driver-info PROPERTY=VALUE

    Replace the following values:

    • Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
    • Replace PROPERTY with a required property returned by the ironic driver-properties command.
    • Replace VALUE with a valid value for that property. A concrete example for the pxe_ipmitool driver follows this procedure.
  4. Specify the deploy kernel and deploy ramdisk for the node driver:

    $ openstack baremetal node set NODE_UUID \
      --driver-info deploy_kernel=KERNEL_UUID \
      --driver-info deploy_ramdisk=INITRAMFS_UUID

    Replace the following values:

    • Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
    • Replace KERNEL_UUID with the unique identifier for the .kernel image that was uploaded to the Image service.
    • Replace INITRAMFS_UUID with the unique identifier for the .initramfs image that was uploaded to the Image service.
  5. Update the node’s properties to match the hardware specifications on the node:

    $ openstack baremetal node set NODE_UUID \
      --property cpus=CPU \
      --property memory_mb=RAM_MB \
      --property local_gb=DISK_GB \
      --property cpu_arch=ARCH

    Replace the following values:

    • Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
    • Replace CPU with the number of CPUs.
    • Replace RAM_MB with the RAM (in MB).
    • Replace DISK_GB with the disk size (in GB).
    • Replace ARCH with the architecture type.
  6. OPTIONAL: Configure the node to reboot after initial deployment from a local boot loader installed on the node’s disk, instead of using PXE from ironic-conductor. The local boot capability must also be set on the flavor used to provision the node. To enable local boot, the image used to deploy the node must contain grub2. Configure local boot:

    $ openstack baremetal node set NODE_UUID \
      --property capabilities="boot_option:local"

    Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.

  7. Inform the Bare Metal service of the node’s network card by creating a port with the MAC address of the NIC on the provisioning network:

    $ openstack baremetal port create --node NODE_UUID MAC_ADDRESS

    Replace NODE_UUID with the unique identifier for the node. Replace MAC_ADDRESS with the MAC address of the NIC used to PXE boot.

  8. If you have multiple disks, set the root device hints. This informs the deploy ramdisk which disk it should use for deployment.

    $ openstack baremetal node set NODE_UUID \
      --property root_device='{"PROPERTY": "VALUE"}'

     Replace the following values:

    • Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name.
    • Replace PROPERTY and VALUE with details about the disk that you want to use for deployment, for example: root_device='{"size": 128}'.

      The following properties are supported:

      • model (String): Device identifier.
      • vendor (String): Device vendor.
      • serial (String): Disk serial number.
      • hctl (String): Host:Channel:Target:Lun for SCSI.
      • size (Integer): Size of the device in GB.
      • wwn (String): Unique storage identifier.
      • wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
      • wwn_vendor_extension (String): Unique vendor storage identifier.
      • rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
      • name (String): The name of the device, for example: /dev/sdb1. Only use this for devices with persistent names.

        Note

        If you specify more than one property, the device must match all of those properties.

  9. Validate the node’s setup:

    $ openstack baremetal node validate NODE_UUID
    +------------+--------+---------------------------------------------+
    | Interface  | Result | Reason                                      |
    +------------+--------+---------------------------------------------+
    | boot       | False  | Cannot validate image information for node  |
    |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
    |            |        | because one or more parameters are missing  |
    |            |        | from its instance_info. Missing are:        |
    |            |        | ['ramdisk', 'kernel', 'image_source']       |
    | console    | None   | not supported                               |
    | deploy     | False  | Cannot validate image information for node  |
    |            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
    |            |        | because one or more parameters are missing  |
    |            |        | from its instance_info. Missing are:        |
    |            |        | ['ramdisk', 'kernel', 'image_source']       |
    | inspect    | None   | not supported                               |
    | management | True   |                                             |
    | network    | True   |                                             |
    | power      | True   |                                             |
    | raid       | True   |                                             |
    | storage    | True   |                                             |
    +------------+--------+---------------------------------------------+

    Replace NODE_UUID with the unique identifier for the node. Alternatively, use the node’s logical name. The output of the command above should report either True or None for each interface. Interfaces marked None are those that you have not configured, or those that are not supported for your driver.

    Note

    Interfaces may fail validation due to missing 'ramdisk', 'kernel', and 'image_source' parameters. This result is fine, because the Compute service populates those missing parameters at the beginning of the deployment process.

3.11. Using Host Aggregates to Separate Physical and Virtual Machine Provisioning

OpenStack Compute uses host aggregates to partition availability zones and to group nodes that have specific shared properties. When an instance is provisioned, the Compute scheduler compares the properties on the flavor with the properties assigned to host aggregates, and ensures that the instance is provisioned in the correct aggregate and on the correct host: either on a physical machine or as a virtual machine.

The procedure below describes how to do the following:

  • Add the property baremetal to your flavors, setting it to either true or false.
  • Create separate host aggregates for bare metal hosts and compute nodes with a matching baremetal property. Nodes grouped into an aggregate inherit this property.

Creating a Host Aggregate

  1. Set the baremetal property to true on the baremetal flavor.

    $ openstack flavor set baremetal --property baremetal=true
  2. Set the baremetal property to false on the flavors used for virtual instances.

    $ openstack flavor set FLAVOR_NAME --property baremetal=false
  3. Create a host aggregate called baremetal-hosts:

    $ openstack aggregate create --property baremetal=true baremetal-hosts
  4. Add each controller node to the baremetal-hosts aggregate:

    $ openstack aggregate add host baremetal-hosts HOSTNAME
    Note

    If you have created a composable role with the NovaIronic service, add all the nodes with this service to the baremetal-hosts aggregate. By default, only the controller nodes have the NovaIronic service.

  5. Create a host aggregate called virtual-hosts:

    $ openstack aggregate create --property baremetal=false virtual-hosts
  6. Add each compute node to the virtual-hosts aggregate:

    $ openstack aggregate add host virtual-hosts HOSTNAME
  7. If you did not add the following Compute scheduler filter when deploying the overcloud, add it now to the existing list under scheduler_default_filters in /etc/nova/nova.conf, as shown in the example after this procedure:

    AggregateInstanceExtraSpecsFilter
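
For example, based on the filter list in Section 3.3, “Example Templates”, the resulting line in /etc/nova/nova.conf might look like the following sketch:

scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateInstanceExtraSpecsFilter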