Chapter 7. Configuring instance scheduling and placement
The Compute scheduler service determines the appropriate Compute node or host aggregate for placing an instance. It analyzes the instance specifications, including the flavor and image, to find a suitable host for launching or moving the instance.
The Compute scheduler service uses the configuration of the following components, in the following order, to determine on which Compute node to launch or move an instance:
- Placement service prefilters: The Compute scheduler service uses the Placement service to filter the set of candidate Compute nodes based on specific attributes. For example, the Placement service automatically excludes disabled Compute nodes.
- Filters: Used by the Compute scheduler service to determine the initial set of Compute nodes on which to launch an instance.
- Weights: The Compute scheduler service prioritizes the filtered Compute nodes using a weighting system. The highest weight has the highest priority.
In the following diagram, hosts 1 and 3 are eligible after filtering. Host 1 has the highest weight and therefore has the highest priority for scheduling.
Figure 7.1. Example of Compute scheduler service determining hosts for scheduling
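The filter-then-weigh flow shown in the figure can be sketched in a few lines of Python. This is a simplified illustration of the concept, not Nova's actual implementation; the host data, filter, and multiplier values are invented for the example:

```python
# Simplified sketch of the scheduler's filter-then-weigh flow.
# Hosts, filter logic, and multipliers are illustrative only.

hosts = {
    "host1": {"enabled": True, "free_ram_mb": 8192},
    "host2": {"enabled": False, "free_ram_mb": 16384},
    "host3": {"enabled": True, "free_ram_mb": 2048},
}

def compute_filter(info):
    # Analogous to ComputeFilter: pass only operational, enabled hosts.
    return info["enabled"]

def normalize(values):
    # Scale each value into [0.0, 1.0] across the candidate set.
    lo, hi = min(values), max(values)
    if lo == hi:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

# 1. Filtering: keep only hosts that pass every filter.
candidates = [h for h, info in hosts.items() if compute_filter(info)]

# 2. Weighing: normalize each metric, apply a multiplier, and sum.
ram_weight_multiplier = 1.0
ram_norm = normalize([hosts[h]["free_ram_mb"] for h in candidates])
weights = {h: ram_weight_multiplier * w for h, w in zip(candidates, ram_norm)}

# 3. The candidate with the highest weight has the highest priority.
best = max(weights, key=weights.get)
print(best)  # host1: most free RAM among the enabled hosts
```

Host 2 is excluded during filtering because it is disabled, mirroring how the Placement service prefilters automatically exclude disabled Compute nodes.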
7.1. Prefiltering using the Placement service
The Placement service tracks the inventory and usage of the quantitative resources, such as vCPUs, and the qualitative resources, such as traits, of resource providers. The Compute service interacts with the Placement service to efficiently select and consume these resources when creating and managing instances.
The Placement service also tracks the mapping of available qualitative resources to resource providers, such as the type of storage disk trait a resource provider has.
The Placement service applies prefilters to the set of candidate Compute nodes based on Placement service resource provider inventories and traits. You can create prefilters based on the following criteria:
- Supported image types
- Traits
- Projects or tenants
- Availability zone
7.1.1. Filtering by requested image type support
Filter out Compute nodes that cannot support the disk format of the image used to launch an instance. This ensures that the scheduler does not send launch requests for incompatible images to hosts, such as sending launch requests with QCOW2 images to hosts that use Ceph Storage back ends.
Procedure
1. Open your `OpenStackControlPlane` custom resource (CR) file, `openstack_control_plane.yaml`, on your workstation.
2. Add the `customServiceConfig` parameter to the Compute scheduler (`nova-scheduler`) template, `schedulerServiceTemplate`, to configure the Compute scheduler service to filter by requested image type support:

   ```yaml
   apiVersion: core.openstack.org/v1beta1
   kind: OpenStackControlPlane
   spec:
     extraMounts:
       ...
     nova:
       template:
         schedulerServiceTemplate:
           customServiceConfig: |
             [scheduler]
             query_placement_for_image_type_support = true
   ```

3. Update the control plane:

   ```console
   $ oc apply -f openstack_control_plane.yaml -n openstack
   ```

4. Wait until RHOCP creates the resources related to the `OpenStackControlPlane` CR. Run the following command to check the status:

   ```console
   $ oc get openstackcontrolplane -n openstack
   ```

   The `OpenStackControlPlane` resources are created when the status is "Setup complete".

   Tip: Append the `-w` option to the end of the `get` command to track deployment progress.

5. Optional: Confirm that the control plane is deployed by reviewing the pods in the `openstack` namespace for each of the cells you created. The control plane is deployed when all the pods are either completed or running.
7.1.2. Filtering by resource provider traits
Resource provider traits define qualitative aspects of a host, such as storage disk type or CPU extensions. The Compute scheduler uses these traits, which instances can require or forbid, to identify a suitable Compute node or host aggregate for placement.
To enable your cloud users to create instances on hosts that have particular traits, you can define a flavor that requires or forbids a particular trait, and you can create an image that requires or forbids a particular trait.
For a list of the available traits, see the os-traits library. You can also create custom traits, as required.
7.1.2.1. Creating an image that requires or forbids a resource provider trait
Create an instance image that your cloud users can use to launch instances on hosts that have particular traits.
Prerequisites
- You installed the `oc` and `podman` command line tools on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure
1. Access the remote shell for the `OpenStackClient` pod from your workstation:

   ```console
   $ oc rsh -n openstack openstackclient
   ```

2. Change to the cloud-admin home directory:

   ```console
   $ cd /home/cloud-admin
   ```

3. Create a new image:

   ```console
   $ openstack image create ... trait-image
   ```

4. Identify the trait you require a host or host aggregate to have. You can select an existing trait, or create a new trait:
   - To use an existing trait, list the existing traits to retrieve the trait name:

     ```console
     $ openstack --os-placement-api-version 1.6 trait list
     ```

   - To create a new trait, enter the following command:

     ```console
     $ openstack --os-placement-api-version 1.6 trait \
       create CUSTOM_TRAIT_NAME
     ```

     Custom traits must begin with the prefix `CUSTOM_` and contain only the letters A through Z, the numbers 0 through 9, and the underscore "_" character.

5. Collect the existing resource provider traits of each host:

   ```console
   $ existing_traits=$(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')
   ```

6. Check the existing resource provider traits for the traits you require a host or host aggregate to have:

   ```console
   $ echo $existing_traits
   ```

7. If the traits you require are not already added to the resource provider, add the existing traits and your required traits to the resource providers for each host:

   ```console
   $ openstack --os-placement-api-version 1.6 \
     resource provider trait set $existing_traits \
     --trait <TRAIT_NAME> \
     <host_uuid>
   ```

   Replace `<TRAIT_NAME>` with the name of the trait that you want to add to the resource provider. You can use the `--trait` option more than once to add additional traits, as required.

   Note: This command performs a full replacement of the traits for the resource provider. Therefore, you must retrieve the list of existing resource provider traits on the host and set them again to prevent them from being removed.

8. To schedule instances on a host or host aggregate that has a required trait, add the trait to the image extra specs. For example, to schedule instances on a host or host aggregate that supports AVX-512, add the following trait to the image extra specs:

   ```console
   $ openstack image set \
     --property trait:HW_CPU_X86_AVX512BW=required \
     trait-image
   ```

9. To filter out hosts or host aggregates that have a forbidden trait, add the trait to the image extra specs. For example, to prevent instances from being scheduled on a host or host aggregate that supports multi-attach volumes, add the following trait to the image extra specs:

   ```console
   $ openstack image set \
     --property trait:COMPUTE_VOLUME_MULTI_ATTACH=forbidden \
     trait-image
   ```

10. Exit the `openstackclient` pod:

    ```console
    $ exit
    ```
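The custom trait naming rule stated in the procedure (the `CUSTOM_` prefix, and only the letters A through Z, the digits 0 through 9, and underscores) can be expressed as a regular expression. The following Python snippet is an illustrative validator for that rule, not part of any OpenStack tooling:

```python
import re

# Illustrative check of the custom trait naming rule: names must start
# with CUSTOM_ and contain only A-Z, 0-9, and underscore characters.
TRAIT_RE = re.compile(r"^CUSTOM_[A-Z0-9_]+$")

def is_valid_custom_trait(name: str) -> bool:
    return TRAIT_RE.fullmatch(name) is not None

print(is_valid_custom_trait("CUSTOM_FIPS_COMPLIANT"))  # True
print(is_valid_custom_trait("custom_lowercase"))       # False: lowercase letters not allowed
print(is_valid_custom_trait("CUSTOM_GPU-V100"))        # False: hyphens not allowed
```

Checking a name locally before running `openstack trait create` avoids a round trip to the Placement API for a name that would be rejected.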
7.1.2.2. Creating a flavor that requires or forbids a resource provider trait
Create flavors that your cloud users can use to launch instances on hosts that have particular traits.
Prerequisites
- You installed the `oc` and `podman` command line tools on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure
1. Access the remote shell for the `OpenStackClient` pod from your workstation:

   ```console
   $ oc rsh -n openstack openstackclient
   ```

2. Change to the cloud-admin home directory:

   ```console
   $ cd /home/cloud-admin
   ```

3. Create a flavor:

   ```console
   $ openstack flavor create --vcpus 1 --ram 512 \
     --disk 2 trait-flavor
   ```

4. Identify the trait you require a host or host aggregate to have. You can select an existing trait, or create a new trait:
   - To use an existing trait, list the existing traits to retrieve the trait name:

     ```console
     $ openstack --os-placement-api-version 1.6 trait list
     ```

   - To create a new trait, enter the following command:

     ```console
     $ openstack --os-placement-api-version 1.6 trait \
       create CUSTOM_TRAIT_NAME
     ```

     Custom traits must begin with the prefix `CUSTOM_` and contain only the letters A through Z, the numbers 0 through 9, and the underscore "_" character.

5. Collect the existing resource provider traits of each host:

   ```console
   $ existing_traits=$(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')
   ```

6. Check the existing resource provider traits for the traits you require a host or host aggregate to have:

   ```console
   $ echo $existing_traits
   ```

7. If the traits you require are not already added to the resource provider, add the existing traits and your required traits to the resource providers for each host:

   ```console
   $ openstack --os-placement-api-version 1.6 \
     resource provider trait set $existing_traits \
     --trait <TRAIT_NAME> \
     <host_uuid>
   ```

   Replace `<TRAIT_NAME>` with the name of the trait that you want to add to the resource provider. You can use the `--trait` option more than once to add additional traits, as required.

   Note: This command performs a full replacement of the traits for the resource provider. Therefore, you must retrieve the list of existing resource provider traits on the host and set them again to prevent them from being removed.

8. To schedule instances on a host or host aggregate that has a required trait, add the trait to the flavor extra specs. For example, to schedule instances on a host or host aggregate that supports AVX-512, add the following trait to the flavor extra specs:

   ```console
   $ openstack flavor set \
     --property trait:HW_CPU_X86_AVX512BW=required \
     trait-flavor
   ```

9. To filter out hosts or host aggregates that have a forbidden trait, add the trait to the flavor extra specs. For example, to prevent instances from being scheduled on a host or host aggregate that supports multi-attach volumes, add the following trait to the flavor extra specs:

   ```console
   $ openstack flavor set \
     --property trait:COMPUTE_VOLUME_MULTI_ATTACH=forbidden \
     trait-flavor
   ```

10. Exit the `openstackclient` pod:

    ```console
    $ exit
    ```
7.1.3. Filtering by isolating host aggregates
Restrict scheduling on a host aggregate to only those instances whose flavor and image traits match the metadata of the host aggregate. The combination of flavor and image metadata must require all the host aggregate traits to be eligible for scheduling on Compute nodes in that host aggregate.
Prerequisites
- You installed the `oc` and `podman` command line tools on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure
1. Access the remote shell for the `OpenStackClient` pod from your workstation:

   ```console
   $ oc rsh -n openstack openstackclient
   ```

2. Identify the traits you want to isolate the host aggregate for. You can select an existing trait, or create a new trait:
   - To use an existing trait, list the existing traits to retrieve the trait name:

     ```console
     $ openstack --os-placement-api-version 1.6 trait list
     ```

   - To create a new trait, enter the following command:

     ```console
     $ openstack --os-placement-api-version 1.6 trait \
       create CUSTOM_TRAIT_NAME
     ```

     Note: Custom traits must begin with the prefix `CUSTOM_` and contain only the letters A through Z, the numbers 0 through 9, and the underscore "_" character.

3. Collect the existing resource provider traits of each Compute node:

   ```console
   $ existing_traits=$(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')
   ```

4. Check the existing resource provider traits for the traits you want to isolate the host aggregate for:

   ```console
   $ echo $existing_traits
   ```

5. If the traits you require are not already added to the resource provider, add the existing traits and your required traits to the resource providers for each Compute node in the host aggregate:

   ```console
   $ openstack --os-placement-api-version 1.6 \
     resource provider trait set $existing_traits \
     --trait <trait_name> \
     <host_uuid>
   ```

   - Replace `<trait_name>` with the name of the trait that you want to add to the resource provider. You can use the `--trait` option more than once to add additional traits, as required.
   - Replace `<host_uuid>` with the UUID of the host.

   Note: This command performs a full replacement of the traits for the resource provider. Therefore, you must retrieve the list of existing resource provider traits on the host and set them again to prevent them from being removed.

6. Repeat steps 3 to 5 for each Compute node in the host aggregate.
7. Add the metadata property for the trait to the host aggregate:

   ```console
   $ openstack --os-compute-api-version 2.53 aggregate set \
     --property trait:<trait_name>=required <aggregate_name>
   ```

8. Add the trait to a flavor or an image:

   ```console
   $ openstack flavor set \
     --property trait:<trait_name>=required <flavor>
   $ openstack image set \
     --property trait:<trait_name>=required <image>
   ```

9. Exit the `OpenStackClient` pod:

   ```console
   $ exit
   ```
7.2. Configuring filters and weights for the Compute scheduler service
Configure the filters and weights for the Compute scheduler service to determine the Compute node on which to launch an instance.
Procedure
1. On your workstation, open your `OpenStackControlPlane` custom resource (CR) file, `openstack_control_plane.yaml`.
2. Add the filters that you want the scheduler to use to the `[filter_scheduler] enabled_filters` parameter, for example:

   ```yaml
   spec:
     nova:
       template:
         schedulerServiceTemplate:
           customServiceConfig: |
             [filter_scheduler]
             enabled_filters = AggregateInstanceExtraSpecsFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter
   ```

3. Specify which attribute to use to calculate the weight of each Compute node, for example:

   ```yaml
   spec:
     nova:
       template:
         schedulerServiceTemplate:
           customServiceConfig: |
             [filter_scheduler]
             weight_classes = nova.scheduler.weights.all_weighers
   ```

   For more information on the available attributes, see Compute scheduler weights.

4. Optional: Configure the multiplier to apply to each weigher. For example, to specify that the available RAM of a Compute node has a higher weight than the other default weighers, and that the Compute scheduler prefers Compute nodes with more available RAM over those nodes with less available RAM, use the following configuration:

   ```yaml
   spec:
     nova:
       template:
         schedulerServiceTemplate:
           customServiceConfig: |
             [filter_scheduler]
             weight_classes = nova.scheduler.weights.all_weighers
             ram_weight_multiplier = 2.0
   ```

   Tip: You can also set multipliers to a negative value. In the above example, to prefer Compute nodes with less available RAM over those nodes with more available RAM, set `ram_weight_multiplier` to `-2.0`.

5. Update the control plane:

   ```console
   $ oc apply -f openstack_control_plane.yaml -n openstack
   ```

6. After RHOCP creates the resources related to the `OpenStackControlPlane` CR, run the following command to check the status:

   ```console
   $ oc get openstackcontrolplane -n openstack
   ```

   The `OpenStackControlPlane` resources are created when the status is "Setup complete".

   Tip: Append the `-w` option to the end of the `get` command to track deployment progress.

7. Optional: Confirm that the control plane is deployed by reviewing the pods in the `openstack` namespace for each of the cells that you created:

   ```console
   $ oc get pods -n openstack
   ```

   The control plane is deployed when all the pods are either completed or running.
7.3. Compute scheduler filters
Compute scheduler filters define the rules the scheduler applies to select an appropriate Compute node to host an instance. Filters pass or exclude hosts based on factors like instance requirements, host capabilities, and resource affinity/anti-affinity needs.
The default configuration applies the following filters:
- `ComputeFilter`: The Compute node can service the request.
- `ComputeCapabilitiesFilter`: The Compute node satisfies the flavor extra specs.
- `ImagePropertiesFilter`: The Compute node satisfies the requested image properties.
- `ServerGroupAntiAffinityFilter`: The Compute node is not already hosting an instance in a specified group.
- `ServerGroupAffinityFilter`: The Compute node is already hosting instances in a specified group.
- `SameHostFilter`: The scheduler can place an instance on the same Compute node as a set of specific instances.
- `DifferentHostFilter`: The scheduler can place an instance on a different Compute node from a set of specific instances.
- `PciPassthroughFilter`: The scheduler can place instances on Compute nodes that have the devices that the instance requests by using the flavor `extra_specs`.
- `NUMATopologyFilter`: The scheduler can place instances with a NUMA topology on NUMA-capable Compute nodes.
You can add and remove filters. The following list describes all the available filters. The filter names in this list are reconstructed from the standard Compute scheduler filter set; the descriptions are as documented.

- `AggregateImagePropertiesIsolation`: Use this filter to match the image metadata of an instance with host aggregate metadata. If any of the host aggregate metadata matches the metadata of the image, then the Compute nodes that belong to that host aggregate are candidates for launching instances from that image. The scheduler only recognizes valid image metadata properties.

- `AggregateInstanceExtraSpecsFilter`: Use this filter to match namespaced properties defined in the flavor extra specs of an instance with host aggregate metadata. You must scope your flavor extra specs by prefixing them with the `aggregate_instance_extra_specs:` namespace. If any of the host aggregate metadata matches the metadata of the flavor extra spec, then the Compute nodes that belong to that host aggregate are candidates for launching instances from that image.

- `AggregateIoOpsFilter`: Use this filter to filter hosts by I/O operations with a per-aggregate `max_io_ops_per_host` value.

- `AggregateMultiTenancyIsolation`: Use this filter to limit the availability of Compute nodes in project-isolated host aggregates to a specified set of projects. Only projects specified by using the `filter_tenant_id` metadata key can launch instances on Compute nodes in the host aggregate. Note: The project can still place instances on other hosts. To restrict this, use the …

- `AggregateNumInstancesFilter`: Use this filter to limit the number of instances each Compute node in an aggregate can host. You can configure the maximum number of instances per aggregate by using the `max_instances_per_host` metadata key.

- `AggregateTypeAffinityFilter`: Use this filter to pass hosts if no flavor metadata key is set, or if the flavor aggregate metadata value contains the name of the requested flavor. The value of the flavor metadata entry is a string that may contain either a single flavor name or a comma-separated list of flavor names.

- `AllHostsFilter`: Use this filter to consider all available Compute nodes for instance scheduling. Note: Using this filter does not disable other filters.

- `AvailabilityZoneFilter`: Use this filter to launch instances on a Compute node in the availability zone specified by the instance.

- `ComputeCapabilitiesFilter`: Use this filter to match namespaced properties defined in the flavor extra specs of an instance against the Compute node capabilities. You must prefix the flavor extra specs with the `capabilities:` namespace. A more efficient alternative to using the `ComputeCapabilitiesFilter` filter is to use CPU traits in your flavors, which are reported to the Placement service.

- `ComputeFilter`: Use this filter to pass all Compute nodes that are operational and enabled. This filter should always be present.

- `DifferentHostFilter`: Use this filter to enable scheduling of an instance on a different Compute node from a set of specific instances. To specify these instances when launching an instance, use the `--hint` argument with `different_host` as the key and the UUID of each instance as the value.

- `ImagePropertiesFilter`: Use this filter to filter Compute nodes based on the following properties defined on the instance image: `hw_architecture`, `img_hv_type`, `img_hv_requested_version`, and `hw_vm_mode`. Compute nodes that can support the specified image properties contained in the instance are passed to the scheduler.

- `IsolatedHostsFilter`: Use this filter to only schedule instances with isolated images on isolated Compute nodes. You can also prevent non-isolated images from being used to build instances on isolated Compute nodes by configuring `restrict_isolated_hosts_to_isolated_images`. To specify the isolated set of images and hosts, use the `isolated_hosts` and `isolated_images` configuration options in the `[filter_scheduler]` section.

- `IoOpsFilter`: Use this filter to filter out hosts that have concurrent I/O operations that exceed the configured `max_io_ops_per_host`, which specifies the maximum number of I/O intensive instances allowed to run on the host.

- `MetricsFilter`: Use this filter to limit scheduling to Compute nodes that report the metrics configured by using `[metrics] weight_setting`. To use this filter, enable the relevant compute monitors in your Compute configuration, for example `[DEFAULT] compute_monitors = cpu.virt_driver`. By default, the Compute scheduler service updates the metrics every 60 seconds.

- `NUMATopologyFilter`: Use this filter to schedule instances with a NUMA topology on NUMA-capable Compute nodes. Use flavor `extra_specs` in combination with image properties to specify the NUMA topology for an instance. The filter tries to match the instance NUMA topology to the Compute node topology, taking into consideration the over-subscription limits for each host NUMA cell.

- `NumInstancesFilter`: Use this filter to filter out Compute nodes that have more instances running than specified by the `max_instances_per_host` option.

- `PciPassthroughFilter`: Use this filter to schedule instances on Compute nodes that have the devices that the instance requests by using the flavor `extra_specs`. Use this filter if you want to reserve nodes with PCI devices, which are typically expensive and limited, for instances that request them.

- `SameHostFilter`: Use this filter to enable scheduling of an instance on the same Compute node as a set of specific instances. To specify these instances when launching an instance, use the `--hint` argument with `same_host` as the key and the UUID of each instance as the value.

- `ServerGroupAffinityFilter`: Use this filter to schedule instances in an affinity server group on the same Compute node. To create the server group, enter the command `openstack server group create --policy affinity <group_name>`. To launch an instance in this group, use the `--hint` argument with `group` as the key and the group UUID as the value.

- `ServerGroupAntiAffinityFilter`: Use this filter to schedule instances that belong to an anti-affinity server group on different Compute nodes. To create the server group, enter the command `openstack server group create --policy anti-affinity <group_name>`. To launch an instance in this group, use the `--hint` argument with `group` as the key and the group UUID as the value.

- `SimpleCIDRAffinityFilter`: Use this filter to schedule instances on Compute nodes that have a specific IP subnet range. To specify the required range, use the `--hint` argument to pass the keys `build_near_host_ip` and `cidr` when launching an instance.
7.4. Compute scheduler weights
The Compute scheduler uses weights to prioritize instance scheduling on available hosts after applying filters. It calculates a weight for each candidate node and selects the one with the highest calculated value.
The Compute scheduler determines the weight of each Compute node by performing the following tasks:
- The scheduler normalizes each weight to a value between 0.0 and 1.0.
- The scheduler multiplies the normalized weight by the weigher multiplier.
The Compute scheduler calculates the weight normalization for each resource type by using the lower and upper values for the resource availability across the candidate Compute nodes:
- Nodes with the lowest availability of a resource (minval) are assigned '0'.
- Nodes with the highest availability of a resource (maxval) are assigned '1'.
Nodes with resource availability within the minval - maxval range are assigned a normalized weight calculated by using the following formula:
(node_resource_availability - minval) / (maxval - minval)
If all the Compute nodes have the same availability for a resource then they are all normalized to 0.
For example, the scheduler calculates the normalized weights for available vCPUs across 10 Compute nodes, each with a different number of available vCPUs, as follows:
| Compute node | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Number of vCPUs | 5 | 5 | 10 | 10 | 15 | 20 | 20 | 15 | 10 | 5 |
| Normalized weight | 0 | 0 | 0.33 | 0.33 | 0.67 | 1 | 1 | 0.67 | 0.33 | 0 |
The Compute scheduler uses the following formula to calculate the weight of a Compute node:
(w1_multiplier * norm(w1)) + (w2_multiplier * norm(w2)) + ...
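The normalization and combination formulas above can be checked with a few lines of Python. The vCPU values are taken from the example table; the multipliers and normalized weights in the combined-weight calculation are arbitrary illustrative values:

```python
# Reproduce the normalized-weight calculation from the vCPU example table.
vcpus = [5, 5, 10, 10, 15, 20, 20, 15, 10, 5]

minval, maxval = min(vcpus), max(vcpus)

def norm(value):
    # (node_resource_availability - minval) / (maxval - minval)
    if minval == maxval:
        # All nodes have the same availability: all normalize to 0.
        return 0.0
    return (value - minval) / (maxval - minval)

normalized = [round(norm(v), 2) for v in vcpus]
print(normalized)  # [0.0, 0.0, 0.33, 0.33, 0.67, 1.0, 1.0, 0.67, 0.33, 0.0]

# Combined weight for one node: sum of multiplier * normalized weight
# per weigher, per the formula above. Example values only:
w1_multiplier, w2_multiplier = 1.0, 2.0
norm_w1, norm_w2 = 0.5, 0.33
weight = (w1_multiplier * norm_w1) + (w2_multiplier * norm_w2)
print(round(weight, 2))  # 1.16
```

The output of the first print matches the "Normalized weight" row of the table, and the `minval == maxval` branch covers the case where every node has the same availability.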
The following list describes the available configuration options for weights. The option names are reconstructed from the standard Compute scheduler weigher options; the descriptions are as documented.

You can set weights on host aggregates by using the aggregate metadata key with the same name as the options detailed in the following list. If set on the host aggregate, the host aggregate value takes precedence.

- `[filter_scheduler] image_props_weight_multiplier` (floating point): Use this parameter to specify the multiplier to use to weigh hosts based on the image properties of the instances already running on them. To activate and tune the weigher, set this option to a non-zero value.

- `[filter_scheduler] image_props_weight_setting` (string): Use this parameter to specify the image properties to use for weighting, and the ratio to use to calculate the weight of each property. You must set two configuration options to activate and tune the weigher: set `image_props_weight_multiplier` to a non-zero value, and define the properties to weigh in this option. Define properties as a comma-separated list of `property=ratio` pairs.

- `[filter_scheduler] weight_classes` (string): Use this parameter to configure which of the following attributes to use for calculating the weight of each Compute node:
  - `nova.scheduler.weights.ram.RAMWeigher`: Weighs the available RAM on the Compute node.
  - `nova.scheduler.weights.cpu.CPUWeigher`: Weighs the available vCPUs on the Compute node.
  - `nova.scheduler.weights.disk.DiskWeigher`: Weighs the available disk space on the Compute node.
  - `nova.scheduler.weights.metrics.MetricsWeigher`: Weighs the metrics configured with `[metrics] weight_setting`.
  - `nova.scheduler.weights.affinity.ServerGroupSoftAffinityWeigher`: Weighs hosts for group soft-affinity.
  - `nova.scheduler.weights.affinity.ServerGroupSoftAntiAffinityWeigher`: Weighs hosts for group soft-anti-affinity.
  - `nova.scheduler.weights.compute.BuildFailureWeigher`: Weighs hosts by the number of recent failed boot attempts.
  - `nova.scheduler.weights.io_ops.IoOpsWeigher`: Weighs hosts by their workload.
  - `nova.scheduler.weights.pci.PCIWeigher`: Weighs hosts by their PCI availability.
  - `nova.scheduler.weights.cross_cell.CrossCellWeigher`: Weighs hosts based on which cell they are in, relative to the instance being moved.
  - `nova.scheduler.weights.all_weighers`: Uses all the weighers. (Default)

- `[filter_scheduler] ram_weight_multiplier` (floating point): Use this parameter to specify the multiplier to use to weigh hosts based on the available RAM. Set to a positive value to prefer hosts with more available RAM, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available RAM, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers. Default: 1.0 - The scheduler spreads instances across all hosts evenly.

- `[filter_scheduler] disk_weight_multiplier` (floating point): Use this parameter to specify the multiplier to use to weigh hosts based on the available disk space. Set to a positive value to prefer hosts with more available disk space, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available disk space, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the disk weigher is relative to other weighers. Default: 1.0 - The scheduler spreads instances across all hosts evenly.

- `[filter_scheduler] cpu_weight_multiplier` (floating point): Use this parameter to specify the multiplier to use to weigh hosts based on the available vCPUs. Set to a positive value to prefer hosts with more available vCPUs, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available vCPUs, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the vCPU weigher is relative to other weighers. Default: 1.0 - The scheduler spreads instances across all hosts evenly.

- `[filter_scheduler] io_ops_weight_multiplier` (floating point): Use this parameter to specify the multiplier to use to weigh hosts based on the host workload. Set to a negative value to prefer hosts with lighter workloads, which distributes the workload across more hosts. Set to a positive value to prefer hosts with heavier workloads, which schedules instances onto hosts that are already busy. The absolute value, whether positive or negative, controls how strong the I/O operations weigher is relative to other weighers. Default: -1.0 - The scheduler distributes the workload across more hosts.

- `[filter_scheduler] build_failure_weight_multiplier` (floating point): Use this parameter to specify the multiplier to use to weigh hosts based on recent build failures. Set to a positive value to increase the significance of build failures recently reported by the host. Hosts with recent build failures are then less likely to be chosen. Set to 0.0 to disable weighing hosts by the number of recent failures. Default: 1000000.0

- `[filter_scheduler] cross_cell_move_weight_multiplier` (floating point): Use this parameter to specify the multiplier to use to weigh hosts during a cross-cell move. This option determines how much weight is placed on a host which is within the same source cell when moving an instance. By default, the scheduler prefers hosts within the same source cell when migrating an instance. Set to a positive value to prefer hosts within the same cell the instance is currently running. Set to a negative value to prefer hosts located in a different cell from that where the instance is currently running. Default: 1000000.0

- `[filter_scheduler] pci_weight_multiplier` (positive floating point): Use this parameter to specify the multiplier to use to weigh hosts based on the number of PCI devices on the host and the number of PCI devices requested by an instance. If an instance requests PCI devices, then the more PCI devices a Compute node has, the higher the weight allocated to the Compute node. For example, if there are three hosts available, one with a single PCI device, one with multiple PCI devices, and one without any PCI devices, then the Compute scheduler prioritizes these hosts based on the demands of the instance: the first host if the instance requests one PCI device, the second host if the instance requires multiple PCI devices, and the third host if the instance does not request a PCI device. Configure this option to prevent non-PCI instances from occupying resources on hosts with PCI devices. Default: 1.0

- `[filter_scheduler] host_subset_size` (integer): Use this parameter to specify the size of the subset of filtered hosts from which to select the host. You must set this option to at least 1. A value of 1 selects the first host returned by the weighing functions. The scheduler ignores any value less than 1 and uses 1 instead. Set to a value greater than 1 to prevent multiple scheduler processes handling similar requests from selecting the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request. Default: 1

- `[filter_scheduler] soft_affinity_weight_multiplier` (positive floating point): Use this parameter to specify the multiplier to use to weigh hosts for group soft-affinity. Note: You need to specify the microversion when creating a group with this policy: `openstack --os-compute-api-version 2.15 server group create --policy soft-affinity <group_name>`. Default: 1.0

- `[filter_scheduler] soft_anti_affinity_weight_multiplier` (positive floating point): Use this parameter to specify the multiplier to use to weigh hosts for group soft-anti-affinity. Note: You need to specify the microversion when creating a group with this policy: `openstack --os-compute-api-version 2.15 server group create --policy soft-anti-affinity <group_name>`. Default: 1.0

- `[filter_scheduler] hypervisor_version_weight_multiplier` (floating point): Use this parameter to specify the multiplier to use to weigh hosts based on the hypervisor version reported by the host's virt driver. Set to a negative integer or float value to prefer Compute hosts with older hypervisors. Set to 0 to disable weighing Compute hosts by the hypervisor version. Default: 1.0 - The scheduler prefers Compute hosts with newer hypervisors.

- `[metrics] weight_multiplier` (floating point): Use this parameter to specify the multiplier to use for weighting metrics. Set to a number greater than 1.0 to increase the effect of the metric on the overall weight. Set to a number between 0.0 and 1.0 to reduce the effect of the metric on the overall weight. Set to 0.0 to ignore the metric value and return the value of the `weight_of_unavailable` option. Set to a negative number to prioritize the host with lower metrics, and stack instances in hosts. Default: 1.0

- `[metrics] weight_setting` (comma-separated list of `metric=ratio` pairs): Use this parameter to specify the metrics to use for weighting, and the ratio to use to calculate the weight of each metric. Valid metric names include `cpu.frequency`, `cpu.user.time`, `cpu.kernel.time`, `cpu.idle.time`, `cpu.iowait.time`, `cpu.user.percent`, `cpu.kernel.percent`, `cpu.idle.percent`, `cpu.iowait.percent`, and `cpu.percent`. Example: `weight_setting = cpu.user.time=1.0`

- `[metrics] required` (Boolean): Use this parameter to specify how to handle configured `weight_setting` metrics that are unavailable:
  - `True`: The metrics are required. If a metric is unavailable, an exception is raised. To avoid the exception, use the `MetricsFilter` filter in `[filter_scheduler] enabled_filters`.
  - `False`: The unavailable metric is treated as a negative factor in the weighing process. Set the returned value by using the `weight_of_unavailable` option.

- `[metrics] weight_of_unavailable` (floating point): Use this parameter to specify the weight to use if any `weight_setting` metric is unavailable. Default: -10000.0
7.5. Declaring custom traits and resource classes
As an administrator, you can declare which custom physical features and consumable resources are available on data plane nodes by defining a custom inventory of resources in a YAML file, provider.yaml.
You can declare the availability of physical host features by defining custom traits, such as CUSTOM_DIESEL_BACKUP_POWER, CUSTOM_FIPS_COMPLIANT, and CUSTOM_HPC_OPTIMIZED. You can also declare the availability of consumable resources by defining resource classes, such as CUSTOM_DISK_IOPS, and CUSTOM_POWER_WATTS.
You can use flavor metadata to request custom resources and custom traits. For more information, see Instance bare-metal resource class and Instance resource traits.
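For example, a flavor that consumes a custom resource class and requires a custom trait might look like the following. This is an illustrative sketch: `my-flavor` and the amount `500` are placeholders, while `resources:` and `trait:` are the standard flavor extra spec namespaces for requesting Placement resources and traits:

```console
$ openstack flavor set \
  --property resources:CUSTOM_DISK_IOPS=500 \
  --property trait:CUSTOM_FIPS_COMPLIANT=required \
  my-flavor
```

Instances launched with such a flavor are only scheduled to resource providers that expose the `CUSTOM_DISK_IOPS` inventory and the `CUSTOM_FIPS_COMPLIANT` trait declared in `provider.yaml`.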
Prerequisites
- You installed the `oc` and `podman` command line tools on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure
-
Create a file in
/home/stack/templates/calledprovider.yaml. To configure the resource provider, add the following configuration to your
provider.yamlfile:meta: schema_version: '1.0' providers: - identification: uuid: <node_uuid>-
Replace
<node_uuid>with the UUID for the node, for example,'5213b75d-9260-42a6-b236-f39b0fd10561'. Alternatively, you can use thenameproperty to identify the resource provider:name: 'EXAMPLE_RESOURCE_PROVIDER'.
-
Replace
- To configure the available custom resource classes for the resource provider, add the following configuration to your `provider.yaml` file:

  ```yaml
  meta:
    schema_version: '1.0'
  providers:
    - identification:
        uuid: <node_uuid>
      inventories:
        additional:
          - CUSTOM_EXAMPLE_RESOURCE_CLASS:
              total: <total_available>
              reserved: <reserved>
              min_unit: <min_unit>
              max_unit: <max_unit>
              step_size: <step_size>
              allocation_ratio: <allocation_ratio>
  ```

  - Replace `CUSTOM_EXAMPLE_RESOURCE_CLASS` with the name of the resource class. Custom resource classes must begin with the prefix `CUSTOM_` and contain only the letters A through Z, the numbers 0 through 9, and the underscore "_" character.
  - Replace `<total_available>` with the total number of `CUSTOM_EXAMPLE_RESOURCE_CLASS` units available on this resource provider.
  - Replace `<reserved>` with the number of `CUSTOM_EXAMPLE_RESOURCE_CLASS` units that are reserved for the host and cannot be consumed by instances.
  - Replace `<min_unit>` with the minimum units of resources a single instance can consume.
  - Replace `<max_unit>` with the maximum units of resources a single instance can consume.
  - Replace `<step_size>` with the increment, in units, by which instances can consume `CUSTOM_EXAMPLE_RESOURCE_CLASS`.
  - Replace `<allocation_ratio>` with the value to set the allocation ratio. If `allocation_ratio` is set to 1.0, then no overallocation is allowed. If `allocation_ratio` is greater than 1.0, then the total available resource exceeds the physically existing resource.
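To make the interplay of these inventory fields concrete, the following sketch shows how an allocation request is checked against a custom resource-class inventory. This is a simplified illustration of the Placement service's admission logic, not the actual implementation; the numbers are illustrative:

```python
# Simplified sketch: does a request for `requested` units of a custom
# resource class fit within the declared inventory?

def can_allocate(requested, total, reserved, min_unit, max_unit,
                 step_size, allocation_ratio, used=0):
    """Return True if `requested` units satisfy the inventory constraints."""
    # A single instance must stay within the per-instance unit bounds.
    if not (min_unit <= requested <= max_unit):
        return False
    # Requests must be consumed in multiples of step_size.
    if requested % step_size != 0:
        return False
    # Effective capacity excludes the reserved units and is scaled
    # by the allocation ratio (ratio > 1.0 permits overallocation).
    capacity = (total - reserved) * allocation_ratio
    return used + requested <= capacity

# With total=22, reserved=2, allocation_ratio=1.0 there are 20 usable units.
print(can_allocate(10, total=22, reserved=2, min_unit=1, max_unit=11,
                   step_size=1, allocation_ratio=1.0))  # True
print(can_allocate(12, total=22, reserved=2, min_unit=1, max_unit=11,
                   step_size=1, allocation_ratio=1.0))  # False: exceeds max_unit
```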
- To configure the available traits for the resource provider, add the following configuration to your `provider.yaml` file:

  ```yaml
  meta:
    schema_version: '1.0'
  providers:
    - identification:
        uuid: <node_uuid>
      inventories:
        additional:
          ...
      traits:
        additional:
          - 'CUSTOM_EXAMPLE_TRAIT'
  ```

  Replace `CUSTOM_EXAMPLE_TRAIT` with the name of the trait. Custom traits must begin with the prefix `CUSTOM_` and contain only the letters A through Z, the numbers 0 through 9, and the underscore "_" character.

  Example `provider.yaml` file

  The following example declares one custom resource class and one custom trait for a resource provider:

  ```yaml
  meta:
    schema_version: 1.0
  providers:
    - identification:
        uuid: $COMPUTE_NODE
      inventories:
        additional:
          - CUSTOM_LLC:
              # Describing LLC on this compute node
              # max_unit indicates maximum size of single LLC
              # total indicates sum of sizes of all LLC
              total: 22
              reserved: 2
              min_unit: 1
              max_unit: 11
              step_size: 1
              allocation_ratio: 1.0
      traits:
        additional:
          # Describing that this compute node enables support for
          # P-state control
          - CUSTOM_P_STATE_ENABLED
  ```

  - `total: 22` specifies that the hypervisor has 22 units of last level cache (LLC).
  - `reserved: 2` specifies that two of the units of LLC are reserved for the host.
  - The `min_unit` and `max_unit` values define how many units of resources a single instance can consume.
  - `step_size: 1` defines the increments of consumption.
  - `allocation_ratio: 1.0` configures the overallocation of resources. When `allocation_ratio` is 1.0, no overallocation is allowed.
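As a hedged sketch of where this procedure ends up, the example `provider.yaml` above is embedded verbatim under the `data` key of the `ConfigMap` that the following steps create. The combined object, assuming the names used later in this procedure, would look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: compute-provider
  namespace: openstack
data:
  provider.yaml: |
    meta:
      schema_version: 1.0
    providers:
      - identification:
          uuid: $COMPUTE_NODE
        inventories:
          additional:
            - CUSTOM_LLC:
                total: 22
                reserved: 2
                min_unit: 1
                max_unit: 11
                step_size: 1
                allocation_ratio: 1.0
        traits:
          additional:
            - CUSTOM_P_STATE_ENABLED
```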
- Save and close the `provider.yaml` file.
- Create a `ConfigMap` CR that configures the Compute nodes to use the `provider.yaml` file for the declaration of the custom traits and resources, and save it to a file named `compute-provider.yaml` on your workstation:

  ```yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: compute-provider
    namespace: openstack
  data:
    provider.yaml: |
  ```

  For more information about creating `ConfigMap` objects, see Creating and using config maps in Nodes.

- Create the `ConfigMap` object:

  ```
  $ oc create -f compute-provider.yaml
  ```

- Create a new custom service, `compute-provider`, that includes the `compute-provider` `ConfigMap` object, and save it to a file named `compute-provider-service.yaml` on your workstation:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneService
  metadata:
    name: compute-provider
    namespace: openstack
  spec:
    label: dataplane-deployment-compute
    playbook: osp.edpm.nova
    secrets: []
    dataSources:
      - secretRef:
          name: nova-cell1-compute-config
      - secretRef:
          name: nova-migration-ssh-key
      - configMapRef:
          name: compute-provider
      - configMapRef:
          name: nova-extra-config
          optional: true
  ```

- Create the `compute-provider` service:

  ```
  $ oc apply -f compute-provider-service.yaml
  ```

- Create a new `OpenStackDataPlaneNodeSet` CR that defines the nodes that you want to use the `provider.yaml` file for the declaration of the custom traits and resources, and save it to a file named `compute-provider.yaml` on your workstation:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneNodeSet
  metadata:
    name: compute-provider
  ```

  For information about how to create an `OpenStackDataPlaneNodeSet` CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes.

- Modify your `compute-provider` `OpenStackDataPlaneNodeSet` CR to use your `compute-provider` service instead of the default Compute service:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneNodeSet
  metadata:
    name: compute-provider
  spec:
    services:
      - download-cache
      - configure-network
      - validate-network
      - install-os
      - configure-os
      - run-os
      - ovn
      - libvirt
      - compute-provider # replaces the nova service
      - telemetry
  ```

- Save the `compute-provider.yaml` file.
- Create the data plane resources:

  ```
  $ oc create -f compute-provider.yaml
  ```

- Verify that the data plane resources have been created:

  ```
  $ oc get openstackdataplanenodeset
  NAME               STATUS   MESSAGE
  compute-provider   False    Deployment not started
  ```

- Verify that the services were created:

  ```
  $ oc get openstackdataplaneservice
  NAME                AGE
  download-cache      6d7h
  configure-network   6d7h
  configure-os        6d6h
  install-os          6d6h
  run-os              6d6h
  validate-network    6d6h
  ovn                 6d6h
  libvirt             6d6h
  compute-provider    6d6h
  telemetry           6d6h
  ```

- Create a new `OpenStackDataPlaneDeployment` CR to configure the services on the data plane nodes and deploy the nodes, and save it to a file named `compute-provider_deploy.yaml` on your workstation:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneDeployment
  metadata:
    name: compute-provider
  ```

- Specify `nodeSets` to include all the `OpenStackDataPlaneNodeSet` CRs that you want to deploy:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneDeployment
  metadata:
    name: compute-provider
  spec:
    nodeSets:
      - openstack-edpm
      - compute-provider
      - ...
      - <nodeSet_name>
  ```

  Replace `<nodeSet_name>` with the names of the `OpenStackDataPlaneNodeSet` CRs that you want to include in your data plane deployment.

- Save the `compute-provider_deploy.yaml` deployment file.
- Deploy the data plane:

  ```
  $ oc create -f compute-provider_deploy.yaml
  ```

- Verify that the data plane is deployed:

  ```
  $ oc get openstackdataplanedeployment
  NAME               STATUS   MESSAGE
  compute-provider   True     Deployment Completed

  $ oc get openstackdataplanenodeset
  NAME               STATUS   MESSAGE
  openstack-edpm     True     Deployed
  compute-provider   True     Deployed
  ```

- Ensure that the deployed Compute nodes are visible on the control plane:

  ```
  $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose
  ```

- Access the remote shell for `openstackclient` and verify that the deployed Compute nodes are visible on the control plane:

  ```
  $ oc rsh -n openstack openstackclient
  $ openstack hypervisor list
  ```
7.6. Creating and managing host aggregates
As a cloud administrator, you can partition your Compute deployment into logical groups for administrative control and scheduling flexibility by creating host aggregates.
Red Hat OpenStack Services on OpenShift (RHOSO) provides the following mechanisms for partitioning logical groups:
- Host aggregate
A host aggregate is a grouping of Compute nodes into a logical unit based on attributes such as the hardware or performance characteristics. You can assign a Compute node to one or more host aggregates.
You can map flavors and images to host aggregates by setting metadata on the host aggregate, and then matching flavor extra specs or image metadata properties to the host aggregate metadata. The Compute scheduler can use this metadata to schedule instances when the required filters are enabled. Metadata that you specify in a host aggregate restricts the hosts in that aggregate to instances that have the same metadata specified in their flavor or image.
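The matching behavior described above can be sketched in a few lines. This is a simplified illustration of the `AggregateInstanceExtraSpecsFilter` semantics (the real filter also supports operators such as `<in>` and multiple values), not the actual Nova implementation:

```python
# Simplified sketch of AggregateInstanceExtraSpecsFilter semantics:
# a host passes if every scoped extra spec on the flavor matches the
# metadata of at least one aggregate that contains the host.

SCOPE = "aggregate_instance_extra_specs:"

def host_passes(flavor_extra_specs, host_aggregate_metadata_list):
    """flavor_extra_specs: dict of extra specs set on the flavor.
    host_aggregate_metadata_list: one metadata dict per aggregate
    the host belongs to."""
    for key, value in flavor_extra_specs.items():
        if not key.startswith(SCOPE):
            continue  # unscoped specs are ignored by this filter
        meta_key = key[len(SCOPE):]
        if not any(meta.get(meta_key) == value
                   for meta in host_aggregate_metadata_list):
            return False
    return True

specs = {"aggregate_instance_extra_specs:ssd": "true"}
print(host_passes(specs, [{"ssd": "true"}]))   # True: host is in an ssd aggregate
print(host_passes(specs, [{"ssd": "false"}]))  # False: no matching metadata
```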
You can configure weight multipliers for each host aggregate by setting the `xxx_weight_multiplier` configuration option in the host aggregate metadata.

You can use host aggregates to handle load balancing, enforce physical isolation or redundancy, group servers with common attributes, or separate classes of hardware.
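For example, to give the CPU weigher twice its normal influence for hosts in a particular aggregate, you might set the multiplier as aggregate metadata. This is an illustrative sketch; the aggregate name is hypothetical:

```
$ openstack aggregate set \
    --property cpu_weight_multiplier=2.0 \
    example-aggregate
```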
When you create a host aggregate, you can specify a zone name. This name is presented to cloud users as an availability zone that they can select.
- Availability zones
An availability zone is the cloud user view of a host aggregate. A cloud user cannot view the Compute nodes in the availability zone, or view the metadata of the availability zone. The cloud user can only see the name of the availability zone.
You can assign each Compute node to only one availability zone. You can configure a default availability zone where instances will be scheduled when the cloud user does not specify a zone. You can direct cloud users to use availability zones that have specific capabilities.
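From the cloud-user perspective, the zone is consumed at boot time: the user lists the available zones and passes one when creating an instance. The names below are placeholders:

```
$ openstack availability zone list
$ openstack server create --availability-zone <zone_name> \
    --flavor <flavor> --image <image> my-instance
```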
7.6.1. Enabling scheduling on host aggregates
To schedule instances on host aggregates that have specific attributes, update the configuration of the Compute scheduler to enable filtering based on the host aggregate metadata.
Prerequisites
- You installed the `oc` and `podman` command line tools on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure
- Access the remote shell for the OpenStackClient pod from your workstation:

  ```
  $ oc rsh -n openstack openstackclient
  ```

- Change to the `cloud-admin` home directory:

  ```
  $ cd /home/cloud-admin
  ```
Open your
OpenStackControlPlanecustom resource (CR) file,openstack_control_plane.yaml, on your workstation. Add the following values to the
enabled_filtersparameter, if they are not already present:AggregateInstanceExtraSpecsFilter: Add this value to filter Compute nodes by host aggregate metadata that match flavor extra specs.NoteFor this filter to perform as expected, you must scope the flavor extra specs by prefixing the
extra_specskey with theaggregate_instance_extra_specs:namespace.AggregateImagePropertiesIsolation: Add this value to filter Compute nodes by host aggregate metadata that match image metadata properties.NoteTo filter host aggregate metadata by using image metadata properties, the host aggregate metadata key must match a valid image metadata property. For information about valid image metadata properties, see Image service properties and property keys reference.
AvailabilityZoneFilter: Add this value to filter by availability zone when launching an instance.NoteInstead of using the
AvailabilityZoneFilterCompute scheduler service filter, you can use the Placement service to process availability zone requests.
- Update the control plane:

  ```
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Exit the `openstackclient` pod:

  ```
  $ exit
  ```
7.6.2. Creating a host aggregate
Create host aggregates as logical groupings of Compute nodes. You can assign metadata and add nodes to the aggregate to guide the Compute scheduler when placing instances based on specific attributes.
Prerequisites
- You have the `oc` and `podman` command line tools installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure
- Access the remote shell for the `OpenStackClient` pod from your workstation:

  ```
  $ oc rsh -n openstack openstackclient
  ```

- Change to the `cloud-admin` home directory:

  ```
  $ cd /home/cloud-admin
  ```

- To create a host aggregate, enter the following command:

  ```
  # openstack aggregate create <aggregate_name>
  ```

  Replace `<aggregate_name>` with the name you want to assign to the host aggregate.

- Add metadata to the host aggregate:

  ```
  # openstack aggregate set \
    --property <key=value> \
    --property <key=value> \
    <aggregate_name>
  ```

  - Replace `<key=value>` with the metadata key-value pair. If you are using the `AggregateInstanceExtraSpecsFilter` filter, the key can be any arbitrary string, for example, `ssd=true`. If you are using the `AggregateImagePropertiesIsolation` filter, the key must match a valid image metadata property. For more information about valid image metadata properties, see Image configuration parameters.
  - Replace `<aggregate_name>` with the name of the host aggregate.

- Add the Compute nodes to the host aggregate:

  ```
  # openstack aggregate add host \
    <aggregate_name> \
    <host_name>
  ```

  - Replace `<aggregate_name>` with the name of the host aggregate to add the Compute node to.
  - Replace `<host_name>` with the name of the Compute node to add to the host aggregate.

- Create a flavor or image for the host aggregate:

  - Create a flavor:

    ```
    $ openstack flavor create \
      --ram <size_mb> \
      --disk <size_gb> \
      --vcpus <no_reserved_vcpus> \
      host-agg-flavor
    ```

  - Create an image:

    ```
    $ openstack image create host-agg-image
    ```

- Set one or more key-value pairs on the flavor or image that match the key-value pairs on the host aggregate.

  - To set the key-value pairs on a flavor, use the scope `aggregate_instance_extra_specs`:

    ```
    # openstack flavor set \
      --property aggregate_instance_extra_specs:ssd=true \
      host-agg-flavor
    ```

  - To set the key-value pairs on an image, use valid image metadata properties as the key:

    ```
    # openstack image set \
      --property os_type=linux \
      host-agg-image
    ```

- Exit the `OpenStackClient` pod:

  ```
  $ exit
  ```
7.6.3. Creating an availability zone
Create an availability zone to provide cloud users with selectable options for deploying instances. An availability zone represents a cloud user’s logical view of a host aggregate.
Prerequisites
- You installed the `oc` and `podman` command line tools on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure
- Access the remote shell for the OpenStackClient pod from your workstation:

  ```
  $ oc rsh -n openstack openstackclient
  ```

- Change to the `cloud-admin` home directory:

  ```
  $ cd /home/cloud-admin
  ```

- To create an availability zone, you can create a new availability zone host aggregate, or make an existing host aggregate an availability zone:

  - To create a new availability zone host aggregate, enter the following command:

    ```
    # openstack aggregate create \
      --zone <availability_zone> \
      <aggregate_name>
    ```

    - Replace `<availability_zone>` with the name you want to assign to the availability zone.
    - Replace `<aggregate_name>` with the name you want to assign to the host aggregate.

  - To make an existing host aggregate an availability zone, enter the following command:

    ```
    # openstack aggregate set --zone <availability_zone> \
      <aggregate_name>
    ```

    - Replace `<availability_zone>` with the name you want to assign to the availability zone.
    - Replace `<aggregate_name>` with the name of the host aggregate.

- Optional: Add metadata to the availability zone:

  ```
  # openstack aggregate set --property <key=value> \
    <aggregate_name>
  ```

  - Replace `<key=value>` with your metadata key-value pair. You can add as many key-value properties as required.
  - Replace `<aggregate_name>` with the name of the availability zone host aggregate.

- Add Compute nodes to the availability zone host aggregate:

  ```
  # openstack aggregate add host <aggregate_name> \
    <host_name>
  ```

  - Replace `<aggregate_name>` with the name of the availability zone host aggregate to add the Compute node to.
  - Replace `<host_name>` with the name of the Compute node to add to the availability zone.

- Exit the `openstackclient` pod:

  ```
  $ exit
  ```
7.6.4. Deleting a host aggregate
To delete a host aggregate, you must first remove all associated Compute nodes from it. This prevents scheduling errors and allows for the permanent removal of the logical grouping.
Prerequisites
- You installed the `oc` and `podman` command line tools on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure
- Access the remote shell for the `OpenStackClient` pod from your workstation:

  ```
  $ oc rsh -n openstack openstackclient
  ```

- Change to the `cloud-admin` home directory:

  ```
  $ cd /home/cloud-admin
  ```

- To view a list of all the Compute nodes assigned to the host aggregate, enter the following command:

  ```
  # openstack aggregate show <aggregate_name>
  ```

- To remove all assigned Compute nodes from the host aggregate, enter the following command for each Compute node:

  ```
  # openstack aggregate remove host <aggregate_name> \
    <host_name>
  ```

  - Replace `<aggregate_name>` with the name of the host aggregate to remove the Compute node from.
  - Replace `<host_name>` with the name of the Compute node to remove from the host aggregate.

- After you remove all the Compute nodes from the host aggregate, enter the following command to delete the host aggregate:

  ```
  # openstack aggregate delete <aggregate_name>
  ```

- Exit the `OpenStackClient` pod:

  ```
  $ exit
  ```
7.6.5. Creating a project-isolated host aggregate
Create a host aggregate that is restricted only to specific projects. This project-isolated host aggregate helps to ensure that only authorized projects can launch instances on those designated Compute nodes.
Project isolation uses the Placement service to filter host aggregates for each project. This process supersedes the functionality of the AggregateMultiTenancyIsolation filter. You therefore do not need to use the AggregateMultiTenancyIsolation filter.
Prerequisites
- You installed the `oc` and `podman` command line tools on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure
- Access the remote shell for the OpenStackClient pod from your workstation:

  ```
  $ oc rsh -n openstack openstackclient
  ```

- Change to the `cloud-admin` home directory:

  ```
  $ cd /home/cloud-admin
  ```

- On your workstation, open your `OpenStackControlPlane` custom resource (CR) file, `openstack_control_plane.yaml`.
- To schedule project instances on the project-isolated host aggregate, set the value of the `limit_tenants_to_placement_aggregate` parameter to `True`:

  ```
  [scheduler]
  limit_tenants_to_placement_aggregate = True
  ```

- Optional: To ensure that only the projects that you assign to a host aggregate can create instances on your cloud, set the value of the `placement_aggregate_required_for_tenants` parameter to `True`.

  Note: The `placement_aggregate_required_for_tenants` parameter is set to `False` by default. When this parameter is `False`, projects that are not assigned to a host aggregate can create instances on any host aggregate.

- Save the updates to your `openstack_control_plane.yaml` file.
- Update the control plane:

  ```
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Create the host aggregate.
- Retrieve the list of project IDs:

  ```
  # openstack project list
  ```

- Use the `filter_tenant_id<suffix>` metadata key to assign projects to the host aggregate:

  ```
  # openstack aggregate set \
    --property filter_tenant_id<ID0>=<project_id0> \
    --property filter_tenant_id<ID1>=<project_id1> \
    ...
    --property filter_tenant_id<IDn>=<project_idn> \
    <aggregate_name>
  ```

  - Replace `<ID0>`, `<ID1>`, and all IDs up to `<IDn>` with unique values for each project filter that you want to create.
  - Replace `<project_id0>`, `<project_id1>`, and all project IDs up to `<project_idn>` with the ID of each project that you want to assign to the host aggregate.
  - Replace `<aggregate_name>` with the name of the project-isolated host aggregate.

  For example, use the following syntax to assign projects `78f1`, `9d3t`, and `aa29` to the host aggregate `project-isolated-aggregate`:

  ```
  # openstack aggregate set \
    --property filter_tenant_id0=78f1 \
    --property filter_tenant_id1=9d3t \
    --property filter_tenant_id2=aa29 \
    project-isolated-aggregate
  ```

  Tip: You can create a host aggregate that is available only to a single specific project by omitting the suffix from the `filter_tenant_id` metadata key:

  ```
  # openstack aggregate set \
    --property filter_tenant_id=78f1 \
    single-project-isolated-aggregate
  ```

- Exit the openstackclient pod:

  ```
  $ exit
  ```
Additional resources