Chapter 3. Creating flavors for launching instances
Instance flavors are resource templates specifying the virtual hardware profile and resource allocation for an instance, including vCPUs, RAM, and storage. Cloud users must select a flavor when launching an instance, and you can control who uses which flavors by setting them as public or private.
There are no default flavors in Red Hat OpenStack Services on OpenShift (RHOSO). To create a flavor, you must use the openstack flavor create command, for example:
openstack --os-compute-api-version 2.86 flavor create --ram 128 --disk 1 --vcpus 1 m1.nano
This command creates a public flavor called m1.nano with 128 MB of RAM and a 1 GB disk. The API microversion enables flavor extra spec validation, which prevents common typos and similar errors when defining flavors. You specify the microversion by using --os-compute-api-version 2.86.
openstack --os-compute-api-version 2.86 flavor create --ram 196 --disk 1 --vcpus 1 m1.micro
This command creates a public flavor called m1.micro with 196 MB of RAM and a 1 GB disk.
Flavors can use metadata, also referred to as "extra specs", to specify instance hardware support and quotas. The flavor metadata influences the instance placement, resource usage limits, and performance. For a complete list of available metadata properties, see Flavor metadata.
You can also use the flavor metadata keys to find a suitable host aggregate to host the instance, by matching the extra_specs metadata set on the host aggregate. To schedule an instance on a host aggregate, you must scope the flavor metadata by prefixing the extra_specs key with the aggregate_instance_extra_specs: namespace. For more information, see Creating and managing host aggregates.
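For example, the following sketch scopes a flavor key to a host aggregate. The aggregate name `ssd-hosts` and the `ssd=true` extra spec are illustrative, not defaults, and this matching assumes the `AggregateInstanceExtraSpecsFilter` scheduler filter is enabled in your deployment:

```shell
# Illustrative names: create an aggregate and tag it with an extra spec
openstack aggregate create ssd-hosts
openstack aggregate set --property ssd=true ssd-hosts

# Scope the flavor key with the aggregate_instance_extra_specs: namespace
# so the scheduler matches hosts in the tagged aggregate
openstack flavor set \
  --property aggregate_instance_extra_specs:ssd=true m1.nano
```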
Behaviors that are set by using flavor properties override behaviors that are set by using images. When a cloud user launches an instance, the properties of the flavor they specify override the properties of the image they specify.
3.1. Creating a flavor
Create specialized flavors to change default memory capacity to suit the underlying hardware needs, and to add metadata to force a specific I/O rate for the instance or to match a host aggregate.
Procedure
Create a flavor that specifies the basic resources to make available to an instance:
$ openstack --os-compute-api-version 2.86 flavor create --ram <size_mb> \
  --disk <size_gb> --vcpus <no_vcpus> \
  [--private --project <project_id>] <flavor_name>

- Replace <size_mb> with the size of RAM, in MB, to allocate to an instance created with this flavor.
- Replace <size_gb> with the size of the root disk, in GB, to allocate to an instance created with this flavor.
- Replace <no_vcpus> with the number of vCPUs to reserve for an instance created with this flavor.
- Optional: Specify the --private and --project options to make the flavor accessible only by a particular project or group of users. Replace <project_id> with the ID of the project that can use this flavor to create instances. If you do not specify the accessibility, the flavor defaults to public, which means that it is available to all projects. Note: You cannot make a public flavor private after it has been created.
- Replace <flavor_name> with a unique name for your flavor.

For more information about flavor arguments, see Flavor arguments.
Optional: To specify flavor metadata, set the required properties by using key-value pairs:
$ openstack --os-compute-api-version 2.86 flavor set \
  --property <key=value> --property <key=value> ... <flavor_name>

- Replace <key> with the metadata key of the property you want to allocate to an instance that is created with this flavor. For a list of available metadata keys, see Flavor metadata.
- Replace <value> with the value of the metadata key you want to allocate to an instance that is created with this flavor.
- Replace <flavor_name> with the name of your flavor.

For example, an instance that is launched by using the following flavor has two CPU sockets, each with two cores:

$ openstack --os-compute-api-version 2.86 flavor set \
  --property hw:cpu_sockets=2 \
  --property hw:cpu_cores=2 processor_topology_flavor
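After you create or modify a flavor, you can confirm the resulting resource allocation and metadata. A minimal check, assuming the m1.nano flavor from the earlier example exists:

```shell
# List all flavors visible to the current project
openstack flavor list

# Show the full definition of one flavor, including its extra specs
# (the "properties" field in the output)
openstack flavor show m1.nano
```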
3.2. Flavor arguments
Flavor arguments are parameters you use with the openstack flavor create command to specify the resources and characteristics of a new instance flavor. The only required positional argument is the unique name for the flavor.
| Optional argument | Description |
|---|---|
| `--id <id>` | Unique ID for the flavor. The default value, `auto`, generates a UUID. |
| `--ram <size_mb>` | (Mandatory) Size of memory to make available to the instance, in MB. Default: 256 MB |
| `--disk <size_gb>` | (Mandatory) Amount of disk space to use for the root (/) partition, in GB. The root disk is an ephemeral disk that the base image is copied into. When an instance boots from a persistent volume, the root disk is not used. Note: Creation of an instance with a flavor that has a disk size of 0 requires the instance to boot from a volume. Default: 0 GB |
| `--ephemeral <size_gb>` | Amount of disk space to use for the ephemeral disks, in GB. Ephemeral disks offer machine local disk storage linked to the lifecycle of the instance. Ephemeral disks are not included in any snapshots. This disk is destroyed and all data is lost when the instance is deleted. Default: 0 GB, which means that no secondary ephemeral disk is created. |
| `--swap <size_mb>` | Swap disk size in MB. Do not specify a unit. Default: 0 MB |
| `--vcpus <no_vcpus>` | (Mandatory) Number of virtual CPUs for the instance. Default: 1 |
| `--public` | The flavor is available to all projects. By default, a flavor is public and available to all projects. |
| `--private` | The flavor is only available to the projects specified by using the `--project` argument. |
| `--property <key=value>` | Metadata, or "extra specs", specified by using key-value pairs. Repeat this option to set multiple properties. |
| `--project <project_id>` | Specifies the project that can use the private flavor. You must use this argument with the `--private` argument. Repeat this option to allow access to multiple projects. |
| `--project-domain <domain>` | Specifies the project domain that can use the private flavor. You must use this argument with the `--private` argument. Repeat this option to allow access to multiple project domains. |
| `--description <description>` | Description of the flavor. Limited to 65535 characters in length. You can use only printable characters. |
3.3. Flavor metadata
Flavor metadata, also called extra specs, uses the --property option to define instance hardware support and quotas. Flavor metadata determines instance hardware support and quotas, which influence instance placement, instance limits, and performance.
- Instance resource usage
Use the property keys in the following table to configure limits on CPU, memory and disk I/O usage by instances.
Note: The extra specs for limiting instance CPU resource usage are host-specific tunable properties that are passed directly to libvirt, which then passes the limits on to the host OS. Therefore, the supported instance CPU resource limit configurations depend on the underlying host OS.
For more information on how to configure instance CPU resource usage for the Compute nodes in your RHOSO deployment, see Understanding cgroups in the RHEL 9 documentation, and CPU Tuning in the Libvirt documentation.
Table 3.2. Flavor metadata for resource usage

- `quota:cpu_shares`: Specifies the proportional weighted share of CPU time for the domain. Defaults to the OS provided defaults. The Compute scheduler weighs this value relative to the setting of this property on other instances in the same domain. For example, an instance that is configured with `quota:cpu_shares=2048` is allocated double the CPU time of an instance that is configured with `quota:cpu_shares=1024`.
- `quota:cpu_period`: Specifies the period of time within which to enforce the `cpu_quota`, in microseconds. Within the `cpu_period`, each vCPU cannot consume more than `cpu_quota` of runtime. Set to a value in the range 1000 – 1000000. Set to `0` to disable.
- `quota:cpu_quota`: Specifies the maximum allowed bandwidth for the vCPU in each `cpu_period`, in microseconds:
  - Set to a value in the range 1000 – 18446744073709551.
  - Set to `0` to disable.
  - Set to a negative value to allow infinite bandwidth.

You can use `cpu_quota` and `cpu_period` to ensure that all vCPUs run at the same speed. For example, you can use the following flavor to launch an instance that can consume a maximum of 50% of the computing capability of a physical CPU:

$ openstack flavor set cpu_limits_flavor \
  --property quota:cpu_quota=10000 \
  --property quota:cpu_period=20000

- Instance disk tuning
Use the property keys in the following table to tune the instance disk performance.
Note: The Compute service applies the following quality of service settings to storage that the Compute service has provisioned, such as ephemeral storage. To tune the performance of Block Storage (cinder) volumes, you must also configure and associate a Quality of Service (QoS) specification for the volume type.
Table 3.3. Flavor metadata for disk tuning

- `quota:disk_read_bytes_sec`: Specifies the maximum disk reads available to an instance, in bytes per second.
- `quota:disk_read_iops_sec`: Specifies the maximum disk reads available to an instance, in IOPS.
- `quota:disk_write_bytes_sec`: Specifies the maximum disk writes available to an instance, in bytes per second.
- `quota:disk_write_iops_sec`: Specifies the maximum disk writes available to an instance, in IOPS.
- `quota:disk_total_bytes_sec`: Specifies the maximum total I/O available to an instance, in bytes per second.
- `quota:disk_total_iops_sec`: Specifies the maximum total I/O operations available to an instance, in IOPS.
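For example, the following sketch caps disk I/O for instances launched with a flavor. The flavor name and limit values are illustrative, not recommendations:

```shell
# Illustrative limits: cap disk reads at 10 MB/s and writes at 400 IOPS
openstack flavor set \
  --property quota:disk_read_bytes_sec=10485760 \
  --property quota:disk_write_iops_sec=400 \
  disk_tuned_flavor
```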
- Instance network traffic bandwidth
Use the property keys in the following table to configure bandwidth limits on the instance network traffic by configuring the VIF I/O options.
Note: The `quota:vif_*` properties are deprecated. Instead, use the Networking (neutron) service Quality of Service (QoS) policies. For more information about QoS policies, see Using Quality of Service (QoS) policies to manage data traffic in the Configuring networking services guide. The `quota:vif_*` properties are only supported when you use the ML2/OVS mechanism driver with `NeutronOVSFirewallDriver` set to `iptables_hybrid`.

Table 3.4. Flavor metadata for bandwidth limits

- `quota:vif_inbound_average`: (Deprecated) Specifies the required average bit rate on the traffic incoming to the instance, in kbps.
- `quota:vif_inbound_burst`: (Deprecated) Specifies the maximum amount of incoming traffic that can be burst at peak speed, in KB.
- `quota:vif_inbound_peak`: (Deprecated) Specifies the maximum rate at which the instance can receive incoming traffic, in kbps.
- `quota:vif_outbound_average`: (Deprecated) Specifies the required average bit rate on the traffic outgoing from the instance, in kbps.
- `quota:vif_outbound_burst`: (Deprecated) Specifies the maximum amount of outgoing traffic that can be burst at peak speed, in KB.
- `quota:vif_outbound_peak`: (Deprecated) Specifies the maximum rate at which the instance can send outgoing traffic, in kbps.
- Hardware video RAM
Use the property key in the following table to configure limits on the instance RAM to use for video devices.
Table 3.5. Flavor metadata for video devices

- `hw_video:ram_max_mb`: Specifies the maximum RAM to use for video devices, in MB. Use with the `hw_video_ram` image property. `hw_video_ram` must be less than or equal to `hw_video:ram_max_mb`.

- Watchdog behavior
Use the property key in the following table to enable the virtual hardware watchdog device on the instance.
Table 3.6. Flavor metadata for watchdog behavior

- `hw:watchdog_action`: Specify to enable the virtual hardware watchdog device and set its behavior. Watchdog devices perform the configured action if the instance hangs or fails. The watchdog uses the i6300esb device, which emulates a PCI Intel 6300ESB. If `hw:watchdog_action` is not specified, the watchdog is disabled. Set to one of the following valid values:
  - `disabled`: (Default) The device is not attached.
  - `reset`: Force instance reset.
  - `poweroff`: Force instance shut down.
  - `pause`: Pause the instance.
  - `none`: Enable the watchdog, but do nothing if the instance hangs or fails.

Note: Watchdog behavior that you set by using the properties of a specific image overrides behavior that you set by using flavors.
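For example, to launch instances whose watchdog resets them on a hang (the flavor name is illustrative):

```shell
# Attach the i6300esb watchdog and force a reset if the instance hangs
openstack flavor set \
  --property hw:watchdog_action=reset watchdog_flavor
```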
- Random number generator (RNG)
Use the property keys in the following table to enable the RNG device on the instance.
Table 3.7. Flavor metadata for RNG

- `hw_rng:allowed`: Set to `False` to disable the RNG device that is added to the instance through its image properties. Default: `True`
- `hw_rng:rate_bytes`: Specifies the maximum number of bytes that the instance can read from the entropy of the host, per period.
- `hw_rng:rate_period`: Specifies the duration of the read period, in milliseconds.
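For example, the following sketch limits how quickly an instance can drain host entropy. The flavor name and rate values are illustrative:

```shell
# Illustrative rate limit: at most 2048 bytes of host entropy per 500 ms
openstack flavor set \
  --property hw_rng:rate_bytes=2048 \
  --property hw_rng:rate_period=500 \
  rng_limited_flavor
```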
- Virtual Performance Monitoring Unit (vPMU)
Use the property key in the following table to enable the vPMU for the instance.
Table 3.8. Flavor metadata for vPMU

- `hw:pmu`: Set to `True` to enable a vPMU for the instance. Tools such as `perf` use the vPMU on the instance to provide more accurate information to profile and monitor instance performance. For real-time workloads, the emulation of a vPMU can introduce additional latency, which might be undesirable. If the telemetry it provides is not required, set `hw:pmu=False`.

- Virtual Trusted Platform Module (vTPM) devices
Use the property keys in the following table to enable a vTPM device for the instance.
Table 3.9. Flavor metadata for vTPM

- `hw:tpm_version`: Set to the version of TPM to use. TPM version `2.0` is the only supported version.
- `hw:tpm_model`: Set to the model of TPM device to use. Ignored if `hw:tpm_version` is not configured. Set to one of the following valid values:
  - `tpm-tis`: (Default) TPM Interface Specification.
  - `tpm-crb`: Command-Response Buffer. Compatible only with TPM version 2.0.
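For example, to give instances an emulated TPM 2.0 device that uses the Command-Response Buffer model (the flavor name is illustrative):

```shell
# Request a TPM 2.0 device; tpm-crb is only compatible with version 2.0
openstack flavor set \
  --property hw:tpm_version=2.0 \
  --property hw:tpm_model=tpm-crb \
  vtpm_flavor
```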
- Instance CPU topology
Use the property keys in the following table to define the topology of the processors in the instance.
Table 3.10. Flavor metadata for CPU topology

- `hw:cpu_sockets`: Specifies the preferred number of sockets for the instance. Default: the number of vCPUs requested.
- `hw:cpu_cores`: Specifies the preferred number of cores per socket for the instance. Default: `1`
- `hw:cpu_threads`: Specifies the preferred number of threads per core for the instance. Default: `1`
- `hw:cpu_max_sockets`: Specifies the maximum number of sockets that users can select for their instances by using image properties. Example: `hw:cpu_max_sockets=2`
- `hw:cpu_max_cores`: Specifies the maximum number of cores per socket that users can select for their instances by using image properties.
- `hw:cpu_max_threads`: Specifies the maximum number of threads per core that users can select for their instances by using image properties.
- Serial ports
Use the property key in the following table to configure the number of serial ports per instance.
Table 3.11. Flavor metadata for serial ports

- `hw:serial_port_count`: Maximum number of serial ports per instance.
- CPU pinning policy
By default, instance virtual CPUs (vCPUs) are sockets with one core and one thread. You can use properties to create flavors that pin the vCPUs of instances to the physical CPU cores (pCPUs) of the host. You can also configure the behavior of hardware CPU threads in a simultaneous multithreading (SMT) architecture where one or more cores have thread siblings.
Use the property keys in the following table to define the CPU pinning policy of the instance.
Table 3.12. Flavor metadata for CPU pinning

- `hw:cpu_policy`: Specifies the CPU policy to use. Set to one of the following valid values:
  - `shared`: (Default) The instance vCPUs float across host pCPUs.
  - `dedicated`: Pin the instance vCPUs to a set of host pCPUs. This creates an instance CPU topology that matches the topology of the CPUs to which the instance is pinned. This option implies an overcommit ratio of 1.0.
  - `mixed`: The instance vCPUs use a mix of dedicated (pinned) host pCPUs and shared (unpinned) host pCPUs.
- `hw:cpu_thread_policy`: Specifies the CPU thread policy to use when `hw:cpu_policy=dedicated`. Set to one of the following valid values:
  - `prefer`: (Default) The host might or might not have an SMT architecture. If an SMT architecture is present, the Compute scheduler gives preference to thread siblings.
  - `isolate`: The host must not have an SMT architecture or must emulate a non-SMT architecture. This policy ensures that the Compute scheduler places the instance on a host without SMT by requesting hosts that do not report the `HW_CPU_HYPERTHREADING` trait. It is also possible to request this trait explicitly by using the property `trait:HW_CPU_HYPERTHREADING=forbidden`. If the host does not have an SMT architecture, the Compute service places each vCPU on a different core as expected. If the host does have an SMT architecture, the behavior is determined by the configuration of the `[workarounds]/disable_fallback_pcpu_query` parameter:
    - `True`: The host with an SMT architecture is not used and scheduling fails.
    - `False`: The Compute service places each vCPU on a different physical core. The Compute service does not place vCPUs from other instances on the same core. All but one thread sibling on each used core is therefore guaranteed to be unusable.
  - `require`: The host must have an SMT architecture. This policy ensures that the Compute scheduler places the instance on a host with SMT by requesting hosts that report the `HW_CPU_HYPERTHREADING` trait. It is also possible to request this trait explicitly by using the property `trait:HW_CPU_HYPERTHREADING=required`. The Compute service allocates each vCPU on thread siblings. If the host does not have an SMT architecture, it is not used. If the host has an SMT architecture, but not enough cores with free thread siblings are available, scheduling fails.
- `hw:cpu_dedicated_mask`: Specifies which CPUs are dedicated (pinned) or shared (unpinned, floating).
  - To specify dedicated CPUs, specify the CPU number or CPU range. For example, set the property to `2-3` to specify that CPUs 2 and 3 are dedicated and all the remaining CPUs are shared.
  - To specify shared CPUs, prepend the CPU number or CPU range with a caret (^). For example, set the property to `^0-1` to specify that CPUs 0 and 1 are shared and all the remaining CPUs are dedicated.
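For example, the following sketch pins all vCPUs of instances launched with a flavor to dedicated host cores and forbids SMT hosts (the flavor name is illustrative):

```shell
# Pin vCPUs to dedicated pCPUs and require non-SMT placement
openstack flavor set \
  --property hw:cpu_policy=dedicated \
  --property hw:cpu_thread_policy=isolate \
  pinned_flavor
```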
- Instance PCI NUMA affinity policy
Use the property key in the following table to create flavors that specify the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces.
Table 3.13. Flavor metadata for PCI NUMA affinity policy

- `hw:pci_numa_affinity_policy`: Specifies the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces. Set to one of the following valid values:
  - `required`: The Compute service creates an instance that requests a PCI device only when at least one of the NUMA nodes of the instance has affinity with the PCI device. This option provides the best performance.
  - `preferred`: The Compute service attempts a best-effort selection of PCI devices based on NUMA affinity. If this is not possible, the Compute service schedules the instance on a NUMA node that has no affinity with the PCI device.
  - `legacy`: (Default) The Compute service creates instances that request a PCI device in one of the following cases:
    - The PCI device has affinity with at least one of the NUMA nodes.
    - The PCI devices do not provide information about their NUMA affinities.
  - `socket`: The Compute service creates an instance that requests a PCI device only when at least one of the instance NUMA nodes has affinity with a NUMA node in the same host socket as the PCI device. For example, consider a host architecture with two sockets, where each socket has two NUMA nodes and a PCI device is connected to one of the nodes in the first socket. The Compute service can pin an instance with two NUMA nodes and the `socket` PCI NUMA affinity policy only to the following combinations of host nodes, because they all have at least one instance NUMA node pinned to the PCI device's socket:
    - node 0 and node 1
    - node 0 and node 2
    - node 0 and node 3
    - node 1 and node 2
    - node 1 and node 3

    The only combination of host nodes that the instance cannot be pinned to is node 2 and node 3, because neither of those nodes is on the same socket as the PCI device. If the other nodes are consumed by other instances and only nodes 2 and 3 are available, the instance does not boot.
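For example, to require strict NUMA affinity between instances and their passthrough devices (the flavor name is illustrative):

```shell
# Only schedule on hosts where an instance NUMA node has affinity
# with the requested PCI device
openstack flavor set \
  --property hw:pci_numa_affinity_policy=required pci_affinity_flavor
```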
- Instance NUMA topology
You can use properties to create flavors that define the host NUMA placement for the instance vCPU threads, and the allocation of instance vCPUs and memory from the host NUMA nodes.
Defining a NUMA topology for the instance improves the performance of the instance OS for flavors whose memory and vCPU allocations are larger than the size of NUMA nodes in the Compute hosts.
The Compute scheduler uses these properties to determine a suitable host for the instance. For example, a cloud user launches an instance by using the following flavor:
$ openstack flavor set numa_top_flavor \
  --property hw:numa_nodes=2 \
  --property hw:numa_cpus.0=0,1,2,3,4,5 \
  --property hw:numa_cpus.1=6,7 \
  --property hw:numa_mem.0=3072 \
  --property hw:numa_mem.1=1024

The Compute scheduler searches for a host that has two NUMA nodes, one with 3GB of RAM and the ability to run six CPUs, and the other with 1GB of RAM and two CPUs. If a host has a single NUMA node with the capability to run eight CPUs and 4GB of RAM, the Compute scheduler does not consider it a valid match.
Note: NUMA topologies defined by a flavor cannot be overridden by NUMA topologies defined by the image. The Compute service raises an `ImageNUMATopologyForbidden` error if the image NUMA topology conflicts with the flavor NUMA topology.

Important: You cannot use this feature to constrain instances to specific host CPUs or NUMA nodes. Use this feature only after you complete extensive testing and performance measurements. You can use the `hw:pci_numa_affinity_policy` property instead.

Use the property keys in the following table to define the instance NUMA topology.
Table 3.14. Flavor metadata for NUMA topology

- `hw:numa_nodes`: Specifies the number of host NUMA nodes to restrict execution of instance vCPU threads to. If not specified, the vCPU threads can run on any number of the available host NUMA nodes.
- `hw:numa_cpus.N`: A comma-separated list of instance vCPUs to map to instance NUMA node N. If this key is not specified, vCPUs are evenly divided among available NUMA nodes. N starts from 0. Use *.N values with caution, and only if you have at least two NUMA nodes. This property is valid only if you have set `hw:numa_nodes`, and is required only if the NUMA nodes of the instance have an asymmetrical allocation of CPUs and RAM, which is important for some NFV workloads.
- `hw:numa_mem.N`: The number of MB of instance memory to map to instance NUMA node N. If this key is not specified, memory is evenly divided among available NUMA nodes. N starts from 0. Use *.N values with caution, and only if you have at least two NUMA nodes. This property is valid only if you have set `hw:numa_nodes`, and is required only if the NUMA nodes of the instance have an asymmetrical allocation of CPUs and RAM, which is important for some NFV workloads.

Warning: If the combined values of `hw:numa_cpus.N` or `hw:numa_mem.N` are greater than the available number of CPUs or memory respectively, the Compute service raises an exception.
- CPU real-time policy
Use the property keys in the following table to define the real-time policy of the processors in the instance.
Note- Although most of your instance vCPUs can run with a real-time policy, you must mark at least one vCPU as non-real-time to use for both non-real-time guest processes and emulator overhead processes.
- To use this extra spec, you must enable pinned CPUs.
Table 3.15. Flavor metadata for CPU real-time policy

- `hw:cpu_realtime`: Set to `yes` to create a flavor that assigns a real-time policy to the instance vCPUs. Default: `no`
- `hw:cpu_realtime_mask`: Specifies the vCPUs to not assign a real-time policy to. You must prepend the mask value with a caret symbol (^). The following example indicates that all vCPUs except vCPUs 0 and 1 have a real-time policy:

$ openstack flavor set <flavor> \
  --property hw:cpu_realtime="yes" \
  --property hw:cpu_realtime_mask=^0-1

Note: If the `hw_cpu_realtime_mask` property is set on the image, it takes precedence over the `hw:cpu_realtime_mask` property set on the flavor.
- Emulator threads policy
You can assign a pCPU to an instance to use for emulator threads. Emulator threads are emulator processes that are not directly related to the instance. A dedicated emulator thread pCPU is required for real-time workloads. To use the emulator threads policy, you must enable pinned CPUs by setting the following property:
--property hw:cpu_policy=dedicated

Use the property key in the following table to define the emulator threads policy of the instance.
Table 3.16. Flavor metadata for the emulator threads policy

- `hw:emulator_threads_policy`: Specifies the emulator threads policy to use for instances. Set to one of the following valid values:
  - `share`: The emulator thread floats across the pCPUs defined in the `NovaComputeCpuSharedSet` heat parameter. If `NovaComputeCpuSharedSet` is not configured, the emulator thread floats across the pinned CPUs that are associated with the instance.
  - `isolate`: Reserves an additional dedicated pCPU per instance for the emulator thread. Use this policy with caution, because it is prohibitively resource intensive.
  - unset: (Default) The emulator thread policy is not enabled, and the emulator thread floats across the pinned CPUs associated with the instance.
- Instance memory page size
Use the property keys in the following table to create an instance with an explicit memory page size.
Table 3.17. Flavor metadata for memory page size

- `hw:mem_page_size`: Specifies the size of large pages to use to back the instances. Use of this option creates an implicit NUMA topology of 1 NUMA node unless otherwise specified by `hw:numa_nodes`. Set to one of the following valid values:
  - `large`: Selects a page size larger than the smallest page size supported on the host, which can be 2 MB or 1 GB on x86_64 systems.
  - `small`: Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages).
  - `any`: Selects the largest available huge page size, as determined by the libvirt driver.
  - `<pagesize>`: (String) Sets an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: `4KB`, `2MB`, `2048`, `1GB`.
  - unset: (Default) Large pages are not used to back instances and no implicit NUMA topology is generated.
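For example, a sketch of a flavor that backs instance memory with 1 GB huge pages, which the Compute hosts must already have allocated (the flavor name is illustrative):

```shell
# Back instance memory with 1 GB huge pages; this also creates an
# implicit single-node NUMA topology unless hw:numa_nodes is set
openstack flavor set --property hw:mem_page_size=1GB hugepage_flavor
```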
- PCI passthrough
Use the property key in the following table to attach a physical PCI device, such as a graphics card or a network device, to an instance.
Table 3.18. Flavor metadata for PCI passthrough

- `pci_passthrough:alias`: Specifies the PCI device to assign to an instance by using the format `<alias>:<count>`.
  - Replace `<alias>` with the alias that corresponds to a particular PCI device class.
  - Replace `<count>` with the number of PCI devices of type `<alias>` to assign to the instance.
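For example, assuming an operator has configured a PCI alias named `a1` in the Compute service configuration (the alias and flavor names are illustrative), the following sketch assigns two devices of that class to each instance:

```shell
# "a1" is an example alias that must already exist in the Compute
# service's PCI alias configuration; request two devices of that class
openstack flavor set \
  --property pci_passthrough:alias=a1:2 pci_device_flavor
```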
- Hypervisor signature
Use the property key in the following table to hide the hypervisor signature from the instance.
Table 3.19. Flavor metadata for hiding the hypervisor signature

- `hide_hypervisor_id`: Set to `True` to hide the hypervisor signature from the instance, to allow all drivers to load and work on the instance.
- UEFI Secure Boot
Use the property key in the following table to create an instance that is protected with UEFI Secure Boot.
Note: Instances with UEFI Secure Boot must support UEFI and the GUID Partition Table (GPT) standard, and include an EFI system partition.
Table 3.20. Flavor metadata for UEFI Secure Boot

- `os:secure_boot`: Set to `required` to enable Secure Boot for instances launched with this flavor. Disabled by default.
- Instance resource traits
Each resource provider has a set of traits. Traits are the qualitative aspects of a resource provider, for example, the type of storage disk, or the Intel CPU instruction set extension. An instance can specify which of these traits it requires.
The traits that you can specify are defined in the `os-traits` library. Example traits include the following:

- COMPUTE_TRUSTED_CERTS
- COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG
- COMPUTE_IMAGE_TYPE_RAW
- HW_CPU_X86_AVX
- HW_CPU_X86_AVX512VL
- HW_CPU_X86_AVX512CD

For details about how to use the `os-traits` library, see https://docs.openstack.org/os-traits/latest/user/index.html.

Use the property key in the following table to define the resource traits of the instance.
Table 3.21. Flavor metadata for resource traits

- `trait:<trait_name>`: Specifies Compute node traits. Set the trait to one of the following valid values:
  - `required`: The Compute node selected to host the instance must have the trait.
  - `forbidden`: The Compute node selected to host the instance must not have the trait.

Example:

$ openstack flavor set --property trait:HW_CPU_X86_AVX512BW=required avx512-flavor
- Instance bare-metal resource class
Use the property key in the following table to request a bare-metal resource class for an instance.
Table 3.22. Flavor metadata for bare-metal resource class

- `resources:<resource_class_name>`: Use this property to override the values of standard bare-metal resource classes, or to specify custom bare-metal resource classes that the instance requires.

The standard resource classes that you can override are `VCPU`, `MEMORY_MB`, and `DISK_GB`. To prevent the Compute scheduler from using the bare-metal flavor properties for scheduling instances, set the value of the standard resource classes to `0`.

The name of a custom resource class must start with `CUSTOM_`. To determine the name of a custom resource class that corresponds to a resource class of a Bare Metal service node, convert the resource class to uppercase, replace all punctuation with an underscore, and prefix it with `CUSTOM_`.

For example, to schedule instances on a node that has `--resource-class baremetal.SMALL`, create the following flavor:

$ openstack flavor set \
  --property resources:CUSTOM_BAREMETAL_SMALL=1 \
  --property resources:VCPU=0 --property resources:MEMORY_MB=0 \
  --property resources:DISK_GB=0 compute-small