Chapter 4. Configuring CPUs on Compute nodes
As a cloud administrator, you can configure the scheduling and placement of instances for optimal performance by creating customized flavors to target specialized workloads, including NFV and High Performance Computing (HPC).
Use the following features to tune your instances for optimal CPU performance:
- CPU pinning: Pin virtual CPUs to physical CPUs.
- Emulator threads: Pin emulator threads associated with the instance to physical CPUs.
- CPU feature flags: Configure the standard set of CPU feature flags that are applied to instances to improve live migration compatibility across Compute nodes.
4.1. Configuring CPU pinning on Compute nodes
You can configure each instance CPU process to run on a dedicated host CPU by enabling CPU pinning on the Compute nodes. When an instance uses CPU pinning, each instance vCPU process is allocated its own host pCPU that no other instance vCPU process can use. Instances that run on Compute nodes with CPU pinning enabled have a NUMA topology. Each NUMA node of the instance NUMA topology maps to a NUMA node on the host Compute node.
You can configure the Compute scheduler to schedule instances with dedicated (pinned) CPUs and instances with shared (floating) CPUs on the same Compute node. To configure CPU pinning on Compute nodes that have a NUMA topology, you must complete the following:
- Designate Compute nodes for CPU pinning.
- Configure the Compute nodes to reserve host cores for pinned instance vCPU processes, floating instance vCPU processes, and host processes.
- Deploy the data plane.
- Create a flavor for launching instances that require CPU pinning.
- Create a flavor for launching instances that use shared, or floating, CPUs.
Configuring CPU pinning creates an implicit NUMA topology on the instance even if a NUMA topology is not requested. Do not run NUMA and non-NUMA virtual machines (VMs) on the same hosts.
4.1.1. Prerequisites
- You know the NUMA topology of your Compute node.
- The oc command line tool is installed on your workstation.
- You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges.
4.1.2. Designating and configuring Compute nodes for CPU pinning
To designate Compute nodes for instances with pinned CPUs, create and configure a new OpenStackDataPlaneNodeSet custom resource (CR) for the nodes that you designate for CPU pinning. Configure CPU pinning on your Compute nodes based on the NUMA topology of the nodes. For efficiency, reserve some CPU cores across all the NUMA nodes for the host processes, and assign the remaining CPU cores to your instances. This procedure uses the following NUMA topology, with eight CPU cores spread across two NUMA nodes, to illustrate how to configure CPU pinning:
NUMA Node 0: Core 0, Core 1, Core 2, Core 3
NUMA Node 1: Core 4, Core 5, Core 6, Core 7
The procedure reserves cores 0 and 4 for host processes, cores 1, 3, 5 and 7 for instances that require CPU pinning, and cores 2 and 6 for floating instances that do not require CPU pinning.
The following procedure applies to new OpenStackDataPlaneNodeSet CRs that have not yet been provisioned. To reconfigure an existing OpenStackDataPlaneNodeSet CR that has already been provisioned, you must first drain the guest instances from all the nodes in the OpenStackDataPlaneNodeSet CR.
Configuring CPU pinning creates an implicit NUMA topology on the instance even if a NUMA topology is not requested. Do not run NUMA and non-NUMA virtual machines (VMs) on the same hosts.
You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes.
Prerequisites
- You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes for which you want to designate and configure CPU pinning. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating the data plane in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
Create or update the ConfigMap CR named nova-extra-config.yaml and set the values of the parameters under [compute] and [DEFAULT]. The parameters are described below; a sketch of the ConfigMap follows the descriptions.
- The name of the new Compute configuration file. The nova-operator generates the default configuration file with the name 01-nova.conf. Do not use the default name, because it would override the infrastructure configuration, such as the transport_url. The nova-compute service applies every file under /etc/nova/nova.conf.d/ in lexicographical order, therefore configurations defined in later files override the same configurations defined in earlier files.
- cpu_shared_set: Reserves physical CPU cores for the shared instances.
- cpu_dedicated_set: Reserves physical CPU cores for the dedicated instances.
- The amount of memory to reserve per NUMA node.
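A minimal sketch of this ConfigMap, using the core layout from the example topology above. The custom file name 25-nova-cpu-pinning.conf, the openstack namespace, and the use of reserved_huge_pages for the per-NUMA-node memory reservation are illustrative assumptions; adjust them to your environment:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nova-extra-config
  namespace: openstack
data:
  25-nova-cpu-pinning.conf: |
    [compute]
    # Cores 2 and 6 host shared (floating) instance vCPUs
    cpu_shared_set = 2,6
    # Cores 1, 3, 5, and 7 host pinned instance vCPUs; cores 0 and 4 are left for host processes
    cpu_dedicated_set = 1,3,5,7
    [DEFAULT]
    # Assumed option and values: reserve 512 MB of 4 KB pages on each NUMA node
    reserved_huge_pages = node:0,size:4,count:131072
    reserved_huge_pages = node:1,size:4,count:131072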
For more information about creating ConfigMap objects, see Creating and using config maps in Nodes.
Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_cpu_pinning_deploy.yaml on your workstation:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-edpm-cpu-pinning

For more information about creating an OpenStackDataPlaneDeployment CR, see Deploying the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.
In the compute_cpu_pinning_deploy.yaml, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy, as shown in the sketch after the warning below. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to designate for CPU pinning.
Warning: If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config ConfigMap and therefore will be affected by the reconfiguration, complete the following steps:
- Check the services list of the node set and find the name of the DataPlaneService that points to nova.
- Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova.
- If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets.
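A minimal sketch of the completed deployment CR; the nodeSets entry is a placeholder, and you add one entry for each node set that you want to deploy:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-edpm-cpu-pinning
spec:
  nodeSets:
    - <nodeSet_name>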
Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
- Save the compute_cpu_pinning_deploy.yaml deployment file.
- Deploy the data plane:

  $ oc create -f compute_cpu_pinning_deploy.yaml

- Verify that the data plane is deployed:

  $ oc get openstackdataplanenodeset
  NAME                  STATUS   MESSAGE
  compute-cpu-pinning   True     Deployed

- Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

  $ oc rsh -n openstack openstackclient
  $ openstack hypervisor list
4.1.3. Creating a dedicated CPU flavor for instances
To enable your cloud users to create instances that have dedicated CPUs, you can create a flavor with a dedicated CPU policy for launching instances.
Prerequisites
- Simultaneous multithreading (SMT) is configured on the host if you intend to use the require cpu_thread_policy. You can have a mix of SMT and non-SMT Compute hosts: flavors with the require cpu_thread_policy land on SMT hosts, and flavors with isolate land on non-SMT hosts.
- The Compute node is configured to allow CPU pinning. For more information, see Configuring CPU pinning on Compute nodes.
Procedure
Create a flavor for instances that require CPU pinning:
$ openstack flavor create --ram <size_mb> \
  --disk <size_gb> --vcpus <num_guest_vcpus> pinned_cpus

If you are not using file-backed memory, set the hw:mem_page_size property of the flavor to enable NUMA-aware memory allocation:

$ openstack --os-compute-api=2.86 flavor set \
  --property hw:mem_page_size=<page_size> pinned_cpus

Replace <page_size> with one of the following valid values:
- large: Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems.
- small: (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages).
- any: Selects the page size by using the hw_mem_page_size property set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver.
- <pagesize>: Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB.

Note: To set hw:mem_page_size to small or any, you must have configured the amount of memory pages to reserve on each NUMA node for processes that are not instances.
To request pinned CPUs, set the hw:cpu_policy property of the flavor to dedicated:

$ openstack --os-compute-api=2.86 flavor set \
  --property hw:cpu_policy=dedicated pinned_cpus

Optional: To place each vCPU on thread siblings, set the hw:cpu_thread_policy property of the flavor to require:

$ openstack --os-compute-api=2.86 flavor set \
  --property hw:cpu_thread_policy=require pinned_cpus

Note:
- If the host does not have an SMT architecture or enough CPU cores with available thread siblings, scheduling fails. To prevent this, set hw:cpu_thread_policy to prefer instead of require. The prefer policy is the default policy that ensures that thread siblings are used when available.
- If you use hw:cpu_thread_policy=isolate, you must have SMT disabled or use a platform that does not support SMT.
To verify that the flavor creates an instance with dedicated CPUs, use your new flavor to launch an instance:

$ openstack server create --flavor pinned_cpus \
  --image <image> pinned_cpu_instance
4.1.5. Creating a mixed CPU flavor for instances
To enable your cloud users to create instances that have a mix of dedicated and shared CPUs, you can create a flavor with a mixed CPU policy for launching instances.
Procedure
Create a flavor for instances that require a mix of dedicated and shared CPUs:
$ openstack flavor create --ram <size_mb> \
  --disk <size_gb> --vcpus <number_of_reserved_vcpus> \
  --property hw:cpu_policy=mixed mixed_CPUs_flavor

Specify which CPUs must be dedicated or shared:

$ openstack --os-compute-api=2.86 flavor set \
  --property hw:cpu_dedicated_mask=<CPU_MASK> \
  mixed_CPUs_flavor

Replace <CPU_MASK> with the CPUs that must be either dedicated or shared:
- To specify dedicated CPUs, specify the CPU number or CPU range. For example, set the property to 2-3 to specify that CPUs 2 and 3 are dedicated and all the remaining CPUs are shared.
- To specify shared CPUs, prepend the CPU number or CPU range with a caret (^). For example, set the property to ^0-1 to specify that CPUs 0 and 1 are shared and all the remaining CPUs are dedicated.
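For example, the following illustrative command marks CPUs 2 and 3 of the instance as dedicated and leaves the remaining CPUs shared:

$ openstack --os-compute-api=2.86 flavor set \
  --property hw:cpu_dedicated_mask=2-3 \
  mixed_CPUs_flavor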
If you are not using file-backed memory, set the hw:mem_page_size property of the flavor to enable NUMA-aware memory allocation:

$ openstack --os-compute-api=2.86 flavor set \
  --property hw:mem_page_size=<page_size> mixed_CPUs_flavor

Replace <page_size> with one of the following valid values:
- large: Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems.
- small: (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages).
- any: Selects the page size by using the hw_mem_page_size property set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver.
- <pagesize>: Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB.

Note: To set hw:mem_page_size to small or any, you must have configured the amount of memory pages to reserve on each NUMA node for processes that are not instances.
4.1.6. Configuring CPU pinning on Compute nodes with simultaneous multithreading (SMT)
If a Compute node supports simultaneous multithreading (SMT), group thread siblings together in either the dedicated or the shared set. Thread siblings share some common hardware, which means that a process running on one thread sibling can impact the performance of the other thread sibling.
For example, the host identifies four logical CPU cores in a dual core CPU with SMT: 0, 1, 2, and 3. Of these four, there are two pairs of thread siblings:
- Thread sibling 1: logical CPU cores 0 and 2
- Thread sibling 2: logical CPU cores 1 and 3
In this scenario, do not assign logical CPU cores 0 and 1 as dedicated and 2 and 3 as shared. Instead, assign 0 and 2 as dedicated and 1 and 3 as shared.
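A minimal sketch of how this grouping maps to the Compute node configuration parameters used earlier in this chapter, assuming logical cores 0 and 2 are dedicated and logical cores 1 and 3 are shared:

[compute]
# Thread siblings 0 and 2 are grouped in the dedicated set
cpu_dedicated_set = 0,2
# Thread siblings 1 and 3 are grouped in the shared set
cpu_shared_set = 1,3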
The files /sys/devices/system/cpu/cpuN/topology/thread_siblings_list, where N is the logical CPU number, contain the thread pairs. You can use the following command to identify which logical CPU cores are thread siblings:
# grep -H . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -n -t ':' -k 2 -u
The following output indicates that logical CPU cores 0 and 2 are threads on the same core, and that logical CPU cores 1 and 3 are threads on the same core:

/sys/devices/system/cpu/cpu0/topology/thread_siblings_list:0,2
/sys/devices/system/cpu/cpu1/topology/thread_siblings_list:1,3
4.1.7. Additional resources
4.2. Configuring emulator threads
Compute nodes have overhead tasks associated with the hypervisor for each instance, known as emulator threads. By default, emulator threads run on the same CPUs as the instance, which impacts the performance of the instance.
You can configure the emulator thread policy to run emulator threads on separate CPUs to those the instance uses.
To avoid packet loss, you must never preempt the vCPUs in an NFV deployment. Ensure that you configure emulator threads to run on different CPUs than the NFV workload.
You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes.
Prerequisites
- You have enabled CPU pinning.
- You have selected the OpenStackDataPlaneNodeSet custom resource (CR) that defines the nodes that you want to configure for emulator threads. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.
Procedure
To configure nodes for emulator threads, create or update the ConfigMap CR named nova-extra-config.yaml and set the value of the cpu_dedicated_set parameter under [compute]. For example, the following configuration sets the dedicated CPUs on a Compute node with a 32-core CPU.
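A minimal sketch of this setting; the exact core list is an illustrative assumption, chosen so that cores 0, 1, 16, and 17 remain free for the shared set configured in the next step:

[compute]
cpu_dedicated_set = 2-15,18-31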
To reserve physical CPU cores for the emulator threads, configure the cpu_shared_set parameter. For example, the following configuration sets the shared CPUs on a Compute node with a 32-core CPU:

[compute]
cpu_shared_set = 0,1,16,17

Note: The Compute scheduler also uses the CPUs in the shared set for instances that run on shared, or floating, CPUs. For more information, see Configuring CPU pinning on Compute nodes.
Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_emulator_threads_deploy.yaml on your workstation:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-emulator-threads

In the compute_emulator_threads_deploy.yaml, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy, as shown in the sketch after the warning below. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to designate for emulator threads.
Warning: If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config ConfigMap and therefore will be affected by the reconfiguration, complete the following steps:
- Check the services list of the node set and find the name of the OpenStackDataPlaneService CR that points to the nova service.
- Ensure that the value of the edpmServiceType field of the OpenStackDataPlaneService CR is set to nova.
- If the dataSources list of the OpenStackDataPlaneService CR contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new OpenStackDataPlaneService CR that points to a separate ConfigMap for these node sets.
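A minimal sketch of the completed deployment CR; the nodeSets entry is a placeholder, and you add one entry for each node set that you want to deploy:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-emulator-threads
spec:
  nodeSets:
    - <nodeSet_name>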
Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
- Save the compute_emulator_threads_deploy.yaml deployment file.
- Deploy the data plane:

  $ oc create -f compute_emulator_threads_deploy.yaml

- Verify that the data plane is deployed:

  $ oc get openstackdataplanenodeset
  NAME                       STATUS   MESSAGE
  compute-emulator-threads   True     Deployed

- Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

  $ oc rsh -n openstack openstackclient
  $ openstack hypervisor list

- Configure a flavor that runs emulator threads for the instance on a dedicated CPU, which is selected from the shared CPUs configured using cpu_shared_set:

  $ openstack --os-compute-api=2.86 flavor set --property hw:cpu_policy=dedicated \
    --property hw:emulator_threads_policy=share \
    dedicated_emulator_threads

For more information about configuration options for hw:emulator_threads_policy, see Emulator threads policy in Flavor metadata.
4.3. Configuring CPU feature flags for instances
You can enable or disable CPU feature flags for an instance without changing the settings on the host Compute node and rebooting the Compute node. By configuring the standard set of CPU feature flags that are applied to instances, you help achieve live migration compatibility across Compute nodes. You also help manage the performance and security of the instances, by disabling flags that have a negative impact on the security or performance of instances with a particular CPU model, or by enabling flags that provide mitigation for a security problem or alleviate performance problems.
4.3.1. Prerequisites
The CPU model and feature flags must be supported by the hardware and software of the host Compute node:
- To check the hardware your host supports, enter the following command on the Compute node:

  $ cat /proc/cpuinfo

- To check the CPU models supported on your host, enter the following command on the Compute node:

  $ sudo virsh cpu-models <arch>

  Replace <arch> with the name of the architecture, for example, x86_64.
- You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes for which you want to designate and configure CPU feature flags. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.
4.3.2. Configuring CPU feature flags for instances
Configure the Compute service to apply CPU feature flags to instances with specific vCPU models.
Procedure
To apply CPU feature flags to instances with specific vCPU models, create or update the ConfigMap CR named nova-extra-config.yaml and set the value of the cpu_mode parameter under [libvirt].
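A minimal sketch of this setting, using the default host-model mode described below as an example value; substitute the mode that fits your environment:

[libvirt]
cpu_mode = host-model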
You can set cpu_mode to one of the following valid values:
- host-model: (Default) Use the CPU model of the host Compute node. Use this CPU mode to automatically add critical CPU flags to the instance to provide mitigation from security flaws.
- custom: Use to configure the specific CPU models each instance should use.
- host-passthrough: Use the same CPU model and feature flags as the Compute node for the instances hosted on that Compute node.

Optional: If you set cpu_mode to custom, configure the instance CPU models that you want to customise, using a comma-separated list:

[libvirt]
cpu_mode = custom
cpu_models = <cpu_model1>,<cpu_model2>

- Replace <cpu_modelx> with the name of the CPU model, for example, Haswell-noTSX-IBRS.
- List the CPU models in order, placing the more common and less advanced CPU models first in the list, and the more feature-rich CPU models last. For a list of model names, see /usr/share/libvirt/cpu_map/*.xml, or enter the following command on the host Compute node:

  $ sudo virsh cpu-models <arch>

  Replace <arch> with the name of the architecture of the Compute node, for example, x86_64.
Configure the CPU feature flags for instances with the specified CPU models:

[libvirt]
cpu_mode = custom
cpu_models = Haswell-noTSX-IBRS
cpu_model_extra_flags = -PDPE1GB, +VMX, pcid

Prefix each flag with "+" to enable the flag, or "-" to disable it. If a prefix is not specified, the flag is enabled. For a list of the available feature flags for a given CPU model, see /usr/share/libvirt/cpu_map/*.xml. This example enables the CPU feature flags pcid and VMX for the Haswell-noTSX-IBRS model, and disables the feature flag PDPE1GB.
Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_feature_flags_deploy.yaml on your workstation:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-feature-flags

In the compute_feature_flags_deploy.yaml, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs that you want to deploy, as shown in the sketch after the warning below. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to configure for feature flags.
Warning: If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config ConfigMap and therefore will be affected by the reconfiguration, complete the following steps:
- Check the services list of the node set and find the name of the OpenStackDataPlaneService CR that points to the nova service.
- Ensure that the value of the edpmServiceType field of the OpenStackDataPlaneService CR is set to nova.
- If the dataSources list of the OpenStackDataPlaneService CR contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new OpenStackDataPlaneService CR that points to a separate ConfigMap for these node sets.
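A minimal sketch of the completed deployment CR; the nodeSets entry is a placeholder, and you add one entry for each node set that you want to deploy:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-feature-flags
spec:
  nodeSets:
    - <nodeSet_name>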
Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
- Save the compute_feature_flags_deploy.yaml deployment file.
- Deploy the data plane:

  $ oc create -f compute_feature_flags_deploy.yaml

- Verify that the data plane is deployed:

  $ oc get openstackdataplanenodeset
  NAME                    STATUS   MESSAGE
  compute-feature-flags   True     Deployed

- Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

  $ oc rsh -n openstack openstackclient
  $ openstack hypervisor list