Chapter 4. Configuring CPUs on Compute nodes


As a cloud administrator, you can configure the scheduling and placement of instances for optimal performance by creating customized flavors that target specialized workloads, including network functions virtualization (NFV) and High Performance Computing (HPC).

Use the following features to tune your instances for optimal CPU performance:

  • CPU pinning: Pin virtual CPUs to physical CPUs.
  • Emulator threads: Pin emulator threads associated with the instance to physical CPUs.
  • CPU feature flags: Configure the standard set of CPU feature flags that are applied to instances to improve live migration compatibility across Compute nodes.

4.1. Configuring CPU pinning on Compute nodes

You can configure each instance CPU process to run on a dedicated host CPU by enabling CPU pinning on the Compute nodes. When an instance uses CPU pinning, each instance vCPU process is allocated its own host pCPU that no other instance vCPU process can use. Instances that run on Compute nodes with CPU pinning enabled have a NUMA topology. Each NUMA node of the instance NUMA topology maps to a NUMA node on the host Compute node.

You can configure the Compute scheduler to schedule instances with dedicated (pinned) CPUs and instances with shared (floating) CPUs on the same Compute node. To configure CPU pinning on Compute nodes that have a NUMA topology, you must complete the following:

  1. Designate Compute nodes for CPU pinning.
  2. Configure the Compute nodes to reserve host cores for pinned instance vCPU processes, floating instance vCPU processes, and host processes.
  3. Deploy the data plane.
  4. Create a flavor for launching instances that require CPU pinning.
  5. Create a flavor for launching instances that use shared, or floating, CPUs.
Note

Configuring CPU pinning creates an implicit NUMA topology on the instance even if a NUMA topology is not requested. Do not run NUMA and non-NUMA virtual machines (VMs) on the same hosts.

4.1.1. Prerequisites

  • You know the NUMA topology of your Compute node.
  • The oc command line tool is installed on your workstation.
  • You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges.

4.1.2. Designating Compute nodes for CPU pinning

To designate Compute nodes for instances with pinned CPUs, you must create and configure a new OpenStackDataPlaneNodeSet custom resource (CR) for the nodes that you designate for CPU pinning. Configure CPU pinning on your Compute nodes based on the NUMA topology of the nodes. For efficiency, reserve some CPU cores on each NUMA node for host processes, and assign the remaining CPU cores to your instances. This procedure uses the following NUMA topology, with eight CPU cores spread across two NUMA nodes, to illustrate how to configure CPU pinning:

Table 4.1. Example of NUMA Topology

  NUMA Node 0            NUMA Node 1
  Core 0    Core 1       Core 4    Core 5
  Core 2    Core 3       Core 6    Core 7

The procedure reserves cores 0 and 4 for host processes, cores 1, 3, 5 and 7 for instances that require CPU pinning, and cores 2 and 6 for floating instances that do not require CPU pinning.
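If you need to determine the NUMA topology of a Compute node, you can inspect it directly on the node. The following is a minimal sketch; lscpu is part of util-linux, and numactl requires the numactl package to be installed on the node:

# List each logical CPU with its NUMA node, socket, and physical core
$ lscpu --extended=CPU,NODE,SOCKET,CORE
# Summarize the NUMA nodes and the CPUs and memory attached to each
$ numactl --hardware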

Note

The following procedure applies to new OpenStackDataPlaneNodeSet CRs that have not yet been provisioned. To reconfigure an existing OpenStackDataPlaneNodeSet that has already been provisioned, you must first drain the guest instances from all the nodes in the OpenStackDataPlaneNodeSet.

Note

Configuring CPU pinning creates an implicit NUMA topology on the instance even if a NUMA topology is not requested. Do not run NUMA and non-NUMA virtual machines (VMs) on the same hosts.

Warning

You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes.

Prerequisites

  • You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes for which you want to designate and configure CPU pinning. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating the data plane in Deploying Red Hat OpenStack Services on OpenShift.

Procedure

  1. Create or update the ConfigMap CR named nova-extra-config.yaml and set the values of the parameters under [compute] and [DEFAULT]:

    apiVersion: v1
    kind: ConfigMap
    metadata:
       name: nova-extra-config
       namespace: openstack
    data:
       25-nova-cpu-pinning.conf: |
          [compute]
          cpu_shared_set = 2,6
          cpu_dedicated_set = 1,3,5,7
          [DEFAULT]
          reserved_huge_pages = node:0,size:4,count:131072
          reserved_huge_pages = node:1,size:4,count:131072

    • 25-nova-cpu-pinning.conf: The name of the new Compute configuration file. The nova-operator generates the default configuration file with the name 01-nova.conf. Do not use the default name, because it would override the infrastructure configuration, such as the transport_url. The nova-compute service applies every file under /etc/nova/nova.conf.d/ in lexicographical order, so configurations defined in later files override the same configurations defined in earlier files.
    • cpu_shared_set: Reserves physical CPU cores for the shared instances.
    • cpu_dedicated_set: Reserves physical CPU cores for the dedicated instances.
    • reserved_huge_pages: Specifies the amount of memory to reserve per NUMA node for processes that are not instances, as a number of pages of the specified size. In this example, 131072 pages of 4 KB are reserved on each NUMA node.

    For more information about creating ConfigMap objects, see Creating and using config maps in Nodes.
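    After you save the file, apply the ConfigMap to the control plane and confirm that it exists. This is a minimal sketch, assuming the CR shown above is saved as nova-extra-config.yaml on your workstation:

    $ oc apply -f nova-extra-config.yaml
    $ oc get configmap nova-extra-config -n openstack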

  2. Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_cpu_pinning_deploy.yaml on your workstation:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: openstack-edpm-cpu-pinning

    For more information about creating an OpenStackDataPlaneDeployment CR, see Deploying the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.

  3. In the compute_cpu_pinning_deploy.yaml, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to designate for CPU pinning.

    Warning

    If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the node sets and the OpenStackDataPlaneService CRs are configured. To check if a node set uses the nova-extra-config ConfigMap and therefore will be affected by the reconfiguration, complete the following steps:

    1. Check the services list of the node set and find the name of the OpenStackDataPlaneService CR that points to the nova service.
    2. Ensure that the value of the edpmServiceType field of the OpenStackDataPlaneService CR is set to nova.

      If the dataSources list of the OpenStackDataPlaneService CR contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new OpenStackDataPlaneService CR that points to a separate ConfigMap for these node sets.
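      A minimal sketch of this check, assuming the Compute service definition on the data plane is the OpenStackDataPlaneService named nova in the openstack namespace:

      # List the services that the node set uses
      $ oc get openstackdataplanenodeset <nodeSet_name> -n openstack \
        -o jsonpath='{.spec.services}{"\n"}'
      # Inspect the service definition for edpmServiceType and dataSources
      $ oc get openstackdataplaneservice nova -n openstack -o yaml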

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: openstack-edpm-cpu-pinning
    spec:
      nodeSets:
        - openstack-edpm
        - compute-cpu-pinning
        - ...
        - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  4. Save the compute_cpu_pinning_deploy.yaml deployment file.
  5. Deploy the data plane:

    $ oc create -f compute_cpu_pinning_deploy.yaml
  6. Verify that the data plane is deployed:

    $ oc get openstackdataplanenodeset
    NAME                  STATUS   MESSAGE
    compute-cpu-pinning   True     Deployed
  7. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list
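    Optionally, verify that the dedicated and shared CPU sets are registered with the Placement service. This is a minimal sketch, assuming the placement plugin for the openstack client is available in the openstackclient pod; look for PCPU (dedicated) and VCPU (shared) inventories that match the sizes of cpu_dedicated_set and cpu_shared_set:

    $ openstack resource provider list
    $ openstack resource provider inventory list <resource_provider_uuid>
    • Replace <resource_provider_uuid> with the UUID of the Compute node resource provider returned by the previous command.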

4.1.3. Creating a dedicated CPU flavor for instances

To enable your cloud users to create instances that have dedicated CPUs, you can create a flavor with a dedicated CPU policy for launching instances.

Prerequisites

  • Simultaneous multithreading (SMT) is configured on the host if you intend to use the require cpu_thread_policy. You can have a mix of SMT and non-SMT Compute hosts: flavors with cpu_thread_policy set to require are scheduled to SMT hosts, and flavors with cpu_thread_policy set to isolate are scheduled to non-SMT hosts.
  • The Compute node is configured to allow CPU pinning. For more information, see Configuring CPU pinning on the Compute nodes.

Procedure

  1. Create a flavor for instances that require CPU pinning:

    $ openstack flavor create --ram <size_mb> \
     --disk <size_gb> --vcpus <num_guest_vcpus> pinned_cpus
  2. If you are not using file-backed memory, set the hw:mem_page_size property of the flavor to enable NUMA-aware memory allocation:

     $ openstack --os-compute-api=2.86 flavor set \
     --property hw:mem_page_size=<page_size> pinned_cpus
    • Replace <page_size> with one of the following valid values:

      • large: Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems.
      • small: (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages).
      • any: Selects the page size by using the hw_mem_page_size set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver.
      • <pagesize>: Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB.
    Note

    To set hw:mem_page_size to small or any, you must have configured the amount of memory pages to reserve on each NUMA node for processes that are not instances.

  3. To request pinned CPUs, set the hw:cpu_policy property of the flavor to dedicated:

    $ openstack --os-compute-api=2.86 flavor set \
     --property hw:cpu_policy=dedicated pinned_cpus
  4. Optional: To place each vCPU on thread siblings, set the hw:cpu_thread_policy property of the flavor to require:

    $ openstack --os-compute-api=2.86 flavor set \
     --property hw:cpu_thread_policy=require pinned_cpus
    Note
    • If the host does not have an SMT architecture or enough CPU cores with available thread siblings, scheduling fails. To prevent this, set hw:cpu_thread_policy to prefer instead of require. The prefer policy is the default policy that ensures that thread siblings are used when available.
    • If you use hw:cpu_thread_policy=isolate, you must have SMT disabled or use a platform that does not support SMT.
  5. To verify the flavor creates an instance with dedicated CPUs, use your new flavor to launch an instance:

    $ openstack server create --flavor pinned_cpus \
     --image <image> pinned_cpu_instance
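    To inspect the resulting CPU pinning, the following is a minimal sketch that assumes administrative access to the Compute node that hosts the instance; <instance_name> is the libvirt domain name reported by the Compute service:

    $ openstack server show pinned_cpu_instance \
     -c OS-EXT-SRV-ATTR:host -c OS-EXT-SRV-ATTR:instance_name
    $ sudo virsh vcpupin <instance_name>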

4.1.4. Creating a shared CPU flavor for instances

To enable your cloud users to create instances that use shared, or floating, CPUs, you can create a flavor with a shared CPU policy for launching instances.

Prerequisites

Procedure

  1. Create a flavor for instances that do not require CPU pinning:

    $ openstack flavor create --ram <size_mb> \
     --disk <size_gb> --vcpus <no_reserved_vcpus> floating_cpus
  2. To request floating CPUs, set the hw:cpu_policy property of the flavor to shared:

    $ openstack --os-compute-api=2.86 flavor set \
     --property hw:cpu_policy=shared floating_cpus
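    For example, to launch an instance that uses the shared CPU flavor, following the same pattern as the dedicated CPU flavor:

    $ openstack server create --flavor floating_cpus \
     --image <image> floating_cpu_instance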

4.1.5. Creating a mixed CPU flavor for instances

To enable your cloud users to create instances that have a mix of dedicated and shared CPUs, you can create a flavor with a mixed CPU policy for launching instances.

Procedure

  1. Create a flavor for instances that require a mix of dedicated and shared CPUs:

    $ openstack flavor create --ram <size_mb> \
     --disk <size_gb> --vcpus <number_of_reserved_vcpus> \
     --property hw:cpu_policy=mixed mixed_CPUs_flavor
  2. Specify which CPUs must be dedicated or shared:

    $ openstack --os-compute-api=2.86 flavor set \
     --property hw:cpu_dedicated_mask=<CPU_MASK> \
     mixed_CPUs_flavor
    • Replace <CPU_MASK> with the CPUs that must be either dedicated or shared:

      • To specify dedicated CPUs, specify the CPU number or CPU range. For example, set the property to 2-3 to specify that CPUs 2 and 3 are dedicated and all the remaining CPUs are shared.
      • To specify shared CPUs, prepend the CPU number or CPU range with a caret (^). For example, set the property to ^0-1 to specify that CPUs 0 and 1 are shared and all the remaining CPUs are dedicated.
  3. If you are not using file-backed memory, set the hw:mem_page_size property of the flavor to enable NUMA-aware memory allocation:

     $ openstack --os-compute-api=2.86 flavor set \
     --property hw:mem_page_size=<page_size> mixed_CPUs_flavor
    • Replace <page_size> with one of the following valid values:

      • large: Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems.
      • small: (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages).
      • any: Selects the page size by using the hw_mem_page_size set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver.
      • <pagesize>: Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB.

        Note

        To set hw:mem_page_size to small or any, you must have configured the amount of memory pages to reserve on each NUMA node for processes that are not instances.
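To confirm the flavor properties and launch an instance that uses the mixed flavor, you can follow the same pattern as the other flavors. The image name is a placeholder:

$ openstack flavor show mixed_CPUs_flavor -c properties
$ openstack server create --flavor mixed_CPUs_flavor \
 --image <image> mixed_cpu_instance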

4.1.6. Grouping thread siblings on Compute nodes with simultaneous multithreading (SMT)

If a Compute node supports simultaneous multithreading (SMT), group thread siblings together in either the dedicated or the shared set. Thread siblings share some common hardware, which means that a process running on one thread sibling can impact the performance of the other thread sibling.

For example, the host identifies four logical CPU cores in a dual core CPU with SMT: 0, 1, 2, and 3. Of these four, there are two pairs of thread siblings:

  • Thread sibling 1: logical CPU cores 0 and 2
  • Thread sibling 2: logical CPU cores 1 and 3

In this scenario, do not assign logical CPU cores 0 and 1 as dedicated and 2 and 3 as shared. Instead, assign 0 and 2 as dedicated and 1 and 3 as shared.
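For this example, a host configuration fragment that keeps each sibling pair together might look like the following sketch. In practice you also exclude any cores that you reserve for host processes, as described earlier in this chapter:

[compute]
cpu_dedicated_set = 0,2
cpu_shared_set = 1,3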

The files /sys/devices/system/cpu/cpuN/topology/thread_siblings_list, where N is the logical CPU number, contain the thread pairs. You can use the following command to identify which logical CPU cores are thread siblings:

# grep -H . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -n -t ':' -k 2 -u

The following output indicates that logical CPU core 0 and logical CPU core 2 are threads on the same core:

/sys/devices/system/cpu/cpu0/topology/thread_siblings_list:0,2
/sys/devices/system/cpu/cpu1/topology/thread_siblings_list:1,3

4.2. Configuring emulator threads

For each instance, the hypervisor on the Compute node runs overhead tasks that are known as emulator threads. By default, emulator threads run on the same CPUs as the instance, which impacts the performance of the instance.

You can configure the emulator thread policy to run emulator threads on CPUs that are separate from the CPUs that the instance uses.

Note

To avoid packet loss, you must never preempt the vCPUs in an NFV deployment. Ensure that you configure emulator threads to run on different CPUs than the NFV workload.

Warning

You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes.

Prerequisites

  • You have enabled CPU pinning.
  • You have selected the OpenStackDataPlaneNodeSet custom resource (CR) that defines the nodes that you want to configure for emulator threads. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.

Procedure

  1. To configure nodes for emulator threads, create or update the ConfigMap CR named nova-extra-config.yaml and set the value of the cpu_dedicated_set parameter under [compute]. For example, the following configuration sets the dedicated CPUs on a Compute node with a 32-core CPU:

    apiVersion: v1
    kind: ConfigMap
    metadata:
       name: nova-extra-config
       namespace: openstack
    data:
       33-nova-emulator-threads.conf: |
         [compute]
         cpu_dedicated_set = 2-15,18-31
  2. To reserve physical CPU cores for the emulator threads, configure the cpu_shared_set parameter in the same configuration file. For example, the following configuration sets the shared CPUs on a Compute node with a 32-core CPU:

        [compute]
        cpu_shared_set = 0,1,16,17
    Note

    The Compute scheduler also uses the CPUs in the shared set for instances that run on shared, or floating, CPUs. For more information, see Configuring CPU pinning on Compute nodes.

  3. Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_emulator_threads_deploy.yaml on your workstation:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: openstack-emulator-threads
  4. In the compute_emulator_threads_deploy.yaml, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to designate for emulator threads.

    Warning

    If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config ConfigMap and therefore will be affected by the reconfiguration, complete the following steps:

    1. Check the services list of the node set and find the name of the OpenStackDataPlaneService CR that points to the nova service.
    2. Ensure that the value of the edpmServiceType field of the OpenStackDataPlaneService CR is set to nova.

      If the dataSources list of the OpenStackDataPlaneService CR contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new OpenStackDataPlaneService CR that points to a separate ConfigMap for these node sets.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: openstack-emulator-threads
    spec:
      nodeSets:
        - openstack-edpm
        - compute-emulator-threads
        - my-data-plane-node-set
        - ...
        - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  5. Save the compute_emulator_threads_deploy.yaml deployment file.
  6. Deploy the data plane:

    $ oc create -f compute_emulator_threads_deploy.yaml
  7. Verify that the data plane is deployed:

    $ oc get openstackdataplanenodeset
    NAME                       STATUS   MESSAGE
    compute-emulator-threads   True     Deployed
  8. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list
  9. Configure a flavor that runs the emulator threads for the instance on CPUs selected from the shared CPUs that you configured with cpu_shared_set:

    $ openstack --os-compute-api=2.86 flavor set --property hw:cpu_policy=dedicated \
    --property hw:emulator_threads_policy=share \
    dedicated_emulator_threads

    For more information about configuration options for hw:emulator_threads_policy, see Emulator threads policy in Flavor metadata.
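    To confirm that the emulator threads of an instance launched with this flavor run on the shared CPUs, you can inspect the libvirt domain on the Compute node that hosts the instance. This is a minimal sketch; <instance_name> is the libvirt domain name reported by the Compute service:

    $ sudo virsh dumpxml <instance_name> | grep emulatorpin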

4.3. Configuring CPU feature flags for instances

You can enable or disable CPU feature flags for an instance without changing the settings on the host Compute node or rebooting the Compute node. Configuring a standard set of CPU feature flags for instances helps you achieve live migration compatibility across Compute nodes. It also helps you manage the performance and security of your instances: you can disable flags that have a negative impact on the security or performance of instances with a particular CPU model, or enable flags that mitigate a security problem or alleviate a performance problem.

4.3.1. Prerequisites

  • The CPU model and feature flags must be supported by the hardware and software of the host Compute node:

    • To check the hardware your host supports, enter the following command on the Compute node:

      $ cat /proc/cpuinfo
    • To check the CPU models supported on your host, enter the following command on the Compute node:

      $ sudo virsh cpu-models <arch>

      Replace <arch> with the name of the architecture, for example, x86_64.

  • You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes for which you want to configure CPU feature flags. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide.
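To check whether the host CPU exposes a specific feature flag, for example vmx or pcid, you can filter /proc/cpuinfo on the Compute node. A minimal sketch:

$ grep -o -w -e vmx -e pcid /proc/cpuinfo | sort | uniq -c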

4.3.2. Configuring CPU feature flags for instances

Configure the Compute service to apply CPU feature flags to instances with specific vCPU models.

Procedure

  1. To apply CPU feature flags to instances with specific vCPU models, create or update the ConfigMap CR named nova-extra-config.yaml and set the value of the cpu_mode parameter under [libvirt]:

    apiVersion: v1
    kind: ConfigMap
    metadata:
       name: nova-extra-config
       namespace: openstack
    data:
       34-nova-feature-flags.conf: |
          [libvirt]
          cpu_mode = custom

    You can set cpu_mode to one of the following valid values:

    • host-model: (Default) Use the CPU model of the host Compute node. Use this CPU mode to automatically add critical CPU flags to the instance to provide mitigation from security flaws.
    • custom: Use to configure the specific CPU models each instance should use.
    • host-passthrough: Use the same CPU model and feature flags as the Compute node for the instances hosted on that Compute node.
  2. Optional: If you set cpu_mode to custom, configure the instance CPU models that you want to customize, using a comma-separated list:

     [libvirt]
     cpu_mode = custom
     cpu_models = <cpu_model1>,<cpu_model2>
    • Replace <cpu_modelx> with the name of the CPU model, for example, Haswell-noTSX-IBRS.
    • List the CPU models in order, placing the more common and less advanced CPU models first in the list and the more feature-rich CPU models last. For a list of model names, see /usr/share/libvirt/cpu_map/*.xml, or enter the following command on the host Compute node:

      $ sudo virsh cpu-models <arch>
      • Replace <arch> with the name of the architecture of the Compute node, for example, x86_64.
  3. Configure the CPU feature flags for instances with the specified CPU models:

      [libvirt]
      cpu_mode = custom
      cpu_models = Haswell-noTSX-IBRS
      cpu_model_extra_flags = -PDPE1GB, +VMX, pcid
    • Prefix each flag with "+" to enable the flag, or "-" to disable it. If a prefix is not specified, the flag is enabled. For a list of the available feature flags for a given CPU model, see /usr/share/libvirt/cpu_map/*.xml. This example disables the PDPE1GB feature flag and enables the VMX and pcid feature flags for the Haswell-noTSX-IBRS model.
  4. Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_feature_flags_deploy.yaml on your workstation:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: openstack-feature-flags
  5. In the compute_feature_flags_deploy.yaml, specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs that you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to configure for feature flags.

    Warning

    If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config ConfigMap and therefore will be affected by the reconfiguration, complete the following steps:

    1. Check the services list of the node set and find the name of the OpenStackDataPlaneService CR that points to the nova service.
    2. Ensure that the value of the edpmServiceType field of the OpenStackDataPlaneService CR is set to nova.

      If the dataSources list of the OpenStackDataPlaneService CR contains a configMapRef named nova-extra-config, then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new OpenStackDataPlaneService CR that points to a separate ConfigMap for these node sets.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: openstack-feature-flags
    spec:
      nodeSets:
        - openstack-edpm
        - compute-feature-flags
        - ...
        - <nodeSet_name>
    • Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment.
  6. Save the compute_feature_flags_deploy.yaml deployment file.
  7. Deploy the data plane:

    $ oc create -f compute_feature_flags_deploy.yaml
  8. Verify that the data plane is deployed:

    $ oc get openstackdataplanenodeset
    NAME                    STATUS   MESSAGE
    compute-feature-flags   True     Deployed
  9. Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane:

    $ oc rsh -n openstack openstackclient
    $ openstack hypervisor list
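    After you launch an instance on one of the reconfigured Compute nodes, you can confirm that the custom CPU model and the extra feature flags were applied by inspecting the libvirt domain on the host. This is a minimal sketch; <instance_name> is the libvirt domain name reported by the Compute service:

    $ sudo virsh dumpxml <instance_name> | grep -A 5 '<cpu '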