
Chapter 9. Tuning a Red Hat OpenStack Platform environment


9.1. Pinning emulator threads

Emulator threads handle interrupt requests and non-blocking processes for virtual machine hardware emulation. These threads float across the vCPUs that the guest uses for processing. If threads used for the poll mode driver (PMD) or real-time processing run on these vCPUs, you can experience packet loss or missed deadlines.

You can separate emulator threads from VM processing tasks by pinning the threads to their own vCPUs, increasing performance as a result.
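
To check which host CPUs the emulator threads of a running instance are currently allowed to use, you can query libvirt on the Compute host. This is an optional, illustrative check; the instance name below is an example:

    [compute-0]$ sudo virsh emulatorpin instance-00000001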

9.1.1. Configuring CPUs to host emulator threads

To improve performance, reserve a subset of pCPUs for hosting emulator threads. Red Hat recommends using pCPUs identified in the OvsDpdkCoreList parameter.

Procedure
  1. Deploy an overcloud with NovaComputeCpuSharedSet defined for a given role. The value of NovaComputeCpuSharedSet applies to the cpu_shared_set parameter in the nova.conf file for hosts within that role.

    parameter_defaults:
        ComputeOvsDpdkParameters:
            OvsDpdkCoreList: "0-1,16-17"
            NovaComputeCpuSharedSet: "0-1,16-17"
  2. Create a flavor to build instances with emulator threads separated into a shared pool.

    openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <vcpus> <flavor>
  3. Add the hw:emulator_threads_policy extra specification, and set the value to share. Instances created with this flavor use the vCPUs defined in the cpu_shared_set parameter in the nova.conf file.

    openstack flavor set <flavor> --property hw:emulator_threads_policy=share
Note

You must set the cpu_shared_set parameter in the nova.conf file manually or with Heat to enable the share policy for this extra specification.
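
For example, with the values shown in step 1, the resulting nova.conf entry on the Compute host resembles the following sketch; verify the exact section and value against your deployed configuration:

    [compute]
    cpu_shared_set = 0-1,16-17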

9.1.2. Verifying the emulator thread pinning

Procedure
  1. Identify the host for a given instance and the name of the instance.

    openstack server show <instance_id>
  2. Use SSH to log in to the identified host as the heat-admin user.

    ssh heat-admin@compute-1
    [compute-1]$ sudo virsh dumpxml instance-00001 | grep 'emulatorpin cpuset'
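
The output resembles the following sketch; the cpuset range matches the CPUs defined in NovaComputeCpuSharedSet:

    <emulatorpin cpuset='0-1,16-17'/>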

9.2. Enabling RT-KVM for NFV Workloads

This section describes the steps to install and configure Red Hat Enterprise Linux 7.5 Real Time KVM (RT-KVM) for the Red Hat OpenStack Platform. Red Hat OpenStack Platform provides real-time capabilities with a new Real-time Compute node role that provisions Red Hat Enterprise Linux for Real-Time, as well as the additional RT-KVM kernel module, and automatic configuration of the Compute node.

9.2.1. Planning for your RT-KVM Compute nodes

You must use Red Hat certified servers for your RT-KVM Compute nodes. See Red Hat Enterprise Linux for Real Time 7 certified servers for details.

See Registering and updating your undercloud for details on how to enable the rhel-7-server-nfv-rpms repository for RT-KVM, and how to ensure that your system is up to date.

Note

You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU before you can access this repository.
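
For example, assuming the undercloud is already registered and the subscription is attached, you can enable the repository with subscription-manager:

    [stack@undercloud-0 ~]$ sudo subscription-manager repos --enable=rhel-7-server-nfv-rpms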

Building the real-time image

Use the following steps to build the overcloud image for Real-time Compute nodes.

  1. To initialize the stack user to use the director command line tools, run the following command:

    [stack@undercloud-0 ~]$ source ~/stackrc
  2. Install the libguestfs-tools package on the undercloud to get the virt-customize tool:

    (undercloud) [stack@undercloud-0 ~]$ sudo yum install libguestfs-tools
  3. Extract the images:

    (undercloud) [stack@undercloud-0 ~]$ tar -xf /usr/share/rhosp-director-images/overcloud-full.tar
    (undercloud) [stack@undercloud-0 ~]$ tar -xf /usr/share/rhosp-director-images/ironic-python-agent.tar
  4. Copy the default image:

    (undercloud) [stack@undercloud-0 ~]$ cp overcloud-full.qcow2 overcloud-realtime-compute.qcow2
  5. Register your image to enable Red Hat repositories relevant to your customizations. Replace [username] and [password] with valid credentials in the following example.

    virt-customize -a overcloud-realtime-compute.qcow2 --run-command \
    'subscription-manager register --username=[username] --password=[password]'
    Note

    Remove credentials from the history file whenever you use them on the command line. You can delete individual lines from the history by using the history -d command followed by the line number.

  6. Find a list of pool IDs from your account’s subscriptions, and attach the appropriate pool ID to your image.

    sudo subscription-manager list --all --available | less
    ...
    virt-customize -a overcloud-realtime-compute.qcow2 --run-command \
    'subscription-manager attach --pool [pool-ID]'
  7. Add repositories necessary for Red Hat OpenStack Platform with NFV.

    virt-customize -a overcloud-realtime-compute.qcow2 --run-command \
    'subscription-manager repos --enable=rhel-7-server-nfv-rpms \
    --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-rh-common-rpms \
    --enable=rhel-7-server-extras-rpms \
    --enable=rhel-7-server-openstack-14-rpms'
  8. Create a script to configure real-time capabilities on the image.

    (undercloud) [stack@undercloud-0 ~]$ cat <<'EOF' > rt.sh
      #!/bin/bash
    
      set -eux
    
      yum -v -y --setopt=protected_packages= erase kernel.$(uname -m)
      yum -v -y install kernel-rt kernel-rt-kvm tuned-profiles-nfv-host
      EOF
  9. Run the script to configure the RT image:

    (undercloud) [stack@undercloud-0 ~]$ virt-customize -a overcloud-realtime-compute.qcow2 -v --run rt.sh 2>&1 | tee virt-customize.log
    Note

    You may see the following error in the rt.sh script output: grubby fatal error: unable to find a suitable template. You can safely ignore this error.

  10. Check that the packages listed in the rt.sh script installed correctly by examining the virt-customize.log file that the previous command created.

    (undercloud) [stack@undercloud-0 ~]$ cat virt-customize.log | grep Verifying
    
      Verifying  : kernel-3.10.0-957.el7.x86_64                                 1/1
      Verifying  : 10:qemu-kvm-tools-rhev-2.12.0-18.el7_6.1.x86_64              1/8
      Verifying  : tuned-profiles-realtime-2.10.0-6.el7_6.3.noarch              2/8
      Verifying  : linux-firmware-20180911-69.git85c5d90.el7.noarch             3/8
      Verifying  : tuned-profiles-nfv-host-2.10.0-6.el7_6.3.noarch              4/8
      Verifying  : kernel-rt-kvm-3.10.0-957.10.1.rt56.921.el7.x86_64            5/8
      Verifying  : tuna-0.13-6.el7.noarch                                       6/8
      Verifying  : kernel-rt-3.10.0-957.10.1.rt56.921.el7.x86_64                7/8
      Verifying  : rt-setup-2.0-6.el7.x86_64                                    8/8
  11. Relabel SELinux:

    (undercloud) [stack@undercloud-0 ~]$ virt-customize -a overcloud-realtime-compute.qcow2 --selinux-relabel
  12. Extract vmlinuz and initrd:

    Note

    The software version in the vmlinuz and initramfs filenames varies with the kernel version. Use the relevant software version in the filename, for example image/boot/vmlinuz-3.10.0-862.rt56.804.el7.x86_64, or use the wildcard symbol * instead.

    (undercloud) [stack@undercloud-0 ~]$ mkdir image
    (undercloud) [stack@undercloud-0 ~]$ guestmount -a overcloud-realtime-compute.qcow2 -i --ro image
    (undercloud) [stack@undercloud-0 ~]$ cp image/boot/vmlinuz-*.x86_64 ./overcloud-realtime-compute.vmlinuz
    (undercloud) [stack@undercloud-0 ~]$ cp image/boot/initramfs-*.x86_64.img ./overcloud-realtime-compute.initrd
    (undercloud) [stack@undercloud-0 ~]$ guestunmount image
  13. Upload the image:

    (undercloud) [stack@undercloud-0 ~]$ openstack overcloud image upload --update-existing --os-image-name overcloud-realtime-compute.qcow2

You now have a real-time image you can use with the ComputeOvsDpdkRT composable role on your selected Compute nodes.
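
As an optional check, you can confirm that the image is available on the undercloud; the image name follows from the upload step above:

    (undercloud) [stack@undercloud-0 ~]$ openstack image list | grep overcloud-realtime-compute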

Modifying BIOS settings on RT-KVM Compute nodes

To reduce latency on your RT-KVM Compute nodes, you must modify the BIOS settings. Disable all options for the following in your Compute node BIOS settings:

  • Power Management
  • Hyper-Threading
  • CPU sleep states
  • Logical processors

See Setting BIOS parameters for descriptions of these settings and the impact of disabling them. See your hardware manufacturer documentation for complete details on how to change BIOS settings.

9.2.2. Configuring OVS-DPDK with RT-KVM

Note

You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. See Section 8.1, “Deriving DPDK parameters with workflows” for details.

9.2.2.1. Generating the ComputeOvsDpdk composable role

You use the ComputeOvsDpdkRT role to specify Compute nodes that use the real-time compute image.

Generate roles_data.yaml for the ComputeOvsDpdkRT role.

(undercloud) [stack@undercloud-0 ~]$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeOvsDpdkRT

9.2.2.2. Configuring the OVS-DPDK parameters

Important

Attempting to deploy Data Plane Development Kit (DPDK) without appropriate values can cause the deployment to fail or lead to unstable deployments. You must determine the best values for the OVS-DPDK parameters set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. See Section 8.1, “Deriving DPDK parameters with workflows” for details.

  1. Add the NIC configuration for the OVS-DPDK role that you use under resource_registry:

    resource_registry:
      # Specify the relative/absolute path to the config files that you want to use to override the default.
      OS::TripleO::ComputeOvsDpdkRT::Net::SoftwareConfig: nic-configs/compute-ovs-dpdk.yaml
      OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  2. Under parameter_defaults, set the OVS-DPDK and RT-KVM parameters:

      # DPDK compute node.
      ComputeOvsDpdkRTParameters:
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=1-7,17-23,9-15,25-31"
        TunedProfileName: "realtime-virtual-host"
        IsolCpusList: "1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31"
        NovaVcpuPinSet: ['2,3,4,5,6,7,18,19,20,21,22,23,10,11,12,13,14,15,26,27,28,29,30,31']
        NovaReservedHostMemory: 4096
        OvsDpdkSocketMemory: "1024,1024"
        OvsDpdkMemoryChannels: "4"
        OvsDpdkCoreList: "0,16,8,24"
        OvsPmdCoreList: "1,17,9,25"
        VhostuserSocketGroup: "hugetlbfs"
      ComputeOvsDpdkRTImage: "overcloud-realtime-compute"

9.2.2.3. Deploying the overcloud

Deploy the overcloud for ML2-OVS:

(undercloud) [stack@undercloud-0 ~]$ openstack overcloud deploy \
--templates \
-r /home/stack/ospd-14-vlan-dpdk-ctlplane-bonding-rt/roles_data.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-ovs-dpdk.yaml \
-e /home/stack/ospd-14-vxlan-dpdk-data-bonding-rt-hybrid/containers-prepare-parameter.yaml \
-e /home/stack/ospd-14-vxlan-dpdk-data-bonding-rt-hybrid/network-environment.yaml
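
After the deployment completes, you can optionally confirm the real-time configuration on a ComputeOvsDpdkRT node; the host name below is an example:

    (undercloud) [stack@undercloud-0 ~]$ ssh heat-admin@computeovsdpdkrt-0
    [computeovsdpdkrt-0]$ uname -r              # expect a kernel-rt version, for example 3.10.0-*.rt56.*
    [computeovsdpdkrt-0]$ sudo tuned-adm active # expect: Current active profile: realtime-virtual-host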

9.2.3. Launching an RT-KVM Instance

To launch an RT-KVM instance on a real-time enabled Compute node:

  1. Create an RT-KVM flavor on the overcloud:

    # openstack flavor create --ram 4096 --disk 20 --vcpus 4 <flavor-name>
    # openstack flavor set --property hw:cpu_policy=dedicated <flavor-name>
    # openstack flavor set --property hw:cpu_realtime=yes <flavor-name>
    # openstack flavor set --property hw:mem_page_size=1GB <flavor-name>
    # openstack flavor set --property hw:cpu_realtime_mask="^0-1" <flavor-name>
    # openstack flavor set --property hw:emulator_threads_policy=isolate <flavor-name>
  2. Launch an RT-KVM instance:

    # openstack server create  --image <rhel> --flavor <flavor-name> --nic net-id=<dpdk-net> test-rt
  3. Optionally, verify that the instance uses the assigned emulator threads:

    # virsh dumpxml <instance-id> | grep vcpu -A1
    <vcpu placement='static'>4</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='1'/>
      <vcpupin vcpu='1' cpuset='3'/>
      <vcpupin vcpu='2' cpuset='5'/>
      <vcpupin vcpu='3' cpuset='7'/>
      <emulatorpin cpuset='0-1'/>
      <vcpusched vcpus='2-3' scheduler='fifo'
      priority='1'/>
    </cputune>

9.3. Trusted Virtual Functions

You can configure physical functions (PFs) to trust virtual functions (VFs) so that VFs can perform some privileged actions. For example, you can use this configuration to allow VFs to enable promiscuous mode or to change a hardware address.
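
For example, with trust in place, changing the hardware address of the VF interface from inside a guest succeeds instead of being rejected by the PF driver. The interface name and MAC address below are placeholders:

    # ip link set dev eth1 down
    # ip link set dev eth1 address fa:16:3e:aa:bb:cc
    # ip link set dev eth1 up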

9.3.1. Providing trust

Prerequisites
  • An operational installation of Red Hat OpenStack Platform director
Procedure

Complete the following steps to deploy the overcloud with the parameters necessary to enable physical function trust of virtual functions:

  1. Add the NeutronPhysicalDevMappings parameter under the parameter_defaults section to make the link between the logical network name and the physical interface.

    parameter_defaults:
      NeutronPhysicalDevMappings: "sriov2:p5p2"
  2. Add the new property "trusted" to the existing parameters related to SR-IOV.

    parameter_defaults:
      NeutronPhysicalDevMappings: "sriov2:p5p2"
      NeutronSriovNumVFs: ["p5p2:8"]
      NovaPCIPassthrough:
        - devname: "p5p2"
          physical_network: "sriov2"
          trusted: "true"
    Note

    You must include quotation marks around the value "true".

    Important

    Complete the following step only in trusted environments. This step allows non-administrative accounts to bind trusted ports.

  3. Modify permissions to allow users to create and update port bindings.

    parameter_defaults:
      NeutronApiPolicies: {
        operator_create_binding_profile: { key: 'create_port:binding:profile', value: 'rule:admin_or_network_owner'},
        operator_get_binding_profile: { key: 'get_port:binding:profile', value: 'rule:admin_or_network_owner'},
        operator_update_binding_profile: { key: 'update_port:binding:profile', value: 'rule:admin_or_network_owner'}
      }

9.3.2. Utilizing trusted virtual functions

Execute the following on a fully deployed overcloud to utilize trusted virtual functions.

Creating a trusted VF network
  1. Create a network of type vlan.

    openstack network create trusted_vf_network  --provider-network-type vlan \
     --provider-segment 111 --provider-physical-network sriov2 \
     --external --disable-port-security
  2. Create a subnet.

    openstack subnet create --network trusted_vf_network \
      --ip-version 4 --subnet-range 192.168.111.0/24 --no-dhcp \
     subnet-trusted_vf_network
  3. Create a port. Set the vnic-type option to direct and set trusted=true in the binding-profile option.

    openstack port create --network trusted_vf_network \
    --vnic-type direct --binding-profile trusted=true \
    trusted_vf_network_port_trusted
  4. Create an instance binding it to the previously created trusted port.

    openstack server create --image rhel --flavor dpdk  --network internal --port trusted_vf_network_port_trusted --config-drive True --wait rhel-dpdk-sriov_trusted

Verifying the trusted virtual function configuration on the hypervisor

On the compute node that hosts the newly created instance, run the following command:

# ip link
7: p5p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether b4:96:91:1c:40:fa brd ff:ff:ff:ff:ff:ff
    vf 6 MAC fa:16:3e:b8:91:c2, vlan 111, spoof checking off, link-state auto, trust on, query_rss off
    vf 7 MAC fa:16:3e:84:cf:c8, vlan 111, spoof checking off, link-state auto, trust off, query_rss off

In the output of the ip link command, verify that the trust status of the virtual function is trust on. The example output shows an environment with two VFs; note that vf 6 shows trust on, while vf 7 shows trust off.

9.4. Configuring RX/TX queue size

You can experience packet loss at high packet rates above 3.5 million packets per second (mpps) for many reasons, such as:

  • a network interrupt
  • an SMI (system management interrupt)
  • packet processing latency in the Virtual Network Function

To prevent packet loss, increase the queue size from the default of 512 to a maximum of 1024.

Procedure

  • To increase the RX and TX queue size, add the following lines to the parameter_defaults: section of a relevant director role. Here is an example with the ComputeOvsDpdk role:

    parameter_defaults:
      ComputeOvsDpdkParameters:
        NovaLibvirtRxQueueSize: 1024
        NovaLibvirtTxQueueSize: 1024

Testing

  • You can observe the values for RX queue size and TX queue size in the nova.conf file:

    [libvirt]
    rx_queue_size=1024
    tx_queue_size=1024
  • You can check the values for RX queue size and TX queue size in the VM instance XML file generated by libvirt on the compute host.

    <devices>
       <interface type='vhostuser'>
         <mac address='56:48:4f:4d:5e:6f'/>
         <source type='unix' path='/tmp/vhost-user1' mode='server'/>
         <model type='virtio'/>
         <driver name='vhost' rx_queue_size='1024'   tx_queue_size='1024' />
         <address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
       </interface>
    </devices>

    To verify the values for RX queue size and TX queue size, use the following command on a KVM host:

    $ virsh dumpxml <vm name> | grep queue_size
  • You can check for improved performance, such as 3.8 mpps/core at 0 frame loss.

9.5. Configuring a NUMA-aware vSwitch

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

Before you implement a NUMA-aware vSwitch, examine the following components of your hardware configuration:

  • The number of physical networks.
  • The placement of PCI cards.
  • The physical architecture of the servers.

Memory-mapped I/O (MMIO) devices, such as PCIe NICs, are associated with specific NUMA nodes. When a VM and the NIC are on different NUMA nodes, there is a significant decrease in performance. To increase performance, align PCIe NIC placement and instance processing on the same NUMA node.

Use this feature to ensure that instances that share a physical network are located on the same NUMA node. To optimize your data center hardware, you can spread the load across VMs by using multiple networks, different network types, or bonding.

Important

To architect NUMA-node load sharing and network access correctly, you must understand the mapping of the PCIe slot and the NUMA node. For detailed information on your specific hardware, refer to your vendor’s documentation.

To prevent a cross-NUMA configuration, place the VM on the correct NUMA node by providing the location of the NIC to Nova.
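
To find the NUMA node that a PCIe NIC is attached to on a Compute host, you can read the numa_node attribute of the device from sysfs; the interface name below is an example:

    [compute-0]$ cat /sys/class/net/p5p2/device/numa_node
    # prints the NUMA node number, for example 0 or 1; -1 means that no affinity is reported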

Prerequisites

  • You have enabled the NUMATopologyFilter filter.

Procedure

  • Set a new NeutronPhysnetNUMANodesMapping parameter to map the physical network to the NUMA node that you associate with the physical network.
  • If you use tunnels, such as VxLAN or GRE, you must also set the NeutronTunnelNUMANodes parameter.

    parameter_defaults:
      NeutronPhysnetNUMANodesMapping: {<physnet_name>: [<NUMA_NODE>]}
      NeutronTunnelNUMANodes: <NUMA_NODE>,<NUMA_NODE>

Here is an example with two physical networks, where tunneled traffic is affinitized to NUMA node 0:

  • one tenant network associated with NUMA node 1
  • one management network without any specific affinity (spanning NUMA nodes 0 and 1)

    parameter_defaults:
      NeutronBridgeMappings:
        - tenant:br-link0
      NeutronPhysnetNUMANodesMapping: {tenant: [1], mgmt: [0,1]}
      NeutronTunnelNUMANodes: 0

Testing

  • Observe the configuration in the file /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf

    [neutron_physnet_tenant]
    numa_nodes=1
    [neutron_tunnel]
    numa_nodes=1
  • Confirm the new configuration with the lscpu command:

    $ lscpu
  • Launch a VM with the NIC attached to the appropriate network.

9.6. Configuring Quality of Service (QoS) in an NFVi environment

For details on configuring QoS, see Configure Quality-of-Service (QoS). Support is limited to the QoS rule type bandwidth-limit on SR-IOV and OVS-DPDK egress interfaces.
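
For illustration only, a minimal bandwidth-limit policy applied to an existing port might look like the following sketch; the policy name, rate values, and port are placeholders:

    $ openstack network qos policy create bw-limiter
    $ openstack network qos rule create --type bandwidth-limit --max-kbps 3000 \
      --max-burst-kbits 300 --egress bw-limiter
    $ openstack port set --qos-policy bw-limiter <port>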
