Chapter 9. Tuning a Red Hat OpenStack Platform environment
9.1. Pinning emulator threads
Emulator threads handle interrupt requests and non-blocking processes for virtual machine hardware emulation. These threads float across the CPUs that the guest uses for processing. If threads used for the poll mode driver (PMD) or real-time processing run on these guest CPUs, you can experience packet loss or missed deadlines.
You can separate emulator threads from VM processing tasks by pinning the threads to their own guest CPUs, increasing performance as a result.
9.1.1. Configuring CPUs to host emulator threads
To improve performance, reserve a subset of host CPUs identified in the OvsDpdkCoreList parameter for hosting emulator threads.
Procedure
Deploy an overcloud with NovaComputeCpuSharedSet defined for a given role. The value of NovaComputeCpuSharedSet applies to the cpu_shared_set parameter in the nova.conf file for hosts within that role.

parameter_defaults:
  ComputeOvsDpdkParameters:
    OvsDpdkCoreList: "0-1,16-17"
    NovaComputeCpuSharedSet: "0-1,16-17"
    NovaComputeCpuDedicatedSet: "2-15,18-31"

Create a flavor to build instances with emulator threads separated into a shared pool.
openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <vcpus> <flavor>

Add the hw:emulator_threads_policy extra specification, and set the value to share. Instances created with this flavor use the instance CPUs defined in the cpu_shared_set parameter in the nova.conf file.

openstack flavor set <flavor> --property hw:emulator_threads_policy=share
You must set the cpu_shared_set parameter in the nova.conf file to enable the share policy for this extra specification. Preferably, set this parameter through heat, because manual edits to nova.conf might not persist across redeployments.
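For illustration only: assuming the example role parameters shown earlier, the rendered nova.conf on a Compute node in that role would contain entries similar to the following (in current Nova releases these options live in the [compute] section):

[compute]
cpu_shared_set = 0-1,16-17
cpu_dedicated_set = 2-15,18-31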
9.1.2. Verifying emulator thread pinning
Procedure
Identify the host and name for a given instance.
openstack server show <instance_id>

Use SSH to log in to the identified host as heat-admin.

ssh heat-admin@compute-1
[compute-1]$ sudo virsh dumpxml instance-00001 | grep 'emulatorpin cpuset'
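A hedged example of the expected output, assuming the NovaComputeCpuSharedSet value from the earlier example; the emulator threads are pinned to the shared CPU set:

<emulatorpin cpuset='0-1,16-17'/>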
9.2. Enabling RT-KVM for NFV Workloads
To facilitate installing and configuring Red Hat Enterprise Linux 8.0 Real Time KVM (RT-KVM), Red Hat OpenStack Platform provides the following features:
- A real-time Compute node role that provisions Red Hat Enterprise Linux for real-time.
- The additional RT-KVM kernel module.
- Automatic configuration of the Compute node.
9.2.1. Planning for your RT-KVM Compute nodes
You must use Red Hat certified servers for your RT-KVM Compute nodes. For more information, see: Red Hat Enterprise Linux for Real Time 7 certified servers.
For details on how to enable the rhel-8-server-nfv-rpms repository for RT-KVM, and to ensure your system is up to date, see: Registering and updating your undercloud.
You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU before you can access this repository.
Building the real-time image
Install the libguestfs-tools package on the undercloud to get the virt-customize tool:
sudo dnf install libguestfs-tools
Important: If you install the libguestfs-tools package on the undercloud, disable iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud:

sudo systemctl disable --now iscsid.socket

Extract the images:
tar -xf /usr/share/rhosp-director-images/overcloud-full.tar
tar -xf /usr/share/rhosp-director-images/ironic-python-agent.tar

Copy the default image:
cp overcloud-full.qcow2 overcloud-realtime-compute.qcow2
Register your image to enable Red Hat repositories relevant to your customizations. Replace [username] and [password] with valid credentials in the following example.

virt-customize -a overcloud-realtime-compute.qcow2 --run-command \
'subscription-manager register --username=[username] --password=[password]' \
--run-command 'subscription-manager release --set 8.1'

Note: For security, you can remove credentials from the history file if they are used on the command prompt. You can delete individual lines in history using the history -d command followed by the line number.

Find a list of pool IDs from your account’s subscriptions, and attach the appropriate pool ID to your image.
sudo subscription-manager list --all --available | less
...
virt-customize -a overcloud-realtime-compute.qcow2 --run-command \
'subscription-manager attach --pool [pool-ID]'

Add the repositories necessary for Red Hat OpenStack Platform with NFV.
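The repository-enablement command is not reproduced in this extract. The following is a sketch of the kind of virt-customize invocation to use; the repository IDs shown assume a RHEL 8 / RHOSP 16 subscription and are examples only — substitute the repositories that your entitlements and release require:

virt-customize -a overcloud-realtime-compute.qcow2 --run-command \
'subscription-manager repos \
--enable=rhel-8-for-x86_64-baseos-eus-rpms \
--enable=rhel-8-for-x86_64-appstream-eus-rpms \
--enable=rhel-8-for-x86_64-highavailability-eus-rpms \
--enable=ansible-2.9-for-rhel-8-x86_64-rpms \
--enable=openstack-16.1-for-rhel-8-x86_64-rpms \
--enable=rhel-8-for-x86_64-nfv-rpms \
--enable=fast-datapath-for-rhel-8-x86_64-rpms'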
Create a script to configure real-time capabilities on the image.
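The script body is not included in this extract. A minimal sketch of an rt.sh that replaces the standard kernel with the real-time kernel might look like the following; the package names assume that the RHEL 8 NFV and real-time repositories are enabled in the image:

#!/bin/bash
set -eux

# Remove the standard kernel so that the real-time kernel becomes the boot default.
dnf -v -y --setopt=protected_packages= remove kernel.$(uname -m)

# Install the real-time kernel, the RT KVM module, and the NFV tuned profiles.
dnf -v -y install kernel-rt kernel-rt-kvm tuned-profiles-nfv-host

# Point the bootloader at the real-time kernel.
grubby --set-default /boot/vmlinuz*rt*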
Run the script to configure the real-time image:
virt-customize -a overcloud-realtime-compute.qcow2 -v --run rt.sh 2>&1 | tee virt-customize.log

Note: If you see the following line in the rt.sh script output, "grubby fatal error: unable to find a suitable template", you can ignore this error.

Examine the virt-customize.log file that resulted from the previous command to check that the packages installed correctly using the rt.sh script.
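For example, you can spot-check that the real-time kernel packages were installed (a hedged example; adjust the pattern to the package versions in your log):

grep -i 'kernel-rt' virt-customize.log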
Relabel SELinux:

virt-customize -a overcloud-realtime-compute.qcow2 --selinux-relabel

Extract vmlinuz and initrd:
mkdir image
guestmount -a overcloud-realtime-compute.qcow2 -i --ro image
cp image/boot/vmlinuz-3.10.0-862.rt56.804.el7.x86_64 ./overcloud-realtime-compute.vmlinuz
cp image/boot/initramfs-3.10.0-862.rt56.804.el7.x86_64.img ./overcloud-realtime-compute.initrd
guestunmount image

Note: The software versions in the vmlinuz and initramfs filenames vary with the kernel version.

Upload the image:
openstack overcloud image upload --update-existing --os-image-name overcloud-realtime-compute.qcow2
You now have a real-time image you can use with the ComputeOvsDpdkRT composable role on your selected Compute nodes.
Modifying BIOS settings on RT-KVM Compute nodes
To reduce latency on your RT-KVM Compute nodes, disable all options for the following parameters in your Compute node BIOS settings:
- Power Management
- Hyper-Threading
- CPU sleep states
- Logical processors
For descriptions of these settings and the impact of disabling them, see: Setting BIOS parameters. See your hardware manufacturer documentation for complete details on how to change BIOS settings.
9.2.2. Configuring OVS-DPDK with RT-KVM
You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. For more details, see Section 8.1, “Deriving DPDK parameters with workflows”.
9.2.2.1. Generating the ComputeOvsDpdk composable role
Use the ComputeOvsDpdkRT role to specify Compute nodes for the real-time compute image.
Generate roles_data.yaml for the ComputeOvsDpdkRT role.
(undercloud) [stack@undercloud-0 ~]$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeOvsDpdkRT
9.2.2.2. Configuring the OVS-DPDK parameters
Determine the best values for the OVS-DPDK parameters in the network-environment.yaml file to optimize your deployment. For more information, see Section 8.1, “Deriving DPDK parameters with workflows”.
Add the NIC configuration for the OVS-DPDK role you use under resource_registry:

resource_registry:
  # Specify the relative/absolute path to the config files you want to use to override the default.
  OS::TripleO::ComputeOvsDpdkRT::Net::SoftwareConfig: nic-configs/compute-ovs-dpdk.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml

Under parameter_defaults, set the OVS-DPDK and RT-KVM parameters:
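The parameter listing is not reproduced in this extract. The following sketch shows typical parameters for the ComputeOvsDpdkRT role; every value (CPU lists, hugepage counts, socket memory) is illustrative only and must be derived for your own hardware, for example with the workflows in Section 8.1:

parameter_defaults:
  ComputeOvsDpdkRTParameters:
    KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=2-19,22-39"
    TunedProfileName: "realtime-virtual-host"
    IsolCpusList: "2-19,22-39"
    NovaComputeCpuDedicatedSet: "4-19,24-39"
    NovaReservedHostMemory: 4096
    OvsDpdkSocketMemory: "3072,1024"
    OvsDpdkMemoryChannels: "4"
    OvsPmdCoreList: "1,21,2,22,3,23"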
9.2.2.3. Deploying the overcloud
Deploy the overcloud for ML2-OVS:
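The deployment command is not included in this extract; a representative invocation looks like the following, where the roles file, the OVS-DPDK service environment file, and network-environment.yaml are placeholders for the files in your own environment:

(undercloud) [stack@undercloud-0 ~]$ openstack overcloud deploy --templates \
  -r /home/stack/roles_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dpdk.yaml \
  -e /home/stack/network-environment.yaml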
9.2.3. Launching an RT-KVM instance
Perform the following steps to launch an RT-KVM instance on a real-time enabled Compute node:
Create an RT-KVM flavor on the overcloud:
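The flavor commands are not reproduced in this extract. A sketch that creates an example flavor named r1.small with dedicated CPU pinning and 1 GB huge pages (the ID and sizes are illustrative):

# openstack flavor create r1.small --id 99 --ram 4096 --disk 20 --vcpus 4
# openstack flavor set --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB 99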
Launch an RT-KVM instance:
openstack server create --image <rhel> --flavor r1.small --nic net-id=<dpdk-net> test-rt

To verify that the instance uses the assigned emulator threads, run the following command:
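The verification command is not included in this extract; one hedged way to inspect the pinning from the Compute node that hosts the instance is to dump the libvirt domain XML and look at the cputune elements (the domain name is a placeholder):

[compute-1]$ sudo virsh dumpxml instance-00001 | grep -E 'vcpupin|emulatorpin'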
9.3. Trusted Virtual Functions
You can configure trust between physical functions (PFs) and virtual functions (VFs), so that VFs can perform privileged actions, such as enabling promiscuous mode, or modifying a hardware address.
9.3.1. Configuring trust between virtual and physical functions
Prerequisites
- An operational installation of Red Hat OpenStack Platform including director
Procedure
Complete the following steps to configure and deploy the overcloud with trust between physical and virtual functions:
Add the NeutronPhysicalDevMappings parameter in the parameter_defaults section to link the logical network name to the physical interface.

parameter_defaults:
  NeutronPhysicalDevMappings:
    - sriov2:p5p2

Add the new property, trusted, to the SR-IOV parameters, as shown in the sketch after this note.

Note: You must include double quotation marks around the value "true".
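The SR-IOV parameter block is not reproduced in this extract. The following sketch assumes the ComputeSriov role and the p5p2/sriov2 mapping from the previous step; adapt the role and device names to your deployment:

parameter_defaults:
  ComputeSriovParameters:
    NovaPCIPassthrough:
      - devname: "p5p2"
        physical_network: "sriov2"
        trusted: "true"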
Important: Complete the following step in trusted environments only, because it allows trusted port binding by non-administrative accounts.
Modify permissions to allow users to create and update port bindings.
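The policy override is not included in this extract. One hedged approach uses the NeutronApiPolicies parameter to relax the default admin-only rules on the port binding profile; the rule value shown is an example, so choose one that matches your security requirements:

parameter_defaults:
  NeutronApiPolicies: {
    operator_create_binding_profile: { key: 'create_port:binding:profile', value: 'rule:admin_or_network_owner' },
    operator_get_binding_profile: { key: 'get_port:binding:profile', value: 'rule:admin_or_network_owner' },
    operator_update_binding_profile: { key: 'update_port:binding:profile', value: 'rule:admin_or_network_owner' }
  }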
9.3.2. Utilizing trusted VF networks
Create a network of type vlan.

openstack network create trusted_vf_network --provider-network-type vlan \
 --provider-segment 111 --provider-physical-network sriov2 \
 --external --disable-port-security

Create a subnet.
openstack subnet create --network trusted_vf_network \
 --ip-version 4 --subnet-range 192.168.111.0/24 --no-dhcp \
 subnet-trusted_vf_network

Create a port. Set the vnic-type option to direct, and set trusted=true in the binding-profile option.

openstack port create --network trusted_vf_network \
 --vnic-type direct --binding-profile trusted=true \
 trusted_vf_network_port_trusted

Create an instance, and bind it to the previously created trusted port.
openstack server create --image rhel --flavor dpdk --network internal --port trusted_vf_network_port_trusted --config-drive True --wait rhel-dpdk-sriov_trusted
Verify the trusted VF configuration on the hypervisor
- On the Compute node on which you created the instance, run the following command:
# ip link
7: p5p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether b4:96:91:1c:40:fa brd ff:ff:ff:ff:ff:ff
vf 6 MAC fa:16:3e:b8:91:c2, vlan 111, spoof checking off, link-state auto, trust on, query_rss off
vf 7 MAC fa:16:3e:84:cf:c8, vlan 111, spoof checking off, link-state auto, trust off, query_rss off
- Verify that the trust status of the VF is trust on. The example output contains details of an environment that contains two ports. Note that vf 6 contains the text trust on.
9.4. Configuring RX/TX queue size
You can experience packet loss at high packet rates above 3.5 million packets per second (mpps) for many reasons, such as:
- a network interrupt
- an SMI (system management interrupt)
- packet processing latency in the Virtual Network Function
To prevent packet loss, increase the queue size from the default of 512 to a maximum of 1024.
Prerequisites
- To configure RX, ensure that you have libvirt v2.3 or later and QEMU v2.7 or later.
- To configure TX, ensure that you have libvirt v3.7 or later and QEMU v2.10 or later.
Procedure
To increase the RX and TX queue size, include the following lines in the parameter_defaults: section of a relevant director role. Here is an example with the ComputeOvsDpdk role:

parameter_defaults:
  ComputeOvsDpdkParameters:
    NovaLibvirtRxQueueSize: 1024
    NovaLibvirtTxQueueSize: 1024
Testing
You can observe the values for RX queue size and TX queue size in the nova.conf file:
[libvirt]
rx_queue_size=1024
tx_queue_size=1024

You can check the values for RX queue size and TX queue size in the VM instance XML file generated by libvirt on the compute host.
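The XML excerpt is not reproduced in this extract; for a vhost-user interface, the queue sizes appear on the driver element of the interface definition, similar to the following:

<interface type='vhostuser'>
  <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024'/>
</interface>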
To verify the values for RX queue size and TX queue size, use the following command on a KVM host:
virsh dumpxml <vm name> | grep queue_size
- You can check for improved performance, such as 3.8 mpps/core at 0 frame loss.
9.5. Configuring a NUMA-aware vSwitch
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
Before you implement a NUMA-aware vSwitch, examine the following components of your hardware configuration:
- The number of physical networks.
- The placement of PCI cards.
- The physical architecture of the servers.
Memory-mapped I/O (MMIO) devices, such as PCIe NICs, are associated with specific NUMA nodes. When a VM and the NIC are on different NUMA nodes, there is a significant decrease in performance. To increase performance, align PCIe NIC placement and instance processing on the same NUMA node.
Use this feature to ensure that instances that share a physical network are located on the same NUMA node. To optimize datacenter hardware, you can distribute the load across VMs by using multiple networks, different network types, or bonding.
To architect NUMA-node load sharing and network access correctly, you must understand the mapping of the PCIe slot and the NUMA node. For detailed information on your specific hardware, refer to your vendor’s documentation.
To prevent a cross-NUMA configuration, place the VM on the correct NUMA node by providing the location of the NIC to Nova.
Prerequisites
- You have enabled the NUMATopologyFilter filter.
Procedure
- Set a new NeutronPhysnetNUMANodesMapping parameter to map each physical network to the NUMA node that you associate with it. If you use tunnels, such as VXLAN or GRE, you must also set the NeutronTunnelNUMANodes parameter.

parameter_defaults:
  NeutronPhysnetNUMANodesMapping: {<physnet_name>: [<NUMA_NODE>]}
  NeutronTunnelNUMANodes: <NUMA_NODE>,<NUMA_NODE>
Here is an example with two physical networks and tunneled networks affined to NUMA node 0:

- one project network (tenant) associated with NUMA node 1
- one management network (mgmt) without any specific affinity

parameter_defaults:
  NeutronBridgeMappings:
    - tenant:br-link0
  NeutronPhysnetNUMANodesMapping: {tenant: [1], mgmt: [0,1]}
  NeutronTunnelNUMANodes: 0
Testing
Observe the configuration in the file /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf:

[neutron_physnet_tenant]
numa_nodes=1

[neutron_tunnel]
numa_nodes=1

Confirm the new configuration with the lscpu command:

$ lscpu

- Launch a VM, with the NIC attached to the appropriate network.
9.6. Configuring Quality of Service (QoS) in an NFVi environment
For details on configuring QoS, see Configuring Quality-of-Service (QoS) policies. Support is limited to the QoS rule type bandwidth-limit on SR-IOV and OVS-DPDK egress interfaces.
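As an illustration of the supported rule type, the following hedged example creates an egress bandwidth-limit policy and attaches it to a port; the policy name, rate values, and port are placeholders:

openstack network qos policy create bw-limit-policy
openstack network qos rule create --type bandwidth-limit --max-kbps 3000 \
 --max-burst-kbits 2400 --egress bw-limit-policy
openstack port set --qos-policy bw-limit-policy <port>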