Chapter 9. Tuning a Red Hat OpenStack Platform environment
9.1. Trusted Virtual Functions
You can configure physical functions (PFs) to trust virtual functions (VFs) so that VFs can perform some privileged actions. For example, you can use this configuration to allow VFs to enable promiscuous mode or to change a hardware address.
9.1.1. Providing trust
Prerequisites
- An operational installation of Red Hat OpenStack Platform director
Procedure
Complete the following steps to deploy the overcloud with the parameters necessary to enable physical function trust of virtual functions:
Add the NeutronPhysicalDevMappings parameter under the parameter_defaults section to make the link between the logical network name and the physical interface.
parameter_defaults:
  NeutronPhysicalDevMappings:
    - sriov2:p5p2
Add the new property "trusted" to the existing parameters related to SR-IOV.
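The following is a minimal sketch of an SR-IOV passthrough entry with the trusted property added; the vendor_id and product_id values are placeholders that you must replace with the PCI IDs of your own NICs:
parameter_defaults:
  NovaPCIPassthrough:
    - vendor_id: "8086"            # placeholder: PCI vendor ID of the SR-IOV NIC
      product_id: "1572"           # placeholder: PCI product ID of the SR-IOV NIC
      physical_network: "sriov2"
      trusted: "true"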
Note: You must include quotation marks around the value "true".
Important: Complete the following step only in trusted environments. This step allows non-administrative accounts to bind trusted ports.
Modify permissions to allow users to create and update port bindings:
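A sketch of the corresponding policy change, assuming you relax the port binding-profile policies through the NeutronApiPolicies parameter; the rule values are illustrative and must match your own security requirements:
parameter_defaults:
  NeutronApiPolicies: {
    operator_create_binding_profile: { key: 'create_port:binding:profile', value: 'rule:admin_or_network_owner' },
    operator_get_binding_profile:    { key: 'get_port:binding:profile',    value: 'rule:admin_or_network_owner' },
    operator_update_binding_profile: { key: 'update_port:binding:profile', value: 'rule:admin_or_network_owner' }
  }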
9.1.2. Utilizing trusted virtual functions
Execute the following on a fully deployed overcloud to utilize trusted virtual functions.
Creating a trusted VF network
Create a network of type vlan.
openstack network create trusted_vf_network --provider-network-type vlan \
  --provider-segment 111 --provider-physical-network sriov2 \
  --external --disable-port-security
Create a subnet.
openstack subnet create --network trusted_vf_network \
  --ip-version 4 --subnet-range 192.168.111.0/24 --no-dhcp \
  subnet-trusted_vf_network
Create a port, setting the vnic-type option to direct and the binding-profile option to true.
openstack port create --network sriov111 \
  --vnic-type direct --binding-profile trusted=true \
  sriov111_port_trusted
Create an instance, binding it to the previously created trusted port.
openstack server create --image rhel --flavor dpdk --network internal --port trusted_vf_network_port_trusted --config-drive True --wait rhel-dpdk-sriov_trusted
Verify the trusted virtual function configuration on the hypervisor
On the compute node that hosts the newly created instance, run the following command:
# ip link
7: p5p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether b4:96:91:1c:40:fa brd ff:ff:ff:ff:ff:ff
vf 6 MAC fa:16:3e:b8:91:c2, vlan 111, spoof checking off, link-state auto, trust on, query_rss off
vf 7 MAC fa:16:3e:84:cf:c8, vlan 111, spoof checking off, link-state auto, trust off, query_rss off
View the output of the ip link command and verify that the trust status of the virtual function is trust on. The example output contains details of an environment that contains two ports. Note that vf 6 contains the text trust on.
9.2. Configuring RX/TX queue size
You can experience packet loss at high packet rates above 3.5 million packets per second (mpps) for many reasons, such as:
- a network interrupt
- an SMI
- packet processing latency in the Virtual Network Function
To prevent packet loss, increase the queue size from the default of 512 to a maximum of 1024.
Prerequisites
- To configure RX, ensure that you have libvirt v2.3 and QEMU v2.7.
- To configure TX, ensure that you have libvirt v3.7 and QEMU v2.10.
Procedure
To increase the RX and TX queue size, include the following lines in the parameter_defaults: section of a relevant director role. Here is an example with the ComputeOvsDpdk role:
parameter_defaults:
  ComputeOvsDpdkParameters:
    NovaLibvirtRxQueueSize: 1024
    NovaLibvirtTxQueueSize: 1024
Testing
You can observe the values for RX queue size and TX queue size in the nova.conf file:
[libvirt]
rx_queue_size=1024
tx_queue_size=1024
You can check the values for RX queue size and TX queue size in the VM instance XML file generated by libvirt on the compute host.
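A sketch of the relevant fragment of that XML, assuming a vhost-user interface; libvirt records the queue sizes as rx_queue_size and tx_queue_size attributes on the driver element:
<devices>
  <interface type='vhostuser'>
    <!-- queue sizes that nova derived from rx_queue_size/tx_queue_size -->
    <driver rx_queue_size='1024' tx_queue_size='1024'/>
  </interface>
</devices>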
To verify the values for RX queue size and TX queue size, use the following command on a KVM host:
$ virsh dumpxml <vm name> | grep queue_size
- You can check for improved performance, such as 3.8 mpps/core at 0 frame loss.
9.3. Enabling RT-KVM for NFV Workloads
This section describes the steps to install and configure Red Hat Enterprise Linux 7.5 Real Time KVM (RT-KVM) for the Red Hat OpenStack Platform. Red Hat OpenStack Platform provides real-time capabilities with a new Real-time Compute node role that provisions Red Hat Enterprise Linux for Real-Time, as well as the additional RT-KVM kernel module, and automatic configuration of the Compute node.
9.3.1. Planning for your RT-KVM Compute nodes
You must use Red Hat certified servers for your RT-KVM Compute nodes. See Red Hat Enterprise Linux for Real Time 7 certified servers for details.
See Registering and updating your undercloud for details on how to enable the rhel-7-server-nfv-rpms repository for RT-KVM and how to ensure that your system is up to date.
You will need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU before you can access this repository.
Building the real-time image
Use the following steps to build the overcloud image for Real-time Compute nodes:
To initialize the stack user to use the director command line tools, run the following command:
[stack@undercloud-0 ~]$ source ~/stackrc
Install the libguestfs-tools package on the undercloud to get the virt-customize tool:
(undercloud) [stack@undercloud-0 ~]$ sudo yum install libguestfs-tools
Important: If you install the libguestfs-tools package on the undercloud, disable iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud:
$ sudo systemctl disable --now iscsid.socket
Extract the images:
(undercloud) [stack@undercloud-0 ~]$ tar -xf /usr/share/rhosp-director-images/overcloud-full.tar
(undercloud) [stack@undercloud-0 ~]$ tar -xf /usr/share/rhosp-director-images/ironic-python-agent.tar
Copy the default image:
(undercloud) [stack@undercloud-0 ~]$ cp overcloud-full.qcow2 overcloud-realtime-compute.qcow2
Register your image to enable Red Hat repositories relevant to your customizations. Replace [username] and [password] with valid credentials in the following example.
virt-customize -a overcloud-realtime-compute.qcow2 --run-command \
  'subscription-manager register --username=[username] --password=[password]'
Note: Remove credentials from the history file anytime they are used on the command prompt. You can delete individual lines in history using the history -d command followed by the line number.
Find a list of pool IDs from your account’s subscriptions, and attach the appropriate pool ID to your image.
sudo subscription-manager list --all --available | less
...
virt-customize -a overcloud-realtime-compute.qcow2 --run-command \
  'subscription-manager attach --pool [pool-ID]'
Add repositories necessary for Red Hat OpenStack Platform with NFV.
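The repository set depends on your subscription and release; as an assumed example for RHEL 7 with Red Hat OpenStack Platform 13 and NFV, the command might look like this:
virt-customize -a overcloud-realtime-compute.qcow2 --run-command \
  'subscription-manager repos --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-nfv-rpms \
  --enable=rhel-7-server-openstack-13-rpms'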
Create a script to configure real-time capabilities on the image.
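A minimal sketch of such a script, saved as rt.sh, assuming the kernel-rt, kernel-rt-kvm, and tuned-profiles-nfv-host packages are available from the enabled repositories:
#!/bin/bash
set -eux
# Replace the standard kernel with the real-time kernel and the NFV tuned profiles
yum -v -y --setopt=protected_packages= erase kernel.$(uname -m)
yum -v -y install kernel-rt kernel-rt-kvm tuned-profiles-nfv-host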
Run the script to configure the RT image:
(undercloud) [stack@undercloud-0 ~]$ virt-customize -a overcloud-realtime-compute.qcow2 -v --run rt.sh 2>&1 | tee virt-customize.log
Note: You may see the following error in the rt.sh script output: "grubby fatal error: unable to find a suitable template". You can safely ignore this error.
You can check that the packages installed by the rt.sh script installed correctly by examining the virt-customize.log file that was created by the previous command.
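For example, you might search the log for the real-time packages; the command is illustrative and the exact package names and versions depend on your repositories:
(undercloud) [stack@undercloud-0 ~]$ grep -E 'kernel-rt|tuned-profiles-nfv-host' virt-customize.log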
Relabel SELinux:
(undercloud) [stack@undercloud-0 ~]$ virt-customize -a overcloud-realtime-compute.qcow2 --selinux-relabel
Extract vmlinuz and initrd:
Note: The software version in the vmlinuz and initramfs filenames varies with the kernel version. Use the relevant software version in the filename, for example image/boot/vmlinuz-3.10.0-862.rt56.804.el7.x86_64, or use the wildcard symbol * instead.
(undercloud) [stack@undercloud-0 ~]$ mkdir image
(undercloud) [stack@undercloud-0 ~]$ guestmount -a overcloud-realtime-compute.qcow2 -i --ro image
(undercloud) [stack@undercloud-0 ~]$ cp image/boot/vmlinuz-*.x86_64 ./overcloud-realtime-compute.vmlinuz
(undercloud) [stack@undercloud-0 ~]$ cp image/boot/initramfs-*.x86_64.img ./overcloud-realtime-compute.initrd
(undercloud) [stack@undercloud-0 ~]$ guestunmount image
Upload the image:
(undercloud) [stack@undercloud-0 ~]$ openstack overcloud image upload --update-existing --os-image-name overcloud-realtime-compute.qcow2
You now have a real-time image you can use with the ComputeOvsDpdkRT composable role on select Compute nodes.
Modifying BIOS settings on RT-KVM Compute nodes
To reduce latency on your RT-KVM Compute nodes, you must modify the BIOS settings. You should disable all options for the following in your Compute node BIOS settings:
- Power Management
- Hyper-Threading
- CPU sleep states
- Logical processors
See Setting BIOS parameters for descriptions of these settings and the impact of disabling them. See your hardware manufacturer documentation for complete details on how to change BIOS settings.
9.3.2. Configuring OVS-DPDK with RT-KVM
You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. See Section 8.1, “Deriving DPDK parameters with workflows” for details.
9.3.2.1. Generating the ComputeOvsDpdk composable role
You use the ComputeOvsDpdkRT role to specify Compute nodes that use the real-time compute image.
Generate roles_data.yaml for the ComputeOvsDpdkRT role.
(undercloud) [stack@undercloud-0 ~]$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeOvsDpdkRT
9.3.2.2. Configuring the OVS-DPDK parameters
Attempting to deploy Data Plane Development Kit (DPDK) without appropriate values causes the deployment to fail or leads to unstable deployments. You must determine the best values for the OVS-DPDK parameters set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. See Section 8.1, “Deriving DPDK parameters with workflows” for details.
Add the NIC configuration for the OVS-DPDK role that you use under resource_registry:
resource_registry:
  # Specify the relative/absolute path to the config files you want to use to override the default.
  OS::TripleO::ComputeOvsDpdkRT::Net::SoftwareConfig: nic-configs/compute-ovs-dpdk.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
Under parameter_defaults, set the OVS-DPDK and RT-KVM parameters:
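A hedged example of these settings for the ComputeOvsDpdkRT role; the CPU lists, hugepage count, and socket memory values are placeholders that you must derive for your own hardware, as described in Section 8.1, “Deriving DPDK parameters with workflows”:
parameter_defaults:
  ComputeOvsDpdkRTParameters:
    KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=1-7,9-15,17-23,25-31"
    TunedProfileName: "realtime-virtual-host"
    IsolCpusList: "1-7,9-15,17-23,25-31"
    NovaVcpuPinSet: ['2-7,10-15,18-23,26-31']
    NovaReservedHostMemory: 4096
    OvsDpdkSocketMemory: "1024,1024"
    OvsDpdkMemoryChannels: "4"
    OvsPmdCoreList: "1,9,17,25"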
9.3.2.3. Preparing the container images
Prepare the container images:
(undercloud) [stack@undercloud-0 ~]$ openstack overcloud container image prepare --namespace=192.0.40.1:8787/rhosp13 --env-file=/home/stack/ospd-13-vlan-dpdk/docker-images.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-ovs-dpdk.yaml -e /home/stack/ospd-13-vlan-dpdk/network-environment.yaml --roles-file /home/stack/ospd-13-vlan-dpdk/roles_data.yaml --prefix=openstack- --tag=2018-03-29.1 --set ceph_namespace=registry.redhat.io/rhceph --set ceph_image=rhceph-3-rhel7 --set ceph_tag=latest
9.3.2.4. Deploying the overcloud
Deploy the overcloud for ML2-OVS:
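A sketch of the deployment command; the template and environment file paths are placeholders that must match your own locations:
(undercloud) [stack@undercloud-0 ~]$ openstack overcloud deploy --templates \
  -r /home/stack/<templates>/roles_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-ovs-dpdk.yaml \
  -e /home/stack/<templates>/network-environment.yaml \
  -e /home/stack/<templates>/docker-images.yaml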
9.3.3. Launching an RT-KVM Instance
To launch an RT-KVM instance on a real-time enabled Compute node:
Create an RT-KVM flavor on the overcloud:
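A sketch of the flavor creation, assuming a flavor named r1.small with dedicated, real-time vCPUs and 1 GB hugepages; adjust the sizes and the real-time CPU mask for your workload:
# openstack flavor create r1.small --ram 4096 --disk 20 --vcpus 4
# openstack flavor set --property hw:cpu_policy=dedicated r1.small
# openstack flavor set --property hw:cpu_realtime=yes r1.small
# openstack flavor set --property hw:cpu_realtime_mask="^0-1" r1.small
# openstack flavor set --property hw:mem_page_size=1GB r1.small
# openstack flavor set --property hw:emulator_threads_policy=share r1.small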
Launch an RT-KVM instance:
# openstack server create --image <rhel> --flavor <flavor-name> --nic net-id=<dpdk-net> test-rt
Optionally, verify that the instance uses the assigned emulator threads:
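One way to check this, shown as a sketch, is to inspect the libvirt domain XML of the instance on its compute node; the emulatorpin element reports the host CPUs that the emulator threads use:
# virsh dumpxml <instance-id> | grep emulatorpin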
9.4. Configuring a NUMA-aware vSwitch (Technology Preview)
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
Before you implement a NUMA-aware vSwitch, examine the following components of your hardware configuration:
- The number of physical networks.
- The placement of PCI cards.
- The physical architecture of the servers.
Memory-mapped I/O (MMIO) devices, such as PCIe NICs, are associated with specific NUMA nodes. When a VM and the NIC are on different NUMA nodes, there is a significant decrease in performance. To increase performance, align PCIe NIC placement and instance processing on the same NUMA node.
Use this feature to ensure that instances that share a physical network are located on the same NUMA node. To optimize datacenter hardware, you can leverage load-sharing VMs by using multiple networks, different network types, or bonding.
To architect NUMA-node load sharing and network access correctly, you must understand the mapping of the PCIe slot and the NUMA node. For detailed information on your specific hardware, refer to your vendor’s documentation.
To prevent a cross-NUMA configuration, place the VM on the correct NUMA node by providing the location of the NIC to Nova.
Prerequisites
- You have enabled the filter “NUMATopologyFilter”
Procedure
- Set a new NeutronPhysnetNUMANodesMapping parameter to map the physical network to the NUMA node that you associate with the physical network.
- If you use tunnels, such as VxLAN or GRE, you must also set the NeutronTunnelNUMANodes parameter.
parameter_defaults:
  NeutronPhysnetNUMANodesMapping: {<physnet_name>: [<NUMA_NODE>]}
  NeutronTunnelNUMANodes: <NUMA_NODE>,<NUMA_NODE>
Here is an example with two physical networks tunneled to NUMA node 0:
- one project network associated with NUMA node 0
- one management network without any affinity
parameter_defaults:
  NeutronBridgeMappings:
    - tenant:br-link0
  NeutronPhysnetNUMANodesMapping: {tenant: [1], mgmt: [0,1]}
  NeutronTunnelNUMANodes: 0
Testing
Observe the configuration in the file /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf
[neutron_physnet_tenant]
numa_nodes=1
[neutron_tunnel]
numa_nodes=1
Confirm the new configuration with the lscpu command:
$ lscpu
- Launch a VM with the NIC attached to the appropriate network.
9.5. Configuring Quality of Service (QoS) in an NFVi environment
For details on configuring QoS, see Configuring Real-Time Compute. Support is limited to the QoS rule type bandwidth-limit on SR-IOV and OVS-DPDK egress interfaces.
9.6. Deploying an overcloud with HCI and DPDK
You can deploy your NFV infrastructure with hyper-converged nodes by co-locating and configuring Compute and Ceph Storage services for optimized resource usage.
For more information about hyper-converged infrastructure (HCI), see the Hyper Converged Infrastructure Guide.
Prerequisites
- Red Hat OpenStack Platform 13.12 Maintenance Release 19 December 2019 or newer.
- Ceph 12.2.12-79 (luminous) or newer.
- Ceph-ansible 3.2.38 or newer.
Procedure
Install ceph-ansible on the undercloud.
$ sudo yum install ceph-ansible -y
Generate the roles_data.yaml file for the ComputeHCIOvsDpdk role.
$ openstack overcloud roles generate -o ~/<templates>/roles_data.yaml Controller \
  ComputeHCIOvsDpdk
Create and configure a new flavor with the openstack flavor create and openstack flavor set commands. For more information about creating a flavor, see Creating a new role in the Advanced Overcloud Customization Guide.
Deploy the overcloud with the custom roles_data.yaml file that you generated.
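A sketch of the deployment command; the environment files listed are illustrative and must match the templates that you prepared for Ceph, DPDK, and your network configuration:
$ openstack overcloud deploy --templates \
  -r ~/<templates>/roles_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-ovs-dpdk.yaml \
  -e ~/<templates>/network-environment.yaml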
9.6.1. Example NUMA node configuration
For increased performance, place the tenant network and Ceph object storage daemons (OSDs) in one NUMA node, such as NUMA-0, and the VNF and any non-NFV VMs in another NUMA node, such as NUMA-1.
CPU allocation:
| NUMA-0 | NUMA-1 |
|---|---|
| Number of Ceph OSDs * 4 HT | Guest vCPU for the VNF and non-NFV VMs |
| DPDK lcore - 2 HT | DPDK lcore - 2 HT |
| DPDK PMD - 2 HT | DPDK PMD - 2 HT |
Example of CPU allocation:
| | NUMA-0 | NUMA-1 |
|---|---|---|
| Ceph OSD | 32,34,36,38,40,42,76,78,80,82,84,86 | |
| DPDK-lcore | 0,44 | 1,45 |
| DPDK-pmd | 2,46 | 3,47 |
| nova | | 5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87 |
9.6.2. Example ceph configuration file
Assign CPU resources for ceph OSD processes with the following parameters. Adjust the values based on the workload and hardware in this hyperconverged environment.
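The configuration file itself is not reproduced here; a minimal sketch, assuming these values are passed to ceph-ansible through the CephAnsibleExtraConfig parameter and reusing the Ceph OSD CPU list from the example NUMA table, might look like this (the numactl options shown are illustrative):
parameter_defaults:
  CephAnsibleExtraConfig:
    ceph_osd_docker_cpuset_cpus: "32,34,36,38,40,42,76,78,80,82,84,86"   # callout 1
    ceph_osd_docker_cpu_limit: 0                                         # callout 2
    ceph_osd_numactl_opts: "-N 0 --preferred=0"                          # callout 3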
1. ceph_osd_docker_cpuset_cpus: Allocate 4 CPU threads for each OSD for SSD disks, or 1 CPU for each OSD for HDD disks. Include the list of cores and sibling threads from the NUMA node associated with Ceph, and the CPUs not found in the three lists: NovaVcpuPinSet, OvsDpdkCoreList, and OvsPmdCoreList.
2. ceph_osd_docker_cpu_limit: Set this value to 0 to pin the Ceph OSDs to the CPU list from ceph_osd_docker_cpuset_cpus.
3. ceph_osd_numactl_opts: Set this value to preferred for cross-NUMA operations, as a precaution.
9.6.3. Example DPDK configuration file
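The configuration file itself is not reproduced here; a hedged sketch, assuming the role parameter key is ComputeHCIOvsDpdkParameters and using placeholder CPU lists and memory values that you must derive for your own topology:
parameter_defaults:
  ComputeHCIOvsDpdkParameters:
    KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=240 iommu=pt intel_iommu=on"   # callout 1
    IsolCpusList: "2-19,22-39"                                                                 # callout 2
    OvsDpdkSocketMemory: "4096,4096"                                                           # callout 3
    OvsPmdCoreList: "2,22,3,23"                                                                # callout 4
    OvsDpdkCoreList: "0,20,1,21"                                                               # callout 5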
1. KernelArgs: To calculate hugepages, subtract the value of the NovaReservedHostMemory parameter from total memory.
2. IsolCpusList: Assign a set of CPU cores that you want to isolate from the host processes with this parameter. Add the value of the OvsPmdCoreList parameter to the value of the NovaVcpuPinSet parameter to calculate the value for the IsolCpusList parameter.
3. OvsDpdkSocketMemory: Specify the amount of memory in MB to pre-allocate from the hugepage pool per NUMA node with the OvsDpdkSocketMemory parameter. For more information about calculating OVS-DPDK parameters, see: OVS-DPDK parameters.
4. OvsPmdCoreList: Specify the CPU cores that are used for the DPDK poll mode drivers (PMD) with this parameter. Choose CPU cores that are associated with the local NUMA nodes of the DPDK interfaces. Allocate 2 HT sibling threads for each NUMA node to calculate the value for the OvsPmdCoreList parameter.
5. OvsDpdkCoreList: Specify CPU cores for non-data path OVS-DPDK processes, such as handler and revalidator threads, with this parameter. Allocate 2 HT sibling threads for each NUMA node to calculate the value for the OvsDpdkCoreList parameter.
9.6.4. Example nova configuration file
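The file itself is not reproduced here; a hedged sketch using the parameters described in the callouts below, with placeholder values consistent with the DPDK sketch above:
parameter_defaults:
  ComputeHCIOvsDpdkParameters:
    NovaReservedHugePages: ["node:0,size:1GB,count:4", "node:1,size:1GB,count:4"]   # callout 1
    NovaReservedHostMemory: 81920                                                   # callout 2
    NovaVcpuPinSet: "4-19,24-39"                                                    # callout 3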
1. NovaReservedHugePages: Pre-allocate memory in MB from the hugepage pool with the NovaReservedHugePages parameter. It is the same memory total as the value for the OvsDpdkSocketMemory parameter.
2. NovaReservedHostMemory: Reserve memory in MB for tasks on the host with the NovaReservedHostMemory parameter. Use the following guidelines to calculate the amount of memory that you must reserve:
- 5 GB for each OSD.
- 0.5 GB overhead for each VM.
- 4 GB for general host processing. Ensure that you allocate sufficient memory to prevent potential performance degradation caused by cross-NUMA OSD operation.
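As an illustrative calculation, a node with 12 OSDs that hosts roughly 40 VMs would reserve approximately (12 × 5 GB) + (40 × 0.5 GB) + 4 GB = 84 GB.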
3. NovaVcpuPinSet: List the CPUs not found in OvsPmdCoreList, OvsDpdkCoreList, or ceph_osd_docker_cpuset_cpus with the NovaVcpuPinSet parameter. The CPUs must be in the same NUMA node as the DPDK NICs.
9.6.5. Recommended configuration for HCI-DPDK deployments
| Block Device Type | OSDs, Memory, vCPUs per device |
|---|---|
| NVMe | Memory: 5 GB per OSD |
| SSD | Memory: 5 GB per OSD |
| HDD | Memory: 5 GB per OSD |
Use the same NUMA node for the following functions:
- Disk controller
- Storage networks
- Storage CPU and memory
Allocate another NUMA node for the following functions of the DPDK provider network:
- NIC
- PMD CPUs
- Socket memory