Chapter 13. Configuring real-time compute
As a cloud administrator, you might need instances on your Compute nodes to adhere to low-latency policies and perform real-time processing. Real-time Compute nodes include a real-time capable kernel, specific virtualization modules, and optimized deployment parameters, to facilitate real-time processing requirements and minimize latency.
The process to enable Real-time Compute includes:
- configuring the BIOS settings of the Compute nodes
- building a real-time image with real-time kernel and Real-Time KVM (RT-KVM) kernel module
- assigning the ComputeRealTime role to the Compute nodes
For a use-case example of real-time Compute deployment for NFV workloads, see the Example: Configuring OVS-DPDK with ODL and VXLAN tunnelling section in the Network Functions Virtualization Planning and Configuration Guide.
Real-time Compute nodes are supported only with Red Hat Enterprise Linux version 7.5 or later.
13.1. Preparing Compute nodes for real-time
Before you can deploy Real-time Compute in your overcloud, you must enable Red Hat Enterprise Linux Real-Time KVM (RT-KVM), configure your BIOS to support real-time, and build the real-time overcloud image.
Prerequisites
- You must use Red Hat certified servers for your RT-KVM Compute nodes. See Red Hat Enterprise Linux for Real Time 7 certified servers for details.
- You need a separate subscription to Red Hat OpenStack Platform for Real Time to access the rhel-8-for-x86_64-nfv-rpms repository. For details on managing repositories and subscriptions for your undercloud, see Registering the undercloud and attaching subscriptions in the Director Installation and Usage guide.
Procedure
To build the real-time overcloud image, you must enable the rhel-8-for-x86_64-nfv-rpms repository for RT-KVM. To check which packages will be installed from the repository, enter the following command:

$ dnf repo-pkgs rhel-8-for-x86_64-nfv-rpms list
Loaded plugins: product-id, search-disabled-repos, subscription-manager
Available Packages
kernel-rt.x86_64                 4.18.0-80.7.1.rt9.153.el8_0    rhel-8-for-x86_64-nfv-rpms
kernel-rt-debug.x86_64           4.18.0-80.7.1.rt9.153.el8_0    rhel-8-for-x86_64-nfv-rpms
kernel-rt-debug-devel.x86_64     4.18.0-80.7.1.rt9.153.el8_0    rhel-8-for-x86_64-nfv-rpms
kernel-rt-debug-kvm.x86_64       4.18.0-80.7.1.rt9.153.el8_0    rhel-8-for-x86_64-nfv-rpms
kernel-rt-devel.x86_64           4.18.0-80.7.1.rt9.153.el8_0    rhel-8-for-x86_64-nfv-rpms
kernel-rt-doc.noarch             4.18.0-80.7.1.rt9.153.el8_0    rhel-8-for-x86_64-nfv-rpms
kernel-rt-kvm.x86_64             4.18.0-80.7.1.rt9.153.el8_0    rhel-8-for-x86_64-nfv-rpms
[ output omitted…]
To build the overcloud image for Real-time Compute nodes, install the libguestfs-tools package on the undercloud to get the virt-customize tool:

(undercloud)$ sudo dnf install libguestfs-tools
Important: If you install the libguestfs-tools package on the undercloud, disable iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud:

$ sudo systemctl disable --now iscsid.socket
Extract the images:
(undercloud)$ tar -xf /usr/share/rhosp-director-images/overcloud-full.tar
(undercloud)$ tar -xf /usr/share/rhosp-director-images/ironic-python-agent.tar
Copy the default image:
(undercloud)$ cp overcloud-full.qcow2 overcloud-realtime-compute.qcow2
Register the image and configure the required subscriptions:
(undercloud)$ virt-customize -a overcloud-realtime-compute.qcow2 --run-command 'subscription-manager register --username=<username> --password=<password>'
[   0.0] Examining the guest ...
[  10.0] Setting a random seed
[  10.0] Running: subscription-manager register --username=<username> --password=<password>
[  24.0] Finishing off
Replace the username and password values with your Red Hat customer account details.

For general information about building a real-time overcloud image, see the knowledgebase article Modifying the Red Hat Enterprise Linux OpenStack Platform Overcloud Image with virt-customize.
Find the SKU of the Red Hat OpenStack Platform for Real Time subscription. The SKU might be located on a system that is already registered to the Red Hat Subscription Manager with the same account and credentials:
$ sudo subscription-manager list
Attach the Red Hat OpenStack Platform for Real Time subscription to the image:
(undercloud)$ virt-customize -a overcloud-realtime-compute.qcow2 --run-command 'subscription-manager attach --pool [subscription-pool]'
Create a script to configure rt on the image:

(undercloud)$ cat rt.sh
  #!/bin/bash

  set -eux

  subscription-manager repos --enable=[REPO_ID]
  dnf -v -y --setopt=protected_packages= erase kernel.$(uname -m)
  dnf -v -y install kernel-rt kernel-rt-kvm tuned-profiles-nfv-host

  # END OF SCRIPT
Run the script to configure the real-time image:
(undercloud)$ virt-customize -a overcloud-realtime-compute.qcow2 -v --run rt.sh 2>&1 | tee virt-customize.log
Re-label SELinux:
(undercloud)$ virt-customize -a overcloud-realtime-compute.qcow2 --selinux-relabel
Extract vmlinuz and initrd. For example:

(undercloud)$ mkdir image
(undercloud)$ guestmount -a overcloud-realtime-compute.qcow2 -i --ro image
(undercloud)$ cp image/boot/vmlinuz-4.18.0-80.7.1.rt9.153.el8_0.x86_64 ./overcloud-realtime-compute.vmlinuz
(undercloud)$ cp image/boot/initramfs-4.18.0-80.7.1.rt9.153.el8_0.x86_64.img ./overcloud-realtime-compute.initrd
(undercloud)$ guestunmount image
Note: The software version in the vmlinuz and initramfs filenames varies with the kernel version.

Upload the image:

(undercloud)$ openstack overcloud image upload \
    --update-existing --os-image-name overcloud-realtime-compute.qcow2
You now have a real-time image that you can use with the ComputeRealTime composable role on select Compute nodes.

To reduce latency on your Real-time Compute nodes, you must modify the BIOS settings of the Compute nodes. Disable all options for the following components in your Compute node BIOS settings:
- Power Management
- Hyper-Threading
- CPU sleep states
- Logical processors
See Setting BIOS parameters for descriptions of these settings and the impact of disabling them. See your hardware manufacturer documentation for complete details on how to change BIOS settings.
13.2. Deploying the Real-time Compute role
Red Hat OpenStack Platform (RHOSP) director provides the template for the ComputeRealTime role, which you can use to deploy real-time Compute nodes. You must perform additional steps to designate Compute nodes for real-time.
Procedure
Based on the /usr/share/openstack-tripleo-heat-templates/environments/compute-real-time-example.yaml file, create a compute-real-time.yaml environment file that sets the parameters for the ComputeRealTime role:
cp /usr/share/openstack-tripleo-heat-templates/environments/compute-real-time-example.yaml /home/stack/templates/compute-real-time.yaml
The file must include values for the following parameters:
- IsolCpusList and NovaComputeCpuDedicatedSet: List of isolated CPU cores and virtual CPU pins to reserve for real-time workloads. This value depends on the CPU hardware of your real-time Compute nodes.
- NovaComputeCpuSharedSet: List of host CPUs to reserve for emulator threads.
- KernelArgs: Arguments to pass to the kernel of the Real-time Compute nodes. For example, you can use default_hugepagesz=1G hugepagesz=1G hugepages=<number_of_1G_pages_to_reserve> hugepagesz=2M hugepages=<number_of_2M_pages> to define the memory requirements of guests that have huge pages with multiple sizes. In this example, the default size is 1GB but you can also reserve 2M huge pages (see the sketch after this list).
- NovaComputeDisableIrqBalance: Ensure that this parameter is set to true for Real-time Compute nodes, because the tuned service manages IRQ balancing for real-time deployments instead of the irqbalance service.
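For example, a minimal compute-real-time.yaml sketch that sets these parameters might look like the following. The CPU ranges and huge page counts are hypothetical placeholders, not recommendations; replace them with values that match the CPU topology and memory of your real-time Compute nodes:

parameter_defaults:
  ComputeRealTimeParameters:
    IsolCpusList: "2-19,22-39"                # hypothetical isolated host cores
    NovaComputeCpuDedicatedSet: "4-19,24-39"  # hypothetical cores for pinned instance vCPUs
    NovaComputeCpuSharedSet: "0,20"           # hypothetical cores for emulator threads
    KernelArgs: "default_hugepagesz=1G hugepagesz=1G hugepages=32 hugepagesz=2M hugepages=512"
    NovaComputeDisableIrqBalance: true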
Add the ComputeRealTime role to your roles data file and regenerate the file. For example:
$ openstack overcloud roles generate -o /home/stack/templates/rt_roles_data.yaml Controller Compute ComputeRealTime
This command generates a ComputeRealTime role with contents similar to the following example, and also sets the ImageDefault option to overcloud-realtime-compute.

- name: ComputeRealTime
  description: |
    Compute role that is optimized for real-time behaviour. When using this
    role it is mandatory that an overcloud-realtime-compute image is
    available and the role specific parameters IsolCpusList,
    NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet are set
    accordingly to the hardware of the real-time compute nodes.
  CountDefault: 1
  networks:
    InternalApi:
      subnet: internal_api_subnet
    Tenant:
      subnet: tenant_subnet
    Storage:
      subnet: storage_subnet
  HostnameFormatDefault: '%stackname%-computerealtime-%index%'
  ImageDefault: overcloud-realtime-compute
  RoleParametersDefault:
    TunedProfileName: "realtime-virtual-host"
    KernelArgs: ""                   # these must be set in an environment file
    IsolCpusList: ""                 # or similar according to the hardware
    NovaComputeCpuDedicatedSet: ""   # of real-time nodes
    NovaComputeCpuSharedSet: ""      #
    NovaLibvirtMemStatsPeriodSeconds: 0
  ServicesDefault:
    - OS::TripleO::Services::Aide
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::BootParams
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephClient
    - OS::TripleO::Services::CephExternal
    - OS::TripleO::Services::CertmongerUser
    - OS::TripleO::Services::Collectd
    - OS::TripleO::Services::ComputeCeilometerAgent
    - OS::TripleO::Services::ComputeNeutronCorePlugin
    - OS::TripleO::Services::ComputeNeutronL3Agent
    - OS::TripleO::Services::ComputeNeutronMetadataAgent
    - OS::TripleO::Services::ComputeNeutronOvsAgent
    - OS::TripleO::Services::Docker
    - OS::TripleO::Services::Fluentd
    - OS::TripleO::Services::IpaClient
    - OS::TripleO::Services::Ipsec
    - OS::TripleO::Services::Iscsid
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::LoginDefs
    - OS::TripleO::Services::MetricsQdr
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::NeutronBgpVpnBagpipe
    - OS::TripleO::Services::NeutronLinuxbridgeAgent
    - OS::TripleO::Services::NeutronVppAgent
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::NovaLibvirt
    - OS::TripleO::Services::NovaLibvirtGuests
    - OS::TripleO::Services::NovaMigrationTarget
    - OS::TripleO::Services::ContainersLogrotateCrond
    - OS::TripleO::Services::OpenDaylightOvs
    - OS::TripleO::Services::Podman
    - OS::TripleO::Services::Rhsm
    - OS::TripleO::Services::RsyslogSidecar
    - OS::TripleO::Services::Securetty
    - OS::TripleO::Services::SensuClient
    - OS::TripleO::Services::SkydiveAgent
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::Timesync
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::Vpp
    - OS::TripleO::Services::OVNController
    - OS::TripleO::Services::OVNMetadataAgent
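Optional: as a quick sanity check after generating the file, you can confirm that the role and its image default are present. This is a hedged convenience step, not part of the documented procedure:

$ grep -E 'name: ComputeRealTime|ImageDefault' /home/stack/templates/rt_roles_data.yaml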
For general information about custom roles and about the roles-data.yaml file, see Roles.

Create the compute-realtime flavor to tag nodes that you want to designate for real-time workloads. For example:

$ source ~/stackrc
$ openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 compute-realtime
$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="compute-realtime" compute-realtime
Tag each node that you want to designate for real-time workloads with the compute-realtime profile:
$ openstack baremetal node set --property capabilities='profile:compute-realtime,boot_option:local' <node_uuid>
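Optional: to confirm that the profile is assigned to the intended nodes, you can list the node profiles. This is a hedged extra check that assumes the overcloud profiles command from python-tripleoclient is available on your undercloud:

$ openstack overcloud profiles list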
Map the ComputeRealTime role to the compute-realtime flavor by creating an environment file with the following content:

parameter_defaults:
  OvercloudComputeRealTimeFlavor: compute-realtime
Add the new roles file and environment file to the stack with your other environment files and deploy the overcloud:

(undercloud)$ openstack overcloud deploy --templates \
  -e [your environment files] \
  -r /home/stack/templates/rt_roles_data.yaml \
  -e /home/stack/templates/compute-real-time.yaml
13.3. Sample deployment and testing scenario
The following example procedure uses a simple single-node deployment to test that the environment variables and other supporting configuration are set up correctly. Actual performance results might vary, depending on the number of nodes and instances that you deploy in your cloud.
Procedure
Create the compute-real-time.yaml file with the following parameters:

parameter_defaults:
  ComputeRealTimeParameters:
    IsolCpusList: "1"
    NovaComputeCpuDedicatedSet: "1"
    NovaComputeCpuSharedSet: "0"
    KernelArgs: "default_hugepagesz=1G hugepagesz=1G hugepages=16"
    NovaComputeDisableIrqBalance: true
Create a new rt_roles_data.yaml file with the ComputeRealTime role:

$ openstack overcloud roles generate \
    -o ~/rt_roles_data.yaml Controller ComputeRealTime
Add compute-real-time.yaml and rt_roles_data.yaml to the stack with your other environment files and deploy the overcloud:

(undercloud)$ openstack overcloud deploy --templates \
    -r /home/stack/rt_roles_data.yaml \
    -e [your environment files] \
    -e /home/stack/templates/compute-real-time.yaml
This command deploys one Controller node and one Real-time Compute node.
Log in to the Real-time Compute node and check the following parameters:

[root@overcloud-computerealtime-0 ~]# uname -a
Linux overcloud-computerealtime-0 4.18.0-80.7.1.rt9.153.el8_0.x86_64 #1 SMP PREEMPT RT Wed Dec 13 13:37:53 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@overcloud-computerealtime-0 ~]# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.18.0-80.7.1.rt9.153.el8_0.x86_64 root=UUID=45ae42d0-58e7-44fe-b5b1-993fe97b760f ro console=tty0 crashkernel=auto console=ttyS0,115200 default_hugepagesz=1G hugepagesz=1G hugepages=16
[root@overcloud-computerealtime-0 ~]# tuned-adm active
Current active profile: realtime-virtual-host
[root@overcloud-computerealtime-0 ~]# grep ^isolated_cores /etc/tuned/realtime-virtual-host-variables.conf
isolated_cores=1
[root@overcloud-computerealtime-0 ~]# cat /usr/lib/tuned/realtime-virtual-host/lapic_timer_adv_ns
4000 # The returned value must not be 0
[root@overcloud-computerealtime-0 ~]# cat /sys/module/kvm/parameters/lapic_timer_advance_ns
4000 # The returned value must not be 0
# To validate hugepages at a host level:
[root@overcloud-computerealtime-0 ~]# cat /proc/meminfo | grep -E "HugePages_Total|Hugepagesize"
HugePages_Total:      64
Hugepagesize:    1048576 kB
# To validate hugepages on a per NUMA level (below example is a two NUMA compute host):
[root@overcloud-computerealtime-0 ~]# cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
32
[root@overcloud-computerealtime-0 ~]# cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
32
[root@overcloud-computerealtime-0 ~]# crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf compute cpu_dedicated_set
1
[root@overcloud-computerealtime-0 ~]# crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf compute cpu_shared_set
0
[root@overcloud-computerealtime-0 ~]# systemctl status irqbalance
● irqbalance.service - irqbalance daemon
   Loaded: loaded (/usr/lib/systemd/system/irqbalance.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Tue 2021-03-30 13:36:31 UTC; 2s ago
13.4. Launching and tuning real-time instances
After you deploy and configure Real-time Compute nodes, you can launch real-time instances on those nodes. You can further configure these real-time instances with CPU pinning, NUMA topology filters, and huge pages.
Prerequisites
- The compute-realtime flavor exists on the overcloud, as described in Deploying the Real-time Compute role.
Procedure
Launch the real-time instance:
# openstack server create --image <rhel> \
    --flavor r1.small --nic net-id=<dpdk_net> test-rt
Optional: Verify that the instance uses the assigned emulator threads:
# virsh dumpxml <instance_id> | grep vcpu -A1
<vcpu placement='static'>4</vcpu>
<cputune>
   <vcpupin vcpu='0' cpuset='1'/>
   <vcpupin vcpu='1' cpuset='3'/>
   <vcpupin vcpu='2' cpuset='5'/>
   <vcpupin vcpu='3' cpuset='7'/>
   <emulatorpin cpuset='0-1'/>
   <vcpusched vcpus='2-3' scheduler='fifo' priority='1'/>
</cputune>
Pinning CPUs and setting emulator thread policy
To ensure that there are enough CPUs on each Real-time Compute node for real-time workloads, you need to pin at least one virtual CPU (vCPU) for an instance to a physical CPU (pCPU) on the host. The emulator threads for that vCPU then remain dedicated to that pCPU.
Configure your flavor to use a dedicated CPU policy. To do so, set the hw:cpu_policy parameter to dedicated on the flavor. For example:
# openstack flavor set --property hw:cpu_policy=dedicated 99
Make sure that your resource quota has enough pCPUs for the Real-time Compute nodes to consume.
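The following hedged example relates the flavor to the emulator thread settings described earlier: setting the hw:emulator_threads_policy extra spec to share directs the emulator threads of instances that use the flavor onto the host CPUs defined by NovaComputeCpuSharedSet, assuming your release supports this extra spec:

# openstack flavor set --property hw:emulator_threads_policy=share 99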
Optimizing your network configuration
Depending on the needs of your deployment, you might need to set parameters in the network-environment.yaml file to tune your network for certain real-time workloads.
To review an example configuration optimized for OVS-DPDK, see the Configuring the OVS-DPDK parameters section of the Network Functions Virtualization Planning and Configuration Guide.
Configuring huge pages
It is recommended to set the default huge page size to 1GB. Otherwise, TLB flushes might create jitter in the vCPU execution. For general information about using huge pages, see the Running DPDK applications web page.
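For example, to request 1GB huge pages for instances that use a particular flavor, you can set the hw:mem_page_size extra spec. This is a hedged sketch that reuses the r1.small flavor name from the earlier launch example:

# openstack flavor set --property hw:mem_page_size=1GB r1.small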
Disabling Performance Monitoring Unit (PMU) emulation
Instances can provide PMU metrics when you specify an image or flavor with a virtual PMU (vPMU). Providing PMU metrics introduces latency.
The vPMU defaults to enabled when NovaLibvirtCPUMode is set to host-passthrough.
If you do not need PMU metrics, disable the vPMU to reduce latency by setting the PMU property to "False" in the image or flavor that you use to create the instance, as shown in the example after this list:
- Image: hw_pmu=False
- Flavor: hw:pmu=False
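For example, a hedged sketch of disabling the vPMU on an existing image and flavor, substituting your own image and flavor names:

# openstack image set --property hw_pmu=False <image>
# openstack flavor set --property hw:pmu=False <flavor>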