Chapter 24. Configuring Real-Time Compute

In some use cases, you might need instances on your Compute nodes to adhere to low-latency policies and perform real-time processing. Real-time Compute nodes include a real-time capable kernel, specific virtualization modules, and optimized deployment parameters to facilitate real-time processing requirements and minimize latency.

The process to enable Real-time Compute includes:

  • configuring the BIOS settings of the Compute nodes
  • building a real-time image with real-time kernel and Real-Time KVM (RT-KVM) kernel module
  • assigning the ComputeRealTime role to the Compute nodes

For a use-case example of Real-time Compute deployment for NFV workloads, see the Example: Configuring OVS-DPDK with ODL and VXLAN tunnelling section in the Network Functions Virtualization Planning and Configuration Guide.

24.1. Preparing Your Compute Nodes for Real-Time

Note

Real-time Compute nodes are supported only with Red Hat Enterprise Linux version 7.5 or later.

Before you can deploy Real-time Compute in your overcloud, you must enable Red Hat Enterprise Linux Real-Time KVM (RT-KVM), configure your BIOS to support real-time, and build the real-time image.

Prerequisites

  • You must use Red Hat certified servers for your RT-KVM Compute nodes. See Red Hat Enterprise Linux for Real Time 7 certified servers for details.
  • You must enable the rhel-7-server-nfv-rpms repository for RT-KVM to build the real-time image.

    Note

    You need a separate subscription to Red Hat OpenStack Platform for Real Time before you can access this repository. For details on managing repositories and subscriptions for your undercloud, see the Registering and updating your undercloud section in the Director Installation and Usage guide.

    To check which packages will be installed from the repository, run the following command:

    $ yum repo-pkgs rhel-7-server-nfv-rpms list
    Loaded plugins: product-id, search-disabled-repos, subscription-manager
    Available Packages
    kernel-rt.x86_64                                                                     3.10.0-693.21.1.rt56.639.el7                                                       rhel-7-server-nfv-rpms
    kernel-rt-debug.x86_64                                                               3.10.0-693.21.1.rt56.639.el7                                                       rhel-7-server-nfv-rpms
    kernel-rt-debug-devel.x86_64                                                         3.10.0-693.21.1.rt56.639.el7                                                       rhel-7-server-nfv-rpms
    kernel-rt-debug-kvm.x86_64                                                           3.10.0-693.21.1.rt56.639.el7                                                       rhel-7-server-nfv-rpms
    kernel-rt-devel.x86_64                                                               3.10.0-693.21.1.rt56.639.el7                                                       rhel-7-server-nfv-rpms
    kernel-rt-doc.noarch                                                                 3.10.0-693.21.1.rt56.639.el7                                                       rhel-7-server-nfv-rpms
    kernel-rt-kvm.x86_64                                                                 3.10.0-693.21.1.rt56.639.el7                                                       rhel-7-server-nfv-rpms
    [ output omitted…]
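
    If your undercloud host is registered with Red Hat Subscription Manager, you can enable the repository with the following command, for example:

    $ sudo subscription-manager repos --enable=rhel-7-server-nfv-rpms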

Building the real-time image

To build the overcloud image for Real-time Compute nodes:

  1. Install the libguestfs-tools package on the undercloud to get the virt-customize tool:

    (undercloud) [stack@undercloud-0 ~]$ sudo yum install libguestfs-tools
  2. Extract the images:

    (undercloud) [stack@undercloud-0 ~]$ tar -xf /usr/share/rhosp-director-images/overcloud-full.tar
    (undercloud) [stack@undercloud-0 ~]$ tar -xf /usr/share/rhosp-director-images/ironic-python-agent.tar
  3. Copy the default image:

    (undercloud) [stack@undercloud-0 ~]$ cp overcloud-full.qcow2 overcloud-realtime-compute.qcow2
  4. Register the image and configure the required subscriptions:

    (undercloud) [stack@undercloud-0 ~]$  virt-customize -a overcloud-realtime-compute.qcow2 --run-command 'subscription-manager register --username=[username] --password=[password]'
    [  0.0] Examining the guest ...
    [ 10.0] Setting a random seed
    [ 10.0] Running: subscription-manager register --username=[username] --password=[password]
    [ 24.0] Finishing off

    Replace the username and password values with your Red Hat customer account details. For general information about building a Real-time overcloud image, see the Modifying the Red Hat Enterprise Linux OpenStack Platform Overcloud Image with virt-customize knowledgebase article.

  5. Find the SKU of the Red Hat OpenStack Platform for Real Time subscription. You can find the SKU on a system that is already registered to Red Hat Subscription Manager with the same account and credentials. For example:

    $ sudo subscription-manager list
  6. Attach the Red Hat OpenStack Platform for Real Time subscription to the image:

    (undercloud) [stack@undercloud-0 ~]$  virt-customize -a overcloud-realtime-compute.qcow2 --run-command 'subscription-manager attach --pool=[subscription-pool]'
  7. Create a script to configure the real-time kernel on the image:

    (undercloud) [stack@undercloud-0 ~]$ cat rt.sh
      #!/bin/bash
    
      set -eux
    
      subscription-manager repos --enable=[REPO_ID]
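      # Remove the standard kernel so that the real-time kernel installed
      # below becomes the default boot kernel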
      yum -v -y --setopt=protected_packages= erase kernel.$(uname -m)
      yum -v -y install kernel-rt kernel-rt-kvm tuned-profiles-nfv-host
    
      # END OF SCRIPT
  8. Run the script to configure the real-time image:

    (undercloud) [stack@undercloud-0 ~]$ virt-customize -a overcloud-realtime-compute.qcow2 -v --run rt.sh 2>&1 | tee virt-customize.log
  9. Re-label SELinux:

    (undercloud) [stack@undercloud-0 ~]$ virt-customize -a overcloud-realtime-compute.qcow2 --selinux-relabel
  10. Extract vmlinuz and initrd:

    (undercloud) [stack@undercloud-0 ~]$ mkdir image
    (undercloud) [stack@undercloud-0 ~]$ guestmount -a overcloud-realtime-compute.qcow2 -i --ro image
    (undercloud) [stack@undercloud-0 ~]$ cp image/boot/vmlinuz-3.10.0-862.rt56.804.el7.x86_64 ./overcloud-realtime-compute.vmlinuz
    (undercloud) [stack@undercloud-0 ~]$ cp image/boot/initramfs-3.10.0-862.rt56.804.el7.x86_64.img ./overcloud-realtime-compute.initrd
    (undercloud) [stack@undercloud-0 ~]$ guestunmount image
    Note

    The software version in the vmlinuz and initramfs filenames varies with the kernel version.

  11. Upload the image:

    (undercloud) [stack@undercloud-0 ~]$ openstack overcloud image upload --update-existing --os-image-name overcloud-realtime-compute.qcow2

You now have a real-time image you can use with the ComputeRealTime composable role on select Compute nodes.
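
To verify that the upload succeeded, you can list the images that are registered with director, for example:

(undercloud) [stack@undercloud-0 ~]$ openstack image list | grep realtime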

Modifying BIOS settings on Real-time Compute nodes

To reduce latency on your Real-time Compute nodes, you must modify the BIOS settings in the Compute nodes. You should disable all options for the following components in your Compute node BIOS settings:

  • Power Management
  • Hyper-Threading
  • CPU sleep states
  • Logical processors

See Setting BIOS parameters for descriptions of these settings and the impact of disabling them. See your hardware manufacturer documentation for complete details on how to change BIOS settings.

24.2. Deploying the Real-time Compute Role

Red Hat OpenStack Platform Director provides the template for the ComputeRealTime role, which you can then use to deploy Real-time Compute nodes. However, you must perform additional steps to designate Compute nodes for real-time.

  1. Based on the /usr/share/openstack-tripleo-heat-templates/environments/compute-real-time-example.yaml file, create a compute-real-time.yaml environment file that sets the parameters for the ComputeRealTime role.

    cp /usr/share/openstack-tripleo-heat-templates/environments/compute-real-time-example.yaml /home/stack/templates/compute-real-time.yaml

    The file must include values for the following parameters:

    • IsolCpusList and NovaVcpuPinSet. List of isolated CPU cores and virtual CPU pins to reserve for real-time workloads. These values depend on the CPU hardware of your Real-time Compute nodes.
    • KernelArgs. Arguments to pass to the kernel of the Real-time Compute nodes. For example, you can use default_hugepagesz=1G hugepagesz=1G hugepages=<number_of_1G_pages_to_reserve> hugepagesz=2M hugepages=<number_of_2M_pages> to define the memory requirements of guests that have huge pages with multiple sizes. In this example, the default size is 1GB but you can also reserve 2M huge pages, as shown in the sketch after this list.
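
    For example, a compute-real-time.yaml file that reserves huge pages of both sizes might look like the following sketch. The CPU lists and page counts are illustrative values only; set them according to the hardware of your Real-time Compute nodes:

    parameter_defaults:
      ComputeRealTimeParameters:
        IsolCpusList: "2-19,22-39"
        NovaVcpuPinSet: "2-19,22-39"
        KernelArgs: "default_hugepagesz=1G hugepagesz=1G hugepages=32 hugepagesz=2M hugepages=512"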
  2. Add the ComputeRealTime role to your roles data file and regenerate the file. For example:

    $ openstack overcloud roles generate -o /home/stack/templates/rt_roles_data.yaml Controller Compute ComputeRealTime

    This command generates a ComputeRealTime role with contents similar to the following example, and also sets the ImageDefault option to overcloud-realtime-compute.

    ###############################################################
    # Role: ComputeRealTime                                       #
    ###############################################################
    
    - name: ComputeRealTime
      description: |
        Compute role that is optimized for real-time behaviour. When using this role
        it is mandatory that an overcloud-realtime-compute image is available and
        the role specific parameters IsolCpusList and NovaVcpuPinSet are set
        accordingly to the hardware of the real-time compute nodes.
      CountDefault: 1
      networks:
        - InternalApi
        - Tenant
        - Storage
      HostnameFormatDefault: '%stackname%-computerealtime-%index%'
      disable_upgrade_deployment: True
      ImageDefault: overcloud-realtime-compute
      RoleParametersDefault:
        TunedProfileName: "realtime-virtual-host"
        KernelArgs: ""      # these must be set in an environment file or similar
        IsolCpusList: ""    # according to the hardware of real-time nodes
        NovaVcpuPinSet: ""  #
      ServicesDefault:
        - OS::TripleO::Services::Aide
        - OS::TripleO::Services::AuditD
        - OS::TripleO::Services::CACerts
        - OS::TripleO::Services::CephClient
        - OS::TripleO::Services::CephExternal
        - OS::TripleO::Services::CertmongerUser
        - OS::TripleO::Services::Collectd
        - OS::TripleO::Services::ComputeCeilometerAgent
        - OS::TripleO::Services::ComputeNeutronCorePlugin
        - OS::TripleO::Services::ComputeNeutronL3Agent
        - OS::TripleO::Services::ComputeNeutronMetadataAgent
        - OS::TripleO::Services::ComputeNeutronOvsAgent
        - OS::TripleO::Services::Docker
        - OS::TripleO::Services::Fluentd
        - OS::TripleO::Services::Ipsec
        - OS::TripleO::Services::Iscsid
        - OS::TripleO::Services::Kernel
        - OS::TripleO::Services::LoginDefs
        - OS::TripleO::Services::MySQLClient
        - OS::TripleO::Services::NeutronBgpVpnBagpipe
        - OS::TripleO::Services::NeutronLinuxbridgeAgent
        - OS::TripleO::Services::NeutronVppAgent
        - OS::TripleO::Services::NovaCompute
        - OS::TripleO::Services::NovaLibvirt
        - OS::TripleO::Services::NovaMigrationTarget
        - OS::TripleO::Services::Ntp
        - OS::TripleO::Services::ContainersLogrotateCrond
        - OS::TripleO::Services::OpenDaylightOvs
        - OS::TripleO::Services::Rhsm
        - OS::TripleO::Services::RsyslogSidecar
        - OS::TripleO::Services::Securetty
        - OS::TripleO::Services::SensuClient
        - OS::TripleO::Services::SkydiveAgent
        - OS::TripleO::Services::Snmp
        - OS::TripleO::Services::Sshd
        - OS::TripleO::Services::Timezone
        - OS::TripleO::Services::TripleoFirewall
        - OS::TripleO::Services::TripleoPackages
        - OS::TripleO::Services::Vpp
        - OS::TripleO::Services::OVNController
        - OS::TripleO::Services::OVNMetadataAgent
        - OS::TripleO::Services::Ptp

    For general information about custom roles and about the roles-data.yaml file, see the Roles section.

  3. Create the compute-realtime flavor to tag nodes that you want to designate for real-time workloads. For example:

    $ source ~/stackrc
    $ openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 compute-realtime
    $ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="compute-realtime" compute-realtime
  4. Tag each node that you want to designate for real-time workloads with the compute-realtime profile.

    $ openstack baremetal node set --property capabilities='profile:compute-realtime,boot_option:local' <NODE UUID>
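
    Optionally, verify the capabilities that you set on each node, for example:

    $ openstack baremetal node show <NODE UUID> -f value -c properties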
  5. Map the ComputeRealTime role to the compute-realtime flavor by creating an environment file with the following content:

    parameter_defaults:
      OvercloudComputeRealTimeFlavor: compute-realtime
  6. Run the openstack overcloud deploy command with the -e option and specify all the environment files that you created, as well as the new roles file. For example:

    $ openstack overcloud deploy -r /home/stack/templates/rt_roles_data.yaml -e /home/stack/templates/compute-real-time.yaml -e <FLAVOR_ENV_FILE>

24.3. Sample Deployment and Testing Scenario

The following example procedure uses a simple single-node deployment to test that the environment variables and other supporting configuration are set up correctly. Actual performance results might vary, depending on the number of nodes and guests that you deploy in your cloud.

  1. Create the compute-real-time.yaml file with the following parameters:

    parameter_defaults:
      ComputeRealTimeParameters:
        IsolCpusList: "1"
        NovaVcpuPinSet: "1"
        KernelArgs: "default_hugepagesz=1G hugepagesz=1G hugepages=16"
  2. Create a new roles_data.yaml file with the ComputeRealTime role.

    $ openstack overcloud roles generate -o ~/rt_roles_data.yaml Controller ComputeRealTime

    This roles data file defines a deployment with one Controller node and one Real-time Compute node.

  3. Log in to the Real-time Compute node and check the following parameters. Make sure to replace <...> with the values of the relevant parameters from the compute-real-time.yaml file.

    [root@overcloud-computerealtime-0 ~]# uname -a
    Linux overcloud-computerealtime-0 3.10.0-693.11.1.rt56.632.el7.x86_64 #1 SMP PREEMPT RT Wed Dec 13 13:37:53 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
    [root@overcloud-computerealtime-0 ~]# cat /proc/cmdline
    BOOT_IMAGE=/boot/vmlinuz-3.10.0-693.11.1.rt56.632.el7.x86_64 root=UUID=45ae42d0-58e7-44fe-b5b1-993fe97b760f ro console=tty0 crashkernel=auto console=ttyS0,115200 default_hugepagesz=1G hugepagesz=1G hugepages=16
    [root@overcloud-computerealtime-0 ~]# tuned-adm active
    Current active profile: realtime-virtual-host
    [root@overcloud-computerealtime-0 ~]# grep ^isolated_cores /etc/tuned/realtime-virtual-host-variables.conf
    isolated_cores=<IsolCpusList>
    [root@overcloud-computerealtime-0 ~]# cat /usr/lib/tuned/realtime-virtual-host/lapic_timer_adv_ns
    X (X != 0)
    [root@overcloud-computerealtime-0 ~]# cat /sys/module/kvm/parameters/lapic_timer_advance_ns
    X (X != 0)
    [root@overcloud-computerealtime-0 ~]# cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
    X (X != 0)
    [root@overcloud-computerealtime-0 ~]# grep ^vcpu_pin_set /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf
    vcpu_pin_set=<NovaVcpuPinSet>

24.4. Launching and Tuning Real-Time Instances

After you deploy and configure Real-time Compute nodes, you can launch real-time instances on those nodes. You can further configure these real-time instances with CPU pinning, NUMA topology filters, and huge pages.

Launching a real-time instance

  1. Make sure that the compute-realtime flavor exists on the overcloud, as described in the Deploying the Real-time Compute Role section.
  2. Launch the real-time instance.

    # openstack server create  --image <rhel> --flavor r1.small --nic net-id=<dpdk-net> test-rt
  3. Optionally, verify that the instance uses the assigned emulator threads.

    # virsh dumpxml <instance-id> | grep vcpu -A1
    <vcpu placement='static'>4</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='1'/>
      <vcpupin vcpu='1' cpuset='3'/>
      <vcpupin vcpu='2' cpuset='5'/>
      <vcpupin vcpu='3' cpuset='7'/>
      <emulatorpin cpuset='0-1'/>
      <vcpusched vcpus='2-3' scheduler='fifo' priority='1'/>
    </cputune>

Pinning CPUs and setting emulator thread policy

To ensure that there are enough CPUs on each Real-time Compute node for real-time workloads, you need to pin at least one virtual CPU (vCPU) of an instance to a physical CPU (pCPU) on the host. The emulator threads for that vCPU then remain dedicated to that pCPU.

Configure your flavor to use a dedicated CPU policy. To do so, set the hw:cpu_policy parameter to dedicated on the flavor. For example:

# openstack flavor set --property hw:cpu_policy=dedicated 99
Note

Make sure that your resource quota has enough pCPUs for the Real-time Compute nodes to consume.
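
The fifo scheduling policy shown in the earlier virsh dumpxml output is typically controlled with the hw:cpu_realtime flavor properties. The following sketch excludes vCPUs 0-1 from real-time scheduling so that they remain available for emulator threads and housekeeping; adjust the mask to match the vCPU count of your flavor:

# openstack flavor set --property hw:cpu_realtime=yes --property hw:cpu_realtime_mask=^0-1 99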

Optimizing your network configuration

Depending on the needs of your deployment, you might need to set parameters in the network-environment.yaml file to tune your network for certain real-time workloads.

To review an example configuration optimized for OVS-DPDK, see the Configuring the OVS-DPDK parameters section of the Network Functions Virtualization Planning and Configuration Guide.

Configuring huge pages

It is recommended to set the default huge page size to 1GB. Otherwise, TLB flushes might create jitter in the vCPU execution. For general information about using huge pages, see the Running DPDK applications web page.
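
For example, one way to request 1GB huge pages for the memory backing of an instance is to set the hw:mem_page_size property on the flavor, following the earlier flavor example:

# openstack flavor set --property hw:mem_page_size=1GB 99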
