Chapter 3. Configuring the host environment for real-time virtual machines


To ensure that your RHEL 10 system can work as a host for real-time virtual machines, you must optimize the host’s performance and test its latency between input and system response.

To optimize your RHEL 10 system as a host for real-time virtual machines (VMs), configure and enable the realtime-virtual-host profile for TuneD.

Prerequisites

  • Your host meets the system requirements for real-time virtualization.
  • The irqbalance service is disabled. If irqbalance is enabled, its handling of interrupt requests (IRQs) might conflict with TuneD. To disable irqbalance:

    # systemctl stop irqbalance && systemctl disable irqbalance

Procedure

  1. Start editing the configuration of the realtime-virtual-host profile for TuneD. To do so, open the /etc/tuned/realtime-virtual-host-variables.conf file in a text editor.
  2. Adjust the configuration in /etc/tuned/realtime-virtual-host-variables.conf to suit your requirements. Consider especially the following factors in the setup:

    • The number of cores and NUMA nodes your machine has
    • The number of RT guests that you plan to run
    • The number of vCPUs that each RT guest will have

    The most important modifications to /etc/tuned/realtime-virtual-host-variables.conf include the following:

    • Update the isolated_cores parameter to adjust which host cores per socket will be dedicated to RT virtualization tasks and which cores will remain for system maintenance on the host (also known as housekeeping).

      For example, the following setting uses core 3, core 6, and cores 8 to 15 for RT tasks, and all the other cores for housekeeping:

      isolated_cores=3,6,8-15

      Note that by default, one core per socket (core 0) is used for housekeeping and all other cores for RT tasks.

      Important

      Core 0 must always be set as a housekeeping core. Using core 0 for RT tasks disrupts the RT functionality.

    • Enable IRQ isolation for kernel-managed IRQs. To do so, ensure the following line is not commented out in the configuration:

      isolate_managed_irq=Y

      If IRQ isolation is disabled, host kernel-managed IRQs can interrupt isolated cores, which might cause unexpected latency.

    • Uncomment the netdev_queue_count parameter and set its value to the number of housekeeping cores.
  3. Save the changes to /etc/tuned/realtime-virtual-host-variables.conf.
  4. Activate the real-time virtual host profile.

    # tuned-adm profile realtime-virtual-host
  5. Restart the host.
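Taken together, the modifications from step 2 might look like the following sketch of /etc/tuned/realtime-virtual-host-variables.conf. The core numbers reuse the example above, and netdev_queue_count=2 is an assumption for a host with two housekeeping cores; adjust both to your topology.

```ini
# Cores dedicated to RT virtualization tasks (example values from above)
isolated_cores=3,6,8-15
# Keep kernel-managed IRQs off the isolated cores
isolate_managed_irq=Y
# Assumption: this host has two housekeeping cores
netdev_queue_count=2
```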

To further decrease latency in virtual machines (VMs) on RHEL 10, set the host to use huge memory pages. Huge pages can significantly enhance the performance of applications that use large amounts of memory, which is generally the case for RT applications.

For more information about huge pages, see Configuring huge pages.

Procedure

  1. Set the default huge page size to 1 gibibyte (GiB).

    # grubby --args="default_hugepagesz=1G" --update-kernel=ALL
  2. Reserve huge pages on the host.

    # echo <X> > /sys/devices/system/node/node<Y>/hugepages/<hugepage-size_dir>/nr_hugepages

    In this command, replace the variables as follows:

    • <X> with the number of huge pages to reserve. This value depends on the number of VMs and how much memory they will have. If you are running a single VM, start with two 1 GiB pages.
    • <Y> with the number of the NUMA node where real-time vCPUs are pinned.
    • <hugepage-size_dir> with the huge-page size expressed in kB. For example, for 2 MiB huge pages, this is hugepages-2048kB; for 1 GiB huge pages, hugepages-1048576kB.
    Important

    This command sets up huge pages transiently. As a result, you must run it again after every host reboot, before you start any real-time VMs. To avoid this, perform the following optional step, which makes the huge-page configuration persistent.

  3. Optional: If you want to make the huge-page configuration persistent, also do the following:

    1. Create a file named /usr/lib/systemd/system/hugetlb-gigantic-pages.service with the following contents:

      [Unit]
      Description=HugeTLB Gigantic Pages Reservation
      DefaultDependencies=no
      Before=dev-hugepages.mount
      ConditionPathExists=/sys/devices/system/node
      ConditionKernelCommandLine=default_hugepagesz=1G
      
      [Service]
      Type=oneshot
      RemainAfterExit=yes
      ExecStart=/usr/lib/systemd/hugetlb-reserve-pages
      
      [Install]
      WantedBy=sysinit.target
    2. Create a file named /usr/lib/systemd/hugetlb-reserve-pages with the following contents:

      #!/bin/bash
      nodes_path=/sys/devices/system/node/
      if [ ! -d $nodes_path ]; then
      	echo "ERROR: $nodes_path does not exist"
      	exit 1
      fi
      
      reserve_pages()
      {
      	echo $1 > $nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages
      }
      
      # This example reserves 2 1G pages on node0 and 1 1G page on node1. You
      # can modify it to your needs or add more lines to reserve memory on
      # other nodes. Don't forget to uncomment the lines, otherwise they won't
      # be executed.
      # reserve_pages 2 node0
      # reserve_pages 1 node1
    3. Enable early boot reservation by using the following commands:

      # chmod +x /usr/lib/systemd/hugetlb-reserve-pages
      # systemctl enable hugetlb-gigantic-pages
      # systemctl status hugetlb-gigantic-pages
    4. Uncomment the bottom two lines of /usr/lib/systemd/hugetlb-reserve-pages and update them based on your huge-page reservation requirements.
  4. Reboot to apply all the configuration changes.
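The placeholders in step 2 can be assembled into a concrete sysfs path. The following sketch uses example values (2 pages on NUMA node 0, 1 GiB page size) and only prints the target path rather than writing to it:

```shell
# Sketch: build the nr_hugepages path from the <X>, <Y>, and
# <hugepage-size_dir> placeholders (example values; adjust to your host).
pages=2                               # <X>: number of huge pages to reserve
node=0                                # <Y>: NUMA node where RT vCPUs are pinned
dir="hugepages-$((1024 * 1024))kB"    # 1 GiB expressed in kB
target="/sys/devices/system/node/node${node}/hugepages/${dir}/nr_hugepages"
echo "$target"
# On the real host, run as root:  echo "$pages" > "$target"
```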

To verify that the BIOS of your real-time host has been successfully set up for low-latency workloads, use the hwlatdetect program.

Procedure

  1. Run the hwlatdetect utility for at least an hour, and ensure that the measured latency does not exceed 1 microsecond (μs).

    # hwlatdetect --threshold=1us --duration=60m
    
      hwlatdetect:  test duration 60 minutes
    	parameters:
    		Latency threshold: 1us
    		Sample window:     1000000us
    		Sample width:      500000us
    		Non-sampling period:  500000us
    		Output File:       None
    
    Starting test
    test finished
    Max Latency: 0us
    Samples recorded: 0
    Samples exceeding threshold: 0
  2. Optional: For improved validation, run the same test for 24 hours.

    # hwlatdetect --threshold=1us --duration=24h
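To evaluate a finished run programmatically, you can parse the summary lines of the saved output. The sketch below operates on an inline copy of the sample output shown above; on a real host you would redirect hwlatdetect output to a file and read that file instead:

```shell
# Extract the "Samples exceeding threshold" count from hwlatdetect output.
# "sample" is an inline copy of the summary above, for illustration only.
sample="Max Latency: 0us
Samples recorded: 0
Samples exceeding threshold: 0"
exceeding=$(printf '%s\n' "$sample" | awk -F': *' '/Samples exceeding threshold/ {print $2}')
echo "$exceeding"   # 0
```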

After you have configured the host for real-time virtual machines (VMs), you must verify that it is set up correctly. To do so, check the settings for the kernel, huge pages, and isolated CPUs, and ensure that the TuneD profile is active.

Procedure

  1. View the content of the /proc/cmdline file, and check that the values for the following parameters correspond with how you configured them:

    • Real-time kernel
    • Huge pages
    • Isolated CPUs

      For example:

      # cat /proc/cmdline
      
      BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-70.13.1.rt21.83.el10_0.x86_64 root=/dev/mapper/rhel_virtlab505-root ro crashkernel=auto resume=/dev/mapper/rhel_virtlab505-swap rd.lvm.lv=rhel_virtlab505/root rd.lvm.lv=rhel_virtlab505/swap console=ttyS1,115200 default_hugepagesz=1G skew_tick=1 isolcpus=1,3,5,7,9,11,13,14,15 intel_pstate=disable nosoftlockup tsc=nowatchdog nohz=on nohz_full=1,3,5,7,9,11,13,14,15 rcu_nocbs=1,3,5,7,9,11,13,14,15
  2. Ensure that the realtime-virtual-host TuneD profile is active.

    $ tuned-adm active
    Current active profile: realtime-virtual-host
  3. Check the number of huge memory pages. For example:

    $ cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
    
    2
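Steps 1 to 3 can also be scripted. The following sketch checks a kernel command line for the expected parameters; it uses an inline sample string so it runs anywhere, but on the host you would set cmdline=$(cat /proc/cmdline) instead:

```shell
# Check that the expected RT parameters are present on the kernel command line.
# The cmdline value here is a shortened sample; read /proc/cmdline on a host.
cmdline="default_hugepagesz=1G skew_tick=1 isolcpus=1,3,5 nohz_full=1,3,5 rcu_nocbs=1,3,5"
missing=0
for param in default_hugepagesz=1G isolcpus= nohz_full= rcu_nocbs=; do
  case "$cmdline" in
    *"$param"*) echo "ok: $param" ;;
    *)          echo "missing: $param"; missing=1 ;;
  esac
done
echo "missing=$missing"   # missing=0
```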

To ensure that the RHEL for Real Time host or guest that you set up maintains low latency when under heavy load, perform real-time latency stress tests.

Procedure

  1. Add stress to the housekeeping cores. To do so, start compiling the Linux kernel with twice as many parallel jobs as the number of housekeeping cores that you set up in the previous sections.

    1. Clone the Linux kernel repository and move to its directory.

      # git clone https://github.com/torvalds/linux.git && cd linux
    2. Create a default configuration for the kernel compilation.

      # make defconfig
    3. Start compiling the Linux kernel.

      # while true; do make -j <double-number-of-housekeeping-cpus> && make clean; done
  2. Run cyclictest on the host for 12 hours. In the following example, replace <list_isolated_cores> with a list of cores isolated for real-time tasks, such as 1,3,5,7,9,11,13,14,15.

    # cyclictest -m -q -p95 --policy=fifo -D 12h -h60 -t <number_of_isolated_cpus> -a <list_isolated_cores> --mainaffinity <list_housekeeping_cpus> -i 200

    When using a modern high-end AMD64 or Intel 64 processor (also known as x86_64), the optimal value of Max Latencies in the output is under 40 microseconds (μs). To terminate the test if the measured latency exceeds 40 μs, add the -b 40 option to the command.

  3. Perform an OS-level latency test (OSLAT) on the host for 12 hours.

    # ./oslat --cpu-list <list_isolated_cores> --rtprio 1 -D 12h -w memmove -m 4K

    When using a modern high-end x86_64 processor, the optimal value of Maximum in the output is under 20 μs. To terminate the test if the measured latency exceeds 20 μs, add the -T 20 option to the command.
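The kernel-compilation stress step above sizes make -j at twice the housekeeping core count. A minimal sketch of that arithmetic, using a hypothetical housekeeping core list:

```shell
# Compute the make -j value as twice the number of housekeeping cores.
housekeeping_cores="0,2"    # hypothetical example list; substitute your own
count=$(printf '%s\n' "$housekeeping_cores" | tr ',' '\n' | wc -l)
jobs=$((count * 2))
echo "$jobs"   # 4
# Then run:  while true; do make -j "$jobs" && make clean; done
```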
