18.6. Optimizing virtual machine CPU performance


Much like physical CPUs in host machines, vCPUs are critical to virtual machine (VM) performance. As a result, optimizing vCPUs can have a significant impact on the resource efficiency of your VMs. To optimize your vCPU:

  1. Adjust how many host CPUs are assigned to the VM. You can do this by using the CLI or the web console.
  2. Ensure that the vCPU model is aligned with the CPU model of the host. For example, to set the testguest1 VM to use the CPU model of the host:

    # virt-xml testguest1 --edit --cpu host-model
  3. Manage kernel same-page merging (KSM).
  4. If your host machine uses Non-Uniform Memory Access (NUMA), you can also configure NUMA for its VMs. This maps the host’s CPU and memory processes onto the CPU and memory processes of the VM as closely as possible. In effect, NUMA tuning provides the vCPU with a more streamlined access to the system memory allocated to the VM, which can improve the vCPU processing effectiveness.

    For details, see Configuring NUMA in a virtual machine and Virtual machine performance optimization for specific workloads.

18.6.1. vCPU overcommitment

Virtual CPU (vCPU) overcommitment allows you to have a setup where the sum of all vCPUs in virtual machines (VMs) running on a host exceeds the number of physical CPUs on the host. However, you might experience performance deterioration when simultaneously running more cores in your VMs than are physically available on the host.

For best performance, assign VMs with only as many vCPUs as are required to run the intended workloads in each VM.

vCPU overcommitment suggestions:

  • Assign the minimum number of vCPUs required by the VM’s workloads.
  • Avoid overcommitting vCPUs in production without extensive testing.
  • If overcommitting vCPUs, a typically safe ratio is 5 vCPUs to 1 physical CPU for loads under 100%.
  • It is not recommended to have more than 10 total allocated vCPUs per physical processor core.
  • Monitor CPU usage to prevent performance degradation under heavy loads.
Important

Applications that use 100% of memory or processing resources may become unstable in overcommitted environments. Do not overcommit memory or CPUs in a production environment without extensive testing, as the CPU overcommit ratio is workload-dependent.
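The suggested ratios above can be expressed as a simple check. The following is a hedged sketch (the `overcommit_status` helper and its thresholds restate this section's rules of thumb; they are not hard hypervisor limits):

```python
# Sketch of the overcommit guidance: compare the planned total of vCPUs
# across all VMs against the host's physical CPU count, using the
# suggested 5:1 (loaded) and 10:1 (absolute) rule-of-thumb ratios.

def overcommit_status(total_vcpus: int, physical_cpus: int) -> str:
    ratio = total_vcpus / physical_cpus
    if ratio <= 5:
        return "within the suggested 5:1 ratio"
    if ratio <= 10:
        return "above 5:1 - test carefully under load"
    return "above 10:1 - not recommended"

print(overcommit_status(40, 16))   # ratio 2.5
print(overcommit_status(200, 16))  # ratio 12.5
```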

18.6.2. Adding and removing virtual CPUs by using the command line

To increase or optimize the CPU performance of a virtual machine (VM), you can add or remove virtual CPUs (vCPUs) assigned to the VM.

When performed on a running VM, this is also referred to as vCPU hot plugging and hot unplugging. However, note that vCPU hot unplugging is not supported in RHEL 10, and Red Hat highly discourages its use.

Procedure

  1. Optional: View the current state of the vCPUs in the selected VM. For example, to display the number of vCPUs on the testguest VM:

    # virsh vcpucount testguest
    maximum      config         4
    maximum      live           2
    current      config         2
    current      live           1

    This output indicates that testguest is currently using 1 vCPU, and 1 more vCPU can be hot plugged to it to increase the VM’s performance. However, after a reboot, the number of vCPUs testguest uses changes to 2, and it becomes possible to hot plug 2 more vCPUs.
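The arithmetic behind this interpretation can be sketched as follows; the sample string stands in for the live `virsh vcpucount` output:

```python
# Hedged sketch: parse the sample `virsh vcpucount` output shown above and
# derive how many vCPUs can still be hot plugged right now
# (maximum live minus current live).
sample = """\
maximum      config         4
maximum      live           2
current      config         2
current      live           1
"""

counts = {}
for line in sample.splitlines():
    scope, kind, value = line.split()
    counts[(scope, kind)] = int(value)

hotpluggable = counts[("maximum", "live")] - counts[("current", "live")]
print(hotpluggable)  # 1 more vCPU can be hot plugged now
```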

  2. Adjust the maximum number of vCPUs that can be attached to the VM, which takes effect on the VM’s next boot.

    For example, to increase the maximum vCPU count for the testguest VM to 8:

    # virsh setvcpus testguest 8 --maximum --config

    Note that the maximum may be limited by the CPU topology, host hardware, the hypervisor, and other factors.

  3. Adjust the current number of vCPUs attached to the VM, up to the maximum configured in the previous step. For example:

    • To increase the number of vCPUs attached to the running testguest VM to 4:

      # virsh setvcpus testguest 4 --live

      This increases the performance and host load footprint of testguest until the VM’s next boot.

    • To permanently decrease the number of vCPUs attached to the testguest VM to 1:

      # virsh setvcpus testguest 1 --config

      This decreases the performance and host load footprint of testguest after the VM’s next boot. However, if needed, additional vCPUs can be hot plugged to the VM to temporarily increase its performance.

Verification

  • Confirm that the vCPU state of the VM reflects your changes.

    # virsh vcpucount testguest
    maximum      config         8
    maximum      live           4
    current      config         1
    current      live           4

18.6.3. Managing virtual CPUs by using the web console

By using the RHEL 10 web console, you can review and configure virtual CPUs used by virtual machines (VMs) to which the web console is connected.

Prerequisites

  • The web console VM plug-in is installed on your system.

Procedure

  1. Log in to the RHEL 10 web console.
  2. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  3. Click edit next to the number of vCPUs in the Overview pane.

    The vCPU details dialog appears.

  4. Configure the virtual CPUs for the selected VM.

    • vCPU Count - The number of vCPUs currently in use.

      Note

      The vCPU count cannot be greater than the vCPU Maximum.

    • vCPU Maximum - The maximum number of virtual CPUs that can be configured for the VM. If this value is higher than the vCPU Count, additional vCPUs can be attached to the VM.
    • Sockets - The number of sockets to expose to the VM.
    • Cores per socket - The number of cores for each socket to expose to the VM.
    • Threads per core - The number of threads for each core to expose to the VM.

      Important

      Note that the Sockets, Cores per socket, and Threads per core options adjust the CPU topology of the VM. This may be beneficial for vCPU performance and may impact the functionality of certain software in the guest OS. If a different setting is not required by your deployment, keep the default values.

  5. Click Apply.

    The virtual CPUs for the VM are configured.

  6. If the VM is running, restart it for the changes to virtual CPU settings to take effect.
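The relationship between the topology fields in step 4 can be sketched as a product: the effective vCPU count is Sockets × Cores per socket × Threads per core, and it must not exceed the configured vCPU Maximum. A minimal illustration:

```python
# Hedged sketch of the CPU topology arithmetic behind the web console
# fields: the number of vCPUs the guest sees is the product of the
# Sockets, Cores per socket, and Threads per core values.

def vcpu_count(sockets: int, cores_per_socket: int, threads_per_core: int) -> int:
    return sockets * cores_per_socket * threads_per_core

print(vcpu_count(1, 4, 1))  # 4 vCPUs: one 4-core socket, no SMT
print(vcpu_count(2, 2, 2))  # 8 vCPUs: two 2-core sockets with 2 threads per core
```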

18.6.4. Configuring NUMA in a virtual machine

You can use several methods to configure Non-Uniform Memory Access (NUMA) settings of a virtual machine (VM) on a RHEL 10 host.

For ease of use, you can set up a VM’s NUMA configuration by using automated utilities and services. However, manual NUMA setup is more likely to yield a significant performance improvement.

Prerequisites

  • The host is a NUMA-compatible machine. To detect whether this is the case, use the virsh nodeinfo command and see the NUMA cell(s) line:

    # virsh nodeinfo
    CPU model:           x86_64
    CPU(s):              48
    CPU frequency:       1200 MHz
    CPU socket(s):       1
    Core(s) per socket:  12
    Thread(s) per core:  2
    NUMA cell(s):        2
    Memory size:         67012964 KiB

    If the value of the line is 2 or greater, the host is NUMA-compatible.

  • Optional: You have the numactl package installed on the host.

    # dnf install numactl
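The NUMA-compatibility check above can be scripted. The following is a hedged sketch: the `nodeinfo` variable holds a saved sample line in place of running `virsh nodeinfo` live:

```shell
# Extract the "NUMA cell(s)" value and decide NUMA compatibility.
# On a real host, replace the sample with:
#   nodeinfo=$(virsh nodeinfo | grep 'NUMA cell(s)')
nodeinfo='NUMA cell(s):        2'
cells=$(printf '%s\n' "$nodeinfo" | awk -F': *' '{print $2}')
if [ "$cells" -ge 2 ]; then
  echo "NUMA-compatible ($cells cells)"
else
  echo "not NUMA-compatible"
fi
```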

Procedure

  1. Set the VM’s NUMA policy to Preferred. For example, to configure the testguest5 VM:

    # virt-xml testguest5 --edit --vcpus placement=auto
    # virt-xml testguest5 --edit --numatune mode=preferred
  2. Enable automatic NUMA balancing, which automatically aligns the VM’s CPU and memory resources.

    # echo 1 > /proc/sys/kernel/numa_balancing
  3. Start the numad service.

    # systemctl start numad
  4. Optional: Tune NUMA settings manually by assigning specific host NUMA nodes to a certain VM. This can improve how efficiently the VM’s vCPUs use host memory.

    1. Use the numactl command to view the NUMA topology on the host:

      # numactl --hardware
      
      available: 2 nodes (0-1)
      node 0 size: 18156 MB
      node 0 free: 9053 MB
      node 1 size: 18180 MB
      node 1 free: 6853 MB
      node distances:
      node   0   1
        0:  10  20
        1:  20  10
    2. Edit the XML configuration of a VM to assign CPU and memory resources to specific NUMA nodes. For example, the following configuration sets testguest6 to use vCPUs 0-7 on NUMA node 0 and vCPUs 8-15 on NUMA node 1. Each node is also assigned 16 GiB of the VM’s memory.

      # virsh edit <testguest6>
      
      <domain type='kvm'>
        <name>testguest6</name>
        ...
        <vcpu placement='static'>16</vcpu>
        ...
        <cpu ...>
          <numa>
            <cell id='0' cpus='0-7' memory='16' unit='GiB'/>
            <cell id='1' cpus='8-15' memory='16' unit='GiB'/>
          </numa>
        ...
      </domain>
      Note

      For best performance results, do not assign a VM NUMA cell more memory than is available on the corresponding host NUMA node.

    3. If the VM is running, restart it to apply the configuration.

18.6.5. Configuring virtual CPU pinning

To improve the CPU performance of a virtual machine (VM), you can pin a virtual CPU (vCPU) to a specific physical CPU thread on the host. This ensures that the vCPU will have its own dedicated physical CPU thread, which can significantly improve the vCPU performance.

To further optimize the CPU performance, you can also pin QEMU process threads associated with a specified VM to a specific host CPU.

Procedure

  1. Check the CPU topology on the host:

    # lscpu -p=node,cpu
    
    Node,CPU
    0,0
    0,1
    0,2
    0,3
    0,4
    0,5
    0,6
    0,7
    1,0
    1,1
    1,2
    1,3
    1,4
    1,5
    1,6
    1,7

    In this example, the output contains NUMA nodes and the available physical CPU threads on the host.

  2. Check the number of vCPU threads inside the VM. For example, run the following command in the guest operating system:

    # lscpu -p=node,cpu
    
    Node,CPU
    0,0
    0,1
    0,2
    0,3

    In this example, the output contains NUMA nodes and the available vCPU threads inside the VM.

  3. Pin specific vCPU threads from a VM to a specific host CPU or range of CPUs. This is a safe method of improving vCPU performance.

    For example, the following commands pin vCPU threads 0 to 3 of the testguest6 VM to host CPUs 1, 3, 5, 7, respectively:

    # virsh vcpupin testguest6 0 1
    # virsh vcpupin testguest6 1 3
    # virsh vcpupin testguest6 2 5
    # virsh vcpupin testguest6 3 7
  4. Optional: Verify whether the vCPU threads are successfully pinned to CPUs.

    # virsh vcpupin testguest6
    VCPU   CPU Affinity
    ----------------------
    0      1
    1      3
    2      5
    3      7
  5. Optional: After pinning vCPU threads, you can also pin QEMU process threads associated with a specified VM to a specific host CPU or range of CPUs. This can further help the QEMU process to run more efficiently on the physical CPU.

    For example, the following commands pin the QEMU process thread of testguest6 to CPUs 2 and 4, and verify this was successful:

    # virsh emulatorpin testguest6 2,4
    # virsh emulatorpin testguest6
    emulator: CPU Affinity
    ----------------------------------
           *: 2,4
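The per-vCPU pinning commands in step 3 follow a simple pattern (vCPU n to host CPU 2n+1) and can be generated with a loop. This is a hedged sketch: the commands are echoed rather than executed, so it is safe to run anywhere; on a real host, drop the `echo` to apply them:

```shell
# Generate the four vcpupin commands from step 3, mapping vCPU n of
# testguest6 to host CPU 2n+1 (i.e., CPUs 1, 3, 5, 7).
vm=testguest6
for vcpu in 0 1 2 3; do
  echo virsh vcpupin "$vm" "$vcpu" $((vcpu * 2 + 1))
done
```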

18.6.6. Configuring virtual CPU capping

You can use virtual CPU (vCPU) capping to limit the amount of CPU resources a virtual machine (VM) can use. vCPU capping can improve the overall performance by preventing excessive use of host’s CPU resources by a single VM and by making it easier for the hypervisor to manage CPU scheduling.

Procedure

  1. View the current vCPU scheduling configuration on the host.

    # virsh schedinfo <vm_name>
    
    Scheduler      : posix
    cpu_shares     : 0
    vcpu_period    : 0
    vcpu_quota     : 0
    emulator_period: 0
    emulator_quota : 0
    global_period  : 0
    global_quota   : 0
    iothread_period: 0
    iothread_quota : 0
  2. To configure an absolute vCPU cap for a VM, set the vcpu_period and vcpu_quota parameters. Both parameters use a numerical value that represents a time duration in microseconds.

    1. Set the vcpu_period parameter by using the virsh schedinfo command. For example:

      # virsh schedinfo <vm_name> --set vcpu_period=100000

      In this example, the vcpu_period is set to 100,000 microseconds, which means the scheduler enforces vCPU capping during this time interval.

      You can also use the --live --config options to configure a running VM without restarting it.

    2. Set the vcpu_quota parameter by using the virsh schedinfo command. For example:

      # virsh schedinfo <vm_name> --set vcpu_quota=50000

      In this example, the vcpu_quota is set to 50,000 microseconds, which specifies the maximum amount of CPU time that the VM can use during the vcpu_period interval. Because vcpu_quota is set to half of vcpu_period, the VM can use up to 50% of the CPU time during that interval.

      You can also use the --live --config options to configure a running VM without restarting it.
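The cap arithmetic above reduces to a ratio: the fraction of CPU time available per vCPU is vcpu_quota divided by vcpu_period. A minimal sketch:

```python
# Hedged sketch of the vCPU capping arithmetic: the percentage of CPU
# time a vCPU may consume in each scheduling interval is
# vcpu_quota / vcpu_period.

def cpu_cap_percent(vcpu_quota_us: int, vcpu_period_us: int) -> float:
    return 100 * vcpu_quota_us / vcpu_period_us

print(cpu_cap_percent(50_000, 100_000))  # 50.0, the 50% cap from the example
```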

Verification

  • Check that the vCPU scheduling parameters have the correct values.

    # virsh schedinfo <vm_name>
    
    Scheduler      : posix
    cpu_shares     : 2048
    vcpu_period    : 100000
    vcpu_quota     : 50000
    ...

18.6.7. Tuning CPU weights

The CPU weight (or CPU shares) setting controls how much CPU time a virtual machine (VM) receives compared to other running VMs. By increasing the CPU weight of a specific VM, you can ensure that this VM gets more CPU time relative to other VMs. To prioritize CPU time allocation between multiple VMs, set the cpu_shares parameter.

The possible CPU weight values range from 0 to 262144 and the default value for a new KVM VM is 1024.

Procedure

  1. Check the current CPU weight of a VM.

    # virsh schedinfo <vm_name>
    
    Scheduler      : posix
    cpu_shares     : 1024
    vcpu_period    : 0
    vcpu_quota     : 0
    emulator_period: 0
    emulator_quota : 0
    global_period  : 0
    global_quota   : 0
    iothread_period: 0
    iothread_quota : 0
  2. Adjust the CPU weight to a preferred value.

    # virsh schedinfo <vm_name> --set cpu_shares=2048
    
    Scheduler      : posix
    cpu_shares     : 2048
    vcpu_period    : 0
    vcpu_quota     : 0
    emulator_period: 0
    emulator_quota : 0
    global_period  : 0
    global_quota   : 0
    iothread_period: 0
    iothread_quota : 0

    In this example, cpu_shares is set to 2048. This means that if all other VMs have the value set to 1024, this VM gets approximately twice the amount of CPU time.

    You can also use the --live --config options to configure a running VM without restarting it.
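Because cpu_shares is a relative weight, the CPU time a VM receives under contention is its weight divided by the sum of all competing weights. This hedged sketch illustrates the "approximately twice" claim from the example:

```python
# Hedged sketch of cpu_shares behavior under full contention: each VM's
# share of CPU time is its weight relative to the total of all weights.

def share_fraction(weight: int, all_weights: list[int]) -> float:
    return weight / sum(all_weights)

# One VM at 2048 competing with two VMs at the default 1024:
print(share_fraction(2048, [2048, 1024, 1024]))  # 0.5, i.e. half the CPU time
print(share_fraction(1024, [2048, 1024, 1024]))  # 0.25 for each default VM
```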

18.6.8. Enabling and disabling kernel same-page merging

Kernel Same-Page Merging (KSM) improves memory density by sharing identical memory pages between virtual machines (VMs). Therefore, enabling KSM might improve memory efficiency of your VM deployment.

However, enabling KSM also increases CPU utilization, and might negatively affect overall performance depending on the workload.

In RHEL 10, KSM is disabled by default. To enable KSM and test its impact on your VM performance, see the following instructions.

Prerequisites

  • Root access to your host system.

Procedure

  1. Enable KSM:

    Warning

    Enabling KSM increases CPU utilization and affects overall CPU performance.

    1. Install the ksmtuned service:

      # dnf install ksmtuned
    2. Start the service:

      • To enable KSM for a single session, use the systemctl utility to start the ksm and ksmtuned services.

        # systemctl start ksm
        # systemctl start ksmtuned
      • To enable KSM persistently, use the systemctl utility to enable the ksm and ksmtuned services.

        # systemctl enable ksm
        Created symlink /etc/systemd/system/multi-user.target.wants/ksm.service → /usr/lib/systemd/system/ksm.service.
        
        # systemctl enable ksmtuned
        Created symlink /etc/systemd/system/multi-user.target.wants/ksmtuned.service → /usr/lib/systemd/system/ksmtuned.service.
  2. Monitor the performance and resource consumption of VMs on your host to evaluate the benefits of activating KSM. Specifically, ensure that the additional CPU usage by KSM does not offset the memory improvements and does not cause additional performance issues. In latency-sensitive workloads, also pay attention to cross-NUMA page merges.
  3. Optional: If KSM has not improved your VM performance, disable it:

    • To disable KSM for a single session, use the systemctl utility to stop ksm and ksmtuned services.

      # systemctl stop ksm
      # systemctl stop ksmtuned
    • To disable KSM persistently, use the systemctl utility to disable ksm and ksmtuned services.

      # systemctl disable ksm
      Removed /etc/systemd/system/multi-user.target.wants/ksm.service.
      # systemctl disable ksmtuned
      Removed /etc/systemd/system/multi-user.target.wants/ksmtuned.service.
      Note

      Memory pages shared between VMs before deactivating KSM will remain shared. To stop sharing, delete all the PageKSM pages in the system by using the following command:

      # echo 2 > /sys/kernel/mm/ksm/run

      However, this command increases memory usage, and might cause performance problems on your host or your VMs.
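The monitoring in step 2 can be grounded in the counters under /sys/kernel/mm/ksm. The following is a hedged sketch with sample values standing in for a read of the pages_sharing counter; the 4 KiB page size is an assumption for x86_64:

```python
# Hedged sketch: estimate memory saved by KSM. Per the kernel KSM
# documentation, /sys/kernel/mm/ksm/pages_sharing counts the duplicate
# mappings that were deduplicated, i.e. the pages actually saved.

def ksm_saved_mib(pages_sharing: int, page_size: int = 4096) -> float:
    return pages_sharing * page_size / (1024 * 1024)

# Sample value in place of: int(open("/sys/kernel/mm/ksm/pages_sharing").read())
print(ksm_saved_mib(100_000))  # 390.625 MiB saved for 100k deduplicated pages
```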
