4.10. Configuring High Performance Virtual Machines, Templates, and Pools
You can configure a virtual machine for high performance, so that it runs with performance metrics as close to bare metal as possible. When you choose high performance optimization, the virtual machine is configured with a set of automatic settings and recommended manual settings for maximum efficiency.
The high performance option is only accessible in the Administration Portal, by selecting High Performance from the Optimized for dropdown list in the Edit or New virtual machine, template, or pool window. This option is not available in the VM Portal.
The high performance option is supported by Red Hat Virtualization 4.2 and later; it is not available in earlier versions.
Virtual Machines
If you change the optimization mode of a running virtual machine to high performance, some configuration changes require restarting the virtual machine.
To change the optimization mode of a new or existing virtual machine to high performance, you may need to make manual changes to the cluster and to the pinned host configuration first.
A high performance virtual machine has certain limitations, because enhanced performance has a trade-off in decreased flexibility:
- If pinning is set for CPU threads, I/O threads, emulator threads, or NUMA nodes, according to the recommended settings, only a subset of cluster hosts can be assigned to the high performance virtual machine.
- Many devices are automatically disabled, which limits the virtual machine’s usability.
Templates and Pools
High performance templates and pools are created and edited in the same way as virtual machines. If a high performance template or pool is used to create new virtual machines, those virtual machines inherit this property and its configurations. Certain settings, however, are not inherited and must be set manually:
- CPU pinning
- Virtual NUMA and NUMA pinning topology
- I/O and emulator threads pinning topology
- Pass-through Host CPU
4.10.1. Creating a High Performance Virtual Machine, Template, or Pool
To create a high performance virtual machine, template, or pool:
In the New or Edit window, select High Performance from the Optimized for drop-down menu.
Selecting this option automatically performs certain configuration changes to this virtual machine, which you can view by clicking different tabs. You can change them back to their original settings or override them. (See Automatic High Performance Configuration Settings for details.) If you change a setting, its latest value is saved.
Click OK.
- If you have not set any manual configurations, the High Performance Virtual Machine/Pool Settings screen describing the recommended manual configurations appears.
- If you have set some of the manual configurations, the High Performance Virtual Machine/Pool Settings screen displays the settings you have not made.
- If you have set all the recommended manual configurations, the High Performance Virtual Machine/Pool Settings screen does not appear.
If the High Performance Virtual Machine/Pool Settings screen appears, click Cancel to return to the New or Edit window to perform the manual configurations. See Configuring the Recommended Manual Settings for details.
Alternatively, click OK to ignore the recommendations. The result may be a drop in the level of performance.
Click OK to save the virtual machine, template, or pool.
You can view the optimization type in the General tab of the details view of the virtual machine, pool, or template.
Certain configurations can override the high performance settings. For example, if you select an instance type for a virtual machine before selecting High Performance from the Optimized for drop-down menu and performing the manual configuration, the instance type configuration will not affect the high performance configuration. If, however, you select the instance type after the high performance configurations, you should verify the final configuration in the different tabs to ensure that the high performance configurations have not been overridden by the instance type.
The last-saved configuration usually takes priority.
Support for instance types is now deprecated, and will be removed in a future release.
4.10.1.1. Automatic High Performance Configuration Settings
The following table summarizes the automatic settings. The Enabled (Y/N) column indicates whether each configuration is enabled or disabled. The Applies to column indicates the relevant resources:
- VM - Virtual machine
- T - Template
- P - Pool
- C - Cluster
Setting | Enabled (Y/N) | Applies to |
---|---|---|
Headless Mode (Console tab) | Y | VM, T, P |
USB Enabled (Console tab) | N | VM, T, P |
Smartcard Enabled (Console tab) | N | VM, T, P |
Soundcard Enabled (Console tab) | N | VM, T, P |
Enable VirtIO serial console (Console tab) | Y | VM, T, P |
Allow manual migration only (Host tab) | Y | VM, T, P |
Pass-Through Host CPU (Host tab) | Y | VM, T, P |
Highly Available [1] (High Availability tab) | N | VM, T, P |
No-Watchdog (High Availability tab) | Y | VM, T, P |
Memory Balloon Device (Resource Allocation tab) | N | VM, T, P |
I/O Threads Enabled [2] (Resource Allocation tab) | Y | VM, T, P |
Paravirtualized Random Number Generator PCI (virtio-rng) device (Random Generator tab) | Y | VM, T, P |
I/O and emulator threads pinning topology | Y | VM, T |
CPU cache layer 3 | Y | VM, T, P |
1. Highly Available is not automatically enabled. If you select it manually, high availability should be enabled for pinned hosts only.
2. Number of I/O threads = 1.
4.10.1.2. I/O and Emulator Threads Pinning Topology (Automatic Settings)
The I/O and emulator threads pinning topology is a new configuration setting for Red Hat Virtualization 4.2. It requires that I/O threads, NUMA nodes, and NUMA pinning be enabled and set for the virtual machine. Otherwise, a warning will appear in the engine log.
Pinning topology:
- The first two CPUs of each NUMA node are pinned.
- If all vCPUs fit into one NUMA node of the host:
  - The first two vCPUs are automatically reserved/pinned
  - The remaining vCPUs are available for manual vCPU pinning
- If the virtual machine spans more than one NUMA node:
  - The first two CPUs of the NUMA node with the most pins are reserved/pinned
  - The remaining pinned NUMA node(s) are for vCPU pinning only
Pools do not support I/O and emulator threads pinning.
If a host CPU is pinned to both a vCPU and I/O and emulator threads, a warning will appear in the log and you will be asked to consider changing the CPU pinning topology to avoid this situation.
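The reservation rule above can be sketched as follows. This is an illustrative model of the behavior described in this section, not oVirt source code, and the NUMA layout and pin counts are made-up inputs:

```python
def reserved_cpus(numa_nodes, pins_per_node):
    """Model of the rule above: reserve the first two host CPUs of the NUMA
    node holding the most vCPU pins (the only node, when the virtual machine
    fits into one node) for the I/O and emulator threads.

    numa_nodes: {node_id: [host CPU ids]}; pins_per_node: {node_id: pin count}.
    """
    busiest = max(pins_per_node, key=pins_per_node.get)
    reserved = numa_nodes[busiest][:2]
    # Every other CPU remains available for vCPU pinning only.
    free = [c for cpus in numa_nodes.values() for c in cpus if c not in reserved]
    return reserved, free

# Two 4-CPU host NUMA nodes; node 0 carries three vCPU pins, node 1 carries one:
reserved, free = reserved_cpus({0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}, {0: 3, 1: 1})
print(reserved)  # [0, 1]
```

Avoiding overlap between `reserved` and your manual vCPU pins is exactly what prevents the warning described below.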
4.10.1.3. High Performance Icons
The following icons indicate the states of a high performance virtual machine in the Administration Portal.
Icon | Description |
---|---|
| High performance virtual machine |
| High performance virtual machine with Next Run configuration |
| Stateless, high performance virtual machine |
| Stateless, high performance virtual machine with Next Run configuration |
| Virtual machine in a high performance pool |
| Virtual machine in a high performance pool with Next Run configuration |
4.10.2. Configuring the Recommended Manual Settings
You can configure the recommended manual settings in either the New or the Edit windows.
If a recommended setting is not performed, the High Performance Virtual Machine/Pool Settings screen displays the recommended setting when you save the resource.
The recommended manual settings are:
4.10.2.1. Manual High Performance Configuration Settings
The following table summarizes the recommended manual settings. The Enabled (Y/N) column indicates whether each configuration should be enabled or disabled. The Applies to column indicates the relevant resources:
- VM - Virtual machine
- T - Template
- P - Pool
- C - Cluster
Setting | Enabled (Y/N) | Applies to |
---|---|---|
NUMA Node Count (Host tab) | Y | VM |
Tune Mode (NUMA Pinning screen) | Y | VM |
NUMA Pinning (Host tab) | Y | VM |
CPU Pinning topology (Resource Allocation tab) | Y | VM, P |
hugepages (Custom Properties tab) | Y | VM, T, P |
KSM (Optimization tab) | N | C |
4.10.2.2. Pinning CPUs
To pin vCPUs to a specific host’s physical CPU:
- In the Host tab, select the Specific Host(s) radio button.
- In the Resource Allocation tab, enter the CPU Pinning Topology, verifying that the configuration fits the pinned host's configuration. See Virtual Machine Resource Allocation settings explained for information about the syntax of this field.
This field is populated automatically and the CPU topology is updated when automatic NUMA pinning is activated.
Verify that the virtual machine configuration is compatible with the host configuration:
- A virtual machine’s number of sockets must not be greater than the host’s number of sockets.
- A virtual machine’s number of cores per virtual socket must not be greater than the host’s number of cores.
- CPU-intensive workloads perform best when the host and virtual machine expect the same cache usage. To achieve the best performance, a virtual machine’s number of threads per core must not be greater than that of the host.
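The three compatibility checks above reduce to element-wise comparisons. A minimal sketch (the topology tuples are illustrative inputs, not values read from the engine):

```python
def pinning_compatible(vm, host):
    """vm and host are (sockets, cores_per_socket, threads_per_core) tuples.
    Each virtual machine value must not exceed the corresponding host value."""
    return all(v <= h for v, h in zip(vm, host))

# A 2-socket, 4-cores-per-socket, 1-thread VM fits a 2-socket, 8-core, 2-thread host:
print(pinning_compatible((2, 4, 1), (2, 8, 2)))  # True
# A 4-socket VM does not fit a 2-socket host:
print(pinning_compatible((4, 4, 1), (2, 8, 2)))  # False
```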
CPU pinning has the following requirements:
- If the host is NUMA-enabled, the host’s NUMA settings (memory and CPUs) must be considered because the virtual machine has to fit the host’s NUMA configuration.
- The I/O and emulator threads pinning topology must be considered.
- CPU pinning can only be set for virtual machines and pools, but not for templates. Therefore, you must set CPU pinning manually whenever you create a high performance virtual machine or pool, even if they are based on a high performance template.
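For reference, the pinning string can be checked mechanically. The sketch below assumes the `v#p[_v#p]` syntax described in Virtual Machine Resource Allocation settings explained, where `p` may combine commas, ranges (`-`), and exclusions (`^`); it is an illustration, not the engine's validator:

```python
def parse_pinning(topology):
    """Parse a CPU Pinning Topology string, e.g. '0#0_1#1-3,^2'
    -> {vCPU: set of allowed host CPUs}."""
    pinning = {}
    for pair in topology.split("_"):
        vcpu, pcpus = pair.split("#")
        allowed, excluded = set(), set()
        for part in pcpus.split(","):
            # '^' marks host CPUs to exclude from the allowed set.
            target = excluded if part.startswith("^") else allowed
            part = part.lstrip("^")
            if "-" in part:  # a range such as 1-3
                lo, hi = part.split("-")
                target.update(range(int(lo), int(hi) + 1))
            else:
                target.add(int(part))
        pinning[int(vcpu)] = allowed - excluded
    return pinning

print(parse_pinning("0#0_1#1-3,^2"))  # {0: {0}, 1: {1, 3}}
```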
4.10.2.3. Setting the NUMA Pinning Policy
To set the NUMA Pinning Policy, you need a NUMA-enabled pinned host with at least two NUMA nodes.
To set the NUMA pinning policy manually:
- In the Host tab, click NUMA Pinning.
- In the NUMA Topology window, click and drag virtual NUMA nodes from the box on the right to the host's physical NUMA nodes on the left as required.
- Select Strict, Preferred, or Interleave from the Tune Mode drop-down list in each NUMA node. If the selected mode is Preferred, the NUMA Node Count must be set to 1.
- Click OK.
To set the NUMA pinning policy automatically:
- In the Resource Allocation tab, under CPU Allocation, select Resize and Pin NUMA from the CPU Pinning Policy drop-down list.
- Click OK.
The number of declared virtual NUMA nodes and the NUMA pinning policy must take into account:
- The host’s NUMA settings (memory and CPUs)
- The NUMA node in which the host devices are declared
- The CPU pinning topology
- The I/O and emulator threads pinning topology
- Huge page sizes
- NUMA pinning can only be set for virtual machines, not for pools or templates. You must set NUMA pinning manually when you create a high performance virtual machine based on a template.
4.10.2.4. Configuring Huge Pages
Huge pages are pre-allocated when a virtual machine starts to run (dynamic allocation is disabled by default).
To configure huge pages:
- In the Custom Properties tab, select hugepages from the custom properties list, which displays Please select a key… by default.
- Enter the huge page size in KB.
You should set the huge page size to the largest size supported by the pinned host. The recommended size for x86_64 is 1 GiB (1048576 KB).
The huge page size has the following requirements:
- The virtual machine’s huge page size must be the same size as the pinned host’s huge page size.
- The virtual machine’s memory size must fit into the selected size of the pinned host’s free huge pages.
- The NUMA node size must be a multiple of the huge page’s selected size.
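The last two requirements reduce to simple arithmetic. A sketch with illustrative numbers, sizes in KB to match the unit of the hugepages property (the first requirement, matching page sizes, is assumed to hold):

```python
GIB_KB = 1048576  # 1 GiB page size in KB, the recommended size for x86_64

def hugepages_ok(vm_memory_kb, host_free_pages, numa_node_kb, page_kb=GIB_KB):
    # The VM's memory must fit into the host's free huge pages of that size.
    fits = vm_memory_kb <= host_free_pages * page_kb
    # Each NUMA node size must be a multiple of the selected page size.
    aligned = numa_node_kb % page_kb == 0
    return fits and aligned

# A 4 GiB VM with 2 GiB virtual NUMA nodes on a host with 8 free 1 GiB pages:
print(hugepages_ok(4 * GIB_KB, 8, 2 * GIB_KB))  # True
```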
To enable dynamic allocation of huge pages:
- Disable the HugePages filter in the scheduler.
- In the `[performance]` section in `/etc/vdsm/vdsm.conf`, set the following: `use_dynamic_hugepages = true`
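To see what the pinned host currently provides, you can read the hugepage counters in `/proc/meminfo`. The parser below runs against a sample string with illustrative values; on a host you would pass it `open("/proc/meminfo").read()`:

```python
SAMPLE = """\
HugePages_Total:      16
HugePages_Free:       12
Hugepagesize:    1048576 kB
"""

def hugepage_info(meminfo_text):
    """Extract the hugepage counters (pages) and page size (KB) from meminfo."""
    info = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key.startswith("HugePages") or key == "Hugepagesize":
            info[key] = int(rest.split()[0])
    return info

print(hugepage_info(SAMPLE))
# {'HugePages_Total': 16, 'HugePages_Free': 12, 'Hugepagesize': 1048576}
```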
Comparison between dynamic and static hugepages
The following table outlines advantages and disadvantages of dynamic and static hugepages.
Setting | Advantages | Disadvantages | Recommendations |
---|---|---|---|
dynamic hugepages | The host's memory is not reserved in advance | Failure to allocate due to fragmentation | Use 2MB hugepages |
static hugepages | Predictable results | Memory is pre-allocated and unavailable for other uses | |
The following limitations apply:
- Memory hotplug/unplug is disabled
- The host’s memory resource is limited
4.10.2.5. Disabling KSM
To disable Kernel Same-page Merging (KSM) for the cluster:
- Click Compute → Clusters and select the cluster.
- Click Edit.
- In the Optimization tab, clear the Enable KSM check box.
- Click OK.