8.2. Memory Tuning on Virtual Machines
8.2.1. Memory Monitoring Tools
Memory usage in virtual machines can be monitored with the same tools used in bare metal environments. Tools useful for monitoring memory usage and diagnosing memory-related problems include:
top
vmstat
numastat
/proc/
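For example, running the following commands on the host gives a quick overview of memory and NUMA activity while guests are running. The qemu-kvm process name pattern and the 5-second sampling interval are illustrative assumptions, not required values:
# vmstat 5 3
# numastat qemu-kvm
# grep -i commit /proc/meminfo
The first command samples memory and swap activity three times at 5-second intervals, the second reports per-NUMA-node memory usage of processes matching qemu-kvm, and the third shows how much memory is committed relative to the host's commit limit.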
Note
For details on using these performance tools, see the Red Hat Enterprise Linux 7 Performance Tuning Guide and the man pages for these commands.
8.2.2. Memory Tuning with virsh
The optional <memtune> element in the guest XML configuration allows administrators to configure guest virtual machine memory settings manually. If <memtune> is omitted, the memory settings default to those used when the virtual machine was created.
Display or set memory parameters in the <memtune> element of a virtual machine with the virsh memtune command, replacing values according to your environment:
# virsh memtune virtual_machine --parameter size
Optional parameters include:
hard_limit - The maximum memory the virtual machine can use, in kibibytes (blocks of 1024 bytes).
Warning
Setting this limit too low can result in the virtual machine being killed by the kernel.
soft_limit - The memory limit to enforce during memory contention, in kibibytes (blocks of 1024 bytes).
swap_hard_limit - The maximum memory plus swap the virtual machine can use, in kibibytes (blocks of 1024 bytes). The swap_hard_limit value must be more than the hard_limit value.
min_guarantee - The guaranteed minimum memory allocation for the virtual machine, in kibibytes (blocks of 1024 bytes).
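For example, the following commands set a hard limit of 4 GiB (4194304 KiB) and a soft limit of 3 GiB (3145728 KiB) for a guest named virtual_machine, and then display the resulting values. The guest name and sizes are placeholders for illustration only:
# virsh memtune virtual_machine --hard-limit 4194304
# virsh memtune virtual_machine --soft-limit 3145728
# virsh memtune virtual_machine
Running virsh memtune with no parameters, as in the last command, displays the current memory parameters for the guest.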
Note
See
# virsh help memtune
for more information on using the virsh memtune command.
The optional <memoryBacking> element may contain several elements that influence how virtual memory pages are backed by host pages.
Setting locked prevents the host from swapping out memory pages belonging to the guest. Add the following to the guest XML to lock the virtual memory pages in the host's memory:
<memoryBacking>
  <locked/>
</memoryBacking>
Important
When setting locked, a hard_limit must be set in the <memtune> element to the maximum memory configured for the guest, plus any memory consumed by the process itself.
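As an illustration, a guest configured with 4 GiB of memory could combine the two elements as follows. The hard_limit value of 4718592 KiB (4.5 GiB) is an assumed figure chosen to leave headroom for the QEMU process itself, not a recommended setting:
<memtune>
  <hard_limit unit='KiB'>4718592</hard_limit>
</memtune>
<memoryBacking>
  <locked/>
</memoryBacking>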
Setting nosharepages prevents the host from merging identical memory pages used among guests. To instruct the hypervisor to disable shared pages for a guest, add the following to the guest's XML:
<memoryBacking>
  <nosharepages/>
</memoryBacking>
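Page merging on the host is performed by KSM, so one way to observe whether pages are being shared before and after setting nosharepages is to read the KSM counters in sysfs. This is an optional, illustrative check:
# cat /sys/kernel/mm/ksm/pages_shared
# cat /sys/kernel/mm/ksm/pages_sharing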
8.2.3. Huge Pages and Transparent Huge Pages
AMD64 and Intel 64 CPUs usually address memory in 4kB pages, but they are capable of using larger 2MB or 1GB pages known as huge pages. KVM guests can be deployed with huge page memory support in order to improve performance by increasing CPU cache hits against the Translation Lookaside Buffer (TLB).
A kernel feature enabled by default in Red Hat Enterprise Linux 7, huge pages can significantly increase performance, particularly for large memory and memory-intensive workloads. Red Hat Enterprise Linux 7 is able to manage large amounts of memory more effectively by increasing the page size through the use of huge pages. To increase the effectiveness and convenience of managing huge pages, Red Hat Enterprise Linux 7 uses Transparent Huge Pages (THP) by default. For more information on huge pages and THP, see the Performance Tuning Guide.
Red Hat Enterprise Linux 7 systems support 2MB and 1GB huge pages, which can be allocated at boot or at runtime. See Section 8.2.3.3, “Enabling 1 GB huge pages for guests at boot or runtime” for instructions on enabling multiple huge page sizes.
8.2.3.1. Configuring Transparent Huge Pages
Transparent huge pages (THP) are an abstraction layer that automates most aspects of creating, managing, and using huge pages. By default, they automatically optimize system settings for performance.
Note
Using KSM can reduce the occurrence of transparent huge pages, so it is recommended to disable KSM before enabling THP. For more information, see Section 8.3.4, “Deactivating KSM”.
Transparent huge pages are enabled by default. To check the current status, run:
# cat /sys/kernel/mm/transparent_hugepage/enabled
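The active mode is shown in brackets. On a default Red Hat Enterprise Linux 7 host the output typically looks like the following, although the exact value depends on local configuration:
[always] madvise never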
To enable transparent huge pages to be used by default, run:
# echo always > /sys/kernel/mm/transparent_hugepage/enabled
This will set /sys/kernel/mm/transparent_hugepage/enabled to always.
To disable transparent huge pages:
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
Transparent Huge Page support does not prevent the use of static huge pages. However, when static huge pages are not used, KVM will use transparent huge pages instead of the regular 4kB page size.
8.2.3.2. Configuring Static Huge Pages
In some cases, greater control of huge pages is preferable. To use static huge pages on guests, add the following to the guest XML configuration using virsh edit:
<memoryBacking>
  <hugepages/>
</memoryBacking>
This instructs the host to allocate memory to the guest using huge pages, instead of using the default page size.
View the current huge pages value by running the following command:
# cat /proc/sys/vm/nr_hugepages
Procedure 8.1. Setting huge pages
The following example procedure shows the commands to set huge pages.
- View the current huge pages value:
# cat /proc/meminfo | grep Huge
AnonHugePages:      2048 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
- Huge pages are set in increments of 2MB. To set the number of huge pages to 25000 (25000 x 2 MiB, or approximately 49 GiB of host memory), use the following command:
# echo 25000 > /proc/sys/vm/nr_hugepages
Note
To make the setting persistent, add the following lines to the /etc/sysctl.conf file on the guest machine, with X being the intended number of huge pages:
# echo 'vm.nr_hugepages = X' >> /etc/sysctl.conf
# sysctl -p
Afterwards, add transparent_hugepage=never to the kernel boot parameters by appending it to the end of the /kernel line in the /etc/grub2.cfg file on the guest.
- Mount the huge pages:
# mount -t hugetlbfs hugetlbfs /dev/hugepages
- Add the following lines to the memoryBacking section in the virtual machine's XML configuration:
<hugepages>
  <page size='1' unit='GiB'/>
</hugepages>
- Restart libvirtd:
# systemctl restart libvirtd
- Start the VM:
# virsh start virtual_machine
- Restart the VM if it is already running:
# virsh reset virtual_machine
- Verify the changes in /proc/meminfo:
# cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
HugePages_Total:   25000
HugePages_Free:    23425
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
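As an optional sanity check, not part of the documented procedure, you can confirm that the memory backing is present in the active guest configuration and that the number of free huge pages drops once the guest is running:
# virsh dumpxml virtual_machine | grep -A 3 memoryBacking
# grep HugePages_Free /proc/meminfo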
Huge pages can benefit not only the host but also guests; however, the total huge pages value for guests must be less than what is available on the host.
8.2.3.3. Enabling 1 GB huge pages for guests at boot or runtime
Red Hat Enterprise Linux 7 systems support 2MB and 1GB huge pages, which can be allocated at boot or at runtime.
Procedure 8.2. Allocating 1GB huge pages at boot time
- To allocate different sizes of huge pages at boot time, add the following parameters to the kernel command line, specifying the number of huge pages. This example allocates four 1GB huge pages and 1024 2MB huge pages:
'default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024'
Change this command line to specify a different number of huge pages to be allocated at boot. A sketch of one way to apply these parameters with grubby follows this procedure.
Note
The next two steps must also be completed the first time you allocate 1GB huge pages at boot time.
- Mount the 2MB and 1GB huge pages on the host:
# mkdir /dev/hugepages1G
# mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
# mkdir /dev/hugepages2M
# mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
- Add the following lines to the memoryBacking section in the virtual machine's XML configuration:
<hugepages>
  <page size='1' unit='GiB'/>
</hugepages>
- Restart libvirtd to enable the use of 1GB huge pages on guests:
# systemctl restart libvirtd
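One way to apply the boot-time parameters from the first step persistently is to append them to the existing kernel boot entries with grubby, as sketched below. Editing GRUB_CMDLINE_LINUX in /etc/default/grub and regenerating the GRUB configuration is an equally valid alternative; in either case the host must be rebooted for the parameters to take effect:
# grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024"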
Procedure 8.3. Allocating 1GB huge pages at runtime
1GB huge pages can also be allocated at runtime. Runtime allocation allows the system administrator to choose which NUMA node to allocate those pages from. However, runtime page allocation can be more prone to allocation failure than boot time allocation due to memory fragmentation.
- To allocate different sizes of huge pages at runtime, use the following command, replacing values for the number of huge pages, the NUMA node to allocate them from, and the huge page size:
# echo 4 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
# echo 1024 > /sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages
This example command allocates four 1GB huge pages from node1 and 1024 2MB huge pages from node3.
These huge page settings can be changed at any time with the above command, depending on the amount of free memory on the host system.
Note
The next two steps must also be completed the first time you allocate 1GB huge pages at runtime.
- Mount the 2MB and 1GB huge pages on the host:
# mkdir /dev/hugepages1G
# mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
# mkdir /dev/hugepages2M
# mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
- Add the following lines to the memoryBacking section in the virtual machine's XML configuration:
<hugepages>
  <page size='1' unit='GiB'/>
</hugepages>
- Restart libvirtd to enable the use of 1GB huge pages on guests:
# systemctl restart libvirtd
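Because runtime allocation can fall short when host memory is fragmented, it is worth confirming that the requested pages were actually reserved before starting guests. The following check uses the example node from Procedure 8.3; if the value read back is lower than the number requested, the kernel could not find enough contiguous memory, and the allocation should be retried or performed at boot time instead:
# cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages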