Chapter 36. Configuring huge pages
Physical memory is managed in fixed-size chunks called pages. On the x86_64 architecture, supported by Red Hat Enterprise Linux 8, the default size of a memory page is 4 KB. This default page size has proved to be suitable for general-purpose operating systems, such as Red Hat Enterprise Linux, which supports many different kinds of workloads.
However, specific applications can benefit from using larger page sizes in certain cases. For example, an application that works with a large and relatively fixed data set of hundreds of megabytes or even dozens of gigabytes can have performance issues when using 4 KB pages. Such data sets can require a huge amount of 4 KB pages, which can increase resource usage in the operating system and the CPU.
This section provides information about huge pages available in RHEL 8 and how you can configure them.
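As a quick orientation before configuring anything, you can check the base page size and the default huge page size on a running system. Both commands are standard utilities; the output shown is only illustrative:
$ getconf PAGE_SIZE
4096
$ grep Hugepagesize /proc/meminfo
Hugepagesize:       2048 kB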
36.1. Available huge page features
With Red Hat Enterprise Linux 8, you can use huge pages for applications that work with big data sets, and improve the performance of such applications.
The following huge page methods are supported in RHEL 8:
HugeTLB pages
HugeTLB pages are also called static huge pages. There are two ways of reserving HugeTLB pages:
- At boot time: Reserving huge pages at boot time increases the possibility of success because the memory has not yet been significantly fragmented. However, on NUMA machines, the number of pages is automatically split among the NUMA nodes. For more information about the parameters that influence HugeTLB page behavior at boot time, see Parameters for reserving HugeTLB pages at boot time. For information about how to use these parameters to configure HugeTLB pages at boot time, see Configuring HugeTLB at boot time.
- At runtime: Reserving huge pages at runtime allows you to reserve the huge pages per NUMA node. If the runtime reservation is done as early as possible in the boot process, the probability of memory fragmentation is lower. For more information about the parameters that influence HugeTLB page behavior at runtime, see Parameters for reserving HugeTLB pages at run time. For information about how to use these parameters to configure HugeTLB pages at runtime, see Configuring HugeTLB at run time.
Transparent HugePages (THP)
With THP, the kernel automatically assigns huge pages to processes, and therefore there is no need to manually reserve static huge pages. The following are the two modes of operation in THP:
- system-wide: The kernel tries to assign huge pages to a process whenever it is possible to allocate huge pages and the process is using a large contiguous virtual memory area.
- per-process: The kernel assigns huge pages only to the memory areas of individual processes, which you can specify by using the madvise() system call.
Note: The THP feature only supports 2 MB pages.
For more information about the parameters that influence THP behavior, see Managing transparent hugepages.
36.2. Parameters for reserving HugeTLB pages at boot time
Use the following parameters to influence HugeTLB page behavior at boot time.
For more information on how to use these parameters to configure HugeTLB pages at boot time, see Configuring HugeTLB at boot time.
Parameter | Description | Default value |
---|---|---|
hugepages | Defines the number of persistent huge pages configured in the kernel at boot time. In a NUMA system, huge pages that have this parameter defined are divided equally between nodes. You can assign huge pages to specific nodes at runtime by changing the value of the nodes in the /sys/devices/system/node/node_id/hugepages/hugepages-size/nr_hugepages file. | The default value is 0. To update this value at boot, change the value of this parameter in the /proc/sys/vm/nr_hugepages file. |
hugepagesz | Defines the size of persistent huge pages configured in the kernel at boot time. | Valid values are 2 MB and 1 GB. The default value is 2 MB. |
default_hugepagesz | Defines the default size of persistent huge pages configured in the kernel at boot time. | Valid values are 2 MB and 1 GB. The default value is 2 MB. |
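For example, a kernel command line that combines these parameters might contain the following entries. The values are illustrative only, not a recommendation, and reserving 1 GB pages requires a CPU that supports them:
default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=512
This makes 1 GB the default huge page size and reserves four 1 GB pages and 512 pages of 2 MB at boot.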
36.3. Configuring HugeTLB at boot time
HugeTLB enables the use of huge pages by reserving them at boot time, thereby minimizing memory fragmentation and ensuring that sufficient resources are available for workloads that benefit from larger memory pages.
36.3.1. Configuring HugeTLB by using kernel command line parameters
You can reserve HugeTLB pages at the earliest stage of the boot process by using kernel command-line parameters. This method provides the highest chance of successfully reserving the required number and size of huge pages, because memory is allocated during the kernel boot.
Prefer reserving HugeTLB pages by using kernel boot parameters, as this method ensures allocation of larger contiguous memory regions compared to using a systemd unit.
The examples in the procedure demonstrate how to use the command-line options for configuring HugeTLB pages. These examples do not necessarily apply to your system configuration. Review your system requirements and objectives before applying these settings in your environment.
Prerequisites
- You must have root privileges on your system.
Procedure
Update the kernel command line to include HugeTLB options.
To reserve HugeTLB pages of the default size for your architecture:
# grubby --update-kernel=ALL --args="hugepages=10"
This command reserves 10 HugeTLB pages of the default pool size. For example, on x86_64, the default pool size is 2 MB. On systems with Non-Uniform Memory Access (NUMA), the allocation is distributed evenly across NUMA nodes. If the system has two NUMA nodes, each node reserves five pages.
To reserve different sizes of HugeTLB pages, specify the hugepagesz and hugepages options in the kernel command line:
# grubby --update-kernel=ALL --args="hugepagesz=2M hugepages=10 hugepagesz=1G hugepages=1"
This command reserves 10 pages of 2 MB each and 1 page of 1 GB.
The system reserves the specified number and size of HugeTLB pages at boot time, ensuring that memory is allocated before the operating system begins normal operation.
Important: The order of the options is significant. Each hugepagesz= option must be immediately followed by its corresponding hugepages= option.
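The new arguments take effect at the next boot. After rebooting, you can confirm the reservation of the default-size pool; the output below is illustrative and assumes the first example, which reserves 10 pages of the default 2 MB size:
# grep HugePages_ /proc/meminfo
HugePages_Total:      10
HugePages_Free:       10
HugePages_Rsvd:        0
HugePages_Surp:        0
Pools of non-default sizes, such as 1 GB pages, are reported separately under /sys/kernel/mm/hugepages/.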
36.3.2. Configuring HugeTLB by using a systemd service unit
You can configure HugeTLB pages during the user-space boot process by using a systemd service unit. This method allows you to reserve large memory pages after the kernel has initialized but before most user-space services start. Although this approach is not as early as kernel command-line configuration, it can still be effective for ensuring that applications have access to the required huge pages during system operation.
Prerequisites
- You must have root privileges on your system.
Procedure
Create a new file called hugetlb-gigantic-pages.service in the /usr/lib/systemd/system/ directory and add the following content:
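The following unit file is a sketch based on the example shipped in the Linux kernel documentation for HugeTLB pages; adjust the description and conditions to your environment:
[Unit]
Description=HugeTLB Gigantic Pages Reservation
DefaultDependencies=no
Before=dev-hugepages.mount
ConditionPathExists=/sys/devices/system/node
ConditionKernelCommandLine=hugepagesz=1G

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/lib/systemd/hugetlb-reserve-pages.sh

[Install]
WantedBy=sysinit.target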
Create a new file called hugetlb-reserve-pages.sh in the /usr/lib/systemd/ directory and add the following content:
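The following script is a sketch based on the example in the same kernel documentation; it writes the requested number of 1 GB (1048576 kB) pages to the per-node sysfs files:
#!/bin/sh
nodes_path=/sys/devices/system/node/
if [ ! -d $nodes_path ]; then
    echo "ERROR: $nodes_path does not exist"
    exit 1
fi

reserve_pages()
{
    # $1 is the number of pages, $2 is the NUMA node name, for example node0
    echo $1 > $nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages
}

reserve_pages <number_of_pages> <node>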
Replace <number_of_pages> with the number of 1 GB pages you want to reserve, and <node> with the name of the node on which to reserve these pages. For example, to reserve two 1 GB pages on node0 and one 1 GB page on node1, replace <number_of_pages> with 2 for node0 and 1 for node1.
Make the script executable:
# chmod +x /usr/lib/systemd/hugetlb-reserve-pages.sh
Enable early boot reservation:
# systemctl enable hugetlb-gigantic-pages.service
Note:
- You can try reserving more 1 GB pages at runtime by writing to nr_hugepages at any time. However, to prevent failures due to memory fragmentation, reserve 1 GB pages early during the boot process.
- Reserving static huge pages can effectively reduce the amount of memory available to the system and prevent it from properly using its full memory capacity. Although a properly sized pool of reserved huge pages can be beneficial to applications that use it, an oversized or unused pool of reserved huge pages will eventually be detrimental to overall system performance. When setting a reserved huge page pool, ensure that the system can properly use its full memory capacity.
36.4. Parameters for reserving HugeTLB pages at run time
Use the following parameters to influence HugeTLB page behavior at run time.
For more information about how to use these parameters to configure HugeTLB pages at run time, see Configuring HugeTLB at run time.
Parameter | Description | File name |
---|---|---|
nr_hugepages | Defines the number of huge pages of a specified size assigned to a specified NUMA node. | /sys/devices/system/node/node_id/hugepages/hugepages-size/nr_hugepages |
nr_overcommit_hugepages | Defines the maximum number of additional huge pages that can be created and used by the system through overcommitting memory. Writing any non-zero value into this file indicates that the system obtains that number of huge pages from the kernel's normal page pool if the persistent huge page pool is exhausted. As these surplus huge pages become unused, they are then freed and returned to the kernel's normal page pool. | /proc/sys/vm/nr_overcommit_hugepages |
36.5. Configuring HugeTLB at run time
This procedure describes how to add 20 huge pages of size 2048 kB to node2.
To reserve pages based on your requirements, replace:
- 20 with the number of huge pages you wish to reserve,
- 2048kB with the size of the huge pages,
- node2 with the node on which you wish to reserve the pages.
Procedure
Display the memory statistics:
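For example, numastat from the numactl package can display per-node memory and huge page counters; the command shown is one option and its output format depends on your system:
$ numastat -cm | grep -E 'Node|Huge'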
Add the number of huge pages of a specified size to the node:
# echo 20 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
Verification
Ensure that the number of huge pages is added:
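For example, reading the per-node sysfs file used in this procedure shows the updated count. If enough free contiguous memory was available, the file reports the requested value:
# cat /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
20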
36.6. Managing transparent hugepages
Transparent hugepages (THP) are enabled by default in Red Hat Enterprise Linux 8. However, you can enable, disable, or set transparent hugepages to madvise by using runtime configuration, TuneD profiles, kernel command line parameters, or a systemd unit file.
36.6.1. Managing transparent hugepages with runtime configuration
Transparent hugepages (THP) can be managed at runtime to optimize memory usage. The runtime configuration is not persistent across system reboots.
Procedure
Check the status of THP:
$ cat /sys/kernel/mm/transparent_hugepage/enabled
Configure THP.
Enabling THP:
# echo always > /sys/kernel/mm/transparent_hugepage/enabled
Disabling THP:
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
Setting THP to madvise:
# echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
To prevent applications from allocating more memory resources than necessary, disable the system-wide transparent hugepages and only enable them for the applications that explicitly request it through the madvise system call.
Note: Sometimes, providing low latency to short-lived allocations has a higher priority than immediately achieving the best performance with long-lived allocations. In such cases, you can disable direct compaction while leaving THP enabled.
Direct compaction is a synchronous memory compaction during the huge page allocation. Disabling direct compaction provides no guarantee of saving memory, but can decrease the risk of higher latencies during frequent page faults. Also, disabling direct compaction allows synchronous compaction only of Virtual Memory Areas (VMAs) marked with madvise. Note that if the workload benefits significantly from THP, disabling direct compaction decreases performance. Disable direct compaction:
# echo never > /sys/kernel/mm/transparent_hugepage/defrag
36.6.2. Managing transparent hugepages with TuneD profiles
You can manage transparent hugepages (THP) by using TuneD profiles. The tuned.conf file provides the configuration of TuneD profiles. This configuration is persistent across system reboots.
Prerequisites
- The TuneD package is installed.
- The TuneD service is enabled.
Procedure
Copy the directory of the active profile, my_profile in this example, to create a new profile:
$ sudo cp -R /usr/lib/tuned/my_profile /usr/lib/tuned/my_copied_profile
Edit the tuned.conf file in the copied profile:
$ sudo vi /usr/lib/tuned/my_copied_profile/tuned.conf
To enable THP, add the following lines:
[bootloader]
cmdline = transparent_hugepage=always
To disable THP, add the following lines:
[bootloader]
cmdline = transparent_hugepage=never
To set THP to madvise, add the following lines (see the example tuned.conf sketch after this list):
[bootloader]
cmdline = transparent_hugepage=madvise
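For example, after the edit, the tuned.conf file of the copied profile might end with a [bootloader] section like the following. This is only a sketch: my_copied_profile is the example name used in this procedure, the summary text is arbitrary, and the sections copied from the original profile are not shown:
# /usr/lib/tuned/my_copied_profile/tuned.conf
[main]
summary=Copy of the active profile with a custom THP setting

# ... sections copied from the original profile remain unchanged ...

[bootloader]
cmdline = transparent_hugepage=madvise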
Restart the TuneD service:
$ sudo systemctl restart tuned
Activate the new profile:
$ sudo tuned-adm profile my_copied_profile
The [bootloader] setting modifies the kernel command line, so the new THP mode takes effect after the next reboot.
Verification
Verify that the new profile is active:
$ sudo tuned-adm active
Verify that the required mode of THP is set:
$ cat /sys/kernel/mm/transparent_hugepage/enabled
36.6.3. Managing transparent hugepages with kernel command line parameters
You can manage transparent hugepages (THP) at boot time by modifying kernel parameters. This configuration is persistent across system reboots.
Prerequisites
- You have root permissions on the system.
Procedure
Get the current kernel command line parameters:
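For example, grubby can print the configuration of the default kernel entry, including its current arguments; the exact kernel version and argument list in the output depend on your installation:
# grubby --info=DEFAULT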
Configure THP by adding kernel parameters.
To enable THP:
# grubby --args="transparent_hugepage=always" --update-kernel=DEFAULT
To disable THP:
# grubby --args="transparent_hugepage=never" --update-kernel=DEFAULT
To set THP to madvise:
# grubby --args="transparent_hugepage=madvise" --update-kernel=DEFAULT
Reboot the system for changes to take effect:
# reboot
Verification
To verify the status of THP, view the following files:
# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
# grep AnonHugePages: /proc/meminfo
AnonHugePages:     0 kB
# grep nr_anon_transparent_hugepages /proc/vmstat
nr_anon_transparent_hugepages 0
36.6.4. Managing transparent hugepages with a systemd unit file
You can manage transparent hugepages (THP) at system startup by using systemd unit files. By creating a systemd service, you get consistent THP configuration across system reboots.
Prerequisites
- You have root permissions on the system.
Procedure
- Create a new systemd service file for enabling, disabling, or setting THP to madvise, for example /etc/systemd/system/disable-thp.service.
Configure THP by adding content to the new <new_thp_file>.service file. The unit runs a command at startup that writes always (to enable THP), never (to disable THP), or madvise (to set THP to madvise) to /sys/kernel/mm/transparent_hugepage/enabled.
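The content of the service file is not prescribed; the following sketch uses disable-thp.service as the example name and writes never to the THP control file at startup. Replace never with always or madvise to select a different mode:
[Unit]
Description=Set the transparent hugepages mode
After=sysinit.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Write the required mode: always, never, or madvise
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'

[Install]
WantedBy=basic.target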
Enable and start the service:
# systemctl enable <new_thp_file>.service
# systemctl start <new_thp_file>.service
Verification
To verify the status of THP, view the following file:
$ cat /sys/kernel/mm/transparent_hugepage/enabled
36.7. Impact of page size on translation lookaside buffer size
Reading address mappings from the page table is time-consuming and resource-expensive, so CPUs are built with a cache for recently used addresses, called the Translation Lookaside Buffer (TLB). However, the TLB can cache only a limited number of address mappings.
If a requested address mapping is not in the TLB, called a TLB miss, the system still needs to read the page table to determine the virtual-to-physical address mapping. Because of the relationship between application memory requirements and the size of pages used to cache address mappings, applications with large memory requirements are more likely to suffer performance degradation from TLB misses than applications with minimal memory requirements. It is therefore important to avoid TLB misses wherever possible.
Both HugeTLB and Transparent Huge Page features allow applications to use pages larger than 4 KB. This allows addresses stored in the TLB to reference more memory, which reduces TLB misses and improves application performance. For example, a single TLB entry that maps a 2 MB huge page covers 512 times as much memory as an entry that maps a 4 KB page.
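If you want to estimate how much a workload suffers from TLB misses before moving it to huge pages, hardware performance counters can help. The event names below are commonly available on x86_64, but whether your CPU and perf build expose them is an assumption, and <pid> is a placeholder for the process you want to observe:
# perf stat -e dTLB-loads,dTLB-load-misses -p <pid> sleep 10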