Chapter 36. Configuring huge pages


Physical memory is managed in fixed-size chunks called pages. On the x86_64 architecture, supported by Red Hat Enterprise Linux 8, the default size of a memory page is 4 KB. This default page size has proved to be suitable for general-purpose operating systems, such as Red Hat Enterprise Linux, which supports many different kinds of workloads.

However, specific applications can benefit from using larger page sizes in certain cases. For example, an application that works with a large and relatively fixed data set of hundreds of megabytes or even dozens of gigabytes can have performance issues when using 4 KB pages. Such data sets can require a huge amount of 4 KB pages, which can increase resource usage in the operating system and the CPU.

This section provides information about huge pages available in RHEL 8 and how you can configure them.

36.1. Available huge page features

With Red Hat Enterprise Linux 8, you can use huge pages for applications that work with big data sets, and improve the performance of such applications.

The following are the huge page methods, which are supported in RHEL 8:

HugeTLB pages

HugeTLB pages are also called static huge pages. There are two ways of reserving HugeTLB pages:

  • At boot time: It increases the possibility of success because the memory has not yet been significantly fragmented. However, on NUMA machines, the number of pages is automatically split among the NUMA nodes.

    For more information about the parameters that influence HugeTLB page behavior at boot time, see Parameters for reserving HugeTLB pages at boot time. For information on how to use these parameters to configure HugeTLB pages at boot time, see Configuring HugeTLB at boot time.

  • At runtime: It allows you to reserve the huge pages per NUMA node. If the run-time reservation is done as early as possible in the boot process, the probability of memory fragmentation is lower.

    For more information about the parameters that influence HugeTLB page behavior at run time, see Parameters for reserving HugeTLB pages at run time. For information on how to use these parameters to configure HugeTLB pages at run time, see Configuring HugeTLB at run time.

Transparent HugePages (THP)

With THP, the kernel automatically assigns huge pages to processes, and therefore there is no need to manually reserve the static huge pages. The following are the two modes of operation in THP:

  • system-wide: Here, the kernel tries to assign huge pages to a process whenever it is possible to allocate the huge pages and the process is using a large contiguous virtual memory area.
  • per-process: Here, the kernel only assigns huge pages to the memory areas of individual processes which you can specify using the madvise() system call.

    Note

    The THP feature only supports 2 MB pages.

For more information about enabling and disabling THP, see Enabling transparent hugepages and Disabling transparent hugepages.

36.2. Parameters for reserving HugeTLB pages at boot time

Use the following parameters to influence HugeTLB page behavior at boot time.

For more information on how to use these parameters to configure HugeTLB pages at boot time, see Configuring HugeTLB at boot time.

Table 36.1. Parameters used to configure HugeTLB pages at boot time

hugepages

Defines the number of persistent huge pages configured in the kernel at boot time.

In a NUMA system, huge pages defined by this parameter are divided equally between nodes.

You can assign huge pages to specific nodes at run time by changing the value of the nodes in the /sys/devices/system/node/node_id/hugepages/hugepages-size/nr_hugepages file.

The default value is 0.

To update this value at run time, change the value in the /proc/sys/vm/nr_hugepages file.

hugepagesz

Defines the size of persistent huge pages configured in the kernel at boot time.

Valid values are 2 MB and 1 GB. The default value is 2 MB.

default_hugepagesz

Defines the default size of persistent huge pages configured in the kernel at boot time.

Valid values are 2 MB and 1 GB. The default value is 2 MB.
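To see which default huge page size the running kernel uses, you can parse the Hugepagesize field of /proc/meminfo. The following sketch parses a sample line; the 2048 kB value is an example, not a statement about your system:

```shell
# Sample /proc/meminfo line; on a live system, use: grep Hugepagesize /proc/meminfo
sample="Hugepagesize:       2048 kB"
# Convert the kB value to MB
echo "$sample" | awk '{print $2/1024 " MB"}'
```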

36.3. Configuring HugeTLB at boot time

HugeTLB enables the use of huge pages by reserving them at boot time, which minimizes memory fragmentation and ensures that sufficient resources are available for workloads that benefit from larger memory pages.

You can reserve HugeTLB pages at the earliest stage of boot process by using kernel command-line parameters. This method provides the highest chance of successfully reserving the required number and size of huge pages, because memory is allocated during the kernel boot.

Prefer reserving HugeTLB pages by using kernel boot parameters, as this method ensures allocation of larger contiguous memory regions compared to using a systemd unit.

Note

The examples in the procedure demonstrate how to use the command-line options for configuring HugeTLB pages. These examples do not necessarily apply to your system configuration. Review your system requirements and objectives before applying these settings in your environment.

Prerequisites

  • You must have root privileges on your system.

Procedure

  • Update the kernel command line to include HugeTLB options.

    • To reserve HugeTLB pages of the default size for your architecture:

      # grubby --update-kernel=ALL --args="hugepages=10"

      This command reserves 10 HugeTLB pages of the default pool size. For example, on x86_64, the default pool size is 2 MB. On systems with Non-Uniform Memory Access (NUMA), the allocation is distributed evenly across NUMA nodes. If the system has two NUMA nodes, each node reserves five pages.
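The even split described above can be sketched with shell arithmetic; the two-node count is an example assumption:

```shell
# hugepages=10 on a system with two NUMA nodes: the kernel divides the pool evenly
hugepages=10
numa_nodes=2
echo "$(( hugepages / numa_nodes )) pages per node"
```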

    • To reserve different sizes of HugeTLB pages, specify the hugepagesz and hugepages options in the kernel command line:

      # grubby --update-kernel=ALL --args="hugepagesz=2M hugepages=10 hugepagesz=1G hugepages=1"

      This command reserves 10 pages of 2 MB each and 1 page of 1 GB.

      The system reserves the specified number and size of HugeTLB pages at boot time, ensuring that memory is allocated before the operating system begins normal operation.

      Important

      The order of the options is significant. Each hugepagesz= option must be immediately followed by its corresponding hugepages= option.
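As a sanity check, you can calculate how much memory such a mixed reservation consumes. This sketch uses the example values from the command above:

```shell
# 10 x 2 MB pages plus 1 x 1 GB page, expressed in kB (2 MB = 2048 kB, 1 GB = 1048576 kB)
pages_2m=10
pages_1g=1
total_kb=$(( pages_2m * 2048 + pages_1g * 1048576 ))
echo "${total_kb} kB reserved"
```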

You can configure HugeTLB pages during the user-space boot process by using a systemd service unit. This method allows you to reserve large memory pages after the kernel has initialized but before most user-space services start. Although this approach is not as early as kernel command-line configuration, it can still be effective for ensuring that applications have access to the required huge pages during system operation.

Prerequisites

  • You must have root privileges on your system.

Procedure

  1. Create a new file called hugetlb-gigantic-pages.service in the /usr/lib/systemd/system/ directory and add the following content:

    [Unit]
    Description=HugeTLB Gigantic Pages Reservation
    DefaultDependencies=no
    Before=dev-hugepages.mount
    ConditionPathExists=/sys/devices/system/node
    ConditionKernelCommandLine=hugepagesz=1G
    
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/lib/systemd/hugetlb-reserve-pages.sh
    
    [Install]
    WantedBy=sysinit.target
  2. Create a new file called hugetlb-reserve-pages.sh in the /usr/lib/systemd/ directory and add the following content:

    #!/bin/sh
    
    nodes_path=/sys/devices/system/node/
    if [ ! -d $nodes_path ]; then
        echo "ERROR: $nodes_path does not exist"
        exit 1
    fi
    
    reserve_pages()
    {
        echo $1 > $nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages
    }
    
    reserve_pages <number_of_pages> <node>

    Replace <number_of_pages> with the number of 1GB pages you want to reserve, and <node> with the name of the node on which to reserve these pages. For example, to reserve two 1 GB pages on node0 and one 1GB page on node1, replace <number_of_pages> with 2 for node0 and 1 for node1.
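For illustration, a dry-run variant of the script shows the writes that the example values above (two pages on node0, one on node1) would perform. This is a sketch only; the function prints the intended action instead of writing to sysfs:

```shell
nodes_path=/sys/devices/system/node
reserve_pages()
{
    # The real script writes $1 into the per-node nr_hugepages file;
    # this dry-run version only prints the intended write.
    echo "would write $1 to $nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages"
}

# Two 1 GB pages on node0, one 1 GB page on node1
reserve_pages 2 node0
reserve_pages 1 node1
```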

  3. Make the script executable:

    # chmod +x /usr/lib/systemd/hugetlb-reserve-pages.sh
  4. Enable early boot reservation:

    # systemctl enable hugetlb-gigantic-pages.service
    Note
    • You can try reserving more 1 GB pages at runtime by writing to nr_hugepages at any time. However, to prevent failures due to memory fragmentation, reserve 1 GB pages early during the boot process.
    • Reserving static huge pages can effectively reduce the amount of memory available to the system, and prevents it from properly using its full memory capacity. Although a properly sized pool of reserved huge pages can be beneficial to applications that use it, an oversized or unused pool of reserved huge pages will eventually be detrimental to overall system performance. When setting a reserved huge page pool, ensure that the system can properly use its full memory capacity.

36.4. Parameters for reserving HugeTLB pages at run time

Use the following parameters to influence HugeTLB page behavior at run time.

For more information about how to use these parameters to configure HugeTLB pages at run time, see Configuring HugeTLB at run time.

Table 36.2. Parameters used to configure HugeTLB pages at run time

nr_hugepages

Defines the number of huge pages of a specified size assigned to a specified NUMA node.

/sys/devices/system/node/node_id/hugepages/hugepages-size/nr_hugepages

nr_overcommit_hugepages

Defines the maximum number of additional huge pages that can be created and used by the system through overcommitting memory.

Writing any non-zero value into this file indicates that the system obtains that number of huge pages from the kernel’s normal page pool if the persistent huge page pool is exhausted. As these surplus huge pages become unused, they are then freed and returned to the kernel’s normal page pool.

/proc/sys/vm/nr_overcommit_hugepages

36.5. Configuring HugeTLB at run time

This procedure describes how to add 20 huge pages of 2048 kB to node2.

To reserve pages based on your requirements, replace:

  • 20 with the number of huge pages you wish to reserve,
  • 2048kB with the size of the huge pages,
  • node2 with the node on which you wish to reserve the pages.

Procedure

  1. Display the memory statistics:

    # numastat -cm | egrep 'Node|Huge'
                     Node 0 Node 1 Node 2 Node 3  Total
    AnonHugePages         0      2      0      8     10
    HugePages_Total       0      0      0      0      0
    HugePages_Free        0      0      0      0      0
    HugePages_Surp        0      0      0      0      0
  2. Add the number of huge pages of a specified size to the node:

    # echo 20 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages

Verification

  • Verify that the huge pages were added:

    # numastat -cm | egrep 'Node|Huge'
                     Node 0 Node 1 Node 2 Node 3  Total
    AnonHugePages         0      2      0      8     10
    HugePages_Total       0      0     40      0     40
    HugePages_Free        0      0     40      0     40
    HugePages_Surp        0      0      0      0      0
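Note that numastat -cm reports values in MB, so the 20 reserved 2048 kB pages appear as 40 under Node 2. The conversion can be sketched as:

```shell
# 20 pages of 2048 kB each, expressed in MB as numastat -cm displays them
pages=20
size_kb=2048
echo "$(( pages * size_kb / 1024 )) MB"
```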

36.6. Managing transparent hugepages

Transparent hugepages (THP) are enabled by default in Red Hat Enterprise Linux 8. However, you can enable, disable, or set transparent hugepages to madvise by using runtime configuration, TuneD profiles, kernel command-line parameters, or a systemd unit file.

Transparent hugepages (THP) can be managed at runtime to optimize memory usage. The runtime configuration is not persistent across system reboots.

Procedure

  1. Check the status of THP:

    $ cat /sys/kernel/mm/transparent_hugepage/enabled
  2. Configure THP.

    • Enabling THP:

      # echo always > /sys/kernel/mm/transparent_hugepage/enabled
    • Disabling THP:

      # echo never > /sys/kernel/mm/transparent_hugepage/enabled
    • Setting THP to madvise:

      # echo madvise > /sys/kernel/mm/transparent_hugepage/enabled

      To prevent applications from allocating more memory resources than necessary, disable the system-wide transparent hugepages and only enable them for the applications that explicitly request it through the madvise system call.

      Note

      Sometimes, providing low latency to short-lived allocations has higher priority than immediately achieving the best performance with long-lived allocations. In such cases, you can disable direct compaction while leaving THP enabled.

      Direct compaction is synchronous memory compaction performed during huge page allocation. Disabling direct compaction does not guarantee memory savings, but it can decrease the risk of higher latencies during frequent page faults. With direct compaction disabled, synchronous compaction applies only to the Virtual Memory Areas (VMAs) marked with madvise. Note that if the workload benefits significantly from THP, disabling direct compaction can reduce performance. To disable direct compaction:

      # echo never > /sys/kernel/mm/transparent_hugepage/defrag

You can manage transparent hugepages (THP) by using TuneD profiles. The tuned.conf file provides the configuration of TuneD profiles. This configuration is persistent across system reboots.

Prerequisites

  • The TuneD package is installed.
  • The TuneD service is enabled.

Procedure

  1. Copy the directory of the active profile to create a new profile. Replace my_profile with the name of the active profile:

    $ sudo cp -R /usr/lib/tuned/my_profile /usr/lib/tuned/my_copied_profile
  2. Edit the tuned.conf file:

    $ sudo vi /usr/lib/tuned/my_copied_profile/tuned.conf
    • To enable THP, add the line:

      [bootloader]
      
      cmdline = transparent_hugepage=always
    • To disable THP, add the line:

      [bootloader]
      
      cmdline = transparent_hugepage=never
    • To set THP to madvise, add the line:

      [bootloader]
      
      cmdline = transparent_hugepage=madvise
  3. Restart the TuneD service:

    $ sudo systemctl restart tuned
  4. Set the new profile active:

    $ sudo tuned-adm profile my_copied_profile

Verification

  1. Verify that the new profile is active:

    $ sudo tuned-adm active
  2. Verify that the required mode of THP is set:

    $ cat /sys/kernel/mm/transparent_hugepage/enabled

You can manage transparent hugepages (THP) at boot time by modifying kernel parameters. This configuration is persistent across system reboots.

Prerequisites

  • You have root permissions on the system.

Procedure

  1. Get the current kernel command line parameters:

    # grubby --info=$(grubby --default-kernel)
    kernel="/boot/vmlinuz-4.18.0-553.el8_10.x86_64"
    args="ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXX console=tty0 console=ttyS0"
    root="UUID=XXXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
    initrd="/boot/initramfs-4.18.0-553.el8_10.x86_64.img"
    title="Red Hat Enterprise Linux (4.18.0-553.el8_10.x86_64) 8.10 (Ootpa)"
    id="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-4.18.0-553.el8_10.x86_64"
  2. Configure THP by adding kernel parameters.

    • To enable THP:

      # grubby --args="transparent_hugepage=always" --update-kernel=DEFAULT
    • To disable THP:

      # grubby --args="transparent_hugepage=never" --update-kernel=DEFAULT
    • To set THP to madvise:

      # grubby --args="transparent_hugepage=madvise" --update-kernel=DEFAULT
  3. Reboot the system for changes to take effect:

    # reboot

Verification

  • To verify the status of THP, view the following files:

    # cat /sys/kernel/mm/transparent_hugepage/enabled
    always madvise [never]
    # grep AnonHugePages: /proc/meminfo
    AnonHugePages:         0 kB
    # grep nr_anon_transparent_hugepages /proc/vmstat
    nr_anon_transparent_hugepages 0
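The enabled file marks the active mode with square brackets. To extract only the active mode in a script, you can parse the bracketed token; this sketch runs on a sample string matching the output shown above:

```shell
# Sample sysfs output; on a live system, read /sys/kernel/mm/transparent_hugepage/enabled
mode_line="always madvise [never]"
# Print only the token inside the brackets
echo "$mode_line" | sed -n 's/.*\[\(.*\)\].*/\1/p'
```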

You can manage transparent hugepages (THP) at system startup by using systemd unit files. By creating a systemd service, you get consistent THP configuration across system reboots.

Prerequisites

  • You have root permissions on the system.

Procedure

  1. Create a new systemd service file for enabling, disabling, or setting THP to madvise. For example, /etc/systemd/system/disable-thp.service.
  2. Configure THP by adding the following contents to a new systemd service file.

    • To enable THP, add the following content to <new_thp_file>.service file:

      [Unit]
      Description=Enable Transparent Hugepages
      After=local-fs.target
      Before=sysinit.target
      
      [Service]
      Type=oneshot
      RemainAfterExit=yes
      ExecStart=/bin/sh -c 'echo always > /sys/kernel/mm/transparent_hugepage/enabled'
      
      [Install]
      WantedBy=multi-user.target
    • To disable THP, add the following content to <new_thp_file>.service file:

      [Unit]
      Description=Disable Transparent Hugepages
      After=local-fs.target
      Before=sysinit.target
      
      [Service]
      Type=oneshot
      RemainAfterExit=yes
      ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
      
      [Install]
      WantedBy=multi-user.target
    • To set THP to madvise, add the following content to <new_thp_file>.service file:

      [Unit]
      Description=Madvise Transparent Hugepages
      After=local-fs.target
      Before=sysinit.target
      
      [Service]
      Type=oneshot
      RemainAfterExit=yes
      ExecStart=/bin/sh -c 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled'
      
      [Install]
      WantedBy=multi-user.target
  3. Enable and start the service:

    # systemctl enable <new_thp_file>.service
    # systemctl start <new_thp_file>.service

Verification

  • To verify the status of THP, view the following files:

    $ cat /sys/kernel/mm/transparent_hugepage/enabled

Reading address mappings from the page table is time-consuming and resource-expensive, so CPUs are built with a cache for recently-used addresses, called the Translation Lookaside Buffer (TLB). However, the default TLB can only cache a certain number of address mappings.

If a requested address mapping is not in the TLB, called a TLB miss, the system still needs to read the page table to determine the virtual-to-physical address mapping. Because of the relationship between application memory requirements and the size of pages used to cache address mappings, applications with large memory requirements are more likely to suffer performance degradation from TLB misses than applications with minimal memory requirements. It is therefore important to avoid TLB misses wherever possible.

Both HugeTLB and Transparent Huge Page features allow applications to use pages larger than 4 KB. This allows addresses stored in the TLB to reference more memory, which reduces TLB misses and improves application performance.
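The benefit can be illustrated with a rough reach calculation. Assuming a hypothetical TLB with 1536 entries (the entry count is an illustrative assumption, not a figure from this document), the amount of memory the TLB can cover at each page size is:

```shell
# Hypothetical TLB entry count, for illustration only
entries=1536
echo "4 kB pages: $(( entries * 4 / 1024 )) MB of reach"
echo "2 MB pages: $(( entries * 2 )) MB of reach"
```

The same number of TLB entries covers several orders of magnitude more memory when each entry maps a 2 MB page instead of a 4 kB page.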
