
Monitoring and managing system status and performance

Red Hat Enterprise Linux 8

Optimizing system throughput, latency, and power consumption

Red Hat Customer Content Services

Abstract

Monitor and optimize the throughput, latency, and power consumption of Red Hat Enterprise Linux 8 in different scenarios.

Providing feedback on Red Hat documentation

We appreciate your feedback on our documentation. Let us know how we can improve it.

Submitting feedback through Jira (account required)

  1. Log in to the Jira website.
  2. Click Create in the top navigation bar.
  3. Enter a descriptive title in the Summary field.
  4. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
  5. Click Create at the bottom of the dialog.

Chapter 1. Overview of performance monitoring options

The following are some of the performance monitoring and configuration tools available in Red Hat Enterprise Linux 8:

  • Performance Co-Pilot (pcp) is used for monitoring, visualizing, storing, and analyzing system-level performance measurements. It allows the monitoring and management of real-time data, and logging and retrieval of historical data.
  • Red Hat Enterprise Linux 8 provides several tools that can be used from the command line to monitor a system outside run level 5. The following are the built-in command line tools:

    • top is provided by the procps-ng package. It gives a dynamic view of the processes in a running system. It displays a variety of information, including a system summary and a list of tasks currently being managed by the Linux kernel.
    • ps is provided by the procps-ng package. It captures a snapshot of a select group of active processes. By default, the examined group is limited to processes that are owned by the current user and associated with the terminal where the ps command is executed.
    • Virtual memory statistics (vmstat) is provided by the procps-ng package. It provides instant reports of your system’s processes, memory, paging, block input/output, interrupts, and CPU activity.
    • System activity reporter (sar) is provided by the sysstat package. It collects and reports information about system activity that has occurred so far on the current day.
  • perf uses hardware performance counters and kernel trace-points to track the impact of other commands and applications on a system.
  • bcc-tools is used for the BPF Compiler Collection (BCC). It provides over 100 eBPF scripts that monitor kernel activities. For more information about each of these tools, see its man page, which describes how to use it and what functions it performs.
  • turbostat is provided by the kernel-tools package. It reports on processor topology, frequency, idle power-state statistics, temperature, and power usage on the Intel 64 processors.
  • iostat is provided by the sysstat package. It monitors and reports on system IO device loading to help administrators make decisions about how to balance IO load between physical disks.
  • irqbalance distributes hardware interrupts across processors to improve system performance.
  • ss prints statistical information about sockets, allowing administrators to assess device performance over time. Red Hat recommends using ss over netstat in Red Hat Enterprise Linux 8.
  • numastat is provided by the numactl package. By default, numastat displays per-node NUMA hit and miss system statistics from the kernel memory allocator. Optimal performance is indicated by high numa_hit values and low numa_miss values.
  • numad is an automatic NUMA affinity management daemon. It monitors NUMA topology and resource usage within a system and dynamically improves NUMA resource allocation and management, and therefore system performance.
  • SystemTap monitors and analyzes operating system activities, especially the kernel activities.
  • valgrind analyzes applications by running them on a synthetic CPU and instrumenting existing application code as it is executed. It then prints commentary that clearly identifies each process involved in application execution to a user-specified file, file descriptor, or network socket. It is also useful for finding memory leaks.
  • pqos is provided by the intel-cmt-cat package. It monitors and controls CPU cache and memory bandwidth on recent Intel processors.
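
For example, a quick check of NUMA allocation statistics and per-device I/O load with two of the tools listed above might look like the following (a minimal sketch; the iostat interval and report count are illustrative):

$ numastat
$ iostat 5 3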

Additional resources

Chapter 2. Getting started with TuneD

As a system administrator, you can use the TuneD application to optimize the performance profile of your system for a variety of use cases.

2.1. The purpose of TuneD

TuneD is a service that monitors your system and optimizes the performance under certain workloads. The core of TuneD is its profiles, which tune your system for different use cases.

TuneD is distributed with a number of predefined profiles for use cases such as:

  • High throughput
  • Low latency
  • Saving power

It is possible to modify the rules defined for each profile and customize how to tune a particular device. When you switch to another profile or deactivate TuneD, all changes made to the system settings by the previous profile revert to their original state.

You can also configure TuneD to react to changes in device usage and adjust settings to improve the performance of active devices and reduce the power consumption of inactive devices.

2.2. TuneD profiles

A detailed analysis of a system can be very time-consuming. TuneD provides a number of predefined profiles for typical use cases. You can also create, modify, and delete profiles.

The profiles provided with TuneD are divided into the following categories:

  • Power-saving profiles
  • Performance-boosting profiles

The performance-boosting profiles include profiles that focus on the following aspects:

  • Low latency for storage and network
  • High throughput for storage and network
  • Virtual machine performance
  • Virtualization host performance
Syntax of profile configuration

The tuned.conf file can contain one [main] section and other sections for configuring plug-in instances. However, all sections are optional.

Lines starting with the hash sign (#) are comments.
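
For example, a minimal sketch of a tuned.conf file with one [main] section and one plug-in section (the summary text and the sysctl value are illustrative):

# Example tuned.conf (illustrative)
[main]
summary=Example profile with one plug-in instance

[sysctl]
vm.swappiness=10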

Additional resources

  • tuned.conf(5) man page on your system

2.3. The default TuneD profile

During the installation, the best profile for your system is selected automatically. Currently, the default profile is selected according to the following customizable rules:

Environment      | Default profile        | Goal
-----------------|------------------------|-------------------------------------------
Compute nodes    | throughput-performance | The best throughput performance
Virtual machines | virtual-guest          | The best performance. If you are not interested in the best performance, you can change it to the balanced or powersave profile.
Other cases      | balanced               | Balanced performance and power consumption

Additional resources

  • tuned.conf(5) man page on your system

2.4. Merged TuneD profiles

As an experimental feature, it is possible to select multiple profiles at once. TuneD tries to merge them during load.

If there are conflicts, the settings from the last specified profile take precedence.

Example 2.1. Low power consumption in a virtual guest

The following example optimizes the system to run in a virtual machine for the best performance and concurrently tunes it for low power consumption, with low power consumption being the priority:

# tuned-adm profile virtual-guest powersave
Warning

Merging is done automatically without checking whether the resulting combination of parameters makes sense. Consequently, the feature might tune some parameters the opposite way, which might be counterproductive: for example, setting the disk for high throughput by using the throughput-performance profile and concurrently setting the disk spindown to a low value by using the spindown-disk profile.

Additional resources

  • tuned-adm and tuned.conf(5) man pages on your system

2.5. The location of TuneD profiles

TuneD stores profiles in the following directories:

/usr/lib/tuned/
Distribution-specific profiles are stored in this directory. Each profile has its own directory. The profile consists of the main configuration file called tuned.conf, and optionally other files, for example, helper scripts.
/etc/tuned/
If you need to customize a profile, copy the profile directory into this directory, which is used for custom profiles. If there are two profiles of the same name, the custom profile located in /etc/tuned/ is used.
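
For example, to customize the distribution powersave profile, you might copy it into /etc/tuned/ and edit the copy (a sketch; the profile name is only an example):

# cp -r /usr/lib/tuned/powersave /etc/tuned/
# vi /etc/tuned/powersave/tuned.conf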

Additional resources

  • tuned.conf(5) man page on your system

2.6. TuneD profiles distributed with RHEL

The following is a list of profiles that are installed with TuneD on Red Hat Enterprise Linux.

Note

There might be more product-specific or third-party TuneD profiles available. Such profiles are usually provided by separate RPM packages.

balanced

The default power-saving profile. It is intended to be a compromise between performance and power consumption. It uses auto-scaling and auto-tuning whenever possible. The only drawback is the increased latency. In the current TuneD release, it enables the CPU, disk, audio, and video plugins, and activates the conservative CPU governor. The radeon_powersave option uses the dpm-balanced value if it is supported; otherwise, it is set to auto.

It changes the energy_performance_preference attribute to the normal energy setting. It also changes the scaling_governor policy attribute to either the conservative or powersave CPU governor.

powersave

A profile for maximum power saving. It can throttle performance in order to minimize the actual power consumption. In the current TuneD release, it enables USB autosuspend, WiFi power saving, and Aggressive Link Power Management (ALPM) power savings for SATA host adapters. It also schedules multi-core power savings for systems with a low wakeup rate and activates the ondemand governor. It enables AC97 audio power saving or, depending on your system, HDA-Intel power savings with a 10-second timeout. If your system contains a supported Radeon graphics card with KMS enabled, the profile configures it for automatic power saving. On ASUS Eee PCs, a dynamic Super Hybrid Engine is enabled.

It changes the energy_performance_preference attribute to the powersave or power energy setting. It also changes the scaling_governor policy attribute to either the ondemand or powersave CPU governor.

Note

In certain cases, the balanced profile is more efficient compared to the powersave profile.

Consider there is a defined amount of work that needs to be done, for example, a video file that needs to be transcoded. Your machine might consume less energy if the transcoding is done at full power, because the task is finished quickly, the machine starts to idle, and it can automatically step down to very efficient power save modes. On the other hand, if you transcode the file with a throttled machine, the machine consumes less power during the transcoding, but the process takes longer and the overall consumed energy can be higher.

That is why the balanced profile can be generally a better option.

throughput-performance

A server profile optimized for high throughput. It disables power savings mechanisms and enables sysctl settings that improve the throughput performance of the disk and network IO. CPU governor is set to performance.

It changes the energy_performance_preference and scaling_governor attributes to the performance profile.

accelerator-performance
The accelerator-performance profile contains the same tuning as the throughput-performance profile. Additionally, it locks the CPU to low C states so that the latency is less than 100us. This improves the performance of certain accelerators, such as GPUs.
latency-performance

A server profile optimized for low latency. It disables power savings mechanisms and enables sysctl settings that improve latency. CPU governor is set to performance and the CPU is locked to the low C states (by PM QoS).

It changes the energy_performance_preference and scaling_governor attributes to the performance profile.

network-latency

A profile for low latency network tuning. It is based on the latency-performance profile. It additionally disables transparent huge pages and NUMA balancing, and tunes several other network-related sysctl parameters.

It inherits the latency-performance profile, which changes the energy_performance_preference and scaling_governor attributes to the performance profile.

hpc-compute
A profile optimized for high-performance computing. It is based on the latency-performance profile.
network-throughput

A profile for throughput network tuning. It is based on the throughput-performance profile. It additionally increases kernel network buffers.

It inherits either the latency-performance or throughput-performance profile, and changes the energy_performance_preference and scaling_governor attributes to the performance profile.

virtual-guest

A profile designed for Red Hat Enterprise Linux 8 virtual machines and VMware guests based on the throughput-performance profile that, among other tasks, decreases virtual memory swappiness and increases disk readahead values. It does not disable disk barriers.

It inherits the throughput-performance profile and changes the energy_performance_preference and scaling_governor attributes to the performance profile.

virtual-host

A profile designed for virtual hosts based on the throughput-performance profile that, among other tasks, decreases virtual memory swappiness, increases disk readahead values, and enables a more aggressive value of dirty pages writeback.

It inherits the throughput-performance profile and changes the energy_performance_preference and scaling_governor attributes to the performance profile.

oracle
A profile optimized for Oracle database loads, based on the throughput-performance profile. It additionally disables transparent huge pages and modifies other performance-related kernel parameters. This profile is provided by the tuned-profiles-oracle package.
desktop
A profile optimized for desktops, based on the balanced profile. It additionally enables scheduler autogroups for better response of interactive applications.
optimize-serial-console

A profile that tunes down I/O activity to the serial console by reducing the printk value. This should make the serial console more responsive. This profile is intended to be used as an overlay on other profiles. For example:

# tuned-adm profile throughput-performance optimize-serial-console
mssql
A profile provided for Microsoft SQL Server. It is based on the throughput-performance profile.
intel-sst

A profile optimized for systems with user-defined Intel Speed Select Technology configurations. This profile is intended to be used as an overlay on other profiles. For example:

# tuned-adm profile cpu-partitioning intel-sst

2.7. TuneD cpu-partitioning profile

For tuning Red Hat Enterprise Linux 8 for latency-sensitive workloads, Red Hat recommends using the cpu-partitioning TuneD profile.

Prior to Red Hat Enterprise Linux 8, the low-latency Red Hat documentation described the numerous low-level steps needed to achieve low-latency tuning. In Red Hat Enterprise Linux 8, you can perform low-latency tuning more efficiently by using the cpu-partitioning TuneD profile. This profile is easily customizable according to the requirements for individual low-latency applications.

The following figure demonstrates how to use the cpu-partitioning profile. The example uses the CPU and node layout shown in the figure.

Figure 2.1. cpu-partitioning

You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the following configuration options:

Isolated CPUs with load balancing

In the cpu-partitioning figure, the blocks numbered from 4 to 23 are the default isolated CPUs. The kernel scheduler’s process load balancing is enabled on these CPUs. It is designed for low-latency processes with multiple threads that need the kernel scheduler load balancing.

You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the isolated_cores=cpu-list option, which lists CPUs to isolate that will use the kernel scheduler load balancing.

The list of isolated CPUs is comma-separated, or you can specify a range using a dash, such as 3-5. This option is mandatory. Any CPU missing from this list is automatically considered a housekeeping CPU.

Isolated CPUs without load balancing

In the cpu-partitioning figure, the blocks numbered 2 and 3 are the isolated CPUs that do not provide any additional kernel scheduler process load balancing.

You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the no_balance_cores=cpu-list option, which lists CPUs to isolate that will not use the kernel scheduler load balancing.

Specifying the no_balance_cores option is optional; however, any CPUs in this list must be a subset of the CPUs listed in the isolated_cores list.

Application threads using these CPUs need to be pinned individually to each CPU.

Housekeeping CPUs
Any CPU not isolated in the cpu-partitioning-variables.conf file is automatically considered a housekeeping CPU. On the housekeeping CPUs, all services, daemons, user processes, movable kernel threads, interrupt handlers, and kernel timers are permitted to execute.

Additional resources

  • tuned-profiles-cpu-partitioning(7) man page on your system

2.8. Using the TuneD cpu-partitioning profile for low-latency tuning

This procedure describes how to tune a system for low latency by using the TuneD cpu-partitioning profile. It uses the example of a low-latency application that can use cpu-partitioning and the CPU layout as mentioned in the cpu-partitioning figure.

The application in this case uses:

  • One dedicated reader thread that reads data from the network will be pinned to CPU 2.
  • A large number of threads that process this network data will be pinned to CPUs 4-23.
  • A dedicated writer thread that writes the processed data to the network will be pinned to CPU 3.

Prerequisites

  • You have installed the cpu-partitioning TuneD profile by using the yum install tuned-profiles-cpu-partitioning command as root.

Procedure

  1. Edit the /etc/tuned/cpu-partitioning-variables.conf file and add the following information:

    # All isolated CPUs:
    isolated_cores=2-23
    # Isolated CPUs without the kernel’s scheduler load balancing:
    no_balance_cores=2,3
  2. Set the cpu-partitioning TuneD profile:

    # tuned-adm profile cpu-partitioning
  3. Reboot the system.

    After rebooting, the system is tuned for low latency, according to the isolation in the cpu-partitioning figure. The application can use taskset to pin the reader and writer threads to CPUs 2 and 3, and the remaining application threads to CPUs 4-23.

Additional resources

  • tuned-profiles-cpu-partitioning(7) man page on your system

2.9. Customizing the cpu-partitioning TuneD profile

You can extend the TuneD profile to make additional tuning changes.

For example, the cpu-partitioning profile sets the CPUs to use cstate=1. To use the cpu-partitioning profile but additionally change the CPU cstate from cstate 1 to cstate 0, the following procedure describes a new TuneD profile named my_profile, which inherits the cpu-partitioning profile and then sets C state 0.

Procedure

  1. Create the /etc/tuned/my_profile directory:

    # mkdir /etc/tuned/my_profile
  2. Create a tuned.conf file in this directory, and add the following content:

    # vi /etc/tuned/my_profile/tuned.conf
    [main]
    summary=Customized tuning on top of cpu-partitioning
    include=cpu-partitioning
    [cpu]
    force_latency=cstate.id:0|1
  3. Use the new profile:

    # tuned-adm profile my_profile
Note

In this example, a reboot is not required. However, if the changes in the my_profile profile require a reboot to take effect, then reboot your machine.

Additional resources

  • tuned-profiles-cpu-partitioning(7) man page on your system

2.10. Real-time TuneD profiles distributed with RHEL

Real-time profiles are intended for systems running the real-time kernel. Without a special kernel build, they do not configure the system to be real-time. On RHEL, the profiles are available from additional repositories.

The following real-time profiles are available:

realtime

Use on bare-metal real-time systems.

Provided by the tuned-profiles-realtime package, which is available from the RT or NFV repositories.

realtime-virtual-host

Use in a virtualization host configured for real-time.

Provided by the tuned-profiles-nfv-host package, which is available from the NFV repository.

realtime-virtual-guest

Use in a virtualization guest configured for real-time.

Provided by the tuned-profiles-nfv-guest package, which is available from the NFV repository.

2.11. Static and dynamic tuning in TuneD

Understanding the difference between the two categories of system tuning that TuneD applies, static and dynamic, is important when determining which one to use for a given situation or purpose.

Static tuning
Mainly consists of the application of predefined sysctl and sysfs settings and one-shot activation of several configuration tools such as ethtool.
Dynamic tuning

Watches how various system components are used throughout the uptime of your system. TuneD adjusts system settings dynamically based on that monitoring information.

For example, the hard drive is used heavily during startup and login, but is barely used later when the user might mainly work with applications such as web browsers or email clients. Similarly, the CPU and network devices are used differently at different times. TuneD monitors the activity of these components and reacts to the changes in their use.

By default, dynamic tuning is disabled. To enable it, edit the /etc/tuned/tuned-main.conf file and change the dynamic_tuning option to 1. TuneD then periodically analyzes system statistics and uses them to update your system tuning settings. To configure the time interval in seconds between these updates, use the update_interval option.
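
For example, the relevant part of the /etc/tuned/tuned-main.conf file might look like the following (a sketch; the interval value is illustrative):

# Enable dynamic tuning and re-evaluate the settings every 10 seconds
dynamic_tuning = 1
update_interval = 10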

Currently implemented dynamic tuning algorithms try to balance performance and power saving, and are therefore disabled in the performance profiles. Dynamic tuning for individual plug-ins can be enabled or disabled in the TuneD profiles.

Example 2.2. Static and dynamic tuning on a workstation

On a typical office workstation, the Ethernet network interface is inactive most of the time. Only a few emails go in and out or some web pages might be loaded.

For those kinds of loads, the network interface does not have to run at full speed all the time, as it does by default. TuneD has a monitoring and tuning plug-in for network devices that can detect this low activity and then automatically lower the speed of that interface, typically resulting in a lower power usage.

If the activity on the interface increases for a longer period of time, for example because a DVD image is being downloaded or an email with a large attachment is opened, TuneD detects this and sets the interface speed to maximum to offer the best performance while the activity level is high.

This principle is used for other plug-ins for CPU and disks as well.

2.12. TuneD no-daemon mode

You can run TuneD in no-daemon mode, which does not require any resident memory. In this mode, TuneD applies the settings and exits.

By default, no-daemon mode is disabled because a lot of TuneD functionality is missing in this mode, including:

  • D-Bus support
  • Hot-plug support
  • Rollback support for settings

To enable no-daemon mode, include the following line in the /etc/tuned/tuned-main.conf file:

daemon = 0

2.13. Installing and enabling TuneD

This procedure installs and enables the TuneD application, installs TuneD profiles, and presets a default TuneD profile for your system.

Procedure

  1. Install the TuneD package:

    # yum install tuned
  2. Enable and start the TuneD service:

    # systemctl enable --now tuned
  3. Optional: Install TuneD profiles for real-time systems:

    For the TuneD profiles for real-time systems, enable the rhel-8 repository:

    # subscription-manager repos --enable=rhel-8-for-x86_64-nfv-beta-rpms

    Install the profile packages:

    # yum install tuned-profiles-realtime tuned-profiles-nfv
  4. Verify that a TuneD profile is active and applied:

    $ tuned-adm active
    
    Current active profile: throughput-performance
    Note

    The active profile that TuneD automatically presets differs based on your machine type and system settings.

    $ tuned-adm verify
    
    Verification succeeded, current system settings match the preset profile.
    See tuned log file ('/var/log/tuned/tuned.log') for details.

2.14. Listing available TuneD profiles

This procedure lists all TuneD profiles that are currently available on your system.

Procedure

  • To list all available TuneD profiles on your system, use:

    $ tuned-adm list
    
    Available profiles:
    - accelerator-performance - Throughput performance based tuning with disabled higher latency STOP states
    - balanced                - General non-specialized TuneD profile
    - desktop                 - Optimize for the desktop use-case
    - latency-performance     - Optimize for deterministic performance at the cost of increased power consumption
    - network-latency         - Optimize for deterministic performance at the cost of increased power consumption, focused on low latency network performance
    - network-throughput      - Optimize for streaming network throughput, generally only necessary on older CPUs or 40G+ networks
    - powersave               - Optimize for low power consumption
    - throughput-performance  - Broadly applicable tuning that provides excellent performance across a variety of common server workloads
    - virtual-guest           - Optimize for running inside a virtual guest
    - virtual-host            - Optimize for running KVM guests
    Current active profile: balanced
  • To display only the currently active profile, use:

    $ tuned-adm active
    
    Current active profile: throughput-performance

Additional resources

  • tuned-adm(8) man page on your system

2.15. Setting a TuneD profile

This procedure activates a selected TuneD profile on your system.

Prerequisites

Procedure

  1. Optional: You can let TuneD recommend the most suitable profile for your system:

    # tuned-adm recommend
    
    throughput-performance
  2. Activate a profile:

    # tuned-adm profile selected-profile

    Alternatively, you can activate a combination of multiple profiles:

    # tuned-adm profile selected-profile1 selected-profile2

    Example 2.3. A virtual machine optimized for low power consumption

    The following example optimizes the system to run in a virtual machine with the best performance and concurrently tunes it for low power consumption, while the low power consumption is the priority:

    # tuned-adm profile virtual-guest powersave
  3. View the current active TuneD profile on your system:

    # tuned-adm active
    
    Current active profile: selected-profile
  4. Reboot the system:

    # reboot

Verification

  • Verify that the TuneD profile is active and applied:

    $ tuned-adm verify
    
    Verification succeeded, current system settings match the preset profile.
    See tuned log file ('/var/log/tuned/tuned.log') for details.

Additional resources

  • tuned-adm(8) man page on your system

2.16. Using the TuneD D-Bus interface

You can directly communicate with TuneD at runtime through the TuneD D-Bus interface to control a variety of TuneD services.

You can use the busctl or dbus-send commands to access the D-Bus API.

Note

Although you can use either the busctl or dbus-send command, the busctl command is a part of systemd and, therefore, present on most hosts already.
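
For example, querying the active profile with dbus-send instead of busctl might look like the following (a sketch; it assumes the same bus name and object path used in the busctl examples below):

$ dbus-send --system --print-reply --dest=com.redhat.tuned /Tuned com.redhat.tuned.control.active_profile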

2.16.1. Using the TuneD D-Bus interface to show available TuneD D-Bus API methods

You can see the D-Bus API methods available to use with TuneD by using the TuneD D-Bus interface.

Prerequisites

Procedure

  • To see the available TuneD API methods, run:

    $ busctl introspect com.redhat.tuned /Tuned com.redhat.tuned.control

    The output should look similar to the following:

    NAME                       	TYPE  	SIGNATURE RESULT/VALUE FLAGS
    .active_profile            	method	-     	  s            -
    .auto_profile              	method	-     	  (bs)         -
    .disable                   	method	-      	  b            -
    .get_all_plugins           	method	-     	  a{sa{ss}}    -
    .get_plugin_documentation  	method	s     	  s            -
    .get_plugin_hints          	method	s     	  a{ss}        -
    .instance_acquire_devices  	method	ss    	  (bs)         -
    .is_running                	method	-     	  b            -
    .log_capture_finish        	method	s     	  s            -
    .log_capture_start         	method	ii    	  s            -
    .post_loaded_profile       	method	-     	  s            -
    .profile_info              	method	s     	  (bsss)       -
    .profile_mode              	method	-     	  (ss)         -
    .profiles                  	method	-     	  as           -
    .profiles2                 	method	-     	  a(ss)        -
    .recommend_profile         	method	-     	  s            -
    .register_socket_signal_path    method	s     	  b            -
    .reload                    	method	-     	  b            -
    .start                     	method	-     	  b            -
    .stop                      	method	-     	  b            -
    .switch_profile            	method	s     	  (bs)         -
    .verify_profile            	method	-     	  b            -
    .verify_profile_ignore_missing  method	-     	  b            -
    .profile_changed           	signal	sbs   	  -            -

    You can find descriptions of the different available methods in the TuneD upstream repository.

2.16.2. Using the TuneD D-Bus interface to change the active TuneD profile

You can replace the active TuneD profile with your desired TuneD profile by using the TuneD D-Bus interface.

Prerequisites

Procedure

  • To change the active TuneD profile, run:

    $ busctl call com.redhat.tuned /Tuned com.redhat.tuned.control switch_profile s profile
    (bs) true "OK"

    Replace profile with the name of your desired profile.

Verification

  • To view the current active TuneD profile, run:

    $ busctl call com.redhat.tuned /Tuned com.redhat.tuned.control active_profile
    s "profile"

2.17. Disabling TuneD

This procedure disables TuneD and resets all affected system settings to their original state before TuneD modified them.

Procedure

  • To disable all tunings temporarily:

    # tuned-adm off

    The tunings are applied again after the TuneD service restarts.

  • Alternatively, to stop and disable the TuneD service permanently:

    # systemctl disable --now tuned

Additional resources

  • tuned-adm(8) man page on your system

Chapter 3. Customizing TuneD profiles

You can create or modify TuneD profiles to optimize system performance for your intended use case.

Prerequisites

3.1. TuneD profiles

A detailed analysis of a system can be very time-consuming. TuneD provides a number of predefined profiles for typical use cases. You can also create, modify, and delete profiles.

The profiles provided with TuneD are divided into the following categories:

  • Power-saving profiles
  • Performance-boosting profiles

The performance-boosting profiles include profiles that focus on the following aspects:

  • Low latency for storage and network
  • High throughput for storage and network
  • Virtual machine performance
  • Virtualization host performance
Syntax of profile configuration

The tuned.conf file can contain one [main] section and other sections for configuring plug-in instances. However, all sections are optional.

Lines starting with the hash sign (#) are comments.

Additional resources

  • tuned.conf(5) man page on your system

3.2. The default TuneD profile

During the installation, the best profile for your system is selected automatically. Currently, the default profile is selected according to the following customizable rules:

Environment      | Default profile        | Goal
-----------------|------------------------|-------------------------------------------
Compute nodes    | throughput-performance | The best throughput performance
Virtual machines | virtual-guest          | The best performance. If you are not interested in the best performance, you can change it to the balanced or powersave profile.
Other cases      | balanced               | Balanced performance and power consumption

Additional resources

  • tuned.conf(5) man page on your system

3.3. Merged TuneD profiles

As an experimental feature, it is possible to select multiple profiles at once. TuneD tries to merge them during load.

If there are conflicts, the settings from the last specified profile take precedence.

Example 3.1. Low power consumption in a virtual guest

The following example optimizes the system to run in a virtual machine for the best performance and concurrently tunes it for low power consumption, with low power consumption being the priority:

# tuned-adm profile virtual-guest powersave
Warning

Merging is done automatically without checking whether the resulting combination of parameters makes sense. Consequently, the feature might tune some parameters the opposite way, which might be counterproductive: for example, setting the disk for high throughput by using the throughput-performance profile and concurrently setting the disk spindown to a low value by using the spindown-disk profile.

Additional resources

  • tuned-adm and tuned.conf(5) man pages on your system

3.4. The location of TuneD profiles

TuneD stores profiles in the following directories:

/usr/lib/tuned/
Distribution-specific profiles are stored in this directory. Each profile has its own directory. The profile consists of the main configuration file called tuned.conf, and optionally other files, for example, helper scripts.
/etc/tuned/
If you need to customize a profile, copy the profile directory into this directory, which is used for custom profiles. If there are two profiles of the same name, the custom profile located in /etc/tuned/ is used.

Additional resources

  • tuned.conf(5) man page on your system

3.5. Inheritance between TuneD profiles

TuneD profiles can be based on other profiles and modify only certain aspects of their parent profile.

The [main] section of TuneD profiles recognizes the include option:

[main]
include=parent

All settings from the parent profile are loaded in this child profile. In the following sections, the child profile can override certain settings inherited from the parent profile or add new settings not present in the parent profile.

You can create your own child profile in the /etc/tuned/ directory based on a pre-installed profile in /usr/lib/tuned/ with only some parameters adjusted.

If the parent profile is updated, such as after a TuneD upgrade, the changes are reflected in the child profile.

Example 3.2. A power-saving profile based on balanced

The following is an example of a custom profile that extends the balanced profile and sets Aggressive Link Power Management (ALPM) for all devices to the maximum power saving.

[main]
include=balanced

[scsi_host]
alpm=min_power

Additional resources

  • tuned.conf(5) man page on your system

3.6. Static and dynamic tuning in TuneD

Understanding the difference between the two categories of system tuning that TuneD applies, static and dynamic, is important when determining which one to use for a given situation or purpose.

Static tuning
Mainly consists of the application of predefined sysctl and sysfs settings and one-shot activation of several configuration tools such as ethtool.
Dynamic tuning

Watches how various system components are used throughout the uptime of your system. TuneD adjusts system settings dynamically based on that monitoring information.

For example, the hard drive is used heavily during startup and login, but is barely used later when the user might mainly work with applications such as web browsers or email clients. Similarly, the CPU and network devices are used differently at different times. TuneD monitors the activity of these components and reacts to the changes in their use.

By default, dynamic tuning is disabled. To enable it, edit the /etc/tuned/tuned-main.conf file and change the dynamic_tuning option to 1. TuneD then periodically analyzes system statistics and uses them to update your system tuning settings. To configure the time interval in seconds between these updates, use the update_interval option.

Currently implemented dynamic tuning algorithms try to balance performance and power saving, and are therefore disabled in the performance profiles. Dynamic tuning for individual plug-ins can be enabled or disabled in the TuneD profiles.

Example 3.3. Static and dynamic tuning on a workstation

On a typical office workstation, the Ethernet network interface is inactive most of the time. Only a few emails go in and out or some web pages might be loaded.

For those kinds of loads, the network interface does not have to run at full speed all the time, as it does by default. TuneD has a monitoring and tuning plug-in for network devices that can detect this low activity and then automatically lower the speed of that interface, typically resulting in a lower power usage.

If the activity on the interface increases for a longer period of time, for example because a DVD image is being downloaded or an email with a large attachment is opened, TuneD detects this and sets the interface speed to maximum to offer the best performance while the activity level is high.

This principle is used for other plug-ins for CPU and disks as well.

3.7. TuneD plug-ins

Plug-ins are modules in TuneD profiles that TuneD uses to monitor or optimize different devices on the system.

TuneD uses two types of plug-ins:

Monitoring plug-ins

Monitoring plug-ins are used to get information from a running system. The output of the monitoring plug-ins can be used by tuning plug-ins for dynamic tuning.

Monitoring plug-ins are automatically instantiated whenever their metrics are needed by any of the enabled tuning plug-ins. If two tuning plug-ins require the same data, only one instance of the monitoring plug-in is created and the data is shared.

Tuning plug-ins
Each tuning plug-in tunes an individual subsystem and takes several parameters that are populated from the TuneD profiles. Each subsystem can have multiple devices, such as multiple CPUs or network cards, that are handled by individual instances of the tuning plug-ins. Specific settings for individual devices are also supported.
Syntax for plug-ins in TuneD profiles

Sections describing plug-in instances are formatted in the following way:

[NAME]
type=TYPE
devices=DEVICES
NAME
is the name of the plug-in instance as it is used in the logs. It can be an arbitrary string.
TYPE
is the type of the tuning plug-in.
DEVICES

is the list of devices that this plug-in instance handles.

The devices line can contain a list, a wildcard (*), and negation (!). If there is no devices line, all devices of the TYPE that are present on the system or attached later are handled by the plug-in instance. This is the same as using the devices=* option.

Example 3.4. Matching block devices with a plug-in

The following example matches all block devices starting with sd, such as sda or sdb, and does not disable barriers on them:

[data_disk]
type=disk
devices=sd*
disable_barriers=false

The following example matches all block devices except sda1 and sda2:

[data_disk]
type=disk
devices=!sda1, !sda2
disable_barriers=false

If no instance of a plug-in is specified, the plug-in is not enabled.

If the plug-in supports more options, they can also be specified in the plug-in section. If the option is not specified and it was not previously specified in the included plug-in, the default value is used.

Short plug-in syntax

If you do not need custom names for the plug-in instance and there is only one definition of the instance in your configuration file, TuneD supports the following short syntax:

[TYPE]
devices=DEVICES

In this case, it is possible to omit the type line. The instance is then referred to by a name that is the same as the type. The previous example could then be rewritten as:

Example 3.5. Matching block devices using the short syntax

[disk]
devices=sdb*
disable_barriers=false
Conflicting plug-in definitions in a profile

If the same section is specified more than once using the include option, the settings are merged. If they cannot be merged due to a conflict, the last conflicting definition overrides the previous settings. If you do not know what was previously defined, you can use the replace Boolean option and set it to true. This causes all the previous definitions with the same name to be overwritten and the merge does not happen.

You can also disable the plug-in by specifying the enabled=false option. This has the same effect as if the instance was never defined. Disabling the plug-in is useful if you are redefining the previous definition from the include option and do not want the plug-in to be active in your custom profile.
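
For example, a child profile that replaces an inherited disk plug-in instance and disables the audio plug-in might look like the following (a sketch; the parent profile and the option values are illustrative):

[main]
include=throughput-performance

[disk]
replace=true
devices=sda
readahead=4096

[audio]
enabled=false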

Note

TuneD includes the ability to run any shell command as part of enabling or disabling a tuning profile. This enables you to extend TuneD profiles with functionality that has not been integrated into TuneD yet.

You can specify arbitrary shell commands using the script plug-in.

Additional resources

  • tuned.conf(5) man page on your system

3.8. Available TuneD plug-ins

Monitoring plug-ins

Currently, the following monitoring plug-ins are implemented:

disk
Gets disk load (number of IO operations) per device and measurement interval.
net
Gets network load (number of transferred packets) per network card and measurement interval.
load
Gets CPU load per CPU and measurement interval.
Tuning plug-ins

Currently, the following tuning plug-ins are implemented. Only some of these plug-ins implement dynamic tuning. Options supported by plug-ins are also listed:

cpu

Sets the CPU governor to the value specified by the governor option and dynamically changes the Power Management Quality of Service (PM QoS) CPU Direct Memory Access (DMA) latency according to the CPU load.

If the CPU load is lower than the value specified by the load_threshold option, the latency is set to the value specified by the latency_high option, otherwise it is set to the value specified by latency_low.

You can also force the latency to a specific value and prevent it from dynamically changing further. To do so, set the force_latency option to the required latency value.
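
For example, a cpu plug-in section that sets the governor and forces a fixed latency might look like the following (a sketch; the values are illustrative):

[cpu]
governor=performance
force_latency=1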

eeepc_she

Dynamically sets the front-side bus (FSB) speed according to the CPU load.

This feature can be found on some netbooks and is also known as the ASUS Super Hybrid Engine (SHE).

If the CPU load is lower or equal to the value specified by the load_threshold_powersave option, the plug-in sets the FSB speed to the value specified by the she_powersave option. If the CPU load is higher or equal to the value specified by the load_threshold_normal option, it sets the FSB speed to the value specified by the she_normal option.

Static tuning is not supported and the plug-in is transparently disabled if TuneD does not detect the hardware support for this feature.

net
Configures the Wake-on-LAN functionality to the values specified by the wake_on_lan option. It uses the same syntax as the ethtool utility. It also dynamically changes the interface speed according to the interface utilization.
sysctl

Sets various sysctl settings specified by the plug-in options.

The syntax is name=value, where name is the same as the name provided by the sysctl utility.

Use the sysctl plug-in if you need to change system settings that are not covered by other plug-ins available in TuneD. If the settings are covered by some specific plug-ins, prefer these plug-ins.
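
For example, a sysctl plug-in section might look like the following (a sketch; the parameters and values are illustrative):

[sysctl]
net.core.somaxconn=2048
vm.dirty_ratio=10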

usb

Sets autosuspend timeout of USB devices to the value specified by the autosuspend parameter.

The value 0 means that autosuspend is disabled.

vm

Enables or disables transparent huge pages depending on the value of the transparent_hugepages option.

Valid values of the transparent_hugepages option are:

  • "always"
  • "never"
  • "madvise"
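
For example, disabling transparent huge pages with the vm plug-in might look like the following (a minimal sketch):

[vm]
transparent_hugepages=never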
audio

Sets the autosuspend timeout for audio codecs to the value specified by the timeout option.

Currently, the snd_hda_intel and snd_ac97_codec codecs are supported. The value 0 means that the autosuspend is disabled. You can also enforce the controller reset by setting the Boolean option reset_controller to true.

disk

Sets the disk elevator to the value specified by the elevator option.

It also sets:

  • APM to the value specified by the apm option
  • Scheduler quantum to the value specified by the scheduler_quantum option
  • Disk spindown timeout to the value specified by the spindown option
  • Disk readahead to the value specified by the readahead parameter
  • The current disk readahead to a value multiplied by the constant specified by the readahead_multiply option

In addition, this plug-in dynamically changes the advanced power management and spindown timeout setting for the drive according to the current drive utilization. The dynamic tuning can be controlled by the Boolean option dynamic and is enabled by default.

scsi_host

Tunes options for SCSI hosts.

It sets Aggressive Link Power Management (ALPM) to the value specified by the alpm option.

mounts
Enables or disables barriers for mounts according to the Boolean value of the disable_barriers option.
script

Executes an external script or binary when the profile is loaded or unloaded. You can choose an arbitrary executable.

Important

The script plug-in is provided mainly for compatibility with earlier releases. Prefer other TuneD plug-ins if they cover the required functionality.

TuneD calls the executable with one of the following arguments:

  • start when loading the profile
  • stop when unloading the profile

You need to correctly implement the stop action in your executable and revert all settings that you changed during the start action. Otherwise, the roll-back step after changing your TuneD profile will not work.

Bash scripts can import the /usr/lib/tuned/functions Bash library and use the functions defined there. Use these functions only for functionality that is not natively provided by TuneD. If a function name starts with an underscore, such as _wifi_set_power_level, consider the function private and do not use it in your scripts, because it might change in the future.

Specify the path to the executable using the script parameter in the plug-in configuration.

Example 3.6. Running a Bash script from a profile

To run a Bash script named script.sh that is located in the profile directory, use:

[script]
script=${i:PROFILE_DIR}/script.sh
sysfs

Sets various sysfs settings specified by the plug-in options.

The syntax is name=value, where name is the sysfs path to use.

Use this plug-in if you need to change settings that are not covered by other plug-ins. Prefer specific plug-ins if they cover the required settings.

video

Sets various powersave levels on video cards. Currently, only the Radeon cards are supported.

The powersave level can be specified by using the radeon_powersave option. Supported values are:

  • default
  • auto
  • low
  • mid
  • high
  • dynpm
  • dpm-battery
  • dpm-balanced
  • dpm-performance

For details, see www.x.org. Note that this plug-in is experimental and the option might change in future releases.

bootloader

Adds options to the kernel command line. This plug-in supports only the GRUB 2 boot loader.

Customized non-standard location of the GRUB 2 configuration file can be specified by the grub2_cfg_file option.

The kernel options are added to the current GRUB configuration and its templates. The system needs to be rebooted for the kernel options to take effect.

Switching to another profile or manually stopping the TuneD service removes the additional options. If you shut down or reboot the system, the kernel options persist in the grub.cfg file.

The kernel options can be specified by the following syntax:

cmdline=arg1 arg2 ... argN

Example 3.7. Modifying the kernel command line

For example, to add the quiet kernel option to a TuneD profile, include the following lines in the tuned.conf file:

[bootloader]
cmdline=quiet

The following is an example of a custom profile that adds the isolcpus=2 option to the kernel command line:

[bootloader]
cmdline=isolcpus=2
service

Handles various sysvinit, sysv-rc, openrc, and systemd services specified by the plug-in options.

The syntax is service.service_name=command[,file:file].

Supported service-handling commands are:

  • start
  • stop
  • enable
  • disable

Separate multiple commands using either a comma (,) or a semicolon (;). If the directives conflict, the service plugin uses the last listed one.

Use the optional file:file directive to install an overlay configuration file, file, for systemd only. Other init systems ignore this directive. The service plugin copies overlay configuration files to /etc/systemd/system/service_name.service.d/ directories. Once profiles are unloaded, the service plugin removes these directories if they are empty.

Note

The service plugin only operates on the current runlevel with non-systemd init systems.

Example 3.8. Starting and enabling the sendmail service with an overlay file

[service]
service.sendmail=start,enable,file:${i:PROFILE_DIR}/tuned-sendmail.conf

The internal variable ${i:PROFILE_DIR} points to the directory the plugin loads the profile from.

scheduler
Offers a variety of options for the tuning of scheduling priorities, CPU core isolation, and process, thread, and IRQ affinities.

For specifics of the different options available, see Functionalities of the scheduler TuneD plug-in.

3.9. Functionalities of the scheduler TuneD plugin

Use the scheduler TuneD plugin to control and tune scheduling priorities, CPU core isolation, and process, thread, and IRQ affinities.

CPU isolation

To prevent processes, threads, and IRQs from using certain CPUs, use the isolated_cores option. It changes process and thread affinities, IRQ affinities, and sets the default_smp_affinity parameter for IRQs.

The CPU affinity mask is adjusted for all processes and threads matching the ps_whitelist option, subject to success of the sched_setaffinity() system call. The default setting of the ps_whitelist regular expression is .* to match all processes and thread names. To exclude certain processes and threads, use the ps_blacklist option. The value of this option is also interpreted as a regular expression. Process and thread names are matched against that expression. Profile rollback enables all matching processes and threads to run on all CPUs, and restores the IRQ settings prior to the profile application.

Multiple regular expressions separated by ; for the ps_whitelist and ps_blacklist options are supported. Escaped semicolon \; is taken literally.

Example 3.9. Isolate CPUs 2-4

The following configuration isolates CPUs 2-4. Processes and threads that match the ps_blacklist regular expression can use any CPUs regardless of the isolation:

[scheduler]
isolated_cores=2-4
ps_blacklist=.*pmd.*;.*PMD.*;^DPDK;.*qemu-kvm.*

IRQ SMP affinity

The /proc/irq/default_smp_affinity file contains a bitmask representing the default target CPU cores on a system for all inactive interrupt request (IRQ) sources. Once an IRQ is activated or allocated, the value in the /proc/irq/default_smp_affinity file determines the IRQ’s affinity bitmask.

The default_irq_smp_affinity parameter controls what TuneD writes to the /proc/irq/default_smp_affinity file. The default_irq_smp_affinity parameter supports the following values and behaviors:

calc

Calculates the content of the /proc/irq/default_smp_affinity file from the isolated_cores parameter. An inversion of the isolated_cores parameter calculates the non-isolated cores.

The intersection of the non-isolated cores and the previous content of the /proc/irq/default_smp_affinity file is then written to the /proc/irq/default_smp_affinity file.

This is the default behavior if the default_irq_smp_affinity parameter is omitted.

ignore
TuneD does not modify the /proc/irq/default_smp_affinity file.
A CPU list

Takes the form of a single number such as 1, a comma separated list such as 1,3, or a range such as 3-5.

Unpacks the CPU list and writes it directly to the /proc/irq/default_smp_affinity file.

Example 3.10. Setting the default IRQ smp affinity using an explicit CPU list

The following example uses an explicit CPU list to set the default IRQ SMP affinity to CPUs 0 and 2:

[scheduler]
isolated_cores=1,3
default_irq_smp_affinity=0,2

Scheduling policy

To adjust scheduling policy, priority and affinity for a group of processes or threads, use the following syntax:

group.groupname=rule_prio:sched:prio:affinity:regex

where rule_prio defines internal TuneD priority of the rule. Rules are sorted based on priority. This is needed for inheritance to be able to reorder previously defined rules. Equal rule_prio rules should be processed in the order they were defined. However, this is Python interpreter dependent. To disable an inherited rule for groupname, use:

group.groupname=

sched must be one of the following:

f
for first in, first out (FIFO)
b
for batch
r
for round robin
o
for other
*
for do not change

affinity is CPU affinity in hexadecimal. Use * for no change.

prio is scheduling priority (see chrt -m).

regex is a Python regular expression. It is matched against the output of the ps -eo cmd command.

Any given process name can match more than one group. In such cases, the last matching regex determines the priority and scheduling policy.

Example 3.11. Setting scheduling policies and priorities

The following example sets the scheduling policy and priorities to kernel threads and watchdog:

[scheduler]
group.kthreads=0:*:1:*:\[.*\]$
group.watchdog=0:f:99:*:\[watchdog.*\]

The scheduler plugin uses a perf event loop to identify newly created processes. By default, it listens to perf.RECORD_COMM and perf.RECORD_EXIT events.

Setting the perf_process_fork parameter to true tells the plug-in to also listen to perf.RECORD_FORK events, meaning that child processes created by the fork() system call are processed.

Note

Processing perf events can pose a significant CPU overhead.

The CPU overhead of the scheduler plugin can be mitigated by using the scheduler runtime option and setting it to 0. This completely disables the dynamic scheduler functionality and the perf events are not monitored and acted upon. The disadvantage of this is that the process and thread tuning will be done only at profile application.

Example 3.12. Disabling the dynamic scheduler functionality

The following example disables the dynamic scheduler functionality while also isolating CPUs 1 and 3:

[scheduler]
runtime=0
isolated_cores=1,3

The mmapped buffer is used for perf events. Under heavy loads, this buffer might overflow and as a result the plugin might start missing events and not processing some newly created processes. In such cases, use the perf_mmap_pages parameter to increase the buffer size. The value of the perf_mmap_pages parameter must be a power of 2. If the perf_mmap_pages parameter is not manually set, a default value of 128 is used.
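
For example, a scheduler section that also processes fork() events and enlarges the perf buffer might look like the following (a sketch; the buffer size is illustrative but must be a power of 2):

[scheduler]
perf_process_fork=true
perf_mmap_pages=512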

Confinement using cgroups

The scheduler plugin supports process and thread confinement using cgroups v1.

The cgroup_mount_point option specifies the path to mount the cgroup file system, or, where TuneD expects it to be mounted. If it is unset, /sys/fs/cgroup/cpuset is expected.

If the cgroup_groups_init option is set to 1, TuneD creates and removes all cgroups defined with the cgroup* options. This is the default behavior. If the cgroup_groups_init option is set to 0, the cgroups must be preset by other means.

If the cgroup_mount_point_init option is set to 1, TuneD creates and removes the cgroup mount point. It implies cgroup_groups_init = 1. If the cgroup_mount_point_init option is set to 0, you must preset the cgroups mount point by other means. This is the default behavior.

The cgroup_for_isolated_cores option is the cgroup name for the isolated_cores option functionality. For example, if a system has 4 CPUs, isolated_cores=1 means that TuneD moves all processes and threads to CPUs 0, 2, and 3. The scheduler plug-in isolates the specified core by writing the calculated CPU affinity to the cpuset.cpus control file of the specified cgroup and moves all the matching processes and threads to this group. If this option is unset, classic cpuset affinity using sched_setaffinity() sets the CPU affinity.

The cgroup.cgroup_name option defines affinities for arbitrary cgroups. You can even use hierarchic cgroups, but you must specify the hierarchy in the correct order. TuneD does not do any sanity checks here, with the exception that it forces the cgroup to be in the location specified by the cgroup_mount_point option.

The syntax of the scheduler options starting with group. has been augmented to accept cgroup.cgroup_name instead of the hexadecimal affinity. The matching processes are moved to the cgroup cgroup_name. You can also use cgroups that are not defined by the cgroup. option described above, for example, cgroups not managed by TuneD.

All cgroup names are sanitized by replacing all periods (.) with slashes (/). This prevents the plugin from writing outside the location specified by the cgroup_mount_point option.

Example 3.13. Using cgroups v1 with the scheduler plug-in

The following example creates 2 cgroups, group1 and group2. It sets the cgroup group1 affinity to CPU 2 and the cgroup group2 to CPUs 0 and 2. Given a 4 CPU setup, the isolated_cores=1 option moves all processes and threads to CPU cores 0, 2, and 3. Processes and threads specified by the ps_blacklist regular expression are not moved.

[scheduler]
cgroup_mount_point=/sys/fs/cgroup/cpuset
cgroup_mount_point_init=1
cgroup_groups_init=1
cgroup_for_isolated_cores=group
cgroup.group1=2
cgroup.group2=0,2

group.ksoftirqd=0:f:2:cgroup.group1:ksoftirqd.*
ps_blacklist=ksoftirqd.*;rcuc.*;rcub.*;ktimersoftd.*
isolated_cores=1

The cgroup_ps_blacklist option excludes processes belonging to the specified cgroups. The regular expression specified by this option is matched against cgroup hierarchies from /proc/PID/cgroups. Commas (,) separate cgroups v1 hierarchies from /proc/PID/cgroups before regular expression matching. The following is an example of content the regular expression is matched against:

10:hugetlb:/,9:perf_event:/,8:blkio:/

Multiple regular expressions can be separated by semicolons (;). The semicolon represents a logical 'or' operator.

Example 3.14. Excluding processes from the scheduler using cgroups

In the following example, the scheduler plug-in moves all processes away from core 1, except for processes which belong to cgroup /daemons. The \b string is a regular expression metacharacter that matches a word boundary.

[scheduler]
isolated_cores=1
cgroup_ps_blacklist=:/daemons\b

In the following example, the scheduler plugin excludes all processes which belong to a cgroup with a hierarchy-ID of 8 and controller-list blkio.

[scheduler]
isolated_cores=1
cgroup_ps_blacklist=\b8:blkio:
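
Multiple patterns can also be combined in one option by separating them with a semicolon. The following sketch simply joins the two expressions above and excludes processes that match either of them:

[scheduler]
isolated_cores=1
cgroup_ps_blacklist=:/daemons\b;\b8:blkio: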

Recent kernels moved some sched_ and numa_balancing_ kernel run-time parameters from the /proc/sys/kernel directory managed by the sysctl utility, to debugfs, typically mounted under the /sys/kernel/debug directory. TuneD provides an abstraction mechanism for the following parameters via the scheduler plugin where, based on the kernel used, TuneD writes the specified value to the correct location:

  • sched_min_granularity_ns
  • sched_latency_ns
  • sched_wakeup_granularity_ns
  • sched_tunable_scaling
  • sched_migration_cost_ns
  • sched_nr_migrate
  • numa_balancing_scan_delay_ms
  • numa_balancing_scan_period_min_ms
  • numa_balancing_scan_period_max_ms
  • numa_balancing_scan_size_mb

    Example 3.15. Set tasks' "cache hot" value for migration decisions.

    On older kernels, setting the following parameter meant that sysctl wrote a value of 500000 to the /proc/sys/kernel/sched_migration_cost_ns file:

    [sysctl]
    kernel.sched_migration_cost_ns=500000

    This is, on more recent kernels, equivalent to setting the following parameter via the scheduler plugin:

    [scheduler]
    sched_migration_cost_ns=500000

    This means that TuneD writes a value of 500000 to the /sys/kernel/debug/sched/migration_cost_ns file.

3.10. Variables in TuneD profiles

Variables expand at run time when a TuneD profile is activated.

Using TuneD variables reduces the amount of necessary typing in TuneD profiles.

There are no predefined variables in TuneD profiles. You can define your own variables by creating the [variables] section in a profile and using the following syntax:

[variables]

variable_name=value

To expand the value of a variable in a profile, use the following syntax:

${variable_name}

Example 3.16. Isolating CPU cores using variables

In the following example, the ${isolated_cores} variable expands to 1,2; hence the kernel boots with the isolcpus=1,2 option:

[variables]
isolated_cores=1,2

[bootloader]
cmdline=isolcpus=${isolated_cores}

The variables can be specified in a separate file. For example, you can add the following lines to tuned.conf:

[variables]
include=/etc/tuned/my-variables.conf

[bootloader]
cmdline=isolcpus=${isolated_cores}

If you add the isolated_cores=1,2 option to the /etc/tuned/my-variables.conf file, the kernel boots with the isolcpus=1,2 option.

Additional resources

  • tuned.conf(5) man page on your system

3.11. Built-in functions in TuneD profiles

Built-in functions expand at run time when a TuneD profile is activated.

You can:

  • Use various built-in functions together with TuneD variables
  • Create custom functions in Python and add them to TuneD in the form of plug-ins

To call a function, use the following syntax:

${f:function_name:argument_1:argument_2}

To expand the directory path where the profile and the tuned.conf file are located, use the PROFILE_DIR function, which requires special syntax:

${i:PROFILE_DIR}

Example 3.17. Isolating CPU cores using variables and built-in functions

In the following example, the ${non_isolated_cores} variable expands to 0,3-5, and the cpulist_invert built-in function is called with the 0,3-5 argument:

[variables]
non_isolated_cores=0,3-5

[bootloader]
cmdline=isolcpus=${f:cpulist_invert:${non_isolated_cores}}

The cpulist_invert function inverts the list of CPUs. For a 6-CPU machine, the inversion is 1,2, and the kernel boots with the isolcpus=1,2 command-line option.

Additional resources

  • tuned.conf(5) man page on your system

3.12. Built-in functions available in TuneD profiles

The following built-in functions are available in all TuneD profiles:

PROFILE_DIR
Returns the directory path where the profile and the tuned.conf file are located.
exec
Executes a process and returns its output.
assertion
Compares two arguments. If they do not match, the function logs text from the first argument and aborts profile loading.
assertion_non_equal
Compares two arguments. If they match, the function logs text from the first argument and aborts profile loading.
kb2s
Converts kilobytes to disk sectors.
s2kb
Converts disk sectors to kilobytes.
strip
Creates a string from all passed arguments and deletes both leading and trailing white space.
virt_check

Checks whether TuneD is running inside a virtual machine (VM) or on bare metal:

  • Inside a VM, the function returns the first argument.
  • On bare metal, the function returns the second argument, even in case of an error.
cpulist_invert
Inverts a list of CPUs to make its complement. For example, on a system with 4 CPUs, numbered from 0 to 3, the inversion of the list 0,2,3 is 1.
cpulist2hex
Converts a CPU list to a hexadecimal CPU mask.
cpulist2hex_invert
Converts a CPU list to a hexadecimal CPU mask and inverts it.
hex2cpulist
Converts a hexadecimal CPU mask to a CPU list.
cpulist_online
Checks whether the CPUs from the list are online. Returns the list containing only online CPUs.
cpulist_present
Checks whether the CPUs from the list are present. Returns the list containing only present CPUs.
cpulist_unpack
Unpacks a CPU list in the form of 1-3,4 to 1,2,3,4.
cpulist_pack
Packs a CPU list in the form of 1,2,3,5 to 1-3,5.
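
As a sketch of combining the function call syntax from the previous section with one of these functions, the following hypothetical profile passes only the online CPUs from a candidate list to the isolcpus kernel option. The candidate_cores variable name is illustrative:

[variables]
candidate_cores=1,2,3

[bootloader]
cmdline=isolcpus=${f:cpulist_online:${candidate_cores}}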

3.13. Creating new TuneD profiles

This procedure creates a new TuneD profile with custom performance rules.

Prerequisites

Procedure

  1. In the /etc/tuned/ directory, create a new directory named the same as the profile that you want to create:

    # mkdir /etc/tuned/my-profile
  2. In the new directory, create a file named tuned.conf. Add a [main] section and plug-in definitions in it, according to your requirements.

    For example, see the configuration of the balanced profile:

    [main]
    summary=General non-specialized TuneD profile
    
    [cpu]
    governor=conservative
    energy_perf_bias=normal
    
    [audio]
    timeout=10
    
    [video]
    radeon_powersave=dpm-balanced, auto
    
    [scsi_host]
    alpm=medium_power
  3. To activate the profile, use:

    # tuned-adm profile my-profile
  4. Verify that the TuneD profile is active and the system settings are applied:

    $ tuned-adm active
    
    Current active profile: my-profile
    $ tuned-adm verify
    
    Verification succeeded, current system settings match the preset profile.
    See tuned log file ('/var/log/tuned/tuned.log') for details.

Additional resources

  • tuned.conf(5) man page on your system

3.14. Modifying existing TuneD profiles

This procedure creates a modified child profile based on an existing TuneD profile.

Prerequisites

Procedure

  1. In the /etc/tuned/ directory, create a new directory named the same as the profile that you want to create:

    # mkdir /etc/tuned/modified-profile
  2. In the new directory, create a file named tuned.conf, and set the [main] section as follows:

    [main]
    include=parent-profile

    Replace parent-profile with the name of the profile you are modifying.

  3. Include your profile modifications.

    Example 3.18. Lowering swappiness in the throughput-performance profile

    To use the settings from the throughput-performance profile and change the value of vm.swappiness to 5, instead of the default 10, use:

    [main]
    include=throughput-performance
    
    [sysctl]
    vm.swappiness=5
  4. To activate the profile, use:

    # tuned-adm profile modified-profile
  5. Verify that the TuneD profile is active and the system settings are applied:

    $ tuned-adm active
    
    Current active profile: modified-profile
    $ tuned-adm verify
    
    Verification succeeded, current system settings match the preset profile.
    See tuned log file ('/var/log/tuned/tuned.log') for details.

Additional resources

  • tuned.conf(5) man page on your system

3.15. Setting the disk scheduler using TuneD

This procedure creates and enables a TuneD profile that sets a given disk scheduler for selected block devices. The setting persists across system reboots.

In the following commands and configuration, replace:

  • device with the name of the block device, for example sdf
  • selected-scheduler with the disk scheduler that you want to set for the device, for example bfq

Prerequisites

Procedure

  1. Optional: Select an existing TuneD profile on which your profile will be based. For a list of available profiles, see TuneD profiles distributed with RHEL.

    To see which profile is currently active, use:

    $ tuned-adm active
  2. Create a new directory to hold your TuneD profile:

    # mkdir /etc/tuned/my-profile
  3. Find the system unique identifier of the selected block device:

    $ udevadm info --query=property --name=/dev/device | grep -E '(WWN|SERIAL)'
    
    ID_WWN=0x5002538d00000000_
    ID_SERIAL=Generic-_SD_MMC_20120501030900000-0:0
    ID_SERIAL_SHORT=20120501030900000
    Note

    The command in this example returns all values identified as a World Wide Name (WWN) or serial number associated with the specified block device. Although it is preferred to use a WWN, the WWN is not always available for a given device, and any value returned by the example command is acceptable to use as the device system unique ID.

  4. Create the /etc/tuned/my-profile/tuned.conf configuration file. In the file, set the following options:

    1. Optional: Include an existing profile:

      [main]
      include=existing-profile
    2. Set the selected disk scheduler for the device that matches the WWN identifier:

      [disk]
      devices_udev_regex=IDNAME=device system unique id
      elevator=selected-scheduler

      Here:

      • Replace IDNAME with the name of the identifier being used (for example, ID_WWN).
      • Replace device system unique id with the value of the chosen identifier (for example, 0x5002538d00000000).

        To match multiple devices in the devices_udev_regex option, enclose the identifiers in parentheses and separate them with vertical bars:

        devices_udev_regex=(ID_WWN=0x5002538d00000000)|(ID_WWN=0x1234567800000000)
  5. Enable your profile:

    # tuned-adm profile my-profile

Verification

  1. Verify that the TuneD profile is active and applied:

    $ tuned-adm active
    
    Current active profile: my-profile
    $ tuned-adm verify
    
    Verification succeeded, current system settings match the preset profile.
    See TuneD log file ('/var/log/tuned/tuned.log') for details.
  2. Read the contents of the /sys/block/device/queue/scheduler file:

    # cat /sys/block/device/queue/scheduler
    
    [mq-deadline] kyber bfq none

    In the file name, replace device with the block device name, for example sdc.

    The active scheduler is listed in square brackets ([]).

Additional resources

Chapter 4. Reviewing a system using tuna interface

Use the tuna tool to adjust scheduler tunables, tune thread priority, IRQ handlers, and isolate CPU cores and sockets. Tuna reduces the complexity of performing tuning tasks.

The tuna tool performs the following operations:

  • Lists the CPUs on a system
  • Lists the interrupt requests (IRQs) currently running on a system
  • Changes policy and priority information about threads
  • Displays the current policies and priorities of a system

4.1. Installing the tuna tool

The tuna tool is designed to be used on a running system. This allows application-specific measurement tools to see and analyze system performance immediately after changes have been made.

Procedure

  • Install the tuna tool:

    # yum install tuna

Verification

  • Display the available tuna CLI options:

    # tuna -h

Additional resources

  • tuna(8) man page on your system

4.2. Viewing the system status using tuna tool

This procedure describes how to view the system status using the tuna command-line interface (CLI) tool.

Prerequisites

Procedure

  • To view the current policies and priorities:

    # tuna --show_threads
                thread
    pid   SCHED_ rtpri affinity             cmd
    1      OTHER     0      0,1            init
    2       FIFO    99        0     migration/0
    3      OTHER     0        0     ksoftirqd/0
    4       FIFO    99        0      watchdog/0
  • To view a specific thread corresponding to a PID or matching a command name:

    # tuna --threads=pid_or_cmd_list --show_threads

    The pid_or_cmd_list argument is a list of comma-separated PIDs or command-name patterns.

  • To tune CPUs using the tuna CLI, see Tuning CPUs using tuna tool.
  • To tune the IRQs using the tuna tool, see Tuning IRQs using tuna tool.
  • To save the changed configuration:

    # tuna --save=filename

    This command saves only currently running kernel threads. Processes that are not running are not saved.

Additional resources

  • tuna(8) man page on your system

4.3. Tuning CPUs using tuna tool

The tuna tool commands can target individual CPUs.

Using the tuna tool, you can:

Isolate CPUs
All tasks running on the specified CPU move to the next available CPU. Isolating a CPU makes it unavailable by removing it from the affinity mask of all threads.
Include CPUs
Allows tasks to run on the specified CPU.
Restore CPUs
Restores the specified CPU to its previous configuration.

This procedure describes how to tune CPUs using the tuna CLI.

Prerequisites

Procedure

  • To specify the list of CPUs to be affected by a command:

    # tuna --cpus=cpu_list [command]

    The cpu_list argument is a list of comma-separated CPU numbers, for example, --cpus=0,2. CPU lists can also be specified as a range, for example --cpus=1-3, which selects CPUs 1, 2, and 3.

    To add a specific CPU to the current cpu_list, for example, use --cpus=+0.

    Replace [command] with, for example, --isolate.

  • To isolate a CPU:

    # tuna --cpus=cpu_list --isolate
  • To include a CPU:

    # tuna --cpus=cpu_list --include
  • On a system with four or more processors, to make all the ssh threads run on CPUs 0 and 1, and all the http threads on CPUs 2 and 3:

    # tuna --cpus=0,1 --threads=ssh\* \
    --move --cpus=2,3 --threads=http\* --move

    This command performs the following operations sequentially:

    1. Selects CPUs 0 and 1.
    2. Selects all threads that begin with ssh.
    3. Moves the selected threads to the selected CPUs. Tuna sets the affinity mask of threads starting with ssh to the appropriate CPUs. The CPUs can be expressed numerically as 0 and 1, in hex mask as 0x3, or in binary as 11.
    4. Resets the CPU list to 2 and 3.
    5. Selects all threads that begin with http.
    6. Moves the selected threads to the specified CPUs. Tuna sets the affinity mask of threads starting with http to the specified CPUs. The CPUs can be expressed numerically as 2 and 3, in hex mask as 0xC, or in binary as 1100.

Verification

  • Display the current configuration and verify that the changes were performed as expected:

    # tuna --threads=gnome-sc\* --show_threads \
    --cpus=0 --move --show_threads --cpus=1 \
    --move --show_threads --cpus=+0 --move --show_threads
    
                           thread       ctxt_switches
         pid SCHED_ rtpri affinity voluntary nonvoluntary             cmd
       3861   OTHER     0      0,1     33997           58 gnome-screensav
                           thread       ctxt_switches
         pid SCHED_ rtpri affinity voluntary nonvoluntary             cmd
       3861   OTHER     0        0     33997           58 gnome-screensav
                           thread       ctxt_switches
         pid SCHED_ rtpri affinity voluntary nonvoluntary             cmd
       3861   OTHER     0        1     33997           58 gnome-screensav
                           thread       ctxt_switches
         pid SCHED_ rtpri affinity voluntary nonvoluntary             cmd
       3861   OTHER     0      0,1     33997           58 gnome-screensav

    This command performs the following operations sequentially:

    1. Selects all threads that begin with gnome-sc.
    2. Displays the selected threads to enable the user to verify their affinity mask and RT priority.
    3. Selects CPU 0.
    4. Moves the gnome-sc threads to the specified CPU, CPU 0.
    5. Shows the result of the move.
    6. Resets the CPU list to CPU 1.
    7. Moves the gnome-sc threads to the specified CPU, CPU 1.
    8. Displays the result of the move.
    9. Adds CPU 0 to the CPU list.
    10. Moves the gnome-sc threads to the specified CPUs, CPUs 0 and 1.
    11. Displays the result of the move.

Additional resources

  • /proc/cpuinfo file
  • tuna(8) man page on your system

4.4. Tuning IRQs using tuna tool

The /proc/interrupts file records the number of interrupts per IRQ, the type of interrupt, and the name of the device that is located at that IRQ.
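
You can inspect this file directly, for example:

$ cat /proc/interrupts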

This procedure describes how to tune the IRQs using the tuna tool.

Prerequisites

Procedure

  • To view the current IRQs and their affinity:

    # tuna --show_irqs
    # users            affinity
    0 timer                   0
    1 i8042                   0
    7 parport0                0
  • To specify the list of IRQs to be affected by a command:

    # tuna --irqs=irq_list [command]

    The irq_list argument is a list of comma-separated IRQ numbers or user-name patterns.

    Replace [command] with, for example, --spread.

  • To move an interrupt to a specified CPU:

    # tuna --irqs=128 --show_irqs
       # users            affinity
     128 iwlwifi           0,1,2,3
    
    # tuna --irqs=128 --cpus=3 --move

    Replace 128 with the irq_list argument and 3 with the cpu_list argument.

    The cpu_list argument is a list of comma-separated CPU numbers, for example, --cpus=0,2. For more information, see Tuning CPUs using tuna tool.

Verification

  • Compare the state of the selected IRQs before and after moving any interrupt to a specified CPU:

    # tuna --irqs=128 --show_irqs
       # users            affinity
     128 iwlwifi                 3

Additional resources

  • /proc/interrupts file
  • tuna(8) man page on your system

Chapter 5. Monitoring performance using RHEL system roles

As a system administrator, you can use the metrics RHEL system role with any Ansible Automation Platform control node to monitor the performance of a system.

5.1. Preparing a control node and managed nodes to use RHEL system roles

Before you can use individual RHEL system roles to manage services and settings, you must prepare the control node and managed nodes.

5.1.1. Preparing a control node on RHEL 8

Before using RHEL system roles, you must configure a control node. This system then configures the managed hosts from the inventory according to the playbooks.

Prerequisites

Procedure

  1. Create a user named ansible to manage and run playbooks:

    [root@control-node]# useradd ansible
  2. Switch to the newly created ansible user:

    [root@control-node]# su - ansible

    Perform the rest of the procedure as this user.

  3. Create an SSH public and private key:

    [ansible@control-node]$ ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/ansible/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase): <password>
    Enter same passphrase again: <password>
    ...

    Use the suggested default location for the key file.

  4. Optional: To prevent Ansible from prompting you for the SSH key password each time you establish a connection, configure an SSH agent.
  5. Create the ~/.ansible.cfg file with the following content:

    [defaults]
    inventory = /home/ansible/inventory
    remote_user = ansible
    
    [privilege_escalation]
    become = True
    become_method = sudo
    become_user = root
    become_ask_pass = True
    Note

    Settings in the ~/.ansible.cfg file have a higher priority and override settings from the global /etc/ansible/ansible.cfg file.

    With these settings, Ansible performs the following actions:

    • Manages hosts in the specified inventory file.
    • Uses the account set in the remote_user parameter when it establishes SSH connections to managed nodes.
    • Uses the sudo utility to execute tasks on managed nodes as the root user.
    • Prompts for the root password of the remote user every time you apply a playbook. This is recommended for security reasons.
  6. Create a ~/inventory file in INI or YAML format that lists the hostnames of managed hosts. You can also define groups of hosts in the inventory file. For example, the following is an inventory file in the INI format with three hosts and one host group named US:

    managed-node-01.example.com
    
    [US]
    managed-node-02.example.com ansible_host=192.0.2.100
    managed-node-03.example.com

    Note that the control node must be able to resolve the hostnames. If the DNS server cannot resolve certain hostnames, add the ansible_host parameter next to the host entry to specify its IP address.

  7. Install RHEL system roles:

    • On a RHEL host without Ansible Automation Platform, install the rhel-system-roles package:

      [root@control-node]# yum install rhel-system-roles

      This command installs the collections in the /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/ directory, and the ansible-core package as a dependency.

    • On Ansible Automation Platform, perform the following steps as the ansible user:

      1. Define Red Hat automation hub as the primary source for content in the ~/.ansible.cfg file, as shown in the sketch after this procedure.
      2. Install the redhat.rhel_system_roles collection from Red Hat automation hub:

        [ansible@control-node]$ ansible-galaxy collection install redhat.rhel_system_roles

        This command installs the collection in the ~/.ansible/collections/ansible_collections/redhat/rhel_system_roles/ directory.
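
      The following is a minimal sketch of defining Red Hat automation hub as the primary content source in the ~/.ansible.cfg file, as referenced in step 1 above. The server name automation_hub and both URLs are assumptions; use the values and the API token shown in your automation hub user interface:

        [galaxy]
        server_list = automation_hub

        [galaxy_server.automation_hub]
        url=https://console.redhat.com/api/automation-hub/
        auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
        token=<your-automation-hub-token>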

Next step

5.1.2. Preparing a managed node

Managed nodes are the systems listed in the inventory and which will be configured by the control node according to the playbook. You do not have to install Ansible on managed hosts.

Prerequisites

  • You prepared the control node. For more information, see Preparing a control node on RHEL 8.
  • You have SSH access from the control node.

    Important

    Direct SSH access as the root user is a security risk. To reduce this risk, you will create a local user on this node and configure a sudo policy when preparing a managed node. Ansible on the control node can then use the local user account to log in to the managed node and run playbooks as different users, such as root.

Procedure

  1. Create a user named ansible:

    [root@managed-node-01]# useradd ansible

    The control node later uses this user to establish an SSH connection to this host.

  2. Set a password for the ansible user:

    [root@managed-node-01]# passwd ansible
    Changing password for user ansible.
    New password: <password>
    Retype new password: <password>
    passwd: all authentication tokens updated successfully.

    You must enter this password when Ansible uses sudo to perform tasks as the root user.

  3. Install the ansible user’s SSH public key on the managed node:

    1. Log in to the control node as the ansible user, and copy the SSH public key to the managed node:

      [ansible@control-node]$ ssh-copy-id managed-node-01.example.com
      /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ansible/.ssh/id_rsa.pub"
      The authenticity of host 'managed-node-01.example.com (192.0.2.100)' can't be established.
      ECDSA key fingerprint is SHA256:9bZ33GJNODK3zbNhybokN/6Mq7hu3vpBXDrCxe7NAvo.
    2. When prompted, connect by entering yes:

      Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
      /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
      /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    3. When prompted, enter the password:

      ansible@managed-node-01.example.com's password: <password>
      
      Number of key(s) added: 1
      
      Now try logging into the machine, with:   "ssh 'managed-node-01.example.com'"
      and check to make sure that only the key(s) you wanted were added.
    4. Verify the SSH connection by remotely executing a command on the control node:

      [ansible@control-node]$ ssh managed-node-01.example.com whoami
      ansible
  4. Create a sudo configuration for the ansible user:

    1. Create and edit the /etc/sudoers.d/ansible file by using the visudo command:

      [root@managed-node-01]# visudo /etc/sudoers.d/ansible

      The benefit of using visudo over a normal editor is that this utility provides basic checks, such as for parse errors, before installing the file.

    2. Configure a sudoers policy in the /etc/sudoers.d/ansible file that meets your requirements, for example:

      • To grant permissions to the ansible user to run all commands as any user and group on this host after entering the ansible user’s password, use:

        ansible   ALL=(ALL) ALL
      • To grant permissions to the ansible user to run all commands as any user and group on this host without entering the ansible user’s password, use:

        ansible   ALL=(ALL) NOPASSWD: ALL

    Alternatively, configure a more fine-granular policy that matches your security requirements. For further details on sudoers policies, see the sudoers(5) manual page.

Verification

  1. Verify that you can execute commands from the control node on all managed nodes:

    [ansible@control-node]$ ansible all -m ping
    BECOME password: <password>
    managed-node-01.example.com | SUCCESS => {
        	"ansible_facts": {
        	    "discovered_interpreter_python": "/usr/bin/python3"
        	},
        	"changed": false,
        	"ping": "pong"
    }
    ...

    The hard-coded all group dynamically contains all hosts listed in the inventory file.

  2. Verify that privilege escalation works correctly by running the whoami utility on all managed nodes by using the Ansible command module:

    [ansible@control-node]$ ansible all -m command -a whoami
    BECOME password: <password>
    managed-node-01.example.com | CHANGED | rc=0 >>
    root
    ...

    If the command returns root, you configured sudo on the managed nodes correctly.

Additional resources

5.2. Introduction to the metrics RHEL system role

RHEL system roles is a collection of Ansible roles and modules that provide a consistent configuration interface to remotely manage multiple RHEL systems. The metrics system role configures performance analysis services for the local system and, optionally, includes a list of remote systems to be monitored by the local system. The metrics system role enables you to use pcp to monitor your system's performance without having to configure pcp separately, as the setup and deployment of pcp is handled by the playbook.

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.metrics/README.md file
  • /usr/share/doc/rhel-system-roles/metrics/ directory

5.3. Using the metrics RHEL system role to monitor your local system with visualization

This procedure describes how to use the metrics RHEL system role to monitor your local system while simultaneously provisioning data visualization via Grafana.

Prerequisites

  • You have prepared the control node and the managed nodes.
  • You are logged in to the control node as a user who can run playbooks on the managed nodes.
  • The account you use to connect to the managed nodes has sudo permissions on them.
  • localhost is configured in the inventory file on the control node:

    localhost ansible_connection=local

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Manage metrics
      hosts: localhost
      roles:
        - rhel-system-roles.metrics
      vars:
        metrics_graph_service: yes
        metrics_manage_firewall: true
        metrics_manage_selinux: true

    Because the metrics_graph_service boolean is set to yes, Grafana is automatically installed and provisioned with pcp added as a data source. Because metrics_manage_firewall and metrics_manage_selinux are both set to true, the metrics role uses the firewall and selinux system roles to manage the ports used by the metrics role.

  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml

Verification

  • To view visualization of the metrics being collected on your machine, access the grafana web interface as described in Accessing the Grafana web UI.

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.metrics/README.md file
  • /usr/share/doc/rhel-system-roles/metrics/ directory

5.4. Using the metrics RHEL system role to set up a fleet of individual systems to monitor themselves

This procedure describes how to use the metrics system role to set up a fleet of machines to monitor themselves.

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Configure a fleet of machines to monitor themselves
      hosts: managed-node-01.example.com
      roles:
        - rhel-system-roles.metrics
      vars:
        metrics_retention_days: 0
        metrics_manage_firewall: true
        metrics_manage_selinux: true

    Because metrics_manage_firewall and metrics_manage_selinux are both set to true, the metrics role uses the firewall and selinux roles to manage the ports used by the metrics role.

  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.metrics/README.md file
  • /usr/share/doc/rhel-system-roles/metrics/ directory

5.5. Using the metrics RHEL system role to monitor a fleet of machines centrally using your local machine

This procedure describes how to use the metrics system role to set up your local machine to centrally monitor a fleet of machines while also provisioning visualization of the data via grafana and querying of the data via redis.

Prerequisites

  • You have prepared the control node and the managed nodes.
  • You are logged in to the control node as a user who can run playbooks on the managed nodes.
  • The account you use to connect to the managed nodes has sudo permissions on them.
  • localhost is configured in the inventory file on the control node:

    localhost ansible_connection=local

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    - name: Set up your local machine to centrally monitor a fleet of machines
      hosts: localhost
      roles:
        - rhel-system-roles.metrics
      vars:
        metrics_graph_service: yes
        metrics_query_service: yes
        metrics_retention_days: 10
        metrics_monitored_hosts: ["database.example.com", "webserver.example.com"]
        metrics_manage_firewall: yes
        metrics_manage_selinux: yes

    Because the metrics_graph_service and metrics_query_service booleans are set to yes, Grafana is automatically installed and provisioned with pcp added as a data source, and the pcp data recording is indexed into Redis, allowing you to use the pcp querying language for complex querying of the data. Because metrics_manage_firewall and metrics_manage_selinux are both set to true, the metrics role uses the firewall and selinux roles to manage the ports used by the metrics role.

  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml

Verification

  • To view a graphical representation of the metrics being collected centrally by your machine and to query the data, access the grafana web interface as described in Accessing the Grafana web UI.

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.metrics/README.md file
  • /usr/share/doc/rhel-system-roles/metrics/ directory

5.6. Setting up authentication while monitoring a system by using the metrics RHEL system role

PCP supports the scram-sha-256 authentication mechanism through the Simple Authentication Security Layer (SASL) framework. The metrics RHEL system role automates the steps to set up authentication by using the scram-sha-256 authentication mechanism. This procedure describes how to set up authentication by using the metrics RHEL system role.

Prerequisites

Procedure

  1. Edit an existing playbook file, for example ~/playbook.yml, and add the authentication-related variables:

    ---
    - name: Set up authentication by using the scram-sha-256 authentication mechanism
      hosts: managed-node-01.example.com
      roles:
        - rhel-system-roles.metrics
      vars:
        metrics_retention_days: 0
        metrics_manage_firewall: true
        metrics_manage_selinux: true
        metrics_username: <username>
        metrics_password: <password>
  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml

Verification

  • Verify the sasl configuration:

    # pminfo -f -h "pcp://managed-node-01.example.com?username=<username>" disk.dev.read
    Password: <password>
    disk.dev.read
    inst [0 or "sda"] value 19540

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.metrics/README.md file
  • /usr/share/doc/rhel-system-roles/metrics/ directory

5.7. Using the metrics RHEL system role to configure and enable metrics collection for SQL Server

This procedure describes how to use the metrics RHEL system role to automate the configuration and enabling of metrics collection for Microsoft SQL Server via pcp on your local system.

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Configure and enable metrics collection for Microsoft SQL Server
      hosts: localhost
      roles:
        - rhel-system-roles.metrics
      vars:
        metrics_from_mssql: true
        metrics_manage_firewall: true
        metrics_manage_selinux: true

    Because metrics_manage_firewall and metrics_manage_selinux are both set to true, the metrics role uses the firewall and selinux roles to manage the ports used by the metrics role.

  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml

Verification

  • Use the pcp command to verify that SQL Server PMDA agent (mssql) is loaded and running:

    # pcp
    platform: Linux sqlserver.example.com 4.18.0-167.el8.x86_64 #1 SMP Sun Dec 15 01:24:23 UTC 2019 x86_64
     hardware: 2 cpus, 1 disk, 1 node, 2770MB RAM
     timezone: PDT+7
     services: pmcd pmproxy
         pmcd: Version 5.0.2-1, 12 agents, 4 clients
         pmda: root pmcd proc pmproxy xfs linux nfsclient mmv kvm mssql
               jbd2 dm
     pmlogger: primary logger: /var/log/pcp/pmlogger/sqlserver.example.com/20200326.16.31
         pmie: primary engine: /var/log/pcp/pmie/sqlserver.example.com/pmie.log

Additional resources

Chapter 6. Setting up PCP

Performance Co-Pilot (PCP) is a suite of tools, services, and libraries for monitoring, visualizing, storing, and analyzing system-level performance measurements.

6.1. Overview of PCP

You can add performance metrics using Python, Perl, C++, and C interfaces. Analysis tools can use the Python, C++, and C client APIs directly, and rich web applications can explore all available performance data using a JSON interface.

You can analyze data patterns by comparing live results with archived data.

Features of PCP:

  • Light-weight distributed architecture, which is useful during the centralized analysis of complex systems.
  • It allows the monitoring and management of real-time data.
  • It allows logging and retrieval of historical data.

PCP has the following components:

  • The Performance Metric Collector Daemon (pmcd) collects performance data from the installed Performance Metric Domain Agents (pmda). PMDAs can be individually loaded or unloaded on the system and are controlled by the PMCD on the same host.
  • Various client tools, such as pminfo or pmstat, can retrieve, display, archive, and process this data on the same host or over the network, as shown in the example after this list.
  • The pcp package provides the command-line tools and underlying functionality.
  • The pcp-gui package provides the graphical application. Install the pcp-gui package by executing the yum install pcp-gui command. For more information, see Visually tracing PCP log archives with the PCP Charts application.
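
For example, after the pmcd service is running (see the next section), the following command retrieves and prints the current values of a single metric. The metric name disk.dev.read is one of the standard metrics also used later in this guide:

$ pminfo -f disk.dev.read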

6.2. Installing and enabling PCP

To begin using PCP, install all the required packages and enable the PCP monitoring services.

This procedure describes how to install PCP using the pcp package. If you want to automate the PCP installation, install it using the pcp-zeroconf package. For more information about installing PCP by using pcp-zeroconf, see Setting up PCP with pcp-zeroconf.

Procedure

  1. Install the pcp package:

    # yum install pcp
  2. Enable and start the pmcd service on the host machine:

    # systemctl enable pmcd
    
    # systemctl start pmcd

Verification

  • Verify if the pmcd process is running on the host:

    # pcp
    
    Performance Co-Pilot configuration on workstation:
    
    platform: Linux workstation 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64
    hardware: 12 cpus, 2 disks, 1 node, 36023MB RAM
    timezone: CEST-2
    services: pmcd
    pmcd: Version 4.3.0-1, 8 agents
    pmda: root pmcd proc xfs linux mmv kvm jbd2

Additional resources

6.3. Deploying a minimal PCP setup

The minimal PCP setup collects performance statistics on Red Hat Enterprise Linux. The setup involves adding the minimum number of packages on a production system needed to gather data for further analysis.

You can analyze the resulting tar.gz file and the archive of the pmlogger output using various PCP tools and compare them with other sources of performance information.

Prerequisites

Procedure

  1. Update the pmlogger configuration:

    # pmlogconf -r /var/lib/pcp/config/pmlogger/config.default
  2. Start the pmcd and pmlogger services:

    # systemctl start pmcd.service
    
    # systemctl start pmlogger.service
  3. Execute the required operations to record the performance data.
  4. Stop the pmcd and pmlogger services:

    # systemctl stop pmcd.service
    
    # systemctl stop pmlogger.service
  5. Save the output to a tar.gz file named after the host name and the current date and time:

    # cd /var/log/pcp/pmlogger/
    
    # tar -czf $(hostname).$(date +%F-%Hh%M).pcp.tar.gz $(hostname)

    Extract this file and analyze the data using PCP tools.

Additional resources

6.4. System services and tools distributed with PCP

Performance Co-Pilot (PCP) includes various system services and tools you can use for measuring performance. The basic package pcp includes the system services and basic tools. Additional tools are provided with the pcp-system-tools, pcp-gui, and pcp-devel packages.

Roles of system services distributed with PCP

pmcd
The Performance Metric Collector Daemon (PMCD).
pmie
The Performance Metrics Inference Engine.
pmlogger
The performance metrics logger.
pmproxy
The realtime and historical performance metrics proxy, time series query and REST API service.

Tools distributed with base PCP package

pcp
Displays the current status of a Performance Co-Pilot installation.
pcp-vmstat
Provides a high-level system performance overview every 5 seconds. Displays information about processes, memory, paging, block IO, traps, and CPU activity.
pmconfig
Displays the values of configuration parameters.
pmdiff
Compares the average values for every metric in either one or two archives, in a given time window, for changes that are likely to be of interest when searching for performance regressions.
pmdumplog
Displays control, metadata, index, and state information from a Performance Co-Pilot archive file.
pmfind
Finds PCP services on the network.
pmie
An inference engine that periodically evaluates a set of arithmetic, logical, and rule expressions. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file.
pmieconf
Displays or sets configurable pmie variables.
pmiectl
Manages non-primary instances of pmie.
pminfo
Displays information about performance metrics. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file.
pmlc
Interactively configures active pmlogger instances.
pmlogcheck
Identifies invalid data in a Performance Co-Pilot archive file.
pmlogconf
Creates and modifies a pmlogger configuration file.
pmlogctl
Manages non-primary instances of pmlogger.
pmloglabel
Verifies, modifies, or repairs the label of a Performance Co-Pilot archive file.
pmlogsummary
Calculates statistical information about performance metrics stored in a Performance Co-Pilot archive file.
pmprobe
Determines the availability of performance metrics.
pmsocks
Allows access to Performance Co-Pilot hosts through a firewall.
pmstat
Periodically displays a brief summary of system performance.
pmstore
Modifies the values of performance metrics.
pmtrace
Provides a command line interface to the trace PMDA.
pmval
Displays the current value of a performance metric.

Tools distributed with the separately installed pcp-system-tools package

pcp-atop
Shows the system-level occupation of the most critical hardware resources from the performance point of view: CPU, memory, disk, and network.
pcp-atopsar
Generates a system-level activity report over a variety of system resource utilization. The report is generated from a raw logfile previously recorded using pmlogger or the -w option of pcp-atop.
pcp-dmcache
Displays information about configured Device Mapper Cache targets, such as: device IOPs, cache and metadata device utilization, as well as hit and miss rates and ratios for both reads and writes for each cache device.
pcp-dstat
Displays metrics of one system at a time. To display metrics of multiple systems, use the --host option.
pcp-free
Reports on free and used memory in a system.
pcp-htop
Displays all processes running on a system along with their command line arguments in a manner similar to the top command, but allows you to scroll vertically and horizontally as well as interact using a mouse. You can also view processes in a tree format and select and act on multiple processes at once.
pcp-ipcs
Displays information about the inter-process communication (IPC) facilities that the calling process has read access for.
pcp-mpstat
Reports CPU and interrupt-related statistics.
pcp-numastat
Displays NUMA allocation statistics from the kernel memory allocator.
pcp-pidstat
Displays information about individual tasks or processes running on the system, such as CPU percentage, memory and stack usage, scheduling, and priority. Reports live data for the local host by default.
pcp-shping
Samples and reports on the shell-ping service metrics exported by the pmdashping Performance Metrics Domain Agent (PMDA).
pcp-ss
Displays socket statistics collected by the pmdasockets PMDA.
pcp-tapestat
Reports I/O statistics for tape devices.
pcp-uptime
Displays how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.
pcp-verify
Inspects various aspects of a Performance Co-Pilot collector installation and reports on whether it is configured correctly for certain modes of operation.
pmiostat
Reports I/O statistics for SCSI devices (by default) or device-mapper devices (with the -x device-mapper option).
pmrep
Reports on selected, easily customizable, performance metrics values.

Tools distributed with the separately installed pcp-gui package

pmchart
Plots performance metrics values available through the facilities of the Performance Co-Pilot.
pmdumptext
Outputs the values of performance metrics collected live or from a Performance Co-Pilot archive.

Tools distributed with the separately installed pcp-devel package

pmclient
Displays high-level system performance metrics by using the Performance Metrics Application Programming Interface (PMAPI).
pmdbg
Displays available Performance Co-Pilot debug control flags and their values.
pmerr
Displays available Performance Co-Pilot error codes and their corresponding error messages.

6.5. PCP deployment architectures

Performance Co-Pilot (PCP) supports multiple deployment architectures, based on the scale of the PCP deployment, and offers many options to accomplish advanced setups.

The available scaling deployment setup variants, based on the deployment setups recommended by Red Hat, the sizing factors, and the configuration options, include:

Note

Because PCP version 5.3.0 is unavailable in Red Hat Enterprise Linux 8.4 and earlier minor versions of Red Hat Enterprise Linux 8, Red Hat recommends the localhost and pmlogger farm architectures.

For more information about known memory leaks in pmproxy in PCP versions before 5.3.0, see Memory leaks in pmproxy in PCP.

Localhost

Each service runs locally on the monitored machine. When you start a service without any configuration changes, this is the default deployment. Scaling beyond the individual node is not possible in this case.

By default, the deployment setup for Redis is standalone, localhost. However, Redis can optionally perform in a highly-available and highly scalable clustered fashion, where data is shared across multiple hosts. Another viable option is to deploy a Redis cluster in the cloud, or to utilize a managed Redis cluster from a cloud vendor.

Decentralized

The only difference between the localhost and decentralized setups is the centralized Redis service. In this model, the pmlogger service runs on each monitored host and retrieves metrics from a local pmcd instance. A local pmproxy service then exports the performance metrics to a central Redis instance.

Figure 6.1. Decentralized logging

Centralized logging - pmlogger farm

When the resource usage on the monitored hosts is constrained, another deployment option is a pmlogger farm, which is also known as centralized logging. In this setup, a single logger host executes multiple pmlogger processes, and each process is configured to retrieve performance metrics from a different remote pmcd host. The centralized logger host is also configured to execute the pmproxy service, which discovers the resulting PCP archive logs and loads the metric data into a Redis instance. A sketch of the pmlogger control file for such a setup follows the figure below.

Figure 6.2. Centralized logging - pmlogger farm

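The following is a minimal sketch of such a pmlogger farm configuration, based on the format of the default pmlogger control file shipped with the pcp package; the file name, host names, and options are illustrative. Each line tells the centralized logger host to run one non-primary pmlogger instance that records metrics from a remote pmcd:

# /etc/pcp/pmlogger/control.d/remote (hypothetical file name)
#Host                  P?  S?  directory                              args
host1.example.com      n   n   PCP_ARCHIVE_DIR/host1.example.com      -r -T24h10m -c config.host1.example.com
host2.example.com      n   n   PCP_ARCHIVE_DIR/host2.example.com      -r -T24h10m -c config.host2.example.com
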
Federated - multiple pmlogger farms

For large scale deployments, Red Hat recommends deploying multiple pmlogger farms in a federated fashion, for example, one pmlogger farm per rack or data center. Each pmlogger farm loads the metrics into a central Redis instance.

Figure 6.3. Federated - multiple pmlogger farms

Note

By default, the deployment setup for Redis is standalone, localhost. However, Redis can optionally perform in a highly-available and highly scalable clustered fashion, where data is shared across multiple hosts. Another viable option is to deploy a Redis cluster in the cloud, or to utilize a managed Redis cluster from a cloud vendor.

Additional resources

6.7. Sizing factors

The following are the sizing factors required for scaling:

Remote system size
The number of CPUs, disks, network interfaces, and other hardware resources affects the amount of data collected by each pmlogger on the centralized logging host.
Logged Metrics
The number and types of logged metrics play an important role. In particular, the per-process proc.* metrics require a large amount of disk space. For example, with the standard pcp-zeroconf setup and a 10s logging interval, the result is 11 MB without proc metrics versus 155 MB with proc metrics, a factor of more than 10. Additionally, the number of instances for each metric, for example the number of CPUs, block devices, and network interfaces, also impacts the required storage capacity.
Logging Interval
How often metrics are logged affects the storage requirements. The expected daily PCP archive file sizes are written to the pmlogger.log file for each pmlogger instance. These values are uncompressed estimates. Because PCP archives compress very well, approximately 10:1, the actual long-term disk space requirements can be determined for a particular site.
pmlogrewrite
After every PCP upgrade, the pmlogrewrite tool is executed and rewrites old archives if there were changes in the metric metadata between the previous version and the new version of PCP. The duration of this process scales linearly with the number of archives stored.

Additional resources

  • pmlogrewrite(1) and pmlogger(1) man pages on your system

6.8. Configuration options for PCP scaling

The following are the configuration options, which are required for scaling:

sysctl and rlimit settings
When archive discovery is enabled, pmproxy requires four descriptors for every pmlogger that it is monitoring or log-tailing, along with the additional file descriptors for the service logs and pmproxy client sockets, if any. Each pmlogger process uses about 20 file descriptors for the remote pmcd socket, archive files, service logs, and others. In total, this can exceed the default 1024 soft limit on a system running around 200 pmlogger processes. The pmproxy service in pcp-5.3.0 and later automatically increases the soft limit to the hard limit. On earlier versions of PCP, tuning is required if a high number of pmlogger processes are to be deployed, and this can be accomplished by increasing the soft or hard limits for pmlogger. For more information, see How to set limits (ulimit) for services run by systemd.
Local Archives
The pmlogger service stores metrics of local and remote pmcds in the /var/log/pcp/pmlogger/ directory. To control the logging interval of the local system, update the /etc/pcp/pmlogger/control.d/configfile file and add -t X in the arguments, where X is the logging interval in seconds. To configure which metrics should be logged, execute pmlogconf /var/lib/pcp/config/pmlogger/config.clienthostname. This command deploys a configuration file with a default set of metrics, which can optionally be further customized. To specify retention settings, that is when to purge old PCP archives, update the /etc/sysconfig/pmlogger_timers file and specify PMLOGGER_DAILY_PARAMS="-E -k X", where X is the amount of days to keep PCP archives.
Redis

The pmproxy service sends logged metrics from pmlogger to a Redis instance. The following are the available two options to specify the retention settings in the /etc/pcp/pmproxy/pmproxy.conf configuration file:

  • stream.expire specifies the duration when stale metrics should be removed, that is metrics which were not updated in a specified amount of time in seconds.
  • stream.maxlen specifies the maximum number of metric values for one metric per host. This setting should be the retention time divided by the logging interval, for example 20160 for 14 days of retention and a 60s logging interval (60*60*24*14/60).
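
The following is a minimal sketch of the retention settings described above, using the 14-day retention and 60s logging interval example from this section. Place the stream.* keys in the section of your installed /etc/pcp/pmproxy/pmproxy.conf file where they are already defined:

# /etc/sysconfig/pmlogger_timers (excerpt): keep PCP archives for 14 days
PMLOGGER_DAILY_PARAMS="-E -k 14"

# /etc/pcp/pmproxy/pmproxy.conf (excerpt)
# remove metrics that were not updated for 14 days (in seconds)
stream.expire = 1209600
# keep at most 14 days of values at a 60s logging interval
stream.maxlen = 20160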

Additional resources

  • pmproxy(1), pmlogger(1), and sysctl(8) man pages on your system

6.9. Example: Analyzing the centralized logging deployment

The following results were gathered on a centralized logging setup, also known as pmlogger farm deployment, with a default pcp-zeroconf 5.3.0 installation, where each remote host is an identical container instance running pmcd on a server with 64 CPU cores, 376 GB RAM, and one disk attached.

The logging interval is 10s, proc metrics of remote nodes are not included, and the memory values refer to the Resident Set Size (RSS) value.

Table 6.2. Detailed utilization statistics for 10s logging interval

Number of Hosts                   10          50
PCP Archives Storage per Day      91 MB       522 MB
pmlogger Memory                   160 MB      580 MB
pmlogger Network per Day (In)     2 MB        9 MB
pmproxy Memory                    1.4 GB      6.3 GB
Redis Memory per Day              2.6 GB      12 GB

Table 6.3. Used resources depending on monitored hosts for 60s logging interval

Number of Hosts                   10          50          100
PCP Archives Storage per Day      20 MB       120 MB      271 MB
pmlogger Memory                   104 MB      524 MB      1049 MB
pmlogger Network per Day (In)     0.38 MB     1.75 MB     3.48 MB
pmproxy Memory                    2.67 GB     5.5 GB      9 GB
Redis Memory per Day              0.54 GB     2.65 GB     5.3 GB

Note

The pmproxy service queues Redis requests and employs Redis pipelining to speed up Redis queries. This can result in high memory usage. For troubleshooting this issue, see Troubleshooting high memory usage.

6.10. Example: Analyzing the federated setup deployment

The following results were observed on a federated setup, also known as multiple pmlogger farms, consisting of three centralized logging (pmlogger farm) setups, where each pmlogger farm was monitoring 100 remote hosts, that is 300 hosts in total.

This setup of the pmlogger farms is identical to the configuration mentioned in the Example: Analyzing the centralized logging deployment for 60s logging interval, except that the Redis servers were operating in cluster mode.

Table 6.4. Used resources depending on federated hosts for 60s logging interval

PCP Archives Storage per Day      277 MB
pmlogger Memory                   1058 MB
Network per Day (In/Out)          15.6 MB / 12.3 MB
pmproxy Memory                    6-8 GB
Redis Memory per Day              5.5 GB

Here, all values are per host. The network bandwidth is higher due to the inter-node communication of the Redis cluster.

6.11. Troubleshooting high memory usage

The following scenarios can result in high memory usage:

  • The pmproxy process is busy processing new PCP archives and does not have spare CPU cycles to process Redis requests and responses.
  • The Redis node or cluster is overloaded and cannot process incoming requests on time.

The pmproxy service daemon uses Redis streams and supports PCP tuning parameters that affect Redis memory usage and key retention. The /etc/pcp/pmproxy/pmproxy.conf file lists the available configuration options for pmproxy and the associated APIs.

The following procedure describes how to troubleshoot the high memory usage issue.

Prerequisites

  1. Install the pcp-pmda-redis package:

    # yum install pcp-pmda-redis
  2. Install the redis PMDA:

    # cd /var/lib/pcp/pmdas/redis && ./Install

Procedure

  • To troubleshoot high memory usage, execute the following command and observe the inflight column:

    $ pmrep :pmproxy
             backlog  inflight  reqs/s  resp/s   wait req err  resp err  changed  throttled
              byte     count   count/s  count/s  s/s  count/s   count/s  count/s   count/s
    14:59:08   0         0       N/A       N/A   N/A    N/A      N/A      N/A        N/A
    14:59:09   0         0    2268.9    2268.9    28     0        0       2.0        4.0
    14:59:10   0         0       0.0       0.0     0     0        0       0.0        0.0
    14:59:11   0         0       0.0       0.0     0     0        0       0.0        0.0

    This column shows how many Redis requests are in flight, that is, requests that have been queued or sent but for which no reply has been received yet.

    A high number indicates one of the following conditions:

    • The pmproxy process is busy processing new PCP archives and does not have spare CPU cycles to process Redis requests and responses.
    • The Redis node or cluster is overloaded and cannot process incoming requests on time.
  • To troubleshoot the high memory usage issue, reduce the number of pmlogger processes for this farm, and add another pmlogger farm. Use the federated setup with multiple pmlogger farms.

    If the Redis node is using 100% CPU for an extended amount of time, move it to a host with better performance or use a clustered Redis setup instead.

  • To view the pmproxy.redis.* metrics, use the following command:

    $ pminfo -ftd pmproxy.redis
    pmproxy.redis.responses.wait [wait time for responses]
        Data Type: 64-bit unsigned int  InDom: PM_INDOM_NULL 0xffffffff
        Semantics: counter  Units: microsec
        value 546028367374
    pmproxy.redis.responses.error [number of error responses]
        Data Type: 64-bit unsigned int  InDom: PM_INDOM_NULL 0xffffffff
        Semantics: counter  Units: count
        value 1164
    [...]
    pmproxy.redis.requests.inflight.bytes [bytes allocated for inflight requests]
        Data Type: 64-bit int  InDom: PM_INDOM_NULL 0xffffffff
        Semantics: discrete  Units: byte
        value 0
    
    pmproxy.redis.requests.inflight.total [inflight requests]
        Data Type: 64-bit unsigned int  InDom: PM_INDOM_NULL 0xffffffff
        Semantics: discrete  Units: count
        value 0
    [...]

    To view how many Redis requests are in flight, see the pmproxy.redis.requests.inflight.total metric. To view how many bytes are occupied by all current in-flight Redis requests, see the pmproxy.redis.requests.inflight.bytes metric.

    In general, the Redis request queue is zero, but it can build up with large pmlogger farms, which limits scalability and can cause high latency for pmproxy clients.

  • Use the pminfo command to view information about performance metrics. For example, to view the redis.* metrics, use the following command:

    $ pminfo -ftd redis
    redis.redis_build_id [Build ID]
        Data Type: string  InDom: 24.0 0x6000000
        Semantics: discrete  Units: count
        inst [0 or "localhost:6379"] value "87e335e57cffa755"
    redis.total_commands_processed [Total number of commands processed by the server]
        Data Type: 64-bit unsigned int  InDom: 24.0 0x6000000
        Semantics: counter  Units: count
        inst [0 or "localhost:6379"] value 595627069
    [...]
    
    redis.used_memory_peak [Peak memory consumed by Redis (in bytes)]
        Data Type: 32-bit unsigned int  InDom: 24.0 0x6000000
        Semantics: instant  Units: count
        inst [0 or "localhost:6379"] value 572234920
    [...]

    To view the peak memory usage, see the redis.used_memory_peak metric.

Additional resources

Chapter 7. Logging performance data with pmlogger

With the PCP tool you can log the performance metric values and replay them later. This allows you to perform a retrospective performance analysis.

Using the pmlogger tool, you can:

  • Create the archived logs of selected metrics on the system
  • Specify which metrics are recorded on the system and how often

7.1. Modifying the pmlogger configuration file with pmlogconf

When the pmlogger service is running, PCP logs a default set of metrics on the host.

Use the pmlogconf utility to check the default configuration. If the pmlogger configuration file does not exist, pmlogconf creates it with default metric values.

Prerequisites

Procedure

  1. Create or modify the pmlogger configuration file:

    # pmlogconf -r /var/lib/pcp/config/pmlogger/config.default
  2. Follow pmlogconf prompts to enable or disable groups of related performance metrics and to control the logging interval for each enabled group.

Additional resources

7.2. Editing the pmlogger configuration file manually

To create a tailored logging configuration with specific metrics and given intervals, edit the pmlogger configuration file manually. The default pmlogger configuration file is /var/lib/pcp/config/pmlogger/config.default. The configuration file specifies which metrics are logged by the primary logging instance.

In manual configuration, you can:

  • Record metrics which are not listed in the automatic configuration.
  • Choose custom logging frequencies.
  • Add PMDAs with application metrics.

Prerequisites

Procedure

  • Open and edit the /var/lib/pcp/config/pmlogger/config.default file to add specific metrics:

    # It is safe to make additions from here on ...
    #
    
    log mandatory on every 5 seconds {
        xfs.write
        xfs.write_bytes
        xfs.read
        xfs.read_bytes
    }
    
    log mandatory on every 10 seconds {
        xfs.allocs
        xfs.block_map
        xfs.transactions
        xfs.log
    
    }
    
    [access]
    disallow * : all;
    allow localhost : enquire;

Additional resources

7.3. Enabling the pmlogger service

The pmlogger service must be started and enabled to log the metric values on the local machine.

This procedure describes how to enable the pmlogger service.

Prerequisites

Procedure

  • Start and enable the pmlogger service:

    # systemctl start pmlogger
    
    # systemctl enable pmlogger

Verification

  • Verify if the pmlogger service is enabled:

    # pcp
    
    Performance Co-Pilot configuration on workstation:
    
    platform: Linux workstation 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64
    hardware: 12 cpus, 2 disks, 1 node, 36023MB RAM
    timezone: CEST-2
    services: pmcd
    pmcd: Version 4.3.0-1, 8 agents, 1 client
    pmda: root pmcd proc xfs linux mmv kvm jbd2
    pmlogger: primary logger: /var/log/pcp/pmlogger/workstation/20190827.15.54

Additional resources

7.4. Setting up a client system for metrics collection

This procedure describes how to set up a client system so that a central server can collect metrics from clients running PCP.

Prerequisites

Procedure

  1. Install the pcp-system-tools package:

    # yum install pcp-system-tools
  2. Configure an IP address for pmcd:

    # echo "-i 192.168.4.62" >>/etc/pcp/pmcd/pmcd.options

    Replace 192.168.4.62 with the IP address the client should listen on.

    By default, pmcd listens on localhost.

  3. Configure the firewall to add the public zone permanently:

    # firewall-cmd --permanent --zone=public --add-port=44321/tcp
    success
    
    # firewall-cmd --reload
    success
  4. Set an SELinux boolean:

    # setsebool -P pcp_bind_all_unreserved_ports on
  5. Enable the pmcd and pmlogger services:

    # systemctl enable pmcd pmlogger
    # systemctl restart pmcd pmlogger

Verification

  • Verify if the pmcd is correctly listening on the configured IP address:

    # ss -tlp | grep 44321
    LISTEN   0   5     127.0.0.1:44321   0.0.0.0:*   users:(("pmcd",pid=151595,fd=6))
    LISTEN   0   5  192.168.4.62:44321   0.0.0.0:*   users:(("pmcd",pid=151595,fd=0))
    LISTEN   0   5         [::1]:44321      [::]:*   users:(("pmcd",pid=151595,fd=7))

Additional resources

7.5. Setting up a central server to collect data

This procedure describes how to create a central server to collect metrics from clients running PCP.

Prerequisites

Procedure

  1. Install the pcp-system-tools package:

    # yum install pcp-system-tools
  2. Create the /etc/pcp/pmlogger/control.d/remote file with the following content:

    # DO NOT REMOVE OR EDIT THE FOLLOWING LINE
    $version=1.1
    
    192.168.4.13 n n PCP_ARCHIVE_DIR/rhel7u4a -r -T24h10m -c config.rhel7u4a
    192.168.4.14 n n PCP_ARCHIVE_DIR/rhel6u10a -r -T24h10m -c config.rhel6u10a
    192.168.4.62 n n PCP_ARCHIVE_DIR/rhel8u1a -r -T24h10m -c config.rhel8u1a

    Replace 192.168.4.13, 192.168.4.14 and 192.168.4.62 with the client IP addresses.

    Note

    In Red Hat Enterprise Linux 8.0, 8.1, and 8.2, use the following format for remote hosts in the control file: PCP_LOG_DIR/pmlogger/host_name.

  3. Enable the pmcd and pmlogger services:

    # systemctl enable pmcd pmlogger
    # systemctl restart pmcd pmlogger

Verification

  • Ensure that you can access the latest archive file from each directory:

    # for i in /var/log/pcp/pmlogger/rhel*/*.0; do pmdumplog -L $i; done
    Log Label (Log Format Version 2)
    Performance metrics from host rhel6u10a.local
      commencing Mon Nov 25 21:55:04.851 2019
      ending     Mon Nov 25 22:06:04.874 2019
    Archive timezone: JST-9
    PID for pmlogger: 24002
    Log Label (Log Format Version 2)
    Performance metrics from host rhel7u4a
      commencing Tue Nov 26 06:49:24.954 2019
      ending     Tue Nov 26 07:06:24.979 2019
    Archive timezone: CET-1
    PID for pmlogger: 10941
    [..]

    The archive files from the /var/log/pcp/pmlogger/ directory can be used for further analysis and graphing.

Additional resources

7.6. Systemd units and pmlogger

When you deploy the pmlogger service, either as a single host monitoring itself or a pmlogger farm with a single host collecting metrics from several remote hosts, there are several associated systemd service and timer units that are automatically deployed. These services and timers provide routine checks to ensure that your pmlogger instances are running, restart any missing instances, and perform archive management such as file compression.

The checking and housekeeping services typically deployed by pmlogger are:

pmlogger_daily.service
Runs daily, soon after midnight by default, to aggregate, compress, and rotate one or more sets of PCP archives. Also culls archives older than the limit, 2 weeks by default. Triggered by the pmlogger_daily.timer unit, which is required by the pmlogger.service unit.
pmlogger_check
Performs half-hourly checks that pmlogger instances are running. Restarts any missing instances and performs any required compression tasks. Triggered by the pmlogger_check.timer unit, which is required by the pmlogger.service unit.
pmlogger_farm_check
Checks the status of all configured pmlogger instances. Restarts any missing instances. Migrates all non-primary instances to the pmlogger_farm service. Triggered by the pmlogger_farm_check.timer, which is required by the pmlogger_farm.service unit that is itself required by the pmlogger.service unit.

These services are managed through a series of positive dependencies, meaning that they are all enabled upon activating the primary pmlogger instance. Note that although pmlogger_daily.service is disabled by default, the pmlogger_daily.timer unit, which is active through its dependency on pmlogger.service, still triggers pmlogger_daily.service to run.
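
To confirm that these timers are present and active, you can query systemd directly; a quick check, not part of the documented procedure, where the pmlogger* pattern matches all pmlogger-related timer units:

# systemctl list-timers 'pmlogger*'
# systemctl status pmlogger_daily.timer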

pmlogger_daily is also integrated with pmlogrewrite for automatically rewriting archives before merging. This helps to ensure metadata consistency amid changing production environments and PMDAs. For example, if pmcd on one monitored host is updated during the logging interval, the semantics for some metrics on the host might be updated, thus making the new archives incompatible with the previously recorded archives from that host. For more information see the pmlogrewrite(1) man page.

Managing systemd services triggered by pmlogger

You can create an automated custom archive management system for data collected by your pmlogger instances. This is done using control files. These control files are:

  • For the primary pmlogger instance:

    • /etc/pcp/pmlogger/control
    • /etc/pcp/pmlogger/control.d/local
  • For the remote hosts:

    • /etc/pcp/pmlogger/control.d/remote

      Replace remote with your desired file name.

      NOTE
      The primary pmlogger instance must run on the same host as the pmcd instance it connects to. A primary instance is not required, and you might not need one in your configuration if one central host collects data from several pmlogger instances connected to pmcd instances running on remote hosts.

The file should contain one line for each host to be logged. The default format of the primary logger instance that is automatically created looks similar to:

# === LOGGER CONTROL SPECIFICATIONS ===
#
#Host   	 P?  S?    directory   		 args

# local primary logger
LOCALHOSTNAME    y   n    PCP_ARCHIVE_DIR/LOCALHOSTNAME    -r -T24h10m -c config.default -v 100Mb

The fields are:

Host
The name of the host to be logged
P?
Stands for “Primary?” This field indicates if the host is the primary logger instance, y, or not, n. There can only be one primary logger across all the files in your configuration and it must be running on the same host as the pmcd it connects to.
S?
Stands for “Socks?” This field indicates if this logger instance needs to use the SOCKS protocol to connect to pmcd through a firewall, y, or not, n.
directory
All archives associated with this line are created in this directory.
args

Arguments passed to pmlogger.

The default values for the args field are:

-r
Report the archive sizes and growth rate.
-T24h10m
Specifies when to end logging for each day. This is typically the time when pmlogger_daily.service runs. The default value of 24h10m indicates that logging should end 24 hours and 10 minutes after it begins, at the latest.
-c config.default
Specifies which configuration file to use. This essentially defines what metrics to record.
-v 100Mb
Specifies the size at which one data volume is filled and another is created. After pmlogger switches to the new archive, the previously recorded one is compressed by either pmlogger_daily or pmlogger_check. An example entry combining these fields is shown after this list.
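
The following sketch shows what a control file entry for a remote host might look like when these fields are combined; the host name web01.example.com and the configuration file config.web01 are hypothetical:

# Host              P?  S?  directory                          args
web01.example.com   n   n   PCP_ARCHIVE_DIR/web01.example.com  -r -T24h10m -c config.web01 -v 100Mb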

Additional resources

  • pmlogger(1) and pmlogrewrite(1) man pages on your system
  • pmlogger_daily(1), pmlogger_check(1), and pmlogger.control(5) man pages on your system

7.7. Replaying the PCP log archives with pmrep

After recording the metric data, you can replay the PCP log archives. To export the logs to text files and import them into spreadsheets, use PCP utilities such as pcp2csv, pcp2xml, pmrep, or pmlogsummary.

Using the pmrep tool, you can:

  • View the log files
  • Parse the selected PCP log archive and export the values into an ASCII table
  • Extract the entire archive log or only select metric values from the log by specifying individual metrics on the command line

Prerequisites

Procedure

  • Display the data on the metric:

    $ pmrep --start @3:00am --archive 20211128 --interval 5seconds --samples 10 --output csv disk.dev.write
    Time,"disk.dev.write-sda","disk.dev.write-sdb"
    2021-11-28 03:00:00,,
    2021-11-28 03:00:05,4.000,5.200
    2021-11-28 03:00:10,1.600,7.600
    2021-11-28 03:00:15,0.800,7.100
    2021-11-28 03:00:20,16.600,8.400
    2021-11-28 03:00:25,21.400,7.200
    2021-11-28 03:00:30,21.200,6.800
    2021-11-28 03:00:35,21.000,27.600
    2021-11-28 03:00:40,12.400,33.800
    2021-11-28 03:00:45,9.800,20.600

    This example displays the data for the disk.dev.write metric, collected from an archive at a 5 second interval, in comma-separated values (CSV) format.

    Note

    Replace 20211128 in this example with a filename containing the pmlogger archive you want to display data for.
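
    The same archive can also be exported directly with pcp2csv; a brief sketch, assuming the same archive name and options comparable to the pmrep example above:

    $ pcp2csv --archive 20211128 --start @3:00am --interval 5seconds --samples 10 disk.dev.write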

Additional resources

Chapter 8. Monitoring performance with Performance Co-Pilot

Performance Co-Pilot (PCP) is a suite of tools, services, and libraries for monitoring, visualizing, storing, and analyzing system-level performance measurements.

As a system administrator, you can monitor the system’s performance using the PCP application in Red Hat Enterprise Linux 8.

8.1. Monitoring postfix with pmda-postfix

This procedure describes how to monitor performance metrics of the postfix mail server with pmda-postfix. It helps to check how many emails are received per second.

Prerequisites

Procedure

  1. Install the following packages:

    1. Install the pcp-system-tools:

      # yum install pcp-system-tools
    2. Install the pmda-postfix package to monitor postfix:

      # yum install pcp-pmda-postfix postfix
    3. Install the logging daemon:

      # yum install rsyslog
    4. Install the mail client for testing:

      # yum install mutt
  2. Enable the postfix and rsyslog services:

    # systemctl enable postfix rsyslog
    # systemctl restart postfix rsyslog
  3. Enable the SELinux boolean, so that pmda-postfix can access the required log files:

    # setsebool -P pcp_read_generic_logs=on
  4. Install the PMDA:

    # cd /var/lib/pcp/pmdas/postfix/
    
    # ./Install
    
    Updating the Performance Metrics Name Space (PMNS) ...
    Terminate PMDA if already installed ...
    Updating the PMCD control file, and notifying PMCD ...
    Waiting for pmcd to terminate ...
    Starting pmcd ...
    Check postfix metrics have appeared ... 7 metrics and 58 values

Verification

  • Verify the pmda-postfix operation:

    echo testmail | mutt root
  • Verify the available metrics:

    # pminfo postfix
    
    postfix.received
    postfix.sent
    postfix.queues.incoming
    postfix.queues.maildrop
    postfix.queues.hold
    postfix.queues.deferred
    postfix.queues.active
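
  • Optionally, report the rate of received mail; a brief sketch using pmval, where the 1 second interval and 5 second window are arbitrary values:

    # pmval -t 1 -T 5 postfix.received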

Additional resources

8.2. Visually tracing PCP log archives with the PCP Charts application

After recording metric data, you can replay the PCP log archives as graphs. The metrics are sourced from one or more live hosts, or alternatively from PCP log archives as a source of historical data. To customize the PCP Charts application interface to display the data from the performance metrics, you can use line plots, bar graphs, or utilization graphs.

Using the PCP Charts application, you can:

  • Replay the data in the PCP Charts application and use graphs to visualize the retrospective data alongside live data of the system.
  • Plot performance metric values into graphs.
  • Display multiple charts simultaneously.

Prerequisites

Procedure

  1. Launch the PCP Charts application from the command line:

    # pmchart

    Figure 8.1. PCP Charts application


    The pmtime server settings are located at the bottom. The start and pause buttons allow you to control:

    • The interval in which PCP polls the metric data
    • The date and time for the metrics of historical data
  2. Click File and then New Chart to select a metric from both the local machine and remote machines by specifying their host name or address. Advanced configuration options include the ability to manually set the axis values for the chart, and to manually choose the color of the plots.
  3. Record the views created in the PCP Charts application:

    The following options are available to take images of or record the views created in the PCP Charts application:

    • Click File and then Export to save an image of the current view.
    • Click Record and then Start to start a recording. Click Record and then Stop to stop the recording. After stopping the recording, the recorded metrics are archived to be viewed later.
  4. Optional: In the PCP Charts application, the main configuration file, known as the view, allows the metadata associated with one or more charts to be saved. This metadata describes all chart aspects, including the metrics used and the chart columns. Save the custom view configuration by clicking File and then Save View, and load the view configuration later.

    The following example of the PCP Charts application view configuration file describes a stacking chart graph showing the total number of bytes read and written to the given XFS file system loop1:

    #kmchart
    version 1
    
    chart title "Filesystem Throughput /loop1" style stacking antialiasing off
        plot legend "Read rate"   metric xfs.read_bytes   instance  "loop1"
        plot legend "Write rate"  metric xfs.write_bytes  instance  "loop1"

Additional resources

8.3. Collecting data from SQL server using PCP

With Red Hat Enterprise Linux 8.2 or later, the SQL Server agent is available in Performance Co-Pilot (PCP), which helps you to monitor and analyze database performance issues.

This procedure describes how to collect data for Microsoft SQL Server via pcp on your system.

Prerequisites

  • You have installed Microsoft SQL Server for Red Hat Enterprise Linux and established a 'trusted' connection to an SQL server.
  • You have installed the Microsoft ODBC driver for SQL Server for Red Hat Enterprise Linux.

Procedure

  1. Install PCP:

    # yum install pcp-zeroconf
  2. Install packages required for the pyodbc driver:

    # yum install gcc-c++ python3-devel unixODBC-devel
    
    # yum install python3-pyodbc
  3. Install the mssql agent:

    1. Install the Microsoft SQL Server domain agent for PCP:

      # yum install pcp-pmda-mssql
    2. Edit the /etc/pcp/mssql/mssql.conf file to configure the SQL server account’s username and password for the mssql agent. Ensure that the account you configure has access rights to performance data.

      username: user_name
      password: user_password

      Replace user_name with the SQL Server account and user_password with the SQL Server user password for this account.

  4. Install the agent:

    # cd /var/lib/pcp/pmdas/mssql
    # ./Install
    Updating the Performance Metrics Name Space (PMNS) ...
    Terminate PMDA if already installed ...
    Updating the PMCD control file, and notifying PMCD ...
    Check mssql metrics have appeared ... 168 metrics and 598 values
    [...]

Verification

  • Using the pcp command, verify if the SQL Server PMDA (mssql) is loaded and running:

    $ pcp
    Performance Co-Pilot configuration on rhel.local:
    
    platform: Linux rhel.local 4.18.0-167.el8.x86_64 #1 SMP Sun Dec 15 01:24:23 UTC 2019 x86_64
     hardware: 2 cpus, 1 disk, 1 node, 2770MB RAM
     timezone: PDT+7
     services: pmcd pmproxy
         pmcd: Version 5.0.2-1, 12 agents, 4 clients
         pmda: root pmcd proc pmproxy xfs linux nfsclient mmv kvm mssql
               jbd2 dm
     pmlogger: primary logger: /var/log/pcp/pmlogger/rhel.local/20200326.16.31
         pmie: primary engine: /var/log/pcp/pmie/rhel.local/pmie.log
  • View the complete list of metrics that PCP can collect from the SQL Server:

    # pminfo mssql
  • After viewing the list of metrics, you can report the rate of transactions. For example, to report on the overall transaction count per second, over a five second time window:

    # pmval -t 1 -T 5 mssql.databases.transactions
  • View the graphical chart of these metrics on your system by using the pmchart command. For more information, see Visually tracing PCP log archives with the PCP Charts application.

Additional resources

Chapter 9. Performance analysis of XFS with PCP

The XFS PMDA ships as part of the pcp package and is enabled by default during the installation. It is used to gather performance metric data of XFS file systems in Performance Co-Pilot (PCP).

You can use PCP to analyze XFS file system’s performance.

9.1. Installing XFS PMDA manually

If the XFS PMDA is not listed in the pcp configuration output, install the PMDA agent manually.

This procedure describes how to manually install the PMDA agent.

Prerequisites

Procedure

  1. Navigate to the xfs directory:

    # cd /var/lib/pcp/pmdas/xfs/
  2. Install the XFS PMDA manually:

    xfs]# ./Install
    
    You will need to choose an appropriate configuration for install of
    the “xfs” Performance Metrics Domain Agent (PMDA).
    
      collector     collect performance statistics on this system
      monitor       allow this system to monitor local and/or remote systems
      both          collector and monitor configuration for this system
    
    Please enter c(ollector) or m(onitor) or (both) [b]
    Updating the Performance Metrics Name Space (PMNS) ...
    Terminate PMDA if already installed ...
    Updating the PMCD control file, and notifying PMCD ...
    Waiting for pmcd to terminate ...
    Starting pmcd ...
    Check xfs metrics have appeared ... 149 metrics and 149 values
  3. Select the intended PMDA role by entering c for collector, m for monitor, or b for both. The PMDA installation script prompts you to specify one of the following PMDA roles:

    • The collector role allows the collection of performance metrics on the current system
    • The monitor role allows the system to monitor local systems, remote systems, or both

      The default option is both collector and monitor, which allows the XFS PMDA to operate correctly in most scenarios.

Verification

  • Verify that the pmcd process is running on the host and the XFS PMDA is listed as enabled in the configuration:

    # pcp
    
    Performance Co-Pilot configuration on workstation:
    
    platform: Linux workstation 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64
    hardware: 12 cpus, 2 disks, 1 node, 36023MB RAM
    timezone: CEST-2
    services: pmcd
    pmcd: Version 4.3.0-1, 8 agents
    pmda: root pmcd proc xfs linux mmv kvm jbd2

Additional resources

9.2. Examining XFS performance metrics with pminfo

PCP enables the XFS PMDA to report certain XFS metrics for each mounted XFS file system. This makes it easier to pinpoint specific mounted file system issues and evaluate performance.

The pminfo command provides per-device XFS metrics for each mounted XFS file system.

This procedure displays a list of all available metrics provided by the XFS PMDA.

Prerequisites

Procedure

  • Display the list of all available metrics provided by the XFS PMDA:

    # pminfo xfs
  • Display information for the individual metrics. The following examples examine specific XFS read and write metrics using the pminfo tool:

    • Display a short description of the xfs.write_bytes metric:

      # pminfo --oneline xfs.write_bytes
      
      xfs.write_bytes [number of bytes written in XFS file system write operations]
    • Display a long description of the xfs.read_bytes metric:

      # pminfo --helptext xfs.read_bytes
      
      xfs.read_bytes
      Help:
      This is the number of bytes read via read(2) system calls to files in
      XFS file systems. It can be used in conjunction with the read_calls
      count to calculate the average size of the read operations to file in
      XFS file systems.
    • Obtain the current performance value of the xfs.read_bytes metric:

      # pminfo --fetch xfs.read_bytes
      
      xfs.read_bytes
          value 4891346238
    • Obtain per-device XFS metrics with pminfo:

      # pminfo --fetch --oneline xfs.perdev.read xfs.perdev.write
      
      xfs.perdev.read [number of XFS file system read operations]
      inst [0 or "loop1"] value 0
      inst [0 or "loop2"] value 0
      
      xfs.perdev.write [number of XFS file system write operations]
      inst [0 or "loop1"] value 86
      inst [0 or "loop2"] value 0

Additional resources

9.3. Resetting XFS performance metrics with pmstore

With PCP, you can modify the values of certain metrics, especially if the metric acts as a control variable, such as the xfs.control.reset metric. To modify a metric value, use the pmstore tool.

This procedure describes how to reset XFS metrics using the pmstore tool.

Prerequisites

Procedure

  1. Display the value of a metric:

    $ pminfo -f xfs.write
    
    xfs.write
        value 325262
  2. Reset all the XFS metrics:

    # pmstore xfs.control.reset 1
    
    xfs.control.reset old value=0 new value=1

Verification

  • View the information after resetting the metric:

    $ pminfo --fetch xfs.write
    
    xfs.write
        value 0

Additional resources

9.4. PCP metric groups for XFS

The following table describes the available PCP metric groups for XFS.

Table 9.1. Metric groups for XFS

Metric Group

Metrics provided

xfs.*

General XFS metrics, including the read and write operation counts and read and write byte counts, along with counters for the number of times inodes are flushed or clustered and the number of failures to cluster.

xfs.allocs.*

xfs.alloc_btree.*

Range of metrics regarding the allocation of objects in the file system; these include the number of extent and block creations and frees, allocation tree lookups and compares, and extent record creation and deletion from the btree.

xfs.block_map.*

xfs.bmap_btree.*

Metrics include the number of block map read/write and block deletions, extent list operations for insertion, deletions and lookups. Also operations counters for compares, lookups, insertions and deletion operations from the blockmap.

xfs.dir_ops.*

Counters for directory operations on XFS file systems for creation, entry deletions, count of “getdent” operations.

xfs.transactions.*

Counters for the number of meta-data transactions, these include the count for the number of synchronous and asynchronous transactions along with the number of empty transactions.

xfs.inode_ops.*

Counters for the number of times that the operating system looked for an XFS inode in the inode cache with different outcomes. These count cache hits, cache misses, and so on.

xfs.log.*

xfs.log_tail.*

Counters for the number of log buffer writes over XFS file systems, including the number of blocks written to disk, as well as metrics for the number of log flushes and pinnings.

xfs.xstrat.*

Counts for the number of bytes of file data flushed out by the XFS flush daemon, along with counters for the number of buffers flushed to contiguous and non-contiguous space on disk.

xfs.attr.*

Counts for the number of attribute get, set, remove and list operations over all XFS file systems.

xfs.quota.*

Metrics for quota operation over XFS file systems, these include counters for number of quota reclaims, quota cache misses, cache hits and quota data reclaims.

xfs.buffer.*

Range of metrics regarding XFS buffer objects. Counters include the number of requested buffer calls, successful buffer locks, waited buffer locks, miss_locks, miss_retries and buffer hits when looking up pages.

xfs.btree.*

Metrics regarding the operations of the XFS btree.

xfs.control.reset

Configuration metrics which are used to reset the metric counters for the XFS stats. Control metrics are toggled by means of the pmstore tool.

9.5. Per-device PCP metric groups for XFS

The following table describes the available per-device PCP metric group for XFS.

Table 9.2. Per-device PCP metric groups for XFS

Metric Group

Metrics provided

xfs.perdev.*

General XFS metrics, including the read and write operation counts and read and write byte counts, along with counters for the number of times inodes are flushed or clustered and the number of failures to cluster.

xfs.perdev.allocs.*

xfs.perdev.alloc_btree.*

Range of metrics regarding the allocation of objects in the file system; these include the number of extent and block creations and frees, allocation tree lookups and compares, and extent record creation and deletion from the btree.

xfs.perdev.block_map.*

xfs.perdev.bmap_btree.*

Metrics include the number of block map read/write and block deletions, extent list operations for insertion, deletions and lookups. Also operations counters for compares, lookups, insertions and deletion operations from the blockmap.

xfs.perdev.dir_ops.*

Counters for directory operations of XFS file systems for creation, entry deletions, count of “getdent” operations.

xfs.perdev.transactions.*

Counters for the number of meta-data transactions, these include the count for the number of synchronous and asynchronous transactions along with the number of empty transactions.

xfs.perdev.inode_ops.*

Counters for the number of times that the operating system looked for an XFS inode in the inode cache with different outcomes. These count cache hits, cache misses, and so on.

xfs.perdev.log.*

xfs.perdev.log_tail.*

Counters for the number of log buffer writes over XFS file systems, including the number of blocks written to disk, as well as metrics for the number of log flushes and pinnings.

xfs.perdev.xstrat.*

Counts for the number of bytes of file data flushed out by the XFS flush daemon, along with counters for the number of buffers flushed to contiguous and non-contiguous space on disk.

xfs.perdev.attr.*

Counts for the number of attribute get, set, remove and list operations over all XFS file systems.

xfs.perdev.quota.*

Metrics for quota operation over XFS file systems, these include counters for number of quota reclaims, quota cache misses, cache hits and quota data reclaims.

xfs.perdev.buffer.*

Range of metrics regarding XFS buffer objects. Counters include the number of requested buffer calls, successful buffer locks, waited buffer locks, miss_locks, miss_retries and buffer hits when looking up pages.

xfs.perdev.btree.*

Metrics regarding the operations of the XFS btree.

Chapter 10. Setting up graphical representation of PCP metrics

Using a combination of pcp, grafana, pcp redis, pcp bpftrace, and pcp vector provides graphical representation of the live data or data collected by Performance Co-Pilot (PCP).

10.1. Setting up PCP with pcp-zeroconf

This procedure describes how to set up PCP on a system with the pcp-zeroconf package. Once the pcp-zeroconf package is installed, the system records the default set of metrics into archived files.

Procedure

  • Install the pcp-zeroconf package:

    # yum install pcp-zeroconf

Verification

  • Ensure that the pmlogger service is active, and starts archiving the metrics:

    # pcp | grep pmlogger
     pmlogger: primary logger: /var/log/pcp/pmlogger/localhost.localdomain/20200401.00.12

Additional resources

10.2. Setting up a grafana-server

Grafana generates graphs that are accessible from a browser. The grafana-server is a back-end server for the Grafana dashboard. It listens, by default, on all interfaces, and provides web services accessed through the web browser. The grafana-pcp plugin interacts with the pmproxy protocol in the backend.

This procedure describes how to set up a grafana-server.

Prerequisites

Procedure

  1. Install the following packages:

    # yum install grafana grafana-pcp
  2. Restart and enable the following service:

    # systemctl restart grafana-server
    # systemctl enable grafana-server
  3. Open the server’s firewall for network traffic to the Grafana service.

    # firewall-cmd --permanent --add-service=grafana
    success
    
    # firewall-cmd --reload
    success

Verification

  • Ensure that the grafana-server is listening and responding to requests:

    # ss -ntlp | grep 3000
    LISTEN  0  128  *:3000  *:*  users:(("grafana-server",pid=19522,fd=7))
  • Ensure that the grafana-pcp plugin is installed:

    # grafana-cli plugins ls | grep performancecopilot-pcp-app
    
    performancecopilot-pcp-app @ 3.1.0

Additional resources

  • pmproxy(1) and grafana-server man pages on your system

10.3. Accessing the Grafana web UI

This procedure describes how to access the Grafana web interface.

Using the Grafana web interface, you can:

  • add PCP Redis, PCP bpftrace, and PCP Vector data sources
  • create dashboards
  • view an overview of any useful metrics
  • create alerts in PCP Redis

Prerequisites

  1. PCP is configured. For more information, see Setting up PCP with pcp-zeroconf.
  2. The grafana-server is configured. For more information, see Setting up a grafana-server.

Procedure

  1. On the client system, open a browser and access the grafana-server on port 3000, using the http://192.0.2.0:3000 link.

    Replace 192.0.2.0 with your machine IP.

  2. For the first login, enter admin in both the Email or username and Password fields.

    Grafana prompts you to set a New password to create a secure account. If you want to set it later, click Skip.

  3. From the menu, hover over the Configuration (gear) icon and then click Plugins.
  4. In the Plugins tab, type performance co-pilot in the Search by name or type text box and then click Performance Co-Pilot (PCP) plugin.
  5. In the Plugins / Performance Co-Pilot pane, click Enable.
  6. Click the Grafana icon. The Grafana Home page is displayed.

    Figure 10.1. Home Dashboard

    Note

    The top corner of the screen has a similar gear icon, but it controls the general Dashboard settings.

  7. In the Grafana Home page, click Add your first data source to add PCP Redis, PCP bpftrace, and PCP Vector data sources. For more information about adding a data source, see the corresponding sections in this chapter.

  8. Optional: From the menu, hover over the admin profile icon to change the Preferences, including Edit Profile and Change Password, or to Sign out.

Additional resources

  • grafana-cli and grafana-server man pages on your system

10.4. Configuring PCP Redis

Use the PCP Redis data source to:

  • View data archives
  • Query time series using pmseries language
  • Analyze data across multiple hosts

Prerequisites

  1. PCP is configured. For more information, see Setting up PCP with pcp-zeroconf.
  2. The grafana-server is configured. For more information, see Setting up a grafana-server.
  3. A mail transfer agent, for example, sendmail or postfix, is installed and configured.

Procedure

  1. Install the redis package:

    # yum module install redis:6
    Note

    From Red Hat Enterprise Linux 8.4, Redis 6 is supported but the yum update command does not update Redis 5 to Redis 6. To update from Redis 5 to Redis 6, run:

    # yum module switch-to redis:6

  2. Start and enable the following services:

    # systemctl start pmproxy redis
    # systemctl enable pmproxy redis
  3. Restart the grafana-server:

    # systemctl restart grafana-server

Verification

  • Ensure that pmproxy and Redis are working:

    # pmseries disk.dev.read
    2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df

    This command does not return any data if the redis package is not installed.

Additional resources

  • pmseries(1) man page on your system

10.5. Creating panels and alert in PCP Redis data source

After adding the PCP Redis data source, you can view the dashboard with an overview of useful metrics, add a query to visualize the load graph, and create alerts that help you to view the system issues after they occur.

Prerequisites

  1. The PCP Redis is configured. For more information, see Configuring PCP Redis.
  2. The grafana-server is accessible. For more information, see Accessing the Grafana web UI.

Procedure

  1. Log into the Grafana web UI.
  2. In the Grafana Home page, click Add your first data source.
  3. In the Add data source pane, type redis in the Filter by name or type text box and then click PCP Redis.
  4. In the Data Sources / PCP Redis pane, perform the following:

    1. Add http://localhost:44322 in the URL field and then click Save & Test.
    2. Click the Dashboards tab, click Import, and then click PCP Redis: Host Overview to see a dashboard with an overview of useful metrics.

      Figure 10.2. PCP Redis: Host Overview

  5. Add a new panel:

    1. From the menu, hover over the Create (plus sign) icon, click Dashboard, and then click the Add new panel icon to add a panel.
    2. In the Query tab, select PCP Redis from the query list instead of the default option, and in the A text field, enter a metric, for example, kernel.all.load, to visualize the kernel load graph.
    3. Optional: Add Panel title and Description, and update other options from the Settings.
    4. Click Save to apply changes and save the dashboard. Add Dashboard name.
    5. Click Apply to apply changes and go back to the dashboard.

      Figure 10.3. PCP Redis query panel

  6. Create an alert rule:

    1. In the PCP Redis query panel, click Alert and then click Create Alert.
    2. Edit the Name, Evaluate query, and For fields from the Rule, and specify the Conditions for your alert.
    3. Click Save to apply changes and save the dashboard. Click Apply to apply changes and go back to the dashboard.

      Figure 10.4. Creating alerts in the PCP Redis panel

    4. Optional: In the same panel, scroll down and click Delete icon to delete the created rule.
    5. Optional: From the menu, click the Alerting (bell) icon to view the created alert rules with different alert statuses, to edit the alert rule, or to pause the existing rule from the Alert Rules tab.

      To add a notification channel for the created alert rule to receive an alert notification from Grafana, see Adding notification channels for alerts.

10.6. Adding notification channels for alerts

By adding notification channels, you can receive an alert notification from Grafana whenever the alert rule conditions are met and the system needs further monitoring.

You can receive these alerts after selecting any one type from the supported list of notifiers, which includes DingDing, Discord, Email, Google Hangouts Chat, HipChat, Kafka REST Proxy, LINE, Microsoft Teams, OpsGenie, PagerDuty, Prometheus Alertmanager, Pushover, Sensu, Slack, Telegram, Threema Gateway, VictorOps, and webhook.

Prerequisites

  1. The grafana-server is accessible. For more information, see Accessing the Grafana web UI.
  2. An alert rule is created. For more information, see Creating panels and alert in PCP Redis data source.
  3. Configure SMTP and add a valid sender’s email address in the grafana/grafana.ini file:

    # vi /etc/grafana/grafana.ini
    
    [smtp]
    enabled = true
    from_address = abc@gmail.com

    Replace abc@gmail.com by a valid email address.

  4. Restart the grafana-server:

    # systemctl restart grafana-server.service

Procedure

  1. From the menu, hover over the Alerting (bell) icon, click Notification channels, and then click Add channel.
  2. In the Add notification channel details pane, perform the following:

    1. Enter your name in the Name text box
    2. Select the communication Type, for example, Email and enter the email address. You can add multiple email addresses using the ; separator.
    3. Optional: Configure Optional Email settings and Notification settings.
  3. Click Save.
  4. Select a notification channel in the alert rule:

    1. From the menu, hover over the    alerting bell icon    Alerting icon and then click Alert rules.
    2. From the Alert Rules tab, click the created alert rule.
    3. On the Notifications tab, select your notification channel name from the Send to option, and then add an alert message.
    4. Click Apply.

10.7. Setting up authentication between PCP components

You can set up authentication using the scram-sha-256 authentication mechanism, which is supported by PCP through the Simple Authentication and Security Layer (SASL) framework.

Note

From Red Hat Enterprise Linux 8.3, PCP supports the scram-sha-256 authentication mechanism.

Procedure

  1. Install the sasl framework for the scram-sha-256 authentication mechanism:

    # yum install cyrus-sasl-scram cyrus-sasl-lib
  2. Specify the supported authentication mechanism and the user database path in the pmcd.conf file:

    # vi /etc/sasl2/pmcd.conf
    
    mech_list: scram-sha-256
    
    sasldb_path: /etc/pcp/passwd.db
  3. Create a new user:

    # useradd -r metrics

    Replace metrics by your user name.

  4. Add the created user in the user database:

    # saslpasswd2 -a pmcd metrics
    
    Password:
    Again (for verification):

    To add the created user, you are required to enter the metrics account password.

  5. Set the permissions of the user database:

    # chown root:pcp /etc/pcp/passwd.db
    # chmod 640 /etc/pcp/passwd.db
  6. Restart the pmcd service:

    # systemctl restart pmcd

Verification

  • Verify the sasl configuration:

    # pminfo -f -h "pcp://127.0.0.1?username=metrics" disk.dev.read
    Password:
    disk.dev.read
    inst [0 or "sda"] value 19540

Additional resources

10.8. Installing PCP bpftrace

Install the PCP bpftrace agent to introspect a system and to gather metrics from the kernel and user-space tracepoints.

The bpftrace agent uses bpftrace scripts to gather the metrics. The bpftrace scripts use the enhanced Berkeley Packet Filter (eBPF).

This procedure describes how to install PCP bpftrace.

Prerequisites

  1. PCP is configured. For more information, see Setting up PCP with pcp-zeroconf.
  2. The grafana-server is configured. For more information, see Setting up a grafana-server.
  3. The scram-sha-256 authentication mechanism is configured. For more information, see Setting up authentication between PCP components.

Procedure

  1. Install the pcp-pmda-bpftrace package:

    # yum install pcp-pmda-bpftrace
  2. Edit the bpftrace.conf file and add the user that you created in Setting up authentication between PCP components:

    # vi /var/lib/pcp/pmdas/bpftrace/bpftrace.conf
    
    [dynamic_scripts]
    enabled = true
    auth_enabled = true
    allowed_users = root,metrics

    Replace metrics by your user name.

  3. Install bpftrace PMDA:

    # cd /var/lib/pcp/pmdas/bpftrace/
    # ./Install
    Updating the Performance Metrics Name Space (PMNS) ...
    Terminate PMDA if already installed ...
    Updating the PMCD control file, and notifying PMCD ...
    Check bpftrace metrics have appeared ... 7 metrics and 6 values

    The bpftrace PMDA is now installed and can only be used after authenticating your user. For more information, see Viewing the PCP bpftrace System Analysis dashboard.

Additional resources

  • pmdabpftrace(1) and bpftrace man pages on your system

10.9. Viewing the PCP bpftrace System Analysis dashboard

Using the PCP bpftrace data source, you can access live data from sources that are not available as normal data from pmlogger or archives.

In the PCP bpftrace data source, you can view the dashboard with an overview of useful metrics.

Prerequisites

  1. The PCP bpftrace is installed. For more information, see Installing PCP bpftrace.
  2. The grafana-server is accessible. For more information, see Accessing the Grafana web UI.

Procedure

  1. Log into the Grafana web UI.
  2. In the Grafana Home page, click Add your first data source.
  3. In the Add data source pane, type bpftrace in the Filter by name or type text box and then click PCP bpftrace.
  4. In the Data Sources / PCP bpftrace pane, perform the following:

    1. Add http://localhost:44322 in the URL field.
    2. Toggle the Basic Auth option and add the created user credentials in the User and Password field.
    3. Click Save & Test.

      Figure 10.5. Adding PCP bpftrace in the data source

    4. Click the Dashboards tab, click Import, and then click PCP bpftrace: System Analysis to see a dashboard with an overview of useful metrics.

      Figure 10.6. PCP bpftrace: System Analysis


10.10. Installing PCP Vector

This procedure describes how to install PCP Vector.

Prerequisites

  1. PCP is configured. For more information, see Setting up PCP with pcp-zeroconf.
  2. The grafana-server is configured. For more information, see Setting up a grafana-server.

Procedure

  1. Install the pcp-pmda-bcc package:

    # yum install pcp-pmda-bcc
  2. Install the bcc PMDA:

    # cd /var/lib/pcp/pmdas/bcc
    # ./Install
    [Wed Apr  1 00:27:48] pmdabcc(22341) Info: Initializing, currently in 'notready' state.
    [Wed Apr  1 00:27:48] pmdabcc(22341) Info: Enabled modules:
    [Wed Apr  1 00:27:48] pmdabcc(22341) Info: ['biolatency', 'sysfork',
    [...]
    Updating the Performance Metrics Name Space (PMNS) ...
    Terminate PMDA if already installed ...
    Updating the PMCD control file, and notifying PMCD ...
    Check bcc metrics have appeared ... 1 warnings, 1 metrics and 0 values

Additional resources

  • pmdabcc(1) man page on your system

10.11. Viewing the PCP Vector Checklist

The PCP Vector data source displays live metrics and uses PCP metrics. It analyzes data for individual hosts.

After adding the PCP Vector data source, you can view the dashboard with an overview of useful metrics and view the related troubleshooting or reference links in the checklist.

Prerequisites

  1. The PCP Vector is installed. For more information, see Installing PCP Vector.
  2. The grafana-server is accessible. For more information, see Accessing the Grafana web UI.

Procedure

  1. Log into the Grafana web UI.
  2. In the Grafana Home page, click Add your first data source.
  3. In the Add data source pane, type vector in the Filter by name or type text box and then click PCP Vector.
  4. In the Data Sources / PCP Vector pane, perform the following:

    1. Add http://localhost:44322 in the URL field and then click Save & Test.
    2. Click the Dashboards tab, click Import, and then click PCP Vector: Host Overview to see a dashboard with an overview of useful metrics.

      Figure 10.7. PCP Vector: Host Overview

  5. From the menu, hover over the Performance Co-Pilot plugin and then click PCP Vector Checklist.

    In the PCP checklist, click the help or warning icon to view the related troubleshooting or reference links.

    Figure 10.8. Performance Co-Pilot / PCP Vector Checklist


10.12. Using heatmaps in Grafana

You can use heatmaps in Grafana to view histograms of your data over time, identify trends and patterns in your data, and see how they change over time. Each column within a heatmap represents a single histogram with different colored cells representing the different densities of observation of a given value within that histogram.

Important

This specific workflow is for the heatmaps in Grafana version 9.2.10 and later on RHEL 8.

Prerequisites

Procedure

  1. Hover the cursor over the Dashboards tab and click + New dashboard.
  2. In the Add panel menu, click Add a new panel.
  3. In the Query tab:

    1. Select PCP Redis from the query list instead of the selected default option.
    2. In the text field of A, enter a metric, for example, kernel.all.load to visualize the kernel load graph.
  4. Click the visualization dropdown menu, which is set to Time series by default, and then click Heatmap.
  5. Optional: In the Panel Options dropdown menu, add a Panel Title and Description.
  6. In the Heatmap dropdown menu, under the Calculate from data setting, click Yes.


  7. Optional: In the Colors dropdown menu, change the Scheme from the default Orange and select the number of steps (color shades).
  8. Optional: In the Tooltip dropdown menu, under the Show histogram (Y Axis) setting, click the toggle to display a cell’s position within its specific histogram when hovering your cursor over a cell in the heatmap. For example:


10.13. Troubleshooting Grafana issues

It is sometimes necessary to troubleshoot Grafana issues, for example, when Grafana does not display any data or the dashboard is black.

Procedure

  • Verify that the pmlogger service is up and running by executing the following command:

    $ systemctl status pmlogger
  • Verify whether files were created or modified on the disk by executing the following command:

    $ ls /var/log/pcp/pmlogger/$(hostname)/ -rlt
    total 4024
    -rw-r--r--. 1 pcp pcp   45996 Oct 13  2019 20191013.20.07.meta.xz
    -rw-r--r--. 1 pcp pcp     412 Oct 13  2019 20191013.20.07.index
    -rw-r--r--. 1 pcp pcp   32188 Oct 13  2019 20191013.20.07.0.xz
    -rw-r--r--. 1 pcp pcp   44756 Oct 13  2019 20191013.20.30-00.meta.xz
    [..]
  • Verify that the pmproxy service is running by executing the following command:

    $ systemctl status pmproxy
  • Verify that pmproxy is running, time series support is enabled, and a connection to Redis is established by viewing the /var/log/pcp/pmproxy/pmproxy.log file and ensuring that it contains the following text:

    pmproxy(1716) Info: Redis slots, command keys, schema version setup

    Here, 1716 is the PID of pmproxy, which will be different for every invocation of pmproxy.

  • Verify if the Redis database contains any keys by executing the following command:

    $ redis-cli dbsize
    (integer) 34837
  • Verify if any PCP metrics are in the Redis database and pmproxy is able to access them by executing the following commands:

    $ pmseries disk.dev.read
    2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df
    
    $ pmseries "disk.dev.read[count:10]"
    2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df
        [Mon Jul 26 12:21:10.085468000 2021] 117971 70e83e88d4e1857a3a31605c6d1333755f2dd17c
        [Mon Jul 26 12:21:00.087401000 2021] 117758 70e83e88d4e1857a3a31605c6d1333755f2dd17c
        [Mon Jul 26 12:20:50.085738000 2021] 116688 70e83e88d4e1857a3a31605c6d1333755f2dd17c
    [...]
    $ redis-cli --scan --pattern "*$(pmseries 'disk.dev.read')"
    
    pcp:metric.name:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df
    pcp:values:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df
    pcp:desc:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df
    pcp:labelvalue:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df
    pcp:instances:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df
    pcp:labelflags:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df
  • Verify if there are any errors in the Grafana logs by executing the following command:

    $ journalctl -e -u grafana-server
    -- Logs begin at Mon 2021-07-26 11:55:10 IST, end at Mon 2021-07-26 12:30:15 IST. --
    Jul 26 11:55:17 localhost.localdomain systemd[1]: Starting Grafana instance...
    Jul 26 11:55:17 localhost.localdomain grafana-server[1171]: t=2021-07-26T11:55:17+0530 lvl=info msg="Starting Grafana" logger=server version=7.3.6 c>
    Jul 26 11:55:17 localhost.localdomain grafana-server[1171]: t=2021-07-26T11:55:17+0530 lvl=info msg="Config loaded from" logger=settings file=/usr/s>
    Jul 26 11:55:17 localhost.localdomain grafana-server[1171]: t=2021-07-26T11:55:17+0530 lvl=info msg="Config loaded from" logger=settings file=/etc/g>
    [...]

Chapter 11. Optimizing the system performance using the web console

Learn how to set a performance profile in the RHEL web console to optimize the performance of the system for a selected task.

11.1. Performance tuning options in the web console

Red Hat Enterprise Linux 8 provides several performance profiles that optimize the system for the following tasks:

  • Systems using the desktop
  • Throughput performance
  • Latency performance
  • Network performance
  • Low power consumption
  • Virtual machines

The TuneD service optimizes system options to match the selected profile.

In the web console, you can set which performance profile your system uses.
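
You can also inspect or switch the active profile from the command line with the tuned-adm utility; a brief sketch, assuming the tuned package is installed and using throughput-performance only as an example profile name:

# tuned-adm active
# tuned-adm list
# tuned-adm profile throughput-performance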

Additional resources

11.2. Setting a performance profile in the web console

Depending on the task you want to perform, you can use the web console to optimize system performance by setting a suitable performance profile.

Prerequisites

Procedure

  1. Log in to the RHEL 8 web console.

    For details, see Logging in to the web console.

  2. Click Overview.
  3. In the Configuration section, click the current performance profile.


  4. In the Change Performance Profile dialog box, set the required profile.


  5. Click Change Profile.

Verification

  • The Overview tab now shows the selected performance profile in the Configuration section.

11.3. Monitoring performance on the local system by using the web console

The Red Hat Enterprise Linux web console uses the Utilization, Saturation, and Errors (USE) method for troubleshooting. The new performance metrics page has a historical view of your data organized chronologically, with the newest data at the top.

In the Metrics and history page, you can view events, errors, and graphical representation for resource utilization and saturation.

Prerequisites

  • You have installed the RHEL 8 web console.

    For instructions, see Installing and enabling the web console.

  • The cockpit-pcp package, which enables collecting the performance metrics, is installed.
  • The Performance Co-Pilot (PCP) service is enabled:

    # systemctl enable --now pmlogger.service pmproxy.service

Procedure

  1. Log in to the RHEL 8 web console.

    For details, see Logging in to the web console.

  2. Click Overview.
  3. In the Usage section, click View metrics and history.

    Image displaying the Overview pane of the cockpit interface.

    The Metrics and history section opens:

    • The current system configuration and usage: Image displaying the current system configuration and usage
    • The performance metrics in a graphical form over a user-specified time interval: Image displaying the performance metrics of the CPU

11.4. Monitoring performance on several systems by using the web console and Grafana

Grafana enables you to collect data from several systems at once and review a graphical representation of their collected Performance Co-Pilot (PCP) metrics. You can set up performance metrics monitoring and export for several systems in the web console interface.

Prerequisites

  • You have installed the RHEL 8 web console.

    For instructions, see Installing and enabling the web console.

  • You have installed the cockpit-pcp package.
  • You have enabled the PCP service:

    # systemctl enable --now pmlogger.service pmproxy.service
  • You have set up the Grafana dashboard. For more information, see Setting up a grafana-server.
  • You have installed the redis package.

    Alternatively, you can install the package from the web console interface later in the procedure.

Procedure

  1. Log in to the RHEL 8 web console.

    For details, see Logging in to the web console.

  2. In the Overview page, click View metrics and history in the Usage table.
  3. Click the Metrics settings button.
  4. Move the Export to network slider to the active position.

    Metrics settings

    If you do not have the redis package installed, the web console prompts you to install it.

  5. To open the pmproxy service in the firewall, select a zone from the drop-down list and click the Add pmproxy button.
  6. Click Save.

Verification

  1. Click Networking.
  2. In the Firewall table, click the Edit rules and zones button.
  3. Search for pmproxy in your selected zone.
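
Alternatively, you can check from the command line whether the pmproxy service is allowed in the firewall. This is a quick check rather than part of the procedure above, and public is a placeholder for your selected zone:

# firewall-cmd --zone=public --list-services | grep pmproxy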
Important

Repeat this procedure on all the systems you want to watch.

Chapter 12. Setting the disk scheduler

The disk scheduler is responsible for ordering the I/O requests submitted to a storage device.

You can configure the scheduler in several different ways:

Note

In Red Hat Enterprise Linux 8, block devices support only multi-queue scheduling. This enables the block layer performance to scale well with fast solid-state drives (SSDs) and multi-core systems.

The traditional, single-queue schedulers, which were available in Red Hat Enterprise Linux 7 and earlier versions, have been removed.

12.1. Available disk schedulers

The following multi-queue disk schedulers are supported in Red Hat Enterprise Linux 8:

none
Implements a first-in first-out (FIFO) scheduling algorithm. It merges requests at the generic block layer through a simple last-hit cache.
mq-deadline

Attempts to provide a guaranteed latency for requests from the point at which requests reach the scheduler.

The mq-deadline scheduler sorts queued I/O requests into a read or write batch and then schedules them for execution in increasing logical block addressing (LBA) order. By default, read batches take precedence over write batches, because applications are more likely to block on read I/O operations. After mq-deadline processes a batch, it checks how long write operations have been starved of processor time and schedules the next read or write batch as appropriate.

This scheduler is suitable for most use cases, but particularly those in which the write operations are mostly asynchronous.

bfq

Targets desktop systems and interactive tasks.

The bfq scheduler ensures that a single application is never using all of the bandwidth. In effect, the storage device is always as responsive as if it was idle. In its default configuration, bfq focuses on delivering the lowest latency rather than achieving the maximum throughput.

bfq is based on cfq code. It does not grant the disk to each process for a fixed time slice but assigns a budget measured in number of sectors to the process.

This scheduler is suitable for copying large files, because the system does not become unresponsive during the copy.

kyber

The scheduler tunes itself to achieve a latency goal by calculating the latencies of every I/O request submitted to the block I/O layer. You can configure the target latencies for read requests (in the case of cache misses) and for synchronous write requests.

This scheduler is suitable for fast devices, for example NVMe, SSD, or other low latency devices.
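
If kyber is the active scheduler on a device, its target latencies are exposed as sysfs attributes that you can inspect or adjust. In the following sketch, the device name sdc and the 1 ms (1000000 ns) value are only illustrative; the attributes are specified in nanoseconds:

# cat /sys/block/sdc/queue/iosched/read_lat_nsec
# echo 1000000 > /sys/block/sdc/queue/iosched/read_lat_nsec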

12.2. Different disk schedulers for different use cases

Depending on the task that your system performs, the following disk schedulers are recommended as a baseline prior to any analysis and tuning tasks:

Table 12.1. Disk schedulers for different use cases
Use case                                                      | Disk scheduler

Traditional HDD with a SCSI interface                         | Use mq-deadline or bfq.

High-performance SSD or a CPU-bound system with fast storage  | Use none, especially when running enterprise applications. Alternatively, use kyber.

Desktop or interactive tasks                                  | Use bfq.

Virtual guest                                                 | Use mq-deadline. With a host bus adapter (HBA) driver that is multi-queue capable, use none.

12.3. The default disk scheduler

Block devices use the default disk scheduler unless you specify another scheduler.

Note

For Non-Volatile Memory Express (NVMe) block devices specifically, the default scheduler is none, and Red Hat recommends not changing this.

The kernel selects a default disk scheduler based on the type of device. The automatically selected scheduler is typically the optimal setting. If you require a different scheduler, Red Hat recommends using udev rules or the TuneD application to configure it. Match the selected devices and switch the scheduler only for those devices.

12.4. Determining the active disk scheduler

This procedure determines which disk scheduler is currently active on a given block device.

Procedure

  • Read the content of the /sys/block/device/queue/scheduler file:

    # cat /sys/block/device/queue/scheduler
    
    [mq-deadline] kyber bfq none

    In the file name, replace device with the block device name, for example sdc.

    The active scheduler is listed in square brackets ([ ]).
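
To list the active scheduler for every block device on the system at once, you can read all the scheduler files in one command; the device names and schedulers in the output depend on your system:

# grep "" /sys/block/*/queue/scheduler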

12.5. Setting the disk scheduler using TuneD

This procedure creates and enables a TuneD profile that sets a given disk scheduler for selected block devices. The setting persists across system reboots.

In the following commands and configuration, replace:

  • device with the name of the block device, for example sdf
  • selected-scheduler with the disk scheduler that you want to set for the device, for example bfq

Prerequisites

Procedure

  1. Optional: Select an existing TuneD profile on which your profile will be based. For a list of available profiles, see TuneD profiles distributed with RHEL.

    To see which profile is currently active, use:

    $ tuned-adm active
  2. Create a new directory to hold your TuneD profile:

    # mkdir /etc/tuned/my-profile
  3. Find the system unique identifier of the selected block device:

    $ udevadm info --query=property --name=/dev/device | grep -E '(WWN|SERIAL)'
    
    ID_WWN=0x5002538d00000000_
    ID_SERIAL=Generic-_SD_MMC_20120501030900000-0:0
    ID_SERIAL_SHORT=20120501030900000
    Note

    The command in this example returns all values identified as a World Wide Name (WWN) or serial number associated with the specified block device. Although using a WWN is preferred, the WWN is not always available for a given device, and any of the values returned by the example command are acceptable to use as the device system unique ID.

  4. Create the /etc/tuned/my-profile/tuned.conf configuration file. In the file, set the following options:

    1. Optional: Include an existing profile:

      [main]
      include=existing-profile
    2. Set the selected disk scheduler for the device that matches the WWN identifier:

      [disk]
      devices_udev_regex=IDNAME=device system unique id
      elevator=selected-scheduler

      Here:

      • Replace IDNAME with the name of the identifier being used (for example, ID_WWN).
      • Replace device system unique id with the value of the chosen identifier (for example, 0x5002538d00000000).

        To match multiple devices in the devices_udev_regex option, enclose the identifiers in parentheses and separate them with vertical bars:

        devices_udev_regex=(ID_WWN=0x5002538d00000000)|(ID_WWN=0x1234567800000000)
  5. Enable your profile:

    # tuned-adm profile my-profile
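
Putting the previous steps together, the complete /etc/tuned/my-profile/tuned.conf file for this example might look as follows; the included throughput-performance profile, the WWN, and the bfq scheduler are only illustrative choices:

[main]
include=throughput-performance

[disk]
devices_udev_regex=ID_WWN=0x5002538d00000000
elevator=bfq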

Verification

  1. Verify that the TuneD profile is active and applied:

    $ tuned-adm active
    
    Current active profile: my-profile
    $ tuned-adm verify
    
    Verification succeeded, current system settings match the preset profile.
    See TuneD log file ('/var/log/tuned/tuned.log') for details.
  2. Read the contents of the /sys/block/device/queue/scheduler file:

    # cat /sys/block/device/queue/scheduler
    
    [mq-deadline] kyber bfq none

    In the file name, replace device with the block device name, for example sdc.

    The active scheduler is listed in square brackets ([]).

Additional resources

12.6. Setting the disk scheduler using udev rules

This procedure sets a given disk scheduler for specific block devices using udev rules. The setting persists across system reboots.

In the following commands and configuration, replace:

  • device with the name of the block device, for example sdf
  • selected-scheduler with the disk scheduler that you want to set for the device, for example bfq

Procedure

  1. Find the system unique identifier of the block device:

    $ udevadm info --name=/dev/device | grep -E '(WWN|SERIAL)'
    E: ID_WWN=0x5002538d00000000
    E: ID_SERIAL=Generic-_SD_MMC_20120501030900000-0:0
    E: ID_SERIAL_SHORT=20120501030900000
    Note

    The command in this example returns all values identified as a World Wide Name (WWN) or serial number associated with the specified block device. Although using a WWN is preferred, the WWN is not always available for a given device, and any of the values returned by the example command are acceptable to use as the device system unique ID.

  2. Configure the udev rule. Create the /etc/udev/rules.d/99-scheduler.rules file with the following content:

    ACTION=="add|change", SUBSYSTEM=="block", ENV{IDNAME}=="device system unique id", ATTR{queue/scheduler}="selected-scheduler"

    Here:

    • Replace IDNAME with the name of the identifier being used (for example, ID_WWN).
    • Replace device system unique id with the value of the chosen identifier (for example, 0x5002538d00000000).
  3. Reload udev rules:

    # udevadm control --reload-rules
  4. Apply the scheduler configuration:

    # udevadm trigger --type=devices --action=change
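
With the example identifier from step 1, the complete /etc/udev/rules.d/99-scheduler.rules file might contain the following single rule; the choice of bfq is only illustrative:

ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_WWN}=="0x5002538d00000000", ATTR{queue/scheduler}="bfq"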

Verification

  • Verify the active scheduler:

    # cat /sys/block/device/queue/scheduler

12.7. Temporarily setting a scheduler for a specific disk

This procedure sets a given disk scheduler for specific block devices. The setting does not persist across system reboots.

Procedure

  • Write the name of the selected scheduler to the /sys/block/device/queue/scheduler file:

    # echo selected-scheduler > /sys/block/device/queue/scheduler

    In the file name, replace device with the block device name, for example sdc.

Verification

  • Verify that the scheduler is active on the device:

    # cat /sys/block/device/queue/scheduler

Chapter 13. Tuning the performance of a Samba server

Learn what settings can improve the performance of Samba in certain situations, and which settings can have a negative performance impact.

Parts of this section were adapted from the Performance Tuning documentation published in the Samba Wiki. License: CC BY 4.0. Authors and contributors: See the history tab on the Wiki page.

Prerequisites

13.1. Setting the SMB protocol version

Each new SMB version adds features and improves the performance of the protocol. Recent Windows and Windows Server operating systems always support the latest protocol version. If Samba also uses the latest protocol version, Windows clients connecting to Samba benefit from the performance improvements. In Samba, the default value of the server max protocol parameter is set to the latest supported stable SMB protocol version.

Note

To always have the latest stable SMB protocol version enabled, do not set the server max protocol parameter. If you set the parameter manually, you will need to modify the setting with each new version of the SMB protocol, to have the latest protocol version enabled.

The following procedure explains how to use the default value in the server max protocol parameter.

Procedure

  1. Remove the server max protocol parameter from the [global] section in the /etc/samba/smb.conf file.
  2. Reload the Samba configuration:

    # smbcontrol all reload-config
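
To confirm which protocol version Samba now uses by default, you can display the effective configuration, including default values; the version reported depends on your Samba release:

# testparm -s -v | grep -i "server max protocol"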

13.2. Tuning shares with directories that contain a large number of files

Linux supports case-sensitive file names. For this reason, Samba needs to scan directories for uppercase and lowercase file names when searching or accessing a file. You can configure a share to create new files only in lowercase or uppercase, which improves the performance.

Prerequisites

  • Samba is configured as a file server

Procedure

  1. Rename all files on the share to lowercase.

    Note

    After you apply the settings in this procedure, files with names that are not in lowercase are no longer displayed.

  2. Set the following parameters in the share’s section:

    case sensitive = true
    default case = lower
    preserve case = no
    short preserve case = no

    For details about the parameters, see their descriptions in the smb.conf(5) man page on your system.

  3. Verify the /etc/samba/smb.conf file:

    # testparm
  4. Reload the Samba configuration:

    # smbcontrol all reload-config

After you apply these settings, the names of all newly created files on this share use lowercase. Because of these settings, Samba no longer needs to scan the directory for uppercase and lowercase file names, which improves the performance.
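
For illustration, a share section in /etc/samba/smb.conf with these settings applied might look like the following; the share name and path are placeholders:

[data]
    path = /srv/samba/data
    case sensitive = true
    default case = lower
    preserve case = no
    short preserve case = no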

13.3. Settings that can have a negative performance impact

By default, the kernel in Red Hat Enterprise Linux is tuned for high network performance. For example, the kernel uses an auto-tuning mechanism for buffer sizes. Setting the socket options parameter in the /etc/samba/smb.conf file overrides these kernel settings. As a result, setting this parameter decreases the Samba network performance in most cases.

To use the optimized settings from the kernel, remove the socket options parameter from the [global] section in the /etc/samba/smb.conf file.
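
If you are unsure whether the parameter is currently set, you can inspect the loaded configuration; no output means that socket options is not set:

# testparm -s | grep "socket options"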

Chapter 14. Optimizing virtual machine performance

Virtual machines (VMs) always experience some degree of performance deterioration in comparison to the host. The following sections explain the reasons for this deterioration and provide instructions on how to minimize the performance impact of virtualization in RHEL 8, so that your hardware infrastructure resources can be used as efficiently as possible.

14.1. What influences virtual machine performance

VMs are run as user-space processes on the host. The hypervisor therefore needs to convert the host’s system resources so that the VMs can use them. As a consequence, a portion of the resources is consumed by the conversion, and the VM therefore cannot achieve the same performance efficiency as the host.

The impact of virtualization on system performance

More specific reasons for VM performance loss include:

  • Virtual CPUs (vCPUs) are implemented as threads on the host, handled by the Linux scheduler.
  • VMs do not automatically inherit optimization features, such as NUMA or huge pages, from the host kernel.
  • Disk and network I/O settings of the host might have a significant performance impact on the VM.
  • Network traffic typically travels to a VM through a software-based bridge.
  • Depending on the host devices and their models, there might be significant overhead due to emulation of particular hardware.

The severity of the virtualization impact on the VM performance is influenced by a variety of factors, which include:

  • The number of concurrently running VMs.
  • The number of virtual devices used by each VM.
  • The device types used by the VMs.
Reducing VM performance loss

RHEL 8 provides a number of features you can use to reduce the negative performance effects of virtualization. Notably:

Important

Tuning VM performance can have negative effects on other virtualization functions. For example, it can make migrating the modified VM more difficult.

14.2. Optimizing virtual machine performance by using TuneD

The TuneD utility is a tuning profile delivery mechanism that adapts RHEL for certain workload characteristics, such as requirements for CPU-intensive tasks or storage-network throughput responsiveness. It provides a number of tuning profiles that are pre-configured to enhance performance and reduce power consumption in a number of specific use cases. You can edit these profiles or create new profiles to create performance solutions tailored to your environment, including virtualized environments.

To optimize RHEL 8 for virtualization, use the following profiles:

  • For RHEL 8 virtual machines, use the virtual-guest profile. It is based on the generally applicable throughput-performance profile, but also decreases the swappiness of virtual memory.
  • For RHEL 8 virtualization hosts, use the virtual-host profile. This enables more aggressive writeback of dirty memory pages, which benefits the host performance.

Prerequisites

Procedure

To enable a specific TuneD profile:

  1. List the available TuneD profiles.

    # tuned-adm list
    
    Available profiles:
    - balanced             - General non-specialized TuneD profile
    - desktop              - Optimize for the desktop use-case
    [...]
    - virtual-guest        - Optimize for running inside a virtual guest
    - virtual-host         - Optimize for running KVM guests
    Current active profile: balanced
  2. Optional: Create a new TuneD profile or edit an existing TuneD profile.

    For more information, see Customizing TuneD profiles.

  3. Activate a TuneD profile.

    # tuned-adm profile selected-profile
    • To optimize a virtualization host, use the virtual-host profile.

      # tuned-adm profile virtual-host
    • On a RHEL guest operating system, use the virtual-guest profile.

      # tuned-adm profile virtual-guest

Verification

  1. Display the active profile for TuneD.

    # tuned-adm active
    Current active profile: virtual-host
  2. Ensure that the TuneD profile settings have been applied on your system.

    # tuned-adm verify
    Verification succeeded, current system settings match the preset profile. See tuned log file ('/var/log/tuned/tuned.log') for details.

14.3. Configuring virtual machine memory

To improve the performance of a virtual machine (VM), you can assign additional host RAM to the VM. Similarly, you can decrease the amount of memory allocated to a VM so the host memory can be allocated to other VMs or tasks.

To perform these actions, you can use the web console or the command-line interface.

14.3.1. Adding and removing virtual machine memory by using the web console

To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you can use the web console to adjust the amount of memory allocated to the VM.

Prerequisites

  • You have installed the RHEL 8 web console.

    For instructions, see Installing and enabling the web console.

  • The guest OS is running the memory balloon drivers. To verify this is the case:

    1. Ensure the VM’s configuration includes the memballoon device:

      # virsh dumpxml testguest | grep memballoon
      <memballoon model='virtio'>
          </memballoon>

      If this command displays any output and the model is not set to none, the memballoon device is present.

    2. Ensure the balloon drivers are running in the guest OS.

  • The web console VM plug-in is installed on your system.

Procedure

  1. Optional: Obtain the information about the maximum memory and currently used memory for a VM. This will serve as a baseline for your changes, and also for verification.

    # virsh dominfo testguest
    Max memory:     2097152 KiB
    Used memory:    2097152 KiB
  1. Log in to the RHEL 8 web console.

    For details, see Logging in to the web console.

  2. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  3. Click edit next to the Memory line in the Overview pane.

    The Memory Adjustment dialog appears.

    Image displaying the VM memory adjustment dialog box.
  4. Configure the virtual memory for the selected VM.

    • Maximum allocation - Sets the maximum amount of host memory that the VM can use for its processes. You can specify the maximum memory when creating the VM or increase it later. You can specify memory as multiples of MiB or GiB.

      Adjusting maximum memory allocation is only possible on a shut-off VM.

    • Current allocation - Sets the actual amount of memory allocated to the VM. This value can be less than the Maximum allocation but cannot exceed it. You can adjust the value to regulate the memory available to the VM for its processes. You can specify memory as multiples of MiB or GiB.

      If you do not specify this value, the default allocation is the Maximum allocation value.

  5. Click Save.

    The memory allocation of the VM is adjusted.

14.3.2. Adding and removing virtual machine memory by using the command-line interface

To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you can use the CLI to adjust the amount of memory allocated to the VM.

Prerequisites

  • The guest OS is running the memory balloon drivers. To verify this is the case:

    1. Ensure the VM’s configuration includes the memballoon device:

      # virsh dumpxml testguest | grep memballoon
      <memballoon model='virtio'>
          </memballoon>

      If this command displays any output and the model is not set to none, the memballoon device is present.

    2. Ensure the balloon drivers are running in the guest OS.

Procedure

  1. Optional: Obtain the information about the maximum memory and currently used memory for a VM. This will serve as a baseline for your changes, and also for verification.

    # virsh dominfo testguest
    Max memory:     2097152 KiB
    Used memory:    2097152 KiB
  2. Adjust the maximum memory allocated to a VM. Increasing this value improves the performance potential of the VM, and reducing the value lowers the performance footprint the VM has on your host. Note that this change can only be performed on a shut-off VM, so adjusting a running VM requires a reboot to take effect.

    For example, to change the maximum memory that the testguest VM can use to 4096 MiB:

    # virt-xml testguest --edit --memory memory=4096,currentMemory=4096
    Domain 'testguest' defined successfully.
    Changes will take effect after the domain is fully powered off.

    To increase the maximum memory of a running VM, you can attach a memory device to the VM. This is also referred to as memory hot plug. For details, see Attaching devices to virtual machines.

    Warning

    Removing memory devices from a running VM (also referred to as memory hot unplug) is not supported, and Red Hat highly discourages it.

  3. Optional: You can also adjust the memory currently used by the VM, up to the maximum allocation. This regulates the memory load that the VM has on the host until the next reboot, without changing the maximum VM allocation.

    # virsh setmem testguest --current 2048

Verification

  1. Confirm that the memory used by the VM has been updated:

    # virsh dominfo testguest
    Max memory:     4194304 KiB
    Used memory:    2097152 KiB
  2. Optional: If you adjusted the current VM memory, you can obtain the memory balloon statistics of the VM to evaluate how effectively it regulates its memory use.

     # virsh domstats --balloon testguest
    Domain: 'testguest'
      balloon.current=365624
      balloon.maximum=4194304
      balloon.swap_in=0
      balloon.swap_out=0
      balloon.major_fault=306
      balloon.minor_fault=156117
      balloon.unused=3834448
      balloon.available=4035008
      balloon.usable=3746340
      balloon.last-update=1587971682
      balloon.disk_caches=75444
      balloon.hugetlb_pgalloc=0
      balloon.hugetlb_pgfail=0
      balloon.rss=1005456

14.3.3. Additional resources

14.4. Optimizing virtual machine I/O performance

The input and output (I/O) capabilities of a virtual machine (VM) can significantly limit the VM’s overall efficiency. To address this, you can optimize a VM’s I/O by configuring block I/O parameters.

14.4.1. Tuning block I/O in virtual machines

When multiple block devices are being used by one or more VMs, it might be important to adjust the I/O priority of specific virtual devices by modifying their I/O weights.

Increasing the I/O weight of a device increases its priority for I/O bandwidth, and therefore provides it with more host resources. Similarly, reducing a device’s weight makes it consume fewer host resources.

Note

Each device’s weight value must be within the 100 to 1000 range. Alternatively, the value can be 0, which removes that device from per-device listings.

Procedure

To display and set a VM’s block I/O parameters:

  1. Display the current <blkio> parameters for a VM:

    # virsh dumpxml VM-name

    <domain>
      [...]
      <blkiotune>
        <weight>800</weight>
        <device>
          <path>/dev/sda</path>
          <weight>1000</weight>
        </device>
        <device>
          <path>/dev/sdb</path>
          <weight>500</weight>
        </device>
      </blkiotune>
      [...]
    </domain>
  2. Edit the I/O weight of a specified device:

    # virsh blkiotune VM-name --device-weights device,I/O-weight

    For example, the following changes the weight of the /dev/sda device in the testguest1 VM to 500.

    # virsh blkiotune testguest1 --device-weights /dev/sda,500
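
To confirm the change, you can run virsh blkiotune with only the VM name, which displays the current block I/O parameters of the VM; the exact output format depends on your libvirt version:

# virsh blkiotune testguest1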

14.4.2. Disk I/O throttling in virtual machines

When several VMs are running simultaneously, they can interfere with system performance by using excessive disk I/O. Disk I/O throttling in KVM virtualization provides the ability to set a limit on disk I/O requests sent from the VMs to the host machine. This can prevent a VM from over-utilizing shared resources and impacting the performance of other VMs.

To enable disk I/O throttling, set a limit on disk I/O requests sent from each block device attached to VMs to the host machine.

Procedure

  1. Use the virsh domblklist command to list the names of all the disk devices on a specified VM.

    # virsh domblklist rollin-coal
    Target     Source
    ------------------------------------------------
    vda        /var/lib/libvirt/images/rollin-coal.qcow2
    sda        -
    sdb        /home/horridly-demanding-processes.iso
  2. Find the host block device where the virtual disk that you want to throttle is mounted.

    For example, if you want to throttle the sdb virtual disk from the previous step, the following output shows that the disk is mounted on the /dev/nvme0n1p3 partition.

    $ lsblk
    NAME                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    zram0                                         252:0    0     4G  0 disk  [SWAP]
    nvme0n1                                       259:0    0 238.5G  0 disk
    ├─nvme0n1p1                                   259:1    0   600M  0 part  /boot/efi
    ├─nvme0n1p2                                   259:2    0     1G  0 part  /boot
    └─nvme0n1p3                                   259:3    0 236.9G  0 part
      └─luks-a1123911-6f37-463c-b4eb-fxzy1ac12fea 253:0    0 236.9G  0 crypt /home
  3. Set I/O limits for the block device by using the virsh blkiotune command.

    # virsh blkiotune VM-name --parameter device,limit

    The following example throttles the sdb disk on the rollin-coal VM to 1000 read and write I/O operations per second and to 50 MB per second read and write throughput.

    # virsh blkiotune rollin-coal --device-read-iops-sec /dev/nvme0n1p3,1000 --device-write-iops-sec /dev/nvme0n1p3,1000 --device-write-bytes-sec /dev/nvme0n1p3,52428800 --device-read-bytes-sec /dev/nvme0n1p3,52428800

Additional information

  • Disk I/O throttling can be useful in various situations, for example when VMs belonging to different customers are running on the same host, or when quality of service guarantees are given for different VMs. Disk I/O throttling can also be used to simulate slower disks.
  • I/O throttling can be applied independently to each block device attached to a VM and supports limits on throughput and I/O operations.
  • Red Hat does not support using the virsh blkdeviotune command to configure I/O throttling in VMs. For more information about unsupported features when using RHEL 8 as a VM host, see Unsupported features in RHEL 8 virtualization.

14.4.3. Enabling multi-queue virtio-scsi

When using virtio-scsi storage devices in your virtual machines (VMs), the multi-queue virtio-scsi feature provides improved storage performance and scalability. It enables each virtual CPU (vCPU) to have a separate queue and interrupt to use without affecting other vCPUs.

Procedure

  • To enable multi-queue virtio-scsi support for a specific VM, add the following to the VM’s XML configuration, where N is the total number of vCPU queues:

    <controller type='scsi' index='0' model='virtio-scsi'>
       <driver queues='N' />
    </controller>
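
You can add this element with the virsh edit command. As a quick check after restarting the VM, you can confirm that the controller definition is present in the configuration; the exact XML layout might differ:

# virsh dumpxml VM-name | grep -A 1 "virtio-scsi"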

14.5. Optimizing virtual machine CPU performance

Much like physical CPUs in host machines, vCPUs are critical to virtual machine (VM) performance. As a result, optimizing vCPUs can have a significant impact on the resource efficiency of your VMs. To optimize your vCPU:

  1. Adjust how many host CPUs are assigned to the VM. You can do this using the CLI or the web console.
  2. Ensure that the vCPU model is aligned with the CPU model of the host. For example, to set the testguest1 VM to use the CPU model of the host:

    # virt-xml testguest1 --edit --cpu host-model
  3. Deactivate kernel same-page merging (KSM).
  4. If your host machine uses Non-Uniform Memory Access (NUMA), you can also configure NUMA for its VMs. This maps the host’s CPU and memory processes onto the CPU and memory processes of the VM as closely as possible. In effect, NUMA tuning provides the vCPU with a more streamlined access to the system memory allocated to the VM, which can improve the vCPU processing effectiveness.

    For details, see Configuring NUMA in a virtual machine and Sample vCPU performance tuning scenario.

14.5.1. Adding and removing virtual CPUs by using the command-line interface

To increase or optimize the CPU performance of a virtual machine (VM), you can add or remove virtual CPUs (vCPUs) assigned to the VM.

When performed on a running VM, this is also referred to as vCPU hot plugging and hot unplugging. However, note that vCPU hot unplug is not supported in RHEL 8, and Red Hat highly discourages its use.

Prerequisites

  • Optional: View the current state of the vCPUs in the targeted VM. For example, to display the number of vCPUs on the testguest VM:

    # virsh vcpucount testguest
    maximum      config         4
    maximum      live           2
    current      config         2
    current      live           1

    This output indicates that testguest is currently using 1 vCPU, and 1 more vCPU can be hot plugged to it to increase the VM’s performance. However, after reboot, the number of vCPUs testguest uses will change to 2, and it will be possible to hot plug 2 more vCPUs.

Procedure

  1. Adjust the maximum number of vCPUs that can be attached to a VM, which takes effect on the VM’s next boot.

    For example, to increase the maximum vCPU count for the testguest VM to 8:

    # virsh setvcpus testguest 8 --maximum --config

    Note that the maximum may be limited by the CPU topology, host hardware, the hypervisor, and other factors.

  2. Adjust the current number of vCPUs attached to a VM, up to the maximum configured in the previous step. For example:

    • To increase the number of vCPUs attached to the running testguest VM to 4:

      # virsh setvcpus testguest 4 --live

      This increases the VM’s performance and host load footprint of testguest until the VM’s next boot.

    • To permanently decrease the number of vCPUs attached to the testguest VM to 1:

      # virsh setvcpus testguest 1 --config

      This decreases the VM’s performance and host load footprint of testguest after the VM’s next boot. However, if needed, additional vCPUs can be hot plugged to the VM to temporarily increase its performance.

Verification

  • Confirm that the current state of vCPU for the VM reflects your changes.

    # virsh vcpucount testguest
    maximum      config         8
    maximum      live           4
    current      config         1
    current      live           4

14.5.2. Managing virtual CPUs by using the web console

By using the RHEL 8 web console, you can review and configure virtual CPUs used by virtual machines (VMs) to which the web console is connected.

Prerequisites

Procedure

  1. Log in to the RHEL 8 web console.

    For details, see Logging in to the web console.

  2. In the Virtual Machines interface, click the VM whose information you want to see.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  3. Click edit next to the number of vCPUs in the Overview pane.

    The vCPU details dialog appears.

    Image displaying the VM CPU details dialog box.
  1. Configure the virtual CPUs for the selected VM.

    • vCPU Count - The number of vCPUs currently in use.

      Note

      The vCPU count cannot be greater than the vCPU Maximum.

    • vCPU Maximum - The maximum number of virtual CPUs that can be configured for the VM. If this value is higher than the vCPU Count, additional vCPUs can be attached to the VM.
    • Sockets - The number of sockets to expose to the VM.
    • Cores per socket - The number of cores for each socket to expose to the VM.
    • Threads per core - The number of threads for each core to expose to the VM.

      Note that the Sockets, Cores per socket, and Threads per core options adjust the CPU topology of the VM. This may be beneficial for vCPU performance and may impact the functionality of certain software in the guest OS. If a different setting is not required by your deployment, keep the default values.

  2. Click Apply.

    The virtual CPUs for the VM are configured.

    Note

    Changes to virtual CPU settings only take effect after the VM is restarted.

14.5.3. Configuring NUMA in a virtual machine

The following methods can be used to configure Non-Uniform Memory Access (NUMA) settings of a virtual machine (VM) on a RHEL 8 host.

Prerequisites

  • The host is a NUMA-compatible machine. To detect whether this is the case, use the virsh nodeinfo command and see the NUMA cell(s) line:

    # virsh nodeinfo
    CPU model:           x86_64
    CPU(s):              48
    CPU frequency:       1200 MHz
    CPU socket(s):       1
    Core(s) per socket:  12
    Thread(s) per core:  2
    NUMA cell(s):        2
    Memory size:         67012964 KiB

    If the value of the line is 2 or greater, the host is NUMA-compatible.

Procedure

For ease of use, you can set up a VM’s NUMA configuration by using automated utilities and services. However, manual NUMA setup is more likely to yield a significant performance improvement.

Automatic methods

  • Set the VM’s NUMA policy to Preferred. For example, to do so for the testguest5 VM:

    # virt-xml testguest5 --edit --vcpus placement=auto
    # virt-xml testguest5 --edit --numatune mode=preferred
  • Enable automatic NUMA balancing on the host:

    # echo 1 > /proc/sys/kernel/numa_balancing
  • Start the numad service to automatically align the VM CPU with memory resources.

    # systemctl start numad

Manual methods

  1. Pin specific vCPU threads to a specific host CPU or range of CPUs. This is also possible on non-NUMA hosts and VMs, and is recommended as a safe method of vCPU performance improvement.

    For example, the following commands pin vCPU threads 0 to 5 of the testguest6 VM to host CPUs 1, 3, 5, 7, 9, and 11, respectively:

    # virsh vcpupin testguest6 0 1
    # virsh vcpupin testguest6 1 3
    # virsh vcpupin testguest6 2 5
    # virsh vcpupin testguest6 3 7
    # virsh vcpupin testguest6 4 9
    # virsh vcpupin testguest6 5 11

    Afterwards, you can verify whether this was successful:

    # virsh vcpupin testguest6
    VCPU   CPU Affinity
    ----------------------
    0      1
    1      3
    2      5
    3      7
    4      9
    5      11
  2. After pinning vCPU threads, you can also pin QEMU process threads associated with a specified VM to a specific host CPU or range of CPUs. For example, the following commands pin the QEMU process thread of testguest6 to CPUs 13 and 15, and verify this was successful:

    # virsh emulatorpin testguest6 13,15
    # virsh emulatorpin testguest6
    emulator: CPU Affinity
    ----------------------------------
           *: 13,15
  3. Finally, you can also specify which host NUMA nodes will be assigned specifically to a certain VM. This can improve the host memory usage by the VM’s vCPU. For example, the following commands set testguest6 to use host NUMA nodes 3 to 5, and verify this was successful:

    # virsh numatune testguest6 --nodeset 3-5
    # virsh numatune testguest6
Note

For best performance results, it is recommended to use all of the manual tuning methods listed above.

14.5.4. Sample vCPU performance tuning scenario

To obtain the best vCPU performance possible, Red Hat recommends using manual vcpupin, emulatorpin, and numatune settings together, as in the following scenario.

Starting scenario

  • Your host has the following hardware specifics:

    • 2 NUMA nodes
    • 3 CPU cores on each node
    • 2 threads on each core

    The output of virsh nodeinfo of such a machine would look similar to:

    # virsh nodeinfo
    CPU model:           x86_64
    CPU(s):              12
    CPU frequency:       3661 MHz
    CPU socket(s):       2
    Core(s) per socket:  3
    Thread(s) per core:  2
    NUMA cell(s):        2
    Memory size:         31248692 KiB
  • You intend to modify an existing VM to have 8 vCPUs, which means that it will not fit in a single NUMA node.

    Therefore, you should distribute 4 vCPUs on each NUMA node and make the vCPU topology resemble the host topology as closely as possible. This means that vCPUs that run as sibling threads of a given physical CPU should be pinned to host threads on the same core. For details, see the Solution below:

Solution

  1. Obtain the information about the host topology:

    # virsh capabilities

    The output should include a section that looks similar to the following:

    <topology>
      <cells num="2">
        <cell id="0">
          <memory unit="KiB">15624346</memory>
          <pages unit="KiB" size="4">3906086</pages>
          <pages unit="KiB" size="2048">0</pages>
          <pages unit="KiB" size="1048576">0</pages>
          <distances>
            <sibling id="0" value="10" />
            <sibling id="1" value="21" />
          </distances>
          <cpus num="6">
            <cpu id="0" socket_id="0" core_id="0" siblings="0,3" />
            <cpu id="1" socket_id="0" core_id="1" siblings="1,4" />
            <cpu id="2" socket_id="0" core_id="2" siblings="2,5" />
            <cpu id="3" socket_id="0" core_id="0" siblings="0,3" />
            <cpu id="4" socket_id="0" core_id="1" siblings="1,4" />
            <cpu id="5" socket_id="0" core_id="2" siblings="2,5" />
          </cpus>
        </cell>
        <cell id="1">
          <memory unit="KiB">15624346</memory>
          <pages unit="KiB" size="4">3906086</pages>
          <pages unit="KiB" size="2048">0</pages>
          <pages unit="KiB" size="1048576">0</pages>
          <distances>
            <sibling id="0" value="21" />
            <sibling id="1" value="10" />
          </distances>
          <cpus num="6">
            <cpu id="6" socket_id="1" core_id="3" siblings="6,9" />
            <cpu id="7" socket_id="1" core_id="4" siblings="7,10" />
            <cpu id="8" socket_id="1" core_id="5" siblings="8,11" />
            <cpu id="9" socket_id="1" core_id="3" siblings="6,9" />
            <cpu id="10" socket_id="1" core_id="4" siblings="7,10" />
            <cpu id="11" socket_id="1" core_id="5" siblings="8,11" />
          </cpus>
        </cell>
      </cells>
    </topology>
  2. Optional: Test the performance of the VM by using the applicable tools and utilities.
  3. Set up and mount 1 GiB huge pages on the host:

    Note

    1 GiB huge pages might not be available on some architectures and configurations, such as ARM 64 hosts.

    1. Add the following line to the host’s kernel command line:

      default_hugepagesz=1G hugepagesz=1G
    2. Create the /etc/systemd/system/hugetlb-gigantic-pages.service file with the following content:

      [Unit]
      Description=HugeTLB Gigantic Pages Reservation
      DefaultDependencies=no
      Before=dev-hugepages.mount
      ConditionPathExists=/sys/devices/system/node
      ConditionKernelCommandLine=hugepagesz=1G
      
      [Service]
      Type=oneshot
      RemainAfterExit=yes
      ExecStart=/etc/systemd/hugetlb-reserve-pages.sh
      
      [Install]
      WantedBy=sysinit.target
    3. Create the /etc/systemd/hugetlb-reserve-pages.sh file with the following content:

      #!/bin/sh
      
      nodes_path=/sys/devices/system/node/
      if [ ! -d $nodes_path ]; then
      	echo "ERROR: $nodes_path does not exist"
      	exit 1
      fi
      
      reserve_pages()
      {
      	echo $1 > $nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages
      }
      
      reserve_pages 4 node1
      reserve_pages 4 node2

      This reserves four 1GiB huge pages from node1 and four 1GiB huge pages from node2.

    4. Make the script created in the previous step executable:

      # chmod +x /etc/systemd/hugetlb-reserve-pages.sh
    5. Enable huge page reservation on boot:

      # systemctl enable hugetlb-gigantic-pages
  4. Use the virsh edit command to edit the XML configuration of the VM you wish to optimize, in this example super-VM:

    # virsh edit super-vm
  5. Adjust the XML configuration of the VM in the following way:

    1. Set the VM to use 8 static vCPUs. Use the <vcpu/> element to do this.
    2. Pin each of the vCPU threads to the corresponding host CPU threads that it mirrors in the topology. To do so, use the <vcpupin/> elements in the <cputune> section.

      Note that, as shown by the virsh capabilities utility above, host CPU threads are not ordered sequentially in their respective cores. In addition, the vCPU threads should be pinned to the highest available set of host cores on the same NUMA node. For a table illustration, see the Sample topology section below.

      The XML configuration for steps a. and b. can look similar to:

      <cputune>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='4'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='5'/>
        <vcpupin vcpu='4' cpuset='7'/>
        <vcpupin vcpu='5' cpuset='10'/>
        <vcpupin vcpu='6' cpuset='8'/>
        <vcpupin vcpu='7' cpuset='11'/>
        <emulatorpin cpuset='6,9'/>
      </cputune>
    3. Set the VM to use 1 GiB huge pages:

      <memoryBacking>
        <hugepages>
          <page size='1' unit='GiB'/>
        </hugepages>
      </memoryBacking>
    4. Configure the VM’s NUMA nodes to use memory from the corresponding NUMA nodes on the host. To do so, use the <memnode/> elements in the <numatune/> section:

      <numatune>
        <memory mode="preferred" nodeset="1"/>
        <memnode cellid="0" mode="strict" nodeset="0"/>
        <memnode cellid="1" mode="strict" nodeset="1"/>
      </numatune>
    5. Ensure the CPU mode is set to host-passthrough, and that the CPU uses cache in passthrough mode:

      <cpu mode="host-passthrough">
        <topology sockets="2" cores="2" threads="2"/>
        <cache mode="passthrough"/>
  6. Confirm that the resulting XML configuration of the VM includes a section similar to the following:

    [...]
      <memoryBacking>
        <hugepages>
          <page size='1' unit='GiB'/>
        </hugepages>
      </memoryBacking>
      <vcpu placement='static'>8</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='4'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='5'/>
        <vcpupin vcpu='4' cpuset='7'/>
        <vcpupin vcpu='5' cpuset='10'/>
        <vcpupin vcpu='6' cpuset='8'/>
        <vcpupin vcpu='7' cpuset='11'/>
        <emulatorpin cpuset='6,9'/>
      </cputune>
      <numatune>
        <memory mode="preferred" nodeset="1"/>
        <memnode cellid="0" mode="strict" nodeset="0"/>
        <memnode cellid="1" mode="strict" nodeset="1"/>
      </numatune>
      <cpu mode="host-passthrough">
        <topology sockets="2" cores="2" threads="2"/>
        <cache mode="passthrough"/>
        <numa>
          <cell id="0" cpus="0-3" memory="2" unit="GiB">
            <distances>
              <sibling id="0" value="10"/>
              <sibling id="1" value="21"/>
            </distances>
          </cell>
          <cell id="1" cpus="4-7" memory="2" unit="GiB">
            <distances>
              <sibling id="0" value="21"/>
              <sibling id="1" value="10"/>
            </distances>
          </cell>
        </numa>
      </cpu>
    </domain>
  7. Optional: Test the performance of the VM by using the applicable tools and utilities to evaluate the impact of the VM’s optimization.

Sample topology

  • The following tables illustrate the connections between the vCPUs and the host CPUs they should be pinned to:

    Table 14.1. Host topology

    CPU threads  | 0  3 | 1  4 | 2  5 | 6  9 | 7  10 | 8  11
    Cores        | 0    | 1    | 2    | 3    | 4     | 5
    Sockets      | 0                  | 1
    NUMA nodes   | 0                  | 1

    Table 14.2. VM topology

    vCPU threads | 0  1 | 2  3 | 4  5 | 6  7
    Cores        | 0    | 1    | 2    | 3
    Sockets      | 0           | 1
    NUMA nodes   | 0           | 1

    Table 14.3. Combined host and VM topology

    vCPU threads     |      | 0  1 | 2  3 |      | 4  5  | 6  7
    Host CPU threads | 0  3 | 1  4 | 2  5 | 6  9 | 7  10 | 8  11
    Cores            | 0    | 1    | 2    | 3    | 4     | 5
    Sockets          | 0                  | 1
    NUMA nodes       | 0                  | 1

    In this scenario, there are 2 NUMA nodes and 8 vCPUs. Therefore, 4 vCPU threads should be pinned to each node.

    In addition, Red Hat recommends leaving at least a single CPU thread available on each node for host system operations.

    Because in this example, each NUMA node houses 3 cores, each with 2 host CPU threads, the set for node 0 translates as follows:

    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='4'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='5'/>

14.5.5. Deactivating kernel same-page merging

Although kernel same-page merging (KSM) improves memory density, it increases CPU utilization, and might adversely affect overall performance depending on the workload. In such cases, you can improve the virtual machine (VM) performance by deactivating KSM.

Depending on your requirements, you can either deactivate KSM for a single session or persistently.

Procedure

  • To deactivate KSM for a single session, use the systemctl utility to stop ksm and ksmtuned services.

    # systemctl stop ksm
    
    # systemctl stop ksmtuned
  • To deactivate KSM persistently, use the systemctl utility to disable ksm and ksmtuned services.

    # systemctl disable ksm
    Removed /etc/systemd/system/multi-user.target.wants/ksm.service.
    # systemctl disable ksmtuned
    Removed /etc/systemd/system/multi-user.target.wants/ksmtuned.service.
Note

Memory pages shared between VMs before deactivating KSM will remain shared. To stop sharing, delete all the PageKSM pages in the system by using the following command:

# echo 2 > /sys/kernel/mm/ksm/run

After anonymous pages replace the KSM pages, the khugepaged kernel service will rebuild transparent hugepages on the VM’s physical memory.
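
As a quick check, which is not part of the procedure above, you can read the KSM control file; a value of 0 means that KSM is stopped, 1 means that it is running, and 2 means that previously merged pages are being unmerged:

# cat /sys/kernel/mm/ksm/run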

14.6. Optimizing virtual machine network performance

Due to the virtual nature of a VM’s network interface card (NIC), the VM loses a portion of its allocated host network bandwidth, which can reduce the overall workload efficiency of the VM. The following tips can minimize the negative impact of virtualization on the virtual NIC (vNIC) throughput.

Procedure

Use any of the following methods and observe whether it has a beneficial effect on your VM network performance:

Enable the vhost_net module

On the host, ensure the vhost_net kernel feature is enabled:

# lsmod | grep vhost
vhost_net              32768  1
vhost                  53248  1 vhost_net
tap                    24576  1 vhost_net
tun                    57344  6 vhost_net

If the output of this command is blank, enable the vhost_net kernel module:

# modprobe vhost_net
Set up multi-queue virtio-net

To set up the multi-queue virtio-net feature for a VM, use the virsh edit command to edit the XML configuration of the VM. In the XML, add the following to the <devices> section, and replace N with the number of vCPUs in the VM, up to 16:

<interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
      <driver name='vhost' queues='N'/>
</interface>

If the VM is running, restart it for the changes to take effect.
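
After the restart, you can verify from inside the guest that the additional queues are available; eth0 is a placeholder for the guest interface name, and the Combined value under the current hardware settings should match the number of queues you configured:

# ethtool -l eth0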

Batching network packets

In Linux VM configurations with a long transmission path, batching packets before submitting them to the kernel may improve cache utilization. To set up packet batching, use the following command on the host, and replace tap0 with the name of the network interface that the VMs use:

# ethtool -C tap0 rx-frames 64
SR-IOV
If your host NIC supports SR-IOV, use SR-IOV device assignment for your vNICs. For more information, see Managing SR-IOV devices.

Additional resources

14.7. Virtual machine performance monitoring tools

To identify what consumes the most VM resources and which aspect of VM performance needs optimization, you can use performance diagnostic tools, both general and VM-specific.

Default OS performance monitoring tools

For standard performance evaluation, you can use the utilities provided by default by your host and guest operating systems:

  • On your RHEL 8 host, as root, use the top utility or the system monitor application, and look for qemu and virt in the output. This shows how much host system resources your VMs are consuming.

    • If the monitoring tool displays that any of the qemu or virt processes consume a large portion of the host CPU or memory capacity, use the perf utility to investigate. For details, see below.
    • In addition, if a vhost_net thread process, named for example vhost_net-1234, is displayed as consuming an excessive amount of host CPU capacity, consider using virtual network optimization features, such as multi-queue virtio-net.
  • On the guest operating system, use performance utilities and applications available on the system to evaluate which processes consume the most system resources.

    • On Linux systems, you can use the top utility.
    • On Windows systems, you can use the Task Manager application.

perf kvm

You can use the perf utility to collect and analyze virtualization-specific statistics about the performance of your RHEL 8 host. To do so:

  1. On the host, install the perf package:

    # yum install perf
  2. Use one of the perf kvm stat commands to display perf statistics for your virtualization host:

    • For real-time monitoring of your hypervisor, use the perf kvm stat live command.
    • To log the perf data of your hypervisor over a period of time, activate the logging by using the perf kvm stat record command. After the command is canceled or interrupted, the data is saved in the perf.data.guest file, which can be analyzed by using the perf kvm stat report command.
  3. Analyze the perf output for types of VM-EXIT events and their distribution. For example, the PAUSE_INSTRUCTION events should be infrequent, but in the following output, the high occurrence of this event suggests that the host CPUs are not handling the running vCPUs well. In such a scenario, consider shutting down some of your active VMs, removing vCPUs from these VMs, or tuning the performance of the vCPUs.

    # perf kvm stat report
    
    Analyze events for all VMs, all VCPUs:
    
    
                 VM-EXIT    Samples  Samples%     Time%    Min Time    Max Time         Avg time
    
      EXTERNAL_INTERRUPT     365634    31.59%    18.04%      0.42us  58780.59us    204.08us ( +-   0.99% )
               MSR_WRITE     293428    25.35%     0.13%      0.59us  17873.02us      1.80us ( +-   4.63% )
        PREEMPTION_TIMER     276162    23.86%     0.23%      0.51us  21396.03us      3.38us ( +-   5.19% )
       PAUSE_INSTRUCTION     189375    16.36%    11.75%      0.72us  29655.25us    256.77us ( +-   0.70% )
                     HLT      20440     1.77%    69.83%      0.62us  79319.41us  14134.56us ( +-   0.79% )
                  VMCALL      12426     1.07%     0.03%      1.02us   5416.25us      8.77us ( +-   7.36% )
           EXCEPTION_NMI         27     0.00%     0.00%      0.69us      1.34us      0.98us ( +-   3.50% )
           EPT_MISCONFIG          5     0.00%     0.00%      5.15us     10.85us      7.88us ( +-  11.67% )
    
    Total Samples:1157497, Total events handled time:413728274.66us.

    Other event types that can signal problems in the output of perf kvm stat include:

For more information about using perf to monitor virtualization performance, see the perf-kvm man page on your system.

numastat

To see the current NUMA configuration of your system, you can use the numastat utility, which is provided by installing the numactl package.

The following shows a host with 4 running VMs, each obtaining memory from multiple NUMA nodes. This is not optimal for vCPU performance, and warrants adjusting:

# numastat -c qemu-kvm

Per-node process memory usage (in MBs)
PID              Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total
---------------  ------ ------ ------ ------ ------ ------ ------ ------ -----
51722 (qemu-kvm)     68     16    357   6936      2      3    147    598  8128
51747 (qemu-kvm)    245     11      5     18   5172   2532      1     92  8076
53736 (qemu-kvm)     62    432   1661    506   4851    136     22    445  8116
53773 (qemu-kvm)   1393      3      1      2     12      0      0   6702  8114
---------------  ------ ------ ------ ------ ------ ------ ------ ------ -----
Total              1769    463   2024   7462  10037   2672    169   7837 32434

In contrast, the following shows memory being provided to each VM by a single node, which is significantly more efficient.

# numastat -c qemu-kvm

Per-node process memory usage (in MBs)
PID              Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total
---------------  ------ ------ ------ ------ ------ ------ ------ ------ -----
51747 (qemu-kvm)      0      0      7      0   8072      0      1      0  8080
53736 (qemu-kvm)      0      0      7      0      0      0   8113      0  8120
53773 (qemu-kvm)      0      0      7      0      0      0      1   8110  8118
59065 (qemu-kvm)      0      0   8050      0      0      0      0      0  8051
---------------  ------ ------ ------ ------ ------ ------ ------ ------ -----
Total                 0      0   8072      0   8072      0   8114   8110 32368

Chapter 15. Importance of power management

Reducing the overall power consumption of computer systems helps to save costs. Effectively optimizing the energy consumption of each system component includes studying the different tasks that your system performs, and configuring each component to ensure that its performance is sufficient for that job. Lowering the power consumption of a specific component or of the system as a whole leads to lower heat output, but typically also to lower performance.

Proper power management results in:

  • heat reduction for servers and computing centers
  • reduced secondary costs, including cooling, space, cables, generators, and uninterruptible power supplies (UPS)
  • extended battery life for laptops
  • lower carbon dioxide output
  • meeting government regulations or legal requirements regarding Green IT, for example, Energy Star
  • meeting company guidelines for new systems

This chapter describes how to effectively manage the power consumption of your Red Hat Enterprise Linux systems.

15.1. Power management basics

Effective power management is built on the following principles:

An idle CPU should only wake up when needed

Since Red Hat Enterprise Linux 6, the kernel runs tickless, which means the previous periodic timer interrupts have been replaced with on-demand interrupts. Therefore, idle CPUs are allowed to remain idle until a new task is queued for processing, and CPUs that have entered lower power states can remain in these states longer. However, benefits from this feature can be offset if your system has applications that create unnecessary timer events. Polling events, such as checks for volume changes or mouse movement, are examples of such events.

Red Hat Enterprise Linux includes tools that you can use to identify and audit applications based on their CPU usage. For more information, see Audit and analysis overview and Tools for auditing.

Unused hardware and devices should be disabled completely
This is true for devices that have moving parts, for example, hard disks. In addition to this, some applications may leave an unused but enabled device "open"; when this occurs, the kernel assumes that the device is in use, which can prevent the device from going into a power saving state.
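For example, a minimal sketch of disabling devices that are not in use (the interface name enp7s0 and the btusb module are hypothetical; substitute the devices that are actually unused on your system):

# ip link set dev enp7s0 down
# modprobe -r btusb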
Low activity should translate to low wattage

In many cases, however, this depends on modern hardware and a correct BIOS or UEFI configuration, including on non-x86 architectures. Make sure that you are using the latest official firmware for your systems and that the power management features are enabled in the power management or device configuration sections of the BIOS. Some features to look for include:

  • Collaborative Processor Performance Controls (CPPC) support for ARM64
  • PowerNV support for IBM Power Systems
  • SpeedStep
  • PowerNow!
  • Cool’n’Quiet
  • ACPI (C-state)
  • Smart

    If your hardware has support for these features and they are enabled in the BIOS, Red Hat Enterprise Linux uses them by default.

Different forms of CPU states and their effects

Modern CPUs together with Advanced Configuration and Power Interface (ACPI) provide different power states. The three different states are:

  • Sleep (C-states)
  • Frequency and voltage (P-states)
  • Heat output (T-states or thermal states)

    A CPU running in the deepest sleep state consumes the least power, but it also takes considerably longer to wake from that state when needed. In very rare cases, this can lead to the CPU having to wake up immediately after it goes to sleep. The CPU is then effectively permanently busy, and some of the power that could have been saved by using a shallower state is lost.

A turned off machine uses the least amount of power
One of the best ways to save power is to turn off systems. For example, your company can develop a corporate culture focused on "green IT" awareness with a guideline to turn off machines during lunch break or when going home. You might also consolidate several physical servers into one bigger server and virtualize them by using the virtualization technology shipped with Red Hat Enterprise Linux.

15.2. Audit and analysis overview

The detailed manual audit, analysis, and tuning of a single system is usually the exception because the time and cost spent to do so typically outweighs the benefits gained from these last pieces of system tuning.

However, performing these tasks once for a large number of nearly identical systems where you can reuse the same settings for all systems can be very useful. For example, consider the deployment of thousands of desktop systems, or an HPC cluster where the machines are nearly identical. Another reason to do auditing and analysis is to provide a basis for comparison against which you can identify regressions or changes in system behavior in the future. The results of this analysis can be very helpful in cases where hardware, BIOS, or software updates happen regularly and you want to avoid any surprises with regard to power consumption. Generally, a thorough audit and analysis gives you a much better idea of what is really happening on a particular system.

Auditing and analyzing a system with regard to power consumption is relatively hard, even with the most modern systems available. Most systems do not provide the necessary means to measure power use via software. Exceptions exist though:

  • iLO management console of Hewlett Packard server systems has a power management module that you can access through the web.
  • IBM provides a similar solution in their BladeCenter power management module.
  • On some Dell systems, the IT Assistant offers power monitoring capabilities as well.

Other vendors are likely to offer similar capabilities for their server platforms, but as you can see, there is no single solution that is supported by all vendors. Direct measurements of power consumption are often necessary only when you want to maximize savings as far as possible.

15.3. Tools for auditing

Red Hat Enterprise Linux 8 offers tools that you can use to perform system auditing and analysis. Most of them can be used as supplementary sources of information in case you want to verify what you have already discovered or in case you need more in-depth information about certain parts of the system.

Many of these tools are also used for performance tuning, and include:

PowerTOP
It identifies specific components of the kernel and user-space applications that frequently wake up the CPU. Use the powertop command as root to start the PowerTOP tool and powertop --calibrate to calibrate the power estimation engine. For more information about PowerTOP, see Managing power consumption with PowerTOP.
Diskdevstat and netdevstat

They are SystemTap tools that collect detailed information about the disk activity and network activity of all applications running on a system. Using the statistics collected by these tools, you can identify applications that waste power with many small I/O operations rather than fewer, larger operations. To install the diskdevstat and netdevstat tools, run the yum install tuned-utils-systemtap kernel-debuginfo command as root.

To view the detailed information about the disk and network activity, use:

# diskdevstat

PID   UID   DEV   WRITE_CNT   WRITE_MIN   WRITE_MAX   WRITE_AVG   READ_CNT   READ_MIN   READ_MAX   READ_AVG   COMMAND

3575  1000  dm-2   59          0.000      0.365        0.006        5         0.000        0.000      0.000      mozStorage #5
3575  1000  dm-2    7          0.000      0.000        0.000        0         0.000        0.000      0.000      localStorage DB
[...]


# netdevstat

PID   UID   DEV       XMIT_CNT   XMIT_MIN   XMIT_MAX   XMIT_AVG   RECV_CNT   RECV_MIN   RECV_MAX   RECV_AVG   COMMAND
3572  991  enp0s31f6    40       0.000      0.882       0.108        0         0.000       0.000       0.000     openvpn
3575  1000 enp0s31f6    27       0.000      1.363       0.160        0         0.000       0.000       0.000     Socket Thread
[...]

With these commands, you can specify three parameters: update_interval, total_duration, and display_histogram.
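For example, a minimal sketch that refreshes the statistics every 5 seconds for a total of 60 seconds (the interval and duration values are assumptions; choose values that match the workload you want to observe):

# diskdevstat 5 60
# netdevstat 5 60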

TuneD
It is a profile-based system tuning tool that uses the udev device manager to monitor connected devices, and enables both static and dynamic tuning of system settings. You can use the tuned-adm recommend command to determine which profile Red Hat recommends as the most suitable for a particular product. For more information about TuneD, see Getting started with TuneD and Customizing TuneD profiles. Using the powertop2tuned utility, you can create custom TuneD profiles from PowerTOP suggestions. For information about the powertop2tuned utility, see Optimizing power consumption.
Virtual memory statistics (vmstat)

It is provided by the procps-ng package. Using this tool, you can view the detailed information about processes, memory, paging, block I/O, traps, and CPU activity.

To view this information, use:

$ vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r  b  swpd  free    buff   cache   si   so  bi   bo   in  cs  us  sy id  wa  st
1  0   0   5805576 380856 4852848   0    0  119  73  814  640  2   2 96   0   0

Using the vmstat -a command, you can display active and inactive memory. For more information about other vmstat options, see the vmstat man page on your system.

iostat

It is provided by the sysstat package. This tool is similar to vmstat, but only for monitoring I/O on block devices. It also provides more verbose output and statistics.

To monitor the system I/O, use:

$ iostat
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.05    0.46    1.55    0.26    0.00   95.67

Device     tps     kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
nvme0n1    53.54     899.48     616.99      3445229     2363196
dm-0       42.84     753.72     238.71      2886921      914296
dm-1        0.03       0.60       0.00         2292           0
dm-2       24.15     143.12     379.80       548193     1454712
blktrace

It provides detailed information about how time is spent in the I/O subsystem.

To view this information in human readable format, use:

# blktrace -d /dev/dm-0 -o - | blkparse -i -

253,0   1    1   0.000000000  17694  Q   W 76423384 + 8 [kworker/u16:1]
253,0   2    1   0.001926913     0   C   W 76423384 + 8 [0]
[...]

Here, the first column, 253,0, is the device major and minor numbers. The second column, 1, gives the CPU number, followed by columns for the timestamp and the PID of the process issuing the I/O request.

The sixth column, Q, shows the event type, the seventh column, W, indicates a write operation, the eighth column, 76423384, is the block number, and the + 8 is the number of requested blocks.

The last field, [kworker/u16:1], is the process name.

By default, the blktrace command runs forever until the process is explicitly killed. Use the -w option to specify the run-time duration.
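For example, a minimal sketch that traces a device for 30 seconds (the device name /dev/dm-0 and the duration are assumptions):

# blktrace -w 30 -d /dev/dm-0 -o - | blkparse -i -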

turbostat

It is provided by the kernel-tools package. It reports on processor topology, frequency, idle power-state statistics, temperature, and power usage on x86-64 processors.

To view this summary, use:

# turbostat

CPUID(0): GenuineIntel 0x16 CPUID levels; 0x80000008 xlevels; family:model:stepping 0x6:8e:a (6:142:10)
CPUID(1): SSE3 MONITOR SMX EIST TM2 TSC MSR ACPI-TM HT TM
CPUID(6): APERF, TURBO, DTS, PTM, HWP, HWPnotify, HWPwindow, HWPepp, No-HWPpkg, EPB
[...]

By default, turbostat prints a summary of counter results for the entire system, followed by counter results every 5 seconds. Specify a different period between counter results with the -i option; for example, run turbostat -i 10 to print results every 10 seconds instead.

Turbostat is also useful for identifying servers that are inefficient in terms of power usage or idle time. It also helps to identify the rate of system management interrupts (SMIs) occurring on the system. It can also be used to verify the effects of power management tuning.

cpupower

It is a collection of tools to examine and tune power saving related features of processors. Use the cpupower command with the frequency-info, frequency-set, idle-info, idle-set, set, info, and monitor options to display and set processor related values.

For example, to view available cpufreq governors, use:

$ cpupower frequency-info --governors
analyzing CPU 0:
  available cpufreq governors: performance powersave

For more information about cpupower, see Viewing CPU related information.

GNOME Power Manager
It is a daemon that is installed as part of the GNOME desktop environment. GNOME Power Manager notifies you of changes in your system’s power status; for example, a change from battery to AC power. It also reports battery status, and warns you when battery power is low.

Additional resources

  • powertop(1), diskdevstat(8), netdevstat(8), tuned(8), vmstat(8), iostat(1), blktrace(8), blkparse(8), and turbostat(8) man pages on your system
  • cpupower(1), cpupower-set(1), cpupower-info(1), cpupower-idle(1), cpupower-frequency-set(1), cpupower-frequency-info(1), and cpupower-monitor(1) man pages on your system

Chapter 16. Managing power consumption with PowerTOP

As a system administrator, you can use the PowerTOP tool to analyze and manage power consumption.

16.1. The purpose of PowerTOP

PowerTOP is a program that diagnoses issues related to power consumption and provides suggestions on how to extend battery lifetime.

The PowerTOP tool can provide an estimate of the total power usage of the system and also individual power usage for each process, device, kernel worker, timer, and interrupt handler. The tool can also identify specific components of kernel and user-space applications that frequently wake up the CPU.

Red Hat Enterprise Linux 8 uses version 2.x of PowerTOP.

16.2. Using PowerTOP

Prerequisites

  • To be able to use PowerTOP, make sure that the powertop package has been installed on your system:

    # yum install powertop

16.2.1. Starting PowerTOP

Procedure

  • To run PowerTOP, use the following command:

    # powertop
Important

Laptops should run on battery power when running the powertop command.

16.2.2. Calibrating PowerTOP

Procedure

  1. On a laptop, you can calibrate the power estimation engine by running the following command:

    # powertop --calibrate
  2. Let the calibration finish without interacting with the machine during the process.

    Calibration takes time because the process performs various tests, cycles through brightness levels and switches devices on and off.

  3. When the calibration process is completed, PowerTOP starts as normal. Let it run for approximately an hour to collect data.

    When enough data is collected, power estimation figures will be displayed in the first column of the output table.

Note

Note that powertop --calibrate can only be used on laptops.

16.2.3. Setting the measuring interval

By default, PowerTOP takes measurements in 20-second intervals.

If you want to change this measuring frequency, use the following procedure:

Procedure

  • Run the powertop command with the --time option:

    # powertop --time=time in seconds
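    For example, a minimal sketch that takes measurements every 5 seconds (the interval is an assumption; pick one that fits your analysis):

    # powertop --time=5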

16.3. PowerTOP statistics

While it runs, PowerTOP gathers statistics from the system.

PowerTOP's output provides multiple tabs:

  • Overview
  • Idle stats
  • Frequency stats
  • Device stats
  • Tunables
  • WakeUp

You can use the Tab and Shift+Tab keys to cycle through these tabs.

16.3.1. The Overview tab

In the Overview tab, you can view a list of the components that either send wakeups to the CPU most frequently or consume the most power. The items within the Overview tab, including processes, interrupts, devices, and other resources, are sorted according to their utilization.

The adjacent columns within the Overview tab provide the following pieces of information:

Usage
Power estimation of how the resource is being used.
Events/s
Wakeups per second. The number of wakeups per second indicates how efficiently the services or the devices and drivers of the kernel are performing. Fewer wakeups mean that less power is consumed. Components are ordered by how much further their power usage can be optimized.
Category
Classification of the component; such as process, device, or timer.
Description
Description of the component.

If properly calibrated, a power consumption estimation for every listed item in the first column is shown as well.

Apart from this, the Overview tab includes a line with summary statistics such as:

  • Total power consumption
  • Remaining battery life (only if applicable)
  • Summary of total wakeups per second, GPU operations per second, and virtual file system operations per second

16.3.2. The Idle stats tab

The Idle stats tab shows usage of C-states for all processors and cores, while the Frequency stats tab shows usage of P-states including the Turbo mode, if applicable, for all processors and cores. The duration of C- or P-states is an indication of how well the CPU usage has been optimized. The longer the CPU stays in the higher C- or P-states (for example C4 is higher than C3), the better the CPU usage optimization is. Ideally, residency is 90% or more in the highest C- or P-state when the system is idle.

16.3.3. The Device stats tab

The Device stats tab provides similar information to the Overview tab but only for devices.

16.3.4. The Tunables tab

The Tunables tab contains PowerTOP's suggestions for optimizing the system for lower power consumption.

Use the up and down keys to move through suggestions, and the enter key to toggle the suggestion on or off.

16.3.5. The WakeUp tab

The WakeUp tab displays the device wakeup settings available for users to change as and when required.

Use the up and down keys to move through the available settings, and the enter key to enable or disable a setting.

Figure 16.1. PowerTOP output


Additional resources

For more details on PowerTOP, see PowerTOP’s home page.

16.4. Why PowerTOP does not display Frequency stats values in some instances

While using the Intel P-State driver, PowerTOP only displays values in the Frequency stats tab if the driver is in passive mode. However, even in this case, the values may be incomplete.

In total, there are three possible modes of the Intel P-State driver:

  • Active mode with Hardware P-States (HWP)
  • Active mode without HWP
  • Passive mode

Switching to the ACPI CPUfreq driver results in complete information being displayed by PowerTOP. However, it is recommended to keep your system on the default settings.

To see what driver is loaded and in what mode, run:

# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
  • intel_pstate is returned if the Intel P-State driver is loaded and in active mode.
  • intel_cpufreq is returned if the Intel P-State driver is loaded and in passive mode.
  • acpi-cpufreq is returned if the ACPI CPUfreq driver is loaded.

While using the Intel P-State driver, add the following argument to the kernel boot command line to force the driver to run in passive mode:

intel_pstate=passive

To disable the Intel P-State driver and use, instead, the ACPI CPUfreq driver, add the following argument to the kernel boot command line:

intel_pstate=disable
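A minimal sketch of adding such an argument persistently with the grubby tool (the passive mode argument is used here only as an example; the same approach applies to intel_pstate=disable):

# grubby --update-kernel=ALL --args="intel_pstate=passive"

Reboot the system for the change to take effect.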

16.5. Generating an HTML output

Apart from the powertop output in the terminal, you can also generate an HTML report.

Procedure

  • Run the powertop command with the --html option:

    # powertop --html=htmlfile.html

    Replace the htmlfile.html parameter with the required name for the output file.

16.6. Optimizing power consumption

To optimize power consumption, you can use either the powertop service or the powertop2tuned utility.

16.6.1. Optimizing power consumption using the powertop service

You can use the powertop service to automatically enable all of PowerTOP's suggestions from the Tunables tab at boot time:

Procedure

  • Enable the powertop service:

    # systemctl enable powertop

16.6.2. The powertop2tuned utility

The powertop2tuned utility allows you to create custom TuneD profiles from PowerTOP suggestions.

By default, powertop2tuned creates profiles in the /etc/tuned/ directory, and bases the custom profile on the currently selected TuneD profile. For safety reasons, all PowerTOP tunings are initially disabled in the new profile.

To enable the tunings, you can:

  • Uncomment them in the /etc/tuned/profile_name/tuned.conf file.
  • Use the --enable or -e option to generate a new profile that enables most of the tunings suggested by PowerTOP.

    Certain potentially problematic tunings, such as the USB autosuspend, are disabled by default and need to be uncommented manually.
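For example, a minimal sketch of the second approach (the profile name my_powertop_profile is hypothetical):

# powertop2tuned --enable my_powertop_profile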

16.6.3. Optimizing power consumption using the powertop2tuned utility

Prerequisites

  • The powertop2tuned utility is installed on the system:

    # yum install tuned-utils

Procedure

  1. Create a custom profile:

    # powertop2tuned new_profile_name
  2. Activate the new profile:

    # tuned-adm profile new_profile_name

Additional information

  • For a complete list of options that powertop2tuned supports, use:

    $ powertop2tuned --help

16.6.4. Comparison of powertop.service and powertop2tuned

Optimizing power consumption with powertop2tuned is preferred over powertop.service for the following reasons:

  • The powertop2tuned utility represents the integration of PowerTOP into TuneD, which enables you to benefit from the advantages of both tools.
  • The powertop2tuned utility allows for fine-grained control of enabled tuning.
  • With powertop2tuned, potentially dangerous tunings are not automatically enabled.
  • With powertop2tuned, rollback is possible without reboot.

Chapter 17. Tuning CPU frequency to optimize energy consumption

You can optimize the power consumption of your system by using the available cpupower commands to set the CPU speed according to your requirements after setting up the required CPUfreq governor.

17.1. Supported cpupower tool commands

The cpupower tool is a collection of tools to examine and tune power saving related features of processors.

The cpupower tool supports the following commands:

idle-info
Displays the available idle states and other statistics for the CPU idle driver using the cpupower idle-info command. For more information, see CPU Idle States.
idle-set
Enables or disables a specific CPU idle state using the cpupower idle-set command as root. Use -d to disable and -e to enable a specific CPU idle state, as shown in the sketch that follows.
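For example, a minimal sketch that disables the idle state with index 3 (the index is an assumption; list the available states and their indexes with the cpupower idle-info command first):

# cpupower idle-set -d 3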
frequency-info
Displays the current cpufreq driver and available cpufreq governors using the cpupower frequency-info command. For more information, see CPUfreq drivers, Core CPUfreq Governors, and Intel P-state CPUfreq governors.
frequency-set
Sets the CPU frequency and governor using the cpupower frequency-set command as root. For more information, see Setting up CPUfreq governor.
set

Sets processor power saving policies using the cpupower set command as root.

Using the --perf-bias option, you can enable software on supported Intel processors to determine the balance between optimum performance and saving power. Assigned values range from 0 to 15, where 0 is optimum performance and 15 is optimum power efficiency. By default, the --perf-bias option applies to all cores. To apply it only to individual cores, add the --cpu cpulist option.
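For example, a minimal sketch that applies a power-efficiency-oriented bias only to CPUs 0 and 1 (the bias value and the CPU list are assumptions):

# cpupower --cpu 0,1 set --perf-bias 10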

info

Displays the processor power-related and hardware configurations that you have enabled using the cpupower set command. For example, if you assign the --perf-bias value as 5:

# cpupower set --perf-bias 5
# cpupower info
analyzing CPU 0:
perf-bias: 5
monitor

Displays the idle statistics and CPU demands using the cpupower monitor command.

# cpupower monitor
 | Nehalem       || Mperf    ||Idle_Stats
 CPU| C3   | C6   | PC3  | PC6  || C0   | Cx   | Freq || POLL | C1   | C1E  | C3   | C6   | C7s  | C8   | C9   | C10
   0|  1.95| 55.12|  0.00|  0.00||  4.21| 95.79|  3875||  0.00|  0.68|  2.07|  3.39| 88.77|  0.00|  0.00|  0.00| 0.00
[...]

Using the -l option, you can list all available monitors on your system, and using the -m option, you can display information related to specific monitors. For example, to monitor information related to the Mperf monitor, use the cpupower monitor -m Mperf command as root.

Additional resources

  • cpupower(1), cpupower-idle-info(1), cpupower-idle-set(1), cpupower-frequency-set(1), cpupower-frequency-info(1), cpupower-set(1), cpupower-info(1), and cpupower-monitor(1) man pages on your system

17.2. CPU Idle States

CPUs with the x86 architecture support various states in which parts of the CPU are deactivated or run at lower performance settings. These states are known as C-states.

With these states, you can save power by partially deactivating CPUs that are not in use. Unlike P-states, which require a governor and potentially some setup to avoid undesirable power or performance issues, C-states do not need to be configured. C-states are numbered from C0 upwards, with higher numbers representing decreased CPU functionality and greater power saving. C-states of a given number are broadly similar across processors, although the exact details of the specific feature sets of the state may vary between processor families. C-states 0–3 are defined as follows:

C0
In this state, the CPU is working and not idle at all.
C1, Halt
In this state, the processor is not executing any instructions but is typically not in a lower power state. The CPU can continue processing with practically no delay. All processors offering C-states need to support this state. Pentium 4 processors support an enhanced C1 state called C1E that actually is a state for lower power consumption.
C2, Stop-Clock
In this state, the clock is frozen for this processor but it keeps the complete state for its registers and caches, so after starting the clock again it can immediately start processing again. This is an optional state.
C3, Sleep
In this state, the processor goes to sleep and does not need to keep its cache up to date. Due to this reason, waking up from this state needs considerably more time than from the C2 state. This is an optional state.

You can view the available idle states and other statistics for the CPUidle driver using the following command:

$ cpupower idle-info
CPUidle governor: menu
analyzing CPU 0:

Number of idle states: 9
Available idle states: POLL C1 C1E C3 C6 C7s C8 C9 C10
[...]

Intel CPUs with the "Nehalem" microarchitecture feature a C6 state, which can reduce the voltage supply of a CPU to zero and typically reduces power consumption by between 80% and 90%. The kernel in Red Hat Enterprise Linux 8 includes optimizations for this C-state.

Additional resources

  • cpupower(1) and cpupower-idle(1) man pages on your system

17.3. Overview of CPUfreq

One of the most effective ways to reduce power consumption and heat output on your system is CPUfreq, which is supported by x86 and ARM64 architectures in Red Hat Enterprise Linux 8. CPUfreq, also referred to as CPU speed scaling, is the infrastructure in the Linux kernel that enables it to scale the CPU frequency in order to save power.

CPU scaling can be done automatically depending on the system load, in response to Advanced Configuration and Power Interface (ACPI) events, or manually by user-space programs, and it allows the clock speed of the processor to be adjusted on the fly. This enables the system to run at a reduced clock speed to save power. The rules for shifting frequencies, whether to a faster or slower clock speed and when to shift frequencies, are defined by the CPUfreq governor.

You can view the cpufreq information using the cpupower frequency-info command as root.

17.3.1. CPUfreq drivers

Using the cpupower frequency-info --driver command as root, you can view the current CPUfreq driver.

The following are the two available drivers for CPUfreq that can be used:

ACPI CPUfreq
Advanced Configuration and Power Interface (ACPI) CPUfreq driver is a kernel driver that controls the frequency of a particular CPU through ACPI, which ensures the communication between the kernel and the hardware.
Intel P-state

In Red Hat Enterprise Linux 8, Intel P-state driver is supported. The driver provides an interface for controlling the P-state selection on processors based on the Intel Xeon E series architecture or newer architectures.

Currently, Intel P-state is used by default for supported CPUs. You can switch to using ACPI CPUfreq by adding the intel_pstate=disable argument to the kernel command line.

Intel P-state implements the setpolicy() callback. The driver decides what P-state to use based on the policy requested from the cpufreq core. If the processor is capable of selecting its next P-state internally, the driver offloads this responsibility to the processor. If not, the driver implements algorithms to select the next P-state.

Intel P-state provides its own sysfs files to control the P-state selection. These files are located in the /sys/devices/system/cpu/intel_pstate/ directory. Any changes made to the files are applicable to all CPUs.

This directory contains the following files that are used for setting P-state parameters:

  • max_perf_pct limits the maximum P-state requested by the driver expressed in a percentage of available performance. The available P-state performance can be reduced by the no_turbo setting.
  • min_perf_pct limits the minimum P-state requested by the driver, expressed in a percentage of the maximum no-turbo performance level.
  • no_turbo limits the driver to selecting P-states below the turbo frequency range.
  • turbo_pct displays the percentage of the total performance supported by hardware that is in the turbo range. This number is independent of whether turbo has been disabled or not.
  • num_pstates displays the number of P-states that are supported by hardware. This number is independent of whether turbo has been disabled or not.
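For example, a minimal sketch of inspecting and changing these parameters (the 75% limit is an assumption; pick a value that fits your workload):

# cat /sys/devices/system/cpu/intel_pstate/no_turbo
# echo 75 > /sys/devices/system/cpu/intel_pstate/max_perf_pct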

Additional resources

  • cpupower-frequency-info(1) man page on your system

17.3.2. Core CPUfreq governors

A CPUfreq governor defines the power characteristics of the system CPU, which in turn affects the CPU performance. Each governor has its own unique behavior, purpose, and suitability in terms of workload. Using the cpupower frequency-info --governor command as root, you can view the available CPUfreq governors.

Red Hat Enterprise Linux 8 includes multiple core CPUfreq governors:

cpufreq_performance
It forces the CPU to use the highest possible clock frequency. This frequency is statically set and does not change. As such, this particular governor offers no power saving benefit. It is only suitable for hours of heavy workload, and only during times when the CPU is rarely or never idle.
cpufreq_powersave
It forces the CPU to use the lowest possible clock frequency. This frequency is statically set and does not change. This governor offers maximum power savings, but at the cost of the lowest CPU performance. The term "powersave" can sometimes be deceiving though, since in principle a slow CPU on full load consumes more power than a fast CPU that is not loaded. As such, while it may be advisable to set the CPU to use the powersave governor during times of expected low activity, any unexpected high loads during that time can cause the system to actually consume more power. The Powersave governor is more of a speed limiter for the CPU than a power saver. It is most useful in systems and environments where overheating can be a problem.
cpufreq_ondemand
It is a dynamic governor, using which you can enable the CPU to achieve maximum clock frequency when the system load is high, and also minimum clock frequency when the system is idle. While this allows the system to adjust power consumption accordingly with respect to system load, it does so at the expense of latency between frequency switching. As such, latency can offset any performance or power saving benefits offered by the ondemand governor if the system switches between idle and heavy workloads too often. For most systems, the ondemand governor can provide the best compromise between heat emission, power consumption, performance, and manageability. When the system is only busy at specific times of the day, the ondemand governor automatically switches between maximum and minimum frequency depending on the load without any further intervention.
cpufreq_userspace
It allows user-space programs, or any process running as root, to set the frequency. Of all the governors, userspace is the most customizable and depending on how it is configured, it can offer the best balance between performance and consumption for your system.
cpufreq_conservative
Similar to the ondemand governor, the conservative governor also adjusts the clock frequency according to usage. However, the conservative governor switches between frequencies more gradually. This means that the conservative governor adjusts to a clock frequency that it considers best for the load, rather than simply choosing between maximum and minimum. While this can possibly provide significant savings in power consumption, it does so at an even greater latency than the ondemand governor.
Note

You can enable a governor using cron jobs. This allows you to automatically set specific governors during specific times of the day. As such, you can specify a low-frequency governor during idle times, for example, after work hours, and return to a higher-frequency governor during hours of heavy workload.

For instructions on how to enable a specific governor, see Setting up CPUfreq governor.
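For example, a minimal sketch of root crontab entries that switch to the powersave governor after work hours and back to the performance governor in the morning (the times and governor names are assumptions; adjust them to your environment):

0 18 * * 1-5 /usr/bin/cpupower frequency-set --governor powersave
0 8 * * 1-5 /usr/bin/cpupower frequency-set --governor performance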

17.3.3. Intel P-state CPUfreq governors

By default, the Intel P-state driver operates in active mode with or without hardware-managed P-states (HWP), depending on whether the CPU supports HWP.

Using the cpupower frequency-info --governor command as root, you can view the available CPUfreq governors.

Note

The functionality of performance and powersave Intel P-state CPUfreq governors is different compared to core CPUfreq governors of the same names.

The Intel P-state driver can operate in the following three different modes:

Active mode with hardware-managed P-states

When active mode with HWP is used, the Intel P-state driver instructs the CPU to perform the P-state selection. The driver can provide frequency hints. However, the final selection depends on CPU internal logic. In active mode with HWP, the Intel P-state driver provides two P-state selection algorithms:

  • performance: With the performance governor, the driver instructs internal CPU logic to be performance-oriented. The range of allowed P-states is restricted to the upper boundary of the range that the driver is allowed to use.
  • powersave: With the powersave governor, the driver instructs internal CPU logic to be powersave-oriented.
Active mode without hardware-managed P-states

When active mode without HWP is used, the Intel P-state driver provides two P-state selection algorithms:

  • performance: With the performance governor, the driver chooses the maximum P-state it is allowed to use.
  • powersave: With the powersave governor, the driver chooses P-states proportional to the current CPU utilization. The behavior is similar to the ondemand CPUfreq core governor.
Passive mode
When the passive mode is used, the Intel P-state driver functions the same as the traditional CPUfreq scaling driver. All available generic CPUFreq core governors can be used.

17.3.4. Setting up CPUfreq governor

The CPUfreq drivers are provided as part of the kernel and are selected automatically. To set up CPUfreq, you need to select a governor.

Prerequisites

  • To use cpupower, install the kernel-tools package:

    # yum install kernel-tools

Procedure

  1. View which governors are available for use for a specific CPU:

    # cpupower frequency-info --governors
    analyzing CPU 0:
      available cpufreq governors: performance powersave
  2. Enable one of the governors on all CPUs:

    # cpupower frequency-set --governor performance

    Replace the performance governor with the cpufreq governor name as per your requirement.

    To only enable a governor on specific cores, use -c with a range or comma-separated list of CPU numbers. For example, to enable the userspace governor for CPUs 1-3 and 5, use:

    # cpupower -c 1-3,5 frequency-set --governor userspace
Note

If the kernel-tools package is not installed, the CPUfreq settings can be viewed in the /sys/devices/system/cpu/cpuid/cpufreq/ directory. Settings and values can be changed by writing to these tunables. For example, to set the minimum clock speed of cpu0 to 360 MHz, use:

# echo 360000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq

Verification

  • Verify that the governor is enabled:

    # cpupower frequency-info
    analyzing CPU 0:
      driver: intel_pstate
      CPUs which run at the same hardware frequency: 0
      CPUs which need to have their frequency coordinated by software: 0
      maximum transition latency:  Cannot determine or is not supported.
      hardware limits: 400 MHz - 4.20 GHz
      available cpufreq governors: performance powersave
      current policy: frequency should be within 400 MHz and 4.20 GHz.
            The governor "performance" may decide which speed to use within this range.
      current CPU frequency: Unable to call hardware
      current CPU frequency: 3.88 GHz (asserted by call to kernel)
      boost state support:
        Supported: yes
        Active: yes

    The current policy displays the recently enabled cpufreq governor. In this case, it is performance.

Additional resources

  • cpupower-frequency-info(1) and cpupower-frequency-set(1) man pages on your system

Chapter 18. Getting started with perf

As a system administrator, you can use the perf tool to collect and analyze performance data of your system.

18.1. Introduction to perf

The perf user-space tool interfaces with the kernel-based subsystem Performance Counters for Linux (PCL). perf is a powerful tool that uses the Performance Monitoring Unit (PMU) to measure, record, and monitor a variety of hardware and software events. perf also supports tracepoints, kprobes, and uprobes.

18.2. Installing perf

This procedure installs the perf user-space tool.

Procedure

  • Install the perf tool:

    # yum install perf

18.3. Common perf commands

perf stat
This command provides overall statistics for common performance events, including instructions executed and clock cycles consumed. Options allow for selection of events other than the default measurement events.
perf record
This command records performance data into a file, perf.data, which can be later analyzed using the perf report command.
perf report
This command reads and displays the performance data from the perf.data file created by perf record.
perf list
This command lists the events available on a particular machine. These events will vary based on performance monitoring hardware and software configuration of the system.
perf top
This command performs a similar function to the top utility. It generates and displays a performance counter profile in realtime.
perf trace
This command performs a similar function to the strace tool. It monitors the system calls used by a specified thread or process and all signals received by that application.
perf help
This command displays a complete list of perf commands.

Additional resources

  • Add the --help option to a subcommand to open the man page.

Chapter 19. Profiling CPU usage in real time with perf top

You can use the perf top command to measure CPU usage of different functions in real time.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

19.1. The purpose of perf top

The perf top command is used for real time system profiling and functions similarly to the top utility. However, where the top utility generally shows you how much CPU time a given process or thread is using, perf top shows you how much CPU time each specific function uses. In its default state, perf top tells you about functions being used across all CPUs in both the user-space and the kernel-space. To use perf top you need root access.

19.2. Profiling CPU usage with perf top

This procedure activates perf top and profiles CPU usage in real time.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.
  • You have root access

Procedure

  • Start the perf top monitoring interface:

    # perf top

    The monitoring interface looks similar to the following:

    Samples: 8K of event 'cycles', 2000 Hz, Event count (approx.): 4579432780 lost: 0/0 drop: 0/0
    Overhead  Shared Object       Symbol
       2.20%  [kernel]            [k] do_syscall_64
       2.17%  [kernel]            [k] module_get_kallsym
       1.49%  [kernel]            [k] copy_user_enhanced_fast_string
       1.37%  libpthread-2.29.so  [.] pthread_mutex_lock
       1.31%  [unknown]           [.] 0000000000000000
       1.07%  [kernel]            [k] psi_task_change
       1.04%  [kernel]            [k] switch_mm_irqs_off
       0.94%  [kernel]            [k] fget
       0.74%  [kernel]            [k] entry_SYSCALL_64
       0.69%  [kernel]            [k] syscall_return_via_sysret
       0.69%  libxul.so           [.] 0x000000000113f9b0
       0.67%  [kernel]            [k] kallsyms_expand_symbol.constprop.0
       0.65%  firefox             [.] moz_xmalloc
       0.65%  libpthread-2.29.so  [.] __pthread_mutex_unlock_usercnt
       0.60%  firefox             [.] free
       0.60%  libxul.so           [.] 0x000000000241d1cd
       0.60%  [kernel]            [k] do_sys_poll
       0.58%  [kernel]            [k] menu_select
       0.56%  [kernel]            [k] _raw_spin_lock_irqsave
       0.55%  perf                [.] 0x00000000002ae0f3

    In this example, the kernel function do_syscall_64 is using the most CPU time.

Additional resources

  • perf-top(1) man page on your system

19.3. Interpretation of perf top output

The perf top monitoring interface displays the data in several columns:

The "Overhead" column
Displays the percent of CPU a given function is using.
The "Shared Object" column
Displays name of the program or library which is using the function.
The "Symbol" column
Displays the function name or symbol. Functions executed in the kernel-space are identified by [k] and functions executed in the user-space are identified by [.].

19.4. Why perf displays some function names as raw function addresses

For kernel functions, perf uses the information from the /proc/kallsyms file to map the samples to their respective function names or symbols. For functions executed in the user space, however, you might see raw function addresses because the binary is stripped.

The debuginfo package of the executable must be installed or, if the executable is a locally developed application, the application must be compiled with debugging information turned on (the -g option in GCC) to display the function names or symbols in such a situation.
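For example, a minimal sketch of compiling a locally developed application with debugging information (the file names are hypothetical):

$ gcc -g -o myapp myapp.c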

Note

It is not necessary to re-run the perf record command after installing the debuginfo associated with an executable. Simply re-run the perf report command.

19.5. Enabling debug and source repositories

A standard installation of Red Hat Enterprise Linux does not enable the debug and source repositories. These repositories contain information needed to debug the system components and measure their performance.

Procedure

  • Enable the source and debug information package channels:

    # subscription-manager repos --enable rhel-8-for-$(uname -i)-baseos-debug-rpms
    # subscription-manager repos --enable rhel-8-for-$(uname -i)-baseos-source-rpms
    # subscription-manager repos --enable rhel-8-for-$(uname -i)-appstream-debug-rpms
    # subscription-manager repos --enable rhel-8-for-$(uname -i)-appstream-source-rpms

    The $(uname -i) part is automatically replaced with a matching value for architecture of your system:

    Architecture name        Value
    64-bit Intel and AMD     x86_64
    64-bit ARM               aarch64
    IBM POWER                ppc64le
    64-bit IBM Z             s390x

19.6. Getting debuginfo packages for an application or library using GDB

Debugging information is required to debug code. For code that is installed from a package, the GNU Debugger (GDB) automatically recognizes missing debug information, resolves the package name and provides concrete advice on how to get the package.

Prerequisites

  • The application or library you want to debug must be installed on the system.
  • GDB and the debuginfo-install tool must be installed on the system. For details, see Setting up to debug applications.
  • Repositories providing debuginfo and debugsource packages must be configured and enabled on the system. For details, see Enabling debug and source repositories.

Procedure

  1. Start GDB attached to the application or library you want to debug. GDB automatically recognizes missing debugging information and suggests a command to run.

    $ gdb -q /bin/ls
    Reading symbols from /bin/ls...Reading symbols from .gnu_debugdata for /usr/bin/ls...(no debugging symbols found)...done.
    (no debugging symbols found)...done.
    Missing separate debuginfos, use: dnf debuginfo-install coreutils-8.30-6.el8.x86_64
    (gdb)
  2. Exit GDB: type q and confirm with Enter.

    (gdb) q
  3. Run the command suggested by GDB to install the required debuginfo packages:

    # dnf debuginfo-install coreutils-8.30-6.el8.x86_64

    The dnf package management tool provides a summary of the changes, asks for confirmation and once you confirm, downloads and installs all the necessary files.

  4. In case GDB is not able to suggest the debuginfo package, follow the procedure described in Getting debuginfo packages for an application or library manually.

Chapter 20. Counting events during process execution with perf stat

You can use the perf stat command to count hardware and software events during process execution.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

20.1. The purpose of perf stat

The perf stat command executes a specified command, keeps a running count of hardware and software event occurrences during the command's execution, and generates statistics of these counts. If you do not specify any events, then perf stat counts a set of common hardware and software events.

20.2. Counting events with perf stat

You can use perf stat to count hardware and software event occurrences during command execution and generate statistics of these counts. By default, perf stat operates in per-thread mode.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

Procedure

  • Count the events.

    • Running the perf stat command without root access will only count events occurring in the user space:

      $ perf stat ls

      Example 20.1. Output of perf stat ran without root access

      Desktop  Documents  Downloads  Music  Pictures  Public  Templates  Videos
      
       Performance counter stats for 'ls':
      
                    1.28 msec task-clock:u               #    0.165 CPUs utilized
                       0      context-switches:u         #    0.000 M/sec
                       0      cpu-migrations:u           #    0.000 K/sec
                     104      page-faults:u              #    0.081 M/sec
               1,054,302      cycles:u                   #    0.823 GHz
               1,136,989      instructions:u             #    1.08  insn per cycle
                 228,531      branches:u                 #  178.447 M/sec
                  11,331      branch-misses:u            #    4.96% of all branches
      
             0.007754312 seconds time elapsed
      
             0.000000000 seconds user
             0.007717000 seconds sys

      As you can see in the previous example, when perf stat runs without root access the event names are followed by :u, indicating that these events were counted only in the user-space.

    • To count both user-space and kernel-space events, you must have root access when running perf stat:

      # perf stat ls

      Example 20.2. Output of perf stat ran with root access

      Desktop  Documents  Downloads  Music  Pictures  Public  Templates  Videos
      
       Performance counter stats for 'ls':
      
                    3.09 msec task-clock                #    0.119 CPUs utilized
                      18      context-switches          #    0.006 M/sec
                       3      cpu-migrations            #    0.969 K/sec
                     108      page-faults               #    0.035 M/sec
               6,576,004      cycles                    #    2.125 GHz
               5,694,223      instructions              #    0.87  insn per cycle
               1,092,372      branches                  #  352.960 M/sec
                  31,515      branch-misses             #    2.89% of all branches
      
             0.026020043 seconds time elapsed
      
             0.000000000 seconds user
             0.014061000 seconds sys
      • By default, perf stat operates in per-thread mode. To change to CPU-wide event counting, pass the -a option to perf stat. To count CPU-wide events, you need root access:

        # perf stat -a ls

Additional resources

  • perf-stat(1) man page on your system

20.3. Interpretation of perf stat output

perf stat executes a specified command, counts event occurrences during the command's execution, and displays statistics of these counts in three columns:

  1. The number of occurrences counted for a given event
  2. The name of the event that was counted
  3. When related metrics are available, a ratio or percentage is displayed after the hash sign (#) in the right-most column.

    For example, when running in default mode, perf stat counts both cycles and instructions and, therefore, calculates and displays instructions per cycle in the right-most column. You can see similar behavior with regard to branch-misses as a percent of all branches since both events are counted by default.
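    As a worked example, in Example 20.2 above, 5,694,223 instructions divided by 6,576,004 cycles gives approximately 0.87 instructions per cycle, and 31,515 branch misses divided by 1,092,372 branches gives approximately 2.89% of all branches, which matches the ratios printed after the hash signs.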

20.4. Attaching perf stat to a running process

You can attach perf stat to a running process. This will instruct perf stat to count event occurrences only in the specified processes during the execution of a command.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

Procedure

  • Attach perf stat to a running process:

    $ perf stat -p ID1,ID2 sleep seconds

    The previous example counts events in the processes with the IDs of ID1 and ID2 for a time period of seconds seconds as dictated by using the sleep command.

Additional resources

  • perf-stat(1) man page on your system

Chapter 21. Recording and analyzing performance profiles with perf

The perf tool allows you to record performance data and analyze it at a later time.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

21.1. The purpose of perf record

The perf record command samples performance data and stores it in a file, perf.data, which can be read and visualized with other perf commands. perf.data is generated in the current directory and can be accessed at a later time, possibly on a different machine.

If you do not specify a command for perf record to sample, the tool records until you manually stop the process by pressing Ctrl+C. You can attach perf record to specific processes by passing the -p option followed by one or more process IDs. You can run perf record without root access; however, doing so only samples performance data in the user space. In the default mode, perf record uses CPU cycles as the sampling event and operates in per-thread mode with inherit mode enabled.

21.2. Recording a performance profile without root access

You can use perf record without root access to sample and record performance data in the user-space only.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

Procedure

  • Sample and record the performance data:

    $ perf record command

    Replace command with the command you want to sample data during. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl+C.

Additional resources

  • perf-record(1) man page on your system

21.3. Recording a performance profile with root access

You can use perf record with root access to sample and record performance data in both the user-space and the kernel-space simultaneously.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.
  • You have root access.

Procedure

  • Sample and record the performance data:

    # perf record command

    Replace command with the command you want to sample data during. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl+C.

Additional resources

  • perf-record(1) man page on your system

21.4. Recording a performance profile in per-CPU mode

You can use perf record in per-CPU mode to sample and record performance data in both the user-space and the kernel-space simultaneously across all threads on a monitored CPU. By default, per-CPU mode monitors all online CPUs.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

Procedure

  • Sample and record the performance data:

    # perf record -a command

    Replace command with the command you want to sample data during. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl+C.

Additional resources

  • perf-record(1) man page on your system

21.5. Capturing call graph data with perf record

You can configure the perf record tool so that it records which function is calling other functions in the performance profile. This helps to identify a bottleneck if several processes are calling the same function.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

Procedure

  • Sample and record performance data with the --call-graph option:

    $ perf record --call-graph method command
    • Replace command with the command you want to sample data during. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl+C.
    • Replace method with one of the following unwinding methods:

      fp
      Uses the frame pointer method. Depending on compiler optimization, such as with binaries built with the GCC option -fomit-frame-pointer, this may not be able to unwind the stack.
      dwarf
      Uses DWARF Call Frame Information to unwind the stack.
      lbr
      Uses the last branch record hardware on Intel processors.
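    For example, a minimal sketch that captures DWARF-based call graph data while a hypothetical locally built application named ./myapp runs:

    $ perf record --call-graph dwarf ./myapp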

Additional resources

  • perf-record(1) man page on your system

21.6. Analyzing perf.data with perf report

You can use perf report to display and analyze a perf.data file.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.
  • There is a perf.data file in the current directory.
  • If the perf.data file was created with root access, you need to run perf report with root access too.

Procedure

  • Display the contents of the perf.data file for further analysis:

    # perf report

    This command displays output similar to the following:

    Samples: 2K of event 'cycles', Event count (approx.): 235462960
    Overhead  Command          Shared Object                     Symbol
       2.36%  kswapd0          [kernel.kallsyms]                 [k] page_vma_mapped_walk
       2.13%  sssd_kcm         libc-2.28.so                      [.] memset_avx2_erms
       2.13%  perf             [kernel.kallsyms]                 [k] smp_call_function_single
       1.53%  gnome-shell      libc-2.28.so                      [.] strcmp_avx2
       1.17%  gnome-shell      libglib-2.0.so.0.5600.4           [.] g_hash_table_lookup
       0.93%  Xorg             libc-2.28.so                      [.] memmove_avx_unaligned_erms
       0.89%  gnome-shell      libgobject-2.0.so.0.5600.4        [.] g_object_unref
       0.87%  kswapd0          [kernel.kallsyms]                 [k] page_referenced_one
       0.86%  gnome-shell      libc-2.28.so                      [.] memmove_avx_unaligned_erms
       0.83%  Xorg             [kernel.kallsyms]                 [k] alloc_vmap_area
       0.63%  gnome-shell      libglib-2.0.so.0.5600.4           [.] g_slice_alloc
       0.53%  gnome-shell      libgirepository-1.0.so.1.0.0      [.] g_base_info_unref
       0.53%  gnome-shell      ld-2.28.so                        [.] _dl_find_dso_for_object
       0.49%  kswapd0          [kernel.kallsyms]                 [k] vma_interval_tree_iter_next
       0.48%  gnome-shell      libpthread-2.28.so                [.] pthread_getspecific 0.47% gnome-shell libgirepository-1.0.so.1.0.0 [.] 0x0000000000013b1d 0.45% gnome-shell libglib-2.0.so.0.5600.4 [.] g_slice_free1 0.45% gnome-shell libgobject-2.0.so.0.5600.4 [.] g_type_check_instance_is_fundamentally_a 0.44% gnome-shell libc-2.28.so [.] malloc 0.41% swapper [kernel.kallsyms] [k] apic_timer_interrupt 0.40% gnome-shell ld-2.28.so [.] _dl_lookup_symbol_x 0.39% kswapd0 [kernel.kallsyms] [k] raw_callee_save___pv_queued_spin_unlock

Additional resources

  • perf-report(1) man page on your system

21.7. Interpretation of perf report output

The table displayed by running the perf report command sorts the data into several columns:

The 'Overhead' column
Indicates what percentage of overall samples were collected in that particular function.
The 'Command' column
Tells you which process the samples were collected from.
The 'Shared Object' column
Displays the name of the ELF image where the samples come from (the name [kernel.kallsyms] is used when the samples come from the kernel).
The 'Symbol' column
Displays the function name or symbol.

In default mode, the functions are sorted in descending order with those with the highest overhead displayed first.
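
You can also change how perf report groups the samples by supplying your own sort keys. As a minimal illustration, the following command sorts the report first by command and then by shared object (comm and dso are standard perf report sort keys):

    # perf report --sort comm,dso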

21.8. Generating a perf.data file that is readable on a different device

You can use the perf tool to record performance data into a perf.data file to be analyzed on a different device.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

Procedure

  1. Capture performance data you are interested in investigating further:

    # perf record -a --call-graph fp sleep seconds

     This example generates a perf.data file over the entire system for a period of seconds seconds, as dictated by the use of the sleep command. It also captures call graph data using the frame pointer method.

  2. Generate an archive file containing debug symbols of the recorded data:

    # perf archive

Verification

  • Verify that the archive file has been generated in your current working directory:

    # ls perf.data*

    The output will display every file in your current directory that begins with perf.data. The archive file will be named either:

    perf.data.tar.gz

    or

    perf.data.tar.bz2
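
    To analyze the recording on a different device, copy both the perf.data file and the generated archive to that device. For example, using scp with a placeholder host name:

    $ scp perf.data perf.data.tar.bz2 user@analysis-host:~/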

21.9. Analyzing a perf.data file that was created on a different device

You can use the perf tool to analyze a perf.data file that was generated on a different device.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.
  • A perf.data file and associated archive file generated on a different device are present on the current device being used.

Procedure

  1. Copy both the perf.data file and the archive file into your current working directory.
  2. Extract the archive file into ~/.debug:

    # mkdir -p ~/.debug
    # tar xf perf.data.tar.bz2 -C ~/.debug
    Note

    The archive file might also be named perf.data.tar.gz.

  3. Open the perf.data file for further analysis:

    # perf report

21.10. Why perf displays some function names as raw function addresses

For kernel functions, perf uses the information from the /proc/kallsyms file to map the samples to their respective function names or symbols. For functions executed in the user space, however, you might see raw function addresses because the binary is stripped.

To display the function names or symbols in such a situation, the debuginfo package of the executable must be installed or, if the executable is a locally developed application, the application must be compiled with debugging information enabled (the -g option in GCC).

Note

It is not necessary to re-run the perf record command after installing the debuginfo associated with an executable. Simply re-run the perf report command.
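
For example, if the stripped symbols come from a packaged executable, you can install its debuginfo package and then re-read the existing recording without profiling again (coreutils is only an illustration):

    # dnf debuginfo-install coreutils
    # perf report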

21.11. Enabling debug and source repositories

A standard installation of Red Hat Enterprise Linux does not enable the debug and source repositories. These repositories contain information needed to debug the system components and measure their performance.

Procedure

  • Enable the source and debug information package channels:

    # subscription-manager repos --enable rhel-8-for-$(uname -i)-baseos-debug-rpms
    # subscription-manager repos --enable rhel-8-for-$(uname -i)-baseos-source-rpms
    # subscription-manager repos --enable rhel-8-for-$(uname -i)-appstream-debug-rpms
    # subscription-manager repos --enable rhel-8-for-$(uname -i)-appstream-source-rpms

     The $(uname -i) part is automatically replaced with a matching value for the architecture of your system:

     Architecture name       Value
     64-bit Intel and AMD    x86_64
     64-bit ARM              aarch64
     IBM POWER               ppc64le
     64-bit IBM Z            s390x
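
     You can check that the repositories are now active, for example by filtering the list of enabled repositories:

     # subscription-manager repos --list-enabled | grep -E 'debug|source'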

21.12. Getting debuginfo packages for an application or library using GDB

Debugging information is required to debug code. For code that is installed from a package, the GNU Debugger (GDB) automatically recognizes missing debug information, resolves the package name and provides concrete advice on how to get the package.

Prerequisites

  • The application or library you want to debug must be installed on the system.
  • GDB and the debuginfo-install tool must be installed on the system. For details, see Setting up to debug applications.
  • Repositories providing debuginfo and debugsource packages must be configured and enabled on the system. For details, see Enabling debug and source repositories.

Procedure

  1. Start GDB attached to the application or library you want to debug. GDB automatically recognizes missing debugging information and suggests a command to run.

    $ gdb -q /bin/ls
    Reading symbols from /bin/ls...Reading symbols from .gnu_debugdata for /usr/bin/ls...(no debugging symbols found)...done.
    (no debugging symbols found)...done.
    Missing separate debuginfos, use: dnf debuginfo-install coreutils-8.30-6.el8.x86_64
    (gdb)
  2. Exit GDB: type q and confirm with Enter.

    (gdb) q
  3. Run the command suggested by GDB to install the required debuginfo packages:

    # dnf debuginfo-install coreutils-8.30-6.el8.x86_64

     The dnf package management tool provides a summary of the changes, asks for confirmation, and, once you confirm, downloads and installs all the necessary files.

  4. In case GDB is not able to suggest the debuginfo package, follow the procedure described in Getting debuginfo packages for an application or library manually.

Chapter 22. Investigating busy CPUs with perf

When investigating performance issues on a system, you can use the perf tool to identify and monitor the busiest CPUs in order to focus your efforts.

22.1. Displaying which CPU events were counted on with perf stat

You can use perf stat to display which CPU events were counted on by disabling CPU count aggregation. You must count events in system-wide mode by using the -a flag in order to use this functionality.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

Procedure

  • Count the events with CPU count aggregation disabled:

    # perf stat -a -A sleep seconds

     The previous example displays counts of a default set of common hardware and software events, recorded over a time period of seconds seconds as dictated by the use of the sleep command, for each individual CPU in ascending order starting with CPU0. In this context, it can be useful to specify a particular event of interest, such as cycles:

    # perf stat -a -A -e cycles sleep seconds
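
     You can also pass several events as a comma-separated list. For example, the following sketch counts cycles and instructions on each CPU over a 5-second interval (the interval is only an illustration):

     # perf stat -a -A -e cycles,instructions sleep 5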

22.2. Displaying which CPU samples were taken on with perf report

The perf record command samples performance data and stores this data in a perf.data file which can be read with the perf report command. The perf record command always records which CPU samples were taken on. You can configure perf report to display this information.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.
  • There is a perf.data file created with perf record in the current directory. If the perf.data file was created with root access, you need to run perf report with root access too.

Procedure

  • Display the contents of the perf.data file for further analysis while sorting by CPU:

    # perf report --sort cpu
    • You can sort by CPU and command to display more detailed information about where CPU time is being spent:

      # perf report --sort cpu,comm

      This example will list commands from all monitored CPUs by total overhead in descending order of overhead usage and identify the CPU the command was executed on.
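
       If you want the sorted report as plain text instead of the interactive interface, for example to redirect it to a file, you can add the --stdio option:

       # perf report --sort cpu,comm --stdio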

22.3. Displaying specific CPUs during profiling with perf top

You can configure perf top to display specific CPUs and their relative usage while profiling your system in real time.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

Procedure

  • Start the perf top interface while sorting by CPU:

    # perf top --sort cpu

    This example will list CPUs and their respective overhead in descending order of overhead usage in real time.

    • You can sort by CPU and command for more detailed information of where CPU time is being spent:

      # perf top --sort cpu,comm

      This example will list commands by total overhead in descending order of overhead usage and identify the CPU the command was executed on in real time.

22.4. Monitoring specific CPUs with perf record and perf report

You can configure perf record to only sample specific CPUs of interest and analyze the generated perf.data file with perf report for further analysis.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

Procedure

  1. Sample and record the performance data on the specific CPUs, generating a perf.data file:

    • Using a comma separated list of CPUs:

      # perf record -C 0,1 sleep seconds

       The previous example samples and records data on CPUs 0 and 1 for a period of seconds seconds as dictated by the use of the sleep command.

    • Using a range of CPUs:

      # perf record -C 0-2 sleep seconds

       The previous example samples and records data on all CPUs from CPU 0 to CPU 2 for a period of seconds seconds as dictated by the use of the sleep command.

  2. Display the contents of the perf.data file for further analysis:

    # perf report

    This example will display the contents of perf.data. If you are monitoring several CPUs and want to know which CPU data was sampled on, see Displaying which CPU samples were taken on with perf report.
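
     The CPU filter can be combined with other perf record options. As a minimal sketch, the following command restricts sampling to CPU 0 while also capturing call graph data with the frame pointer method for 10 seconds (the interval is only an illustration):

     # perf record -C 0 --call-graph fp sleep 10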

Chapter 23. Monitoring application performance with perf

You can use the perf tool to monitor and analyze application performance.

23.1. Attaching perf record to a running process

You can attach perf record to a running process. This will instruct perf record to only sample and record performance data in the specified processes.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

Procedure

  • Attach perf record to a running process:

    $ perf record -p ID1,ID2 sleep seconds

     The previous example samples and records performance data of the processes with the process IDs ID1 and ID2 for a time period of seconds seconds as dictated by the use of the sleep command. You can also configure perf to record events in specific threads:

    $ perf record -t ID1,ID2 sleep seconds
    Note

     When using the -t flag and specifying thread IDs, perf disables inheritance by default. You can enable inheritance by adding the --inherit option.
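
     For example, assuming a hypothetical service process named myserver is already running as a single process, you can look up its process ID with pgrep and attach to it for 30 seconds (the process name and interval are only illustrations):

     $ perf record -p $(pgrep myserver) sleep 30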

23.2. Capturing call graph data with perf record

You can configure the perf record tool so that it records which function is calling other functions in the performance profile. This helps to identify a bottleneck if several processes are calling the same function.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

Procedure

  • Sample and record performance data with the --call-graph option:

    $ perf record --call-graph method command
    • Replace command with the command you want to sample data during. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl+C.
    • Replace method with one of the following unwinding methods:

      fp
      Uses the frame pointer method. Depending on compiler optimizations, for example with binaries built with the GCC option -fomit-frame-pointer, this method might not be able to unwind the stack.
      dwarf
      Uses DWARF Call Frame Information to unwind the stack.
      lbr
      Uses the last branch record hardware on Intel processors.

Additional resources

  • perf-record(1) man page on your system

23.3. Analyzing perf.data with perf report

You can use perf report to display and analyze a perf.data file.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.
  • There is a perf.data file in the current directory.
  • If the perf.data file was created with root access, you need to run perf report with root access too.

Procedure

  • Display the contents of the perf.data file for further analysis:

    # perf report

    This command displays output similar to the following:

    Samples: 2K of event 'cycles', Event count (approx.): 235462960
    Overhead  Command          Shared Object                     Symbol
       2.36%  kswapd0          [kernel.kallsyms]                 [k] page_vma_mapped_walk
       2.13%  sssd_kcm         libc-2.28.so                      [.] memset_avx2_erms
       2.13%  perf             [kernel.kallsyms]                 [k] smp_call_function_single
       1.53%  gnome-shell      libc-2.28.so                      [.] strcmp_avx2
       1.17%  gnome-shell      libglib-2.0.so.0.5600.4           [.] g_hash_table_lookup
       0.93%  Xorg             libc-2.28.so                      [.] memmove_avx_unaligned_erms
       0.89%  gnome-shell      libgobject-2.0.so.0.5600.4        [.] g_object_unref
       0.87%  kswapd0          [kernel.kallsyms]                 [k] page_referenced_one
       0.86%  gnome-shell      libc-2.28.so                      [.] memmove_avx_unaligned_erms
       0.83%  Xorg             [kernel.kallsyms]                 [k] alloc_vmap_area
       0.63%  gnome-shell      libglib-2.0.so.0.5600.4           [.] g_slice_alloc
       0.53%  gnome-shell      libgirepository-1.0.so.1.0.0      [.] g_base_info_unref
       0.53%  gnome-shell      ld-2.28.so                        [.] _dl_find_dso_for_object
       0.49%  kswapd0          [kernel.kallsyms]                 [k] vma_interval_tree_iter_next
       0.48%  gnome-shell      libpthread-2.28.so                [.] pthread_getspecific
       0.47%  gnome-shell      libgirepository-1.0.so.1.0.0      [.] 0x0000000000013b1d
       0.45%  gnome-shell      libglib-2.0.so.0.5600.4           [.] g_slice_free1
       0.45%  gnome-shell      libgobject-2.0.so.0.5600.4        [.] g_type_check_instance_is_fundamentally_a
       0.44%  gnome-shell      libc-2.28.so                      [.] malloc
       0.41%  swapper          [kernel.kallsyms]                 [k] apic_timer_interrupt
       0.40%  gnome-shell      ld-2.28.so                        [.] _dl_lookup_symbol_x
       0.39%  kswapd0          [kernel.kallsyms]                 [k] raw_callee_save___pv_queued_spin_unlock

Additional resources

  • perf-report(1) man page on your system

Chapter 24. Creating uprobes with perf

24.1. Creating uprobes at the function level with perf

You can use the perf tool to create dynamic tracepoints at arbitrary points in a process or application. These tracepoints can then be used in conjunction with other perf tools, such as perf stat and perf record, to better understand the behavior of the process or application.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.

Procedure

  1. Create the uprobe in the process or application you are interested in monitoring at a location of interest within the process or application:

    # perf probe -x /path/to/executable -a function
    Added new event:
      probe_executable:function   (on function in /path/to/executable)
    
    You can now use it in all perf tools, such as:
    
            perf record -e probe_executable:function -aR sleep 1
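
    After the uprobe exists, you can use it like any other perf event. For example, the following sketch records the new event system-wide for 10 seconds and then prints the individual samples (the interval is only an illustration):

    # perf record -e probe_executable:function -aR sleep 10
    # perf script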

24.2. Creating uprobes on lines within a function with perf

You can use the perf tool to create dynamic tracepoints at individual lines within a function. These tracepoints can then be used in conjunction with other perf tools, such as perf stat and perf record, to better understand the behavior of the process or application.

Prerequisites

  • You have the perf user space tool installed as described in Installing perf.
  • You have the debugging symbols for your executable:

    # objdump -t ./your_executable | head
    Note

    To do this, the debuginfo package of the executable must be installed or, if the executable is a locally developed application, the application must be compiled with debugging information, the -g option in GCC.

Procedure

  1. View the function lines where you can place a uprobe: