Monitoring and managing system status and performance
Optimizing system throughput, latency, and power consumption
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation. Let us know how we can improve it.
Submitting feedback through Jira (account required)
- Log in to the Jira website.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialogue.
Chapter 1. Getting started with TuneD
As a system administrator, you can use the TuneD application to optimize the performance profile of your system for a variety of use cases.
1.1. The purpose of TuneD
TuneD is a service that monitors your system and optimizes the performance under certain workloads. The core of TuneD are profiles, which tune your system for different use cases.
TuneD is distributed with a number of predefined profiles for use cases such as:
- High throughput
- Low latency
- Saving power
It is possible to modify the rules defined for each profile and customize how to tune a particular device. When you switch to another profile or deactivate TuneD, all changes made to the system settings by the previous profile revert back to their original state.
You can also configure TuneD to react to changes in device usage and adjust settings to improve the performance of active devices and reduce the power consumption of inactive devices.
1.2. TuneD profiles
A detailed analysis of a system can be very time-consuming. TuneD provides a number of predefined profiles for typical use cases. You can also create, modify, and delete profiles.
The profiles provided with TuneD are divided into the following categories:
- Power-saving profiles
- Performance-boosting profiles
The performance-boosting profiles include profiles that focus on the following aspects:
- Low latency for storage and network
- High throughput for storage and network
- Virtual machine performance
- Virtualization host performance
Syntax of profile configuration
The tuned.conf file can contain one [main] section and other sections for configuring plug-in instances. However, all sections are optional.
Lines starting with the hash sign (#) are comments.
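For illustration, a minimal sketch of a profile configuration, assuming a hypothetical custom profile; the summary text and the sysctl setting are arbitrary examples, not taken from any shipped profile:
[main]
# Lines starting with a hash sign are comments
summary=Example custom profile
[sysctl]
vm.swappiness=10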
Additional resources
-
tuned.conf(5)
man page.
1.3. The default TuneD profile
During the installation, the best profile for your system is selected automatically. Currently, the default profile is selected according to the following customizable rules:
Environment | Default profile | Goal
---|---|---
Compute nodes | throughput-performance | The best throughput performance
Virtual machines | virtual-guest | The best performance. If you are not interested in the best performance, you can change it to the balanced or powersave profile.
Other cases | balanced | Balanced performance and power consumption
Additional resources
-
tuned.conf(5)
man page.
1.4. Merged TuneD profiles
As an experimental feature, it is possible to select multiple profiles at once. TuneD tries to merge them during loading.
If there are conflicts, the settings from the last specified profile take precedence.
Example 1.1. Low power consumption in a virtual guest
The following example optimizes the system to run in a virtual machine for the best performance and concurrently tunes it for low power consumption, while the low power consumption is the priority:
# tuned-adm profile virtual-guest powersave
Merging is done automatically without checking whether the resulting combination of parameters makes sense. Consequently, the feature might tune some parameters the opposite way, which might be counterproductive: for example, setting the disk for high throughput by using the throughput-performance profile and concurrently setting the disk spindown to a low value by using the spindown-disk profile.
Additional resources
- tuned-adm(8) man page
- tuned.conf(5) man page
1.5. The location of TuneD profiles
TuneD stores profiles in the following directories:
/usr/lib/tuned/
- Distribution-specific profiles are stored in this directory. Each profile has its own directory. The profile consists of the main configuration file called tuned.conf, and optionally other files, for example helper scripts.
/etc/tuned/
- If you need to customize a profile, copy the profile directory into this directory, which is used for custom profiles. If there are two profiles of the same name, the custom profile located in /etc/tuned/ is used.
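For example, a sketch of customizing a shipped profile by copying it into /etc/tuned/; the profile name is chosen for illustration:
# cp -r /usr/lib/tuned/throughput-performance /etc/tuned/
# vi /etc/tuned/throughput-performance/tuned.conf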
Additional resources
-
tuned.conf(5)
man page.
1.6. TuneD profiles distributed with RHEL
The following is a list of profiles that are installed with TuneD on Red Hat Enterprise Linux.
There might be more product-specific or third-party TuneD profiles available. Such profiles are usually provided by separate RPM packages.
balanced
The default power-saving profile. It is intended to be a compromise between performance and power consumption. It uses auto-scaling and auto-tuning whenever possible. The only drawback is the increased latency. In the current TuneD release, it enables the CPU, disk, audio, and video plugins, and activates the conservative CPU governor. The radeon_powersave option uses the dpm-balanced value if it is supported, otherwise it is set to auto.
It changes the energy_performance_preference attribute to the normal energy setting. It also changes the scaling_governor policy attribute to either the conservative or powersave CPU governor.
powersave
A profile for maximum power saving. It can throttle the performance in order to minimize the actual power consumption. In the current TuneD release, it enables USB autosuspend, WiFi power saving, and Aggressive Link Power Management (ALPM) power savings for SATA host adapters. It also schedules multi-core power savings for systems with a low wakeup rate and activates the ondemand governor. It enables AC97 audio power saving or, depending on your system, HDA-Intel power savings with a 10-second timeout. If your system contains a supported Radeon graphics card with enabled KMS, the profile configures it to automatic power saving. On ASUS Eee PCs, a dynamic Super Hybrid Engine is enabled.
It changes the energy_performance_preference attribute to the powersave or power energy setting. It also changes the scaling_governor policy attribute to either the ondemand or powersave CPU governor.
Note: In certain cases, the balanced profile is more efficient compared to the powersave profile.
Consider a defined amount of work that needs to be done, for example a video file that needs to be transcoded. Your machine might consume less energy if the transcoding is done at full power, because the task is finished quickly, the machine starts to idle, and it can automatically step down to very efficient power-save modes. On the other hand, if you transcode the file with a throttled machine, the machine consumes less power during the transcoding, but the process takes longer and the overall consumed energy can be higher.
That is why the balanced profile is generally a better option.
throughput-performance
A server profile optimized for high throughput. It disables power-saving mechanisms and enables sysctl settings that improve the throughput performance of the disk and network IO. The CPU governor is set to performance.
It changes the energy_performance_preference and scaling_governor attributes to the performance profile.
accelerator-performance
- The accelerator-performance profile contains the same tuning as the throughput-performance profile. Additionally, it locks the CPU to low C states so that the latency is less than 100 us. This improves the performance of certain accelerators, such as GPUs.
latency-performance
A server profile optimized for low latency. It disables power-saving mechanisms and enables sysctl settings that improve latency. The CPU governor is set to performance and the CPU is locked to the low C states (by PM QoS).
It changes the energy_performance_preference and scaling_governor attributes to the performance profile.
network-latency
A profile for low-latency network tuning. It is based on the latency-performance profile. It additionally disables transparent huge pages and NUMA balancing, and tunes several other network-related sysctl parameters.
It inherits the latency-performance profile, which changes the energy_performance_preference and scaling_governor attributes to the performance profile.
hpc-compute
- A profile optimized for high-performance computing. It is based on the latency-performance profile.
network-throughput
A profile for throughput network tuning. It is based on the throughput-performance profile. It additionally increases kernel network buffers.
It inherits either the latency-performance or throughput-performance profile, and changes the energy_performance_preference and scaling_governor attributes to the performance profile.
virtual-guest
A profile designed for Red Hat Enterprise Linux 9 virtual machines and VMware guests based on the throughput-performance profile that, among other tasks, decreases virtual memory swappiness and increases disk readahead values. It does not disable disk barriers.
It inherits the throughput-performance profile and changes the energy_performance_preference and scaling_governor attributes to the performance profile.
virtual-host
A profile designed for virtual hosts based on the throughput-performance profile that, among other tasks, decreases virtual memory swappiness, increases disk readahead values, and enables a more aggressive value of dirty pages writeback.
It inherits the throughput-performance profile and changes the energy_performance_preference and scaling_governor attributes to the performance profile.
oracle
- A profile optimized for Oracle database loads, based on the throughput-performance profile. It additionally disables transparent huge pages and modifies other performance-related kernel parameters. This profile is provided by the tuned-profiles-oracle package.
desktop
- A profile optimized for desktops, based on the balanced profile. It additionally enables scheduler autogroups for better response of interactive applications.
optimize-serial-console
A profile that tunes down I/O activity to the serial console by reducing the printk value. This should make the serial console more responsive. This profile is intended to be used as an overlay on other profiles. For example:
# tuned-adm profile throughput-performance optimize-serial-console
mssql
- A profile provided for Microsoft SQL Server. It is based on the throughput-performance profile.
intel-sst
A profile optimized for systems with user-defined Intel Speed Select Technology configurations. This profile is intended to be used as an overlay on other profiles. For example:
# tuned-adm profile cpu-partitioning intel-sst
1.7. TuneD cpu-partitioning profile
For tuning Red Hat Enterprise Linux 9 for latency-sensitive workloads, Red Hat recommends using the cpu-partitioning TuneD profile.
Prior to Red Hat Enterprise Linux 9, the low-latency Red Hat documentation described the numerous low-level steps needed to achieve low-latency tuning. In Red Hat Enterprise Linux 9, you can perform low-latency tuning more efficiently by using the cpu-partitioning
TuneD profile. This profile is easily customizable according to the requirements for individual low-latency applications.
The following figure demonstrates how to use the cpu-partitioning profile. This example uses the CPU and node layout from the figure.
Figure 1.1. cpu-partitioning
You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf
file using the following configuration options:
- Isolated CPUs with load balancing
In the cpu-partitioning figure, the blocks numbered from 4 to 23 are the default isolated CPUs. The kernel scheduler’s process load balancing is enabled on these CPUs. It is designed for low-latency processes with multiple threads that need the kernel scheduler load balancing.
You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the isolated_cores=cpu-list option, which lists CPUs to isolate that will use the kernel scheduler load balancing.
The list of isolated CPUs is comma-separated, or you can specify a range using a dash, such as 3-5. This option is mandatory. Any CPU missing from this list is automatically considered a housekeeping CPU.
- Isolated CPUs without load balancing
In the cpu-partitioning figure, the blocks numbered 2 and 3 are the isolated CPUs that do not provide any additional kernel scheduler process load balancing.
You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the no_balance_cores=cpu-list option, which lists CPUs to isolate that will not use the kernel scheduler load balancing.
Specifying the no_balance_cores option is optional; however, any CPUs in this list must be a subset of the CPUs listed in isolated_cores.
Application threads using these CPUs need to be pinned individually to each CPU.
- Housekeeping CPUs
-
Any CPU not isolated in the cpu-partitioning-variables.conf file is automatically considered a housekeeping CPU. On the housekeeping CPUs, all services, daemons, user processes, movable kernel threads, interrupt handlers, and kernel timers are permitted to execute.
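For illustration, a minimal sketch of /etc/tuned/cpu-partitioning-variables.conf that matches the figure; the CPU numbers are examples only:
isolated_cores=2-23
no_balance_cores=2,3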
Additional resources
-
tuned-profiles-cpu-partitioning(7)
man page
1.8. Using the TuneD cpu-partitioning profile for low-latency tuning
This procedure describes how to tune a system for low latency by using the cpu-partitioning TuneD profile. It uses the example of a low-latency application that can use cpu-partitioning and the CPU layout shown in the cpu-partitioning figure.
The application in this case uses:
- One dedicated reader thread, which reads data from the network, pinned to CPU 2.
- A large number of threads, which process this network data, pinned to CPUs 4-23.
- A dedicated writer thread, which writes the processed data to the network, pinned to CPU 3.
Prerequisites
-
You have installed the cpu-partitioning TuneD profile by using the dnf install tuned-profiles-cpu-partitioning command as root.
Procedure
Edit the /etc/tuned/cpu-partitioning-variables.conf file and add the following information:
# All isolated CPUs:
isolated_cores=2-23
# Isolated CPUs without the kernel’s scheduler load balancing:
no_balance_cores=2,3
Set the cpu-partitioning TuneD profile:
# tuned-adm profile cpu-partitioning
Reboot the system.
After rebooting, the system is tuned for low latency, according to the isolation in the cpu-partitioning figure. The application can use taskset to pin the reader and writer threads to CPUs 2 and 3, and the remaining application threads to CPUs 4-23.
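For example, a sketch of such pinning with the taskset utility, assuming the application is started as separate reader, writer, and worker programs; the program names are hypothetical:
# taskset -c 2 ./reader &
# taskset -c 3 ./writer &
# taskset -c 4-23 ./workers &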
Additional resources
-
tuned-profiles-cpu-partitioning(7)
man page
1.9. Customizing the cpu-partitioning TuneD profile
You can extend the TuneD profile to make additional tuning changes.
For example, the cpu-partitioning profile sets the CPUs to use cstate=1. In order to use the cpu-partitioning profile but additionally change the CPU C state from cstate1 to cstate0, the following procedure describes a new TuneD profile named my_profile, which inherits the cpu-partitioning profile and then sets C state 0.
Procedure
Create the /etc/tuned/my_profile directory:
# mkdir /etc/tuned/my_profile
Create a tuned.conf file in this directory, and add the following content:
# vi /etc/tuned/my_profile/tuned.conf
[main]
summary=Customized tuning on top of cpu-partitioning
include=cpu-partitioning
[cpu]
force_latency=cstate.id:0|1
Use the new profile:
# tuned-adm profile my_profile
In this example, a reboot is not required. However, if the changes in the my_profile profile require a reboot to take effect, reboot your machine.
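To confirm that the new profile is in use, you can check the active profile; the output shown is illustrative:
# tuned-adm active
Current active profile: my_profile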
Additional resources
-
tuned-profiles-cpu-partitioning(7)
man page
1.10. Real-time TuneD profiles distributed with RHEL
Real-time profiles are intended for systems running the real-time kernel. Without a special kernel build, they do not configure the system to be real-time. On RHEL, the profiles are available from additional repositories.
The following real-time profiles are available:
realtime
Use on bare-metal real-time systems.
Provided by the
tuned-profiles-realtime
package, which is available from the RT or NFV repositories.realtime-virtual-host
Use in a virtualization host configured for real-time.
Provided by the
tuned-profiles-nfv-host
package, which is available from the NFV repository.realtime-virtual-guest
Use in a virtualization guest configured for real-time.
Provided by the
tuned-profiles-nfv-guest
package, which is available from the NFV repository.
1.11. Static and dynamic tuning in TuneD
Understanding the difference between the two categories of system tuning that TuneD applies, static and dynamic, is important when determining which one to use for a given situation or purpose.
- Static tuning
-
Mainly consists of the application of predefined sysctl and sysfs settings and one-shot activation of several configuration tools such as ethtool.
- Dynamic tuning
Watches how various system components are used throughout the uptime of your system. TuneD adjusts system settings dynamically based on that monitoring information.
For example, the hard drive is used heavily during startup and login, but is barely used later when the user might mainly work with applications such as web browsers or email clients. Similarly, the CPU and network devices are used differently at different times. TuneD monitors the activity of these components and reacts to the changes in their use.
By default, dynamic tuning is disabled. To enable it, edit the /etc/tuned/tuned-main.conf file and change the dynamic_tuning option to 1. TuneD then periodically analyzes system statistics and uses them to update your system tuning settings. To configure the time interval in seconds between these updates, use the update_interval option.
Currently implemented dynamic tuning algorithms try to balance performance and power saving, and are therefore disabled in the performance profiles. Dynamic tuning for individual plug-ins can be enabled or disabled in the TuneD profiles.
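A minimal sketch of the relevant lines in /etc/tuned/tuned-main.conf with dynamic tuning enabled; the 10-second interval is an arbitrary example:
dynamic_tuning = 1
# Time in seconds between system statistics updates
update_interval = 10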
Example 1.2. Static and dynamic tuning on a workstation
On a typical office workstation, the Ethernet network interface is inactive most of the time. Only a few emails go in and out or some web pages might be loaded.
For those kinds of loads, the network interface does not have to run at full speed all the time, as it does by default. TuneD has a monitoring and tuning plug-in for network devices that can detect this low activity and then automatically lower the speed of that interface, typically resulting in a lower power usage.
If the activity on the interface increases for a longer period of time, for example because a DVD image is being downloaded or an email with a large attachment is opened, TuneD detects this and sets the interface speed to maximum to offer the best performance while the activity level is high.
This principle is used for other plug-ins for CPU and disks as well.
1.12. TuneD no-daemon mode
You can run TuneD in no-daemon
mode, which does not require any resident memory. In this mode, TuneD applies the settings and exits.
By default, no-daemon
mode is disabled because a lot of TuneD functionality is missing in this mode, including:
- D-Bus support
- Hot-plug support
- Rollback support for settings
To enable no-daemon
mode, include the following line in the /etc/tuned/tuned-main.conf
file:
daemon = 0
1.13. Installing and enabling TuneD
This procedure installs and enables the TuneD application, installs TuneD profiles, and presets a default TuneD profile for your system.
Procedure
Install the TuneD package:
# dnf install tuned
Enable and start the TuneD service:
# systemctl enable --now tuned
Optionally, install TuneD profiles for real-time systems:
For the TuneD profiles for real-time systems, enable the rhel-9 NFV repository:
# subscription-manager repos --enable=rhel-9-for-x86_64-nfv-beta-rpms
Install the profiles:
# dnf install tuned-profiles-realtime tuned-profiles-nfv
Verify that a TuneD profile is active and applied:
$ tuned-adm active
Current active profile: throughput-performance
Note: The profile that TuneD automatically presets differs based on your machine type and system settings.
$ tuned-adm verify
Verification succeeded, current system settings match the preset profile.
See tuned log file ('/var/log/tuned/tuned.log') for details.
1.14. Listing available TuneD profiles
This procedure lists all TuneD profiles that are currently available on your system.
Procedure
To list all available TuneD profiles on your system, use:
$ tuned-adm list
Available profiles:
- accelerator-performance - Throughput performance based tuning with disabled higher latency STOP states
- balanced                - General non-specialized TuneD profile
- desktop                 - Optimize for the desktop use-case
- latency-performance     - Optimize for deterministic performance at the cost of increased power consumption
- network-latency         - Optimize for deterministic performance at the cost of increased power consumption, focused on low latency network performance
- network-throughput      - Optimize for streaming network throughput, generally only necessary on older CPUs or 40G+ networks
- powersave               - Optimize for low power consumption
- throughput-performance  - Broadly applicable tuning that provides excellent performance across a variety of common server workloads
- virtual-guest           - Optimize for running inside a virtual guest
- virtual-host            - Optimize for running KVM guests
Current active profile: balanced
To display only the currently active profile, use:
$ tuned-adm active Current active profile: throughput-performance
Additional resources
-
tuned-adm(8)
man page.
1.15. Setting a TuneD profile
This procedure activates a selected TuneD profile on your system.
Prerequisites
-
The
TuneD
service is running. See Installing and Enabling TuneD for details.
Procedure
Optionally, you can let TuneD recommend the most suitable profile for your system:
# tuned-adm recommend throughput-performance
Activate a profile:
# tuned-adm profile selected-profile
Alternatively, you can activate a combination of multiple profiles:
# tuned-adm profile selected-profile1 selected-profile2
Example 1.3. A virtual machine optimized for low power consumption
The following example optimizes the system to run in a virtual machine with the best performance and concurrently tunes it for low power consumption, while the low power consumption is the priority:
# tuned-adm profile virtual-guest powersave
View the current active TuneD profile on your system:
# tuned-adm active Current active profile: selected-profile
Reboot the system:
# reboot
Verification
Verify that the TuneD profile is active and applied:
$ tuned-adm verify Verification succeeded, current system settings match the preset profile. See tuned log file ('/var/log/tuned/tuned.log') for details.
Additional resources
-
tuned-adm(8)
man page
1.16. Using the TuneD D-Bus interface
You can directly communicate with TuneD at runtime through the TuneD D-Bus interface to control a variety of TuneD services.
You can use the busctl
or dbus-send
commands to access the D-Bus API.
Although you can use either the busctl
or dbus-send
command, the busctl
command is a part of systemd
and, therefore, present on most hosts already.
1.16.1. Using the TuneD D-Bus interface to show available TuneD D-Bus API methods
You can see the D-Bus API methods available to use with TuneD by using the TuneD D-Bus interface.
Prerequisites
- The TuneD service is running. See Installing and Enabling TuneD for details.
Procedure
To see the available TuneD API methods, run:
$ busctl introspect com.redhat.tuned /Tuned com.redhat.tuned.control
The output should look similar to the following:
NAME                            TYPE    SIGNATURE  RESULT/VALUE  FLAGS
.active_profile                 method  -          s             -
.auto_profile                   method  -          (bs)          -
.disable                        method  -          b             -
.get_all_plugins                method  -          a{sa{ss}}     -
.get_plugin_documentation       method  s          s             -
.get_plugin_hints               method  s          a{ss}         -
.instance_acquire_devices       method  ss         (bs)          -
.is_running                     method  -          b             -
.log_capture_finish             method  s          s             -
.log_capture_start              method  ii         s             -
.post_loaded_profile            method  -          s             -
.profile_info                   method  s          (bsss)        -
.profile_mode                   method  -          (ss)          -
.profiles                       method  -          as            -
.profiles2                      method  -          a(ss)         -
.recommend_profile              method  -          s             -
.register_socket_signal_path    method  s          b             -
.reload                         method  -          b             -
.start                          method  -          b             -
.stop                           method  -          b             -
.switch_profile                 method  s          (bs)          -
.verify_profile                 method  -          b             -
.verify_profile_ignore_missing  method  -          b             -
.profile_changed                signal  sbs        -             -
You can find descriptions of the different available methods in the TuneD upstream repository.
1.16.2. Using the TuneD D-Bus interface to change the active TuneD profile
You can replace the active TuneD profile with your desired TuneD profile by using the TuneD D-Bus interface.
Prerequisites
- The TuneD service is running. See Installing and Enabling TuneD for details.
Procedure
To change the active TuneD profile, run:
$ busctl call com.redhat.tuned /Tuned com.redhat.tuned.control switch_profile s profile
(bs) true "OK"
Replace profile with the name of your desired profile.
Verification
To view the current active TuneD profile, run:
$ busctl call com.redhat.tuned /Tuned com.redhat.tuned.control active_profile
s "profile"
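Other methods from the introspection output can be called the same way. For example, to ask TuneD which profile it recommends; the output shown is illustrative:
$ busctl call com.redhat.tuned /Tuned com.redhat.tuned.control recommend_profile
s "throughput-performance"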
1.17. Disabling TuneD
This procedure disables TuneD and resets all affected system settings to their original state before TuneD modified them.
Procedure
To disable all tunings temporarily:
# tuned-adm off
The tunings are applied again after the TuneD service restarts.
Alternatively, to stop and disable the TuneD service permanently:
# systemctl disable --now tuned
Additional resources
-
tuned-adm(8)
man page
Chapter 2. Customizing TuneD profiles
You can create or modify TuneD profiles to optimize system performance for your intended use case.
Prerequisites
- Install and enable TuneD as described in Installing and Enabling TuneD.
2.1. TuneD profiles
A detailed analysis of a system can be very time-consuming. TuneD provides a number of predefined profiles for typical use cases. You can also create, modify, and delete profiles.
The profiles provided with TuneD are divided into the following categories:
- Power-saving profiles
- Performance-boosting profiles
The performance-boosting profiles include profiles that focus on the following aspects:
- Low latency for storage and network
- High throughput for storage and network
- Virtual machine performance
- Virtualization host performance
Syntax of profile configuration
The tuned.conf file can contain one [main] section and other sections for configuring plug-in instances. However, all sections are optional.
Lines starting with the hash sign (#) are comments.
Additional resources
-
tuned.conf(5)
man page.
2.2. The default TuneD profile
During the installation, the best profile for your system is selected automatically. Currently, the default profile is selected according to the following customizable rules:
Environment | Default profile | Goal
---|---|---
Compute nodes | throughput-performance | The best throughput performance
Virtual machines | virtual-guest | The best performance. If you are not interested in the best performance, you can change it to the balanced or powersave profile.
Other cases | balanced | Balanced performance and power consumption
Additional resources
-
tuned.conf(5)
man page.
2.3. Merged TuneD profiles
As an experimental feature, it is possible to select multiple profiles at once. TuneD tries to merge them during loading.
If there are conflicts, the settings from the last specified profile take precedence.
Example 2.1. Low power consumption in a virtual guest
The following example optimizes the system to run in a virtual machine for the best performance and concurrently tunes it for low power consumption, while the low power consumption is the priority:
# tuned-adm profile virtual-guest powersave
Merging is done automatically without checking whether the resulting combination of parameters makes sense. Consequently, the feature might tune some parameters the opposite way, which might be counterproductive: for example, setting the disk for high throughput by using the throughput-performance profile and concurrently setting the disk spindown to a low value by using the spindown-disk profile.
Additional resources
- tuned-adm(8) man page
- tuned.conf(5) man page
2.4. The location of TuneD profiles
TuneD stores profiles in the following directories:
/usr/lib/tuned/
- Distribution-specific profiles are stored in this directory. Each profile has its own directory. The profile consists of the main configuration file called tuned.conf, and optionally other files, for example helper scripts.
/etc/tuned/
- If you need to customize a profile, copy the profile directory into this directory, which is used for custom profiles. If there are two profiles of the same name, the custom profile located in /etc/tuned/ is used.
Additional resources
-
tuned.conf(5)
man page.
2.5. Inheritance between TuneD profiles
TuneD profiles can be based on other profiles and modify only certain aspects of their parent profile.
The [main]
section of TuneD profiles recognizes the include
option:
[main]
include=parent
All settings from the parent profile are loaded in this child profile. In the following sections, the child profile can override certain settings inherited from the parent profile or add new settings not present in the parent profile.
You can create your own child profile in the /etc/tuned/
directory based on a pre-installed profile in /usr/lib/tuned/
with only some parameters adjusted.
If the parent profile is updated, such as after a TuneD upgrade, the changes are reflected in the child profile.
Example 2.2. A power-saving profile based on balanced
The following is an example of a custom profile that extends the balanced profile and sets Aggressive Link Power Management (ALPM) for all devices to the maximum power saving.
[main]
include=balanced
[scsi_host]
alpm=min_power
Additional resources
-
tuned.conf(5)
man page
2.6. Static and dynamic tuning in TuneD
Understanding the difference between the two categories of system tuning that TuneD applies, static and dynamic, is important when determining which one to use for a given situation or purpose.
- Static tuning
-
Mainly consists of the application of predefined sysctl and sysfs settings and one-shot activation of several configuration tools such as ethtool.
- Dynamic tuning
Watches how various system components are used throughout the uptime of your system. TuneD adjusts system settings dynamically based on that monitoring information.
For example, the hard drive is used heavily during startup and login, but is barely used later when the user might mainly work with applications such as web browsers or email clients. Similarly, the CPU and network devices are used differently at different times. TuneD monitors the activity of these components and reacts to the changes in their use.
By default, dynamic tuning is disabled. To enable it, edit the /etc/tuned/tuned-main.conf file and change the dynamic_tuning option to 1. TuneD then periodically analyzes system statistics and uses them to update your system tuning settings. To configure the time interval in seconds between these updates, use the update_interval option.
Currently implemented dynamic tuning algorithms try to balance performance and power saving, and are therefore disabled in the performance profiles. Dynamic tuning for individual plug-ins can be enabled or disabled in the TuneD profiles.
Example 2.3. Static and dynamic tuning on a workstation
On a typical office workstation, the Ethernet network interface is inactive most of the time. Only a few emails go in and out or some web pages might be loaded.
For those kinds of loads, the network interface does not have to run at full speed all the time, as it does by default. TuneD has a monitoring and tuning plug-in for network devices that can detect this low activity and then automatically lower the speed of that interface, typically resulting in a lower power usage.
If the activity on the interface increases for a longer period of time, for example because a DVD image is being downloaded or an email with a large attachment is opened, TuneD detects this and sets the interface speed to maximum to offer the best performance while the activity level is high.
This principle is used for other plug-ins for CPU and disks as well.
2.7. TuneD plug-ins
Plug-ins are modules in TuneD profiles that TuneD uses to monitor or optimize different devices on the system.
TuneD uses two types of plug-ins:
- Monitoring plug-ins
Monitoring plug-ins are used to get information from a running system. The output of the monitoring plug-ins can be used by tuning plug-ins for dynamic tuning.
Monitoring plug-ins are automatically instantiated whenever their metrics are needed by any of the enabled tuning plug-ins. If two tuning plug-ins require the same data, only one instance of the monitoring plug-in is created and the data is shared.
- Tuning plug-ins
- Each tuning plug-in tunes an individual subsystem and takes several parameters that are populated from the TuneD profiles. Each subsystem can have multiple devices, such as multiple CPUs or network cards, that are handled by individual instances of the tuning plug-ins. Specific settings for individual devices are also supported.
Syntax for plug-ins in TuneD profiles
Sections describing plug-in instances are formatted in the following way:
[NAME]
type=TYPE
devices=DEVICES
- NAME
- is the name of the plug-in instance as it is used in the logs. It can be an arbitrary string.
- TYPE
- is the type of the tuning plug-in.
- DEVICES
is the list of devices that this plug-in instance handles.
The
devices
line can contain a list, a wildcard (*
), and negation (!
). If there is nodevices
line, all devices present or later attached on the system of the TYPE are handled by the plug-in instance. This is same as using thedevices=*
option.Example 2.4. Matching block devices with a plug-in
The following example matches all block devices starting with
sd
, such assda
orsdb
, and does not disable barriers on them:[data_disk] type=disk devices=sd* disable_barriers=false
The following example matches all block devices except
sda1
andsda2
:[data_disk] type=disk devices=!sda1, !sda2 disable_barriers=false
If no instance of a plug-in is specified, the plug-in is not enabled.
If the plug-in supports more options, they can be also specified in the plug-in section. If the option is not specified and it was not previously specified in the included plug-in, the default value is used.
Short plug-in syntax
If you do not need custom names for the plug-in instance and there is only one definition of the instance in your configuration file, TuneD supports the following short syntax:
[TYPE] devices=DEVICES
In this case, it is possible to omit the type
line. The instance is then referred to with a name, same as the type. The previous example could be then rewritten into:
Example 2.5. Matching block devices using the short syntax
[disk] devices=sdb* disable_barriers=false
Conflicting plug-in definitions in a profile
If the same section is specified more than once using the include
option, the settings are merged. If they cannot be merged due to a conflict, the last conflicting definition overrides the previous settings. If you do not know what was previously defined, you can use the replace
Boolean option and set it to true
. This causes all the previous definitions with the same name to be overwritten and the merge does not happen.
You can also disable the plug-in by specifying the enabled=false
option. This has the same effect as if the instance was never defined. Disabling the plug-in is useful if you are redefining the previous definition from the include
option and do not want the plug-in to be active in your custom profile.
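For example, a sketch of a custom profile that inherits the balanced profile but keeps its audio plug-in instance inactive, assuming you do not want the audio tuning from the parent profile:
[main]
include=balanced
[audio]
enabled=false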
- NOTE
TuneD includes the ability to run any shell command as part of enabling or disabling a tuning profile. This enables you to extend TuneD profiles with functionality that has not been integrated into TuneD yet.
You can specify arbitrary shell commands using the
script
plug-in.
Additional resources
-
tuned.conf(5)
man page
2.8. Available TuneD plug-ins
Monitoring plug-ins
Currently, the following monitoring plug-ins are implemented:
disk
- Gets disk load (number of IO operations) per device and measurement interval.
net
- Gets network load (number of transferred packets) per network card and measurement interval.
load
- Gets CPU load per CPU and measurement interval.
Tuning plug-ins
Currently, the following tuning plug-ins are implemented. Only some of these plug-ins implement dynamic tuning. Options supported by plug-ins are also listed:
cpu
Sets the CPU governor to the value specified by the
governor
option and dynamically changes the Power Management Quality of Service (PM QoS) CPU Direct Memory Access (DMA) latency according to the CPU load.If the CPU load is lower than the value specified by the
load_threshold
option, the latency is set to the value specified by thelatency_high
option, otherwise it is set to the value specified bylatency_low
.You can also force the latency to a specific value and prevent it from dynamically changing further. To do so, set the
force_latency
option to the required latency value.eeepc_she
Dynamically sets the front-side bus (FSB) speed according to the CPU load.
This feature can be found on some netbooks and is also known as the ASUS Super Hybrid Engine (SHE).
If the CPU load is lower or equal to the value specified by the
load_threshold_powersave
option, the plug-in sets the FSB speed to the value specified by theshe_powersave
option. If the CPU load is higher or equal to the value specified by theload_threshold_normal
option, it sets the FSB speed to the value specified by theshe_normal
option.Static tuning is not supported and the plug-in is transparently disabled if TuneD does not detect the hardware support for this feature.
net
-
Configures the Wake-on-LAN functionality to the values specified by the
wake_on_lan
option. It uses the same syntax as theethtool
utility. It also dynamically changes the interface speed according to the interface utilization. sysctl
Sets various
sysctl
settings specified by the plug-in options.The syntax is
name=value
, where name is the same as the name provided by thesysctl
utility.Use the
sysctl
plug-in if you need to change system settings that are not covered by other plug-ins available in TuneD. If the settings are covered by some specific plug-ins, prefer these plug-ins.usb
Sets autosuspend timeout of USB devices to the value specified by the
autosuspend
parameter.The value
0
means that autosuspend is disabled.vm
Enables or disables transparent huge pages depending on the value of the
transparent_hugepages
option.Valid values of the
transparent_hugepages
option are:- "always"
- "never"
- "madvise"
audio
Sets the autosuspend timeout for audio codecs to the value specified by the
timeout
option.Currently, the
snd_hda_intel
andsnd_ac97_codec
codecs are supported. The value0
means that the autosuspend is disabled. You can also enforce the controller reset by setting the Boolean optionreset_controller
totrue
.disk
Sets the disk elevator to the value specified by the
elevator
option.It also sets:
-
APM to the value specified by the
apm
option -
Scheduler quantum to the value specified by the
scheduler_quantum
option -
Disk spindown timeout to the value specified by the
spindown
option -
Disk readahead to the value specified by the
readahead
parameter -
The current disk readahead to a value multiplied by the constant specified by the
readahead_multiply
option
In addition, this plug-in dynamically changes the advanced power management and spindown timeout setting for the drive according to the current drive utilization. The dynamic tuning can be controlled by the Boolean option
dynamic
and is enabled by default.-
APM to the value specified by the
scsi_host
Tunes options for SCSI hosts.
It sets Aggressive Link Power Management (ALPM) to the value specified by the
alpm
option.mounts
-
Enables or disables barriers for mounts according to the Boolean value of the
disable_barriers
option. script
Executes an external script or binary when the profile is loaded or unloaded. You can choose an arbitrary executable.
ImportantThe
script
plug-in is provided mainly for compatibility with earlier releases. Prefer other TuneD plug-ins if they cover the required functionality.TuneD calls the executable with one of the following arguments:
-
start
when loading the profile -
stop
when unloading the profile
You need to correctly implement the
stop
action in your executable and revert all settings that you changed during thestart
action. Otherwise, the roll-back step after changing your TuneD profile will not work.Bash scripts can import the
/usr/lib/tuned/functions
Bash library and use the functions defined there. Use these functions only for functionality that is not natively provided by TuneD. If a function name starts with an underscore, such as_wifi_set_power_level
, consider the function private and do not use it in your scripts, because it might change in the future.Specify the path to the executable using the
script
parameter in the plug-in configuration.Example 2.6. Running a Bash script from a profile
To run a Bash script named
script.sh
that is located in the profile directory, use:[script] script=${i:PROFILE_DIR}/script.sh
-
sysfs
Sets various
sysfs
settings specified by the plug-in options.The syntax is
name=value
, where name is thesysfs
path to use.Use this plugin in case you need to change some settings that are not covered by other plug-ins. Prefer specific plug-ins if they cover the required settings.
video
Sets various powersave levels on video cards. Currently, only the Radeon cards are supported.
The powersave level can be specified by using the
radeon_powersave
option. Supported values are:-
default
-
auto
-
low
-
mid
-
high
-
dynpm
-
dpm-battery
-
dpm-balanced
-
dpm-performance
For details, see www.x.org. Note that this plug-in is experimental and the option might change in future releases.
-
bootloader
Adds options to the kernel command line. This plug-in supports only the GRUB 2 boot loader.
Customized non-standard location of the GRUB 2 configuration file can be specified by the
grub2_cfg_file
option.The kernel options are added to the current GRUB configuration and its templates. The system needs to be rebooted for the kernel options to take effect.
Switching to another profile or manually stopping the
TuneD
service removes the additional options. If you shut down or reboot the system, the kernel options persist in thegrub.cfg
file.The kernel options can be specified by the following syntax:
cmdline=arg1 arg2 ... argN
Example 2.7. Modifying the kernel command line
For example, to add the
quiet
kernel option to a TuneD profile, include the following lines in thetuned.conf
file:[bootloader] cmdline=quiet
The following is an example of a custom profile that adds the
isolcpus=2
option to the kernel command line:[bootloader] cmdline=isolcpus=2
service
Handles various
sysvinit
,sysv-rc
,openrc
, andsystemd
services specified by the plug-in options.The syntax is
service.service_name=command[,file:file]
.Supported service-handling commands are:
-
start
-
stop
-
enable
-
disable
Separate multiple commands using either a comma (
,
) or a semicolon (;
). If the directives conflict, theservice
plugin uses the last listed one.Use the optional
file:file
directive to install an overlay configuration file,file
, forsystemd
only. Other init systems ignore this directive. Theservice
plugin copies overlay configuration files to/etc/systemd/system/service_name.service.d/
directories. Once profiles are unloaded, theservice
plugin removes these directories if they are empty.NoteThe
service
plugin only operates on the current runlevel with non-systemd
init systems.Example 2.8. Starting and enabling the sendmail
sendmail
service with an overlay file[service] service.sendmail=start,enable,file:${i:PROFILE_DIR}/tuned-sendmail.conf
The internal variable
${i:PROFILE_DIR}
points to the directory the plugin loads the profile from.-
scheduler
- Offers a variety of options for the tuning of scheduling priorities, CPU core isolation, and process, thread, and IRQ affinities.
For specifics of the different options available, see Functionalities of the scheduler
TuneD plug-in.
2.9. Functionalities of the scheduler
TuneD plugin
Use the scheduler
TuneD plugin to control and tune scheduling priorities, CPU core isolation, and process, thread, and IRQ affinities.
CPU isolation
To prevent processes, threads, and IRQs from using certain CPUs, use the isolated_cores
option. It changes process and thread affinities, IRQ affinities, and sets the default_smp_affinity
parameter for IRQs.
The CPU affinity mask is adjusted for all processes and threads matching the ps_whitelist
option, subject to success of the sched_setaffinity()
system call. The default setting of the ps_whitelist
regular expression is .*
to match all processes and thread names. To exclude certain processes and threads, use the ps_blacklist
option. The value of this option is also interpreted as a regular expression. Process and thread names are matched against that expression. Profile rollback enables all matching processes and threads to run on all CPUs, and restores the IRQ settings prior to the profile application.
Multiple regular expressions separated by ;
for the ps_whitelist
and ps_blacklist
options are supported. Escaped semicolon \;
is taken literally.
Example 2.9. Isolate CPUs 2-4
The following configuration isolates CPUs 2-4. Processes and threads that match the ps_blacklist
regular expression can use any CPUs regardless of the isolation:
[scheduler] isolated_cores=2-4 ps_blacklist=.*pmd.*;.*PMD.*;^DPDK;.*qemu-kvm.*
IRQ SMP affinity
The /proc/irq/default_smp_affinity
file contains a bitmask representing the default target CPU cores on a system for all inactive interrupt request (IRQ) sources. Once an IRQ is activated or allocated, the value in the /proc/irq/default_smp_affinity
file determines the IRQ’s affinity bitmask.
The default_irq_smp_affinity
parameter controls what TuneD writes to the /proc/irq/default_smp_affinity
file. The default_irq_smp_affinity
parameter supports the following values and behaviors:
calc
Calculates the content of the
/proc/irq/default_smp_affinity
file from theisolated_cores
parameter. An inversion of theisolated_cores
parameter calculates the non-isolated cores.The intersection of the non-isolated cores and the previous content of the
/proc/irq/default_smp_affinity
file is then written to the/proc/irq/default_smp_affinity
file.This is the default behavior if the
default_irq_smp_affinity
parameter is omitted.ignore
-
TuneD does not modify the
/proc/irq/default_smp_affinity
file. - A CPU list
Takes the form of a single number such as
1
, a comma separated list such as1,3
, or a range such as3-5
.Unpacks the CPU list and writes it directly to the
/proc/irq/default_smp_affinity
file.
Example 2.10. Setting the default IRQ smp affinity using an explicit CPU list
The following example uses an explicit CPU list to set the default IRQ SMP affinity to CPUs 0 and 2:
[scheduler] isolated_cores=1,3 default_irq_smp_affinity=0,2
Scheduling policy
To adjust scheduling policy, priority and affinity for a group of processes or threads, use the following syntax:
group.groupname=rule_prio:sched:prio:affinity:regex
where rule_prio
defines internal TuneD priority of the rule. Rules are sorted based on priority. This is needed for inheritance to be able to reorder previously defined rules. Equal rule_prio
rules should be processed in the order they were defined. However, this is Python interpreter dependent. To disable an inherited rule for groupname
, use:
group.groupname=
sched
must be one of the following:
f
- for first in, first out (FIFO)
b
- for batch
r
- for round robin
o
- for other
*
- for do not change
affinity
is CPU affinity in hexadecimal. Use *
for no change.
prio
is scheduling priority (see chrt -m
).
regex
is Python regular expression. It is matched against the output of the ps -eo cmd
command.
Any given process name can match more than one group. In such cases, the last matching regex
determines the priority and scheduling policy.
Example 2.11. Setting scheduling policies and priorities
The following example sets the scheduling policy and priorities to kernel threads and watchdog:
[scheduler] group.kthreads=0:*:1:*:\[.*\]$ group.watchdog=0:f:99:*:\[watchdog.*\]
The scheduler
plugin uses a perf
event loop to identify newly created processes. By default, it listens to perf.RECORD_COMM
and perf.RECORD_EXIT
events.
Setting the perf_process_fork
parameter to true
tells the plug-in to also listen to perf.RECORD_FORK
events, meaning that child processes created by the fork()
system call are processed.
Processing perf
events can pose a significant CPU overhead.
The CPU overhead of the scheduler plugin can be mitigated by using the scheduler runtime
option and setting it to 0
. This completely disables the dynamic scheduler functionality and the perf events are not monitored and acted upon. The disadvantage of this is that the process and thread tuning will be done only at profile application.
Example 2.12. Disabling the dynamic scheduler functionality
The following example disables the dynamic scheduler functionality while also isolating CPUs 1 and 3:
[scheduler] runtime=0 isolated_cores=1,3
The mmapped
buffer is used for perf
events. Under heavy loads, this buffer might overflow and as a result the plugin might start missing events and not processing some newly created processes. In such cases, use the perf_mmap_pages
parameter to increase the buffer size. The value of the perf_mmap_pages
parameter must be a power of 2. If the perf_mmap_pages
parameter is not manually set, a default value of 128 is used.
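For illustration, a sketch of a profile that raises the buffer size while isolating CPUs; the values are examples only, and the buffer size must be a power of 2:
[scheduler]
isolated_cores=1,3
perf_mmap_pages=512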
Confinement using cgroups
The scheduler
plugin supports process and thread confinement using cgroups
v1.
The cgroup_mount_point
option specifies the path to mount the cgroup file system, or, where TuneD expects it to be mounted. If it is unset, /sys/fs/cgroup/cpuset
is expected.
If the cgroup_groups_init
option is set to 1
, TuneD creates and removes all cgroups
defined with the cgroup*
options. This is the default behavior. If the cgroup_mount_point
option is set to 0
, the cgroups
must be preset by other means.
If the cgroup_mount_point_init
option is set to 1
, TuneD creates and removes the cgroup mount point. It implies cgroup_groups_init = 1
. If the cgroup_mount_point_init
option is set to 0
, you must preset the cgroups
mount point by other means. This is the default behavior.
The cgroup_for_isolated_cores
option is the cgroup
name for the isolated_cores
option functionality. For example, if a system has 4 CPUs, isolated_cores=1
means that TuneD moves all processes and threads to CPUs 0, 2, and 3. The scheduler
plug-in isolates the specified core by writing the calculated CPU affinity to the cpuset.cpus
control file of the specified cgroup and moves all the matching processes and threads to this group. If this option is unset, classic cpuset affinity using sched_setaffinity()
sets the CPU affinity.
The cgroup.cgroup_name
option defines affinities for arbitrary cgroups
. You can even use hierarchic cgroups, but you must specify the hierarchy in the correct order. TuneD does not do any sanity checks here, with the exception that it forces the cgroup
to be in the location specified by the cgroup_mount_point
option.
The syntax of the scheduler option starting with group.
has been augmented to use cgroup.cgroup_name
instead of the hexadecimal affinity
. The matching processes are moved to the cgroup
cgroup_name
. You can also use cgroups not defined by the cgroup.
option as described above. For example, cgroups
not managed by TuneD.
All cgroup
names are sanitized by replacing all periods (.
) with slashes (/
). This prevents the plugin from writing outside the location specified by the cgroup_mount_point
option.
Example 2.13. Using cgroups
v1 with the scheduler
plug-in
The following example creates 2 cgroups
, group1
and group2
. It sets the cgroup group1
affinity to CPU 2 and the cgroup
group2
to CPUs 0 and 2. Given a 4 CPU setup, the isolated_cores=1
option moves all processes and threads to CPU cores 0, 2, and 3. Processes and threads specified by the ps_blacklist
regular expression are not moved.
[scheduler] cgroup_mount_point=/sys/fs/cgroup/cpuset cgroup_mount_point_init=1 cgroup_groups_init=1 cgroup_for_isolated_cores=group cgroup.group1=2 cgroup.group2=0,2 group.ksoftirqd=0:f:2:cgroup.group1:ksoftirqd.* ps_blacklist=ksoftirqd.*;rcuc.*;rcub.*;ktimersoftd.* isolated_cores=1
The cgroup_ps_blacklist
option excludes processes belonging to the specified cgroups
. The regular expression specified by this option is matched against cgroup
hierarchies from /proc/PID/cgroups
. Commas (,
) separate cgroups
v1 hierarchies from /proc/PID/cgroups
before regular expression matching. The following is an example of content the regular expression is matched against:
10:hugetlb:/,9:perf_event:/,8:blkio:/
Multiple regular expressions can be separated by semicolons (;
). The semicolon represents a logical 'or' operator.
Example 2.14. Excluding processes from the scheduler using cgroups
In the following example, the scheduler
plug-in moves all processes away from core 1, except for processes which belong to cgroup /daemons
. The \b
string is a regular expression metacharacter that matches a word boundary.
[scheduler] isolated_cores=1 cgroup_ps_blacklist=:/daemons\b
In the following example, the scheduler
plugin excludes all processes which belong to a cgroup with a hierarchy-ID of 8 and controller-list blkio
.
[scheduler] isolated_cores=1 cgroup_ps_blacklist=\b8:blkio:
Recent kernels moved some sched_
and numa_balancing_
kernel run-time parameters from the /proc/sys/kernel
directory managed by the sysctl
utility, to debugfs
, typically mounted under the /sys/kernel/debug
directory. TuneD provides an abstraction mechanism for the following parameters via the scheduler
plugin where, based on the kernel used, TuneD writes the specified value to the correct location:
-
sched_min_granularity_ns
-
sched_latency_ns
, -
sched_wakeup_granularity_ns
-
sched_tunable_scaling
, -
sched_migration_cost_ns
-
sched_nr_migrate
-
numa_balancing_scan_delay_ms
-
numa_balancing_scan_period_min_ms
-
numa_balancing_scan_period_max_ms
numa_balancing_scan_size_mb
Example 2.15. Set tasks' "cache hot" value for migration decisions.
On the old kernels, setting the following parameter meant that
sysctl
wrote a value of500000
to the/proc/sys/kernel/sched_migration_cost_ns
file:[sysctl] kernel.sched_migration_cost_ns=500000
This is, on more recent kernels, equivalent to setting the following parameter via the
scheduler
plugin:[scheduler] sched_migration_cost_ns=500000
Meaning TuneD writes a value of
500000
to the/sys/kernel/debug/sched/migration_cost_ns
file.
2.10. Variables in TuneD profiles
Variables expand at run time when a TuneD profile is activated.
Using TuneD variables reduces the amount of necessary typing in TuneD profiles.
There are no predefined variables in TuneD profiles. You can define your own variables by creating the [variables]
section in a profile and using the following syntax:
[variables] variable_name=value
To expand the value of a variable in a profile, use the following syntax:
${variable_name}
Example 2.16. Isolating CPU cores using variables
In the following example, the ${isolated_cores}
variable expands to 1,2
; hence the kernel boots with the isolcpus=1,2
option:
[variables] isolated_cores=1,2 [bootloader] cmdline=isolcpus=${isolated_cores}
The variables can be specified in a separate file. For example, you can add the following lines to tuned.conf
:
[variables]
include=/etc/tuned/my-variables.conf
[bootloader]
cmdline=isolcpus=${isolated_cores}
If you add the isolated_cores=1,2
option to the /etc/tuned/my-variables.conf
file, the kernel boots with the isolcpus=1,2
option.
Additional resources
-
tuned.conf(5)
man page
2.11. Built-in functions in TuneD profiles
Built-in functions expand at run time when a TuneD profile is activated.
You can:
- Use various built-in functions together with TuneD variables
- Create custom functions in Python and add them to TuneD in the form of plug-ins
To call a function, use the following syntax:
${f:function_name:argument_1:argument_2}
To expand the directory path where the profile and the tuned.conf
file are located, use the PROFILE_DIR
function, which requires special syntax:
${i:PROFILE_DIR}
Example 2.17. Isolating CPU cores using variables and built-in functions
In the following example, the ${non_isolated_cores}
variable expands to 0,3-5
, and the cpulist_invert
built-in function is called with the 0,3-5
argument:
[variables]
non_isolated_cores=0,3-5

[bootloader]
cmdline=isolcpus=${f:cpulist_invert:${non_isolated_cores}}
The cpulist_invert
function inverts the list of CPUs. For a 6-CPU machine, the inversion is 1,2
, and the kernel boots with the isolcpus=1,2
command-line option.
Additional resources
-
tuned.conf(5)
man page
2.12. Built-in functions available in TuneD profiles
The following built-in functions are available in all TuneD profiles:
PROFILE_DIR
- Returns the directory path where the profile and the tuned.conf file are located.
exec
- Executes a process and returns its output.
assertion
- Compares two arguments. If they do not match, the function logs text from the first argument and aborts profile loading.
assertion_non_equal
- Compares two arguments. If they match, the function logs text from the first argument and aborts profile loading.
kb2s
- Converts kilobytes to disk sectors.
s2kb
- Converts disk sectors to kilobytes.
strip
- Creates a string from all passed arguments and deletes both leading and trailing white space.
virt_check
- Checks whether TuneD is running inside a virtual machine (VM) or on bare metal:
  - Inside a VM, the function returns the first argument.
  - On bare metal, the function returns the second argument, even in case of an error.
cpulist_invert
- Inverts a list of CPUs to make its complement. For example, on a system with 4 CPUs, numbered from 0 to 3, the inversion of the list 0,2,3 is 1.
cpulist2hex
- Converts a CPU list to a hexadecimal CPU mask.
cpulist2hex_invert
- Converts a CPU list to a hexadecimal CPU mask and inverts it.
hex2cpulist
- Converts a hexadecimal CPU mask to a CPU list.
cpulist_online
- Checks whether the CPUs from the list are online. Returns the list containing only online CPUs.
cpulist_present
- Checks whether the CPUs from the list are present. Returns the list containing only present CPUs.
cpulist_unpack
- Unpacks a CPU list in the form of 1-3,4 to 1,2,3,4.
cpulist_pack
- Packs a CPU list in the form of 1,2,3,5 to 1-3,5.
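For example, the following sketch combines a variable with the cpulist_present function, following the same pattern as Example 2.17; the 1-3 CPU list and the use of the bootloader plug-in are illustrative assumptions:
[variables]
isolated_cores=1-3

[bootloader]
# cpulist_present drops any CPUs from the list that do not exist on this machine,
# so the same profile can be shared across differently sized hosts.
cmdline=isolcpus=${f:cpulist_present:${isolated_cores}}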
2.13. Creating new TuneD profiles
This procedure creates a new TuneD profile with custom performance rules.
Prerequisites
-
The
TuneD
service is running. See Installing and Enabling TuneD for details.
Procedure
In the
/etc/tuned/
directory, create a new directory named the same as the profile that you want to create:# mkdir /etc/tuned/my-profile
In the new directory, create a file named
tuned.conf
. Add a[main]
section and plug-in definitions in it, according to your requirements.For example, see the configuration of the
balanced
profile:
[main]
summary=General non-specialized TuneD profile

[cpu]
governor=conservative
energy_perf_bias=normal

[audio]
timeout=10

[video]
radeon_powersave=dpm-balanced, auto

[scsi_host]
alpm=medium_power
To activate the profile, use:
# tuned-adm profile my-profile
Verify that the TuneD profile is active and the system settings are applied:
$ tuned-adm active Current active profile: my-profile
$ tuned-adm verify Verification succeeded, current system settings match the preset profile. See tuned log file ('/var/log/tuned/tuned.log') for details.
Additional resources
-
tuned.conf(5)
man page
2.14. Modifying existing TuneD profiles
This procedure creates a modified child profile based on an existing TuneD profile.
Prerequisites
-
The
TuneD
service is running. See Installing and Enabling TuneD for details.
Procedure
In the
/etc/tuned/
directory, create a new directory named the same as the profile that you want to create:# mkdir /etc/tuned/modified-profile
In the new directory, create a file named
tuned.conf
, and set the[main]
section as follows:[main] include=parent-profile
Replace parent-profile with the name of the profile you are modifying.
Include your profile modifications.
Example 2.18. Lowering swappiness in the throughput-performance profile
To use the settings from the
throughput-performance
profile and change the value ofvm.swappiness
to 5, instead of the default 10, use:
[main]
include=throughput-performance

[sysctl]
vm.swappiness=5
To activate the profile, use:
# tuned-adm profile modified-profile
Verify that the TuneD profile is active and the system settings are applied:
$ tuned-adm active
Current active profile: modified-profile
$ tuned-adm verify Verification succeeded, current system settings match the preset profile. See tuned log file ('/var/log/tuned/tuned.log') for details.
Additional resources
-
tuned.conf(5)
man page
2.15. Setting the disk scheduler using TuneD
This procedure creates and enables a TuneD profile that sets a given disk scheduler for selected block devices. The setting persists across system reboots.
In the following commands and configuration, replace:
-
device with the name of the block device, for example
sdf
-
selected-scheduler with the disk scheduler that you want to set for the device, for example
bfq
Prerequisites
-
The
TuneD
service is installed and enabled. For details, see Installing and enabling TuneD.
Procedure
Optional: Select an existing TuneD profile on which your profile will be based. For a list of available profiles, see TuneD profiles distributed with RHEL.
To see which profile is currently active, use:
$ tuned-adm active
Create a new directory to hold your TuneD profile:
# mkdir /etc/tuned/my-profile
Find the system unique identifier of the selected block device:
$ udevadm info --query=property --name=/dev/device | grep -E '(WWN|SERIAL)' ID_WWN=0x5002538d00000000_ ID_SERIAL=Generic-_SD_MMC_20120501030900000-0:0 ID_SERIAL_SHORT=20120501030900000
NoteThe command in this example returns all values identified as a World Wide Name (WWN) or serial number associated with the specified block device. Although it is preferred to use a WWN, the WWN is not always available for a given device, and any values returned by the example command are acceptable to use as the device system unique ID.
Create the
/etc/tuned/my-profile/tuned.conf
configuration file. In the file, set the following options:Optional: Include an existing profile:
[main] include=existing-profile
Set the selected disk scheduler for the device that matches the WWN identifier:
[disk]
devices_udev_regex=IDNAME=device system unique id
elevator=selected-scheduler
Here:
-
Replace IDNAME with the name of the identifier being used (for example,
ID_WWN
). Replace device system unique id with the value of the chosen identifier (for example,
0x5002538d00000000
).To match multiple devices in the
devices_udev_regex
option, enclose the identifiers in parentheses and separate them with vertical bars:devices_udev_regex=(ID_WWN=0x5002538d00000000)|(ID_WWN=0x1234567800000000)
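Putting these pieces together, a complete /etc/tuned/my-profile/tuned.conf might look like the following sketch. The WWN value is the one from the udevadm output above, and throughput-performance and bfq stand in for whichever base profile and scheduler you chose:
[main]
include=throughput-performance

[disk]
# Hypothetical device identifier; use the value reported by udevadm for your device.
devices_udev_regex=ID_WWN=0x5002538d00000000
elevator=bfq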
Enable your profile:
# tuned-adm profile my-profile
Verification
Verify that the TuneD profile is active and applied:
$ tuned-adm active Current active profile: my-profile
$ tuned-adm verify Verification succeeded, current system settings match the preset profile. See TuneD log file ('/var/log/tuned/tuned.log') for details.
Read the contents of the
/sys/block/device/queue/scheduler
file:# cat /sys/block/device/queue/scheduler [mq-deadline] kyber bfq none
In the file name, replace device with the block device name, for example
sdc
.The active scheduler is listed in square brackets (
[]
).
Additional resources
Chapter 3. Reviewing a system by using the tuna interface
The tuna
tool reduces the complexity of performing tuning tasks. Use tuna
to adjust scheduler tunables, tune thread priority, IRQ handlers, and to isolate CPU cores and sockets. By using the tuna
tool, you can perform the following operations:
- List the CPUs on a system.
- List the interrupt requests (IRQs) currently running on a system.
- Change policy and priority information about threads.
- Display the current policies and priorities of a system.
3.1. Installing the tuna tool
The tuna
tool is designed to be used on a running system. This allows application-specific measurement tools to see and analyze system performance immediately after changes have been made.
Procedure
Install the
tuna
tool:# dnf install tuna
Verification
Display the available
tuna
CLI options:# tuna -h
Additional resources
-
tuna(8)
man page
3.2. Viewing the system status by using the tuna tool
You can use the tuna
command-line interface (CLI) tool to view the system status.
Prerequisites
-
The
tuna
tool is installed. For more information, see Installing the tuna tool.
Procedure
View the current policies and priorities:
# tuna show_threads pid SCHED_ rtpri affinity cmd 1 OTHER 0 0,1 init 2 FIFO 99 0 migration/0 3 OTHER 0 0 ksoftirqd/0 4 FIFO 99 0 watchdog/0
Alternatively, to view a specific thread corresponding to a PID or matching a command name, enter:
# tuna show_threads -t pid_or_cmd_list
The pid_or_cmd_list argument is a list of comma-separated PIDs or command-name patterns.
Depending on your scenario, perform one of the following actions:
-
To tune CPUs by using the
tuna
CLI, complete the steps in Tuning CPUs by using the tuna tool. -
To tune the IRQs by using the
tuna
tool, complete the steps in Tuning IRQs by using the tuna tool.
Save the changed configuration:
# tuna save filename
This command saves only currently running kernel threads. Processes that are not running are not saved.
Additional resources
-
tuna(8)
man page on your system
3.3. Tuning CPUs by using the tuna tool
The tuna
tool commands can target individual CPUs. By using the tuna
tool, you can perform the following actions:
Isolate CPUs
- All tasks running on the specified CPU move to the next available CPU. Isolating a CPU makes this CPU unavailable by removing it from the affinity mask of all threads.
Include CPUs
- Allows tasks to run on the specified CPU.
Restore CPUs
- Restores the specified CPU to its previous configuration.
Prerequisites
-
The
tuna
tool is installed. For more information, see Installing the tuna tool.
Procedure
List the process IDs of all running tasks to build the thread list used in the next step:
# ps ax | awk 'BEGIN { ORS="," }{ print $1 }' PID,1,2,3,4,5,6,8,10,11,12,13,14,15,16,17,19
Display the thread list in the
tuna
interface:# tuna show_threads -t 'thread_list from above cmd'
Specify the list of CPUs to be affected by a command:
# tuna [command] --cpus cpu_list
The cpu_list argument is a list of comma-separated CPU numbers, for example,
--cpus 0,2
.To add a specific CPU to the current cpu_list, use, for example,
--cpus +0
.Depending on your scenario, perform one of the following actions:
To isolate a CPU, enter:
# tuna isolate --cpus cpu_list
To include a CPU, enter:
# tuna include --cpus cpu_list
On a system with four or more processors, to make all ssh threads run on CPUs 0 and 1 and all http threads on CPUs 2 and 3, enter:
# tuna move --cpus 0,1 -t ssh*
# tuna move --cpus 2,3 -t http\*
Verification
Display the current configuration and verify that the changes were applied:
# tuna show_threads -t ssh* pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 855 OTHER 0 0,1 23 15 sshd # tuna show_threads -t http\* pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 855 OTHER 0 2,3 23 15 http
Additional resources
-
/proc/cpuinfo
file -
tuna(8)
man page on your system
3.4. Tuning IRQs by using the tuna tool
The /proc/interrupts
file records the number of interrupts per IRQ, the type of interrupt, and the name of the device that is located at that IRQ.
Prerequisites
-
The
tuna
tool is installed. For more information, see Installing the tuna tool.
Procedure
View the current IRQs and their affinity:
# tuna show_irqs # users affinity 0 timer 0 1 i8042 0 7 parport0 0
Specify the list of IRQs to be affected by a command:
# tuna [command] --irqs irq_list --cpus cpu_list
The irq_list argument is a list of comma-separated IRQ numbers or user-name patterns.
Replace [command] with, for example,
--spread
.Move an interrupt to a specified CPU:
# tuna show_irqs --irqs 128 users affinity 128 iwlwifi 0,1,2,3 # tuna move --irqs 128 --cpus 3
Replace 128 with the irq_list argument and 3 with the cpu_list argument.
The cpu_list argument is a list of comma-separated CPU numbers, for example,
--cpus 0,2
. For more information, see Tuning CPUs by using the tuna tool.
Verification
Compare the state of the selected IRQs before and after moving any interrupt to a specified CPU:
# tuna show_irqs --irqs 128 users affinity 128 iwlwifi 3
Additional resources
-
/proc/interrupts
file -
tuna(8)
man page on your system
Chapter 4. Monitoring performance using RHEL system roles
As a system administrator, you can use the metrics
RHEL system role to monitor the performance of a system.
4.1. Preparing a control node and managed nodes to use RHEL system roles
Before you can use individual RHEL system roles to manage services and settings, you must prepare the control node and managed nodes.
4.1.1. Preparing a control node on RHEL 9
Before using RHEL system roles, you must configure a control node. This system then configures the managed hosts from the inventory according to the playbooks.
Prerequisites
- The system is registered to the Customer Portal.
-
A
Red Hat Enterprise Linux Server
subscription is attached to the system. -
Optional: An
Ansible Automation Platform
subscription is attached to the system.
Procedure
Create a user named
ansible
to manage and run playbooks:[root@control-node]# useradd ansible
Switch to the newly created
ansible
user:[root@control-node]# su - ansible
Perform the rest of the procedure as this user.
Create an SSH public and private key:
[ansible@control-node]$ ssh-keygen Generating public/private rsa key pair. Enter file in which to save the key (/home/ansible/.ssh/id_rsa): Enter passphrase (empty for no passphrase): <password> Enter same passphrase again: <password> ...
Use the suggested default location for the key file.
- Optional: To prevent Ansible from prompting you for the SSH key password each time you establish a connection, configure an SSH agent.
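One common way to do this, assuming the default key path from the previous step, is to start an agent in your shell and add the key to it:
[ansible@control-node]$ eval "$(ssh-agent -s)"
[ansible@control-node]$ ssh-add ~/.ssh/id_rsa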
Create the
~/.ansible.cfg
file with the following content:
[defaults]
inventory = /home/ansible/inventory
remote_user = ansible

[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = True
NoteSettings in the
~/.ansible.cfg
file have a higher priority and override settings from the global/etc/ansible/ansible.cfg
file.With these settings, Ansible performs the following actions:
- Manages hosts in the specified inventory file.
-
Uses the account set in the
remote_user
parameter when it establishes SSH connections to managed nodes. -
Uses the
sudo
utility to execute tasks on managed nodes as theroot
user. - Prompts for the root password of the remote user every time you apply a playbook. This is recommended for security reasons.
Create an
~/inventory
file in INI or YAML format that lists the hostnames of managed hosts. You can also define groups of hosts in the inventory file. For example, the following is an inventory file in the INI format with three hosts and one host group namedUS
:managed-node-01.example.com [US] managed-node-02.example.com ansible_host=192.0.2.100 managed-node-03.example.com
Note that the control node must be able to resolve the hostnames. If the DNS server cannot resolve certain hostnames, add the
ansible_host
parameter next to the host entry to specify its IP address.Install RHEL system roles:
On a RHEL host without Ansible Automation Platform, install the
rhel-system-roles
package:[root@control-node]# dnf install rhel-system-roles
This command installs the collections in the
/usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/
directory, and theansible-core
package as a dependency.On Ansible Automation Platform, perform the following steps as the
ansible
user:-
Define Red Hat automation hub as the primary source for content in the
~/.ansible.cfg
file. Install the
redhat.rhel_system_roles
collection from Red Hat automation hub:[ansible@control-node]$ ansible-galaxy collection install redhat.rhel_system_roles
This command installs the collection in the
~/.ansible/collections/ansible_collections/redhat/rhel_system_roles/
directory.
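For reference, defining Red Hat automation hub as the primary content source (the first sub-step above) typically amounts to a galaxy server section in the ~/.ansible.cfg file similar to the following sketch; the URL, authentication URL, and token are placeholders that you obtain from the automation hub interface:
[galaxy]
server_list = automation_hub

[galaxy_server.automation_hub]
url=<automation-hub-URL>
auth_url=<single-sign-on-token-URL>
token=<API-token>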
Next step
- Prepare the managed nodes. For more information, see Preparing a managed node.
Additional resources
- Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories
- How to register and subscribe a system to the Red Hat Customer Portal using subscription-manager
-
The
ssh-keygen(1)
manual page - Connecting to remote machines with SSH keys using ssh-agent
- Ansible configuration settings
- How to build your inventory
4.1.2. Preparing a managed node
Managed nodes are the systems listed in the inventory, which the control node configures according to the playbook. You do not have to install Ansible on managed hosts.
Prerequisites
- You prepared the control node. For more information, see Preparing a control node on RHEL 9.
You have SSH access from the control node.
ImportantDirect SSH access as the
root
user is a security risk. To reduce this risk, you will create a local user on this node and configure asudo
policy when preparing a managed node. Ansible on the control node can then use the local user account to log in to the managed node and run playbooks as different users, such asroot
.
Procedure
Create a user named
ansible
:[root@managed-node-01]# useradd ansible
The control node later uses this user to establish an SSH connection to this host.
Set a password for the
ansible
user:[root@managed-node-01]# passwd ansible Changing password for user ansible. New password: <password> Retype new password: <password> passwd: all authentication tokens updated successfully.
You must enter this password when Ansible uses
sudo
to perform tasks as theroot
user.Install the
ansible
user’s SSH public key on the managed node:Log in to the control node as the
ansible
user, and copy the SSH public key to the managed node:[ansible@control-node]$ ssh-copy-id managed-node-01.example.com /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ansible/.ssh/id_rsa.pub" The authenticity of host 'managed-node-01.example.com (192.0.2.100)' can't be established. ECDSA key fingerprint is SHA256:9bZ33GJNODK3zbNhybokN/6Mq7hu3vpBXDrCxe7NAvo.
When prompted, connect by entering
yes
:Are you sure you want to continue connecting (yes/no/[fingerprint])? yes /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
When prompted, enter the password:
ansible@managed-node-01.example.com's password: <password> Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'managed-node-01.example.com'" and check to make sure that only the key(s) you wanted were added.
Verify the SSH connection by remotely executing a command on the control node:
[ansible@control-node]$ ssh managed-node-01.example.com whoami ansible
Create a
sudo
configuration for theansible
user:Create and edit the
/etc/sudoers.d/ansible
file by using thevisudo
command:[root@managed-node-01]# visudo /etc/sudoers.d/ansible
The benefit of using
visudo
over a normal editor is that this utility provides basic checks, such as for parse errors, before installing the file.Configure a
sudoers
policy in the/etc/sudoers.d/ansible
file that meets your requirements, for example:To grant permissions to the
ansible
user to run all commands as any user and group on this host after entering theansible
user’s password, use:ansible ALL=(ALL) ALL
To grant permissions to the
ansible
user to run all commands as any user and group on this host without entering theansible
user’s password, use:ansible ALL=(ALL) NOPASSWD: ALL
Alternatively, configure a more fine-grained policy that matches your security requirements. For further details on
sudoers
policies, see thesudoers(5)
manual page.
Verification
Verify that you can execute commands from the control node on all managed nodes:
[ansible@control-node]$ ansible all -m ping BECOME password: <password> managed-node-01.example.com | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } ...
The hard-coded all group dynamically contains all hosts listed in the inventory file.
Verify that privilege escalation works correctly by running the
whoami
utility on all managed nodes by using the Ansiblecommand
module:[ansible@control-node]$ ansible all -m command -a whoami BECOME password: <password> managed-node-01.example.com | CHANGED | rc=0 >> root ...
If the command returns root, you configured
sudo
on the managed nodes correctly.
Additional resources
- Preparing a control node on RHEL 9
-
sudoers(5)
manual page
4.2. Introduction to the metrics
RHEL system role
RHEL system roles is a collection of Ansible roles and modules that provide a consistent configuration interface to remotely manage multiple RHEL systems. The metrics
system role configures performance analysis services for the local system and, optionally, includes a list of remote systems to be monitored by the local system. The metrics
system role enables you to use pcp
to monitor your systems performance without having to configure pcp
separately, as the set-up and deployment of pcp
is handled by the playbook.
Additional resources
-
/usr/share/ansible/roles/rhel-system-roles.metrics/README.md
file -
/usr/share/doc/rhel-system-roles/metrics/
directory
4.3. Using the metrics
RHEL system role to monitor your local system with visualization
This procedure describes how to use the metrics
RHEL system role to monitor your local system while simultaneously provisioning data visualization via Grafana
.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudo
permissions on them. localhost
is configured in the inventory file on the control node:localhost ansible_connection=local
Procedure
Create a playbook file, for example
~/playbook.yml
, with the following content:
---
- name: Manage metrics
  hosts: localhost
  roles:
    - rhel-system-roles.metrics
  vars:
    metrics_graph_service: yes
    metrics_manage_firewall: true
    metrics_manage_selinux: true
Because the
metrics_graph_service
boolean is set tovalue="yes"
,Grafana
is automatically installed and provisioned withpcp
added as a data source. Becausemetrics_manage_firewall
andmetrics_manage_selinux
are both set totrue
, the metrics role uses thefirewall
andselinux
system roles to manage the ports used by the metrics role.Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
-
To view visualization of the metrics being collected on your machine, access the
grafana
web interface as described in Accessing the Grafana web UI.
Additional resources
-
/usr/share/ansible/roles/rhel-system-roles.metrics/README.md
file -
/usr/share/doc/rhel-system-roles/metrics/
directory
4.4. Using the metrics
RHEL system role to set up a fleet of individual systems to monitor themselves
This procedure describes how to use the metrics
system role to set up a fleet of machines to monitor themselves.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudo
permissions on them.
Procedure
Create a playbook file, for example
~/playbook.yml
, with the following content:
---
- name: Configure a fleet of machines to monitor themselves
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.metrics
  vars:
    metrics_retention_days: 0
    metrics_manage_firewall: true
    metrics_manage_selinux: true
Because
metrics_manage_firewall
andmetrics_manage_selinux
are both set totrue
, the metrics role uses thefirewall
andselinux
roles to manage the ports used by themetrics
role.Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
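The procedure does not define a verification step, but you can, for example, spot-check one of the managed nodes by running the pcp tool on it:
[root@managed-node-01]# pcp
The output should list the pmcd service and an active primary pmlogger instance with an archive under the /var/log/pcp/pmlogger/ directory.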
Additional resources
-
/usr/share/ansible/roles/rhel-system-roles.metrics/README.md
file -
/usr/share/doc/rhel-system-roles/metrics/
directory
4.5. Using the metrics
RHEL system role to monitor a fleet of machines centrally using your local machine
This procedure describes how to use the metrics
system role to set up your local machine to centrally monitor a fleet of machines while also provisioning visualization of the data via grafana
and querying of the data via redis
.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudo
permissions on them. localhost
is configured in the inventory file on the control node:localhost ansible_connection=local
Procedure
Create a playbook file, for example
~/playbook.yml
, with the following content:
- name: Set up your local machine to centrally monitor a fleet of machines
  hosts: localhost
  roles:
    - rhel-system-roles.metrics
  vars:
    metrics_graph_service: yes
    metrics_query_service: yes
    metrics_retention_days: 10
    metrics_monitored_hosts: ["database.example.com", "webserver.example.com"]
    metrics_manage_firewall: yes
    metrics_manage_selinux: yes
Because the
metrics_graph_service
andmetrics_query_service
booleans are set tovalue="yes"
,grafana
is automatically installed and provisioned withpcp
added as a data source with thepcp
data recording indexed intoredis
, allowing thepcp
querying language to be used for complex querying of the data. Becausemetrics_manage_firewall
andmetrics_manage_selinux
are both set totrue
, themetrics
role uses thefirewall
andselinux
roles to manage the ports used by themetrics
role.Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
-
To view a graphical representation of the metrics being collected centrally by your machine and to query the data, access the
grafana
web interface as described in Accessing the Grafana web UI.
Additional resources
-
/usr/share/ansible/roles/rhel-system-roles.metrics/README.md
file -
/usr/share/doc/rhel-system-roles/metrics/
directory
4.6. Setting up authentication while monitoring a system by using the metrics
RHEL system role
PCP supports the scram-sha-256
authentication mechanism through the Simple Authentication Security Layer (SASL) framework. The metrics
RHEL system role automates the steps to setup authentication by using the scram-sha-256
authentication mechanism. This procedure describes how to setup authentication by using the metrics
RHEL system role.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudo
permissions on them.
Procedure
Edit an existing playbook file, for example
~/playbook.yml
, and add the authentication-related variables:
---
- name: Set up authentication by using the scram-sha-256 authentication mechanism
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.metrics
  vars:
    metrics_retention_days: 0
    metrics_manage_firewall: true
    metrics_manage_selinux: true
    metrics_username: <username>
    metrics_password: <password>
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Verify the
sasl
configuration:# pminfo -f -h "pcp://managed-node-01.example.com?username=<username>" disk.dev.read Password: <password> disk.dev.read inst [0 or "sda"] value 19540
Additional resources
-
/usr/share/ansible/roles/rhel-system-roles.metrics/README.md
file -
/usr/share/doc/rhel-system-roles/metrics/
directory
4.7. Using the metrics
RHEL system role to configure and enable metrics collection for SQL Server
This procedure describes how to use the metrics
RHEL system role to automate the configuration and enabling of metrics collection for Microsoft SQL Server via pcp
on your local system.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudo
permissions on them. - You have installed Microsoft SQL Server for Red Hat Enterprise Linux and established a trusted connection to an SQL server.
- You have installed the Microsoft ODBC driver for SQL Server for Red Hat Enterprise Linux.
localhost
is configured in the inventory file on the control node:localhost ansible_connection=local
Procedure
Create a playbook file, for example
~/playbook.yml
, with the following content:
---
- name: Configure and enable metrics collection for Microsoft SQL Server
  hosts: localhost
  roles:
    - rhel-system-roles.metrics
  vars:
    metrics_from_mssql: true
    metrics_manage_firewall: true
    metrics_manage_selinux: true
Because
metrics_manage_firewall
andmetrics_manage_selinux
are both set totrue
, themetrics
role uses thefirewall
andselinux
roles to manage the ports used by themetrics
role.Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Use the
pcp
command to verify that SQL Server PMDA agent (mssql) is loaded and running:# pcp platform: Linux sqlserver.example.com 4.18.0-167.el8.x86_64 #1 SMP Sun Dec 15 01:24:23 UTC 2019 x86_64 hardware: 2 cpus, 1 disk, 1 node, 2770MB RAM timezone: PDT+7 services: pmcd pmproxy pmcd: Version 5.0.2-1, 12 agents, 4 clients pmda: root pmcd proc pmproxy xfs linux nfsclient mmv kvm mssql jbd2 dm pmlogger: primary logger: /var/log/pcp/pmlogger/sqlserver.example.com/20200326.16.31 pmie: primary engine: /var/log/pcp/pmie/sqlserver.example.com/pmie.log
Additional resources
-
/usr/share/ansible/roles/rhel-system-roles.metrics/README.md
file -
/usr/share/doc/rhel-system-roles/metrics/
directory
4.8. Configuring PMIE webhooks using the Metrics RHEL system role
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudo
permissions on them. -
In the Ansible inventory, you have defined the
servers
andmetrics_monitor
host groups. In this example, theservers
group includesserver-node-01.example.com
andserver-node-02.example.com
. Themetrics_monitor
group includespcp-monitor-node-01.example.com
.
Procedure
Create a playbook file, for example
~/playbook.yml
, with the following content:
---
- name: Configure PCP webhooks
  hosts: servers
  tasks:
    - name: Configure PCP metrics recording
      ansible.builtin.include_role:
        name: rhel-system-roles.metrics
      vars:
        metrics_retention_days: 7
        metrics_manage_firewall: true

- name: Configure the PMIE webhooks
  hosts: metrics_monitor
  tasks:
    - name: Configure the monitoring node
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.metrics
      vars:
        metrics_manage_firewall: true
        metrics_retention_days: 7
        metrics_monitored_hosts: "{{ groups['servers'] }}"
        metrics_webhook_endpoint: "http://<webserver>:<port>/<endpoint>"
The settings specified in the example playbook include the following:
metrics_manage_firewall
-
When
true
, thefirewall
RHEL system role manages the ports used by themetrics
role. metrics_retention_days
- Number of days to keep the collected metrics.
metrics_monitored_hosts
- Hosts that the monitoring system should observe.
metrics_webhook_endpoint
- A webhook endpoint where notifications of any detected performance issues are sent. By default, these detections are logged to the local system only.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.metrics/README.md
file on the control node. The playbook configures thepcp-monitor-node-01.example.com
host as the central monitoring site for itself and theserver-node-01.example.com
andserver-node-02.example.com
systems. The playbook also configures theglobal webhook_action
andglobal webhook_endpoint
PMIE configuration options for all 3 systems and restarts the PMIE service to apply the changes.Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Check the configuration summary on
pcp-monitor-node-01.example.com
:[root@pcp-monitor-node-01 ~]# pcp summary Performance Co-Pilot configuration on pcp-monitor-node-01.example.com: platform: Linux pcp-monitor-node-01.example.com 5.14.0-427.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Feb 23 01:51:18 EST 2024 x86_64 hardware: 8 cpus, 1 disk, 1 node, 1773MB RAM timezone: CEST-2 services: pmcd pmproxy pmcd: Version 6.2.0-1, 12 agents, 6 clients pmda: root pmcd proc pmproxy xfs linux nfsclient mmv kvm jbd2 dm openmetrics pmlogger: primary logger: /var/log/pcp/pmlogger/pcp-monitor-node-01.example.com/20240510.16.25 server-node-01.example.com: /var/log/pmlogger/server-node-01.example.com/20240510.16.25 server-node-02.example.com: /var/log/pmlogger/server-node-02.example.com/20240510.16.25 pmie: primary engine: /var/log/pcp/pmie/pcp-monitor-node-01.example.com/pmie.log server-node-01.example.com: : /var/log/pcp/pmie/server-node-01.example.com/pmie.log server-node-02.example.com: : /var/log/pcp/pmie/server-node-02.example.com/pmie.log
The final three lines of the summary show that PMIE is configured to monitor all three systems.
Verify that the
global webhook_action
PMIE configuration option is enabled:
[root@pcp-monitor-node-01 ~]# grep webhook_action /var/lib/pcp/config/pmie/config.default
// 0 global webhook_action = yes
Chapter 5. Setting up PCP
Performance Co-Pilot (PCP) is a suite of tools, services, and libraries for monitoring, visualizing, storing, and analyzing system-level performance measurements.
5.1. Overview of PCP
You can add performance metrics using Python, Perl, C++, and C interfaces. Analysis tools can use the Python, C++, C client APIs directly, and rich web applications can explore all available performance data using a JSON interface.
You can analyze data patterns by comparing live results with archived data.
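For example, once PCP is installed and the pmcd service is running (see Installing and enabling PCP later in this chapter), the command-line client tools can fetch any metric directly; kernel.all.load is a standard metric name and the commands below are only a quick illustration:
$ pminfo -f kernel.all.load
$ pmstat -s 5
The first command prints the current values of the load-average metric, and the second prints five samples of a brief system performance summary.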
Features of PCP:
- Light-weight distributed architecture, which is useful during the centralized analysis of complex systems.
- It allows the monitoring and management of real-time data.
- It allows logging and retrieval of historical data.
PCP has the following components:
-
The Performance Metric Collector Daemon (
pmcd
) collects performance data from the installed Performance Metric Domain Agents (pmda
). PMDAs can be individually loaded or unloaded on the system and are controlled by the PMCD on the same host. -
Various client tools, such as
pminfo
orpmstat
, can retrieve, display, archive, and process this data on the same host or over the network. -
The
pcp
package provides the command-line tools and underlying functionality. -
The
pcp-gui
package provides the graphical application. Install thepcp-gui
package by executing thednf install pcp-gui
command. For more information, see Visually tracing PCP log archives with the PCP Charts application.
Additional resources
-
pcp(1)
man page -
/usr/share/doc/pcp-doc/
directory - System services and tools distributed with PCP
- Index of Performance Co-Pilot (PCP) articles, solutions, tutorials, and white papers on the Red Hat Customer Portal
- Side-by-side comparison of PCP tools with legacy tools Red Hat Knowledgebase article
- PCP upstream documentation
5.2. Installing and enabling PCP
To begin using PCP, install all the required packages and enable the PCP monitoring services.
This procedure describes how to install PCP using the pcp
package. If you want to automate the PCP installation, install it using the pcp-zeroconf
package. For more information about installing PCP by using pcp-zeroconf
, see Setting up PCP with pcp-zeroconf.
Procedure
Install the
pcp
package:# dnf install pcp
Enable and start the
pmcd
service on the host machine:# systemctl enable pmcd # systemctl start pmcd
Verification
Verify if the
pmcd
process is running on the host:# pcp Performance Co-Pilot configuration on workstation: platform: Linux workstation 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64 hardware: 12 cpus, 2 disks, 1 node, 36023MB RAM timezone: CEST-2 services: pmcd pmcd: Version 4.3.0-1, 8 agents pmda: root pmcd proc xfs linux mmv kvm jbd2
Additional resources
-
pmcd(1)
man page - System services and tools distributed with PCP
5.3. Deploying a minimal PCP setup
The minimal PCP setup collects performance statistics on Red Hat Enterprise Linux. The setup involves adding the minimum number of packages on a production system needed to gather data for further analysis.
You can analyze the resulting tar.gz
file and the archive of the pmlogger
output using various PCP tools and compare them with other sources of performance information.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
Procedure
Update the
pmlogger
configuration:# pmlogconf -r /var/lib/pcp/config/pmlogger/config.default
Start the
pmcd
andpmlogger
services:# systemctl start pmcd.service # systemctl start pmlogger.service
- Execute the required operations to record the performance data.
Stop the
pmcd
andpmlogger
services:# systemctl stop pmcd.service # systemctl stop pmlogger.service
Save the output to a tar.gz file named after the host name and the current date and time:
# cd /var/log/pcp/pmlogger/
# tar -czf $(hostname).$(date +%F-%Hh%M).pcp.tar.gz $(hostname)
Extract this file and analyze the data using PCP tools.
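For example, after copying the archive to an analysis machine, you can extract it and summarize the recorded metrics. The host name and time stamps below are only illustrative and follow the naming scheme from the previous step:
$ tar -xzf workstation.2019-03-30-16h21.pcp.tar.gz
$ pmlogsummary workstation/20190330.16.21
$ pmstat -a workstation/20190330.16.21
The pmlogsummary command prints average values for every metric stored in the archive, and pmstat -a replays a brief system performance summary from it.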
Additional resources
-
pmlogconf(1)
,pmlogger(1)
, andpmcd(1)
man pages - System services and tools distributed with PCP
5.4. System services and tools distributed with PCP
Performance Co-Pilot (PCP) includes various system services and tools you can use for measuring performance. The basic package pcp
includes the system services and basic tools. Additional tools are provided with the pcp-system-tools
, pcp-gui
, and pcp-devel
packages.
Roles of system services distributed with PCP
pmcd
- The Performance Metric Collector Daemon (PMCD).
pmie
- The Performance Metrics Inference Engine.
pmlogger
- The performance metrics logger.
pmproxy
- The realtime and historical performance metrics proxy, time series query and REST API service.
Tools distributed with base PCP package
pcp
- Displays the current status of a Performance Co-Pilot installation.
pcp-vmstat
- Provides a high-level system performance overview every 5 seconds. Displays information about processes, memory, paging, block IO, traps, and CPU activity.
pmconfig
- Displays the values of configuration parameters.
pmdiff
- Compares the average values for every metric in either one or two archives, in a given time window, for changes that are likely to be of interest when searching for performance regressions.
pmdumplog
- Displays control, metadata, index, and state information from a Performance Co-Pilot archive file.
pmfind
- Finds PCP services on the network.
pmie
- An inference engine that periodically evaluates a set of arithmetic, logical, and rule expressions. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file.
pmieconf
-
Displays or sets configurable
pmie
variables. pmiectl
-
Manages non-primary instances of
pmie
. pminfo
- Displays information about performance metrics. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file.
pmlc
-
Interactively configures active
pmlogger
instances. pmlogcheck
- Identifies invalid data in a Performance Co-Pilot archive file.
pmlogconf
-
Creates and modifies a
pmlogger
configuration file. pmlogctl
-
Manages non-primary instances of
pmlogger
. pmloglabel
- Verifies, modifies, or repairs the label of a Performance Co-Pilot archive file.
pmlogsummary
- Calculates statistical information about performance metrics stored in a Performance Co-Pilot archive file.
pmprobe
- Determines the availability of performance metrics.
pmsocks
- Allows access to a Performance Co-Pilot hosts through a firewall.
pmstat
- Periodically displays a brief summary of system performance.
pmstore
- Modifies the values of performance metrics.
pmtrace
- Provides a command line interface to the trace PMDA.
pmval
- Displays the current value of a performance metric.
Tools distributed with the separately installed pcp-system-tools
package
pcp-atop
- Shows the system-level occupation of the most critical hardware resources from the performance point of view: CPU, memory, disk, and network.
pcp-atopsar
-
Generates a system-level activity report on a variety of system resource utilization metrics. The report is generated from a raw logfile previously recorded using
pmlogger
or the-w
option ofpcp-atop
. pcp-dmcache
- Displays information about configured Device Mapper Cache targets, such as: device IOPs, cache and metadata device utilization, as well as hit and miss rates and ratios for both reads and writes for each cache device.
pcp-dstat
-
Displays metrics of one system at a time. To display metrics of multiple systems, use
--host
option. pcp-free
- Reports on free and used memory in a system.
pcp-htop
-
Displays all processes running on a system along with their command line arguments in a manner similar to the
top
command, but allows you to scroll vertically and horizontally as well as interact using a mouse. You can also view processes in a tree format and select and act on multiple processes at once. pcp-ipcs
- Displays information about the inter-process communication (IPC) facilities that the calling process has read access for.
pcp-mpstat
- Reports CPU and interrupt-related statistics.
pcp-numastat
- Displays NUMA allocation statistics from the kernel memory allocator.
pcp-pidstat
- Displays information about individual tasks or processes running on the system, such as CPU percentage, memory and stack usage, scheduling, and priority. Reports live data for the local host by default.
pcp-shping
-
Samples and reports on the shell-ping service metrics exported by the
pmdashping
Performance Metrics Domain Agent (PMDA). pcp-ss
-
Displays socket statistics collected by the
pmdasockets
PMDA. pcp-tapestat
- Reports I/O statistics for tape devices.
pcp-uptime
- Displays how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.
pcp-verify
- Inspects various aspects of a Performance Co-Pilot collector installation and reports on whether it is configured correctly for certain modes of operation.
pmiostat
-
Reports I/O statistics for SCSI devices (by default) or device-mapper devices (with the
-x
device-mapper option). pmrep
- Reports on selected, easily customizable, performance metrics values.
Tools distributed with the separately installed pcp-gui
package
pmchart
- Plots performance metrics values available through the facilities of the Performance Co-Pilot.
pmdumptext
- Outputs the values of performance metrics collected live or from a Performance Co-Pilot archive.
Tools distributed with the separately installed pcp-devel
package
pmclient
- Displays high-level system performance metrics by using the Performance Metrics Application Programming Interface (PMAPI).
pmdbg
- Displays available Performance Co-Pilot debug control flags and their values.
pmerr
- Displays available Performance Co-Pilot error codes and their corresponding error messages.
5.5. PCP deployment architectures
Performance Co-Pilot (PCP) supports multiple deployment architectures, based on the scale of the PCP deployment, and offers many options to accomplish advanced setups.
The available scaling deployment setups, based on the deployment architecture recommended by Red Hat, sizing factors, and configuration options, include:
Localhost
Each service runs locally on the monitored machine. When you start a service without any configuration changes, this is the default deployment. Scaling beyond the individual node is not possible in this case.
By default, the deployment setup for Redis is standalone, localhost. However, Redis can optionally perform in a highly-available and highly scalable clustered fashion, where data is shared across multiple hosts. Another viable option is to deploy a Redis cluster in the cloud, or to utilize a managed Redis cluster from a cloud vendor.
Decentralized
The only difference between the localhost and decentralized setups is the centralized Redis service. In this model, the pmlogger service runs on each monitored host and retrieves metrics from a local
instance. A localpmproxy
service then exports the performance metrics to a central Redis instance.Figure 5.1. Decentralized logging
Centralized logging - pmlogger farm
When the resource usage on the monitored hosts is constrained, another deployment option is a
pmlogger
farm, which is also known as centralized logging. In this setup, a single logger host executes multiplepmlogger
processes, and each is configured to retrieve performance metrics from a different remotepmcd
host. The centralized logger host is also configured to execute thepmproxy
service, which discovers the resulting PCP archives logs and loads the metric data into a Redis instance.Figure 5.2. Centralized logging - pmlogger farm
Federated - multiple pmlogger farms
For large scale deployments, Red Hat recommends deploying multiple
pmlogger
farms in a federated fashion. For example, onepmlogger
farm per rack or data center. Eachpmlogger
farm loads the metrics into a central Redis instance.Figure 5.3. Federated - multiple pmlogger farms
By default, the deployment setup for Redis is standalone, localhost. However, Redis can optionally perform in a highly-available and highly scalable clustered fashion, where data is shared across multiple hosts. Another viable option is to deploy a Redis cluster in the cloud, or to utilize a managed Redis cluster from a cloud vendor.
Additional resources
-
pcp(1)
,pmlogger(1)
,pmproxy(1)
, andpmcd(1)
man pages - Recommended deployment architecture
5.6. Recommended deployment architecture
The following table describes the recommended deployment architectures based on the number of monitored hosts.
Number of hosts (N) | 1-10 | 10-100 | 100-1000
---|---|---|---
pmcd servers | N | N | N
pmlogger servers | 1 to N | N/10 to N | N/100 to N
pmproxy servers | 1 to N | 1 to N | N/100 to N
Redis servers | 1 to N | 1 to N/10 | N/100 to N/10
Redis cluster | No | Maybe | Yes
Recommended deployment setup | Localhost, Decentralized, or Centralized logging | Decentralized, Centralized logging, or Federated | Decentralized or Federated
5.7. Sizing factors
The following are the sizing factors required for scaling:
Remote system size
-
The number of CPUs, disks, network interfaces, and other hardware resources affects the amount of data collected by each
pmlogger
on the centralized logging host. Logged Metrics
-
The number and types of logged metrics play an important role. In particular, the
per-process proc.*
metrics require a large amount of disk space, for example, with the standardpcp-zeroconf
setup, 10s logging interval, 11 MB without proc metrics versus 155 MB with proc metrics - a factor of 10 times more. Additionally, the number of instances for each metric, for example the number of CPUs, block devices, and network interfaces also impacts the required storage capacity. Logging Interval
-
The interval how often metrics are logged, affects the storage requirements. The expected daily PCP archive file sizes are written to the
pmlogger.log
file for eachpmlogger
instance. These values are uncompressed estimates. Since PCP archives compress very well, approximately 10:1, the actual long term disk space requirements can be determined for a particular site. pmlogrewrite
-
After every PCP upgrade, the
pmlogrewrite
tool is executed and rewrites old archives if there were changes in the metric metadata from the previous version and the new version of PCP. This process duration scales linear with the number of archives stored.
Additional resources
-
pmlogrewrite(1)
andpmlogger(1)
man pages
5.8. Configuration options for PCP scaling
The following are the configuration options, which are required for scaling:
sysctl and rlimit settings
-
When archive discovery is enabled,
pmproxy
requires four descriptors for everypmlogger
that it is monitoring or log-tailing, along with the additional file descriptors for the service logs andpmproxy
client sockets, if any. Eachpmlogger
process uses about 20 file descriptors for the remotepmcd
socket, archive files, service logs, and others. In total, this can exceed the default 1024 soft limit on a system running around 200pmlogger
processes. Thepmproxy
service inpcp-5.3.0
and later automatically increases the soft limit to the hard limit. On earlier versions of PCP, tuning is required if a high number ofpmlogger
processes are to be deployed, and this can be accomplished by increasing the soft or hard limits forpmlogger
. For more information, see How to set limits (ulimit) for services run by systemd. Local Archives
-
The
pmlogger
service stores metrics of local and remotepmcds
in the/var/log/pcp/pmlogger/
directory. To control the logging interval of the local system, update the/etc/pcp/pmlogger/control.d/configfile
file and add-t X
in the arguments, where X is the logging interval in seconds. To configure which metrics should be logged, executepmlogconf /var/lib/pcp/config/pmlogger/config.clienthostname
. This command deploys a configuration file with a default set of metrics, which can optionally be further customized. To specify retention settings, that is when to purge old PCP archives, update the/etc/sysconfig/pmlogger_timers
file and specifyPMLOGGER_DAILY_PARAMS="-E -k X"
, where X is the amount of days to keep PCP archives. Redis
The
pmproxy
service sends logged metrics frompmlogger
to a Redis instance. The following are the available two options to specify the retention settings in the/etc/pcp/pmproxy/pmproxy.conf
configuration file:-
stream.expire
specifies the duration when stale metrics should be removed, that is metrics which were not updated in a specified amount of time in seconds. -
stream.maxlen
specifies the maximum number of metric values for one metric per host. This setting should be the retention time divided by the logging interval, for example 20160 for 14 days of retention and 60s logging interval (60*60*24*14/60)
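For example, a sketch that pulls the retention-related settings described above together. The 14-day retention and 60-second logging interval are illustrative assumptions, and the exact section of the pmproxy.conf file that holds the stream.* keys is indicated by the comments in that file:
# /etc/sysconfig/pmlogger_timers: purge PCP archives older than 14 days
PMLOGGER_DAILY_PARAMS="-E -k 14"

# /etc/pcp/pmproxy/pmproxy.conf: Redis key retention
# 14 days expressed in seconds (60*60*24*14)
stream.expire = 1209600
# retention time divided by the logging interval (60*60*24*14/60)
stream.maxlen = 20160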
-
Additional resources
-
pmproxy(1)
,pmlogger(1)
, andsysctl(8)
man pages
5.9. Example: Analyzing the centralized logging deployment
The following results were gathered on a centralized logging setup, also known as pmlogger farm deployment, with a default pcp-zeroconf 5.3.0
installation, where each remote host is an identical container instance running pmcd
on a server with 64 CPU cores, 376 GB RAM, and one disk attached.
The logging interval is 10s, proc metrics of remote nodes are not included, and the memory values refer to the Resident Set Size (RSS) value.
Number of Hosts | 10 | 50
---|---|---
PCP Archives Storage per Day | 91 MB | 522 MB
pmlogger Memory | 160 MB | 580 MB
Network per Day (In/Out) | 2 MB | 9 MB
pmproxy Memory | 1.4 GB | 6.3 GB
Redis Memory per Day | 2.6 GB | 12 GB
The following values were measured with a 60s logging interval:
Number of Hosts | 10 | 50 | 100
---|---|---|---
PCP Archives Storage per Day | 20 MB | 120 MB | 271 MB
pmlogger Memory | 104 MB | 524 MB | 1049 MB
Network per Day (In/Out) | 0.38 MB | 1.75 MB | 3.48 MB
pmproxy Memory | 2.67 GB | 5.5 GB | 9 GB
Redis Memory per Day | 0.54 GB | 2.65 GB | 5.3 GB
The pmproxy
queues Redis requests and employs Redis pipelining to speed up Redis queries. This can result in high memory usage. For troubleshooting this issue, see Troubleshooting high memory usage.
5.10. Example: Analyzing the federated setup deployment
The following results were observed on a federated setup, also known as multiple pmlogger
farms, consisting of three centralized logging (pmlogger
farm) setups, where each pmlogger
farm was monitoring 100 remote hosts, that is 300 hosts in total.
This setup of the pmlogger
farms is identical to the configuration mentioned in the
Example: Analyzing the centralized logging deployment for 60s logging interval, except that the Redis servers were operating in cluster mode.
PCP Archives Storage per Day | pmlogger Memory | Network per Day (In/Out) | pmproxy Memory | Redis Memory per Day |
---|---|---|---|---|
277 MB | 1058 MB | 15.6 MB / 12.3 MB | 6-8 GB | 5.5 GB |
Here, all values are per host. The network bandwidth is higher due to the inter-node communication of the Redis cluster.
5.11. Establishing secure PCP connections
You can configure PCP collector and monitoring components to participate in secure PCP protocol exchanges.
5.11.1. Secure PCP connections
You can establish secure connections between Performance Co-Pilot (PCP) collector and monitoring components. PCP collector components are the parts of PCP that collect and extract performance data from different sources. PCP monitor components are the parts of PCP that display data collected from hosts or archives that have the PCP collector components installed. Establishing secure connections between these components helps prevent unauthorized parties from accessing or modifying the data being collected and monitored.
All connections with the Performance Metrics Collector Daemon (pmcd
) are made using the TCP/IP based PCP protocol. Protocol proxying and the PCP REST APIs are served by the pmproxy
daemon - the REST API can be accessed over HTTPS, ensuring a secure connection.
Both the pmcd
and pmproxy
daemons are capable of simultaneous TLS and non-TLS communications on a single port. The default port for pmcd
is 44321 and 44322 for pmproxy
. This means that you do not have to choose between TLS or non-TLS communications for your PCP collector systems and can use both at the same time.
5.11.2. Configuring secure connections for PCP collector components
All PCP collector systems must have valid certificates in order to participate in secure PCP protocol exchanges.
the pmproxy
daemon operates as both a client and a server from the perspective of TLS.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
The private client key is stored in the
/etc/pcp/tls/client.key
file. If you use a different path, adapt the corresponding steps of the procedure.For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA’s documentation.
-
The TLS client certificate is stored in the
/etc/pcp/tls/client.crt
file. If you use a different path, adapt the corresponding steps of the procedure. -
The CA certificate is stored in the
/etc/pcp/tls/ca.crt
file. If you use a different path, adapt the corresponding steps of the procedure. Additionally, for thepmproxy
daemon: -
The private server key is stored in the
/etc/pcp/tls/server.key
file. If you use a different path, adapt the corresponding steps of the procedure -
The TLS server certificate is stored in the
/etc/pcp/tls/server.crt
file. If you use a different path, adapt the corresponding steps of the procedure.
Procedure
Update the PCP TLS configuration file on the collector systems to use the CA issued certificates to establish a secure connection:
# cat > /etc/pcp/tls.conf << END
tls-ca-cert-file = /etc/pcp/tls/ca.crt
tls-key-file = /etc/pcp/tls/server.key
tls-cert-file = /etc/pcp/tls/server.crt
tls-client-key-file = /etc/pcp/tls/client.key
tls-client-cert-file = /etc/pcp/tls/client.crt
END
Restart the PCP collector infrastructure:
# systemctl restart pmcd.service # systemctl restart pmproxy.service
Verification
Verify the TLS configuration:
On the
pmcd
service:# grep 'Info:' /var/log/pcp/pmcd/pmcd.log [Tue Feb 07 11:47:33] pmcd(6558) Info: OpenSSL 3.0.7 setup
On the
pmproxy
service:# grep 'Info:' /var/log/pcp/pmproxy/pmproxy.log [Tue Feb 07 11:44:13] pmproxy(6014) Info: OpenSSL 3.0.7 setup
5.11.3. Configuring secure connections for PCP monitoring components
Configure your PCP monitoring components to participate in secure PCP protocol exchanges.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
The private client key is stored in the
~/.pcp/tls/client.key
file. If you use a different path, adapt the corresponding steps of the procedure.For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA’s documentation.
-
The TLS client certificate is stored in the
~/.pcp/tls/client.crt
file. If you use a different path, adapt the corresponding steps of the procedure. -
The CA certificate is stored in the
/etc/pcp/tls/ca.crt
file. If you use a different path, adapt the corresponding steps of the procedure.
Procedure
Create a TLS configuration file with the following information:
$ home=`echo ~`
$ cat > ~/.pcp/tls.conf << END
tls-ca-cert-file = /etc/pcp/tls/ca.crt
tls-key-file = $home/.pcp/tls/client.key
tls-cert-file = $home/.pcp/tls/client.crt
END
Establish the secure connection:
$ export PCP_SECURE_SOCKETS=enforce $ export PCP_TLSCONF_PATH=~/.pcp/tls.conf
Verification
Verify the secure connection is configured:
$ pminfo --fetch --host pcps://localhost kernel.all.load kernel.all.load inst [1 or "1 minute"] value 1.26 inst [5 or "5 minute"] value 1.29 inst [15 or "15 minute"] value 1.28
5.12. Troubleshooting high memory usage
The following scenarios can result in high memory usage:
-
The
pmproxy
process is busy processing new PCP archives and does not have spare CPU cycles to process Redis requests and responses. - The Redis node or cluster is overloaded and cannot process incoming requests on time.
The pmproxy service daemon uses Redis streams and supports configuration parameters, which are PCP tuning parameters that affect Redis memory usage and key retention. The /etc/pcp/pmproxy/pmproxy.conf file lists the available configuration options for pmproxy and the associated APIs.
The following procedure describes how to troubleshoot a high memory usage issue.
Prerequisites
Install the
pcp-pmda-redis
package:# dnf install pcp-pmda-redis
Install the redis PMDA:
# cd /var/lib/pcp/pmdas/redis && ./Install
Procedure
To troubleshoot high memory usage, execute the following command and observe the
inflight
column:$ pmrep :pmproxy backlog inflight reqs/s resp/s wait req err resp err changed throttled byte count count/s count/s s/s count/s count/s count/s count/s 14:59:08 0 0 N/A N/A N/A N/A N/A N/A N/A 14:59:09 0 0 2268.9 2268.9 28 0 0 2.0 4.0 14:59:10 0 0 0.0 0.0 0 0 0 0.0 0.0 14:59:11 0 0 0.0 0.0 0 0 0 0.0 0.0
This column shows how many Redis requests are in-flight, meaning they have been queued or sent and no reply has been received yet.
A high number indicates one of the following conditions:
-
The
pmproxy
process is busy processing new PCP archives and does not have spare CPU cycles to process Redis requests and responses. - The Redis node or cluster is overloaded and cannot process incoming requests on time.
To troubleshoot the high memory usage issue, reduce the number of pmlogger processes for this farm, and add another pmlogger farm. Use the federated - multiple pmlogger farms setup.
If the Redis node is using 100% CPU for an extended amount of time, move it to a host with better performance or use a clustered Redis setup instead.
To view the
pmproxy.redis.*
metrics, use the following command:$ pminfo -ftd pmproxy.redis pmproxy.redis.responses.wait [wait time for responses] Data Type: 64-bit unsigned int InDom: PM_INDOM_NULL 0xffffffff Semantics: counter Units: microsec value 546028367374 pmproxy.redis.responses.error [number of error responses] Data Type: 64-bit unsigned int InDom: PM_INDOM_NULL 0xffffffff Semantics: counter Units: count value 1164 [...] pmproxy.redis.requests.inflight.bytes [bytes allocated for inflight requests] Data Type: 64-bit int InDom: PM_INDOM_NULL 0xffffffff Semantics: discrete Units: byte value 0 pmproxy.redis.requests.inflight.total [inflight requests] Data Type: 64-bit unsigned int InDom: PM_INDOM_NULL 0xffffffff Semantics: discrete Units: count value 0 [...]
To view how many Redis requests are in-flight, see the pmproxy.redis.requests.inflight.total metric. To view how many bytes are occupied by all current in-flight Redis requests, see the pmproxy.redis.requests.inflight.bytes metric.
In general, the Redis request queue is zero, but it can build up under the load of large pmlogger farms, which limits scalability and can cause high latency for pmproxy clients.
Use the
pminfo
command to view information about performance metrics. For example, to view theredis.*
metrics, use the following command:$ pminfo -ftd redis redis.redis_build_id [Build ID] Data Type: string InDom: 24.0 0x6000000 Semantics: discrete Units: count inst [0 or "localhost:6379"] value "87e335e57cffa755" redis.total_commands_processed [Total number of commands processed by the server] Data Type: 64-bit unsigned int InDom: 24.0 0x6000000 Semantics: counter Units: count inst [0 or "localhost:6379"] value 595627069 [...] redis.used_memory_peak [Peak memory consumed by Redis (in bytes)] Data Type: 32-bit unsigned int InDom: 24.0 0x6000000 Semantics: instant Units: count inst [0 or "localhost:6379"] value 572234920 [...]
To view the peak memory usage, see the
redis.used_memory_peak
metric.
Additional resources
-
pmdaredis(1)
,pmproxy(1)
, andpminfo(1)
man pages - PCP deployment architectures
Chapter 6. Logging performance data with pmlogger
With the PCP tool, you can log performance metric values and replay them later. This allows you to perform a retrospective performance analysis.
Using the pmlogger
tool, you can:
- Create the archived logs of selected metrics on the system
- Specify which metrics are recorded on the system and how often
6.1. Modifying the pmlogger configuration file with pmlogconf
When the pmlogger
service is running, PCP logs a default set of metrics on the host.
Use the pmlogconf
utility to check the default configuration. If the pmlogger
configuration file does not exist, pmlogconf
creates it with default metric values.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
Procedure
Create or modify the
pmlogger
configuration file:# pmlogconf -r /var/lib/pcp/config/pmlogger/config.default
-
Follow
pmlogconf
prompts to enable or disable groups of related performance metrics and to control the logging interval for each enabled group.
Additional resources
-
pmlogconf(1)
andpmlogger(1)
man pages - System services and tools distributed with PCP
6.2. Editing the pmlogger configuration file manually
To create a tailored logging configuration with specific metrics and given intervals, edit the pmlogger
configuration file manually. The default pmlogger
configuration file is /var/lib/pcp/config/pmlogger/config.default
. The configuration file specifies which metrics are logged by the primary logging instance.
In manual configuration, you can:
- Record metrics which are not listed in the automatic configuration.
- Choose custom logging frequencies.
- Add PMDAs that provide application metrics.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
Procedure
Open and edit the
/var/lib/pcp/config/pmlogger/config.default
file to add specific metrics:# It is safe to make additions from here on ... # log mandatory on every 5 seconds { xfs.write xfs.write_bytes xfs.read xfs.read_bytes } log mandatory on every 10 seconds { xfs.allocs xfs.block_map xfs.transactions xfs.log } [access] disallow * : all; allow localhost : enquire;
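After saving your changes, restart the pmlogger service so that the primary logging instance rereads the edited configuration file; for example:
# systemctl restart pmlogger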
Additional resources
-
pmlogger(1)
man page - System services and tools distributed with PCP
6.3. Enabling the pmlogger service
The pmlogger
service must be started and enabled to log the metric values on the local machine.
This procedure describes how to enable the pmlogger
service.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
Procedure
Start and enable the
pmlogger
service:# systemctl start pmlogger # systemctl enable pmlogger
Verification
Verify if the
pmlogger
service is enabled:# pcp Performance Co-Pilot configuration on workstation: platform: Linux workstation 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64 hardware: 12 cpus, 2 disks, 1 node, 36023MB RAM timezone: CEST-2 services: pmcd pmcd: Version 4.3.0-1, 8 agents, 1 client pmda: root pmcd proc xfs linux mmv kvm jbd2 pmlogger: primary logger: /var/log/pcp/pmlogger/workstation/20190827.15.54
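You can also confirm that archive files are being written to disk; a quick check, assuming the default per-host archive directory used by the primary logger:
$ ls -lt /var/log/pcp/pmlogger/$(hostname)/ | head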
Additional resources
-
pmlogger(1)
man page - System services and tools distributed with PCP
-
/var/lib/pcp/config/pmlogger/config.default
file
6.4. Setting up a client system for metrics collection
This procedure describes how to set up a client system so that a central server can collect metrics from clients running PCP.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
Procedure
Install the
pcp-system-tools
package:# dnf install pcp-system-tools
Configure an IP address for
pmcd
:# echo "-i 192.168.4.62" >>/etc/pcp/pmcd/pmcd.options
Replace 192.168.4.62 with the IP address the client should listen on.
By default,
pmcd
listens on localhost.
Configure the firewall to add the pmcd port to the public
zone
permanently:# firewall-cmd --permanent --zone=public --add-port=44321/tcp success # firewall-cmd --reload success
Set an SELinux boolean:
# setsebool -P pcp_bind_all_unreserved_ports on
Enable the
pmcd
andpmlogger
services:# systemctl enable pmcd pmlogger # systemctl restart pmcd pmlogger
Verification
Verify if the
pmcd
is correctly listening on the configured IP address:# ss -tlp | grep 44321 LISTEN 0 5 127.0.0.1:44321 0.0.0.0:* users:(("pmcd",pid=151595,fd=6)) LISTEN 0 5 192.168.4.62:44321 0.0.0.0:* users:(("pmcd",pid=151595,fd=0)) LISTEN 0 5 [::1]:44321 [::]:* users:(("pmcd",pid=151595,fd=7))
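Optionally, confirm from the central server or another host that metrics are reachable over the network; a minimal check, assuming 192.168.4.62 is the client IP address configured above and port 44321 is open in the firewall:
# pminfo -f -h 192.168.4.62 kernel.all.load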
Additional resources
-
pmlogger(1)
,firewall-cmd(1)
,ss(8)
, andsetsebool(8)
man pages - System services and tools distributed with PCP
-
/var/lib/pcp/config/pmlogger/config.default
file
6.5. Setting up a central server to collect data
This procedure describes how to create a central server to collect metrics from clients running PCP.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
- Client is configured for metrics collection. For more information, see Setting up a client system for metrics collection.
Procedure
Install the
pcp-system-tools
package:# dnf install pcp-system-tools
Create the
/etc/pcp/pmlogger/control.d/remote
file with the following content:# DO NOT REMOVE OR EDIT THE FOLLOWING LINE $version=1.1 192.168.4.13 n n PCP_ARCHIVE_DIR/rhel7u4a -r -T24h10m -c config.rhel7u4a 192.168.4.14 n n PCP_ARCHIVE_DIR/rhel6u10a -r -T24h10m -c config.rhel6u10a 192.168.4.62 n n PCP_ARCHIVE_DIR/rhel8u1a -r -T24h10m -c config.rhel8u1a 192.168.4.69 n n PCP_ARCHIVE_DIR/rhel9u3a -r -T24h10m -c config.rhel9u3a
Replace 192.168.4.13, 192.168.4.14, 192.168.4.62 and 192.168.4.69 with the client IP addresses.
Enable the
pmcd
andpmlogger
services:# systemctl enable pmcd pmlogger # systemctl restart pmcd pmlogger
Verification
Ensure that you can access the latest archive file from each directory:
# for i in /var/log/pcp/pmlogger/rhel*/*.0; do pmdumplog -L $i; done Log Label (Log Format Version 2) Performance metrics from host rhel6u10a.local commencing Mon Nov 25 21:55:04.851 2019 ending Mon Nov 25 22:06:04.874 2019 Archive timezone: JST-9 PID for pmlogger: 24002 Log Label (Log Format Version 2) Performance metrics from host rhel7u4a commencing Tue Nov 26 06:49:24.954 2019 ending Tue Nov 26 07:06:24.979 2019 Archive timezone: CET-1 PID for pmlogger: 10941 [..]
The archive files from the
/var/log/pcp/pmlogger/
directory can be used for further analysis and graphing.
Additional resources
-
pmlogger(1)
man page - System services and tools distributed with PCP
-
/var/lib/pcp/config/pmlogger/config.default
file
6.6. Systemd
units and pmlogger
When you deploy the pmlogger
service, either as a single host monitoring itself or a pmlogger
farm with a single host collecting metrics from several remote hosts, there are several associated systemd
service and timer units that are automatically deployed. These services and timers provide routine checks to ensure that your pmlogger
instances are running, restart any missing instances, and perform archive management such as file compression.
The checking and housekeeping services typically deployed by pmlogger
are:
pmlogger_daily.service
-
Runs daily, soon after midnight by default, to aggregate, compress, and rotate one or more sets of PCP archives. Also culls archives older than the limit, 2 weeks by default. Triggered by the
pmlogger_daily.timer
unit, which is required by thepmlogger.service
unit. pmlogger_check
-
Performs half-hourly checks that
pmlogger
instances are running. Restarts any missing instances and performs any required compression tasks. Triggered by thepmlogger_check.timer
unit, which is required by thepmlogger.service
unit. pmlogger_farm_check
-
Checks the status of all configured
pmlogger
instances. Restarts any missing instances. Migrates all non–primary instances to thepmlogger_farm
service. Triggered by thepmlogger_farm_check.timer
, which is required by thepmlogger_farm.service
unit that is itself required by thepmlogger.service
unit.
These services are managed through a series of positive dependencies, meaning that they are all enabled upon activating the primary pmlogger
instance. Note that although pmlogger_daily.service is disabled by default, the pmlogger_daily.timer unit, which is active through its dependency on pmlogger.service, still triggers pmlogger_daily.service to run.
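To see which of these timer units are active on a given host, you can list them with systemctl; a quick check:
# systemctl list-timers 'pmlogger*'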
pmlogger_daily
is also integrated with pmlogrewrite
for automatically rewriting archives before merging. This helps to ensure metadata consistency amid changing production environments and PMDAs. For example, if pmcd
on one monitored host is updated during the logging interval, the semantics for some metrics on the host might be updated, thus making the new archives incompatible with the previously recorded archives from that host. For more information see the pmlogrewrite(1)
man page.
Managing systemd
services triggered by pmlogger
You can create an automated custom archive management system for data collected by your pmlogger
instances. This is done using control files. These control files are:
For the primary
pmlogger
instance:-
/etc/pcp/pmlogger/control
-
/etc/pcp/pmlogger/control.d/local
-
For the remote hosts:
/etc/pcp/pmlogger/control.d/remote
Replace remote with your desired file name.
- NOTE
-
The primary
pmlogger
instance must be running on the same host as the pmcd it connects to. You do not need to have a primary instance, and you might not need it in your configuration if one central host collects data from several pmlogger instances connected to pmcd instances running on remote hosts.
The file should contain one line for each host to be logged. The default entry for the primary logger instance, which is created automatically, looks similar to the following:
# === LOGGER CONTROL SPECIFICATIONS === # #Host P? S? directory args # local primary logger LOCALHOSTNAME y n PCP_ARCHIVE_DIR/LOCALHOSTNAME -r -T24h10m -c config.default -v 100Mb
The fields are:
Host
- The name of the host to be logged
P?
-
Stands for “Primary?” This field indicates if the host is the primary logger instance,
y
, or not,n
. There can only be one primary logger across all the files in your configuration and it must be running on the same host as thepmcd
it connects to. S?
-
Stands for “Socks?” This field indicates if this logger instance needs to use the
SOCKS
protocol to connect topmcd
through a firewall,y
, or not,n
. directory
- All archives associated with this line are created in this directory.
args
Arguments passed to
pmlogger
.The default values for the
args
field are:-r
- Report the archive sizes and growth rate.
-T24h10m
-
Specifies when to end logging for each day. This is typically the time when
pmlogger_daily.service
runs. The default value of24h10m
indicates that logging should end 24 hours and 10 minutes after it begins, at the latest. -c config.default
- Specifies which configuration file to use. This essentially defines what metrics to record.
-v 100Mb
-
Specifies the size at which point one data volume is filled and another is created. After it switches to the new archive, the previously recorded one will be compressed by either
pmlogger_daily
orpmlogger_check
.
Additional resources
-
pmlogger(1)
man page -
pmlogger_daily(1)
man page -
pmlogger_check(1)
man page -
pmlogger.control(5)
man page -
pmlogrewrite(1)
man page
6.7. Replaying the PCP log archives with pmrep
After recording the metric data, you can replay the PCP log archives. To export the logs to text files and import them into spreadsheets, use PCP utilities such as pcp2csv
, pcp2xml
, pmrep
or pmlogsummary
.
Using the pmrep
tool, you can:
- View the log files
- Parse the selected PCP log archive and export the values into an ASCII table
- Extract the entire archive log or only select metric values from the log by specifying individual metrics on the command line
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
-
The
pmlogger
service is enabled. For more information, see Enabling the pmlogger service. Install the
pcp-system-tools
package:# dnf install pcp-system-tools
Procedure
Display the data on the metric:
$ pmrep --start @3:00am --archive 20211128 --interval 5seconds --samples 10 --output csv disk.dev.write Time,"disk.dev.write-sda","disk.dev.write-sdb" 2021-11-28 03:00:00,, 2021-11-28 03:00:05,4.000,5.200 2021-11-28 03:00:10,1.600,7.600 2021-11-28 03:00:15,0.800,7.100 2021-11-28 03:00:20,16.600,8.400 2021-11-28 03:00:25,21.400,7.200 2021-11-28 03:00:30,21.200,6.800 2021-11-28 03:00:35,21.000,27.600 2021-11-28 03:00:40,12.400,33.800 2021-11-28 03:00:45,9.800,20.600
This example displays the data for the disk.dev.write metric collected in an archive at a 5 second interval, in comma-separated values (CSV) format.
Note
Replace 20211128 in this example with the name of the file containing the pmlogger archive you want to display data for.
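If you prefer a statistical summary over individual samples, the pmlogsummary utility mentioned at the start of this section reports averages for a metric across the whole archive; a minimal sketch, assuming the same archive name as in the example above:
$ pmlogsummary 20211128 disk.dev.write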
Additional resources
-
pmlogger(1)
,pmrep(1)
, andpmlogsummary(1)
man pages - System services and tools distributed with PCP
6.8. Enabling PCP version 3 archives
Performance Co-Pilot (PCP) archives store historical values of PCP metrics recorded from a single host and support retrospective performance analysis. PCP archives contain all the important metric data and metadata needed for offline or offsite analysis. These archives can be read by most PCP client tools or dumped raw by the pmdumplog
tool.
From PCP 6.0, version 3 archives are supported in addition to version 2 archives. Version 2 archives remain the default and continue to receive long-term support for backwards compatibility. Version 3 archives also receive long-term support, starting with RHEL 9.2.
Using PCP version 3 archives offers the following benefits over version 2:
- Support for instance domain change-deltas
- Y2038-safe timestamps
- Nanosecond-precision timestamps
- Arbitrary timezones support
- 64-bit file offsets used for individual volumes larger than 2GB
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
Procedure
Open the
/etc/pcp.conf
file in a text editor of your choice and set the PCP archive version:PCP_ARCHIVE_VERSION=3
Restart the
pmlogger
service to apply your configuration changes:# systemctl restart pmlogger.service
- Create a new PCP archive log using your new configuration. For more information, see Logging performance data with pmlogger.
Verification
Verify the version of the archive created with your new configuration:
# pmloglabel -l /var/log/pcp/pmlogger/20230208 Log Label (Log Format Version 3) Performance metrics from host host1 commencing Wed Feb 08 00:11:09.396 2023 ending Thu Feb 07 00:13:54.347 2023
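If you are not sure which archive was created most recently, you can list the archive directory by modification time before running pmloglabel; a quick check, assuming the default per-host pmlogger paths:
# ls -t /var/log/pcp/pmlogger/$(hostname)/ | head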
Additional resources
-
logarchive(5)
man page -
pmlogger(1)
man page - Logging performance data with pmlogger
Chapter 7. Monitoring performance with Performance Co-Pilot
Performance Co-Pilot (PCP) is a suite of tools, services, and libraries for monitoring, visualizing, storing, and analyzing system-level performance measurements.
As a system administrator, you can monitor the system’s performance using the PCP application in Red Hat Enterprise Linux 9.
7.1. Monitoring postfix with pmda-postfix
This procedure describes how to monitor performance metrics of the postfix
mail server with pmda-postfix
. It helps to check how many emails are received per second.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
-
The
pmlogger
service is enabled. For more information, see Enabling the pmlogger service.
Procedure
Install the following packages:
Install the
pcp-system-tools
:# dnf install pcp-system-tools
Install the
pmda-postfix
package to monitorpostfix
:# dnf install pcp-pmda-postfix postfix
Install the logging daemon:
# dnf install rsyslog
Install the mail client for testing:
# dnf install mutt
Enable the
postfix
andrsyslog
services:# systemctl enable postfix rsyslog # systemctl restart postfix rsyslog
Enable the SELinux boolean, so that
pmda-postfix
can access the required log files:# setsebool -P pcp_read_generic_logs=on
Install the
PMDA
:# cd /var/lib/pcp/pmdas/postfix/ # ./Install Updating the Performance Metrics Name Space (PMNS) ... Terminate PMDA if already installed ... Updating the PMCD control file, and notifying PMCD ... Waiting for pmcd to terminate ... Starting pmcd ... Check postfix metrics have appeared ... 7 metrics and 58 values
Verification
Verify the
pmda-postfix
operation:echo testmail | mutt root
Verify the available metrics:
# pminfo postfix postfix.received postfix.sent postfix.queues.incoming postfix.queues.maildrop postfix.queues.hold postfix.queues.deferred postfix.queues.active
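Because the goal of this section is to check how many emails are received per second, you can also sample the rate continuously with pmval; a minimal sketch, assuming the postfix metrics shown above (pmval rate-converts the counter between samples):
# pmval -t 10 postfix.received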
Additional resources
-
rsyslogd(8)
,postfix(1)
, andsetsebool(8)
man pages - System services and tools distributed with PCP
7.2. Visually tracing PCP log archives with the PCP Charts application
After recording metric data, you can replay the PCP log archives as graphs. The metrics are sourced from one or more live hosts, or alternatively from PCP log archives as a source of historical data. To customize the PCP Charts application interface to display the data from the performance metrics, you can use line plots, bar graphs, or utilization graphs.
Using the PCP Charts application, you can:
- Replay the data in the PCP Charts application and use graphs to visualize the retrospective data alongside live data of the system.
- Plot performance metric values into graphs.
- Display multiple charts simultaneously.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
-
Logged performance data with the
pmlogger
. For more information, see Logging performance data with pmlogger. Install the
pcp-gui
package:# dnf install pcp-gui
Procedure
Launch the PCP Charts application from the command line:
# pmchart
Figure 7.1. PCP Charts application
The
pmtime
server settings are located at the bottom. The start and pause buttons allow you to control:
- The interval in which PCP polls the metric data
- The date and time for the metrics of historical data
- Click File and then New Chart to select a metric from both the local machine and remote machines by specifying their host name or address. Advanced configuration options include the ability to manually set the axis values for the chart, and to manually choose the color of the plots.
Record the views created in the PCP Charts application:
Following are the options to take images or record the views created in the PCP Charts application:
- Click File and then Export to save an image of the current view.
- Click Record and then Start to start a recording. Click Record and then Stop to stop the recording. After stopping the recording, the recorded metrics are archived to be viewed later.
Optional: In the PCP Charts application, the main configuration file, known as the view, allows the metadata associated with one or more charts to be saved. This metadata describes all chart aspects, including the metrics used and the chart columns. Save the custom view configuration by clicking File and then Save View, and load the view configuration later.
The following example of the PCP Charts application view configuration file describes a stacking chart graph showing the total number of bytes read and written to the given XFS file system
loop1
:#kmchart version 1 chart title "Filesystem Throughput /loop1" style stacking antialiasing off plot legend "Read rate" metric xfs.read_bytes instance "loop1" plot legend "Write rate" metric xfs.write_bytes instance "loop1"
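You can also load a saved view directly from the command line, optionally against an archive instead of live data; a minimal sketch, assuming a hypothetical view file named filesystem.view saved from the example above and a hypothetical archive name from your pmlogger directory:
# pmchart -c filesystem.view -a /var/log/pcp/pmlogger/$(hostname)/20211128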
Additional resources
-
pmchart(1)
andpmtime(1)
man pages - System services and tools distributed with PCP
7.3. Collecting data from SQL server using PCP
The SQL Server agent is available in Performance Co-Pilot (PCP), which helps you to monitor and analyze database performance issues.
This procedure describes how to collect data for Microsoft SQL Server via pcp
on your system.
Prerequisites
- You have installed Microsoft SQL Server for Red Hat Enterprise Linux and established a 'trusted' connection to an SQL server.
- You have installed the Microsoft ODBC driver for SQL Server for Red Hat Enterprise Linux.
Procedure
Install PCP:
# dnf install pcp-zeroconf
Install packages required for the
pyodbc
driver:# dnf install python3-pyodbc
Install the
mssql
agent:Install the Microsoft SQL Server domain agent for PCP:
# dnf install pcp-pmda-mssql
Edit the
/etc/pcp/mssql/mssql.conf
file to configure the SQL server account’s username and password for themssql
agent. Ensure that the account you configure has access rights to performance data.username: user_name password: user_password
Replace user_name with the SQL Server account and user_password with the SQL Server user password for this account.
Install the agent:
# cd /var/lib/pcp/pmdas/mssql # ./Install Updating the Performance Metrics Name Space (PMNS) ... Terminate PMDA if already installed ... Updating the PMCD control file, and notifying PMCD ... Check mssql metrics have appeared ... 168 metrics and 598 values [...]
Verification
Using the
pcp
command, verify if the SQL Server PMDA (mssql
) is loaded and running:$ pcp Performance Co-Pilot configuration on rhel.local: platform: Linux rhel.local 4.18.0-167.el8.x86_64 #1 SMP Sun Dec 15 01:24:23 UTC 2019 x86_64 hardware: 2 cpus, 1 disk, 1 node, 2770MB RAM timezone: PDT+7 services: pmcd pmproxy pmcd: Version 5.0.2-1, 12 agents, 4 clients pmda: root pmcd proc pmproxy xfs linux nfsclient mmv kvm mssql jbd2 dm pmlogger: primary logger: /var/log/pcp/pmlogger/rhel.local/20200326.16.31 pmie: primary engine: /var/log/pcp/pmie/rhel.local/pmie.log
View the complete list of metrics that PCP can collect from the SQL Server:
# pminfo mssql
After viewing the list of metrics, you can report the rate of transactions. For example, to report on the overall transaction count per second, over a five second time window:
# pmval -t 1 -T 5 mssql.databases.transactions
-
View the graphical chart of these metrics on your system by using the
pmchart
command. For more information, see Visually tracing PCP log archives with the PCP Charts application.
Additional resources
-
pcp(1)
,pminfo(1)
,pmval(1)
,pmchart(1)
, andpmdamssql(1)
man pages - Performance Co-Pilot for Microsoft SQL Server with RHEL 8.2 Red Hat Developers Blog post
7.4. Generating PCP archives from sadc archives
You can use the sadf
tool provided by the sysstat
package to generate PCP archives from native sadc
archives.
Prerequisites
A
sadc
archive has been created:# /usr/lib64/sa/sadc 1 5 -
In this example, sadc samples system data 5 times at a 1 second interval. The outfile is specified as - which results in sadc writing the data to the standard system activity daily data file. This file is named saDD and is located in the /var/log/sa directory by default.
Procedure
Generate a PCP archive from a
sadc
archive:# sadf -l -O pcparchive=/tmp/recording -2
In this example, using the -2 option results in sadf generating a PCP archive from a sadc archive recorded 2 days ago.
Verification
You can use PCP commands to inspect and analyze the PCP archive generated from a sadc
archive as you would a native PCP archive. For example:
To show a list of metrics in the PCP archive generated from an
sadc
archive, run:$ pminfo --archive /tmp/recording Disk.dev.avactive Disk.dev.read Disk.dev.write Disk.dev.blkread [...]
To show the timespan of the archive and hostname of the PCP archive, run:$ pmdumplog --label /tmp/recording Log Label (Log Format Version 2) Performance metrics from host shard commencing Tue Jul 20 00:10:30.642477 2021 ending Wed Jul 21 00:10:30.222176 2021
$ pmdumplog --label /tmp/recording Log Label (Log Format Version 2) Performance metrics from host shard commencing Tue Jul 20 00:10:30.642477 2021 ending Wed Jul 21 00:10:30.222176 2021
To plot performance metrics values into graphs, run:
$ pmchart --archive /tmp/recording
Chapter 8. Performance analysis of XFS with PCP
The XFS PMDA ships as part of the pcp
package and is enabled by default during the installation. It is used to gather performance metric data of XFS file systems in Performance Co-Pilot (PCP).
You can use PCP to analyze XFS file system’s performance.
8.1. Installing XFS PMDA manually
If the XFS PMDA is not listed in the pcp
configuration output, install the PMDA agent manually.
This procedure describes how to manually install the PMDA agent.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
Procedure
Navigate to the xfs directory:
# cd /var/lib/pcp/pmdas/xfs/
Install the XFS PMDA manually:
xfs]# ./Install Updating the Performance Metrics Name Space (PMNS) ... Terminate PMDA if already installed ... Updating the PMCD control file, and notifying PMCD ... Check xfs metrics have appeared ... 387 metrics and 387 values
Verification
Verify that the
pmcd
process is running on the host and the XFS PMDA is listed as enabled in the configuration:# pcp Performance Co-Pilot configuration on workstation: platform: Linux workstation 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64 hardware: 12 cpus, 2 disks, 1 node, 36023MB RAM timezone: CEST-2 services: pmcd pmcd: Version 4.3.0-1, 8 agents pmda: root pmcd proc xfs linux mmv kvm jbd2
Additional resources
-
pmcd(1)
man page - System services and tools distributed with PCP
8.2. Examining XFS performance metrics with pminfo
With PCP, the XFS PMDA reports certain XFS metrics for each mounted XFS file system. This makes it easier to pinpoint issues on specific mounted file systems and evaluate performance.
The pminfo
command provides per-device XFS metrics for each mounted XFS file system.
This procedure displays a list of all available metrics provided by the XFS PMDA.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
Procedure
Display the list of all available metrics provided by the XFS PMDA:
# pminfo xfs
Display information for the individual metrics. The following examples examine specific XFS
read
andwrite
metrics using thepminfo
tool:Display a short description of the
xfs.write_bytes
metric:# pminfo --oneline xfs.write_bytes xfs.write_bytes [number of bytes written in XFS file system write operations]
Display a long description of the
xfs.read_bytes
metric:# pminfo --helptext xfs.read_bytes xfs.read_bytes Help: This is the number of bytes read via read(2) system calls to files in XFS file systems. It can be used in conjunction with the read_calls count to calculate the average size of the read operations to file in XFS file systems.
Obtain the current performance value of the
xfs.read_bytes
metric:# pminfo --fetch xfs.read_bytes xfs.read_bytes value 4891346238
Obtain per-device XFS metrics with
pminfo
:# pminfo --fetch --oneline xfs.perdev.read xfs.perdev.write xfs.perdev.read [number of XFS file system read operations] inst [0 or "loop1"] value 0 inst [0 or "loop2"] value 0 xfs.perdev.write [number of XFS file system write operations] inst [0 or "loop1"] value 86 inst [0 or "loop2"] value 0
Additional resources
-
pminfo(1)
man page - PCP metric groups for XFS
- Per-device PCP metric groups for XFS
8.3. Resetting XFS performance metrics with pmstore
With PCP, you can modify the values of certain metrics, especially if the metric acts as a control variable, such as the xfs.control.reset
metric. To modify a metric value, use the pmstore
tool.
This procedure describes how to reset XFS metrics using the pmstore
tool.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
Procedure
Display the value of a metric:
$ pminfo -f xfs.write xfs.write value 325262
Reset all the XFS metrics:
# pmstore xfs.control.reset 1 xfs.control.reset old value=0 new value=1
Verification
View the information after resetting the metric:
$ pminfo --fetch xfs.write xfs.write value 0
Additional resources
-
pmstore(1)
andpminfo(1)
man pages - System services and tools distributed with PCP
- PCP metric groups for XFS
8.4. PCP metric groups for XFS
The following table describes the available PCP metric groups for XFS.
Metric Group | Metrics provided |
| General XFS metrics including the read and write operation counts, read and write byte counts. Along with counters for the number of times inodes are flushed, clustered, and the number of failures to cluster. |
| Range of metrics regarding the allocation of objects in the file system, these include number of extent and block creations/frees. Allocation tree lookup and compares along with extent record creation and deletion from the btree. |
| Metrics include the number of block map read/write and block deletions, extent list operations for insertion, deletions and lookups. Also operations counters for compares, lookups, insertions and deletion operations from the blockmap. |
| Counters for directory operations on XFS file systems for creation, entry deletions, count of “getdent” operations. |
| Counters for the number of meta-data transactions, these include the count for the number of synchronous and asynchronous transactions along with the number of empty transactions. |
| Counters for the number of times that the operating system looked for an XFS inode in the inode cache with different outcomes. These count cache hits, cache misses, and so on. |
| Counters for the number of log buffer writes over XFS file systems, including the number of blocks written to disk. Metrics also for the number of log flushes and pinning. |
| Counts for the number of bytes of file data flushed out by the XFS flush daemon along with counters for number of buffers flushed to contiguous and non-contiguous space on disk. |
| Counts for the number of attribute get, set, remove and list operations over all XFS file systems. |
| Metrics for quota operation over XFS file systems, these include counters for number of quota reclaims, quota cache misses, cache hits and quota data reclaims. |
| Range of metrics regarding XFS buffer objects. Counters include the number of requested buffer calls, successful buffer locks, waited buffer locks, miss_locks, miss_retries and buffer hits when looking up pages. |
| Metrics regarding the operations of the XFS btree. |
| Configuration metrics which are used to reset the metric counters for the XFS stats. Control metrics are toggled by means of the pmstore tool. |
8.5. Per-device PCP metric groups for XFS
The following table describes the available per-device PCP metric group for XFS.
Metric Group | Metrics provided |
| General XFS metrics including the read and write operation counts, read and write byte counts. Along with counters for the number of times inodes are flushed, clustered, and the number of failures to cluster. |
| Range of metrics regarding the allocation of objects in the file system, these include number of extent and block creations/frees. Allocation tree lookup and compares along with extent record creation and deletion from the btree. |
| Metrics include the number of block map read/write and block deletions, extent list operations for insertion, deletions and lookups. Also operations counters for compares, lookups, insertions and deletion operations from the blockmap. |
| Counters for directory operations of XFS file systems for creation, entry deletions, count of “getdent” operations. |
| Counters for the number of meta-data transactions, these include the count for the number of synchronous and asynchronous transactions along with the number of empty transactions. |
| Counters for the number of times that the operating system looked for an XFS inode in the inode cache with different outcomes. These count cache hits, cache misses, and so on. |
| Counters for the number of log buffer writes over XFS file systems, including the number of blocks written to disk. Metrics also for the number of log flushes and pinning. |
| Counts for the number of bytes of file data flushed out by the XFS flush daemon along with counters for number of buffers flushed to contiguous and non-contiguous space on disk. |
| Counts for the number of attribute get, set, remove and list operations over all XFS file systems. |
| Metrics for quota operation over XFS file systems, these include counters for number of quota reclaims, quota cache misses, cache hits and quota data reclaims. |
| Range of metrics regarding XFS buffer objects. Counters include the number of requested buffer calls, successful buffer locks, waited buffer locks, miss_locks, miss_retries and buffer hits when looking up pages. |
| Metrics regarding the operations of the XFS btree. |
Chapter 9. Setting up graphical representation of PCP metrics
Using a combination of pcp
, grafana
, pcp redis
, pcp bpftrace
, and pcp vector
provides a graphical representation of the live data or data collected by Performance Co-Pilot (PCP).
9.1. Setting up PCP with pcp-zeroconf
This procedure describes how to set up PCP on a system with the pcp-zeroconf
package. Once the pcp-zeroconf
package is installed, the system records the default set of metrics into archived files.
Procedure
Install the
pcp-zeroconf
package:# dnf install pcp-zeroconf
Verification
Ensure that the
pmlogger
service is active, and starts archiving the metrics:# pcp | grep pmlogger pmlogger: primary logger: /var/log/pcp/pmlogger/localhost.localdomain/20200401.00.12
Additional resources
-
pmlogger
man page - Monitoring performance with Performance Co-Pilot
9.2. Setting up a grafana-server
Grafana generates graphs that are accessible from a browser. The grafana-server
is a back-end server for the Grafana dashboard. It listens, by default, on all interfaces, and provides web services accessed through the web browser. The grafana-pcp
plugin interacts with the pmproxy
protocol in the backend.
This procedure describes how to set up a grafana-server
.
Prerequisites
- PCP is configured. For more information, see Setting up PCP with pcp-zeroconf.
Procedure
Install the following packages:
# dnf install grafana grafana-pcp
Restart and enable the following service:
# systemctl restart grafana-server # systemctl enable grafana-server
Open the server’s firewall for network traffic to the Grafana service.
# firewall-cmd --permanent --add-service=grafana success # firewall-cmd --reload success
Verification
Ensure that the
grafana-server
is listening and responding to requests:# ss -ntlp | grep 3000 LISTEN 0 128 *:3000 *:* users:(("grafana-server",pid=19522,fd=7))
Ensure that the
grafana-pcp
plugin is installed:# grafana-cli plugins ls | grep performancecopilot-pcp-app performancecopilot-pcp-app @ 3.1.0
Additional resources
-
pmproxy(1)
andgrafana-server
man pages
9.3. Accessing the Grafana web UI
This procedure describes how to access the Grafana web interface.
Using the Grafana web interface, you can:
- add PCP Redis, PCP bpftrace, and PCP Vector data sources
- create dashboards
- view an overview of any useful metrics
- create alerts in PCP Redis
Prerequisites
- PCP is configured. For more information, see Setting up PCP with pcp-zeroconf.
-
The
grafana-server
is configured. For more information, see Setting up a grafana-server.
Procedure
On the client system, open a browser and access the
grafana-server
on port3000
, using http://192.0.2.0:3000 link.Replace 192.0.2.0 with your machine IP.
For the first login, enter admin in both the Email or username and Password field.
Grafana prompts to set a New password to create a secured account. If you want to set it later, click Skip.
- From the menu, hover over the Configuration icon and then click Plugins.
- In the Plugins tab, type performance co-pilot in the Search by name or type text box and then click Performance Co-Pilot (PCP) plugin.
- In the Plugins / Performance Co-Pilot pane, click .
Click Grafana icon. The Grafana Home page is displayed.
Figure 9.1. Home Dashboard
Note
The top corner of the screen has a similar icon, but it controls the general Dashboard settings.
In the Grafana Home page, click Add your first data source to add PCP Redis, PCP bpftrace, and PCP Vector data sources. For more information about adding data source, see:
- To add pcp redis data source, view default dashboard, create a panel, and an alert rule, see Creating panels and alert in PCP Redis data source.
- To add pcp bpftrace data source and view the default dashboard, see Viewing the PCP bpftrace System Analysis dashboard.
- To add pcp vector data source, view the default dashboard, and to view the vector checklist, see Viewing the PCP Vector Checklist.
- Optional: From the menu, hover over the admin profile icon to change the Preferences including Edit Profile, Change Password, or to Sign out.
Additional resources
-
grafana-cli
andgrafana-server
man pages
9.4. Configuring secure connections for Grafana
You can establish secure connections between Grafana and Performance Co-Pilot (PCP) components. Establishing secure connections between these components helps prevent unauthorized parties from accessing or modifying the data being collected and monitored.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
-
The
grafana-server
is configured. For more information, see Setting up a grafana-server. The private client key is stored in the
/etc/grafana/grafana.key
file. If you use a different path, modify the path in the corresponding steps of the procedure.For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA’s documentation.
-
The TLS client certificate is stored in the
/etc/grafana/grafana.crt
file. If you use a different path, modify the path in the corresponding steps of the procedure.
Procedure
As a root user, open the
/etc/grafana/grafana.ini
file and adjust the following options in the[server]
section to reflect the following:protocol = https cert_key = /etc/grafana/grafana.key cert_file = /etc/grafana/grafana.crt
Ensure grafana can access the certificates:
# su grafana -s /bin/bash -c \ 'ls -1 /etc/grafana/grafana.crt /etc/grafana/grafana.key' /etc/grafana/grafana.crt /etc/grafana/grafana.key
Restart and enable the Grafana service to apply the configuration changes:
# systemctl restart grafana-server # systemctl enable grafana-server
Verification
-
On the client system, open a browser and access the
grafana-server
machine on port 3000, using the https://192.0.2.0:3000 link. Replace 192.0.2.0 with your machine IP. Confirm the lock icon is displayed beside the address bar.
Note
If the protocol is set to http and an HTTPS connection is attempted, you will receive an ERR_SSL_PROTOCOL_ERROR error. If the protocol is set to https
and an HTTP connection is attempted, the Grafana server responds with a “Client sent an HTTP request to an HTTPS server” message.
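You can also check from the command line that the server negotiates TLS; a quick check, assuming Grafana runs locally on the default port (the -k option skips certificate verification and is only suitable for a quick test):
$ curl -k -I https://localhost:3000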
9.5. Configuring PCP Redis
Use the PCP Redis data source to:
- View data archives
- Query time series using pmseries language
- Analyze data across multiple hosts
Prerequisites
- PCP is configured. For more information, see Setting up PCP with pcp-zeroconf.
-
The
grafana-server
is configured. For more information, see Setting up a grafana-server. -
A mail transfer agent, for example,
sendmail
orpostfix
is installed and configured.
Procedure
Install the
redis
package:# dnf install redis
Start and enable the following services:
# systemctl start pmproxy redis # systemctl enable pmproxy redis
Restart the
grafana-server
:# systemctl restart grafana-server
Verification
Ensure that the
pmproxy
andredis
are working:# pmseries disk.dev.read 2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df
This command does not return any data if the
redis
package is not installed.
Additional resources
-
pmseries(1)
man page
9.6. Configuring secure connections for PCP redis
You can establish secure connections between Performance Co-Pilot (PCP), Grafana, and PCP Redis. Establishing secure connections between these components helps prevent unauthorized parties from accessing or modifying the data being collected and monitored.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
-
The
grafana-server
is configured. For more information, see Setting up a grafana-server. - PCP redis is installed. For more information, see Configuring PCP Redis.
The private client key is stored in the
/etc/redis/client.key
file. If you use a different path, modify the path in the corresponding steps of the procedure.For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA’s documentation.
-
The TLS client certificate is stored in the
/etc/redis/client.crt
file. If you use a different path, modify the path in the corresponding steps of the procedure. -
The TLS server key is stored in the
/etc/redis/redis.key
file. If you use a different path, modify the path in the corresponding steps of the procedure. -
The TLS server certificate is stored in the
/etc/redis/redis.crt
file. If you use a different path, modify the path in the corresponding steps of the procedure. -
The CA certificate is stored in the
/etc/redis/ca.crt
file. If you use a different path, modify the path in the corresponding steps of the procedure.
Additionally, for the pmproxy
daemon:
-
The private server key is stored in the
/etc/pcp/tls/server.key
file. If you use a different path, modify the path in the corresponding steps of the procedure.
Procedure
As a root user, open the
/etc/redis/redis.conf
file and adjust the TLS/SSL options to reflect the following properties:port 0 tls-port 6379 tls-cert-file /etc/redis/redis.crt tls-key-file /etc/redis/redis.key tls-client-key-file /etc/redis/client.key tls-client-cert-file /etc/redis/client.crt tls-ca-cert-file /etc/redis/ca.crt
Ensure
redis
can access the TLS certificates:# su redis -s /bin/bash -c \ 'ls -1 /etc/redis/ca.crt /etc/redis/redis.key /etc/redis/redis.crt' /etc/redis/ca.crt /etc/redis/redis.crt /etc/redis/redis.key
Restart the
redis
server to apply the configuration changes:# systemctl restart redis
Verification
Confirm the TLS configuration works:
# redis-cli --tls --cert /etc/redis/client.crt \ --key /etc/redis/client.key \ --cacert /etc/redis/ca.crt <<< "PING" PONG
Unsuccessful TLS configuration might result in the following error message:
Could not negotiate a TLS connection: Invalid CA Certificate File/Directory
9.7. Creating panels and alert in PCP Redis data source
After adding the PCP Redis data source, you can view the dashboard with an overview of useful metrics, add a query to visualize the load graph, and create alerts that help you to view the system issues after they occur.
Prerequisites
- The PCP Redis is configured. For more information, see Configuring PCP Redis.
-
The
grafana-server
is accessible. For more information, see Accessing the Grafana web UI.
Procedure
- Log into the Grafana web UI.
- In the Grafana Home page, click Add your first data source.
- In the Add data source pane, type redis in the Filter by name or type text box and then click PCP Redis.
In the Data Sources / PCP Redis pane, perform the following:
-
Add
http://localhost:44322
in the URL field and then click . Click
→ → to see a dashboard with an overview of any useful metrics.Figure 9.2. PCP Redis: Host Overview
-
Add
Add a new panel:
- From the menu, hover over the → → to add a panel.
-
In the Query tab, select PCP Redis from the query list instead of the selected default option, and in the text field of A, enter a metric, for example,
kernel.all.load
to visualize the kernel load graph. - Optional: Add Panel title and Description, and update other options from the Settings.
- Click Dashboard name. to apply changes and save the dashboard. Add
Click
to apply changes and go back to the dashboard.Figure 9.3. PCP Redis query panel
Create an alert rule:
- In the PCP Redis query panel, click Alert and then click Create Alert.
- Edit the Name, Evaluate query, and For fields from the Rule, and specify the Conditions for your alert.
Click
to apply changes and save the dashboard. Click to apply changes and go back to the dashboard.Figure 9.4. Creating alerts in the PCP Redis panel
- Optional: In the same panel, scroll down and click icon to delete the created rule.
Optional: From the menu, click Alerting icon to view the created alert rules with different alert statuses, to edit the alert rule, or to pause the existing rule from the Alert Rules tab.
To add a notification channel for the created alert rule to receive an alert notification from Grafana, see Adding notification channels for alerts.
9.8. Adding notification channels for alerts
By adding notification channels, you can receive an alert notification from Grafana whenever the alert rule conditions are met and the system needs further monitoring.
You can receive these alerts after selecting any one type from the supported list of notifiers, which includes DingDing, Discord, Email, Google Hangouts Chat, HipChat, Kafka REST Proxy, LINE, Microsoft Teams, OpsGenie, PagerDuty, Prometheus Alertmanager, Pushover, Sensu, Slack, Telegram, Threema Gateway, VictorOps, and webhook.
Prerequisites
-
The
grafana-server
is accessible. For more information, see Accessing the Grafana web UI. - An alert rule is created. For more information, see Creating panels and alert in PCP Redis data source.
Configure SMTP and add a valid sender’s email address in the
/etc/grafana/grafana.ini
file:# vi /etc/grafana/grafana.ini [smtp] enabled = true from_address = abc@gmail.com
Replace abc@gmail.com with a valid email address.
Restart the grafana-server:
# systemctl restart grafana-server.service
Procedure
- From the menu, hover over the → → .
In the
New contact point
details view, perform the following:- Enter your name in the Name text box
-
Select the Contact point type, for example, Email and enter the email address. You can add many email addresses by using the
;
separator. - Optional: Configure Optional Email settings and Notification settings.
- Click .
Select a notification channel in the alert rule:
- From the menu, select Notification policies icon and then click + New specific policy.
- Choose the Contact point you have just created
- Click the Save policy button
Additional resources
9.9. Setting up authentication between PCP components
You can set up authentication by using the scram-sha-256
authentication mechanism, which is supported by PCP through the Simple Authentication Security Layer (SASL) framework.
Procedure
Install the
sasl
framework for thescram-sha-256
authentication mechanism:# dnf install cyrus-sasl-scram cyrus-sasl-lib
Specify the supported authentication mechanism and the user database path in the
pmcd.conf
file:# vi /etc/sasl2/pmcd.conf mech_list: scram-sha-256 sasldb_path: /etc/pcp/passwd.db
Create a new user:
# useradd -r metrics
Replace metrics with your user name.
Add the created user in the user database:
# saslpasswd2 -a pmcd metrics Password: Again (for verification):
To add the created user, you are required to enter the metrics account password.
Set the permissions of the user database:
# chown root:pcp /etc/pcp/passwd.db # chmod 640 /etc/pcp/passwd.db
Restart the
pmcd
service:# systemctl restart pmcd
Verification
Verify the
sasl
configuration:# pminfo -f -h "pcp://127.0.0.1?username=metrics" disk.dev.read Password: disk.dev.read inst [0 or "sda"] value 19540
Additional resources
-
saslauthd(8)
,pminfo(1)
, andsha256
man pages - How can I setup authentication between PCP components, like PMDAs and pmcd in RHEL 8.2?
9.10. Installing PCP bpftrace
Install the PCP bpftrace
agent to introspect a system and to gather metrics from the kernel and user-space tracepoints.
The bpftrace
agent uses bpftrace scripts to gather the metrics. The bpftrace
scripts use the enhanced Berkeley Packet Filter (eBPF
).
This procedure describes how to install PCP bpftrace.
Prerequisites
- PCP is configured. For more information, see Setting up PCP with pcp-zeroconf.
-
The
grafana-server
is configured. For more information, see Setting up a grafana-server. -
The
scram-sha-256
authentication mechanism is configured. For more information, see Setting up authentication between PCP components.
Procedure
Install the
pcp-pmda-bpftrace
package:# dnf install pcp-pmda-bpftrace
Edit the
bpftrace.conf
file and add the user that you created in Setting up authentication between PCP components:# vi /var/lib/pcp/pmdas/bpftrace/bpftrace.conf [dynamic_scripts] enabled = true auth_enabled = true allowed_users = root,metrics
Replace metrics with your user name.
Install
bpftrace
PMDA:# cd /var/lib/pcp/pmdas/bpftrace/ # ./Install Updating the Performance Metrics Name Space (PMNS) ... Terminate PMDA if already installed ... Updating the PMCD control file, and notifying PMCD ... Check bpftrace metrics have appeared ... 7 metrics and 6 values
The
pmda-bpftrace
is now installed, and can only be used after authenticating your user. For more information, see Viewing the PCP bpftrace System Analysis dashboard.
Additional resources
-
pmdabpftrace(1)
andbpftrace
man pages
9.11. Viewing the PCP bpftrace System Analysis dashboard
Using the PCP bpftrace data source, you can access the live data from sources which are not available as normal data from the pmlogger
or archives.
In the PCP bpftrace data source, you can view the dashboard with an overview of useful metrics.
Prerequisites
- The PCP bpftrace is installed. For more information, see Installing PCP bpftrace.
-
The
grafana-server
is accessible. For more information, see Accessing the Grafana web UI.
Procedure
- Log into the Grafana web UI.
- In the Grafana Home page, click Add your first data source.
- In the Add data source pane, type bpftrace in the Filter by name or type text box and then click PCP bpftrace.
In the Data Sources / PCP bpftrace pane, perform the following:
-
Add
http://localhost:44322
in the URL field. - Toggle the Basic Auth option and add the created user credentials in the User and Password field.
Click
.Figure 9.5. Adding PCP bpftrace in the data source
Click
→ → to see a dashboard with an overview of any useful metrics.Figure 9.6. PCP bpftrace: System Analysis
-
Add
9.12. Installing PCP Vector
This procedure describes how to install PCP Vector.
Prerequisites
- PCP is configured. For more information, see Setting up PCP with pcp-zeroconf.
-
The
grafana-server
is configured. For more information, see Setting up a grafana-server.
Procedure
Install the
pcp-pmda-bcc
package:# dnf install pcp-pmda-bcc
Install the
bcc
PMDA:# cd /var/lib/pcp/pmdas/bcc # ./Install [Wed Apr 1 00:27:48] pmdabcc(22341) Info: Initializing, currently in 'notready' state. [Wed Apr 1 00:27:48] pmdabcc(22341) Info: Enabled modules: [Wed Apr 1 00:27:48] pmdabcc(22341) Info: ['biolatency', 'sysfork', [...] Updating the Performance Metrics Name Space (PMNS) ... Terminate PMDA if already installed ... Updating the PMCD control file, and notifying PMCD ... Check bcc metrics have appeared ... 1 warnings, 1 metrics and 0 values
Additional resources
-
pmdabcc(1)
man page
9.13. Viewing the PCP Vector Checklist
The PCP Vector data source displays live metrics and uses the pcp
metrics. It analyzes data for individual hosts.
After adding the PCP Vector data source, you can view the dashboard with an overview of useful metrics and view the related troubleshooting or reference links in the checklist.
Prerequisites
- The PCP Vector is installed. For more information, see Installing PCP Vector.
-
The
grafana-server
is accessible. For more information, see Accessing the Grafana web UI.
Procedure
- Log into the Grafana web UI.
- In the Grafana Home page, click Add your first data source.
- In the Add data source pane, type vector in the Filter by name or type text box and then click PCP Vector.
In the Data Sources / PCP Vector pane, perform the following:
-
Add
http://localhost:44322
in the URL field and then click . Click
→ → to see a dashboard with an overview of any useful metrics.Figure 9.7. PCP Vector: Host Overview
From the menu, hover over the Performance Co-Pilot plugin and then click PCP Vector Checklist.
In the PCP checklist, click help or warning icon to view the related troubleshooting or reference links.
Figure 9.8. Performance Co-Pilot / PCP Vector Checklist
9.14. Using heatmaps in Grafana
You can use heatmaps in Grafana to view histograms of your data over time, identify trends and patterns in your data, and see how they change over time. Each column within a heatmap represents a single histogram with different colored cells representing the different densities of observation of a given value within that histogram.
This specific workflow is for the heatmaps in Grafana version 9.0.9 and later on RHEL 9.
Prerequisites
- PCP Redis is configured. For more information see Configuring PCP Redis.
-
The
grafana-server
is accessible. For more information see Accessing the Grafana Web UI. - The PCP Redis data source is configured. For more information see Creating panels and alerts in PCP Redis data source.
Procedure
- Hover the cursor over the Dashboards tab and click + New dashboard.
- In the Add panel menu, click Add a new panel.
In the Query tab:
- Select PCP Redis from the query list instead of the selected default option.
-
In the text field of A, enter a metric, for example,
kernel.all.load
to visualize the kernel load graph.
- Click the visualization dropdown menu, which is set to Time series by default, and then click Heatmap.
- Optional: In the Panel Options dropdown menu, add a Panel Title and Description.
In the Heatmap dropdown menu, under the Calculate from data setting, click Yes.
Heatmap
- Optional: In the Colors dropdown menu, change the Scheme from the default Orange and select the number of steps (color shades).
Optional: In the Tooltip dropdown menu, under the Show histogram (Y Axis) setting, click the toggle to display a cell’s position within its specific histogram when hovering your cursor over a cell in the heatmap. For example:
Show histogram (Y Axis) cell display
9.15. Troubleshooting Grafana issues
It is sometimes necessary to troubleshoot Grafana issues, for example when Grafana does not display any data, the dashboard appears black, or similar problems occur.
Procedure
Verify that the
pmlogger
service is up and running by executing the following command:$ systemctl status pmlogger
Verify whether files were created or modified on the disk by executing the following command:
$ ls /var/log/pcp/pmlogger/$(hostname)/ -rlt total 4024 -rw-r--r--. 1 pcp pcp 45996 Oct 13 2019 20191013.20.07.meta.xz -rw-r--r--. 1 pcp pcp 412 Oct 13 2019 20191013.20.07.index -rw-r--r--. 1 pcp pcp 32188 Oct 13 2019 20191013.20.07.0.xz -rw-r--r--. 1 pcp pcp 44756 Oct 13 2019 20191013.20.30-00.meta.xz [..]
Verify that the
pmproxy
service is running by executing the following command:$ systemctl status pmproxy
Verify that
pmproxy
is running, time series support is enabled, and a connection to Redis is established by viewing the/var/log/pcp/pmproxy/pmproxy.log
file and ensure that it contains the following text:pmproxy(1716) Info: Redis slots, command keys, schema version setup
Here, 1716 is the PID of pmproxy, which will be different for every invocation of
pmproxy
.Verify if the Redis database contains any keys by executing the following command:
$ redis-cli dbsize (integer) 34837
Verify if any PCP metrics are in the Redis database and
pmproxy
is able to access them by executing the following commands:$ pmseries disk.dev.read 2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df $ pmseries "disk.dev.read[count:10]" 2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df [Mon Jul 26 12:21:10.085468000 2021] 117971 70e83e88d4e1857a3a31605c6d1333755f2dd17c [Mon Jul 26 12:21:00.087401000 2021] 117758 70e83e88d4e1857a3a31605c6d1333755f2dd17c [Mon Jul 26 12:20:50.085738000 2021] 116688 70e83e88d4e1857a3a31605c6d1333755f2dd17c [...]
$ redis-cli --scan --pattern "*$(pmseries 'disk.dev.read')" pcp:metric.name:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df pcp:values:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df pcp:desc:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df pcp:labelvalue:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df pcp:instances:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df pcp:labelflags:series:2eb3e58d8f1e231361fb15cf1aa26fe534b4d9df
Verify if there are any errors in the Grafana logs by executing the following command:
$ journalctl -e -u grafana-server -- Logs begin at Mon 2021-07-26 11:55:10 IST, end at Mon 2021-07-26 12:30:15 IST. -- Jul 26 11:55:17 localhost.localdomain systemd[1]: Starting Grafana instance... Jul 26 11:55:17 localhost.localdomain grafana-server[1171]: t=2021-07-26T11:55:17+0530 lvl=info msg="Starting Grafana" logger=server version=7.3.6 c> Jul 26 11:55:17 localhost.localdomain grafana-server[1171]: t=2021-07-26T11:55:17+0530 lvl=info msg="Config loaded from" logger=settings file=/usr/s> Jul 26 11:55:17 localhost.localdomain grafana-server[1171]: t=2021-07-26T11:55:17+0530 lvl=info msg="Config loaded from" logger=settings file=/etc/g> [...]
Chapter 10. Optimizing the system performance using the web console
Learn how to set a performance profile in the RHEL web console to optimize the performance of the system for a selected task.
10.1. Performance tuning options in the web console
Red Hat Enterprise Linux 9 provides several performance profiles that optimize the system for the following tasks:
- Systems using the desktop
- Throughput performance
- Latency performance
- Network performance
- Low power consumption
- Virtual machines
The TuneD
service optimizes system options to match the selected profile.
In the web console, you can set which performance profile your system uses.
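The web console applies the same profiles that the TuneD service manages from the command line. As a quick cross-check (a minimal sketch, assuming the tuned package is installed and the service is running), you can list the available profiles and the currently active one with the tuned-adm utility:
$ tuned-adm list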
Additional resources
10.2. Setting a performance profile in the web console
Depending on the task you want to perform, you can use the web console to optimize system performance by setting a suitable performance profile.
Prerequisites
You have installed the RHEL 9 web console.
For instructions, see Installing and enabling the web console.
Procedure
Log in to the RHEL 9 web console.
For details, see Logging in to the web console.
- Click Overview.
In the Configuration section, click the current performance profile.
In the Change Performance Profile dialog box, set the required profile.
- Click .
Verification
- The Overview tab now shows the selected performance profile in the Configuration section.
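As a hedged alternative to checking the Overview tab, you can confirm the change from the command line of the same system. The profile name shown below is only illustrative; it reflects whichever profile you selected in the dialog:
$ tuned-adm active
Current active profile: my-selected-profile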
10.3. Monitoring performance on the local system by using the web console
The Red Hat Enterprise Linux web console uses the Utilization, Saturation, and Errors (USE) method for troubleshooting. The performance metrics page has a historical view of your data, organized chronologically with the newest data at the top.
In the Metrics and history page, you can view events, errors, and graphical representation for resource utilization and saturation.
Prerequisites
You have installed the RHEL 9 web console.
For instructions, see Installing and enabling the web console.
-
The
cockpit-pcp
package, which enables collecting the performance metrics, is installed. The Performance Co-Pilot (PCP) service is enabled:
# systemctl enable --now pmlogger.service pmproxy.service
Procedure
Log in to the RHEL 9 web console.
For details, see Logging in to the web console.
- Click Overview.
In the Usage section, click View metrics and history.
The Metrics and history section opens:
- The current system configuration and usage:
- The performance metrics in a graphical form over a user-specified time interval:
10.4. Monitoring performance on several systems by using the web console and Grafana
Grafana enables you to collect data from several systems at once and review a graphical representation of their collected Performance Co-Pilot (PCP) metrics. You can set up performance metrics monitoring and export for several systems in the web console interface.
Prerequisites
You have installed the RHEL 9 web console.
For instructions, see Installing and enabling the web console.
-
You have installed the
cockpit-pcp
package. You have enabled the PCP service:
# systemctl enable --now pmlogger.service pmproxy.service
- You have set up the Grafana dashboard. For more information, see Setting up a grafana-server.
You have installed the
redis
package.Alternatively, you can install the package from the web console interface later in the procedure.
Procedure
Log in to the RHEL 9 web console.
For details, see Logging in to the web console.
- In the Overview page, click View metrics and history in the Usage table.
- Click the button.
Move the Export to network slider to active position.
If you do not have the
redis
package installed, the web console prompts you to install it.-
To open the
pmproxy
service, select a zone from a drop-down list and click the button. - Click Save.
Verification
- Click Networking.
- In the Firewall table, click the button.
-
Search for
pmproxy
in your selected zone.
Repeat this procedure on all the systems you want to watch.
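If you prefer to verify the firewall change from the command line, one hedged option (assuming firewalld with its predefined pmproxy service definition, and replacing public with the zone you selected) is:
# firewall-cmd --zone=public --list-services | grep pmproxy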
Additional resources
Chapter 11. Setting the disk scheduler
The disk scheduler is responsible for ordering the I/O requests submitted to a storage device.
You can configure the scheduler in several different ways:
- Set the scheduler using TuneD, as described in Setting the disk scheduler using TuneD
-
Set the scheduler using
udev
, as described in Setting the disk scheduler using udev rules - Temporarily change the scheduler on a running system, as described in Temporarily setting a scheduler for a specific disk
In Red Hat Enterprise Linux 9, block devices support only multi-queue scheduling. This enables the block layer performance to scale well with fast solid-state drives (SSDs) and multi-core systems.
The traditional, single-queue schedulers, which were available in Red Hat Enterprise Linux 7 and earlier versions, have been removed.
11.1. Available disk schedulers
The following multi-queue disk schedulers are supported in Red Hat Enterprise Linux 9:
none
- Implements a first-in first-out (FIFO) scheduling algorithm. It merges requests at the generic block layer through a simple last-hit cache.
mq-deadline
Attempts to provide a guaranteed latency for requests from the point at which requests reach the scheduler.
The
mq-deadline
scheduler sorts queued I/O requests into a read or write batch and then schedules them for execution in increasing logical block addressing (LBA) order. By default, read batches take precedence over write batches, because applications are more likely to block on read I/O operations. Aftermq-deadline
processes a batch, it checks how long write operations have been starved of processor time and schedules the next read or write batch as appropriate.This scheduler is suitable for most use cases, but particularly those in which the write operations are mostly asynchronous.
bfq
Targets desktop systems and interactive tasks.
The
bfq
scheduler ensures that a single application is never using all of the bandwidth. In effect, the storage device is always as responsive as if it was idle. In its default configuration,bfq
focuses on delivering the lowest latency rather than achieving the maximum throughput.bfq
is based oncfq
code. It does not grant the disk to each process for a fixed time slice but assigns a budget, measured in number of sectors, to the process.This scheduler is suitable, for example, when copying large files, because the system does not become unresponsive in this case.
kyber
The scheduler tunes itself to achieve a latency goal by calculating the latencies of every I/O request submitted to the block I/O layer. You can configure the target latencies for read, in the case of cache-misses, and synchronous write requests.
This scheduler is suitable for fast devices, for example NVMe, SSD, or other low latency devices.
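For reference, kyber exposes its latency targets through sysfs once it is the active scheduler on a device. The following is a minimal sketch; the read_lat_nsec and write_lat_nsec attribute names and the nanosecond values are assumptions based on the upstream kyber implementation, and sdc is only an example device:
# cat /sys/block/sdc/queue/iosched/read_lat_nsec
2000000
# echo 10000000 > /sys/block/sdc/queue/iosched/write_lat_nsec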
11.2. Different disk schedulers for different use cases
Depending on the task that your system performs, the following disk schedulers are recommended as a baseline prior to any analysis and tuning tasks:
Use case | Disk scheduler |
---|---|
Traditional HDD with a SCSI interface | Use mq-deadline or bfq
High-performance SSD or a CPU-bound system with fast storage | Use none, especially when running enterprise applications. Alternatively, use kyber
Desktop or interactive tasks | Use bfq
Virtual guest | Use mq-deadline. With a host bus adapter (HBA) driver that is multi-queue capable, use none
11.3. The default disk scheduler
Block devices use the default disk scheduler unless you specify another scheduler.
For Non-Volatile Memory Express (NVMe)
block devices specifically, the default scheduler is none
and Red Hat recommends not changing this.
The kernel selects a default disk scheduler based on the type of device. The automatically selected scheduler is typically the optimal setting. If you require a different scheduler, Red Hat recommends using udev
rules or the TuneD application to configure it. Match the selected devices and switch the scheduler only for those devices.
11.4. Determining the active disk scheduler
This procedure determines which disk scheduler is currently active on a given block device.
Procedure
Read the content of the
/sys/block/device/queue/scheduler
file:# cat /sys/block/device/queue/scheduler [mq-deadline] kyber bfq none
In the file name, replace device with the block device name, for example
sdc
.The active scheduler is listed in square brackets (
[ ]
).
11.5. Setting the disk scheduler using TuneD
This procedure creates and enables a TuneD profile that sets a given disk scheduler for selected block devices. The setting persists across system reboots.
In the following commands and configuration, replace:
-
device with the name of the block device, for example
sdf
-
selected-scheduler with the disk scheduler that you want to set for the device, for example
bfq
Prerequisites
-
The
TuneD
service is installed and enabled. For details, see Installing and enabling TuneD.
Procedure
Optional: Select an existing TuneD profile on which your profile will be based. For a list of available profiles, see TuneD profiles distributed with RHEL.
To see which profile is currently active, use:
$ tuned-adm active
Create a new directory to hold your TuneD profile:
# mkdir /etc/tuned/my-profile
Find the system unique identifier of the selected block device:
$ udevadm info --query=property --name=/dev/device | grep -E '(WWN|SERIAL)' ID_WWN=0x5002538d00000000_ ID_SERIAL=Generic-_SD_MMC_20120501030900000-0:0 ID_SERIAL_SHORT=20120501030900000
NoteThe command in this example returns all values identified as a World Wide Name (WWN) or serial number associated with the specified block device. Although using a WWN is preferred, a WWN is not always available for a given device, and any of the values returned by the example command are acceptable to use as the device system unique ID. For an alternative way to list persistent device identifiers, see the example after the verification steps of this procedure.
Create the
/etc/tuned/my-profile/tuned.conf
configuration file. In the file, set the following options:Optional: Include an existing profile:
[main] include=existing-profile
Set the selected disk scheduler for the device that matches the WWN identifier:
[disk] devices_udev_regex=IDNAME=device system unique id elevator=selected-scheduler
Here:
-
Replace IDNAME with the name of the identifier being used (for example,
ID_WWN
). Replace device system unique id with the value of the chosen identifier (for example,
0x5002538d00000000
).To match multiple devices in the
devices_udev_regex
option, enclose the identifiers in parentheses and separate them with vertical bars:devices_udev_regex=(ID_WWN=0x5002538d00000000)|(ID_WWN=0x1234567800000000)
-
Replace IDNAME with the name of the identifier being used (for example,
Enable your profile:
# tuned-adm profile my-profile
Verification
Verify that the TuneD profile is active and applied:
$ tuned-adm active Current active profile: my-profile
$ tuned-adm verify Verification succeeded, current system settings match the preset profile. See TuneD log file ('/var/log/tuned/tuned.log') for details.
Read the contents of the
/sys/block/device/queue/scheduler
file:# cat /sys/block/device/queue/scheduler [mq-deadline] kyber bfq none
In the file name, replace device with the block device name, for example
sdc
.The active scheduler is listed in square brackets (
[]
).
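As an alternative to the udevadm query used earlier in this procedure, you can review the persistent identifiers that exist for a disk by listing the by-id symbolic links. This is a minimal sketch; sdf is only an example device name:
$ ls -l /dev/disk/by-id/ | grep sdf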
Additional resources
11.6. Setting the disk scheduler using udev rules
This procedure sets a given disk scheduler for specific block devices using udev
rules. The setting persists across system reboots.
In the following commands and configuration, replace:
-
device with the name of the block device, for example
sdf
-
selected-scheduler with the disk scheduler that you want to set for the device, for example
bfq
Procedure
Find the system unique identifier of the block device:
$ udevadm info --name=/dev/device | grep -E '(WWN|SERIAL)' E: ID_WWN=0x5002538d00000000 E: ID_SERIAL=Generic-_SD_MMC_20120501030900000-0:0 E: ID_SERIAL_SHORT=20120501030900000
NoteThe command in this example returns all values identified as a World Wide Name (WWN) or serial number associated with the specified block device. Although using a WWN is preferred, a WWN is not always available for a given device, and any of the values returned by the example command are acceptable to use as the device system unique ID.
Configure the
udev
rule. Create the/etc/udev/rules.d/99-scheduler.rules
file with the following content:ACTION=="add|change", SUBSYSTEM=="block", ENV{IDNAME}=="device system unique id", ATTR{queue/scheduler}="selected-scheduler"
Here:
-
Replace IDNAME with the name of the identifier being used (for example,
ID_WWN
). -
Replace device system unique id with the value of the chosen identifier (for example,
0x5002538d00000000
).
-
Replace IDNAME with the name of the identifier being used (for example,
Reload
udev
rules:# udevadm control --reload-rules
Apply the scheduler configuration:
# udevadm trigger --type=devices --action=change
Verification
Verify the active scheduler:
# cat /sys/block/device/queue/scheduler
11.7. Temporarily setting a scheduler for a specific disk
This procedure sets a given disk scheduler for specific block devices. The setting does not persist across system reboots.
Procedure
Write the name of the selected scheduler to the
/sys/block/device/queue/scheduler
file:# echo selected-scheduler > /sys/block/device/queue/scheduler
In the file name, replace device with the block device name, for example
sdc
.
Verification
Verify that the scheduler is active on the device:
# cat /sys/block/device/queue/scheduler
Chapter 12. Tuning the performance of a Samba server
Learn what settings can improve the performance of Samba in certain situations, and which settings can have a negative performance impact.
Parts of this section were adapted from the Performance Tuning documentation published in the Samba Wiki. License: CC BY 4.0. Authors and contributors: See the history tab on the Wiki page.
Prerequisites
- Samba is set up as a file or print server
12.1. Setting the SMB protocol version
Each new SMB version adds features and improves the performance of the protocol. Recent Windows and Windows Server operating systems always support the latest protocol version. If Samba also uses the latest protocol version, Windows clients connecting to Samba benefit from the performance improvements. In Samba, the default value of the server max protocol parameter is set to the latest supported stable SMB protocol version.
To always have the latest stable SMB protocol version enabled, do not set the server max protocol
parameter. If you set the parameter manually, you will need to modify the setting with each new version of the SMB protocol, to have the latest protocol version enabled.
The following procedure explains how to use the default value in the server max protocol
parameter.
Procedure
-
Remove the
server max protocol
parameter from the[global]
section in the/etc/samba/smb.conf
file. Reload the Samba configuration
# smbcontrol all reload-config
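To confirm which protocol version is in effect after the reload, one hedged option (assuming the testparm utility, which is provided by the Samba packages) is to print the effective configuration, including defaulted parameters:
# testparm -sv | grep -i 'server max protocol'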
12.3. Settings that can have a negative performance impact
By default, the kernel in Red Hat Enterprise Linux is tuned for high network performance. For example, the kernel uses an auto-tuning mechanism for buffer sizes. Setting the socket options
parameter in the /etc/samba/smb.conf
file overrides these kernel settings. As a result, setting this parameter decreases the Samba network performance in most cases.
To use the optimized settings from the kernel, remove the socket options parameter from the [global] section of the /etc/samba/smb.conf file.
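Before removing the parameter, you can check whether it is explicitly set in your configuration. This is a minimal sketch, assuming the testparm utility is available; without the -v option, testparm prints only parameters that differ from their defaults:
# testparm -s | grep 'socket options'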
Chapter 13. Optimizing virtual machine performance
Virtual machines (VMs) always experience some degree of performance deterioration in comparison to the host. The following sections explain the reasons for this deterioration and provide instructions on how to minimize the performance impact of virtualization in RHEL 9, so that your hardware infrastructure resources can be used as efficiently as possible.
13.1. What influences virtual machine performance
VMs are run as user-space processes on the host. The hypervisor therefore needs to convert the host’s system resources so that the VMs can use them. As a consequence, a portion of the resources is consumed by the conversion, and the VM therefore cannot achieve the same performance efficiency as the host.
The impact of virtualization on system performance
More specific reasons for VM performance loss include:
- Virtual CPUs (vCPUs) are implemented as threads on the host, handled by the Linux scheduler.
- VMs do not automatically inherit optimization features, such as NUMA or huge pages, from the host kernel.
- Disk and network I/O settings of the host might have a significant performance impact on the VM.
- Network traffic typically travels to a VM through a software-based bridge.
- Depending on the host devices and their models, there might be significant overhead due to emulation of particular hardware.
The severity of the virtualization impact on the VM performance is influenced by a variety of factors, which include:
- The number of concurrently running VMs.
- The amount of virtual devices used by each VM.
- The device types used by the VMs.
Reducing VM performance loss
RHEL 9 provides a number of features you can use to reduce the negative performance effects of virtualization. Notably:
-
The
TuneD
service can automatically optimize the resource distribution and performance of your VMs. - Block I/O tuning can improve the performance of the VM’s block devices, such as disks.
- NUMA tuning can increase vCPU performance.
- Virtual networking can be optimized in various ways.
Tuning VM performance can have adverse effects on other virtualization functions. For example, it can make migrating the modified VM more difficult.
13.2. Optimizing virtual machine performance by using TuneD
The TuneD
utility is a tuning profile delivery mechanism that adapts RHEL for certain workload characteristics, such as requirements for CPU-intensive tasks or storage-network throughput responsiveness. It provides a number of tuning profiles that are pre-configured to enhance performance and reduce power consumption in a number of specific use cases. You can edit these profiles or create new profiles to create performance solutions tailored to your environment, including virtualized environments.
To optimize RHEL 9 for virtualization, use the following profiles:
-
For RHEL 9 virtual machines, use the virtual-guest profile. It is based on the generally applicable
throughput-performance
profile, but also decreases the swappiness of virtual memory. - For RHEL 9 virtualization hosts, use the virtual-host profile. This enables more aggressive writeback of dirty memory pages, which benefits the host performance.
Prerequisites
-
The
TuneD
service is installed and enabled.
Procedure
To enable a specific TuneD
profile:
List the available
TuneD
profiles.# tuned-adm list Available profiles: - balanced - General non-specialized TuneD profile - desktop - Optimize for the desktop use-case [...] - virtual-guest - Optimize for running inside a virtual guest - virtual-host - Optimize for running KVM guests Current active profile: balanced
Optional: Create a new
TuneD
profile or edit an existingTuneD
profile.For more information, see Customizing TuneD profiles.
Activate a
TuneD
profile.# tuned-adm profile selected-profile
To optimize a virtualization host, use the virtual-host profile.
# tuned-adm profile virtual-host
On a RHEL guest operating system, use the virtual-guest profile.
# tuned-adm profile virtual-guest
Additional resources
13.3. Optimizing libvirt daemons
The libvirt
virtualization suite works as a management layer for the RHEL hypervisor, and your libvirt
configuration significantly impacts your virtualization host. Notably, RHEL 9 contains two different types of libvirt
daemons, monolithic or modular, and which type of daemons you use affects how granularly you can configure individual virtualization drivers.
13.3.1. Types of libvirt daemons
RHEL 9 supports the following libvirt
daemon types:
- Monolithic libvirt
The traditional
libvirt
daemon,libvirtd
, controls a wide variety of virtualization drivers, by using a single configuration file -/etc/libvirt/libvirtd.conf
.As such,
libvirtd
allows for centralized hypervisor configuration, but may use system resources inefficiently. Therefore,libvirtd
will become unsupported in a future major release of RHEL.However, if you updated to RHEL 9 from RHEL 8, your host still uses
libvirtd
by default.- Modular libvirt
Newly introduced in RHEL 9, modular
libvirt
provides a specific daemon for each virtualization driver. These include the following:- virtqemud - A primary daemon for hypervisor management
- virtinterfaced - A secondary daemon for host NIC management
- virtnetworkd - A secondary daemon for virtual network management
- virtnodedevd - A secondary daemon for host physical device management
- virtnwfilterd - A secondary daemon for host firewall management
- virtsecretd - A secondary daemon for host secret management
- virtstoraged - A secondary daemon for storage management
Each of the daemons has a separate configuration file - for example
/etc/libvirt/virtqemud.conf
. As such, modularlibvirt
daemons provide better options for fine-tuninglibvirt
resource management.If you performed a fresh install of RHEL 9, modular
libvirt
is configured by default.
Next steps
-
If your RHEL 9 uses
libvirtd
, Red Hat recommends switching to modular daemons. For instructions, see Enabling modular libvirt daemons.
13.3.2. Enabling modular libvirt daemons
In RHEL 9, the libvirt
library uses modular daemons that handle individual virtualization driver sets on your host. For example, the virtqemud
daemon handles QEMU drivers.
If you performed a fresh install of a RHEL 9 host, your hypervisor uses modular libvirt
daemons by default. However, if you upgraded your host from RHEL 8 to RHEL 9, your hypervisor uses the monolithic libvirtd
daemon, which is the default in RHEL 8.
If that is the case, Red Hat recommends enabling the modular libvirt
daemons instead, because they provide better options for fine-tuning libvirt
resource management. In addition, libvirtd
will become unsupported in a future major release of RHEL.
Prerequisites
Your hypervisor is using the monolithic
libvirtd
service.# systemctl is-active libvirtd.service active
If this command displays
active
, you are usinglibvirtd
.- Your virtual machines are shut down.
Procedure
Stop
libvirtd
and its sockets.$ systemctl stop libvirtd.service $ systemctl stop libvirtd{,-ro,-admin,-tcp,-tls}.socket
Disable
libvirtd
to prevent it from starting on boot.$ systemctl disable libvirtd.service $ systemctl disable libvirtd{,-ro,-admin,-tcp,-tls}.socket
Enable the modular
libvirt
daemons.# for drv in qemu interface network nodedev nwfilter secret storage; do systemctl unmask virt${drv}d.service; systemctl unmask virt${drv}d{,-ro,-admin}.socket; systemctl enable virt${drv}d.service; systemctl enable virt${drv}d{,-ro,-admin}.socket; done
Start the sockets for the modular daemons.
# for drv in qemu network nodedev nwfilter secret storage; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
Optional: If you require connecting to your host from remote hosts, enable and start the virtualization proxy daemon.
Check whether the
libvirtd-tls.socket
service is enabled on your system.# grep listen_tls /etc/libvirt/libvirtd.conf listen_tls = 0
If
libvirtd-tls.socket
is not enabled (listen_tls = 0
), activatevirtproxyd
as follows:# systemctl unmask virtproxyd.service # systemctl unmask virtproxyd{,-ro,-admin}.socket # systemctl enable virtproxyd.service # systemctl enable virtproxyd{,-ro,-admin}.socket # systemctl start virtproxyd{,-ro,-admin}.socket
If
libvirtd-tls.socket
is enabled (listen_tls = 1
), activatevirtproxyd
as follows:# systemctl unmask virtproxyd.service # systemctl unmask virtproxyd{,-ro,-admin,-tls}.socket # systemctl enable virtproxyd.service # systemctl enable virtproxyd{,-ro,-admin,-tls}.socket # systemctl start virtproxyd{,-ro,-admin,-tls}.socket
To enable the TLS socket of
virtproxyd
, your host must have TLS certificates configured to work withlibvirt
. For more information, see the Upstream libvirt documentation.
Verification
Activate the enabled virtualization daemons.
# virsh uri qemu:///system
Verify that your host is using the
virtqemud
modular daemon.# systemctl is-active virtqemud.service active
If the status is
active
, you have successfully enabled modularlibvirt
daemons.
13.4. Configuring virtual machine memory
To improve the performance of a virtual machine (VM), you can assign additional host RAM to the VM. Similarly, you can decrease the amount of memory allocated to a VM so the host memory can be allocated to other VMs or tasks.
To perform these actions, you can use the web console or the command-line interface.
13.4.1. Adding and removing virtual machine memory by using the web console
To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you can use the web console to adjust the amount of memory allocated to the VM.
Prerequisites
You have installed the RHEL 9 web console.
For instructions, see Installing and enabling the web console.
The guest OS is running the memory balloon drivers. To verify this is the case:
Ensure the VM’s configuration includes the
memballoon
device:# virsh dumpxml testguest | grep memballoon <memballoon model='virtio'> </memballoon>
If this command displays any output and the model is not set to
none
, thememballoon
device is present.Ensure the balloon drivers are running in the guest OS.
-
In Windows guests, the drivers are installed as a part of the
virtio-win
driver package. For instructions, see Installing paravirtualized KVM drivers for Windows virtual machines. -
In Linux guests, the drivers are generally included by default and activate when the
memballoon
device is present.
-
In Windows guests, the drivers are installed as a part of the
- The web console VM plug-in is installed on your system.
Procedure
Optional: Obtain the information about the maximum memory and currently used memory for a VM. This will serve as a baseline for your changes, and also for verification.
# virsh dominfo testguest Max memory: 2097152 KiB Used memory: 2097152 KiB
Log in to the RHEL 9 web console.
For details, see Logging in to the web console.
In the
interface, click the VM whose information you want to see.A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
Click
next to theMemory
line in the Overview pane.The
Memory Adjustment
dialog appears.Configure the virtual memory for the selected VM.
Maximum allocation - Sets the maximum amount of host memory that the VM can use for its processes. You can specify the maximum memory when creating the VM or increase it later. You can specify memory as multiples of MiB or GiB.
Adjusting maximum memory allocation is only possible on a shut-off VM.
Current allocation - Sets the actual amount of memory allocated to the VM. This value can be less than the Maximum allocation but cannot exceed it. You can adjust the value to regulate the memory available to the VM for its processes. You can specify memory as multiples of MiB or GiB.
If you do not specify this value, the default allocation is the Maximum allocation value.
Click
.The memory allocation of the VM is adjusted.
13.4.2. Adding and removing virtual machine memory by using the command-line interface
To improve the performance of a virtual machine (VM) or to free up the host resources it is using, you can use the CLI to adjust the amount of memory allocated to the VM.
Prerequisites
The guest OS is running the memory balloon drivers. To verify this is the case:
Ensure the VM’s configuration includes the
memballoon
device:# virsh dumpxml testguest | grep memballoon <memballoon model='virtio'> </memballoon>
If this command displays any output and the model is not set to
none
, thememballoon
device is present.Ensure the balloon drivers are running in the guest OS.
-
In Windows guests, the drivers are installed as a part of the
virtio-win
driver package. For instructions, see Installing paravirtualized KVM drivers for Windows virtual machines. -
In Linux guests, the drivers are generally included by default and activate when the
memballoon
device is present.
-
In Windows guests, the drivers are installed as a part of the
Procedure
Optional: Obtain the information about the maximum memory and currently used memory for a VM. This will serve as a baseline for your changes, and also for verification.
# virsh dominfo testguest Max memory: 2097152 KiB Used memory: 2097152 KiB
Adjust the maximum memory allocated to a VM. Increasing this value improves the performance potential of the VM, and reducing the value lowers the performance footprint the VM has on your host. Note that this change can only be performed on a shut-off VM, so adjusting a running VM requires a reboot to take effect.
For example, to change the maximum memory that the testguest VM can use to 4096 MiB:
# virt-xml testguest --edit --memory memory=4096,currentMemory=4096 Domain 'testguest' defined successfully. Changes will take effect after the domain is fully powered off.
To increase the maximum memory of a running VM, you can attach a memory device to the VM. This is also referred to as memory hot plug. For details, see Attaching devices to virtual machines.
WarningRemoving memory devices from a running VM (also referred to as memory hot unplug) is not supported and is highly discouraged by Red Hat.
Optional: You can also adjust the memory currently used by the VM, up to the maximum allocation. This regulates the memory load that the VM has on the host until the next reboot, without changing the maximum VM allocation.
# virsh setmem testguest --current 2048
Verification
Confirm that the memory used by the VM has been updated:
# virsh dominfo testguest Max memory: 4194304 KiB Used memory: 2097152 KiB
Optional: If you adjusted the current VM memory, you can obtain the memory balloon statistics of the VM to evaluate how effectively it regulates its memory use.
# virsh domstats --balloon testguest Domain: 'testguest' balloon.current=365624 balloon.maximum=4194304 balloon.swap_in=0 balloon.swap_out=0 balloon.major_fault=306 balloon.minor_fault=156117 balloon.unused=3834448 balloon.available=4035008 balloon.usable=3746340 balloon.last-update=1587971682 balloon.disk_caches=75444 balloon.hugetlb_pgalloc=0 balloon.hugetlb_pgfail=0 balloon.rss=1005456
13.4.3. Adding and removing virtual machine memory by using virtio-mem
RHEL 9 provides the virtio-mem
paravirtualized memory device. This device makes it possible to dynamically add or remove host memory in virtual machines (VMs). For example, you can use virtio-mem
to move memory resources between running VMs or to resize VM memory in cloud setups based on your current requirements.
13.4.3.1. Overview of virtio-mem
virtio-mem
is a paravirtualized memory device that can be used to dynamically add or remove host memory in virtual machines (VMs). For example, you can use this device to move memory resources between running VMs or to resize VM memory in cloud setups based on your current requirements.
By using virtio-mem
, you can increase the memory of a VM beyond its initial size, and shrink it back to its original size, in units that can have the size of 4 to several hundred mebibytes (MiBs). Note, however, that virtio-mem
also relies on a specific guest operating system configuration, especially to reliably unplug memory.
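For orientation, you can check from inside a guest whether the virtio_mem driver is available. This is a hedged sketch; on recent RHEL guests the driver is expected to load automatically when a virtio-mem device is present, and it might be built into the kernel rather than provided as a loadable module, in which case lsmod shows no entry:
# lsmod | grep virtio_mem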
virtio-mem feature limitations
virtio-mem
is currently not compatible with the following features:
- Using memory locking for real-time applications on the host
- Using encrypted virtualization on the host
-
Combining
virtio-mem
withmemballoon
inflation and deflation on the host -
Unloading or reloading the
virtio_mem
driver in a VM -
Using vhost-user devices, with the exception of
virtiofs
13.4.3.2. Configuring memory onlining in virtual machines
Before using virtio-mem
to attach memory to a running virtual machine (also known as memory hot-plugging), you must configure the virtual machine (VM) operating system to automatically set the hot-plugged memory to an online state. Otherwise, the guest operating system is not able to use the additional memory. You can choose from one of the following configurations for memory onlining:
-
online_movable
-
online_kernel
-
auto-movable
To learn about differences between these configurations, see: Comparison of memory onlining configurations
Memory onlining is configured with udev rules by default in RHEL. However, when using virtio-mem
, it is recommended to configure memory onlining directly in the kernel.
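Before changing the configuration, you can check how hot-plugged memory blocks are currently onlined in the VM by reading the same sysfs file that the verification steps later in this section use:
# cat /sys/devices/system/memory/auto_online_blocks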
Prerequisites
- The host has Intel 64 or AMD64 CPU architecture.
- The host uses RHEL 9.4 or later as the operating system.
VMs running on the host use one of the following operating system versions:
RHEL 8.10
ImportantUnplugging memory from a running VM is disabled by default in RHEL 8.10 VMs.
- RHEL 9
Procedure
To set memory onlining to use the
online_movable
configuration in the VM:Set the
memhp_default_state
kernel command line parameter toonline_movable
:# grubby --update-kernel=ALL --remove-args=memhp_default_state --args=memhp_default_state=online_movable
- Reboot the VM.
To set memory onlining to use the
online_kernel
configuration in the VM:Set the
memhp_default_state
kernel command line parameter toonline_kernel
:# grubby --update-kernel=ALL --remove-args=memhp_default_state --args=memhp_default_state=online_kernel
- Reboot the VM.
To use the
auto-movable
memory onlining policy in the VM:Set the
memhp_default_state
kernel command line parameter toonline
:# grubby --update-kernel=ALL --remove-args=memhp_default_state --args=memhp_default_state=online
Set the
memory_hotplug.online_policy
kernel command line parameter toauto-movable
:# grubby --update-kernel=ALL --remove-args="memory_hotplug.online_policy" --args=memory_hotplug.online_policy=auto-movable
Optional: To further tune the
auto-movable
onlining policy, change thememory_hotplug.auto_movable_ratio
andmemory_hotplug.auto_movable_numa_aware
parameters:# grubby --update-kernel=ALL --remove-args="memory_hotplug.auto_movable_ratio" --args=memory_hotplug.auto_movable_ratio=<percentage> # grubby --update-kernel=ALL --remove-args="memory_hotplug.memory_auto_movable_numa_aware" --args=memory_hotplug.auto_movable_numa_aware=<y/n>
-
The
memory_hotplug.auto_movable_ratio parameter
sets the maximum ratio of memory only available for movable allocations compared to memory available for any allocations. The ratio is expressed in percents and the default value is: 301 (%), which is a 3:1 ratio. The
memory_hotplug.auto_movable_numa_aware
parameter controls whether thememory_hotplug.auto_movable_ratio
parameter applies to memory across all available NUMA nodes or only for memory within a single NUMA node. The default value is: y (yes)For example, if the maximum ratio is set to 301% and the
memory_hotplug.auto_movable_numa_aware
is set to y (yes), than the 3:1 ratio is applied even within the NUMA node with the attachedvirtio-mem
device. If the parameter is set to n (no), the maximum 3:1 ratio is applied only for all the NUMA nodes as a whole.Additionally, if the ratio is not exceeded, the newly hot-plugged memory will be available only for movable allocations. Otherwise, the newly hot-plugged memory will be available for both movable and unmovable allocations.
-
The
- Reboot the VM.
Verification
To see if the
online_movable
configuration has been set correctly, check the current value of thememhp_default_state
kernel parameter:# cat /sys/devices/system/memory/auto_online_blocks online_movable
To see if the
online_kernel
configuration has been set correctly, check the current value of thememhp_default_state
kernel parameter:# cat /sys/devices/system/memory/auto_online_blocks online_kernel
To see if the
auto-movable
configuration has been set correctly, check the following kernel parameters:memhp_default_state
:# cat /sys/devices/system/memory/auto_online_blocks online
memory_hotplug.online_policy
:# cat /sys/module/memory_hotplug/parameters/online_policy auto-movable
memory_hotplug.auto_movable_ratio
:# cat /sys/module/memory_hotplug/parameters/auto_movable_ratio 301
memory_hotplug.auto_movable_numa_aware
:# cat /sys/module/memory_hotplug/parameters/auto_movable_numa_aware y
13.4.3.3. Attaching a virtio-mem device to virtual machines
To attach additional memory to a running virtual machine (also known as memory hot-plugging) and afterwards be able to resize the hot-plugged memory, you can use a virtio-mem
device. Specifically, you can use libvirt XML configuration files and virsh
commands to define and attach virtio-mem
devices to virtual machines (VMs).
Prerequisites
- The host has Intel 64 or AMD64 CPU architecture.
- The host uses RHEL 9.4 or later as the operating system.
VMs running on the host use one of the following operating system versions:
RHEL 8.10
ImportantUnplugging memory from a running VM is disabled by default in RHEL 8.10 VMs.
- RHEL 9
- The VM has memory onlining configured. For instructions, see: Configuring memory onlining in virtual machines
Procedure
Ensure the XML configuration of the target VM includes the
maxMemory
parameter:# virsh edit testguest1 <domain type='kvm'> <name>testguest1</name> ... <maxMemory unit='GiB'>128</maxMemory> ... </domain>
In this example, the XML configuration of the
testguest1
VM defines amaxMemory
parameter with a 128 gibibyte (GiB) size. ThemaxMemory
size specifies the maximum memory the VM can use, which includes both initial and hot-plugged memory.Create and open an XML file to define
virtio-mem
devices on the host, for example:# vim virtio-mem-device.xml
Add XML definitions of
virtio-mem
devices to the file and save it:<memory model='virtio-mem'> <target> <size unit='GiB'>48</size> <node>0</node> <block unit='MiB'>2</block> <requested unit='GiB'>16</requested> <current unit='GiB'>16</current> </target> <alias name='ua-virtiomem0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </memory> <memory model='virtio-mem'> <target> <size unit='GiB'>48</size> <node>1</node> <block unit='MiB'>2</block> <requested unit='GiB'>0</requested> <current unit='GiB'>0</current> </target> <alias name='ua-virtiomem1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </memory>
In this example, two
virtio-mem
devices are defined with the following parameters:-
size
: This is the maximum size of the device. In the example, it is 48 GiB. Thesize
must be a multiple of theblock
size. -
node
: This is the assigned vNUMA node for thevirtio-mem
device. -
block
: This is the block size of the device. It must be at least the size of the Transparent Huge Page (THP), which is 2 MiB on Intel 64 or AMD64 CPU architecture. The 2 MiB block size on Intel 64 or AMD64 architecture is usually a good default choice. When usingvirtio-mem
with Virtual Function I/O (VFIO) or mediated devices (mdev), the total number of blocks across allvirtio-mem
devices must not be larger than 32768, otherwise the plugging of RAM might fail. -
requested
: This is the amount of memory you attach to the VM with thevirtio-mem
device. However, it is just a request towards the VM and it might not be resolved successfully, for example if the VM is not properly configured. Therequested
size must be a multiple of theblock
size and cannot exceed the maximum definedsize
. -
current
: This represents the current size thevirtio-mem
device provides to the VM. Thecurrent
size can differ from therequested
size, for example when requests cannot be completed or when rebooting the VM. alias
: This is an optional user-defined alias that you can use to specify the intendedvirtio-mem
device, for example when editing the device with libvirt commands. All user-defined aliases in libvirt must start with the "ua-" prefix.Apart from these specific parameters,
libvirt
handles thevirtio-mem
device like any other PCI device.
-
Use the XML file to attach the defined
virtio-mem
devices to a VM. For example, to permanently attach the two devices defined in thevirtio-mem-device.xml
to the runningtestguest1
VM:# virsh attach-device testguest1 virtio-mem-device.xml --live --config
The
--live
option attaches the device to a running VM only, without persistence between boots. The--config
option makes the configuration changes persistent. You can also attach the device to a shutdown VM without the--live
option.Optional: To dynamically change the
requested
size of avirtio-mem
device attached to a running VM, use thevirsh update-memory-device
command:# virsh update-memory-device testguest1 --alias ua-virtiomem0 --requested-size 4GiB
In this example:
-
testguest1
is the VM you want to update. -
--alias ua-virtiomem0
is thevirtio-mem
device specified by a previously defined alias. --requested-size 4GiB
changes therequested
size of thevirtio-mem
device to 4 GiB.WarningUnplugging memory from a running VM by reducing the
requested
size might be unreliable. Whether this process succeeds depends on various factors, such as the memory onlining policy that is used.In some cases, the guest operating system cannot complete the request successfully, because changing the amount of hot-plugged memory is not possible at that time.
Additionally, unplugging memory from a running VM is disabled by default in RHEL 8.10 VMs.
-
Optional: To unplug a
virtio-mem
device from a shut-down VM, use thevirsh detach-device
command:# virsh detach-device testguest1 virtio-mem-device.xml
Optional: To unplug a
virtio-mem
device from a running VM:Change the
requested
size of thevirtio-mem
device to 0, otherwise the attempt to unplug avirtio-mem
device from a running VM will fail.# virsh update-memory-device testguest1 --alias ua-virtiomem0 --requested-size 0
Unplug a
virtio-mem
device from the running VM:# virsh detach-device testguest1 virtio-mem-device.xml
Verification
In the VM, check the available RAM and see if the total amount now includes the hot-plugged memory:
# free -h total used free shared buff/cache available Mem: 31Gi 5.5Gi 14Gi 1.3Gi 11Gi 23Gi Swap: 8.0Gi 0B 8.0Gi
# numactl -H available: 1 nodes (0) node 0 cpus: 0 1 2 3 4 5 6 7 node 0 size: 29564 MB node 0 free: 13351 MB node distances: node 0 0: 10
The current amount of plugged-in RAM can be also viewed on the host by displaying the XML configuration of the running VM:
# virsh dumpxml testguest1 <domain type='kvm'> <name>testguest1</name> ... <currentMemory unit='GiB'>31</currentMemory> ... <memory model='virtio-mem'> <target> <size unit='GiB'>48</size> <node>0</node> <block unit='MiB'>2</block> <requested unit='GiB'>16</requested> <current unit='GiB'>16</current> </target> <alias name='ua-virtiomem0'/> <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> ... </domain>
In this example:
-
<currentMemory unit='GiB'>31</currentMemory>
represents the total RAM available in the VM from all sources. -
<current unit='GiB'>16</current>
represents the current size of the plugged-in RAM provided by thevirtio-mem
device.
-
Additional resources
13.4.3.4. Comparison of memory onlining configurations
When attaching memory to a running RHEL virtual machine (also known as memory hot-plugging), you must set the hot-plugged memory to an online state in the virtual machine (VM) operating system. Otherwise, the system will not be able to use the memory.
The following table summarizes the main considerations when choosing between the available memory onlining configurations.
Configuration name | Unplugging memory from a VM | A risk of creating a memory zone imbalance | A potential use case | Memory requirements of the intended workload |
---|---|---|---|---|
online_movable | Hot-plugged memory can be reliably unplugged. | Yes | Hot-plugging a comparatively small amount of memory | Mostly user-space memory
auto-movable | Movable portions of hot-plugged memory can be reliably unplugged. | Minimal | Hot-plugging a large amount of memory | Mostly user-space memory
online_kernel | Hot-plugged memory cannot be reliably unplugged. | No | Unreliable memory unplugging is acceptable. | User-space or kernel-space memory
A zone imbalance is a lack of available memory pages in one of the Linux memory zones. A zone imbalance can negatively impact the system performance. For example, the kernel might crash if it runs out of free memory for unmovable allocations. Usually, movable allocations contain mostly user-space memory pages and unmovable allocations contain mostly kernel-space memory pages.
13.4.4. Additional resources
- Attaching devices to virtual machines.
13.5. Optimizing virtual machine I/O performance
The input and output (I/O) capabilities of a virtual machine (VM) can significantly limit the VM’s overall efficiency. To address this, you can optimize a VM’s I/O by configuring block I/O parameters.
13.5.1. Tuning block I/O in virtual machines
When multiple block devices are being used by one or more VMs, it might be important to adjust the I/O priority of specific virtual devices by modifying their I/O weights.
Increasing the I/O weight of a device increases its priority for I/O bandwidth, and therefore provides it with more host resources. Similarly, reducing a device’s weight makes it consume less host resources.
Each device’s weight
value must be within the 100
to 1000
range. Alternatively, the value can be 0
, which removes that device from per-device listings.
Procedure
To display and set a VM’s block I/O parameters:
Display the current
<blkio>
parameters for a VM:# virsh dumpxml VM-name
<domain> [...] <blkiotune> <weight>800</weight> <device> <path>/dev/sda</path> <weight>1000</weight> </device> <device> <path>/dev/sdb</path> <weight>500</weight> </device> </blkiotune> [...] </domain>
Edit the I/O weight of a specified device:
# virsh blkiotune VM-name --device-weights device, I/O-weight
For example, the following changes the weight of the /dev/sda device in the testguest1 VM to 500.
# virsh blkiotune testguest1 --device-weights /dev/sda, 500
13.5.2. Disk I/O throttling in virtual machines
When several VMs are running simultaneously, they can interfere with system performance by using excessive disk I/O. Disk I/O throttling in KVM virtualization provides the ability to set a limit on disk I/O requests sent from the VMs to the host machine. This can prevent a VM from over-utilizing shared resources and impacting the performance of other VMs.
To enable disk I/O throttling, set a limit on disk I/O requests sent from each block device attached to VMs to the host machine.
Procedure
Use the
virsh domblklist
command to list the names of all the disk devices on a specified VM.# virsh domblklist rollin-coal Target Source ------------------------------------------------ vda /var/lib/libvirt/images/rollin-coal.qcow2 sda - sdb /home/horridly-demanding-processes.iso
Find the host block device where the virtual disk that you want to throttle is mounted.
For example, if you want to throttle the
sdb
virtual disk from the previous step, the following output shows that the disk is mounted on the/dev/nvme0n1p3
partition.$ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT zram0 252:0 0 4G 0 disk [SWAP] nvme0n1 259:0 0 238.5G 0 disk ├─nvme0n1p1 259:1 0 600M 0 part /boot/efi ├─nvme0n1p2 259:2 0 1G 0 part /boot └─nvme0n1p3 259:3 0 236.9G 0 part └─luks-a1123911-6f37-463c-b4eb-fxzy1ac12fea 253:0 0 236.9G 0 crypt /home
Set I/O limits for the block device by using the
virsh blkiotune
command.# virsh blkiotune VM-name --parameter device,limit
The following example throttles the
sdb
disk on therollin-coal
VM to 1000 read and write I/O operations per second and to 50 MB per second read and write throughput.# virsh blkiotune rollin-coal --device-read-iops-sec /dev/nvme0n1p3,1000 --device-write-iops-sec /dev/nvme0n1p3,1000 --device-write-bytes-sec /dev/nvme0n1p3,52428800 --device-read-bytes-sec /dev/nvme0n1p3,52428800
Additional information
- Disk I/O throttling can be useful in various situations, for example when VMs belonging to different customers are running on the same host, or when quality of service guarantees are given for different VMs. Disk I/O throttling can also be used to simulate slower disks.
- I/O throttling can be applied independently to each block device attached to a VM and supports limits on throughput and I/O operations.
-
Red Hat does not support using the
virsh blkdeviotune
command to configure I/O throttling in VMs. For more information about unsupported features when using RHEL 9 as a VM host, see Unsupported features in RHEL 9 virtualization.
13.5.3. Enabling multi-queue virtio-scsi
When using virtio-scsi
storage devices in your virtual machines (VMs), the multi-queue virtio-scsi feature provides improved storage performance and scalability. It enables each virtual CPU (vCPU) to have a separate queue and interrupt to use without affecting other vCPUs.
Procedure
To enable multi-queue virtio-scsi support for a specific VM, add the following to the VM’s XML configuration, where N is the total number of vCPU queues:
<controller type='scsi' index='0' model='virtio-scsi'> <driver queues='N' /> </controller>
13.6. Optimizing virtual machine CPU performance
Much like physical CPUs in host machines, vCPUs are critical to virtual machine (VM) performance. As a result, optimizing vCPUs can have a significant impact on the resource efficiency of your VMs. To optimize your vCPU:
- Adjust how many host CPUs are assigned to the VM. You can do this using the CLI or the web console.
Ensure that the vCPU model is aligned with the CPU model of the host. For example, to set the testguest1 VM to use the CPU model of the host:
# virt-xml testguest1 --edit --cpu host-model
On an ARM 64 system, use
--cpu host-passthrough
.- Manage kernel same-page merging (KSM).
If your host machine uses Non-Uniform Memory Access (NUMA), you can also configure NUMA for its VMs. This maps the host’s CPU and memory processes onto the CPU and memory processes of the VM as closely as possible. In effect, NUMA tuning provides the vCPU with a more streamlined access to the system memory allocated to the VM, which can improve the vCPU processing effectiveness.
For details, see Configuring NUMA in a virtual machine and Sample vCPU performance tuning scenario.
13.6.1. Adding and removing virtual CPUs by using the command-line interface
To increase or optimize the CPU performance of a virtual machine (VM), you can add or remove virtual CPUs (vCPUs) assigned to the VM.
When performed on a running VM, this is also referred to as vCPU hot plugging and hot unplugging. However, note that vCPU hot unplug is not supported in RHEL 9, and Red Hat highly discourages its use.
Prerequisites
Optional: View the current state of the vCPUs in the targeted VM. For example, to display the number of vCPUs on the testguest VM:
# virsh vcpucount testguest maximum config 4 maximum live 2 current config 2 current live 1
This output indicates that testguest is currently using 1 vCPU, and 1 more vCPU can be hot plugged to it to increase the VM’s performance. However, after reboot, the number of vCPUs testguest uses will change to 2, and it will be possible to hot plug 2 more vCPUs.
Procedure
Adjust the maximum number of vCPUs that can be attached to a VM, which takes effect on the VM’s next boot.
For example, to increase the maximum vCPU count for the testguest VM to 8:
# virsh setvcpus testguest 8 --maximum --config
Note that the maximum may be limited by the CPU topology, host hardware, the hypervisor, and other factors.
Adjust the current number of vCPUs attached to a VM, up to the maximum configured in the previous step. For example:
To increase the number of vCPUs attached to the running testguest VM to 4:
# virsh setvcpus testguest 4 --live
This increases the VM’s performance and host load footprint of testguest until the VM’s next boot.
To permanently decrease the number of vCPUs attached to the testguest VM to 1:
# virsh setvcpus testguest 1 --config
This decreases the VM’s performance and host load footprint of testguest after the VM’s next boot. However, if needed, additional vCPUs can be hot plugged to the VM to temporarily increase its performance.
Verification
Confirm that the current state of vCPU for the VM reflects your changes.
# virsh vcpucount testguest maximum config 8 maximum live 4 current config 1 current live 4
Additional resources
13.6.2. Managing virtual CPUs by using the web console
By using the RHEL 9 web console, you can review and configure virtual CPUs used by virtual machines (VMs) to which the web console is connected.
Prerequisites
You have installed the RHEL 9 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
Procedure
Log in to the RHEL 9 web console.
For details, see Logging in to the web console.
In the
interface, click the VM whose information you want to see.A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
Click
next to the number of vCPUs in the Overview pane.The vCPU details dialog appears.
Configure the virtual CPUs for the selected VM.
vCPU Count - The number of vCPUs currently in use.
NoteThe vCPU count cannot be greater than the vCPU Maximum.
- vCPU Maximum - The maximum number of virtual CPUs that can be configured for the VM. If this value is higher than the vCPU Count, additional vCPUs can be attached to the VM.
- Sockets - The number of sockets to expose to the VM.
- Cores per socket - The number of cores for each socket to expose to the VM.
Threads per core - The number of threads for each core to expose to the VM.
Note that the Sockets, Cores per socket, and Threads per core options adjust the CPU topology of the VM. This may be beneficial for vCPU performance and may impact the functionality of certain software in the guest OS. If a different setting is not required by your deployment, keep the default values.
Click
.The virtual CPUs for the VM are configured.
NoteChanges to virtual CPU settings only take effect after the VM is restarted.
Additional resources
13.6.3. Configuring NUMA in a virtual machine
The following methods can be used to configure Non-Uniform Memory Access (NUMA) settings of a virtual machine (VM) on a RHEL 9 host.
Prerequisites
The host is a NUMA-compatible machine. To detect whether this is the case, use the
virsh nodeinfo
command and see theNUMA cell(s)
line:# virsh nodeinfo CPU model: x86_64 CPU(s): 48 CPU frequency: 1200 MHz CPU socket(s): 1 Core(s) per socket: 12 Thread(s) per core: 2 NUMA cell(s): 2 Memory size: 67012964 KiB
If the value of the line is 2 or greater, the host is NUMA-compatible.
Procedure
For ease of use, you can set up a VM’s NUMA configuration by using automated utilities and services. However, manual NUMA setup is more likely to yield a significant performance improvement.
Automatic methods
Set the VM’s NUMA policy to
Preferred
. For example, to do so for the testguest5 VM:# virt-xml testguest5 --edit --vcpus placement=auto # virt-xml testguest5 --edit --numatune mode=preferred
Enable automatic NUMA balancing on the host:
# echo 1 > /proc/sys/kernel/numa_balancing
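The echo command applies only until the next reboot. If you also want the setting to persist, one option is a sysctl drop-in file; this is a minimal sketch, the file name is arbitrary, and kernel.numa_balancing is the sysctl that corresponds to /proc/sys/kernel/numa_balancing:
# echo "kernel.numa_balancing = 1" > /etc/sysctl.d/90-numa-balancing.conf
# sysctl -p /etc/sysctl.d/90-numa-balancing.conf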
Start the
numad
service to automatically align the VM CPU with memory resources.# systemctl start numad
Manual methods
Pin specific vCPU threads to a specific host CPU or range of CPUs. This is also possible on non-NUMA hosts and VMs, and is recommended as a safe method of vCPU performance improvement.
For example, the following commands pin vCPU threads 0 to 5 of the testguest6 VM to host CPUs 1, 3, 5, 7, 9, and 11, respectively:
# virsh vcpupin testguest6 0 1 # virsh vcpupin testguest6 1 3 # virsh vcpupin testguest6 2 5 # virsh vcpupin testguest6 3 7 # virsh vcpupin testguest6 4 9 # virsh vcpupin testguest6 5 11
Afterwards, you can verify whether this was successful:
# virsh vcpupin testguest6 VCPU CPU Affinity ---------------------- 0 1 1 3 2 5 3 7 4 9 5 11
After pinning vCPU threads, you can also pin QEMU process threads associated with a specified VM to a specific host CPU or range of CPUs. For example, the following commands pin the QEMU process thread of testguest6 to CPUs 13 and 15, and verify this was successful:
# virsh emulatorpin testguest6 13,15 # virsh emulatorpin testguest6 emulator: CPU Affinity ---------------------------------- *: 13,15
Finally, you can also specify which host NUMA nodes will be assigned specifically to a certain VM. This can improve the host memory usage by the VM’s vCPU. For example, the following commands set testguest6 to use host NUMA nodes 3 to 5, and verify this was successful:
# virsh numatune testguest6 --nodeset 3-5 # virsh numatune testguest6
For best performance results, it is recommended to use all of the manual tuning methods listed above together.
Known issues
Additional resources
- Sample vCPU performance tuning scenario
-
View the current NUMA configuration of your system using the
numastat
utility
13.6.4. Sample vCPU performance tuning scenario
To obtain the best vCPU performance possible, Red Hat recommends using manual vcpupin
, emulatorpin
, and numatune
settings together, as in the following scenario.
Starting scenario
Your host has the following hardware specifics:
- 2 NUMA nodes
- 3 CPU cores on each node
- 2 threads on each core
The output of
virsh nodeinfo
of such a machine would look similar to:# virsh nodeinfo CPU model: x86_64 CPU(s): 12 CPU frequency: 3661 MHz CPU socket(s): 2 Core(s) per socket: 3 Thread(s) per core: 2 NUMA cell(s): 2 Memory size: 31248692 KiB
You intend to modify an existing VM to have 8 vCPUs, which means that it will not fit in a single NUMA node.
Therefore, you should distribute 4 vCPUs on each NUMA node and make the vCPU topology resemble the host topology as closely as possible. This means that vCPUs that run as sibling threads of a given physical CPU should be pinned to host threads on the same core. For details, see the Solution below:
Solution
Obtain the information about the host topology:
# virsh capabilities
The output should include a section that looks similar to the following:
<topology> <cells num="2"> <cell id="0"> <memory unit="KiB">15624346</memory> <pages unit="KiB" size="4">3906086</pages> <pages unit="KiB" size="2048">0</pages> <pages unit="KiB" size="1048576">0</pages> <distances> <sibling id="0" value="10" /> <sibling id="1" value="21" /> </distances> <cpus num="6"> <cpu id="0" socket_id="0" core_id="0" siblings="0,3" /> <cpu id="1" socket_id="0" core_id="1" siblings="1,4" /> <cpu id="2" socket_id="0" core_id="2" siblings="2,5" /> <cpu id="3" socket_id="0" core_id="0" siblings="0,3" /> <cpu id="4" socket_id="0" core_id="1" siblings="1,4" /> <cpu id="5" socket_id="0" core_id="2" siblings="2,5" /> </cpus> </cell> <cell id="1"> <memory unit="KiB">15624346</memory> <pages unit="KiB" size="4">3906086</pages> <pages unit="KiB" size="2048">0</pages> <pages unit="KiB" size="1048576">0</pages> <distances> <sibling id="0" value="21" /> <sibling id="1" value="10" /> </distances> <cpus num="6"> <cpu id="6" socket_id="1" core_id="3" siblings="6,9" /> <cpu id="7" socket_id="1" core_id="4" siblings="7,10" /> <cpu id="8" socket_id="1" core_id="5" siblings="8,11" /> <cpu id="9" socket_id="1" core_id="3" siblings="6,9" /> <cpu id="10" socket_id="1" core_id="4" siblings="7,10" /> <cpu id="11" socket_id="1" core_id="5" siblings="8,11" /> </cpus> </cell> </cells> </topology>
- Optional: Test the performance of the VM by using the applicable tools and utilities.
Set up and mount 1 GiB huge pages on the host:
Note1 GiB huge pages might not be available on some architectures and configurations, such as ARM 64 hosts.
Add the following line to the host’s kernel command line:
default_hugepagesz=1G hugepagesz=1G
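One way to add these arguments, assuming you manage kernel boot options with grubby, is the following; adjust the step to your boot loader configuration:
# grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G"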
Create the
/etc/systemd/system/hugetlb-gigantic-pages.service
file with the following content:[Unit] Description=HugeTLB Gigantic Pages Reservation DefaultDependencies=no Before=dev-hugepages.mount ConditionPathExists=/sys/devices/system/node ConditionKernelCommandLine=hugepagesz=1G [Service] Type=oneshot RemainAfterExit=yes ExecStart=/etc/systemd/hugetlb-reserve-pages.sh [Install] WantedBy=sysinit.target
Create the
/etc/systemd/hugetlb-reserve-pages.sh
file with the following content:#!/bin/sh nodes_path=/sys/devices/system/node/ if [ ! -d $nodes_path ]; then echo "ERROR: $nodes_path does not exist" exit 1 fi reserve_pages() { echo $1 > $nodes_path/$2/hugepages/hugepages-1048576kB/nr_hugepages } reserve_pages 4 node1 reserve_pages 4 node2
This reserves four 1GiB huge pages from node1 and four 1GiB huge pages from node2.
Make the script created in the previous step executable:
# chmod +x /etc/systemd/hugetlb-reserve-pages.sh
Enable huge page reservation on boot:
# systemctl enable hugetlb-gigantic-pages
Use the
virsh edit
command to edit the XML configuration of the VM you wish to optimize, in this example super-VM:# virsh edit super-vm
Adjust the XML configuration of the VM in the following way:
-
Set the VM to use 8 static vCPUs. Use the
<vcpu/>
element to do this. Pin each of the vCPU threads to the corresponding host CPU threads that it mirrors in the topology. To do so, use the
<vcpupin/>
elements in the<cputune>
section.Note that, as shown by the
virsh capabilities
utility above, host CPU threads are not ordered sequentially in their respective cores. In addition, the vCPU threads should be pinned to the highest available set of host cores on the same NUMA node. For a table illustration, see the Sample topology section below.The XML configuration for steps a. and b. can look similar to:
<cputune> <vcpupin vcpu='0' cpuset='1'/> <vcpupin vcpu='1' cpuset='4'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='5'/> <vcpupin vcpu='4' cpuset='7'/> <vcpupin vcpu='5' cpuset='10'/> <vcpupin vcpu='6' cpuset='8'/> <vcpupin vcpu='7' cpuset='11'/> <emulatorpin cpuset='6,9'/> </cputune>
Set the VM to use 1 GiB huge pages:
<memoryBacking> <hugepages> <page size='1' unit='GiB'/> </hugepages> </memoryBacking>
Configure the VM’s NUMA nodes to use memory from the corresponding NUMA nodes on the host. To do so, use the
<memnode/>
elements in the<numatune/>
section:<numatune> <memory mode="preferred" nodeset="1"/> <memnode cellid="0" mode="strict" nodeset="0"/> <memnode cellid="1" mode="strict" nodeset="1"/> </numatune>
Ensure the CPU mode is set to
host-passthrough
, and that the CPU uses cache inpassthrough
mode:<cpu mode="host-passthrough"> <topology sockets="2" cores="2" threads="2"/> <cache mode="passthrough"/>
On an ARM 64 system, omit the
<cache mode="passthrough"/>
line.
Verification
Confirm that the resulting XML configuration of the VM includes a section similar to the following:
[...] <memoryBacking> <hugepages> <page size='1' unit='GiB'/> </hugepages> </memoryBacking> <vcpu placement='static'>8</vcpu> <cputune> <vcpupin vcpu='0' cpuset='1'/> <vcpupin vcpu='1' cpuset='4'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='5'/> <vcpupin vcpu='4' cpuset='7'/> <vcpupin vcpu='5' cpuset='10'/> <vcpupin vcpu='6' cpuset='8'/> <vcpupin vcpu='7' cpuset='11'/> <emulatorpin cpuset='6,9'/> </cputune> <numatune> <memory mode="preferred" nodeset="1"/> <memnode cellid="0" mode="strict" nodeset="0"/> <memnode cellid="1" mode="strict" nodeset="1"/> </numatune> <cpu mode="host-passthrough"> <topology sockets="2" cores="2" threads="2"/> <cache mode="passthrough"/> <numa> <cell id="0" cpus="0-3" memory="2" unit="GiB"> <distances> <sibling id="0" value="10"/> <sibling id="1" value="21"/> </distances> </cell> <cell id="1" cpus="4-7" memory="2" unit="GiB"> <distances> <sibling id="0" value="21"/> <sibling id="1" value="10"/> </distances> </cell> </numa> </cpu> </domain>
- Optional: Test the performance of the VM by using the applicable tools and utilities to evaluate the impact of the VM’s optimization.
Sample topology
The following tables illustrate the connections between the vCPUs and the host CPUs they should be pinned to:
Table 13.2. Host topology

| CPU threads | 0 | 3 | 1 | 4 | 2 | 5 | 6 | 9 | 7 | 10 | 8 | 11 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Cores | 0 | 0 | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 4 | 5 | 5 |
| Sockets | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 |
| NUMA nodes | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 |

Table 13.3. VM topology

| vCPU threads | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| Cores | 0 | 0 | 1 | 1 | 2 | 2 | 3 | 3 |
| Sockets | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
| NUMA nodes | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |

Table 13.4. Combined host and VM topology

| vCPU threads |  |  | 0 | 1 | 2 | 3 |  |  | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Host CPU threads | 0 | 3 | 1 | 4 | 2 | 5 | 6 | 9 | 7 | 10 | 8 | 11 |
| Cores | 0 | 0 | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 4 | 5 | 5 |
| Sockets | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 |
| NUMA nodes | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 |
In this scenario, there are 2 NUMA nodes and 8 vCPUs. Therefore, 4 vCPU threads should be pinned to each node.
In addition, Red Hat recommends leaving at least a single CPU thread available on each node for host system operations.
Because in this example, each NUMA node houses 3 cores, each with 2 host CPU threads, the set for node 0 translates as follows:
<vcpupin vcpu='0' cpuset='1'/> <vcpupin vcpu='1' cpuset='4'/> <vcpupin vcpu='2' cpuset='2'/> <vcpupin vcpu='3' cpuset='5'/>
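Following the same logic, the set for node 1, already shown in the full <cputune> configuration above, is:
<vcpupin vcpu='4' cpuset='7'/> <vcpupin vcpu='5' cpuset='10'/> <vcpupin vcpu='6' cpuset='8'/> <vcpupin vcpu='7' cpuset='11'/>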
13.6.5. Managing kernel same-page merging
Kernel Same-Page Merging (KSM) improves memory density by sharing identical memory pages between virtual machines (VMs). However, enabling KSM increases CPU utilization, and might adversely affect overall performance depending on the workload.
Depending on your requirements, you can either enable or disable KSM for a single session or persistently.
In RHEL 9 and later, KSM is disabled by default.
Prerequisites
- Root access to your host system.
Procedure
Disable KSM:
To deactivate KSM for a single session, use the
systemctl
utility to stopksm
andksmtuned
services.# systemctl stop ksm # systemctl stop ksmtuned
To deactivate KSM persistently, use the
systemctl
utility to disableksm
andksmtuned
services.# systemctl disable ksm Removed /etc/systemd/system/multi-user.target.wants/ksm.service. # systemctl disable ksmtuned Removed /etc/systemd/system/multi-user.target.wants/ksmtuned.service.
Memory pages shared between VMs before deactivating KSM will remain shared. To stop sharing, delete all the PageKSM
pages in the system by using the following command:
# echo 2 > /sys/kernel/mm/ksm/run
After anonymous pages replace the KSM pages, the khugepaged
kernel service will rebuild transparent hugepages on the VM’s physical memory.
- Enable KSM:
Enabling KSM increases CPU utilization and affects overall CPU performance.
Install the
ksmtuned
service:# dnf install ksmtuned
Start the service:
To enable KSM for a single session, use the
systemctl
utility to start theksm
andksmtuned
services.# systemctl start ksm # systemctl start ksmtuned
To enable KSM persistently, use the
systemctl
utility to enable theksm
andksmtuned
services.# systemctl enable ksm Created symlink /etc/systemd/system/multi-user.target.wants/ksm.service → /usr/lib/systemd/system/ksm.service # systemctl enable ksmtuned Created symlink /etc/systemd/system/multi-user.target.wants/ksmtuned.service → /usr/lib/systemd/system/ksmtuned.service
13.7. Optimizing virtual machine network performance
Due to the virtual nature of a VM’s network interface card (NIC), the VM loses a portion of its allocated host network bandwidth, which can reduce the overall workload efficiency of the VM. The following tips can minimize the negative impact of virtualization on the virtual NIC (vNIC) throughput.
Procedure
Use any of the following methods and observe if it has a beneficial effect on your VM network performance:
- Enable the vhost_net module
On the host, ensure the
vhost_net
kernel feature is enabled:# lsmod | grep vhost vhost_net 32768 1 vhost 53248 1 vhost_net tap 24576 1 vhost_net tun 57344 6 vhost_net
If the output of this command is blank, enable the
vhost_net
kernel module:# modprobe vhost_net
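If you also want the module to load automatically on every boot, you can declare it through the standard modules-load.d mechanism; this is a minimal sketch and the file name is arbitrary:
# echo vhost_net > /etc/modules-load.d/vhost_net.conf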
- Set up multi-queue virtio-net
To set up the multi-queue virtio-net feature for a VM, use the
virsh edit
command to edit to the XML configuration of the VM. In the XML, add the following to the<devices>
section, and replaceN
with the number of vCPUs in the VM, up to 16:<interface type='network'> <source network='default'/> <model type='virtio'/> <driver name='vhost' queues='N'/> </interface>
If the VM is running, restart it for the changes to take effect.
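To check from inside a Linux guest that the additional queues are available, you can display the channel configuration of the vNIC; this is a hedged example and eth0 is a placeholder interface name:
# ethtool -l eth0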
- Batching network packets
In Linux VM configurations with a long transmission path, batching packets before submitting them to the kernel may improve cache utilization. To set up packet batching, use the following command on the host, and replace tap0 with the name of the network interface that the VMs use:
# ethtool -C tap0 rx-frames 64
- SR-IOV
- If your host NIC supports SR-IOV, use SR-IOV device assignment for your vNICs. For more information, see Managing SR-IOV devices.
Additional resources
13.8. Virtual machine performance monitoring tools
To identify what consumes the most VM resources and which aspect of VM performance needs optimization, you can use both general and VM-specific performance diagnostic tools.
Default OS performance monitoring tools
For standard performance evaluation, you can use the utilities provided by default by your host and guest operating systems:
On your RHEL 9 host, as root, use the
top
utility or the system monitor application, and look forqemu
andvirt
in the output. This shows how much host system resources your VMs are consuming.-
If the monitoring tool displays that any of the
qemu
orvirt
processes consume a large portion of the host CPU or memory capacity, use theperf
utility to investigate. For details, see below. -
In addition, if a
vhost_net
thread process, named for example vhost_net-1234, is displayed as consuming an excessive amount of host CPU capacity, consider using virtual network optimization features, such asmulti-queue virtio-net
.
-
If the monitoring tool displays that any of the
On the guest operating system, use performance utilities and applications available on the system to evaluate which processes consume the most system resources.
-
On Linux systems, you can use the
top
utility. - On Windows systems, you can use the Task Manager application.
-
On Linux systems, you can use the
perf kvm
You can use the perf
utility to collect and analyze virtualization-specific statistics about the performance of your RHEL 9 host. To do so:
On the host, install the perf package:
# dnf install perf
Use one of the
perf kvm stat
commands to display perf statistics for your virtualization host:-
For real-time monitoring of your hypervisor, use the
perf kvm stat live
command. -
To log the perf data of your hypervisor over a period of time, activate the logging by using the
perf kvm stat record
command. After the command is canceled or interrupted, the data is saved in theperf.data.guest
file, which can be analyzed by using theperf kvm stat report
command.
-
For real-time monitoring of your hypervisor, use the
Analyze the
perf
output for types ofVM-EXIT
events and their distribution. For example, thePAUSE_INSTRUCTION
events should be infrequent, but in the following output, the high occurrence of this event suggests that the host CPUs are not handling the running vCPUs well. In such a scenario, consider shutting down some of your active VMs, removing vCPUs from these VMs, or tuning the performance of the vCPUs.# perf kvm stat report Analyze events for all VMs, all VCPUs: VM-EXIT Samples Samples% Time% Min Time Max Time Avg time EXTERNAL_INTERRUPT 365634 31.59% 18.04% 0.42us 58780.59us 204.08us ( +- 0.99% ) MSR_WRITE 293428 25.35% 0.13% 0.59us 17873.02us 1.80us ( +- 4.63% ) PREEMPTION_TIMER 276162 23.86% 0.23% 0.51us 21396.03us 3.38us ( +- 5.19% ) PAUSE_INSTRUCTION 189375 16.36% 11.75% 0.72us 29655.25us 256.77us ( +- 0.70% ) HLT 20440 1.77% 69.83% 0.62us 79319.41us 14134.56us ( +- 0.79% ) VMCALL 12426 1.07% 0.03% 1.02us 5416.25us 8.77us ( +- 7.36% ) EXCEPTION_NMI 27 0.00% 0.00% 0.69us 1.34us 0.98us ( +- 3.50% ) EPT_MISCONFIG 5 0.00% 0.00% 5.15us 10.85us 7.88us ( +- 11.67% ) Total Samples:1157497, Total events handled time:413728274.66us.
Other event types that can signal problems in the output of
perf kvm stat
include:-
INSN_EMULATION
- suggests suboptimal VM I/O configuration.
-
For more information about using perf
to monitor virtualization performance, see the perf-kvm
man page.
numastat
To see the current NUMA configuration of your system, you can use the numastat
utility, which is provided by installing the numactl package.
The following shows a host with 4 running VMs, each obtaining memory from multiple NUMA nodes. This is not optimal for vCPU performance, and warrants adjusting:
# numastat -c qemu-kvm
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total
--------------- ------ ------ ------ ------ ------ ------ ------ ------ -----
51722 (qemu-kvm) 68 16 357 6936 2 3 147 598 8128
51747 (qemu-kvm) 245 11 5 18 5172 2532 1 92 8076
53736 (qemu-kvm) 62 432 1661 506 4851 136 22 445 8116
53773 (qemu-kvm) 1393 3 1 2 12 0 0 6702 8114
--------------- ------ ------ ------ ------ ------ ------ ------ ------ -----
Total 1769 463 2024 7462 10037 2672 169 7837 32434
In contrast, the following shows memory being provided to each VM by a single node, which is significantly more efficient.
# numastat -c qemu-kvm
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Node 2 Node 3 Node 4 Node 5 Node 6 Node 7 Total
--------------- ------ ------ ------ ------ ------ ------ ------ ------ -----
51747 (qemu-kvm) 0 0 7 0 8072 0 1 0 8080
53736 (qemu-kvm) 0 0 7 0 0 0 8113 0 8120
53773 (qemu-kvm) 0 0 7 0 0 0 1 8110 8118
59065 (qemu-kvm) 0 0 8050 0 0 0 0 0 8051
--------------- ------ ------ ------ ------ ------ ------ ------ ------ -----
Total 0 0 8072 0 8072 0 8114 8110 32368
Chapter 14. Importance of power management
Reducing the overall power consumption of computer systems helps to save cost. Effectively optimizing the energy consumption of each system component includes studying the different tasks that your system performs, and configuring each component to ensure that its performance is correct for that job. Lowering the power consumption of a specific component or of the system as a whole leads to lower heat and, in many cases, lower performance.
Proper power management results in:
- heat reduction for servers and computing centers
- reduced secondary costs, including cooling, space, cables, generators, and uninterruptible power supplies (UPS)
- extended battery life for laptops
- lower carbon dioxide output
- meeting government regulations or legal requirements regarding Green IT, for example, Energy Star
- meeting company guidelines for new systems
This section provides information about managing the power consumption of your Red Hat Enterprise Linux systems.
14.1. Power management basics
Effective power management is built on the following principles:
An idle CPU should only wake up when needed
Since Red Hat Enterprise Linux 6, the kernel runs
tickless
, which means the previous periodic timer interrupts have been replaced with on-demand interrupts. Therefore, idle CPUs are allowed to remain idle until a new task is queued for processing, and CPUs that have entered lower power states can remain in these states longer. However, the benefits of this feature can be offset if your system has applications that create unnecessary timer events. Polling events, such as checks for volume changes or mouse movement, are examples of such events.

Red Hat Enterprise Linux includes tools that you can use to identify and audit applications on the basis of their CPU usage. For more information, see Audit and analysis overview and Tools for auditing.
Unused hardware and devices should be disabled completely
- This is true for devices that have moving parts, for example, hard disks. In addition to this, some applications may leave an unused but enabled device "open"; when this occurs, the kernel assumes that the device is in use, which can prevent the device from going into a power saving state.
Low activity should translate to low wattage
In many cases, however, this depends on modern hardware and a correct BIOS or UEFI configuration, including on non-x86 architectures. Make sure that you are using the latest official firmware for your systems and that the power management features are enabled in the power management or device configuration sections of the BIOS. Some features to look for include:
- Collaborative Processor Performance Controls (CPPC) support for ARM64
- PowerNV support for IBM Power Systems
- SpeedStep
- PowerNow!
- Cool’n’Quiet
- ACPI (C-state)
- Smart
If your hardware has support for these features and they are enabled in the BIOS, Red Hat Enterprise Linux uses them by default.
Different forms of CPU states and their effects
Modern CPUs together with Advanced Configuration and Power Interface (ACPI) provide different power states. The three different states are:
- Sleep (C-states)
- Frequency and voltage (P-states)
- Heat output (T-states or thermal states)
A CPU in the deepest sleep state consumes the least power, but it also takes considerably longer to wake up from that state when needed. In very rare cases, this can lead to the CPU having to wake up immediately every time it goes to sleep. The result is an effectively permanently busy CPU, which loses some of the potential power savings that a different state would have provided.
A turned off machine uses the least amount of power
- One of the best ways to save power is to turn off systems. For example, your company can develop a corporate culture focused on "green IT" awareness with a guideline to turn off machines during lunch break or when going home. You also might consolidate several physical servers into one bigger server and virtualize them using the virtualization technology, which is shipped with Red Hat Enterprise Linux.
14.2. Audit and analysis overview
The detailed manual audit, analysis, and tuning of a single system is usually the exception because the time and cost spent to do so typically outweighs the benefits gained from these last pieces of system tuning.
However, performing these tasks once for a large number of nearly identical systems where you can reuse the same settings for all systems can be very useful. For example, consider the deployment of thousands of desktop systems, or an HPC cluster where the machines are nearly identical. Another reason to do auditing and analysis is to provide a basis for comparison against which you can identify regressions or changes in system behavior in the future. The results of this analysis can be very helpful in cases where hardware, BIOS, or software updates happen regularly and you want to avoid any surprises with regard to power consumption. Generally, a thorough audit and analysis gives you a much better idea of what is really happening on a particular system.
Auditing and analyzing a system with regard to power consumption is relatively hard, even with the most modern systems available. Most systems do not provide the necessary means to measure power use via software. Exceptions exist though:
- iLO management console of Hewlett Packard server systems has a power management module that you can access through the web.
- IBM provides a similar solution in their BladeCenter power management module.
- On some Dell systems, the IT Assistant offers power monitoring capabilities as well.
Other vendors are likely to offer similar capabilities for their server platforms, but as can be seen there is no single solution available that is supported by all vendors. Direct measurements of power consumption are often only necessary to maximize savings as far as possible.
14.3. Tools for auditing
Red Hat Enterprise Linux 9 offers tools that you can use to perform system auditing and analysis. Most of them can be used as supplementary sources of information in case you want to verify what you have discovered already or in case you need more in-depth information about certain parts.
Many of these tools are used for performance tuning as well. They include:
PowerTOP
-
It identifies specific components of kernel and user-space applications that frequently wake up the CPU. Use the
powertop
command as root to start the PowerTop tool andpowertop --calibrate
to calibrate the power estimation engine. For more information about PowerTop, see Managing power consumption with PowerTOP. Diskdevstat and netdevstat
They are SystemTap tools that collect detailed information about the disk activity and network activity of all applications running on a system. Using the collected statistics by these tools, you can identify applications that waste power with many small I/O operations rather than fewer, larger operations. Using the
dnf install tuned-utils-systemtap kernel-debuginfo
command as root, install thediskdevstat
andnetdevstat
tool.To view the detailed information about the disk and network activity, use:
# diskdevstat PID UID DEV WRITE_CNT WRITE_MIN WRITE_MAX WRITE_AVG READ_CNT READ_MIN READ_MAX READ_AVG COMMAND 3575 1000 dm-2 59 0.000 0.365 0.006 5 0.000 0.000 0.000 mozStorage #5 3575 1000 dm-2 7 0.000 0.000 0.000 0 0.000 0.000 0.000 localStorage DB [...] # netdevstat PID UID DEV XMIT_CNT XMIT_MIN XMIT_MAX XMIT_AVG RECV_CNT RECV_MIN RECV_MAX RECV_AVG COMMAND 3572 991 enp0s31f6 40 0.000 0.882 0.108 0 0.000 0.000 0.000 openvpn 3575 1000 enp0s31f6 27 0.000 1.363 0.160 0 0.000 0.000 0.000 Socket Thread [...]
With these commands, you can specify three parameters:
update_interval
,total_duration
, anddisplay_histogram
.TuneD
-
It is a profile-based system tuning tool that uses the
udev
device manager to monitor connected devices, and enables both static and dynamic tuning of system settings. You can use thetuned-adm recommend
command to determine which profile Red Hat recommends as the most suitable for a particular product. For more information about TuneD, see Getting started with TuneD and Customizing TuneD profiles. Using thepowertop2tuned utility
, you can create custom TuneD profiles fromPowerTOP
suggestions. For information about thepowertop2tuned
utility, see Optimizing power consumption. Virtual memory statistics (vmstat)
It is provided by the
procps-ng
package. Using this tool, you can view the detailed information about processes, memory, paging, block I/O, traps, and CPU activity.To view this information, use:
$ vmstat procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 1 0 0 5805576 380856 4852848 0 0 119 73 814 640 2 2 96 0 0
Using the
vmstat -a
command, you can display active and inactive memory. For more information about othervmstat
options, see thevmstat
man page.iostat
It is provided by the
sysstat
package. This tool is similar tovmstat
, but only for monitoring I/O on block devices. It also provides more verbose output and statistics.To monitor the system I/O, use:
$ iostat avg-cpu: %user %nice %system %iowait %steal %idle 2.05 0.46 1.55 0.26 0.00 95.67 Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn nvme0n1 53.54 899.48 616.99 3445229 2363196 dm-0 42.84 753.72 238.71 2886921 914296 dm-1 0.03 0.60 0.00 2292 0 dm-2 24.15 143.12 379.80 548193 1454712
blktrace
It provides detailed information about how time is spent in the I/O subsystem.
To view this information in human readable format, use:
# blktrace -d /dev/dm-0 -o - | blkparse -i - 253,0 1 1 0.000000000 17694 Q W 76423384 + 8 [kworker/u16:1] 253,0 2 1 0.001926913 0 C W 76423384 + 8 [0] [...]
Here, the first column, 253,0, is the device major and minor tuple. The second column, 1, gives information about the CPU, followed by columns for the timestamp and the PID of the process issuing the I/O.
The sixth column, Q, shows the event type, the seventh column, W, indicates a write operation, the eighth column, 76423384, is the block number, and + 8 is the number of requested blocks.
The last field, [kworker/u16:1], is the process name.
By default, the
blktrace
command runs forever until the process is explicitly killed. Use the-w
option to specify the run-time duration.turbostat
It is provided by the
kernel-tools
package. It reports on processor topology, frequency, idle power-state statistics, temperature, and power usage on x86-64 processors.To view this summary, use:
# turbostat CPUID(0): GenuineIntel 0x16 CPUID levels; 0x80000008 xlevels; family:model:stepping 0x6:8e:a (6:142:10) CPUID(1): SSE3 MONITOR SMX EIST TM2 TSC MSR ACPI-TM HT TM CPUID(6): APERF, TURBO, DTS, PTM, HWP, HWPnotify, HWPwindow, HWPepp, No-HWPpkg, EPB [...]
By default,
turbostat
prints a summary of counter results for the entire screen, followed by counter results every 5 seconds. Specify a different period between counter results with the-i
option, for example, executeturbostat -i 10
to print results every 10 seconds instead.Turbostat is also useful for identifying servers that are inefficient in terms of power usage or idle time. It also helps to identify the rate of system management interrupts (SMIs) occurring on the system. It can also be used to verify the effects of power management tuning.
cpupower
It is a collection of tools to examine and tune power saving related features of processors. Use the
cpupower
command with thefrequency-info
,frequency-set
,idle-info
,idle-set
,set
,info
, andmonitor
options to display and set processor related values.For example, to view available cpufreq governors, use:
$ cpupower frequency-info --governors analyzing CPU 0: available cpufreq governors: performance powersave
For more information about
cpupower
, see Viewing CPU related information.GNOME Power Manager
- It is a daemon that is installed as part of the GNOME desktop environment. GNOME Power Manager notifies you of changes in your system’s power status; for example, a change from battery to AC power. It also reports battery status, and warns you when battery power is low.
Additional resources
-
powertop(1)
,diskdevstat(8)
,netdevstat(8)
,tuned(8)
,vmstat(8)
,iostat(1)
,blktrace(8)
,blkparse(8)
, andturbostat(8)
man pages -
cpupower(1)
,cpupower-set(1)
,cpupower-info(1)
,cpupower-idle(1)
,cpupower-frequency-set(1)
,cpupower-frequency-info(1)
, andcpupower-monitor(1)
man pages
Chapter 15. Managing power consumption with PowerTOP
As a system administrator, you can use the PowerTOP tool to analyze and manage power consumption.
15.1. The purpose of PowerTOP
PowerTOP is a program that diagnoses issues related to power consumption and provides suggestions on how to extend battery lifetime.
The PowerTOP tool can provide an estimate of the total power usage of the system and also individual power usage for each process, device, kernel worker, timer, and interrupt handler. The tool can also identify specific components of kernel and user-space applications that frequently wake up the CPU.
Red Hat Enterprise Linux 9 uses version 2.x of PowerTOP.
15.2. Using PowerTOP
Prerequisites
To be able to use PowerTOP, make sure that the
powertop
package has been installed on your system:# dnf install powertop
15.2.1. Starting PowerTOP
Procedure
To run PowerTOP, use the following command:
# powertop
Laptops should run on battery power when running the powertop
command.
15.2.2. Calibrating PowerTOP
Procedure
On a laptop, you can calibrate the power estimation engine by running the following command:
# powertop --calibrate
Let the calibration finish without interacting with the machine during the process.
Calibration takes time because the process performs various tests, cycles through brightness levels and switches devices on and off.
When the calibration process is completed, PowerTOP starts as normal. Let it run for approximately an hour to collect data.
When enough data is collected, power estimation figures will be displayed in the first column of the output table.
Note that powertop --calibrate
can only be used on laptops.
15.2.3. Setting the measuring interval
By default, PowerTOP takes measurements in 20-second intervals.
If you want to change this measuring frequency, use the following procedure:
Procedure
Run the
powertop
command with the--time
option:# powertop --time=time in seconds
15.3. PowerTOP statistics
While it runs, PowerTOP gathers statistics from the system.
PowerTOP's output provides multiple tabs:
-
Overview
-
Idle stats
-
Frequency stats
-
Device stats
-
Tunables
-
WakeUp
You can use the Tab
and Shift+Tab
keys to cycle through these tabs.
15.3.1. The Overview tab
In the Overview
tab, you can view a list of the components that either send wakeups to the CPU most frequently or consume the most power. The items within the Overview
tab, including processes, interrupts, devices, and other resources, are sorted according to their utilization.
The adjacent columns within the Overview
tab provide the following pieces of information:
- Usage
- Power estimation of how the resource is being used.
- Events/s
- Wakeups per second. The number of wakeups per second indicates how efficiently the services or the devices and drivers of the kernel are performing. Fewer wakeups mean that less power is consumed. Components are ordered by how much further their power usage can be optimized.
- Category
- Classification of the component; such as process, device, or timer.
- Description
- Description of the component.
If properly calibrated, a power consumption estimation for every listed item in the first column is shown as well.
Apart from this, the Overview
tab includes the line with summary statistics such as:
- Total power consumption
- Remaining battery life (only if applicable)
- Summary of total wakeups per second, GPU operations per second, and virtual file system operations per second
15.3.2. The Idle stats tab
The Idle stats
tab shows usage of C-states for all processors and cores, while the Frequency stats
tab shows usage of P-states including the Turbo mode, if applicable, for all processors and cores. The duration of C- or P-states is an indication of how well the CPU usage has been optimized. The longer the CPU stays in the higher C- or P-states (for example C4 is higher than C3), the better the CPU usage optimization is. Ideally, residency is 90% or more in the highest C- or P-state when the system is idle.
15.3.3. The Device stats tab
The Device stats
tab provides similar information to the Overview
tab but only for devices.
15.3.4. The Tunables tab
The Tunables
tab contains PowerTOP's suggestions for optimizing the system for lower power consumption.
Use the up
and down
keys to move through suggestions, and the enter
key to toggle the suggestion on or off.
15.3.5. The WakeUp tab
The WakeUp
tab displays the device wakeup settings available for users to change as and when required.
Use the up
and down
keys to move through the available settings, and the enter
key to enable or disable a setting.
Figure 15.1. PowerTOP output
Additional resources
For more details on PowerTOP, see PowerTOP’s home page.
15.4. Why PowerTOP does not display Frequency stats values in some instances
While using the Intel P-State driver, PowerTOP only displays values in the Frequency Stats
tab if the driver is in passive mode. But, even in this case, the values may be incomplete.
In total, there are three possible modes of the Intel P-State driver:
- Active mode with Hardware P-States (HWP)
- Active mode without HWP
- Passive mode
Switching to the ACPI CPUfreq driver results in complete information being displayed by PowerTOP. However, it is recommended to keep your system on the default settings.
To see what driver is loaded and in what mode, run:
# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
-
intel_pstate
is returned if the Intel P-State driver is loaded and in active mode. -
intel_cpufreq
is returned if the Intel P-State driver is loaded and in passive mode. -
acpi-cpufreq
is returned if the ACPI CPUfreq driver is loaded.
While using the Intel P-State driver, add the following argument to the kernel boot command line to force the driver to run in passive mode:
intel_pstate=passive
To disable the Intel P-State driver and use, instead, the ACPI CPUfreq driver, add the following argument to the kernel boot command line:
intel_pstate=disable
15.5. Generating an HTML output
Apart from powertop’s output in the terminal, you can also generate an HTML report.
Procedure
Run the
powertop
command with the--html
option:# powertop --html=htmlfile.html
Replace the
htmlfile.html
parameter with the required name for the output file.
15.6. Optimizing power consumption
To optimize power consumption, you can use either the powertop
service or the powertop2tuned
utility.
15.6.1. Optimizing power consumption using the powertop service
You can use the powertop
service to automatically enable all PowerTOP's suggestions from the Tunables
tab on the boot:
Procedure
Enable the
powertop
service:# systemctl enable powertop
15.6.2. The powertop2tuned utility
The powertop2tuned
utility allows you to create custom TuneD profiles from PowerTOP suggestions.
By default, powertop2tuned
creates profiles in the /etc/tuned/
directory, and bases the custom profile on the currently selected TuneD profile. For safety reasons, all PowerTOP tunings are initially disabled in the new profile.
To enable the tunings, you can:
-
Uncomment them in the
/etc/tuned/profile_name/tuned.conf file
. Use the
--enable
or-e
option to generate a new profile that enables most of the tunings suggested by PowerTOP.Certain potentially problematic tunings, such as the USB autosuspend, are disabled by default and need to be uncommented manually.
15.6.3. Optimizing power consumption using the powertop2tuned utility
Prerequisites
The
powertop2tuned
utility is installed on the system:# dnf install tuned-utils
Procedure
Create a custom profile:
# powertop2tuned new_profile_name
Activate the new profile:
# tuned-adm profile new_profile_name
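To confirm the switch, you can query the currently active profile; this is a quick check rather than part of the documented procedure:
# tuned-adm active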
Additional information
For a complete list of options that
powertop2tuned
supports, use:$ powertop2tuned --help
15.6.4. Comparison of powertop.service and powertop2tuned
Optimizing power consumption with powertop2tuned
is preferred over powertop.service
for the following reasons:
-
The
powertop2tuned
utility integrates PowerTOP into TuneD, which enables you to benefit from the advantages of both tools.
The
powertop2tuned
utility allows for fine-grained control of enabled tuning. -
With
powertop2tuned
, potentially dangerous tunings are not automatically enabled.
With
powertop2tuned
, rollback is possible without reboot.
Chapter 16. Getting started with perf
As a system administrator, you can use the perf
tool to collect and analyze performance data of your system.
16.1. Introduction to perf
The perf
user-space tool interfaces with the kernel-based subsystem Performance Counters for Linux (PCL). perf
is a powerful tool that uses the Performance Monitoring Unit (PMU) to measure, record, and monitor a variety of hardware and software events. perf
also supports tracepoints, kprobes, and uprobes.
16.2. Installing perf
This procedure installs the perf
user-space tool.
Procedure
Install the
perf
tool:# dnf install perf
16.3. Common perf commands
perf stat
- This command provides overall statistics for common performance events, including instructions executed and clock cycles consumed. Options allow for selection of events other than the default measurement events.
perf record
-
This command records performance data into a file,
perf.data
, which can be later analyzed using theperf report
command. perf report
-
This command reads and displays the performance data from the
perf.data
file created byperf record
. perf list
- This command lists the events available on a particular machine. These events will vary based on performance monitoring hardware and software configuration of the system.
perf top
-
This command performs a similar function to the
top
utility. It generates and displays a performance counter profile in realtime. perf trace
-
This command performs a similar function to the
strace
tool. It monitors the system calls used by a specified thread or process and all signals received by that application. perf help
-
This command displays a complete list of
perf
commands.
Additional resources
-
Add the
--help
option to a subcommand to open the man page.
Chapter 17. Profiling CPU usage in real time with perf top
You can use the perf top
command to measure CPU usage of different functions in real time.
Prerequisites
-
You have the
perf
user space tool installed as described in Installing perf.
17.1. The purpose of perf top
The perf top
command is used for real time system profiling and functions similarly to the top
utility. However, where the top
utility generally shows you how much CPU time a given process or thread is using, perf top
shows you how much CPU time each specific function uses. In its default state, perf top
tells you about functions being used across all CPUs in both the user-space and the kernel-space. To use perf top
you need root access.
17.2. Profiling CPU usage with perf top
This procedure activates perf top
and profiles CPU usage in real time.
Prerequisites
-
You have the
perf
user space tool installed as described in Installing perf. - You have root access
Procedure
Start the
perf top
monitoring interface:# perf top
The monitoring interface looks similar to the following:
Samples: 8K of event 'cycles', 2000 Hz, Event count (approx.): 4579432780 lost: 0/0 drop: 0/0 Overhead Shared Object Symbol 2.20% [kernel] [k] do_syscall_64 2.17% [kernel] [k] module_get_kallsym 1.49% [kernel] [k] copy_user_enhanced_fast_string 1.37% libpthread-2.29.so [.] pthread_mutex_lock 1.31% [unknown] [.] 0000000000000000 1.07% [kernel] [k] psi_task_change 1.04% [kernel] [k] switch_mm_irqs_off 0.94% [kernel] [k] fget 0.74% [kernel] [k] entry_SYSCALL_64 0.69% [kernel] [k] syscall_return_via_sysret 0.69% libxul.so [.] 0x000000000113f9b0 0.67% [kernel] [k] kallsyms_expand_symbol.constprop.0 0.65% firefox [.] moz_xmalloc 0.65% libpthread-2.29.so [.] __pthread_mutex_unlock_usercnt 0.60% firefox [.] free 0.60% libxul.so [.] 0x000000000241d1cd 0.60% [kernel] [k] do_sys_poll 0.58% [kernel] [k] menu_select 0.56% [kernel] [k] _raw_spin_lock_irqsave 0.55% perf [.] 0x00000000002ae0f3
In this example, the kernel function
do_syscall_64
is using the most CPU time.
Additional resources
-
perf-top(1)
man page
17.3. Interpretation of perf top output
The perf top
monitoring interface displays the data in several columns:
- The "Overhead" column
- Displays the percent of CPU a given function is using.
- The "Shared Object" column
- Displays name of the program or library which is using the function.
- The "Symbol" column
-
Displays the function name or symbol. Functions executed in the kernel-space are identified by
[k]
and functions executed in the user-space are identified by[.]
.
17.4. Why perf displays some function names as raw function addresses
For kernel functions, perf
uses the information from the /proc/kallsyms
file to map the samples to their respective function names or symbols. For functions executed in the user space, however, you might see raw function addresses because the binary is stripped.
The debuginfo
package of the executable must be installed or, if the executable is a locally developed application, the application must be compiled with debugging information turned on (the -g
option in GCC) to display the function names or symbols in such a situation.
It is not necessary to re-run the perf record
command after installing the debuginfo
associated with an executable. Simply re-run the perf report
command.
Additional Resources
17.5. Enabling debug and source repositories
A standard installation of Red Hat Enterprise Linux does not enable the debug and source repositories. These repositories contain information needed to debug the system components and measure their performance.
Procedure
Enable the source and debug information package channels. The $(uname -i) part is automatically replaced with a matching value for the architecture of your system:

| Architecture name | Value |
|---|---|
| 64-bit Intel and AMD | x86_64 |
| 64-bit ARM | aarch64 |
| IBM POWER | ppc64le |
| 64-bit IBM Z | s390x |
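A sketch of the enablement command, assuming the standard RHEL 9 debug repository IDs; verify the exact repository names available to your subscription:
# subscription-manager repos --enable rhel-9-for-$(uname -i)-baseos-debug-rpms --enable rhel-9-for-$(uname -i)-appstream-debug-rpms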
17.6. Getting debuginfo packages for an application or library using GDB
Debugging information is required to debug code. For code that is installed from a package, the GNU Debugger (GDB) automatically recognizes missing debug information, resolves the package name and provides concrete advice on how to get the package.
Prerequisites
- The application or library you want to debug must be installed on the system.
-
GDB and the
debuginfo-install
tool must be installed on the system. For details, see Setting up to debug applications. -
Repositories providing
debuginfo
anddebugsource
packages must be configured and enabled on the system. For details, see Enabling debug and source repositories.
Procedure
Start GDB attached to the application or library you want to debug. GDB automatically recognizes missing debugging information and suggests a command to run.
$ gdb -q /bin/ls Reading symbols from /bin/ls...Reading symbols from .gnu_debugdata for /usr/bin/ls...(no debugging symbols found)...done. (no debugging symbols found)...done. Missing separate debuginfos, use: dnf debuginfo-install coreutils-8.30-6.el8.x86_64 (gdb)
Exit GDB: type q and confirm with Enter.
(gdb) q
Run the command suggested by GDB to install the required
debuginfo
packages:# dnf debuginfo-install coreutils-8.30-6.el8.x86_64
The
dnf
package management tool provides a summary of the changes, asks for confirmation and once you confirm, downloads and installs all the necessary files.-
In case GDB is not able to suggest the
debuginfo
package, follow the procedure described in Getting debuginfo packages for an application or library manually.
Additional resources
- How can I download or install debuginfo packages for RHEL systems? — Red Hat Knowledgebase solution
Chapter 18. Counting events during process execution with perf stat
You can use the perf stat
command to count hardware and software events during process execution.
Prerequisites
-
You have the
perf
user space tool installed as described in Installing perf.
18.1. The purpose of perf stat
The perf stat
command executes a specified command, keeps a running count of hardware and software event occurrences during the commands execution, and generates statistics of these counts. If you do not specify any events, then perf stat
counts a set of common hardware and software events.
18.2. Counting events with perf stat
You can use perf stat
to count hardware and software event occurrences during command execution and generate statistics of these counts. By default, perf stat
operates in per-thread mode.
Prerequisites
-
You have the
perf
user space tool installed as described in Installing perf.
Procedure
Count the events.
Running the
perf stat
command without root access will only count events occurring in the user space:$ perf stat ls
Example 18.1. Output of perf stat ran without root access
Desktop Documents Downloads Music Pictures Public Templates Videos Performance counter stats for 'ls': 1.28 msec task-clock:u # 0.165 CPUs utilized 0 context-switches:u # 0.000 M/sec 0 cpu-migrations:u # 0.000 K/sec 104 page-faults:u # 0.081 M/sec 1,054,302 cycles:u # 0.823 GHz 1,136,989 instructions:u # 1.08 insn per cycle 228,531 branches:u # 178.447 M/sec 11,331 branch-misses:u # 4.96% of all branches 0.007754312 seconds time elapsed 0.000000000 seconds user 0.007717000 seconds sys
As you can see in the previous example, when
perf stat
runs without root access the event names are followed by:u
, indicating that these events were counted only in the user-space.To count both user-space and kernel-space events, you must have root access when running
perf stat
:# perf stat ls
Example 18.2. Output of perf stat ran with root access
Desktop Documents Downloads Music Pictures Public Templates Videos Performance counter stats for 'ls': 3.09 msec task-clock # 0.119 CPUs utilized 18 context-switches # 0.006 M/sec 3 cpu-migrations # 0.969 K/sec 108 page-faults # 0.035 M/sec 6,576,004 cycles # 2.125 GHz 5,694,223 instructions # 0.87 insn per cycle 1,092,372 branches # 352.960 M/sec 31,515 branch-misses # 2.89% of all branches 0.026020043 seconds time elapsed 0.000000000 seconds user 0.014061000 seconds sys
By default,
perf stat
operates in per-thread mode. To change to CPU-wide event counting, pass the-a
option toperf stat
. To count CPU-wide events, you need root access:# perf stat -a ls
Additional resources
-
perf-stat(1)
man page
18.3. Interpretation of perf stat output
perf stat
executes a specified command and counts event occurrences during the commands execution and displays statistics of these counts in three columns:
- The number of occurrences counted for a given event
- The name of the event that was counted
When related metrics are available, a ratio or percentage is displayed after the hash sign (
#
) in the right-most column.For example, when running in default mode,
perf stat
counts both cycles and instructions and, therefore, calculates and displays instructions per cycle in the right-most column. You can see similar behavior with regard to branch-misses as a percent of all branches since both events are counted by default.
18.4. Attaching perf stat to a running process
You can attach perf stat
to a running process. This will instruct perf stat
to count event occurrences only in the specified processes during the execution of a command.
Prerequisites
-
You have the
perf
user space tool installed as described in Installing perf.
Procedure
Attach
perf stat
to a running process:$ perf stat -p ID1,ID2 sleep seconds
The previous example counts events in the processes with the IDs of
ID1
andID2
for a time period ofseconds
seconds as dictated by using thesleep
command.
Additional resources
-
perf-stat(1)
man page
Chapter 19. Recording and analyzing performance profiles with perf
The perf
tool allows you to record performance data and analyze it at a later time.
Prerequisites
-
You have the
perf
user space tool installed as described in Installing perf.
19.1. The purpose of perf record
The perf record
command samples performance data and stores it in a file, perf.data
, which can be read and visualized with other perf
commands. perf.data
is generated in the current directory and can be accessed at a later time, possibly on a different machine.
If you do not specify a command for perf record
to record during, it will record until you manually stop the process by pressing Ctrl+C
. You can attach perf record
to specific processes by passing the -p
option followed by one or more process IDs. You can run perf record
without root access, however, doing so will only sample performance data in the user space. In the default mode, perf record
uses CPU cycles as the sampling event and operates in per-thread mode with inherit mode enabled.
19.2. Recording a performance profile without root access
You can use perf record
without root access to sample and record performance data in the user-space only.
Prerequisites
-
You have the
perf
user space tool installed as described in Installing perf.
Procedure
Sample and record the performance data:
$ perf record command
Replace
command
with the command you want to sample data during. If you do not specify a command, thenperf record
will sample data until you manually stop it by pressing Ctrl+C.
Additional resources
-
perf-record(1)
man page
19.3. Recording a performance profile with root access
You can use perf record
with root access to sample and record performance data in both the user-space and the kernel-space simultaneously.
Prerequisites
-
You have the
perf
user space tool installed as described in Installing perf. - You have root access.
Procedure
Sample and record the performance data:
# perf record command
Replace
command
with the command you want to sample data during. If you do not specify a command, thenperf record
will sample data until you manually stop it by pressing Ctrl+C.
Additional resources
-
perf-record(1)
man page
19.4. Recording a performance profile in per-CPU mode
You can use perf record
in per-CPU mode to sample and record performance data in both the user-space and the kernel-space simultaneously across all threads on a monitored CPU. By default, per-CPU mode monitors all online CPUs.
Prerequisites
-
You have the
perf
user space tool installed as described in Installing perf.
Procedure
Sample and record the performance data:
# perf record -a command
Replace
command
with the command you want to sample data during. If you do not specify a command, thenperf record
will sample data until you manually stop it by pressing Ctrl+C.
Additional resources
-
perf-record(1)
man page
19.5. Capturing call graph data with perf record
You can configure the perf record
tool so that it records which function is calling other functions in the performance profile. This helps to identify a bottleneck if several processes are calling the same function.
Prerequisites
-
You have the
perf
user space tool installed as described in Installing perf.
Procedure
Sample and record performance data with the
--call-graph
option:$ perf record --call-graph method command
-
Replace
command
with the command you want to sample data during. If you do not specify a command, thenperf record
will sample data until you manually stop it by pressing Ctrl+C. Replace method with one of the following unwinding methods:
fp
-
Uses the frame pointer method. Depending on compiler optimization, such as with binaries built with the GCC option
-fomit-frame-pointer
, this may not be able to unwind the stack. dwarf
- Uses DWARF Call Frame Information to unwind the stack.
lbr
- Uses the last branch record hardware on Intel processors.
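For example, a minimal invocation that uses the dwarf method while profiling a placeholder workload:
$ perf record --call-graph dwarf sleep 10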
-
Replace
Additional resources
-
perf-record(1)
man page
19.6. Analyzing perf.data with perf report
You can use perf report
to display and analyze a perf.data
file.
Prerequisites
-
You have the
perf
user space tool installed as described in Installing perf. -
There is a
perf.data
file in the current directory. -
If the
perf.data
file was created with root access, you need to runperf report
with root access too.
Procedure
Display the contents of the
perf.data
file for further analysis:# perf report
This command displays output similar to the following:
Samples: 2K of event 'cycles', Event count (approx.): 235462960 Overhead Command Shared Object Symbol 2.36% kswapd0 [kernel.kallsyms] [k] page_vma_mapped_walk 2.13% sssd_kcm libc-2.28.so [.] memset_avx2_erms 2.13% perf [kernel.kallsyms] [k] smp_call_function_single 1.53% gnome-shell libc-2.28.so [.] strcmp_avx2 1.17% gnome-shell libglib-2.0.so.0.5600.4