
Chapter 4. Setting up PCP


Performance Co-Pilot (PCP) is a suite of tools, services, and libraries for managing and measuring system-level performance. You use Python, Perl, C++, and C interfaces to add performance metrics. Analysis tools use Python, C++, and C client APIs directly. Web applications can access performance data through a JSON interface. To analyze data patterns, compare live results with archived data.

Features of PCP
  • Light-weight distributed architecture useful during the centralized analysis of complex systems.
  • Ability to monitor and manage real-time data.
  • Ability to log and retrieve historical data.
PCP has the following components:
  • The Performance Metric Collector Daemon (pmcd) collects performance data from the installed Performance Metric Domain Agents (PMDA). PMDAs can be individually loaded or unloaded on the system and are controlled by the PMCD on the same host.
  • Various client tools, such as pminfo or pmstat, can retrieve, display, archive, and process this data on the same host or over the network.
  • The pcp and pcp-system-tools packages provide the command-line tools and core functionality.
  • The pcp-gui package provides the graphical application pmchart.
  • The grafana-pcp package provides powerful web-based visualizations and alerting with Grafana.

4.1. Installing and enabling PCP

Install the required packages and enable the PCP monitoring services to start using it. You can also automate the PCP installation by using the pcp-zeroconf package. For more information about installing PCP by using pcp-zeroconf, see Setting up PCP with pcp-zeroconf.

Procedure

  1. Install the pcp package:

    # dnf install pcp
  2. Enable and start the pmcd service on the host machine:

    # systemctl enable pmcd
    # systemctl start pmcd

Verification

  • Verify that the pmcd process is running on the host:

    # pcp
    Performance Co-Pilot configuration on arm10.local:
    
     platform: Linux arm10.local 6.12.0-55.13.1.el10_0.aarch64 #1 SMP PREEMPT_DYNAMIC Mon May 19 07:29:57 UTC 2025 aarch64
     hardware: 4 cpus, 1 disk, 1 node, 3579MB RAM
     timezone: JST-9
     services: pmcd
         pmcd: Version 6.3.7-1, 12 agents, 6 clients
         pmda: root pmcd proc pmproxy xfs linux nfsclient mmv kvm jbd2
               dm openmetrics

4.2. Deploying a minimal PCP configuration

The minimal PCP configuration collects performance statistics on Red Hat Enterprise Linux. The configuration adds the minimum number of packages needed on a production system to gather data for further analysis.

You can analyze the resulting tar.gz file, which contains the archive of the pmlogger output, by using various PCP tools and compare it with other sources of performance data.


Procedure

  1. Update the pmlogger configuration:

    # pmlogconf -r /var/lib/pcp/config/pmlogger/config.default
  2. Start the pmcd and pmlogger services:

    # systemctl start pmcd.service
    # systemctl start pmlogger.service
  3. Run the required operations to record the performance data.
  4. Save the output to a tar.gz file named after the host name and the current date and time:

    # cd /var/log/pcp/pmlogger/
    # tar -czf $(hostname).$(date +%F-%Hh%M).pcp.tar.gz $(hostname)
  5. Extract this file and analyze the data using PCP tools.
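The packaging step above can be sketched end to end with ordinary shell tools; the temporary directory and sample file below are stand-ins for a real pmlogger archive directory.

```shell
# Sketch: build a tarball named after the host and timestamp, as in step 4.
# The sample file is a stand-in for real pmlogger archive output.
host=$(hostname)
stamp=$(date +%F-%Hh%M)
demo_dir=$(mktemp -d)
mkdir -p "$demo_dir/$host"
echo demo > "$demo_dir/$host/sample.0"
tar -C "$demo_dir" -czf "$demo_dir/$host.$stamp.pcp.tar.gz" "$host"
tar -tzf "$demo_dir/$host.$stamp.pcp.tar.gz"
```

On a real system, the archive directory is /var/log/pcp/pmlogger/ and you would analyze the extracted archive with tools such as pmlogsummary.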

4.3. System services and tools distributed with PCP

The basic package pcp includes the system services and basic tools. You can install additional tools that are provided with the pcp-system-tools, pcp-gui, and pcp-devel packages.

Roles of system services distributed with PCP

pmcd
The Performance Metric Collector Daemon.
pmie
The Performance Metrics Inference Engine.
pmlogger
The performance metrics logger.
pmproxy
The real-time and historical performance metrics proxy, time series query, and REST API service.

Tools distributed with base PCP package

pcp
Displays the current status of a Performance Co-Pilot installation.
pcp-check
Activates or deactivates core and optional components, such as pmcd, pmlogger, pmproxy, and PMDAs.
pcp-vmstat
Provides a high-level system performance overview every 5 seconds. Displays information about processes, memory, paging, block IO, traps, and CPU activity.
pmconfig
Displays the values of configuration parameters.
pmdiff
Compares average metric values across two archives within a time window to identify potential performance regressions.
pmdumplog
Displays control, metadata, index, and state information from a Performance Co-Pilot archive file.
pmfind
Finds PCP services on the network.
pmie
An inference engine that periodically evaluates a set of arithmetic, logical, and rule expressions. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file.
pmieconf
Displays or sets configurable pmie variables.
pmiectl
Manages non-primary instances of pmie.
pminfo
Displays information about performance metrics. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file.
pmlc
Interactively configures active pmlogger instances.
pmlogcheck
Identifies invalid data in a Performance Co-Pilot archive file.
pmlogconf
Creates and modifies a pmlogger configuration file.
pmlogctl
Manages non-primary instances of pmlogger.
pmloglabel
Verifies, modifies, or repairs the label of a Performance Co-Pilot archive file.
pmlogsummary
Calculates statistical information about performance metrics stored in a Performance Co-Pilot archive file.
pmprobe
Determines the availability of performance metrics.
pmsocks
Provides access to Performance Co-Pilot hosts through a firewall.
pmstat
Periodically displays a brief summary of system performance.
pmstore
Modifies the values of performance metrics.
pmseries
Fast, scalable time series querying, using the facilities of PCP and a distributed key-value data store such as Valkey.
pmtrace
Provides a command-line interface to the trace PMDA.
pmval
An updating display of the current value of any performance metric.
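To illustrate the kind of file that pmlogconf generates and pmlc edits, here is a minimal hand-written pmlogger configuration fragment; the metric names are standard PCP metrics, and the 10-second interval is an arbitrary choice:

```
# Sketch of a pmlogger configuration fragment in the format accepted
# by pmlogger -c; see pmlogger(1) for the full syntax.
log mandatory on every 10 seconds {
    kernel.all.load
    kernel.all.cpu.user
    mem.util.free
}
```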

Tools distributed with the separately installed pcp-system-tools package

pcp-atop
Shows the system-level occupation of the most critical hardware resources from the performance point of view: CPU, memory, disk, and network.
pcp-atopsar
Generates a system-level activity report covering a variety of system resource utilization metrics. The report is generated from a raw logfile previously recorded by using pmlogger or the -w option of pcp-atop.
pcp-dmcache
Displays information about configured Device Mapper Cache targets. Metrics include device IOPS, utilization, and read/write hit/miss rates for each cache device.
pcp-dstat
Displays metrics of one system at a time. To display metrics of multiple systems, use the --host option.
pcp-free
Reports on free and used memory in a system.
pcp-htop
Lists all processes running on a system along with their command-line arguments. It is similar to the top command, with vertical and horizontal scrolling and mouse interaction.
pcp-ipcs
Displays information about the inter-process communication (IPC) facilities to which the calling process has read access.
pcp-mpstat
Reports CPU and interrupt-related statistics.
pcp-numastat
Displays NUMA allocation statistics from the kernel memory allocator.
pcp-pidstat
Displays information about individual tasks or processes running on the system. This includes CPU percentage, memory and stack usage, scheduling, and priority. Reports live data for the local host by default.
pcp-shping
Samples and reports on the shell-ping service metrics exported by the pmdashping Performance Metrics Domain Agent (PMDA).
pcp-ss
Displays socket statistics collected by the pmdasockets PMDA.
pcp-tapestat
Reports I/O statistics for tape devices.
pcp-uptime
Displays the system uptime, currently logged on users, and system load averages for the past 1, 5, and 15 minutes.
pcp-verify
Inspects various aspects of a Performance Co-Pilot collector’s configuration and ensures it is optimized for specific operations.
pcp-iostat
Reports I/O statistics for SCSI devices (by default) or device-mapper devices (with the -x device-mapper option).
pmrep
Reports on selected, easily customizable performance metric values.
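pmrep report layouts can also be predefined in its configuration file. The following section sketches one such layout; the [cpu-brief] name and column labels are hypothetical, while the metric names are standard PCP metrics (see pmrep.conf(5) for the actual format):

```
# Hypothetical pmrep.conf section defining a compact CPU report.
[cpu-brief]
header = yes
kernel.all.load = load
kernel.all.cpu.user = user
kernel.all.cpu.sys = sys
```

Such a section could then be invoked by name, for example as pmrep :cpu-brief.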

Tools distributed with the separately installed pcp-gui package

pmchart
Plots performance metric values available through the facilities of PCP.
pmdumptext
Outputs the values of performance metrics collected live or from a PCP archive.

Tools distributed with the separately installed pcp-devel package

pmclient
Displays high-level system performance metrics by using the Performance Metrics Application Programming Interface (PMAPI).
pmdbg
Displays available Performance Co-Pilot debug control flags and their values.
pmerr
Displays available Performance Co-Pilot error codes and their corresponding error messages.
pcp-xsos
Gives a fast summary report for a system by using a single sample taken from either a PCP archive or live metrics from the system.

Other tools distributed as separate packages

pcp-geolocate
Discovers collector system geographical labels.
pcp2openmetrics
A customizable tool for exporting performance metrics from PCP to the Open Metrics format. You can select any live or archived PCP metric for export by using either command-line arguments or a configuration file.

4.4. PCP deployment architectures

PCP supports multiple deployment architectures, based on the scale of the PCP deployment, and offers many options to accomplish advanced configurations.

Available scaling deployment configuration variants, determined by sizing factors and configuration options, include the following:

Localhost
Each service runs locally on the monitored machine. Starting a service without any configuration changes results in a default standalone deployment on the localhost. This configuration does not support scaling beyond a single node.
Decentralized
The only difference between the localhost and decentralized configurations is the centralized Valkey service. In this model, the pmlogger service runs on each monitored host and retrieves metrics from the local pmcd instance. A local pmproxy service then exports the performance metrics to a central Valkey instance.

Figure 4.1. Decentralized logging

Centralized logging - pmlogger farm
For resource-constrained hosts, use a pmlogger farm. This deployment is also called centralized logging. A single logger host runs pmlogger processes to collect metrics from multiple remote hosts. The centralized logger host runs the pmproxy service, which discovers the resulting PCP archives and loads the metric data into a Valkey instance.

Figure 4.2. Centralized logging - pmlogger farm

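A farm like this is typically driven by entries in a pmlogger control file on the central logger host. The sketch below uses hypothetical host names and file name; the field layout (host, primary flag, socks flag, archive directory, pmlogger options) follows the pmlogger control file format:

```
# Hypothetical /etc/pcp/pmlogger/control.d/remote entries for a farm.
# Each line starts one pmlogger instance pulling from a remote pmcd.
host1.example.com  n  n  PCP_ARCHIVE_DIR/host1  -r -T24h10m -c config.host1
host2.example.com  n  n  PCP_ARCHIVE_DIR/host2  -r -T24h10m -c config.host2
```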
Federated - multiple pmlogger farms
For large scale deployments, deploy multiple pmlogger farms in a federated fashion. For example, one pmlogger farm per rack or data center. Each pmlogger farm loads the metrics into a central Valkey instance.

Figure 4.3. Federated - multiple pmlogger farms

Note

By default, the deployment configuration for Valkey is standalone, localhost. However, Valkey can optionally perform in a highly-available and highly scalable clustered fashion, where data is shared across multiple hosts. Another option is to deploy a Valkey cluster in the cloud, or to use a managed Valkey cluster from a cloud provider.

4.5. Factors affecting scaling in PCP logging

The key factors influencing Performance Co-Pilot (PCP) logging are hardware resources, logged metrics, logging intervals, and post-upgrade archive management.

Remote system size
Remote hardware, such as CPUs, disks, and network interfaces, determines the volume of data collected by each pmlogger instance.
Logged metrics
The number and types of logged metrics significantly affect storage requirements. In particular, the per-process proc.* metrics require a large amount of disk space. For example, a standard pcp-zeroconf logging at 10-second intervals uses 11 MB. Enabling proc metrics increases this to 155 MB. Additionally, the number of CPUs, block devices, and network interfaces impacts storage capacity requirements.
Logging interval
The frequency of metric logging determines storage usage. The expected daily PCP archive file sizes for each pmlogger instance are recorded in the pmlogger.log file. PCP archives typically compress at about 10:1; use this ratio, together with the uncompressed daily sizes, to estimate long-term disk space requirements.
Managing archive updates with pmlogrewrite
After PCP upgrades, pmlogrewrite updates existing archives if changes are detected in the metric metadata between versions. The time required for this process scales linearly with the number of stored archives.
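The sizing figures above can be combined into a back-of-the-envelope storage estimate. The host count below is an arbitrary example; the 11 MB/day figure and the 10:1 compression ratio are the ones quoted in this section.

```shell
# Rough daily storage estimate for a fleet logged at 10-second intervals
# without per-process proc metrics.
hosts=50          # arbitrary example fleet size
per_host_mb=11    # uncompressed MB/day per host, as quoted above
uncompressed_mb=$((hosts * per_host_mb))
compressed_mb=$((uncompressed_mb / 10))   # ~10:1 archive compression
echo "uncompressed: ${uncompressed_mb} MB/day, compressed: ~${compressed_mb} MB/day"
```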