Chapter 4. Setting up PCP


Performance Co-Pilot (PCP) is a suite of tools, services, and libraries for monitoring, visualizing, storing, and analyzing system-level performance measurements. You can add performance metrics using Python, Perl, C++, and C interfaces. Analysis tools can use the Python, C++, and C client APIs directly, and rich web applications can explore all available performance data using a JSON interface. You can analyze data patterns by comparing live results with archived data.

Features of PCP
  • Light-weight distributed architecture useful during the centralized analysis of complex systems.
  • Ability to monitor and manage real-time data.
  • Ability to log and retrieve historical data.
PCP has the following components:
  • The Performance Metric Collector Daemon (pmcd) collects performance data from the installed Performance Metric Domain Agents (PMDA). PMDAs can be individually loaded or unloaded on the system and are controlled by the PMCD on the same host.
  • Various client tools, such as pminfo or pmstat, can retrieve, display, archive, and process this data on the same host or over the network, as shown in the example after this list.
  • The pcp and pcp-system-tools packages provide the command-line tools and core functionality.
  • The pcp-gui package provides the graphical application pmchart.
  • The grafana-pcp package provides powerful web-based visualization and alerting with Grafana.
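For example, assuming pmcd is running on a remote host named server1.example.com (the host name is illustrative), client tools can query its metrics over the network:

    # pminfo -f -h server1.example.com kernel.all.load
    # pmstat -h server1.example.com -s 5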

4.1. Installing and enabling PCP

Install the required packages and enable the PCP monitoring services to start using it. You can also automate the PCP installation by using the pcp-zeroconf package. For more information about installing PCP by using pcp-zeroconf, see Setting up PCP with pcp-zeroconf.

Procedure

  1. Install the pcp package:

    # dnf install pcp
  2. Enable and start the pmcd service on the host machine:

    # systemctl enable pmcd
    # systemctl start pmcd

Verification

  • Verify that the pmcd process is running on the host:

    # pcp
    Performance Co-Pilot configuration on arm10.local:
    
     platform: Linux arm10.local 6.12.0-55.13.1.el10_0.aarch64 #1 SMP PREEMPT_DYNAMIC Mon May 19 07:29:57 UTC 2025 aarch64
     hardware: 4 cpus, 1 disk, 1 node, 3579MB RAM
     timezone: JST-9
     services: pmcd
         pmcd: Version 6.3.7-1, 12 agents, 6 clients
         pmda: root pmcd proc pmproxy xfs linux nfsclient mmv kvm jbd2
               dm openmetrics
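  • Optionally, fetch a metric value directly from the running pmcd to confirm that it responds to client requests, for example:

    # pminfo -f kernel.all.load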

4.2. Deploying a minimal PCP setup

The minimal PCP setup collects performance statistics on Red Hat Enterprise Linux. The setup adds the minimum number of packages needed on a production system to gather data for further analysis. You can analyze the resulting tar.gz file, which contains the pmlogger output archive, by using various PCP tools, and compare the data with other sources of performance information.

Prerequisites

  • PCP is installed. For more information, see Installing and enabling PCP.

Procedure

  1. Update the pmlogger configuration:

    # pmlogconf -r /var/lib/pcp/config/pmlogger/config.default
  2. Start the pmcd and pmlogger services:

    # systemctl start pmcd.service
    # systemctl start pmlogger.service
  3. Execute the required operations to record the performance data.
  4. Save the output to a tar.gz file named after the host name and the current date and time:

    # cd /var/log/pcp/pmlogger/
    # tar -czf $(hostname).$(date +%F-%Hh%M).pcp.tar.gz $(hostname)
  5. Extract this file and analyze the data by using PCP tools, as shown in the following example.
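    For example, the following commands extract the archive and report summary statistics from it. Replace <hostname>, <date>, and <archive> with the actual file and archive names created in the previous steps; the names shown here are placeholders:

    # tar -xzf <hostname>.<date>.pcp.tar.gz
    # pmlogsummary <hostname>/<archive>
    # pmstat -a <hostname>/<archive>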

4.3. System services and tools distributed with PCP

The basic package pcp includes the system services and basic tools. You can install additional tools that are provided with the pcp-system-tools, pcp-gui, and pcp-devel packages.

Roles of system services distributed with PCP

pmcd
The Performance Metric Collector Daemon.
pmie
The Performance Metrics Inference Engine.
pmlogger
The performance metrics logger.
pmproxy
The realtime and historical performance metrics proxy, time series query and REST API service.
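
All of these services are managed through systemd. For example, to enable and start the optional pmlogger and pmproxy services in addition to pmcd:

    # systemctl enable --now pmlogger pmproxy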

Tools distributed with base PCP package

pcp
Displays the current status of a Performance Co-Pilot installation.
pcp-check
Activates or deactivates core and optional components, such as pmcd, pmlogger, pmproxy, and PMDAs.
pcp-vmstat
Provides a high-level system performance overview every 5 seconds. Displays information about processes, memory, paging, block IO, traps, and CPU activity.
pmconfig
Displays the values of configuration parameters.
pmdiff
Compares the average values for every metric in either one or two archives, in a given time window, for changes that are likely to be of interest when searching for performance regressions.
pmdumplog
Displays control, metadata, index, and state information from a Performance Co-Pilot archive file.
pmfind
Finds PCP services on the network.
pmie
An inference engine that periodically evaluates a set of arithmetic, logical, and rule expressions. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file.
pmieconf
Displays or sets configurable pmie variables.
pmiectl
Manages non-primary instances of pmie.
pminfo
Displays information about performance metrics. The metrics are collected either from a live system, or from a Performance Co-Pilot archive file.
pmlc
Interactively configures active pmlogger instances.
pmlogcheck
Identifies invalid data in a Performance Co-Pilot archive file.
pmlogconf
Creates and modifies a pmlogger configuration file.
pmlogctl
Manages non-primary instances of pmlogger.
pmloglabel
Verifies, modifies, or repairs the label of a Performance Co-Pilot archive file.
pmlogsummary
Calculates statistical information about performance metrics stored in a Performance Co-Pilot archive file.
pmprobe
Determines the availability of performance metrics.
pmsocks
Allows access to Performance Co-Pilot hosts through a firewall.
pmstat
Periodically displays a brief summary of system performance.
pmstore
Modifies the values of performance metrics.
pmseries
Fast, scalable time series querying, using the facilities of PCP and a distributed key-value data store such as Valkey.
pmtrace
Provides a command-line interface to the trace PMDA.
pmval
An updating display of the current value of any performance metric.
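
The following commands show typical invocations of a few of these tools; the metric names and sample counts are illustrative:

    # pmval -s 3 kernel.all.load
    # pmstat -s 5
    # pmprobe -v mem.freemem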

Tools distributed with the separately installed pcp-system-tools package

pcp-atop
Shows the system-level occupation of the most critical hardware resources from the performance point of view: CPU, memory, disk, and network.
pcp-atopsar
Generates a system-level activity report over a variety of system resource utilization. The report is generated from a raw logfile previously recorded using pmlogger or the -w option of pcp-atop.
pcp-dmcache
Displays information about configured Device Mapper Cache targets, such as: device IOPs, cache and metadata device utilization, as well as hit and miss rates and ratios for both reads and writes for each cache device.
pcp-dstat
Displays metrics of one system at a time. To display metrics of multiple systems, use the --host option.
pcp-free
Reports on free and used memory in a system.
pcp-htop
Displays all processes running on a system along with their command line arguments in a manner similar to the top command, but allows you to scroll vertically and horizontally as well as interact using a mouse. You can also view processes in a tree format and select and act on multiple processes at once.
pcp-ipcs
Displays information about the inter-process communication (IPC) facilities that the calling process has read access for.
pcp-mpstat
Reports CPU and interrupt-related statistics.
pcp-numastat
Displays NUMA allocation statistics from the kernel memory allocator.
pcp-pidstat
Displays information about individual tasks or processes running on the system, such as CPU percentage, memory and stack usage, scheduling, and priority. Reports live data for the local host by default.
pcp-shping
Samples and reports on the shell-ping service metrics exported by the pmdashping Performance Metrics Domain Agent (PMDA).
pcp-ss
Displays socket statistics collected by the pmdasockets PMDA.
pcp-tapestat
Reports I/O statistics for tape devices.
pcp-uptime
Displays how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.
pcp-verify
Inspects various aspects of a Performance Co-Pilot collector installation and reports on whether it is configured correctly for certain modes of operation.
pcp-iostat
Reports I/O statistics for SCSI devices (by default) or device-mapper devices (with the -x device-mapper option).
pmrep
Reports on selected, easily customizable, performance metrics values.
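
Many of these tools can also be invoked through the pcp command. The following examples are illustrative; adapt the metric names and options as needed:

    # pcp atop
    # pcp free -m
    # pmrep -t 2sec -s 5 kernel.all.load mem.util.used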

Tools distributed with the separately installed pcp-gui package

pmchart
Plots performance metrics values available through the facilities of PCP.
pmdumptext
Outputs the values of performance metrics collected live or from a PCP archive.
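
For example, pmdumptext can print metric values at a fixed interval, either live or from an archive; the metric name below is illustrative:

    # pmdumptext -t 1 -s 5 kernel.all.load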

Tools distributed with the separately installed pcp-devel package

pmclient
Displays high-level system performance metrics by using the Performance Metrics Application Programming Interface (PMAPI).
pmdbg
Displays available Performance Co-Pilot debug control flags and their values.
pmerr
Displays available Performance Co-Pilot error codes and their corresponding error messages.
pcp-xsos
Gives a fast summary report for a system by using a single sample taken from either a PCP archive or live metric values from that system.

Other tools distributed as separate packages

pcp-geolocate
Discovers collector system geographical labels.
pcp2openmetrics
Exports performance metrics from PCP to the Open Metrics format. You can select any available performance metric, live or archived, system or application, for export by using either command-line arguments or a configuration file.

4.4. PCP deployment architectures

PCP supports multiple deployment architectures, based on the scale of the PCP deployment, and offers many options to accomplish advanced setups. Available scaling deployment setup variants, determined by sizing factors and configuration options, include the following:

Localhost
Each service runs locally on the monitored machine. Starting a service without any configuration changes results in a default standalone deployment on the localhost. This setup does not support scaling beyond a single node.
Decentralized
The only difference between the localhost and decentralized setups is the centralized Valkey service. In this model, the pmlogger service runs on each monitored host and retrieves metrics from the local pmcd instance. A local pmproxy service then exports the performance metrics to a central Valkey instance.

Figure 4.1. Decentralized logging

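In this model, each monitored host runs the full set of local services. A minimal sketch, assuming the address of the central Valkey instance is already configured for the local pmproxy service:

    # systemctl enable --now pmcd pmlogger pmproxy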
Centralized logging - pmlogger farm
When the resource usage on the monitored hosts is constrained, another deployment option is a pmlogger farm, which is also known as centralized logging. In this setup, a single logger host executes multiple pmlogger processes, and each is configured to retrieve performance metrics from a different remote pmcd host. The centralized logger host is also configured to execute the pmproxy service, which discovers the resulting PCP archive logs and loads the metric data into a Valkey instance.

Figure 4.2. Centralized logging - pmlogger farm

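One way to add a remote host to a pmlogger farm is the pmlogctl tool on the centralized logging host. A minimal sketch, assuming the remote host remote.example.com (an illustrative name) runs pmcd and is reachable over the network:

    # pmlogctl create remote.example.com
    # pmlogctl status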
Federated - multiple pmlogger farms
For large scale deployments, deploy multiple pmlogger farms in a federated fashion. For example, one pmlogger farm per rack or data center. Each pmlogger farm loads the metrics into a central Valkey instance.

Figure 4.3. Federated - multiple pmlogger farms

Note

By default, the deployment setup for Valkey is standalone, localhost. However, Valkey can optionally perform in a highly-available and highly scalable clustered fashion, where data is shared across multiple hosts. Another viable option is to deploy a Valkey cluster in the cloud, or to utilize a managed Valkey cluster from a cloud vendor.

4.5. Factors affecting scaling in PCP logging

The key factors influencing Performance Co-Pilot (PCP) logging are hardware resources, logged metrics, logging intervals, and post-upgrade archive management.

Remote system size
The hardware configuration of the remote system, such as the number of CPUs, disks, and network interfaces, directly impacts the volume of data collected by each pmlogger instance on the centralized logging host.
Logged metrics
The number and types of logged metrics significantly affect storage requirements. In particular, the per-process proc.* metrics require a large amount of disk space. For example, with the standard pcp-zeroconf setup and a 10 second logging interval, the archive size is 11 MB without the proc metrics but increases to 155 MB with the proc metrics enabled, a ten-fold difference. Additionally, the number of instances for each metric, for example the number of CPUs, block devices, and network interfaces, also impacts the required storage capacity.
Logging interval
The frequency of metric logging determines storage usage. The expected daily PCP archive file sizes for each pmlogger instance are recorded in the pmlogger.log file. These estimates represent uncompressed data, but since PCP archives typically achieve a compression ratio of 10:1, long-term disk space requirements can be calculated accordingly.
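
For example, to check the current on-disk size of the archives for the local host and to review the size estimates that pmlogger writes to its log file (the path shown is the default archive location):

    # du -sh /var/log/pcp/pmlogger/$(hostname)
    # less /var/log/pcp/pmlogger/$(hostname)/pmlogger.log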
Managing archive updates with pmlogrewrite
After every PCP upgrade, the pmlogrewrite tool updates existing archives if changes are detected in the metric metadata between versions. The time required for this process scales linearly with the number of stored archives.
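
You can also run pmlogrewrite manually against an individual archive. A minimal sketch, where <archive> is the base name of an existing archive and the -i option rewrites the archive in place:

    # pmlogrewrite -i /var/log/pcp/pmlogger/$(hostname)/<archive>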