Monitoring Satellite performance

Red Hat Satellite 6.15

Collect metrics from Satellite and allow their analysis in external tools

Red Hat Satellite Documentation Team


This guide describes how to gather metrics from Red Hat Satellite 6 for analysis. It is aimed at Satellite administrators.

Providing feedback on Red Hat documentation

We appreciate your feedback on our documentation. Let us know how we can improve it.

Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress.



  1. Click the following link: Create Issue. If Jira displays a login error, log in and proceed after you are redirected to the form.
  2. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
  3. Click Create.

Chapter 1. Metrics overview

You can set up a third-party solution to collect live metrics from Satellite Server, archive them for a fixed period of time, and analyze them. Obtaining metrics from Satellite is useful for capacity planning and troubleshooting performance issues.

If you need to raise a support case with Red Hat to resolve a performance issue, the archived data provides valuable insight. Note that Red Hat Support can only access the archived data if you upload it to a support case.

You can collect the following metrics from Satellite:

  • Basic statistics from the operating system, including system load, memory utilization, and input/output operations
  • Process statistics, including memory and CPU utilization
  • Apache HTTP Server activity statistics
  • PostgreSQL activity statistics
  • Satellite application statistics

Chapter 2. Metrics solution components

Red Hat recommends using Performance Co-Pilot to collect and archive Satellite metrics.

Performance Co-Pilot (PCP)
Performance Co-Pilot is a suite of tools and libraries for acquiring, storing, and analyzing system-level performance measurements. You can use PCP to analyze live and historical metrics in the CLI.
Performance Metric Domain Agents (PMDA)
A Performance Metric Domain Agent is a PCP add-on that enables access to the metrics of an application or service. To gather all metrics relevant to Satellite, you must install the PMDAs for Apache HTTP Server and PostgreSQL.
Grafana
A web application that visualizes metrics collected by PCP. To analyze metrics in the web UI, you must install Grafana and the Grafana PCP plugin.

Chapter 3. Setting up the metrics monitoring solution

Install PCP packages and configure PCP data collection. You can use the PCP CLI tools to retrieve metrics in the command line. Optionally, you can install Grafana to enable web UI access to metrics.

3.1. Installing PCP

Install the PCP packages on your Satellite Server and enable PCP daemons.


Prerequisites

  • Ensure that you have a minimum of 20 GB of space available in the /var/log/pcp directory.

    With the default PCP data retention settings, data storage is estimated to use between 100 MB and 500 MB of disk space per day, but may use up to several gigabytes over time. For more information, see Chapter 4, Metrics data retention.


Procedure

  1. Install the PCP packages:

    # satellite-maintain packages install pcp \
    pcp-pmda-apache \
    pcp-pmda-openmetrics \
    pcp-pmda-postgresql \
    pcp-pmda-redis \
    pcp-system-tools
  2. Enable and start the Performance Metrics Collector daemon and Performance Metrics Logger daemon:

    # systemctl enable --now pmcd pmlogger

3.2. Configuring PCP data collection

You can configure PCP to collect metrics about processes, Satellite, Apache HTTP Server, and PostgreSQL.


Procedure

  1. Symlink the Satellite-specific configuration to the PMDA process monitoring configuration:

    # ln -s /etc/pcp/proc/foreman-hotproc.conf /var/lib/pcp/pmdas/proc/hotproc.conf

    By default, PCP only collects basic system metrics. This step enables detailed metrics about the following Satellite processes:

    • Java
    • PostgreSQL
    • Redis
    • Dynflow
    • Puma
    • Pulpcore
  2. Install the process monitoring PMDA:

    # cd /var/lib/pcp/pmdas/proc
    # ./Install
  3. Configure PCP to collect metrics from Apache HTTP Server.

    1. Enable the Apache HTTP Server extended status module:

      # satellite-installer --enable-apache-mod-status
    2. Enable the Apache HTTP Server PMDA:

      # cd /var/lib/pcp/pmdas/apache
      # ./Install
  4. Configure PCP to collect metrics from PostgreSQL:

    # cd /var/lib/pcp/pmdas/postgresql
    # ./Install
  5. Enable the telemetry feature in Satellite:

    # satellite-installer --foreman-telemetry-prometheus-enabled true
  6. Configure PCP to collect data from Satellite. Replace satellite.example.com with the FQDN of your Satellite Server:

    # cd /var/lib/pcp/pmdas/openmetrics
    # echo "https://satellite.example.com/metrics" > config.d/foreman.url
    # ./Install
  7. Restart PCP to begin data collection:

    # systemctl restart pmcd pmlogger

3.3. Verifying PCP configuration

You can verify that PCP is configured correctly and services are active.


Procedure

  • Print a summary of the active PCP configuration:

    # pcp

    Example output of the pcp command:

    Performance Co-Pilot configuration on
     platform: Linux 4.18.0-372.32.1.el8_6.x86_64 #1 SMP Fri Oct 7 12:35:10 EDT 2022 x86_64
     hardware: 16 cpus, 2 disks, 1 node, 31895MB RAM
     timezone: UTC
     services: pmcd pmproxy
         pmcd: Version 5.3.7-17, 13 agents, 4 clients
         pmda: root pmcd proc pmproxy xfs redis linux apache mmv kvm
               postgresql jbd2 openmetrics
     pmlogger: primary logger: /var/log/pcp/pmlogger/
         pmie: primary engine: /var/log/pcp/pmie/

    In this example, both the Performance Metrics Collector Daemon (pmcd) and the Performance Metrics Proxy Daemon (pmproxy) services are running. The output also confirms which PMDAs are collecting metrics. Finally, it lists the location in which pmlogger is currently storing metrics.

3.4. Enabling web UI access to metrics

You can enable web UI access to metrics collected by PCP by installing Grafana.


Procedure

  1. Install Grafana and the Grafana PCP plugin on your Satellite Server:

    # satellite-maintain packages install grafana grafana-pcp
  2. Start and enable the Grafana web service and the PCP proxy service:

    # systemctl enable --now pmproxy grafana-server
  3. Open the firewall port to allow access to the Grafana web interface:

    # firewall-cmd --permanent --add-service=grafana
  4. Reload the firewall configuration to apply the changes:

    # firewall-cmd --reload
  5. Install PCP Redis and configure Grafana to load it. For more information, see Configuring PCP Redis in Red Hat Enterprise Linux 8 Monitoring and managing system status and performance.
  6. Access the Grafana web UI, enable the PCP plugin, and add PCP Redis as a data source. For more information, see Accessing the Grafana web UI in Red Hat Enterprise Linux 8 Monitoring and managing system status and performance.

Chapter 4. Metrics data retention

The storage capacity required by PCP data logging is determined by the following factors:

  • The logged metrics
  • The logging interval
  • The retention policy

The default logging (sampling) interval is 60 seconds. The default retention policy is to compress archives older than one day and to keep archives for the last 14 days.

You can increase the logging interval or shorten the retention period to save storage space. If you require high-resolution sampling, you can decrease the logging interval. In that case, ensure that you have enough storage space.

PCP archive logs are stored in the /var/log/pcp/pmlogger/ directory.

4.1. Changing default logging interval

You can change the default logging interval to increase or decrease the sampling rate at which PCP metrics are logged. A larger interval results in a lower sampling rate.


Procedure

  1. Open the /etc/pcp/pmlogger/control.d/local configuration file.
  2. Locate the line that begins with LOCALHOSTNAME.
  3. Append -t XXs to that line, where XX is the required logging interval in seconds.
  4. Save the file.
  5. Restart the pmlogger service:

    # systemctl restart pmlogger
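For illustration, a control.d/local entry with a 10-second logging interval appended might look like the following. The options before -t 10s reflect common stock defaults and may differ on your system; keep whatever options your file already contains:

```
LOCALHOSTNAME  y  n  PCP_ARCHIVE_DIR/LOCALHOSTNAME  -r -T24h10m -c config.default -v 100Mb -t 10s
```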

4.2. Changing data retention policy

You can change the data retention policy to control how long PCP data is kept before it is compressed and eventually deleted.


Procedure

  1. Open the /etc/sysconfig/pmlogger_timers file.
  2. Locate the PMLOGGER_DAILY_PARAMS line.
  3. If the line is commented, uncomment the line.
  4. Configure the following parameters:

    • Ensure the default -E parameter is present.
    • Append the -x parameter with a value for the number of days after which data is compressed.
    • Append the -k parameter with a value for the number of days after which data is deleted.

    For example, the parameters -x 4 -k 7 specify that data is compressed after 4 days and deleted after 7 days.

  5. Save the file.
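For example, with compression after 4 days and deletion after 7 days, as described above, the resulting line reads (quoting style may vary in your file):

```
PMLOGGER_DAILY_PARAMS="-E -x 4 -k 7"
```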

4.3. Viewing data storage statistics

You can list all available metrics, grouped by the frequency at which they are logged. For each group, you can also view the storage required to store the listed metrics, per day.

Example storage statistics:

logged every 60 sec: 61752 bytes or 84.80 Mbytes/day


Procedure

  • To view data storage statistics, enter the following command on your Satellite Server:

    # less /var/log/pcp/pmlogger/
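The daily figure in the example output above follows from the sample size and the logging interval. A quick sketch of the arithmetic, assuming one 61752-byte sample group logged every 60 seconds:

```shell
# Hypothetical figures taken from the example statistics above
bytes_per_sample=61752
samples_per_day=$(( 24 * 60 * 60 / 60 ))   # one sample every 60 seconds -> 1440 per day

# Daily storage in bytes, converted to mebibytes
bytes_per_day=$(( bytes_per_sample * samples_per_day ))
awk -v b="$bytes_per_day" 'BEGIN { printf "%.2f Mbytes/day\n", b / 1048576 }'
# prints "84.80 Mbytes/day"
```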

Chapter 5. PCP metrics

Metrics are stored in a tree-like structure. For example, all network metrics are stored in a node named network. Each metric can be a single value or a list of values, known as instances. For example, the kernel load metric has three instances: a 1-minute, a 5-minute, and a 15-minute average.

For every metric entry, PCP stores both its data and its metadata. The metadata includes the metric's description, data type, units, and dimensions. For example, the metadata enables PCP to correctly output and compare metrics that have different dimensions.

If a metric is a counter, its value can only increase. For example, a count of disk write operations on a specific device only increases. When you query the value of a counter metric, PCP converts this into a rate value by default.
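The counter-to-rate conversion described above divides the difference between two successive counter samples by the time elapsed between them. A minimal sketch with hypothetical sample values:

```shell
# Two successive samples of a cumulative counter metric (hypothetical values)
t0=0;  count0=1500    # time in seconds, cumulative disk write count
t1=60; count1=1620

# Rate = counter delta / time delta, which is how PCP reports counters by default
awk -v dc=$(( count1 - count0 )) -v dt=$(( t1 - t0 )) \
    'BEGIN { printf "%.1f writes/sec\n", dc / dt }'
# prints "2.0 writes/sec"
```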

In addition to system metrics, such as CPU, memory, kernel, XFS, disk, and network metrics, the following metrics are configured:

  • Basic metrics of key Satellite processes
  • Apache HTTP Server metrics
  • Basic PostgreSQL statistics
  • Satellite metrics

5.1. Identifying available metrics

Procedure

  • To list all metrics available through PCP, enter the following command:

    # pminfo
  • To list all Satellite metrics and their descriptions, enter the following command:

    # foreman-rake telemetry:metrics
  • To list the archived metrics, enter the following command:

    # less /var/log/pcp/pmlogger/
  • The pmlogger daemon archives data as received, according to its configuration. To retrieve the name of the active log file, enter the following command:

    # pcp | grep logger

    The output includes the file name of the active log file, for example:

     pmlogger: primary logger: /var/log/pcp/pmlogger/

5.2. Retrieving live metrics

You can use the PCP CLI tools to retrieve live metrics.


Procedure

  • To print the current values of a particular metric, enter the following command:

    # pmval -f 1 disk.partitions.write

    In this example, the instances of writes to disk partitions are displayed. PCP converts the number of writes from a counter value to a rate value. The -f 1 argument formats the values to one decimal place.

    Example output:

    metric:    disk.partitions.write
    semantics: cumulative counter (converting to rate)
    units:     count (converting to count / sec)
    samples:   all
                     vda1                  vda2                   sr0
                      0.0                  12.0                   0.0
                      0.0                   1.0                   0.0
                      0.0                   1.0                   0.0
                      0.0                   2.0                   0.0
  • To print a system performance summary every 2 seconds, enter the following command:

    # pmstat -t 2sec

5.3. Retrieving archived metrics

You can use the PCP CLI tools to retrieve metrics from an archive file.


Procedure

  • To list all metrics that were enabled when the archive file was created, enter the following command:

    # pminfo --archive archive_file
  • To view the host and time period covered by an archive file, enter the following command:

    # pmdumplog -l archive_file
  • To list disk writes for each partition, over the time period covered by the archive file:

    # pmval --archive /var/log/pcp/pmlogger/ \
      -f 1 disk.partitions.write
  • To list disk write operations per partition, with a 2-second interval, over the time period between 14:00 and 14:15:

    # pmval --archive /var/log/pcp/pmlogger/ \
      -d -t 2sec \
      -f 3 disk.partitions.write \
      -S @14:00 -T @14:15
  • To list average values of all performance metrics, including the time and value of the minimum/maximum, over the time period between 14:00 and 14:30, and format the values as a table:

    # pmlogsummary /var/log/pcp/pmlogger/ \
      -HlfiImM \
      -S @14:00 \
      -T @14:30 \
      disk.partitions.write
  • To display system metrics stored in an archive, starting from 14:00, in an interactive manner similar to the top tool:

    # pcp --archive /var/log/pcp/pmlogger/ \
      -S @14:00

5.4. Retrieving metrics in the web UI

You can view the Grafana dashboard, which displays widgets with the values of metrics. You can add and remove metrics to suit your requirements. You can also select the time span displayed for each widget.


Procedure

  • Open the Grafana web UI in a browser. By default, the Grafana web server listens on port 3000 of your Satellite Server.

Additional resources

  • For more details on using Grafana, see the Grafana Labs website.

Legal Notice

Copyright © 2024 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.