Configuring core platform monitoring


Monitoring stack for Red Hat OpenShift 4.21

Configure the monitoring components, such as Prometheus and Alertmanager, to customize the stack to your cluster's requirements.

Red Hat OpenShift Documentation Team

Abstract

This document outlines configuration for core platform monitoring. It covers essential topics such as managing performance, setting up data storage and retention, and configuring alerts and notifications.

The OpenShift Container Platform installation program provides only a limited number of configuration options before installation. Configuring most OpenShift Container Platform framework components, including the cluster monitoring stack, happens after installation.

This section explains which monitoring components can be configured and how to prepare for configuring the monitoring stack.

1.1. Configurable monitoring components

Review configurable monitoring components and their corresponding config map keys used to specify the components in the cluster-monitoring-config config map.

Table 1.1. Configurable core platform monitoring components

  Component                   cluster-monitoring-config config map key
  --------------------------  ----------------------------------------
  Prometheus Operator         prometheusOperator
  Prometheus                  prometheusK8s
  Alertmanager                alertmanagerMain
  Thanos Querier              thanosQuerier
  kube-state-metrics          kubeStateMetrics
  monitoring-plugin           monitoringPlugin
  openshift-state-metrics     openshiftStateMetrics
  Telemeter Client            telemeterClient
  Metrics Server              metricsServer

Warning

Different configuration changes to the ConfigMap object result in different outcomes:

  • The pods are not redeployed. Therefore, there is no service outage.
  • The affected pods are redeployed:

    • For single-node clusters, this results in a temporary service outage.
    • For multi-node clusters, because of high availability, the affected pods are gradually rolled out and the monitoring stack remains available.
  • Configuring and resizing a persistent volume always results in a service outage, regardless of high availability.

Each procedure that requires a change in the config map includes its expected outcome.

1.2. Creating a cluster monitoring config map

Customize the default monitoring stack to match your infrastructure and performance requirements by creating and updating the cluster-monitoring-config config map in the openshift-monitoring project.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Check whether the cluster-monitoring-config ConfigMap object exists:

    $ oc -n openshift-monitoring get configmap cluster-monitoring-config
  2. If the ConfigMap object does not exist:

    1. Create the following YAML manifest. In this example the file is called cluster-monitoring-config.yaml:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
    2. Apply the configuration to create the ConfigMap object:

      $ oc apply -f cluster-monitoring-config.yaml
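You can confirm that the ConfigMap object now exists and inspect its contents; for example:

$ oc -n openshift-monitoring get configmap cluster-monitoring-config -o yaml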

1.3. Granting users permissions for core platform monitoring

As a cluster administrator, you can monitor all core platform and user-defined projects. To delegate core platform monitoring capabilities to other users, assign different permission levels.

You can grant the permissions by assigning one of the following monitoring roles or cluster roles:

cluster-monitoring-metrics-api
  Description: Users with this role can access Thanos Querier API endpoints, the core platform Prometheus API, and user-defined Thanos Ruler API endpoints.
  Project: openshift-monitoring

cluster-monitoring-operator-alert-customization
  Description: Users with this role can manage AlertingRule and AlertRelabelConfig resources for core platform monitoring. These permissions are required for the alert customization feature.
  Project: openshift-monitoring

monitoring-alertmanager-edit
  Description: Users with this role can manage the Alertmanager API for core platform monitoring. They can also manage alert silences in the OpenShift Container Platform web console.
  Project: openshift-monitoring

monitoring-alertmanager-view
  Description: Users with this role can monitor the Alertmanager API for core platform monitoring. They can also view alert silences in the OpenShift Container Platform web console.
  Project: openshift-monitoring

cluster-monitoring-view
  Description: Users with this cluster role have the same access rights as the cluster-monitoring-metrics-api role, plus access to the /federate endpoint for the user-defined Prometheus.
  Project: Must be bound with a ClusterRoleBinding to gain access to the /federate endpoint for the user-defined Prometheus.

1.3.1. Granting user permissions by using the web console

Cluster administrators can grant users permissions for the openshift-monitoring project or their own projects by using the OpenShift Container Platform web console.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • The user account that you are assigning the role to already exists.

Procedure

  1. In the OpenShift Container Platform web console, go to User Management → RoleBindings → Create binding.
  2. In the Binding Type section, select the Namespace Role Binding type.
  3. In the Name field, enter a name for the role binding.
  4. In the Namespace field, select the project where you want to grant the access.

    Important

    The monitoring role or cluster role permissions that you grant to a user by using this procedure apply only to the project that you select in the Namespace field.

  5. Select a monitoring role or cluster role from the Role Name list.
  6. In the Subject section, select User.
  7. In the Subject Name field, enter the name of the user.
  8. Select Create to apply the role binding.

1.3.2. Granting user permissions by using the CLI

Cluster administrators can grant users permissions for the openshift-monitoring project or their own projects by using the OpenShift CLI (oc).

Important

Whichever role or cluster role you choose, you must bind it against a specific project as a cluster administrator.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • The user account that you are assigning the role to already exists.
  • You have installed the OpenShift CLI (oc).

Procedure

  • To assign a monitoring role to a user for a project, enter the following command:

    $ oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace>

    where:

    <role>
    Specifies the monitoring role that you want to assign.
    <user>
    Specifies the user to whom you want to assign the role.
    <namespace>
    Specifies the project where you want to grant the access.
  • To assign a monitoring cluster role to a user for a project, enter the following command:

    $ oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace>

    where:

    <cluster-role>
    Specifies the monitoring cluster role that you want to assign.
    <user>
    Specifies the user to whom you want to assign the cluster role.
    <namespace>
    Specifies the project where you want to grant the access.
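For example, the following commands (using a hypothetical user named jdoe) bind the monitoring-alertmanager-view role in the openshift-monitoring project and bind the cluster-monitoring-view cluster role for a project named my-project:

$ oc adm policy add-role-to-user monitoring-alertmanager-view jdoe -n openshift-monitoring --role-namespace openshift-monitoring
$ oc adm policy add-cluster-role-to-user cluster-monitoring-view jdoe -n my-project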

You can configure the monitoring stack to optimize the performance and scale of your clusters. The following documentation provides information about how to distribute the monitoring components and control the impact of the monitoring stack on CPU and memory resources.

2.1. Controlling the placement and distribution of monitoring components

Control the placement and distribution of monitoring components across cluster nodes to optimize system resource use, improve performance, and separate workloads based on specific requirements or policies.

You can move the monitoring stack components to specific nodes with the following methods:

  • Use the nodeSelector constraint with labeled nodes to move any of the monitoring stack components to specific nodes.
  • Assign tolerations to enable moving components to tainted nodes.

2.1.1. Moving monitoring components to different nodes

Move monitoring stack components to specific nodes to optimize performance or meet hardware requirements by configuring nodeSelector constraints in the cluster-monitoring-config config map to match labels assigned to the nodes.

Note

You cannot add a node selector constraint directly to an existing scheduled pod.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components:

    $ oc label nodes <node_name> <node_label>

    where:

    <node_name>
    Specifies the name of the node where you want to add the label.
    <node_label>
    Specifies the label that you want to add to the node.
  2. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  3. Specify the node labels for the nodeSelector constraint for the component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        # ...
        <component>:
          nodeSelector:
            <node_label_1>
            <node_label_2>
        # ...

    where:

    <component>
    Specifies the monitoring stack component.
    <node_label_1>
    Specifies the label you added to the node.
    <node_label_2>
    Optional: Specifies additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels.
    Note

    If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations.

  4. Save the file to apply the changes. The components specified in the new configuration are automatically moved to the new nodes, and the pods affected by the new configuration are redeployed.
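For example, the following configuration schedules the Prometheus pods only on nodes that carry a hypothetical monitoring: prometheus label:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      nodeSelector:
        monitoring: prometheus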

2.1.2. Assigning tolerations to monitoring components

You can assign tolerations to any of the monitoring stack components to enable moving them to tainted nodes.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Specify tolerations for the component:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        <component>:
          tolerations:
            <toleration_specification>

    Substitute <component> and <toleration_specification> accordingly.

    For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1. This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the alertmanagerMain component to tolerate the example taint:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          tolerations:
          - key: "key1"
            operator: "Equal"
            value: "value1"
            effect: "NoSchedule"
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

2.2. Setting a body size limit for metrics scraping

By default, no limit exists for the uncompressed body size of data returned from scraped metrics targets. You can set a body size limit to help avoid situations in which Prometheus consumes excessive amounts of memory when scraped targets return a response that contains a large amount of data.

In addition, by setting a body size limit, you can reduce the impact that a malicious target might have on Prometheus and on the cluster as a whole.

After you set a value for enforcedBodySizeLimit, the alert PrometheusScrapeBodySizeLimitHit fires when at least one Prometheus scrape target replies with a response body larger than the configured value.

Note

If metrics data scraped from a target has an uncompressed body size exceeding the configured size limit, the scrape fails. Prometheus then considers this target to be down and sets its up metric value to 0, which can trigger the TargetDown alert.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add a value for enforcedBodySizeLimit to data/config.yaml/prometheusK8s to limit the body size that can be accepted per target scrape:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |-
        prometheusK8s:
          enforcedBodySizeLimit: 40MB

    where:

    enforcedBodySizeLimit

    Defines the maximum body size for scraped metrics targets. This example limits the uncompressed size per target scrape to 40 megabytes.

    Valid numeric values use the Prometheus data size format: B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), and EB (exabytes).

    The default value is 0, which specifies no limit. You can also set the value to automatic to calculate the limit automatically based on cluster capacity.

  3. Save the file to apply the changes. The new configuration is applied automatically.

2.3. Managing CPU and memory resources for monitoring components

Ensure that the containers that run monitoring components have enough CPU and memory resources by specifying values for resource limits and requests for those components.

Configure these limits and requests for core platform monitoring components in the openshift-monitoring namespace.

2.3.1. Specifying limits and requests

Prevent resource exhaustion and ensure stable monitoring operations by setting appropriate CPU and memory limits for each monitoring component in the cluster-monitoring-config config map in the openshift-monitoring namespace.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the ConfigMap object named cluster-monitoring-config.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add values to define resource limits and requests for each component you want to configure.

    Important

    Ensure that the value set for a limit is always higher than the value set for a request. Otherwise, an error will occur, and the container will not run.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 500Mi
        prometheusK8s:
          resources:
            limits:
              cpu: 500m
              memory: 3Gi
            requests:
              cpu: 200m
              memory: 500Mi
        thanosQuerier:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 500Mi
        prometheusOperator:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 500Mi
        metricsServer:
          resources:
            requests:
              cpu: 10m
              memory: 50Mi
            limits:
              cpu: 50m
              memory: 500Mi
        kubeStateMetrics:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 500Mi
        telemeterClient:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 500Mi
        openshiftStateMetrics:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 500Mi
        nodeExporter:
          resources:
            limits:
              cpu: 50m
              memory: 150Mi
            requests:
              cpu: 20m
              memory: 50Mi
        monitoringPlugin:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 500Mi
        prometheusOperatorAdmissionWebhook:
          resources:
            limits:
              cpu: 50m
              memory: 100Mi
            requests:
              cpu: 20m
              memory: 50Mi
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

2.4. Choosing a metrics collection profile

Choose a metrics collection profile for core OpenShift Container Platform monitoring components to balance monitoring coverage with resource consumption by editing the cluster-monitoring-config config map in the openshift-monitoring project.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have access to the cluster as a user with the cluster-admin cluster role.

Procedure

  1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add the metrics collection profile setting under data/config.yaml/prometheusK8s:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          collectionProfile: <metrics_collection_profile_name>

    where:

    <metrics_collection_profile_name>
    Specifies the name of the metrics collection profile. The available values are full or minimal. If you do not specify a value or if the collectionProfile key name does not exist in the config map, the default setting of full is used.

    The following example sets the metrics collection profile to minimal for the core platform instance of Prometheus:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          collectionProfile: minimal
  3. Save the file to apply the changes. The new configuration is applied automatically.

2.5. Configuring pod topology spread constraints

You can configure pod topology spread constraints for all the pods deployed by the Cluster Monitoring Operator to control how pod replicas are scheduled to nodes across zones.

This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.

You can configure pod topology spread constraints for monitoring pods by using the cluster-monitoring-config config map.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add the following settings under the data/config.yaml field to configure pod topology spread constraints:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        <component>:
          topologySpreadConstraints:
          - maxSkew: <n>
            topologyKey: <key>
            whenUnsatisfiable: <value>
            labelSelector:
              <match_option>

    where:

    <component>
    Specifies a name of the component for which you want to set up pod topology spread constraints.
    <n>
    Specifies a numeric value for maxSkew, which defines the degree to which pods are allowed to be unevenly distributed.
    <key>
    Specifies a key of node labels for topologyKey. Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler tries to put a balanced number of pods into each domain.
    <value>
    Specifies a value for whenUnsatisfiable. Available options are DoNotSchedule and ScheduleAnyway. Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
    <match_option>
    Specifies labelSelector to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.

    Example configuration for Prometheus:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: monitoring
            whenUnsatisfiable: DoNotSchedule
            labelSelector:
              matchLabels:
                app.kubernetes.io/name: prometheus
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

Store and record your metrics and alerting data, configure logs to specify which activities are recorded, control how long Prometheus retains stored data, and set the maximum amount of disk space for the data. These actions help you protect your data and use it for troubleshooting.

3.1. Configuring persistent storage

Learn about persistent storage configuration for monitoring components to properly plan and deploy production-ready monitoring infrastructure.

Run cluster monitoring with persistent storage to gain the following benefits:

  • Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated.
  • Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted.

For production environments, it is highly recommended to configure persistent storage.

Important

In multi-node clusters, you must configure persistent storage for Prometheus and Alertmanager to ensure high availability.

3.1.1. Persistent storage prerequisites

  • Dedicate sufficient persistent storage to ensure that the disk does not become full.
  • Use Filesystem as the storage type value for the volumeMode parameter when you configure the persistent volume.

    Important
    • Do not use a raw block volume, which is described with volumeMode: Block in the PersistentVolume resource. Prometheus cannot use raw block volumes.
    • Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant.
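Before you configure a persistent volume claim, you can list the storage classes that are available in your cluster; for example:

$ oc get storageclass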

3.1.2. Configuring a persistent volume claim

To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC).

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add your PVC configuration for the component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        <component>:
          volumeClaimTemplate:
            spec:
              storageClassName: <storage_class>
              resources:
                requests:
                  storage: <amount_of_storage>

    where:

    <component>
    Specifies the monitoring component for which you want to configure the PVC.
    <storage_class>
    Specifies an existing storage class. If a storage class is not specified, the default storage class is used.
    <amount_of_storage>
    Specifies the amount of required storage.

    The following example configures a PVC that claims persistent storage for Prometheus:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          volumeClaimTemplate:
            spec:
              storageClassName: my-storage-class
              resources:
                requests:
                  storage: 40Gi
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied.

    Warning

    When you update the config map with a PVC configuration, the affected StatefulSet object is recreated, resulting in a temporary service outage.

3.1.3. Resizing a persistent volume

Resize a persistent volume (PV) for monitoring components, such as Prometheus or Alertmanager, to meet your capacity requirements. You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured.

Important

You can only expand the size of the PVC. Shrinking the storage size is not possible.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have configured at least one PVC for core OpenShift Container Platform monitoring components.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes.
  2. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  3. Add a new storage size for the PVC configuration for the component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        <component>:
          volumeClaimTemplate:
            spec:
              resources:
                requests:
                  storage: <amount_of_storage>

    where:

    <component>
    Specifies the component for which you want to change the storage size.
    <amount_of_storage>
    Specifies the new size for the storage volume. It must be greater than the previous value.

    The following example sets the new PVC request to 100 gigabytes for the Prometheus instance:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          volumeClaimTemplate:
            spec:
              resources:
                requests:
                  storage: 100Gi
  4. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

    Warning

    When you update the config map with a new storage size, the affected StatefulSet object is recreated, resulting in a temporary service outage.
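As an illustration of the manual expansion in step 1, the following commands list the monitoring PVCs and patch one of them with a larger storage request. The PVC name prometheus-k8s-db-prometheus-k8s-0 is an assumption based on the default StatefulSet naming, so confirm the actual names in your cluster first:

$ oc -n openshift-monitoring get pvc
$ oc -n openshift-monitoring patch pvc prometheus-k8s-db-prometheus-k8s-0 --type merge --patch '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'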

3.2. Modifying the retention time and size for Prometheus metrics data

Modify the retention time for the Prometheus instance to change when the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. This ensures that you keep the necessary metrics while preventing excessive disk space usage.

By default, Prometheus retains metrics data for 15 days for core platform monitoring.

Note

Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize limit. In such cases, the KubePersistentVolumeFillingUp alert fires until the space on a PV is lower than the retentionSize limit.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add the retention time and size configuration under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          retention: <time_specification>
          retentionSize: <size_specification>

    where:

    <time_specification>
    Specifies the retention time. The number is directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s.
    <size_specification>
    Specifies the retention size. The number is directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), or EB (exabytes).

    The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          retention: 24h
          retentionSize: 10GB
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

3.3. Configuring audit logs for Metrics Server

You can configure audit logs for Metrics Server to help you troubleshoot issues with the server. Audit logs record the sequence of actions in a cluster and can capture user, application, or control plane activities.

You can configure audit log rules to record specific events and a subset of associated data. The following audit profiles define configuration rules:

  • Metadata (default): This profile logs event metadata including user, timestamps, resource, and verb. It does not record request and response bodies.
  • Request: This profile logs event metadata and request body, but it does not record response body. This configuration does not apply to non-resource requests.
  • RequestResponse: This profile logs event metadata, and request and response bodies. This configuration does not apply to non-resource requests.
  • None: None of the previously described events are recorded.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add audit log configuration for Metrics Server under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        metricsServer:
          audit:
            profile: <audit_log_profile>

    where:

    <audit_log_profile>
    Specifies the audit profile for Metrics Server.
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

Verification

  • Verify that the audit profile is applied:

    $ oc -n openshift-monitoring get deploy metrics-server -o yaml | grep -- '--audit-policy-file=*'

    Example output:

            - --audit-policy-file=/etc/audit/request-profile.yaml

3.4. Setting log levels for monitoring components

You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, and Thanos Querier and log verbosity for Metrics Server. You can use these settings for troubleshooting and to gain better insight into how the components are functioning.

The following log levels can be applied to the relevant component in the cluster-monitoring-config ConfigMap object:

  • debug. Log debug, informational, warning, and error messages.
  • info (default). Log informational, warning, and error messages.
  • warn. Log warning and error messages only.
  • error. Log error messages only.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add log configuration for a component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        <component>:
          logLevel: <log_level>
        metricsServer:
          verbosity: <verbosity>
        # ...

    where:

    <component>
    Specifies the monitoring stack component for which you are setting a log level. Available component values are prometheusK8s, alertmanagerMain, prometheusOperator, and thanosQuerier.
    <log_level>
    Specifies the log level for the component. The available values are error, warn, info, and debug. The default value is info.
    <verbosity>
    Specifies the verbosity for Metrics Server. Valid values are positive integers. Increasing the number increases the number of logged events; values over 10 are usually unnecessary. The default value is 0.
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
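For example, the following configuration (a hypothetical combination) sets the debug log level for Prometheus and verbosity 3 for Metrics Server:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      logLevel: debug
    metricsServer:
      verbosity: 3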

Verification

  1. Verify that the log configuration is applied by reviewing the deployment or pod configuration in the related project.

    • The following example checks the log level for the prometheus-operator deployment:

      $ oc -n openshift-monitoring get deploy prometheus-operator -o yaml | grep "log-level"

      Example output:

              - --log-level=debug
    • The following example checks the log verbosity for the metrics-server deployment:

      $ oc -n openshift-monitoring get deploy metrics-server -o yaml | grep -- '--v='

      Example output:

              - --v=3
  2. Verify that the pods for the component are running:

    $ oc -n openshift-monitoring get pods
    Note

    If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully.

3.5. Enabling the query log file for Prometheus

Configure Prometheus to write all queries that have been run by the engine to a log file for detailed analysis.

Important

Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add the queryLogFile parameter for Prometheus under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          queryLogFile: <path>

    where:

    <path>
    Specifies the full path to the file in which queries will be logged.
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
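For example, the following configuration uses /tmp/promql.log as an assumed path inside the Prometheus container:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      queryLogFile: /tmp/promql.log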

Verification

  1. Verify that the pods for the component are running. The following sample command lists the status of pods:

    $ oc -n openshift-monitoring get pods

    Example output:

    ...
    prometheus-operator-567c9bc75c-96wkj   2/2     Running   0          62m
    prometheus-k8s-0                       6/6     Running   1          57m
    prometheus-k8s-1                       6/6     Running   1          57m
    thanos-querier-56c76d7df4-2xkpc        6/6     Running   0          57m
    thanos-querier-56c76d7df4-j5p29        6/6     Running   0          57m
    ...
  2. Read the query log:

    $ oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path>
    Important

    Revert the setting in the config map after you have examined the logged query information.

3.6. Enabling query logging for Thanos Querier

Enable query logging for Thanos Querier to troubleshoot default platform monitoring performance issues.

Important

Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.

Procedure

  1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add a thanosQuerier section under data/config.yaml and add values as shown in the following example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        thanosQuerier:
          enableRequestLogging: <enable_logging>
          logLevel: <log_level>

    where:

    <enable_logging>
    Specifies whether to enable logging. Set it to true to enable and false to disable. The default value is false.
    <log_level>
    Specifies the log level. Supported values are debug, info, warn, and error. If no value exists for logLevel, it defaults to error.
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
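For example, the following configuration enables request logging at the debug level:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    thanosQuerier:
      enableRequestLogging: true
      logLevel: debug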

Verification

  1. Verify that the Thanos Querier pods are running. The following sample command lists the status of pods in the openshift-monitoring project:

    $ oc -n openshift-monitoring get pods
  2. Run a test query using the following sample commands as a model:

    $ token=`oc create token prometheus-k8s -n openshift-monitoring`
    $ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'
  3. Run the following command to read the query log:

    $ oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query
    Note

    Because the thanos-querier pods are highly available (HA) pods, you might be able to see logs in only one pod.

  4. After you examine the logged query information, disable query logging by changing the enableRequestLogging value to false in the config map.

Configure the collection of metrics to monitor how cluster components and your own workloads are performing.

You can send ingested metrics to remote systems for long-term storage and add cluster ID labels to the metrics to identify the data coming from different clusters.

4.1. Configuring remote write storage

Extend metrics retention and centralize monitoring data by sending metrics to external systems, supporting compliance requirements and long-term analytics. Doing so has no impact on how or for how long Prometheus stores metrics.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have installed the OpenShift CLI (oc).
  • You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature.

    Important

    Red Hat only provides information for configuring remote write senders and does not offer guidance on configuring receiver endpoints. Customers are responsible for setting up their own endpoints that are remote-write compatible. Issues with endpoint receiver configurations are not included in Red Hat production support.

  • You have set up authentication credentials in a Secret object for the remote write endpoint. You must create the secret in the openshift-monitoring namespace.

    Warning

    To reduce security risks, use HTTPS and authentication to send metrics to an endpoint.

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add a remoteWrite: section under data/config.yaml/prometheusK8s, as shown in the following example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          remoteWrite:
          - url: "https://remote-write-endpoint.example.com"
            <endpoint_authentication_credentials>

    where:

    url
    Defines the URL of the remote write endpoint.
    <endpoint_authentication_credentials>
    Specifies the authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings for sample configurations of supported authentication methods.
  3. Add write relabel configuration values after the authentication credentials:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          remoteWrite:
          - url: "https://remote-write-endpoint.example.com"
            <endpoint_authentication_credentials>
            writeRelabelConfigs:
            - <your_write_relabel_configs>

    where:

    <your_write_relabel_configs>
    Specifies configuration for metrics that you want to send to the remote endpoint.

    The following example forwards a single metric called my_metric:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          remoteWrite:
          - url: "https://remote-write-endpoint.example.com"
            writeRelabelConfigs:
            - sourceLabels: [__name__]
              regex: 'my_metric'
              action: keep

    The following example forwards the metrics my_metric_1 and my_metric_2 from the my_namespace namespace:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          remoteWrite:
          - url: "https://remote-write-endpoint.example.com"
            writeRelabelConfigs:
            - sourceLabels: [__name__,namespace]
              regex: '(my_metric_1|my_metric_2);my_namespace'
              action: keep
  4. Save the file to apply the changes. The new configuration is applied automatically.

4.1.1. Supported remote write authentication settings

Use different methods to securely authenticate with a remote write endpoint.

The following authentication methods are supported:

  • AWS Signature Version 4
  • Basic authentication
  • Authorization
  • OAuth 2.0
  • TLS client

The following list provides details about the supported authentication methods for remote write:

AWS Signature Version 4 (config map field: sigv4)
  This method uses AWS Signature Version 4 authentication to sign requests. You cannot use this method together with authorization, OAuth 2.0, or Basic authentication.

Basic authentication (config map field: basicAuth)
  Basic authentication sets an authorization header with the configured username and password on each remote write request.

Authorization (config map field: authorization)
  Authorization sets the Authorization header on each remote write request by using the configured token.

OAuth 2.0 (config map field: oauth2)
  An OAuth 2.0 configuration uses the client credentials grant type. Prometheus fetches an access token from tokenUrl with the specified client ID and client secret to access the remote write endpoint. You cannot use this method together with authorization, AWS Signature Version 4, or Basic authentication.

TLS client (config map field: tlsConfig)
  A TLS client configuration specifies the CA certificate, client certificate, and client key file used to authenticate with the remote write endpoint server by using TLS. The sample configuration requires that you have already created a CA certificate file, a client certificate file, and a client key file.

4.1.2. Example remote write authentication settings

Learn about different authentication settings you can use to securely connect to a remote write endpoint.

Each sample shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with default platform monitoring in the openshift-monitoring namespace.

4.1.2.1. Sample YAML for AWS Signature Version 4 authentication

The following shows the settings for a sigv4 secret named sigv4-credentials in the openshift-monitoring namespace:

apiVersion: v1
kind: Secret
metadata:
  name: sigv4-credentials
  namespace: openshift-monitoring
stringData:
  accessKey: <AWS_access_key>
  secretKey: <AWS_secret_key>
type: Opaque

where:

<AWS_access_key>
Specifies the AWS API access key.
<AWS_secret_key>
Specifies the AWS API secret key.

The following shows sample AWS Signature Version 4 remote write authentication settings that use a Secret object named sigv4-credentials in the openshift-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://authorization.example.com/api/write"
        sigv4:
          region: <AWS_region>
          accessKey:
            name: sigv4-credentials
            key: accessKey
          secretKey:
            name: sigv4-credentials
            key: secretKey
          profile: <AWS_profile_name>
          roleArn: <AWS_role_arn>

where:

<AWS_region>
Specifies the AWS region.
accessKey.name, secretKey.name
Define the name of the Secret object containing the AWS API access credentials.
accessKey.key
Defines the key that contains the AWS API access key in the specified Secret object.
secretKey.key
Defines the key that contains the AWS API secret key in the specified Secret object.
<AWS_profile_name>
Specifies the name of the AWS profile that is being used to authenticate.
<AWS_role_arn>
Specifies the unique identifier for the Amazon Resource Name (ARN) assigned to your role.
4.1.2.2. Sample YAML for Basic authentication

The following shows sample Basic authentication settings for a Secret object named rw-basic-auth in the openshift-monitoring namespace:

apiVersion: v1
kind: Secret
metadata:
  name: rw-basic-auth
  namespace: openshift-monitoring
stringData:
  user: <basic_username>
  password: <basic_password>
type: Opaque

where:

<basic_username>
Specifies the username.
<basic_password>
Specifies the password.

The following sample shows a basicAuth remote write configuration that uses a Secret object named rw-basic-auth in the openshift-monitoring namespace. It assumes that you have already set up authentication credentials for the endpoint.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://basicauth.example.com/api/write"
        basicAuth:
          username:
            name: rw-basic-auth
            key: user
          password:
            name: rw-basic-auth
            key: password

where:

username.name, password.name
Define the name of the Secret object that contains the authentication credentials.
username.key
Defines the key that contains the username in the specified Secret object.
password.key
Defines the key that contains the password in the specified Secret object.

4.1.2.3. Sample YAML for authentication with a bearer token

The following shows bearer token settings for a Secret object named rw-bearer-auth in the openshift-monitoring namespace:

apiVersion: v1
kind: Secret
metadata:
  name: rw-bearer-auth
  namespace: openshift-monitoring
stringData:
  token: <authentication_token>
type: Opaque

where:

<authentication_token>
Specifies the authentication token.

The following shows sample bearer token config map settings that use a Secret object named rw-bearer-auth in the openshift-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://authorization.example.com/api/write"
        authorization:
          type: Bearer
          credentials:
            name: rw-bearer-auth
            key: token

where:

authorization.type
Defines the authentication type of the request. The default value is Bearer.
credentials.name
Defines the name of the Secret object that contains the authentication credentials.
credentials.key
Defines the key that contains the authentication token in the specified Secret object.
4.1.2.4. Sample YAML for OAuth 2.0 authentication

The following shows sample OAuth 2.0 settings for a Secret object named oauth2-credentials in the openshift-monitoring namespace:

apiVersion: v1
kind: Secret
metadata:
  name: oauth2-credentials
  namespace: openshift-monitoring
stringData:
  id: <oauth2_id>
  secret: <oauth2_secret>
type: Opaque

where:

<oauth2_id>
Specifies the OAuth 2.0 client ID.
<oauth2_secret>
Specifies the OAuth 2.0 secret.

The following shows an oauth2 remote write authentication sample configuration that uses a Secret object named oauth2-credentials in the openshift-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://test.example.com/api/write"
        oauth2:
          clientId:
            secret:
              name: oauth2-credentials
              key: id
          clientSecret:
            name: oauth2-credentials
            key: secret
          tokenUrl: https://example.com/oauth2/token
          scopes:
          - <scope_1>
          - <scope_2>
          endpointParams:
            param1: <parameter_1>
            param2: <parameter_2>

where:

oauth2.clientId.secret.name, oauth2.clientSecret.name
Define the name of the corresponding Secret object. Note that clientId can alternatively refer to a ConfigMap object, although clientSecret must refer to a Secret object.
oauth2.clientId.secret.key, oauth2.clientSecret.key
Define the key that contains the OAuth 2.0 credentials in the specified Secret object.
tokenUrl
Defines the URL used to fetch a token with the specified clientId and clientSecret.
scopes
Defines the OAuth 2.0 scopes for the authorization request. These scopes limit what data the tokens can access.
endpointParams
Defines the OAuth 2.0 authorization request parameters required for the authorization server.
4.1.2.5. Sample YAML for TLS client authentication

The following shows sample TLS client settings for a tls Secret object named mtls-bundle in the openshift-monitoring namespace.

apiVersion: v1
kind: Secret
metadata:
  name: mtls-bundle
  namespace: openshift-monitoring
data:
  ca.crt: <ca_cert>
  client.crt: <client_cert>
  client.key: <client_key>
type: tls

where:

<ca_cert>
Specifies the CA certificate in the Prometheus container with which to validate the server certificate.
<client_cert>
Specifies the client certificate for authentication with the server.
<client_key>
Specifies the client key.

The following sample shows a tlsConfig remote write authentication configuration that uses a TLS Secret object named mtls-bundle.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        tlsConfig:
          ca:
            secret:
              name: mtls-bundle
              key: ca.crt
          cert:
            secret:
              name: mtls-bundle
              key: client.crt
          keySecret:
            name: mtls-bundle
            key: client.key

where:

ca.secret.name, cert.secret.name, keySecret.name
Define the name of the corresponding Secret object that contains the TLS authentication credentials. Note that ca and cert can alternatively refer to a ConfigMap object, though keySecret must refer to a Secret object.
ca.secret.key
Defines the key in the specified Secret object that contains the CA certificate for the endpoint.
cert.secret.key
Defines the key in the specified Secret object that contains the client certificate for the endpoint.
keySecret.key
Defines the key in the specified Secret object that contains the client key secret.

4.1.3. Example remote write queue configuration

Optimize metrics delivery performance and reliability by tuning remote write queue parameters with the queueConfig object.

The following example shows the queue parameters with their default values for default platform monitoring in the openshift-monitoring namespace.

Example configuration of remote write parameters with default values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        <endpoint_authentication_credentials>
        queueConfig:
          capacity: 10000
          minShards: 1
          maxShards: 50
          maxSamplesPerSend: 2000
          batchSendDeadline: 5s
          minBackoff: 30ms
          maxBackoff: 5s
          retryOnRateLimit: false
          sampleAgeLimit: 0s

where:

capacity
Defines the number of samples to buffer per shard before they are dropped from the queue.
minShards
Defines the minimum number of shards.
maxShards
Defines the maximum number of shards.
maxSamplesPerSend
Defines the maximum number of samples per send.
batchSendDeadline
Defines the maximum time for a sample to wait in buffer.
minBackoff
Defines the initial time to wait before retrying a failed request. The time gets doubled for every retry up to the maxBackoff time.
maxBackoff
Defines the maximum time to wait before retrying a failed request.
retryOnRateLimit
When set to true, retries a request after receiving a 429 status code from the remote write storage.
sampleAgeLimit
Specifies that samples older than the sampleAgeLimit limit are dropped from the queue. If the value is undefined or set to 0s, the parameter is ignored.

4.1.4. Table of remote write metrics

The following list describes remote write and remote write-adjacent metrics. These metrics can help you troubleshoot issues with your remote write configuration.


prometheus_remote_storage_highest_timestamp_in_seconds

Shows the newest timestamp that Prometheus stored in the write-ahead log (WAL) for any sample.

prometheus_remote_storage_queue_highest_sent_timestamp_seconds

Shows the newest timestamp that the remote write queue successfully sent.

prometheus_remote_storage_samples_retried_total

The number of samples that remote write failed to send and had to resend to remote storage. A steady high rate for this metric indicates problems with the network or remote storage endpoint.

prometheus_remote_storage_shards

Shows how many shards are currently running for each remote endpoint.

prometheus_remote_storage_shards_desired

Shows the number of shards that remote write calculates it needs, based on the current write throughput and the rate of incoming versus sent samples.

prometheus_remote_storage_shards_max

Shows the maximum number of shards based on the current configuration.

prometheus_remote_storage_shards_min

Shows the minimum number of shards based on the current configuration.

prometheus_tsdb_wal_segment_current

The WAL segment file that Prometheus is currently writing new data to.

prometheus_wal_watcher_current_segment

The WAL segment file that each remote write instance is currently reading from.
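
For example, you can compare the first two metrics in the table to estimate how far remote write is lagging behind ingestion. The following PromQL expression is a sketch that you can run from the Observe → Metrics page of the web console; the ignoring and group_right modifiers account for the per-endpoint labels on the queue metric:

prometheus_remote_storage_highest_timestamp_in_seconds
- ignoring(remote_name, url) group_right
  prometheus_remote_storage_queue_highest_sent_timestamp_seconds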

4.2. Creating cluster ID labels for metrics

Create cluster ID labels for metrics so that you can uniquely identify and track metrics across clusters and workloads. To do so, add the write_relabel settings for remote write storage to the cluster-monitoring-config config map in the openshift-monitoring namespace.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have installed the OpenShift CLI (oc).
  • You have configured remote write storage.

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. In the writeRelabelConfigs: section under data/config.yaml/prometheusK8s/remoteWrite, add cluster ID relabel configuration values:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          remoteWrite:
          - url: "https://remote-write-endpoint.example.com"
            <endpoint_authentication_credentials>
            writeRelabelConfigs:
              - <relabel_config>

    where:

    writeRelabelConfigs
    Specifies a list of write relabel configurations for metrics that you want to send to the remote endpoint.
    <relabel_config>
    Specifies the label configuration for the metrics sent to the remote write endpoint.

    The following sample shows how to forward a metric with the cluster ID label cluster_id:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          remoteWrite:
          - url: "https://remote-write-endpoint.example.com"
            writeRelabelConfigs:
            - sourceLabels:
              - __tmp_openshift_cluster_id__
              targetLabel: cluster_id
              action: replace

    where:

    __tmp_openshift_cluster_id__
    Specifies the temporary cluster ID source label. This temporary label is replaced by the cluster ID label name that you specify.
    targetLabel
    Specifies the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, its value is overwritten with the cluster ID. For the label name, do not use __tmp_openshift_cluster_id__. The final relabeling step removes labels that use this name.
    action
    Specifies the write relabel action. The replace write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified.
  3. Save the file to apply the changes. The new configuration is applied automatically.
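
If you want to check the cluster ID value that the temporary __tmp_openshift_cluster_id__ label resolves to, you can read it from the ClusterVersion resource. The following command is a minimal sketch:

$ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'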

Configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers so that you receive timely notifications about the state of your cluster. You can also attach custom labels to all time series and alerts to add useful metadata information.

5.1. Configuring external Alertmanager instances

The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. Add external Alertmanager instances to integrate with existing alerting infrastructure or centralize alert management across multiple clusters.

If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add an additionalAlertmanagerConfigs section with configuration details under data/config.yaml/prometheusK8s:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          additionalAlertmanagerConfigs:
          - <alertmanager_specification>

    where:

    <alertmanager_specification>
    Specifies authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (bearerToken) and client TLS (tlsConfig).

    The following sample config map configures an additional Alertmanager for Prometheus by using a bearer token with client TLS authentication:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          additionalAlertmanagerConfigs:
          - scheme: https
            pathPrefix: /
            timeout: "30s"
            apiVersion: v1
            bearerToken:
              name: alertmanager-bearer-token
              key: token
            tlsConfig:
              key:
                name: alertmanager-tls
                key: tls.key
              cert:
                name: alertmanager-tls
                key: tls.crt
              ca:
                name: alertmanager-tls
                key: tls.ca
            staticConfigs:
            - external-alertmanager1-remote.com
            - external-alertmanager1-remote2.com
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
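
The preceding sample references Secret objects named alertmanager-bearer-token and alertmanager-tls. As a minimal sketch, assuming the bearer token and TLS files are available locally and that the secrets exist in the openshift-monitoring namespace, you could create them as follows:

$ oc -n openshift-monitoring create secret generic alertmanager-bearer-token \
    --from-literal=token=<bearer_token>

$ oc -n openshift-monitoring create secret generic alertmanager-tls \
    --from-file=tls.crt=./tls.crt \
    --from-file=tls.key=./tls.key \
    --from-file=tls.ca=./tls.ca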

5.1.1. Disabling the local Alertmanager

If you do not need the local Alertmanager, for example when you use an external Alertmanager instance or when alerts are not required, disable it by configuring the cluster-monitoring-config config map in the openshift-monitoring project.

A local Alertmanager that routes alerts from Prometheus instances is enabled by default in the openshift-monitoring project.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config config map.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add enabled: false for the alertmanagerMain component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          enabled: false
  3. Save the file to apply the changes. The Alertmanager instance is disabled automatically when you apply the change.
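
To confirm that the local Alertmanager was removed, you can check for Alertmanager pods in the openshift-monitoring namespace. The following command is a minimal sketch and should return no alertmanager-main pods after the change is rolled out:

$ oc -n openshift-monitoring get pods | grep alertmanager-main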

5.2. Configuring secrets for Alertmanager

Securely send alerts to authenticated endpoints by configuring Alertmanager secrets, protecting sensitive credentials while maintaining reliable alert delivery to external systems.

The monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, configure Alertmanager to use a secret that contains authentication credentials for the receiver.

For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the Secret object rather than in the ConfigMap object.

Add secrets to Alertmanager configuration to enable secure authentication with external alert receivers by editing the cluster-monitoring-config config map in the openshift-monitoring project.

After you add a secret to the config map, the secret is mounted as a volume at /etc/alertmanager/secrets/<secret_name> within the alertmanager container for the Alertmanager pods.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config config map.
  • You have created the secret to be configured in Alertmanager in the openshift-monitoring project.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add a secrets: section under data/config.yaml/alertmanagerMain with the following configuration:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          secrets:
          - <secret_name_1>
          - <secret_name_2>

    where:

    secrets
    Defines the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object.
    <secret_name_1>
    Specifies the name of the Secret object that contains authentication credentials for the receiver.
    <secret_name_2>
    Optional: Specifies additional secrets. If you add additional secrets, place each one on a new line.

    The following sample config map settings configure Alertmanager to use two Secret objects named test-secret-basic-auth and test-secret-api-token:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          secrets:
          - test-secret-basic-auth
          - test-secret-api-token
  3. Save the file to apply the changes. The new configuration is applied automatically.
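
The preceding sample assumes that the two Secret objects already exist in the openshift-monitoring project, as noted in the prerequisites. The following commands are a minimal sketch of creating them; the key names user.password and api-token are illustrative and must match the file paths that your Alertmanager receiver configuration expects under /etc/alertmanager/secrets/<secret_name>:

$ oc -n openshift-monitoring create secret generic test-secret-basic-auth \
    --from-file=user.password=./password.txt

$ oc -n openshift-monitoring create secret generic test-secret-api-token \
    --from-file=api-token=./api-token.txt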

5.3. Attaching additional labels to your time series and alerts

Attach custom labels to all time series and alerts leaving Prometheus by using Prometheus external labels to organize and identify metrics by environment, region, or other categories.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Define labels you want to add for every metric under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          externalLabels:
            <key>: <value>

    where:

    <key>: <value>
    Specifies key-value pairs where <key> is a unique name for the new label and <value> is its value.
    Warning
    • Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten.
    • Do not use cluster as a key name, because using it can prevent you from seeing data in the developer dashboards.

    For example, to add metadata about the region and environment to all time series and alerts, use the following example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          externalLabels:
            region: eu
            environment: prod
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

5.4. Configuring alert notifications

In OpenShift Container Platform 4.21, you can view firing alerts in the Alerting UI. You can configure Alertmanager to send notifications about default platform alerts by configuring alert receivers.

Important

Alertmanager does not send notifications by default. It is strongly recommended that you configure Alertmanager to send notifications by configuring alert receivers through the web console or through the alertmanager-main secret.

Configure Alertmanager to send notifications so that you are informed about important alerts coming from the cluster. Customize where and how Alertmanager sends notifications about default platform alerts by editing the default configuration in the alertmanager-main secret in the openshift-monitoring project.

Note

All features of a supported version of upstream Alertmanager are also supported in an OpenShift Container Platform Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation).

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Extract the currently active Alertmanager configuration from the alertmanager-main secret and save it as a local alertmanager.yaml file:

    $ oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml
  2. Open the alertmanager.yaml file.
  3. Edit the Alertmanager configuration:

    1. Optional: Change the default Alertmanager configuration.

      Example of the default Alertmanager secret YAML:

      global:
        resolve_timeout: 5m
        http_config:
          proxy_from_environment: true
      route:
        group_wait: 30s
        group_interval: 5m
        repeat_interval: 12h
        receiver: default
        routes:
        - matchers:
          - "alertname=Watchdog"
          repeat_interval: 2m
          receiver: watchdog
      receivers:
      - name: default
      - name: watchdog

      where:

      proxy_from_environment
      Specifies whether to enable proxying for all alert receivers. If you configured an HTTP cluster-wide proxy, set this parameter to true to enable proxying for all alert receivers.
      group_wait
      Specifies how long Alertmanager waits while collecting initial alerts for a group of alerts before sending a notification.
      group_interval
      Specifies how much time must elapse before Alertmanager sends a notification about new alerts added to a group of alerts for which an initial notification was already sent.
      repeat_interval
      Specifies the minimum amount of time that must pass before an alert notification is repeated. If you want a notification to repeat at each group interval, set the repeat_interval value to less than the group_interval value. The repeated notification can still be delayed, for example, when certain Alertmanager pods are restarted or rescheduled.
    2. Add your alert receiver configuration:

      # ...
      receivers:
      - name: default
      - name: watchdog
      - name: <receiver>
        <receiver_configuration>
      # ...

      where:

      <receiver>
      Specifies the name of the receiver.
      <receiver_configuration>
      Specifies the receiver configuration. The supported receivers are PagerDuty, webhook, email, Slack, and Microsoft Teams.

      Example of configuring PagerDuty as an alert receiver:

      # ...
      receivers:
      - name: default
      - name: watchdog
      - name: team-frontend-page
        pagerduty_configs:
        - routing_key: xxxxxxxxxx
          http_config:
            proxy_from_environment: true
            authorization:
              credentials: xxxxxxxxxx
      # ...

      where:

      routing_key
      Specifies the PagerDuty integration key.
      http_config
      Optional: Adds a custom HTTP configuration for a specific receiver. That receiver does not inherit the global HTTP configuration settings.

      Example of configuring email as an alert receiver:

      # ...
      receivers:
      - name: default
      - name: watchdog
      - name: team-frontend-page
        email_configs:
          - to: myemail@example.com
            from: alertmanager@example.com
            smarthost: 'smtp.example.com:587'
            auth_username: alertmanager@example.com
            auth_password: password
            hello: alertmanager
      # ...

      where:

      to
      Specifies an email address to send notifications to.
      from
      Specifies an email address to send notifications from.
      smarthost
      Specifies the SMTP server address used for sending emails, including the port number.
      auth_username, auth_password
      Specify the authentication credentials that Alertmanager uses to connect to the SMTP server. This example uses username and password.
      hello
      Specifies the hostname to identify to the SMTP server. If you do not include this parameter, the hostname defaults to localhost.
      Important

      Alertmanager requires an external SMTP server to send email alerts. To configure email alert receivers, ensure you have the necessary connection details for an external SMTP server.

      Warning

      Alertmanager requires an explicit TLS connection using STARTTLS. This results in the following behavior:

      • Connections that transmit data without encryption to remote SMTP servers (unencrypted SMTP) are not supported.
      • Implicit TLS connections are not supported, for example using smtps:// on port 465.
      • Only explicit TLS connections using STARTTLS are supported, for example using smtp:// on port 587 with STARTTLS enabled.
    3. Add the routing configuration:

      # ...
      route:
        group_wait: 30s
        group_interval: 5m
        repeat_interval: 12h
        receiver: default
        routes:
        - matchers:
          - "alertname=Watchdog"
          repeat_interval: 2m
          receiver: watchdog
        - matchers:
          - "<your_matching_rules>"
          receiver: <receiver>
      # ...

      where:

      matchers
      Defines the matching rules that an alert must fulfill to match this node of the routing tree. If you define inhibition rules, use the target_matchers key name for target matchers and the source_matchers key name for source matchers.
      <your_matching_rules>
      Specifies labels to match your alerts.
      <receiver>
      Specifies the name of the receiver to use for the alerts.
      Warning

      Do not use the match, match_re, target_match, target_match_re, source_match, and source_match_re key names, which are deprecated and planned for removal in a future release.

      Example of alert routing:

      # ...
      route:
        group_wait: 30s
        group_interval: 5m
        repeat_interval: 12h
        receiver: default
        routes:
        # ...
        - matchers:
          - "service=example-app"
          routes:
          - matchers:
            - "severity=critical"
            receiver: team-frontend-page
      # ...
      • "service=example-app" matching rule matches alerts from the example-app service.
      • You can specify routes within other routes for more complex alert routing. In this example, route.routes.matchers.routes matches alerts of critical severity that are fired by the example-app service to the team-frontend-page receiver. Typically, these types of alerts are paged to an individual or a critical response team.
  4. Apply the new configuration in the file:

    $ oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace -f -

Verification

  • Verify your routing configuration by visualizing the routing tree:

    $ oc exec alertmanager-main-0 -n openshift-monitoring -- amtool config routes show --alertmanager.url http://localhost:9093

    Example output:

    Routing tree:
    .
    └── default-route  receiver: default
        ├── {alertname="Watchdog"}  receiver: watchdog
        └── {service="example-app"}  receiver: default
            └── {severity="critical"}  receiver: team-frontend-page
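
If you have the amtool CLI installed locally, you can also validate the syntax of an edited alertmanager.yaml file before you apply it. The following command is an optional sketch:

$ amtool check-config alertmanager.yaml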

You can configure alert routing through the OpenShift Container Platform web console to ensure that you learn about important issues with your cluster.

Note

The OpenShift Container Platform web console provides fewer settings to configure alert routing than the alertmanager-main secret. To configure alert routing with access to more configuration settings, see "Configuring alert routing for default platform alerts".

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.

Procedure

  1. In the OpenShift Container Platform web console, go to Administration → Cluster Settings → Configuration → Alertmanager.

    Note

    Alternatively, you can go to the same page through the notification drawer. Select the bell icon at the top right of the OpenShift Container Platform web console and choose Configure in the AlertmanagerReceiverNotConfigured alert.

  2. Click Create Receiver in the Receivers section of the page.
  3. In the Create Receiver form, add a Receiver name and choose a Receiver type from the list.
  4. Edit the receiver configuration:

    • For PagerDuty receivers:

      1. Choose an integration type and add a PagerDuty integration key.
      2. Add the URL of your PagerDuty installation.
      3. Click Show advanced configuration if you want to edit the client and incident details or the severity specification.
    • For webhook receivers:

      1. Add the endpoint to send HTTP POST requests to.
      2. Click Show advanced configuration if you want to edit the default option to send resolved alerts to the receiver.
    • For email receivers:

      1. Add the email address to send notifications to.
      2. Add SMTP configuration details, including the address to send notifications from, the smarthost and port number used for sending emails, the hostname of the SMTP server, and authentication details.

        Important

        Alertmanager requires an external SMTP server to send email alerts. To configure email alert receivers, ensure you have the necessary connection details for an external SMTP server.

        Warning

        Alertmanager requires an explicit TLS connection using STARTTLS. This results in the following behavior:

        • Connections that transmit data without encryption to remote SMTP servers (unencrypted SMTP) are not supported.
        • Implicit TLS connections are not supported, for example using smtps:// on port 465.
        • Only explicit TLS connections using STARTTLS are supported, for example using smtp:// on port 587 with STARTTLS enabled.
      3. Select whether TLS is required.
      4. Click Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the body of email notifications configuration.
    • For Slack receivers:

      1. Add the URL of the Slack webhook.
      2. Add the Slack channel or user name to send notifications to.
      3. Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the icon and username configuration. You can also choose whether to find and link channel names and usernames.
  5. By default, firing alerts with labels that match all of the selectors are sent to the receiver. If you want label values for firing alerts to be matched exactly before they are sent to the receiver, perform the following steps:

    1. Add routing label names and values in the Routing labels section of the form.
    2. Click Add label to add further routing labels.
  6. Click Create to create the receiver.

Configure different alert receivers for platform and user-defined alerts to route notifications to the appropriate teams and reduce notification fatigue.

This configuration ensures the following results:

  • All default platform alerts are sent to a receiver owned by the team in charge of these alerts.
  • All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts.

You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts:

  • Use the openshift_io_alert_source="platform" matcher to match default platform alerts.
  • Use the openshift_io_alert_source!="platform" or openshift_io_alert_source="" matcher to match user-defined alerts.
Note

This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts.
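
The following snippet is a minimal sketch of how these matchers might appear in the route section of the alertmanager-main secret, assuming hypothetical receivers named platform-team and app-teams have already been defined:

# ...
route:
  receiver: default
  routes:
  - matchers:
    - "openshift_io_alert_source=platform"
    receiver: platform-team
  - matchers:
    - "openshift_io_alert_source!=platform"
    receiver: app-teams
# ...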

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution–Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.