Chapter 3. Configuring external Alertmanager instances

The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. You can add external Alertmanager instances by configuring the cluster-monitoring-config config map in the openshift-monitoring project or the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project.

If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance.
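
Disabling the local instance is a separate step from adding external ones. As a minimal sketch (alertmanagerMain/enabled is the standard switch for the local Alertmanager), the following cluster-monitoring-config entry disables it; combine it with the additionalAlertmanagerConfigs settings shown in the procedure below:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          enabled: false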

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • If you are configuring core OpenShift Container Platform monitoring components in the openshift-monitoring project:

    • You have access to the cluster as a user with the cluster-admin cluster role.
    • You have created the cluster-monitoring-config config map.
  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
    • A cluster administrator has enabled monitoring for user-defined projects (a minimal sketch of that enablement follows this list).
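
If user-defined monitoring is not yet enabled, a cluster administrator typically turns it on through the same cluster-monitoring-config config map. A minimal sketch of that enablement (enableUserWorkload is the standard option):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        enableUserWorkload: true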

Procedure

  1. Edit the ConfigMap object:

    • To configure additional Alertmanagers for routing alerts from core OpenShift Container Platform projects:

      1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Add an additionalAlertmanagerConfigs: section under data/config.yaml/prometheusK8s.
      3. Add the configuration details for additional Alertmanagers in this section:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              additionalAlertmanagerConfigs:
              - <alertmanager_specification>

        For <alertmanager_specification>, substitute authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (bearerToken) and client TLS (tlsConfig). The following sample config map configures an additional Alertmanager by using a bearer token and client TLS authentication (a minimal sketch of creating the secrets that these settings reference follows this procedure):

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              additionalAlertmanagerConfigs:
              - scheme: https
                pathPrefix: /
                timeout: "30s"
                apiVersion: v1
                bearerToken:
                  name: alertmanager-bearer-token
                  key: token
                tlsConfig:
                  key:
                    name: alertmanager-tls
                    key: tls.key
                  cert:
                    name: alertmanager-tls
                    key: tls.crt
                  ca:
                    name: alertmanager-tls
                    key: tls.ca
                staticConfigs:
                - external-alertmanager1-remote.com
                - external-alertmanager1-remote2.com
    • To configure additional Alertmanager instances for routing alerts from user-defined projects:

      1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Add a <component>/additionalAlertmanagerConfigs: section under data/config.yaml.
      3. Add the configuration details for additional Alertmanagers in this section:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            <component>:
              additionalAlertmanagerConfigs:
              - <alertmanager_specification>

        For <component>, substitute one of the two components that support external Alertmanager configuration: prometheus or thanosRuler.

        For <alertmanager_specification>, substitute authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (bearerToken) and client TLS (tlsConfig). The following sample config map configures an additional Alertmanager for Thanos Ruler by using a bearer token and client TLS authentication:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            thanosRuler:
              additionalAlertmanagerConfigs:
              - scheme: https
                pathPrefix: /
                timeout: "30s"
                apiVersion: v1
                bearerToken:
                  name: alertmanager-bearer-token
                  key: token
                tlsConfig:
                  key:
                    name: alertmanager-tls
                    key: tls.key
                  cert:
                    name: alertmanager-tls
                    key: tls.crt
                  ca:
                    name: alertmanager-tls
                    key: tls.ca
                staticConfigs:
                - external-alertmanager1-remote.com
                - external-alertmanager1-remote2.com
  2. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
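
In both variants of the procedure, bearerToken and tlsConfig reference Kubernetes secrets by name and key, and those secrets must already exist in the project of the component that uses them (openshift-monitoring for prometheusK8s; openshift-user-workload-monitoring for prometheus or thanosRuler). The following is a minimal sketch of creating them with oc, assuming the secret names and keys used in the samples above and local credential files named client.key, client.crt, and ca.crt:

    $ oc -n openshift-monitoring create secret generic alertmanager-bearer-token \
        --from-literal=token=<bearer_token>

    $ oc -n openshift-monitoring create secret generic alertmanager-tls \
        --from-file=tls.key=client.key \
        --from-file=tls.crt=client.crt \
        --from-file=tls.ca=ca.crt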

3.1. Attaching additional labels to your time series and alerts

You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus.
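
Under the hood, these settings map to the standard external_labels block of the Prometheus configuration, which the monitoring stack generates for you. For orientation only, a sketch of the equivalent raw Prometheus configuration:

    global:
      external_labels:
        region: eu
        environment: prod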

Prerequisites

  • If you are configuring core OpenShift Container Platform monitoring components:

    • You have access to the cluster as a user with the cluster-admin cluster role.
    • You have created the cluster-monitoring-config ConfigMap object.
  • If you are configuring components that monitor user-defined projects:

    • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
    • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the ConfigMap object:

    • To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors core OpenShift Container Platform projects:

      1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

        $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      2. Define a map of labels you want to add for every metric under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              externalLabels:
                <key>: <value>

        Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value.
        Warning
        • Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten.
        • Do not use cluster or managed_cluster as key names. Using them can prevent data from being displayed in the developer dashboards.

        For example, to add metadata about the region and environment to all time series and alerts, use the following example:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cluster-monitoring-config
          namespace: openshift-monitoring
        data:
          config.yaml: |
            prometheusK8s:
              externalLabels:
                region: eu
                environment: prod
      3. Save the file to apply the changes. The new configuration is applied automatically.
    • To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors user-defined projects:

      1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

        $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      2. Define a map of labels you want to add for every metric under data/config.yaml:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus:
              externalLabels:
                <key>: <value>

        Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value.
        Warning
        • Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten.
        • Do not use cluster or managed_cluster as key names. Using them can prevent data from being displayed in the developer dashboards.
        Note

        In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object configures external labels only for metrics, not for any rules.

        For example, to add metadata about the region and environment to all time series and alerts related to user-defined projects, use the following example:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: user-workload-monitoring-config
          namespace: openshift-user-workload-monitoring
        data:
          config.yaml: |
            prometheus:
              externalLabels:
                region: eu
                environment: prod
      3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
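
After the configuration is applied, every time series and alert leaving the Prometheus instance carries the configured labels in addition to its own. As an illustration (the alert name and severity here are hypothetical), an alert arriving at Alertmanager from this cluster would carry a label set such as:

    labels:
      alertname: ExampleHighLatency   # hypothetical alert
      severity: warning
      region: eu                      # added by externalLabels
      environment: prod               # added by externalLabels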
