Chapter 5. Configuring alerts and notifications for user workload monitoring


You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata.

5.1. Configuring external Alertmanager instances

The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. Add external Alertmanager instances to integrate with existing alerting infrastructure or centralize alert management across multiple clusters.

If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add an additionalAlertmanagerConfigs section with configuration details under data/config.yaml/<component>:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        <component>:
          additionalAlertmanagerConfigs:
          - <alertmanager_specification>

    where:

    <component>
    Specifies one of two supported external Alertmanager components: prometheus or thanosRuler.
    <alertmanager_specification>
    Specifies authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (bearerToken) and client TLS (tlsConfig).

    The following sample config map configures an additional Alertmanager for Thanos Ruler by using a bearer token with client TLS authentication:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        thanosRuler:
          additionalAlertmanagerConfigs:
          - scheme: https
            pathPrefix: /
            timeout: "30s"
            apiVersion: v1
            bearerToken:
              name: alertmanager-bearer-token
              key: token
            tlsConfig:
              key:
                name: alertmanager-tls
                key: tls.key
              cert:
                name: alertmanager-tls
                key: tls.crt
              ca:
                name: alertmanager-tls
                key: tls.ca
            staticConfigs:
            - external-alertmanager1-remote.com
            - external-alertmanager1-remote2.com
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

5.2. Configuring secrets for Alertmanager

Configure Alertmanager secrets to send alerts securely to authenticated endpoints, protecting sensitive credentials while maintaining reliable alert delivery to external systems.

The monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, configure Alertmanager to use a secret that contains authentication credentials for the receiver.

For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the Secret object rather than in the ConfigMap object.
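For example, you can create a secret that contains a password for Basic HTTP authentication by using the oc CLI. The secret name matches the sample used later in this section, and the password value is a placeholder:

    $ oc -n openshift-user-workload-monitoring create secret generic test-secret-basic-auth --from-literal=password=<password>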

5.2.1. Adding a secret to the Alertmanager configuration

Add secrets to Alertmanager configuration to enable secure authentication with external alert receivers by editing the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project.

After you add a secret to the config map, the secret is mounted as a volume at /etc/alertmanager/secrets/<secret_name> within the alertmanager container for the Alertmanager pods.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have created the secret to be configured in Alertmanager in the openshift-user-workload-monitoring project.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add a secrets: section under data/config.yaml/alertmanager with the following configuration:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        alertmanager:
          secrets:
          - <secret_name_1>
          - <secret_name_2>

    where:

    secrets
    Defines the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object.
    <secret_name_1>
    Specifies the name of the Secret object that contains authentication credentials for the receiver.
    <secret_name_2>
    Optional: Specifies additional secrets. If you add additional secrets, place each one on a new line.

    The following sample config map settings configure Alertmanager to use two Secret objects named test-secret-basic-auth and test-secret-api-token:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        alertmanager:
          secrets:
          - test-secret-basic-auth
          - test-secret-api-token
  3. Save the file to apply the changes. The new configuration is applied automatically.
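After the secrets are mounted, a receiver in the Alertmanager configuration can reference the credential files by their mount paths. For example, a webhook receiver that uses Basic HTTP authentication might reference a password file from the test-secret-basic-auth secret. The receiver name, URL, and username in this sketch are illustrative:

    receivers:
    - name: authenticated-webhook
      webhook_configs:
      - url: https://example.org/post
        http_config:
          basic_auth:
            username: admin
            password_file: /etc/alertmanager/secrets/test-secret-basic-auth/password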

5.3. Attaching additional labels to your time series and alerts

Attach custom labels to all time series and alerts leaving Prometheus by using Prometheus external labels to organize and identify metrics by environment, region, or other categories.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Define labels you want to add for every metric under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          externalLabels:
            <key>: <value>

    where:

    <key>: <value>
    Specifies key-value pairs where <key> is a unique name for the new label and <value> is its value.
    Warning
    • Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten.
    • Do not use cluster as a key name. Using it can cause issues where you are unable to see data in the developer dashboards.
    Note

    In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object will only configure external labels for metrics and not for any rules.

    For example, to add metadata about the region and environment to all time series and alerts, use the following example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          externalLabels:
            region: eu
            environment: prod
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
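Because external labels are attached to all alerts leaving Prometheus, you can also match them in Alertmanager routes. For example, a route that targets alerts carrying the environment label from the sample above might look like the following; the receiver name is illustrative:

    route:
      routes:
      - matchers:
        - "environment = prod"
        receiver: prod-team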

5.4. Configuring alert notifications

Configure alert notifications for user-defined projects to ensure developers and teams receive timely notifications when their applications or services experience issues.

In OpenShift Container Platform, an administrator can enable alert routing for user-defined projects with one of the following methods:

  • Use the default platform Alertmanager instance.
  • Use a separate Alertmanager instance only for user-defined projects.

Developers and other users with the alert-routing-edit cluster role can configure custom alert notifications for their user-defined projects by configuring alert receivers.

Note

Review the following limitations of alert routing for user-defined projects:

  • User-defined alert routing is scoped to the namespace in which the resource is defined. For example, a routing configuration in namespace ns1 only applies to PrometheusRules resources in the same namespace.
  • When a namespace is excluded from user-defined monitoring, AlertmanagerConfig resources in the namespace cease to be part of the Alertmanager configuration.

5.4.1. Configuring alert routing for user-defined projects

If you are a non-administrator user with the alert-routing-edit cluster role, create or edit alert routing for user-defined projects to ensure alerts from your applications reach the appropriate notification systems and team members.

Prerequisites

  • A cluster administrator has enabled monitoring for user-defined projects.
  • A cluster administrator has enabled alert routing for user-defined projects.
  • You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml.
  2. Add an AlertmanagerConfig YAML definition to the file. For example:

    apiVersion: monitoring.coreos.com/v1beta1
    kind: AlertmanagerConfig
    metadata:
      name: example-routing
      namespace: ns1
    spec:
      route:
        receiver: default
        groupBy: [job]
      receivers:
      - name: default
        webhookConfigs:
        - url: https://example.org/post
  3. Save the file.
  4. Apply the resource to the cluster:

    $ oc apply -f example-app-alert-routing.yaml

    The configuration is automatically applied to the Alertmanager pods.
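You can confirm that the resource exists in the project, for example:

    $ oc -n ns1 get alertmanagerconfig example-routing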

5.4.2. Configuring alert routing for user-defined projects with the Alertmanager secret

If you have enabled a separate instance of Alertmanager that is dedicated to user-defined alert routing, you can customize where and how the instance sends notifications by editing the alertmanager-user-workload secret in the openshift-user-workload-monitoring namespace.

Note

All features of a supported version of upstream Alertmanager are also supported in an OpenShift Container Platform Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation).

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have enabled a separate instance of Alertmanager for user-defined alert routing.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Print the currently active Alertmanager configuration into the file alertmanager.yaml:

    $ oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml
  2. Edit the configuration in alertmanager.yaml:

    global:
      http_config:
        proxy_from_environment: true
    route:
      receiver: Default
      group_by:
      - job
      routes:
      - matchers:
        - "service = prometheus-example-monitor"
        receiver: <receiver>
    receivers:
    - name: Default
    - name: <receiver>
      <receiver_configuration>

    where:

    proxy_from_environment
    Specifies whether to enable proxying for all alert receivers. If you configured an HTTP cluster-wide proxy, set this parameter to true to enable proxying for all alert receivers.
    matchers
    Specifies labels to match your alerts. This example targets all alerts that have the service="prometheus-example-monitor" label.
    <receiver>
    Specifies the name of the receiver to use for the alerts group.
    <receiver_configuration>
    Specifies the receiver configuration.
  3. Apply the new configuration in the file:

    $ oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=-

5.4.3. Configuring different alert receivers for default platform alerts and user-defined alerts

Configure different alert receivers for platform and user-defined alerts to route notifications to the appropriate teams and reduce notification fatigue.

This configuration ensures the following results:

  • All default platform alerts are sent to a receiver owned by the team in charge of these alerts.
  • All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts.

You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts:

  • Use the openshift_io_alert_source="platform" matcher to match default platform alerts.
  • Use the openshift_io_alert_source!="platform" or openshift_io_alert_source="" matcher to match user-defined alerts.
Note

This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts.
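For example, the following alertmanager.yaml routing fragment sends default platform alerts to one receiver and all other alerts to another. The receiver names are illustrative, and each receiver would still need an endpoint configuration such as a webhook or email integration:

    route:
      receiver: user-defined-alerts
      routes:
      - matchers:
        - "openshift_io_alert_source = platform"
        receiver: platform-alerts
    receivers:
    - name: platform-alerts
    - name: user-defined-alerts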
