Chapter 3. Configuring external alertmanager instances
The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. You can add external Alertmanager instances by configuring the cluster-monitoring-config config map in the openshift-monitoring project or the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project.

If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance.
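As a minimal sketch of the multi-cluster setup described above, each cluster's cluster-monitoring-config config map disables the local instance and points at the shared external one. The enabled: false switch under alertmanagerMain is the documented way to disable the local Alertmanager; the hostname below is a placeholder, not a value from this chapter:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      enabled: false          # turn off the local Alertmanager instance
    prometheusK8s:
      additionalAlertmanagerConfigs:
      - scheme: https
        staticConfigs:
        - external-alertmanager.example.com   # placeholder shared instance
```

Applying the same fragment to every cluster routes all platform alerts through the single external instance.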
Prerequisites
- You have installed the OpenShift CLI (oc).
- If you are configuring core OpenShift Container Platform monitoring components in the openshift-monitoring project:
  - You have access to the cluster as a user with the cluster-admin cluster role.
  - You have created the cluster-monitoring-config config map.
- If you are configuring components that monitor user-defined projects:
  - You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  - A cluster administrator has enabled monitoring for user-defined projects.
Procedure
Edit the ConfigMap object.

To configure additional Alertmanagers for routing alerts from core OpenShift Container Platform projects:

1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

   ```shell
   $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
   ```

2. Add an additionalAlertmanagerConfigs: section under data/config.yaml/prometheusK8s.

3. Add the configuration details for additional Alertmanagers in this section:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: cluster-monitoring-config
     namespace: openshift-monitoring
   data:
     config.yaml: |
       prometheusK8s:
         additionalAlertmanagerConfigs:
         - <alertmanager_specification>
   ```

   For <alertmanager_specification>, substitute authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (bearerToken) and client TLS (tlsConfig). The following sample config map configures an additional Alertmanager by using a bearer token with client TLS authentication:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: cluster-monitoring-config
     namespace: openshift-monitoring
   data:
     config.yaml: |
       prometheusK8s:
         additionalAlertmanagerConfigs:
         - scheme: https
           pathPrefix: /
           timeout: "30s"
           apiVersion: v1
           bearerToken:
             name: alertmanager-bearer-token
             key: token
           tlsConfig:
             key:
               name: alertmanager-tls
               key: tls.key
             cert:
               name: alertmanager-tls
               key: tls.crt
             ca:
               name: alertmanager-tls
               key: tls.ca
           staticConfigs:
           - external-alertmanager1-remote.com
           - external-alertmanager1-remote2.com
   ```
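As an illustrative aid for the entry shape shown above, the following standalone Python sketch models one additionalAlertmanagerConfigs entry as a dictionary and checks the fields the sample uses. The field names mirror the sample config map; the validation rules themselves are this sketch's own assumptions, not an official schema:

```python
# Minimal structural check for one additionalAlertmanagerConfigs entry.
# Field names mirror the sample config map above; the rules below are
# illustrative assumptions, not an official schema.

def validate_alertmanager_spec(spec: dict) -> list[str]:
    """Return a list of problems found in one Alertmanager entry."""
    problems = []
    # Every entry needs at least one static target to send alerts to.
    if not spec.get("staticConfigs"):
        problems.append("staticConfigs must list at least one host")
    # bearerToken and each tlsConfig sub-key reference a Secret name/key pair.
    refs = [("bearerToken", spec.get("bearerToken"))]
    refs += [(f"tlsConfig.{k}", v) for k, v in spec.get("tlsConfig", {}).items()]
    for ref_path, ref in refs:
        if ref is not None and not {"name", "key"} <= set(ref):
            problems.append(f"{ref_path} needs 'name' and 'key'")
    return problems

# The sample entry from the config map above, as a plain dictionary.
sample = {
    "scheme": "https",
    "pathPrefix": "/",
    "timeout": "30s",
    "apiVersion": "v1",
    "bearerToken": {"name": "alertmanager-bearer-token", "key": "token"},
    "tlsConfig": {
        "key": {"name": "alertmanager-tls", "key": "tls.key"},
        "cert": {"name": "alertmanager-tls", "key": "tls.crt"},
        "ca": {"name": "alertmanager-tls", "key": "tls.ca"},
    },
    "staticConfigs": ["external-alertmanager1-remote.com"],
}

print(validate_alertmanager_spec(sample))  # []
```

An empty result means the entry carries every field the sample relies on; a real cluster additionally requires the referenced Secrets to exist in the monitoring project.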
To configure additional Alertmanager instances for routing alerts from user-defined projects:
1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

   ```shell
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

2. Add a <component>/additionalAlertmanagerConfigs: section under data/config.yaml/.

3. Add the configuration details for additional Alertmanagers in this section:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       <component>:
         additionalAlertmanagerConfigs:
         - <alertmanager_specification>
   ```

   For <component>, substitute one of two supported external Alertmanager components: prometheus or thanosRuler.

   For <alertmanager_specification>, substitute authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (bearerToken) and client TLS (tlsConfig). The following sample config map configures an additional Alertmanager by using Thanos Ruler with a bearer token and client TLS authentication:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       thanosRuler:
         additionalAlertmanagerConfigs:
         - scheme: https
           pathPrefix: /
           timeout: "30s"
           apiVersion: v1
           bearerToken:
             name: alertmanager-bearer-token
             key: token
           tlsConfig:
             key:
               name: alertmanager-tls
               key: tls.key
             cert:
               name: alertmanager-tls
               key: tls.crt
             ca:
               name: alertmanager-tls
               key: tls.ca
           staticConfigs:
           - external-alertmanager1-remote.com
           - external-alertmanager1-remote2.com
   ```
- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
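Because <component> in the steps above can be either prometheus or thanosRuler, a cluster that should forward alerts from both evaluation paths would carry the section twice. The following is a hedged sketch under that assumption; the hostname is a placeholder:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:                          # alerts evaluated by Prometheus
      additionalAlertmanagerConfigs:
      - scheme: https
        staticConfigs:
        - external-alertmanager.example.com   # placeholder host
    thanosRuler:                         # alerts evaluated by Thanos Ruler
      additionalAlertmanagerConfigs:
      - scheme: https
        staticConfigs:
        - external-alertmanager.example.com   # placeholder host
```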
3.1. Attaching additional labels to your time series and alerts
You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus.
Prerequisites
- You have installed the OpenShift CLI (oc).
- If you are configuring core OpenShift Container Platform monitoring components:
  - You have access to the cluster as a user with the cluster-admin cluster role.
  - You have created the cluster-monitoring-config ConfigMap object.
- If you are configuring components that monitor user-defined projects:
  - You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  - A cluster administrator has enabled monitoring for user-defined projects.
Procedure
Edit the ConfigMap object:

To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors core OpenShift Container Platform projects:

1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

   ```shell
   $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
   ```

2. Define a map of labels you want to add for every metric under data/config.yaml:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: cluster-monitoring-config
     namespace: openshift-monitoring
   data:
     config.yaml: |
       prometheusK8s:
         externalLabels:
           <key>: <value>
   ```

   Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value.

   Warning:

   - Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten.
   - Do not use cluster or managed_cluster as key names. Using them can cause issues where you are unable to see data in the developer dashboards.

   For example, to add metadata about the region and environment to all time series and alerts, use the following example:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: cluster-monitoring-config
     namespace: openshift-monitoring
   data:
     config.yaml: |
       prometheusK8s:
         externalLabels:
           region: eu
           environment: prod
   ```

3. Save the file to apply the changes. The new configuration is applied automatically.
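The effect of externalLabels can be sketched outside the cluster: Prometheus attaches the configured key-value pairs to every outgoing series and alert, and the warning above exists because some key names are reserved or interfere with dashboards. The helper below is purely illustrative (not a Prometheus API); apply_external_labels mimics the merge, with the alert's own labels taking precedence on a clash:

```python
# Illustrative sketch (not a Prometheus API): how external labels are merged
# onto an alert's own labels, and which key names the docs warn against.

RESERVED_KEYS = {"prometheus", "prometheus_replica", "cluster", "managed_cluster"}

def check_external_labels(external: dict) -> list[str]:
    """Return the configured keys that the warning above says not to use."""
    return sorted(k for k in external if k in RESERVED_KEYS)

def apply_external_labels(alert_labels: dict, external: dict) -> dict:
    """Merge external labels in; labels already on the alert win on a clash."""
    return {**external, **alert_labels}

external = {"region": "eu", "environment": "prod"}   # from the sample config map
alert = {"alertname": "KubeNodeNotReady", "severity": "warning"}

print(apply_external_labels(alert, external))
print(check_external_labels({"cluster": "c1", "region": "eu"}))  # ['cluster']
```

The second call flags cluster as a key to avoid, matching the warning; region and environment pass through untouched.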
To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors user-defined projects:
1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

   ```shell
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

2. Define a map of labels you want to add for every metric under data/config.yaml:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       prometheus:
         externalLabels:
           <key>: <value>
   ```

   Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value.

   Warning:

   - Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten.
   - Do not use cluster as a key name. Using it can cause issues where you are unable to see data in the developer dashboards.

   Note: In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object will only configure external labels for metrics and not for any rules.

   For example, to add metadata about the region and environment to all time series and alerts related to user-defined projects, use the following example:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       prometheus:
         externalLabels:
           region: eu
           environment: prod
   ```

3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.