Chapter 3. Configuring core platform monitoring
3.1. Preparing to configure core platform monitoring stack
The OpenShift Container Platform installation program provides only a limited number of configuration options before installation. Configuring most OpenShift Container Platform framework components, including the cluster monitoring stack, happens after the installation.
This section explains which monitoring components can be configured and how to prepare for configuring the monitoring stack.
- Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in the Config map reference for the Cluster Monitoring Operator are supported for configuration.
- The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources.
3.1.1. Configurable monitoring components
This table shows the monitoring components you can configure and the keys used to specify the components in the cluster-monitoring-config
config map.
Component | cluster-monitoring-config config map key
---|---
Prometheus Operator | prometheusOperator
Prometheus | prometheusK8s
Alertmanager | alertmanagerMain
Thanos Querier | thanosQuerier
kube-state-metrics | kubeStateMetrics
monitoring-plugin | monitoringPlugin
openshift-state-metrics | openshiftStateMetrics
Telemeter Client | telemeterClient
Metrics Server | metricsServer
Different configuration changes to the ConfigMap object result in different outcomes:
- The pods are not redeployed. Therefore, there is no service outage.
- The affected pods are redeployed:
  - For single-node clusters, this results in a temporary service outage.
  - For multi-node clusters, because of high availability, the affected pods are gradually rolled out and the monitoring stack remains available.
- Configuring and resizing a persistent volume always results in a service outage, regardless of high availability.
Each procedure that requires a change in the config map includes its expected outcome.
3.1.2. Creating a cluster monitoring config map
You can configure the core OpenShift Container Platform monitoring components by creating and updating the cluster-monitoring-config
config map in the openshift-monitoring
project. The Cluster Monitoring Operator (CMO) then configures the core components of the monitoring stack.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have installed the OpenShift CLI (oc).
Procedure
Check whether the cluster-monitoring-config ConfigMap object exists:

$ oc -n openshift-monitoring get configmap cluster-monitoring-config

If the ConfigMap object does not exist:

Create the following YAML manifest. In this example the file is called cluster-monitoring-config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |

Apply the configuration to create the ConfigMap object:

$ oc apply -f cluster-monitoring-config.yaml
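To confirm that the config map exists and to inspect its current contents, you can view it directly. This is a minimal verification step using standard oc commands:

$ oc -n openshift-monitoring get configmap cluster-monitoring-config -o yaml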
3.1.3. Granting users permissions for core platform monitoring
As a cluster administrator, you can monitor all core OpenShift Container Platform and user-defined projects.
You can also grant developers and other users different permissions for core platform monitoring. You can grant the permissions by assigning one of the following monitoring roles or cluster roles:
Name | Description | Project
---|---|---
 | Users with this role have the ability to access Thanos Querier API endpoints. Additionally, it grants access to the core platform Prometheus API and user-defined Thanos Ruler API endpoints. |
 | Users with this role can manage |
 | Users with this role can manage the Alertmanager API for core platform monitoring. They can also manage alert silences in the OpenShift Container Platform web console. |
 | Users with this role can monitor the Alertmanager API for core platform monitoring. They can also view alert silences in the OpenShift Container Platform web console. |
 | Users with this cluster role have the same access rights as | Must be bound with
3.1.3.1. Granting user permissions by using the web console
You can grant users permissions for the openshift-monitoring project or their own projects by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- The user account that you are assigning the role to already exists.
Procedure
- In the OpenShift Container Platform web console, go to User Management → RoleBindings → Create binding.
- In the Binding Type section, select the Namespace Role Binding type.
- In the Name field, enter a name for the role binding.
- In the Namespace field, select the project where you want to grant the access.

Important: The monitoring role or cluster role permissions that you grant to a user by using this procedure apply only to the project that you select in the Namespace field.
- Select a monitoring role or cluster role from the Role Name list.
- In the Subject section, select User.
- In the Subject Name field, enter the name of the user.
- Select Create to apply the role binding.
3.1.3.2. Granting user permissions by using the CLI
You can grant users permissions for the openshift-monitoring project or their own projects by using the OpenShift CLI (oc).
Whichever role or cluster role you choose, you must bind it against a specific project as a cluster administrator.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- The user account that you are assigning the role to already exists.
- You have installed the OpenShift CLI (oc).
Procedure
To assign a monitoring role to a user for a project, enter the following command:
$ oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace>

Substitute <role> with the desired monitoring role, <user> with the user to whom you want to assign the role, and <namespace> with the project where you want to grant the access.
To assign a monitoring cluster role to a user for a project, enter the following command:
$ oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace>

Substitute <cluster-role> with the desired monitoring cluster role, <user> with the user to whom you want to assign the cluster role, and <namespace> with the project where you want to grant the access.
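For example, the following commands are an illustrative sketch that assumes a user named user1 and uses the monitoring-alertmanager-edit role and the cluster-monitoring-view cluster role; substitute role and user names that exist in your cluster:

$ oc adm policy add-role-to-user monitoring-alertmanager-edit user1 -n openshift-monitoring --role-namespace openshift-monitoring

$ oc adm policy add-cluster-role-to-user cluster-monitoring-view user1 -n openshift-monitoring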
3.2. Configuring performance and scalability for core platform monitoring
You can configure the monitoring stack to optimize the performance and scale of your clusters. The following documentation provides information about how to distribute the monitoring components and control the impact of the monitoring stack on CPU and memory resources.
3.2.1. Controlling the placement and distribution of monitoring components
You can move the monitoring stack components to specific nodes:
- Use the nodeSelector constraint with labeled nodes to move any of the monitoring stack components to specific nodes.
- Assign tolerations to enable moving components to tainted nodes.
By doing so, you control the placement and distribution of the monitoring components across a cluster.
By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies.
3.2.1.1. Moving monitoring components to different nodes
To specify the nodes in your cluster on which monitoring stack components will run, configure the nodeSelector
constraint for the components in the cluster-monitoring-config
config map to match labels assigned to the nodes.
You cannot add a node selector constraint directly to an existing scheduled pod.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config ConfigMap object.
- You have installed the OpenShift CLI (oc).
Procedure
If you have not done so yet, add a label to the nodes on which you want to run the monitoring components:

$ oc label nodes <node_name> <node_label>

Replace <node_name> with the name of the node where you want to add the label. Replace <node_label> with the name of the desired label.

Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Specify the node labels for the nodeSelector constraint for the component under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # ...
    <component>:
      nodeSelector:
        <node_label_1>
        <node_label_2>
    # ...

- <component>: Substitute with the appropriate monitoring stack component name.
- <node_label_1>: Substitute with the label you added to the node.
- <node_label_2>: Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels.

Note: If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations.

- Save the file to apply the changes. The components specified in the new configuration are automatically moved to the new nodes, and the pods affected by the new configuration are redeployed.
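As an illustrative sketch (not one of the official examples), assume the target nodes were labeled with monitoring=prometheus; the following configuration then pins the core Prometheus pods to those nodes:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      nodeSelector:
        monitoring: prometheus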
3.2.1.2. Assigning tolerations to monitoring components
You can assign tolerations to any of the monitoring stack components to enable moving them to tainted nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config ConfigMap object.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the cluster-monitoring-config config map in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Specify tolerations for the component:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    <component>:
      tolerations:
        <toleration_specification>

Substitute <component> and <toleration_specification> accordingly.

For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1. This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the alertmanagerMain component to tolerate the example taint:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      tolerations:
      - key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoSchedule"

- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
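After the pods are redeployed, you can confirm on which nodes the monitoring pods were scheduled. This is a simple check using standard oc output options:

$ oc -n openshift-monitoring get pods -o wide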
3.2.2. Setting the body size limit for metrics scraping
By default, no limit exists for the uncompressed body size for data returned from scraped metrics targets. You can set a body size limit to help avoid situations in which Prometheus consumes excessive amounts of memory when scraped targets return a response that contains a large amount of data. In addition, by setting a body size limit, you can reduce the impact that a malicious target might have on Prometheus and on the cluster as a whole.
After you set a value for enforcedBodySizeLimit
, the alert PrometheusScrapeBodySizeLimitHit
fires when at least one Prometheus scrape target replies with a response body larger than the configured value.
If metrics data scraped from a target has an uncompressed body size exceeding the configured size limit, the scrape fails. Prometheus then considers this target to be down and sets its up
metric value to 0
, which can trigger the TargetDown
alert.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add a value for enforcedBodySizeLimit to data/config.yaml/prometheusK8s to limit the body size that can be accepted per target scrape:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |-
    prometheusK8s:
      enforcedBodySizeLimit: 40MB

Specify the maximum body size for scraped metrics targets. This enforcedBodySizeLimit example limits the uncompressed size per target scrape to 40 megabytes. Valid numeric values use the Prometheus data size format: B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), and EB (exabytes). The default value is 0, which specifies no limit. You can also set the value to automatic to calculate the limit automatically based on cluster capacity.

- Save the file to apply the changes. The new configuration is applied automatically.
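To check whether any target has exceeded the configured limit, you can query the Thanos Querier API in the same way as the test query shown later in this chapter. The metric name prometheus_target_scrapes_exceeded_body_size_limit_total is the upstream Prometheus counter for this condition and is used here as an assumption:

$ token=`oc create token prometheus-k8s -n openshift-monitoring`
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=prometheus_target_scrapes_exceeded_body_size_limit_total'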
3.2.3. Managing CPU and memory resources for monitoring components
You can ensure that the containers that run monitoring components have enough CPU and memory resources by specifying values for resource limits and requests for those components.
You can configure these limits and requests for core platform monitoring components in the openshift-monitoring
namespace.
3.2.3.1. Specifying limits and requests
To configure CPU and memory resources, specify values for resource limits and requests in the cluster-monitoring-config
ConfigMap
object in the openshift-monitoring
namespace.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the ConfigMap object named cluster-monitoring-config.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the cluster-monitoring-config config map in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add values to define resource limits and requests for each component you want to configure.

Important: Ensure that the value set for a limit is always higher than the value set for a request. Otherwise, an error will occur, and the container will not run.

Example of setting resource limits and requests

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 200m
          memory: 500Mi
    prometheusK8s:
      resources:
        limits:
          cpu: 500m
          memory: 3Gi
        requests:
          cpu: 200m
          memory: 500Mi
    thanosQuerier:
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 200m
          memory: 500Mi
    prometheusOperator:
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 200m
          memory: 500Mi
    metricsServer:
      resources:
        requests:
          cpu: 10m
          memory: 50Mi
        limits:
          cpu: 50m
          memory: 500Mi
    kubeStateMetrics:
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 200m
          memory: 500Mi
    telemeterClient:
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 200m
          memory: 500Mi
    openshiftStateMetrics:
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 200m
          memory: 500Mi
    nodeExporter:
      resources:
        limits:
          cpu: 50m
          memory: 150Mi
        requests:
          cpu: 20m
          memory: 50Mi
    monitoringPlugin:
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 200m
          memory: 500Mi
    prometheusOperatorAdmissionWebhook:
      resources:
        limits:
          cpu: 50m
          memory: 100Mi
        requests:
          cpu: 20m
          memory: 50Mi

- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
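To confirm that the new limits and requests were applied, you can inspect the resources of a redeployed pod. This is a minimal verification sketch; the pod name prometheus-k8s-0 matches the naming shown elsewhere in this chapter:

$ oc -n openshift-monitoring get pod prometheus-k8s-0 -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.resources}{"\n"}{end}'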
3.2.4. Choosing a metrics collection profile
To choose a metrics collection profile for core OpenShift Container Platform monitoring components, edit the cluster-monitoring-config
ConfigMap
object.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have created the cluster-monitoring-config ConfigMap object.
- You have access to the cluster as a user with the cluster-admin cluster role.
Procedure
Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add the metrics collection profile setting under data/config.yaml/prometheusK8s:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      collectionProfile: <metrics_collection_profile_name>

Substitute <metrics_collection_profile_name> with the name of the metrics collection profile. The available values are full or minimal. If you do not specify a value or if the collectionProfile key name does not exist in the config map, the default setting of full is used.

The following example sets the metrics collection profile to minimal for the core platform instance of Prometheus:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      collectionProfile: minimal

- Save the file to apply the changes. The new configuration is applied automatically.
3.2.5. Configuring pod topology spread constraints
You can configure pod topology spread constraints for all the pods deployed by the Cluster Monitoring Operator to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.
You can configure pod topology spread constraints for monitoring pods by using the cluster-monitoring-config
config map.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config ConfigMap object.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the cluster-monitoring-config config map in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add the following settings under the data/config.yaml field to configure pod topology spread constraints:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    <component>:
      topologySpreadConstraints:
      - maxSkew: <n>
        topologyKey: <key>
        whenUnsatisfiable: <value>
        labelSelector:
          <match_option>

- <component>: Specify the name of the component for which you want to set up pod topology spread constraints.
- maxSkew: Specify a numeric value that defines the degree to which pods are allowed to be unevenly distributed.
- topologyKey: Specify a key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler tries to put a balanced number of pods into each domain.
- whenUnsatisfiable: Available options are DoNotSchedule and ScheduleAnyway. Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
- labelSelector: Specify this selector to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.

Example configuration for Prometheus

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: monitoring
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: prometheus

- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
3.3. Storing and recording data for core platform monitoring
Store and record your metrics and alerting data, configure logs to specify which activities are recorded, control how long Prometheus retains stored data, and set the maximum amount of disk space for the data. These actions help you protect your data and use it for troubleshooting.
3.3.1. Configuring persistent storage
Run cluster monitoring with persistent storage to gain the following benefits:
- Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated.
- Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted.
For production environments, it is highly recommended to configure persistent storage.
In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability.
3.3.1.1. Persistent storage prerequisites
- Dedicate sufficient persistent storage to ensure that the disk does not become full.
- Use Filesystem as the storage type value for the volumeMode parameter when you configure the persistent volume.

Important:
- Do not use a raw block volume, which is described with volumeMode: Block in the PersistentVolume resource. Prometheus cannot use raw block volumes.
- Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant.
3.3.1.2. Configuring a persistent volume claim
To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC).
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config ConfigMap object.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the cluster-monitoring-config config map in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add your PVC configuration for the component under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    <component>:
      volumeClaimTemplate:
        spec:
          storageClassName: <storage_class>
          resources:
            requests:
              storage: <amount_of_storage>

Substitute <component> with the monitoring component you want to configure, <storage_class> with the storage class to use, and <amount_of_storage> with the amount of storage to request.

The following example configures a PVC that claims persistent storage for Prometheus:

Example PVC configuration

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          storageClassName: my-storage-class
          resources:
            requests:
              storage: 40Gi

Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied.

Warning: When you update the config map with a PVC configuration, the affected StatefulSet object is recreated, resulting in a temporary service outage.
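To confirm that the claim was created and bound after the pods are redeployed, list the PVCs in the openshift-monitoring project:

$ oc -n openshift-monitoring get pvc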
3.3.1.3. Resizing a persistent volume
You can resize a persistent volume (PV) for monitoring components, such as Prometheus or Alertmanager. You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured.
You can only expand the size of the PVC. Shrinking the storage size is not possible.
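As a hedged sketch of the manual expansion step, you can patch the PVC with the new storage request. The PVC name prometheus-k8s-db-prometheus-k8s-0 is an assumption based on the default Prometheus volume claim naming; check the actual name with oc get pvc first:

$ oc -n openshift-monitoring patch pvc prometheus-k8s-db-prometheus-k8s-0 -p '{"spec": {"resources": {"requests": {"storage": "100Gi"}}}}'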
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config ConfigMap object.
- You have configured at least one PVC for core OpenShift Container Platform monitoring components.
- You have installed the OpenShift CLI (oc).
Procedure
- Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes.
Edit the cluster-monitoring-config config map in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add a new storage size for the PVC configuration for the component under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    <component>:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: <amount_of_storage>

Substitute <component> with the monitoring component you want to configure and <amount_of_storage> with the new storage size.

The following example sets the new PVC request to 100 gigabytes for the Prometheus instance:

Example storage configuration for prometheusK8s

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 100Gi

Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

Warning: When you update the config map with a new storage size, the affected StatefulSet object is recreated, resulting in a temporary service outage.
3.3.2. Modifying retention time and size for Prometheus metrics data
By default, Prometheus retains metrics data for 15 days for core platform monitoring. You can modify the retention time for the Prometheus instance to change when the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses.
Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize
limit. In such cases, the KubePersistentVolumeFillingUp
alert fires until the space on a PV is lower than the retentionSize
limit.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config ConfigMap object.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the cluster-monitoring-config config map in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add the retention time and size configuration under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: <time_specification>
      retentionSize: <size_specification>

- The retention time: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s.
- The retention size: a number directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), or EB (exabytes).

The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance:

Example of setting retention time for Prometheus

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 24h
      retentionSize: 10GB

- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
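To confirm that the retention settings reached the Prometheus pods, you can check the arguments of the prometheus-k8s stateful set. This is a minimal verification sketch that assumes the default stateful set name:

$ oc -n openshift-monitoring get statefulset prometheus-k8s -o yaml | grep retention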
3.3.3. Configuring audit logs for Metrics Server
You can configure audit logs for Metrics Server to help you troubleshoot issues with the server. Audit logs record the sequence of actions in a cluster. They can record user, application, or control plane activities.
You can set audit log rules, which determine what events are recorded and what data they should include. This can be achieved with the following audit profiles:
- Metadata (default): This profile enables the logging of event metadata including user, timestamps, resource, and verb. It does not record request and response bodies.
- Request: This enables the logging of event metadata and request body, but it does not record response body. This configuration does not apply for non-resource requests.
- RequestResponse: This enables the logging of event metadata, and request and response bodies. This configuration does not apply for non-resource requests.
- None: None of the previously described events are recorded.
You can configure the audit profiles by modifying the cluster-monitoring-config
config map. The following example sets the profile to Request
, allowing the logging of event metadata and request body for Metrics Server:
apiVersion: v1
kind: ConfigMap
metadata:
name: cluster-monitoring-config
namespace: openshift-monitoring
data:
config.yaml: |
metricsServer:
audit:
profile: Request
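To check that the profile was applied, you can inspect the metrics-server deployment for audit-related arguments. This is a hedged verification sketch that assumes the audit settings appear in the container arguments:

$ oc -n openshift-monitoring get deploy metrics-server -o yaml | grep -i audit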
3.3.4. Setting log levels for monitoring components
You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, and Thanos Querier.
The following log levels can be applied to the relevant component in the cluster-monitoring-config
ConfigMap
object:
- debug. Log debug, informational, warning, and error messages.
- info. Log informational, warning, and error messages.
- warn. Log warning and error messages only.
- error. Log error messages only.
The default log level is info
.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config ConfigMap object.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the cluster-monitoring-config config map in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add logLevel: <log_level> for a component under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    <component>:
      logLevel: <log_level>

Substitute <component> with the monitoring stack component you want to configure and <log_level> with the log level to apply to it.

- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level for the prometheus-operator deployment:

$ oc -n openshift-monitoring get deploy prometheus-operator -o yaml | grep "log-level"

Example output

- --log-level=debug

Check that the pods for the component are running. The following example lists the status of pods:

$ oc -n openshift-monitoring get pods

Note: If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully.
3.3.5. Enabling the query log file for Prometheus
You can configure Prometheus to write all queries that have been run by the engine to a log file.
Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap
object to enable the feature.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config ConfigMap object.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the cluster-monitoring-config config map in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add the queryLogFile parameter for Prometheus under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      queryLogFile: <path>

Substitute <path> with the full path to the file in which queries will be logged.

- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

Verify that the pods for the component are running. The following sample command lists the status of pods:

$ oc -n openshift-monitoring get pods

Example output

...
prometheus-operator-567c9bc75c-96wkj   2/2   Running   0   62m
prometheus-k8s-0                       6/6   Running   1   57m
prometheus-k8s-1                       6/6   Running   1   57m
thanos-querier-56c76d7df4-2xkpc        6/6   Running   0   57m
thanos-querier-56c76d7df4-j5p29        6/6   Running   0   57m
...

Read the query log:

$ oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path>

Important: Revert the setting in the config map after you have examined the logged query information.
3.3.6. Enabling query logging for Thanos Querier
For default platform monitoring in the openshift-monitoring
project, you can enable the Cluster Monitoring Operator (CMO) to log all queries run by Thanos Querier.
Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap
object to enable the feature.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config ConfigMap object.
Procedure
You can enable query logging for Thanos Querier in the openshift-monitoring
project:
Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add a thanosQuerier section under data/config.yaml and add values as shown in the following example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    thanosQuerier:
      enableRequestLogging: <value>
      logLevel: <value>

Set enableRequestLogging to true to enable request logging, and set logLevel to the log level to use, such as debug.

- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

Verification

Verify that the Thanos Querier pods are running. The following sample command lists the status of pods in the openshift-monitoring project:

$ oc -n openshift-monitoring get pods

Run a test query using the following sample commands as a model:

$ token=`oc create token prometheus-k8s -n openshift-monitoring`
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'

Run the following command to read the query log:

$ oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query

Note: Because the thanos-querier pods are highly available (HA) pods, you might be able to see logs in only one pod.

- After you examine the logged query information, disable query logging by changing the enableRequestLogging value to false in the config map.
3.4. Configuring metrics for core platform monitoring
Configure the collection of metrics to monitor how cluster components and your own workloads are performing.
You can send ingested metrics to remote systems for long-term storage and add cluster ID labels to the metrics to identify the data coming from different clusters.
3.4.1. Configuring remote write storage
You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config ConfigMap object.
- You have installed the OpenShift CLI (oc).
- You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature.

Important: Red Hat only provides information for configuring remote write senders and does not offer guidance on configuring receiver endpoints. Customers are responsible for setting up their own endpoints that are remote-write compatible. Issues with endpoint receiver configurations are not included in Red Hat production support.

- You have set up authentication credentials in a Secret object for the remote write endpoint. You must create the secret in the openshift-monitoring namespace.

Warning: To reduce security risks, use HTTPS and authentication to send metrics to an endpoint.
Procedure
Edit the cluster-monitoring-config config map in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add a remoteWrite: section under data/config.yaml/prometheusK8s, as shown in the following example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        <endpoint_authentication_credentials>

- url: The URL of the remote write endpoint.
- <endpoint_authentication_credentials>: The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings for sample configurations of supported authentication methods.

Add write relabel configuration values after the authentication credentials:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        <endpoint_authentication_credentials>
        writeRelabelConfigs:
        - <your_write_relabel_configs>

Substitute <your_write_relabel_configs> with configuration for metrics that you want to send to the remote endpoint.

Example of forwarding a single metric called my_metric

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        writeRelabelConfigs:
        - sourceLabels: [__name__]
          regex: 'my_metric'
          action: keep

Example of forwarding metrics called my_metric_1 and my_metric_2 in the my_namespace namespace

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        writeRelabelConfigs:
        - sourceLabels: [__name__,namespace]
          regex: '(my_metric_1|my_metric_2);my_namespace'
          action: keep

- Save the file to apply the changes. The new configuration is applied automatically.
3.4.1.1. Supported remote write authentication settings
You can use different methods to authenticate with a remote write endpoint. Currently supported authentication methods are AWS Signature Version 4, basic authentication, authorization, OAuth 2.0, and TLS client. The following table provides details about supported authentication methods for use with remote write.
Authentication method | Config map field | Description
---|---|---
AWS Signature Version 4 | sigv4 | This method uses AWS Signature Version 4 authentication to sign requests. You cannot use this method simultaneously with authorization, OAuth 2.0, or Basic authentication.
Basic authentication | basicAuth | Basic authentication sets the authorization header on every remote write request with the configured username and password.
authorization | authorization | Authorization sets the Authorization header on every remote write request with the configured token.
OAuth 2.0 | oauth2 | An OAuth 2.0 configuration uses the client credentials grant type. Prometheus fetches an access token from tokenUrl with the specified client ID and client secret to access the remote write endpoint.
TLS client | tlsConfig | A TLS client configuration specifies the CA certificate, the client certificate, and the client key file information used to authenticate with the remote write endpoint server using TLS. The sample configuration assumes that you have already created a CA certificate file, a client certificate file, and a client key file.
3.4.1.2. Example remote write authentication settings
The following samples show different authentication settings you can use to connect to a remote write endpoint. Each sample also shows how to configure a corresponding Secret
object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with default platform monitoring in the openshift-monitoring
namespace.
3.4.1.2.1. Sample YAML for AWS Signature Version 4 authentication
The following shows the settings for a sigv4
secret named sigv4-credentials
in the openshift-monitoring
namespace.
apiVersion: v1
kind: Secret
metadata:
name: sigv4-credentials
namespace: openshift-monitoring
stringData:
accessKey: <AWS_access_key>
secretKey: <AWS_secret_key>
type: Opaque
The following shows sample AWS Signature Version 4 remote write authentication settings that use a Secret
object named sigv4-credentials
in the openshift-monitoring
namespace:
apiVersion: v1
kind: ConfigMap
metadata:
name: cluster-monitoring-config
namespace: openshift-monitoring
data:
config.yaml: |
prometheusK8s:
remoteWrite:
- url: "https://authorization.example.com/api/write"
sigv4:
region: <AWS_region>
accessKey:
name: sigv4-credentials
key: accessKey
secretKey:
name: sigv4-credentials
key: secretKey
profile: <AWS_profile_name>
roleArn: <AWS_role_arn>
- region: The AWS region.
- accessKey.name and secretKey.name: The name of the Secret object containing the AWS API access credentials.
- accessKey.key: The key that contains the AWS API access key in the specified Secret object.
- secretKey.key: The key that contains the AWS API secret key in the specified Secret object.
- profile: The name of the AWS profile that is being used to authenticate.
- roleArn: The unique identifier for the Amazon Resource Name (ARN) assigned to your role.
3.4.1.2.2. Sample YAML for Basic authentication
The following shows sample Basic authentication settings for a Secret
object named rw-basic-auth
in the openshift-monitoring
namespace:
apiVersion: v1
kind: Secret
metadata:
name: rw-basic-auth
namespace: openshift-monitoring
stringData:
user: <basic_username>
password: <basic_password>
type: Opaque
The following sample shows a basicAuth
remote write configuration that uses a Secret
object named rw-basic-auth
in the openshift-monitoring
namespace. It assumes that you have already set up authentication credentials for the endpoint.
apiVersion: v1
kind: ConfigMap
metadata:
name: cluster-monitoring-config
namespace: openshift-monitoring
data:
config.yaml: |
prometheusK8s:
remoteWrite:
- url: "https://basicauth.example.com/api/write"
basicAuth:
username:
name: rw-basic-auth
key: user
password:
name: rw-basic-auth
key: password
3.4.1.2.3. Sample YAML for authentication with a bearer token using a Secret
Object
The following shows bearer token settings for a Secret
object named rw-bearer-auth
in the openshift-monitoring
namespace:
apiVersion: v1
kind: Secret
metadata:
name: rw-bearer-auth
namespace: openshift-monitoring
stringData:
token: <authentication_token>
type: Opaque
- token: The authentication token.
The following shows sample bearer token config map settings that use a Secret
object named rw-bearer-auth
in the openshift-monitoring
namespace:
apiVersion: v1
kind: ConfigMap
metadata:
name: cluster-monitoring-config
namespace: openshift-monitoring
data:
config.yaml: |
enableUserWorkload: true
prometheusK8s:
remoteWrite:
- url: "https://authorization.example.com/api/write"
authorization:
type: Bearer
credentials:
name: rw-bearer-auth
key: token
3.4.1.2.4. Sample YAML for OAuth 2.0 authentication
The following shows sample OAuth 2.0 settings for a Secret
object named oauth2-credentials
in the openshift-monitoring
namespace:
apiVersion: v1
kind: Secret
metadata:
name: oauth2-credentials
namespace: openshift-monitoring
stringData:
id: <oauth2_id>
secret: <oauth2_secret>
type: Opaque
The following shows an oauth2
remote write authentication sample configuration that uses a Secret
object named oauth2-credentials
in the openshift-monitoring
namespace:
apiVersion: v1
kind: ConfigMap
metadata:
name: cluster-monitoring-config
namespace: openshift-monitoring
data:
config.yaml: |
prometheusK8s:
remoteWrite:
- url: "https://test.example.com/api/write"
oauth2:
clientId:
secret:
name: oauth2-credentials
key: id
clientSecret:
name: oauth2-credentials
key: secret
tokenUrl: https://example.com/oauth2/token
scopes:
- <scope_1>
- <scope_2>
endpointParams:
param1: <parameter_1>
param2: <parameter_2>
- clientId.secret.name and clientSecret.name: The name of the corresponding Secret object. Note that clientId can alternatively refer to a ConfigMap object, although clientSecret must refer to a Secret object.
- clientId.secret.key and clientSecret.key: The key that contains the OAuth 2.0 credentials in the specified Secret object.
- tokenUrl: The URL used to fetch a token with the specified clientId and clientSecret.
- scopes: The OAuth 2.0 scopes for the authorization request. These scopes limit what data the tokens can access.
- endpointParams: The OAuth 2.0 authorization request parameters required for the authorization server.
3.4.1.2.5. Sample YAML for TLS client authentication
The following shows sample TLS client settings for a tls
Secret
object named mtls-bundle
in the openshift-monitoring
namespace.
apiVersion: v1
kind: Secret
metadata:
name: mtls-bundle
namespace: openshift-monitoring
data:
ca.crt: <ca_cert>
client.crt: <client_cert>
client.key: <client_key>
type: tls
The following sample shows a tlsConfig
remote write authentication configuration that uses a TLS Secret
object named mtls-bundle
.
apiVersion: v1
kind: ConfigMap
metadata:
name: cluster-monitoring-config
namespace: openshift-monitoring
data:
config.yaml: |
prometheusK8s:
remoteWrite:
- url: "https://remote-write-endpoint.example.com"
tlsConfig:
ca:
secret:
name: mtls-bundle
key: ca.crt
cert:
secret:
name: mtls-bundle
key: client.crt
keySecret:
name: mtls-bundle
key: client.key
- `name`: The name of the corresponding `Secret` object that contains the TLS authentication credentials. Note that `ca` and `cert` can alternatively refer to a `ConfigMap` object, though `keySecret` must refer to a `Secret` object.
- `key` under `ca`: The key in the specified `Secret` object that contains the CA certificate for the endpoint.
- `key` under `cert`: The key in the specified `Secret` object that contains the client certificate for the endpoint.
- `key` under `keySecret`: The key in the specified `Secret` object that contains the client key secret.
3.4.1.3. Example remote write queue configuration
You can use the `queueConfig` object for remote write to tune the remote write queue parameters. The following example shows the queue parameters with their default values for default platform monitoring in the `openshift-monitoring` namespace.
Example configuration of remote write parameters with default values
apiVersion: v1
kind: ConfigMap
metadata:
name: cluster-monitoring-config
namespace: openshift-monitoring
data:
config.yaml: |
prometheusK8s:
remoteWrite:
- url: "https://remote-write-endpoint.example.com"
<endpoint_authentication_credentials>
queueConfig:
capacity: 10000
minShards: 1
maxShards: 50
maxSamplesPerSend: 2000
batchSendDeadline: 5s
minBackoff: 30ms
maxBackoff: 5s
retryOnRateLimit: false
sampleAgeLimit: 0s
- `capacity`: The number of samples to buffer per shard before they are dropped from the queue.
- `minShards`: The minimum number of shards.
- `maxShards`: The maximum number of shards.
- `maxSamplesPerSend`: The maximum number of samples per send.
- `batchSendDeadline`: The maximum time for a sample to wait in the buffer.
- `minBackoff`: The initial time to wait before retrying a failed request. The time doubles for every retry up to the `maxBackoff` time.
- `maxBackoff`: The maximum time to wait before retrying a failed request.
- `retryOnRateLimit`: Set this parameter to `true` to retry a request after receiving a 429 status code from the remote write storage.
- `sampleAgeLimit`: Samples older than this limit are dropped from the queue. If the value is undefined or set to `0s`, the parameter is ignored.
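For example, if the remote write endpoint cannot keep up with the sample volume, you might allow more shards and larger batches and enable retries on rate limiting. The following is an illustrative sketch only, not a recommendation: the values shown are assumptions that you would tune for your own endpoint, and any parameter you omit keeps its default value.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        <endpoint_authentication_credentials>
        queueConfig:
          maxShards: 100          # assumed value: allow more parallel senders than the default 50
          maxSamplesPerSend: 4000 # assumed value: send larger batches to a high-throughput endpoint
          retryOnRateLimit: true  # retry requests that receive a 429 status code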
3.4.1.4. Table of remote write metrics
The following table describes remote write and related metrics that can help you troubleshoot issues with your remote write configuration.
Metric | Description |
---|---|
`prometheus_remote_storage_highest_timestamp_in_seconds` | Shows the newest timestamp that Prometheus stored in the write-ahead log (WAL) for any sample. |
`prometheus_remote_storage_queue_highest_sent_timestamp_seconds` | Shows the newest timestamp that the remote write queue successfully sent. |
`prometheus_remote_storage_samples_retried_total` | The number of samples that remote write failed to send and had to resend to remote storage. A steady high rate for this metric indicates problems with the network or remote storage endpoint. |
`prometheus_remote_storage_shards` | Shows how many shards are currently running for each remote endpoint. |
`prometheus_remote_storage_shards_desired` | Shows the calculated number of shards needed, based on the current write throughput and the rate of incoming versus sent samples. |
`prometheus_remote_storage_shards_max` | Shows the maximum number of shards based on the current configuration. |
`prometheus_remote_storage_shards_min` | Shows the minimum number of shards based on the current configuration. |
`prometheus_tsdb_wal_segment_current` | The WAL segment file that Prometheus is currently writing new data to. |
`prometheus_wal_watcher_current_segment` | The WAL segment file that each remote write instance is currently reading from. |
3.4.2. Creating cluster ID labels for metrics
You can create cluster ID labels for metrics by adding the `writeRelabelConfigs` settings for remote write storage in the `cluster-monitoring-config` config map in the `openshift-monitoring` namespace.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role.
- You have created the `cluster-monitoring-config` `ConfigMap` object.
- You have installed the OpenShift CLI (`oc`).
- You have configured remote write storage.
Procedure
Edit the `cluster-monitoring-config` config map in the `openshift-monitoring` project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

In the `writeRelabelConfigs:` section under `data/config.yaml/prometheusK8s/remoteWrite`, add cluster ID relabel configuration values:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        <endpoint_authentication_credentials>
        writeRelabelConfigs:
        - <relabel_config>
The following sample shows how to forward a metric with the cluster ID label `cluster_id`:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        writeRelabelConfigs:
        - sourceLabels:
          - __tmp_openshift_cluster_id__
          targetLabel: cluster_id
          action: replace
- `sourceLabels`: The system initially applies a temporary cluster ID source label named `__tmp_openshift_cluster_id__`. This temporary label gets replaced by the cluster ID label name that you specify.
- `targetLabel`: Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use `__tmp_openshift_cluster_id__`; the final relabeling step removes labels that use this name.
- `action`: The `replace` write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified.
- Save the file to apply the changes. The new configuration is applied automatically.
3.5. Configuring alerts and notifications for core platform monitoring
You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata information.
3.5.1. Configuring external Alertmanager instances
The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus.
You can add external Alertmanager instances to route alerts for core OpenShift Container Platform projects.
If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role.
- You have created the `cluster-monitoring-config` `ConfigMap` object.
- You have installed the OpenShift CLI (`oc`).
Procedure
Edit the `cluster-monitoring-config` config map in the `openshift-monitoring` project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add an `additionalAlertmanagerConfigs` section with configuration details under `data/config.yaml/prometheusK8s`:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      additionalAlertmanagerConfigs:
      - <alertmanager_specification>
- Substitute `<alertmanager_specification>` with authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (`bearerToken`) and client TLS (`tlsConfig`).
The following sample config map configures an additional Alertmanager for Prometheus by using a bearer token with client TLS authentication:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      additionalAlertmanagerConfigs:
      - scheme: https
        pathPrefix: /
        timeout: "30s"
        apiVersion: v1
        bearerToken:
          name: alertmanager-bearer-token
          key: token
        tlsConfig:
          key:
            name: alertmanager-tls
            key: tls.key
          cert:
            name: alertmanager-tls
            key: tls.crt
          ca:
            name: alertmanager-tls
            key: tls.ca
        staticConfigs:
        - external-alertmanager1-remote.com
        - external-alertmanager1-remote2.com
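The `bearerToken` and `tlsConfig` fields in this sample reference `Secret` objects in the `openshift-monitoring` namespace. As a minimal sketch, such secrets could look like the following; the key names match the sample above, but the placeholder values and the use of `stringData` with the `Opaque` type are assumptions for illustration:
apiVersion: v1
kind: Secret
metadata:
  name: alertmanager-bearer-token
  namespace: openshift-monitoring
stringData:
  token: <bearer_token>
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: alertmanager-tls
  namespace: openshift-monitoring
stringData:
  tls.key: <client_key>
  tls.crt: <client_cert>
  tls.ca: <ca_cert>
type: Opaque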
- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
3.5.1.1. Disabling the local Alertmanager
A local Alertmanager that routes alerts from Prometheus instances is enabled by default in the `openshift-monitoring` project of the OpenShift Container Platform monitoring stack.
If you do not need the local Alertmanager, you can disable it by configuring the `cluster-monitoring-config` config map in the `openshift-monitoring` project.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role.
- You have created the `cluster-monitoring-config` config map.
- You have installed the OpenShift CLI (`oc`).
Procedure
Edit the `cluster-monitoring-config` config map in the `openshift-monitoring` project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add `enabled: false` for the `alertmanagerMain` component under `data/config.yaml`:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      enabled: false
- Save the file to apply the changes. The Alertmanager instance is disabled automatically when you apply the change.
3.5.2. Configuring secrets for Alertmanager
The OpenShift Container Platform monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, you can configure Alertmanager to use a secret that contains authentication credentials for the receiver.
For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the `Secret` object rather than in the `ConfigMap` object.
3.5.2.1. Adding a secret to the Alertmanager configuration
You can add secrets to the Alertmanager configuration by editing the `cluster-monitoring-config` config map in the `openshift-monitoring` project.
After you add a secret to the config map, the secret is mounted as a volume at `/etc/alertmanager/secrets/<secret_name>` within the `alertmanager` container for the Alertmanager pods.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role.
- You have created the `cluster-monitoring-config` config map.
- You have created the secret to be configured in Alertmanager in the `openshift-monitoring` project.
- You have installed the OpenShift CLI (`oc`).
Procedure
Edit the `cluster-monitoring-config` config map in the `openshift-monitoring` project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add a `secrets:` section under `data/config.yaml/alertmanagerMain` with the following configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      secrets:
      - <secret_name_1>
      - <secret_name_2>
- `secrets`: This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object.
- `<secret_name_1>`: The name of the `Secret` object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line.
The following sample config map settings configure Alertmanager to use two `Secret` objects named `test-secret-basic-auth` and `test-secret-api-token`:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      secrets:
      - test-secret-basic-auth
      - test-secret-api-token
- Save the file to apply the changes. The new configuration is applied automatically.
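For reference, a `Secret` such as `test-secret-basic-auth` from the preceding sample could be created as in the following sketch. The `password` key name and value are assumptions and must match whatever your Alertmanager receiver configuration expects:
apiVersion: v1
kind: Secret
metadata:
  name: test-secret-basic-auth
  namespace: openshift-monitoring
stringData:
  password: <password>   # assumed key name; reference it from your receiver configuration
type: Opaque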
3.5.3. Attaching additional labels to your time series and alerts
You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role.
- You have created the `cluster-monitoring-config` `ConfigMap` object.
- You have installed the OpenShift CLI (`oc`).
Procedure
Edit the `cluster-monitoring-config` config map in the `openshift-monitoring` project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Define labels you want to add for every metric under `data/config.yaml`:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      externalLabels:
        <key>: <value>
- Substitute `<key>: <value>` with key-value pairs, where `<key>` is a unique name for the new label and `<value>` is its value.
Warning
- Do not use `prometheus` or `prometheus_replica` as key names, because they are reserved and will be overwritten.
- Do not use `cluster` or `managed_cluster` as key names. Using them can cause issues where you are unable to see data in the developer dashboards.
For example, to add metadata about the region and environment to all time series and alerts, use the following configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      externalLabels:
        region: eu
        environment: prod
- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
3.5.4. Configuring alert notifications
In OpenShift Container Platform 4.19, you can view firing alerts in the Alerting UI. You can configure Alertmanager to send notifications about default platform alerts by configuring alert receivers.
Alertmanager does not send notifications by default. It is strongly recommended that you configure Alertmanager to send notifications by configuring alert receivers through the web console or through the `alertmanager-main` secret.
3.5.4.1. Configuring alert routing for default platform alerts
You can configure Alertmanager to send notifications so that you are informed about important alerts coming from your cluster. Customize where and how Alertmanager sends notifications about default platform alerts by editing the default configuration in the `alertmanager-main` secret in the `openshift-monitoring` namespace.
All features of a supported version of upstream Alertmanager are also supported in an OpenShift Container Platform Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation).
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role.
- You have installed the OpenShift CLI (`oc`).
Procedure
Extract the currently active Alertmanager configuration from the `alertmanager-main` secret and save it as a local `alertmanager.yaml` file:

$ oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml

- Open the `alertmanager.yaml` file.
- Edit the Alertmanager configuration:
Optional: Change the default Alertmanager configuration:
Example of the default Alertmanager secret YAML
global:
  resolve_timeout: 5m
  http_config:
    proxy_from_environment: true
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: default
  routes:
  - matchers:
    - "alertname=Watchdog"
    repeat_interval: 2m
    receiver: watchdog
receivers:
- name: default
- name: watchdog
- `proxy_from_environment`: If you configured an HTTP cluster-wide proxy, set this parameter to `true` to enable proxying for all alert receivers.
- `group_wait`: Specify how long Alertmanager waits while collecting initial alerts for a group of alerts before sending a notification.
- `group_interval`: Specify how much time must elapse before Alertmanager sends a notification about new alerts added to a group of alerts for which an initial notification was already sent.
- `repeat_interval`: Specify the minimum amount of time that must pass before an alert notification is repeated. If you want a notification to repeat at each group interval, set the `repeat_interval` value to less than the `group_interval` value. The repeated notification can still be delayed, for example, when certain Alertmanager pods are restarted or rescheduled.
Add your alert receiver configuration:
# ...
receivers:
- name: default
- name: watchdog
- name: <receiver>
  <receiver_configuration>
# ...
Example of configuring PagerDuty as an alert receiver
# ...
receivers:
- name: default
- name: watchdog
- name: team-frontend-page
  pagerduty_configs:
  - routing_key: xxxxxxxxxx
    http_config:
      proxy_from_environment: true
      authorization:
        credentials: xxxxxxxxxx
# ...
Example of configuring email as an alert receiver
# ...
receivers:
- name: default
- name: watchdog
- name: team-frontend-page
  email_configs:
  - to: myemail@example.com
    from: alertmanager@example.com
    smarthost: 'smtp.example.com:587'
    auth_username: alertmanager@example.com
    auth_password: password
    hello: alertmanager
# ...
- `to`: Specify an email address to send notifications to.
- `from`: Specify an email address to send notifications from.
- `smarthost`: Specify the SMTP server address used for sending emails, including the port number.
- `auth_username` and `auth_password`: Specify the authentication credentials that Alertmanager uses to connect to the SMTP server. This example uses a username and password.
- `hello`: Specify the hostname to identify to the SMTP server. If you do not include this parameter, the hostname defaults to `localhost`.
Important
Alertmanager requires an external SMTP server to send email alerts. To configure email alert receivers, ensure you have the necessary connection details for an external SMTP server.
Add the routing configuration:
# ...
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: default
  routes:
  - matchers:
    - "alertname=Watchdog"
    repeat_interval: 2m
    receiver: watchdog
  - matchers:
    - "<your_matching_rules>"
    receiver: <receiver>
# ...
- `matchers`: Use the `matchers` key name to specify the matching rules that an alert must fulfill to match the node. If you define inhibition rules, use the `target_matchers` key name for target matchers and the `source_matchers` key name for source matchers.
- `<your_matching_rules>`: Specify labels to match your alerts.
- `<receiver>`: Specify the name of the receiver to use for the alerts.
Warning
Do not use the `match`, `match_re`, `target_match`, `target_match_re`, `source_match`, and `source_match_re` key names, which are deprecated and planned for removal in a future release.
Example of alert routing
# ...
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: default
  routes:
  - matchers:
    - "alertname=Watchdog"
    repeat_interval: 2m
    receiver: watchdog
  - matchers:
    - "service=example-app"
    routes:
    - matchers:
      - "severity=critical"
      receiver: team-frontend-page
# ...
The previous example routes alerts of `critical` severity that are fired by the `example-app` service to the `team-frontend-page` receiver. Typically, these types of alerts are paged to an individual or a critical response team.
Apply the new configuration in the file:
$ oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-
Verify your routing configuration by visualizing the routing tree:
$ oc exec alertmanager-main-0 -n openshift-monitoring -- amtool config routes show --alertmanager.url http://localhost:9093
Example output
Routing tree:
.
└── default-route  receiver: default
    ├── {alertname="Watchdog"}  receiver: Watchdog
    └── {service="example-app"}  receiver: default
        └── {severity="critical"}  receiver: team-frontend-page
3.5.4.2. Configuring alert routing with the OpenShift Container Platform web console
You can configure alert routing through the OpenShift Container Platform web console to ensure that you learn about important issues with your cluster.
The OpenShift Container Platform web console provides fewer settings for configuring alert routing than the `alertmanager-main` secret. To configure alert routing with access to more configuration settings, see "Configuring alert routing for default platform alerts".
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role.
Procedure
In the OpenShift Container Platform web console, go to Administration → Cluster Settings → Configuration → Alertmanager.
Note
Alternatively, you can go to the same page through the notification drawer. Select the bell icon at the top right of the OpenShift Container Platform web console and choose Configure in the AlertmanagerReceiverNotConfigured alert.
- Click Create Receiver in the Receivers section of the page.
- In the Create Receiver form, add a Receiver name and choose a Receiver type from the list.
Edit the receiver configuration:
For PagerDuty receivers:
- Choose an integration type and add a PagerDuty integration key.
- Add the URL of your PagerDuty installation.
- Click Show advanced configuration if you want to edit the client and incident details or the severity specification.
For webhook receivers:
- Add the endpoint to send HTTP POST requests to.
- Click Show advanced configuration if you want to edit the default option to send resolved alerts to the receiver.
For email receivers:
- Add the email address to send notifications to.
Add SMTP configuration details, including the address to send notifications from, the smarthost and port number used for sending emails, the hostname of the SMTP server, and authentication details.
Important
Alertmanager requires an external SMTP server to send email alerts. To configure email alert receivers, ensure you have the necessary connection details for an external SMTP server.
- Select whether TLS is required.
- Click Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the body of email notifications configuration.
For Slack receivers:
- Add the URL of the Slack webhook.
- Add the Slack channel or user name to send notifications to.
- Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the icon and username configuration. You can also choose whether to find and link channel names and usernames.
By default, firing alerts with labels that match all of the selectors are sent to the receiver. If you want label values for firing alerts to be matched exactly before they are sent to the receiver, perform the following steps:
- Add routing label names and values in the Routing labels section of the form.
- Click Add label to add further routing labels.
- Click Create to create the receiver.
3.5.4.3. Configuring different alert receivers for default platform alerts and user-defined alerts
You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results:
- All default platform alerts are sent to a receiver owned by the team in charge of these alerts.
- All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts.
You can achieve this by using the `openshift_io_alert_source="platform"` label that is added by the Cluster Monitoring Operator to all platform alerts:
- Use the `openshift_io_alert_source="platform"` matcher to match default platform alerts.
- Use the `openshift_io_alert_source!="platform"` or `openshift_io_alert_source=""` matcher to match user-defined alerts (see the routing sketch after the following note).
This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts.
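As a minimal sketch, the routing section of the Alertmanager configuration could use these matchers as follows. The receiver names `platform-team` and `app-teams` are hypothetical placeholders for receivers that you define:
# Excerpt from alertmanager.yaml (illustrative sketch)
route:
  receiver: default
  routes:
  - matchers:
    - "openshift_io_alert_source=platform"       # default platform alerts
    receiver: platform-team                      # hypothetical receiver
  - matchers:
    - "openshift_io_alert_source!=platform"      # user-defined alerts
    receiver: app-teams                          # hypothetical receiver
receivers:
- name: default
- name: platform-team
  <receiver_configuration>
- name: app-teams
  <receiver_configuration>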