Chapter 4. Configuring user workload monitoring
4.1. Preparing to configure the user workload monitoring stack
This section explains which user-defined monitoring components can be configured, how to enable user workload monitoring, and how to prepare for configuring the user workload monitoring stack.
- Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in the Config map reference for the Cluster Monitoring Operator are supported for configuration.
- The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources.
4.1.1. Configurable monitoring components
This table shows the monitoring components you can configure and the keys used to specify the components in the `user-workload-monitoring-config` config map.

Component | `user-workload-monitoring-config` config map key
---|---
Prometheus Operator | `prometheusOperator`
Prometheus | `prometheus`
Alertmanager | `alertmanager`
Thanos Ruler | `thanosRuler`
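For orientation, the following sketch shows where these keys sit in the config map. It is an illustration of the structure only, not a recommended configuration; the field values are placeholders drawn from examples later in this chapter:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheusOperator: {}   # Prometheus Operator settings go here
    prometheus:
      logLevel: info         # example field, covered later in this chapter
    alertmanager:
      enabled: true          # example field, covered later in this chapter
    thanosRuler: {}
```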
Different configuration changes to the `ConfigMap` object result in different outcomes:

- The pods are not redeployed. Therefore, there is no service outage.
- The affected pods are redeployed:
  - For single-node clusters, this results in a temporary service outage.
  - For multi-node clusters, because of high availability, the affected pods are gradually rolled out and the monitoring stack remains available.
- Configuring and resizing a persistent volume always results in a service outage, regardless of high availability.
Each procedure that requires a change in the config map includes its expected outcome.
4.1.2. Enabling monitoring for user-defined projects
In OpenShift Container Platform, you can enable monitoring for user-defined projects in addition to the default platform monitoring. You can monitor your own projects in OpenShift Container Platform without the need for an additional monitoring solution. Using this feature centralizes monitoring for core platform components and user-defined projects.
Versions of Prometheus Operator installed using Operator Lifecycle Manager (OLM) are not compatible with user-defined monitoring. Therefore, custom Prometheus instances installed as a Prometheus custom resource (CR) managed by the OLM Prometheus Operator are not supported in OpenShift Container Platform.
4.1.2.1. Enabling monitoring for user-defined projects
Cluster administrators can enable monitoring for user-defined projects by setting the `enableUserWorkload: true` field in the cluster monitoring `ConfigMap` object.
You must remove any custom Prometheus instances before enabling monitoring for user-defined projects.
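To check whether such instances exist, you can list `Prometheus` custom resources across all namespaces; this sketch assumes the Prometheus CRD from an OLM-installed Prometheus Operator is present on the cluster:

```
$ oc get prometheus --all-namespaces
```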
You must have access to the cluster as a user with the `cluster-admin` cluster role to enable monitoring for user-defined projects in OpenShift Container Platform. Cluster administrators can then optionally grant users permission to configure the components that are responsible for monitoring user-defined projects.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role.
- You have installed the OpenShift CLI (`oc`).
- You have created the `cluster-monitoring-config` `ConfigMap` object.
- You have optionally created and configured the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project. You can add configuration options to this `ConfigMap` object for the components that monitor user-defined projects.

  Note: Every time you save configuration changes to the `user-workload-monitoring-config` `ConfigMap` object, the pods in the `openshift-user-workload-monitoring` project are redeployed. It might sometimes take a while for these components to redeploy.
Procedure

1. Edit the `cluster-monitoring-config` `ConfigMap` object:

   ```
   $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
   ```

2. Add `enableUserWorkload: true` under `data/config.yaml`:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: cluster-monitoring-config
     namespace: openshift-monitoring
   data:
     config.yaml: |
       enableUserWorkload: true 1
   ```

   - 1: When set to `true`, the `enableUserWorkload` parameter enables monitoring for user-defined projects in a cluster.

3. Save the file to apply the changes. Monitoring for user-defined projects is then enabled automatically.

   Note: If you enable monitoring for user-defined projects, the `user-workload-monitoring-config` `ConfigMap` object is created by default.

4. Verify that the `prometheus-operator`, `prometheus-user-workload`, and `thanos-ruler-user-workload` pods are running in the `openshift-user-workload-monitoring` project. It might take a short while for the pods to start:

   ```
   $ oc -n openshift-user-workload-monitoring get pod
   ```

   Example output

   ```
   NAME                                   READY   STATUS    RESTARTS   AGE
   prometheus-operator-6f7b748d5b-t7nbg   2/2     Running   0          3h
   prometheus-user-workload-0             4/4     Running   1          3h
   prometheus-user-workload-1             4/4     Running   1          3h
   thanos-ruler-user-workload-0           3/3     Running   0          3h
   thanos-ruler-user-workload-1           3/3     Running   0          3h
   ```
4.1.2.2. Granting users permission to configure monitoring for user-defined projects
As a cluster administrator, you can assign the `user-workload-monitoring-config-edit` role to a user. This grants the user permission to configure and manage monitoring for user-defined projects without the permission to configure and manage core OpenShift Container Platform monitoring components.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role.
- The user account that you are assigning the role to already exists.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Assign the `user-workload-monitoring-config-edit` role to a user in the `openshift-user-workload-monitoring` project:

   ```
   $ oc -n openshift-user-workload-monitoring adm policy add-role-to-user \
     user-workload-monitoring-config-edit <user> \
     --role-namespace openshift-user-workload-monitoring
   ```

2. Verify that the user is correctly assigned to the `user-workload-monitoring-config-edit` role by displaying the related role binding:

   ```
   $ oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring
   ```

   Example command

   ```
   $ oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring
   ```

   Example output

   ```
   Name:         user-workload-monitoring-config-edit
   Labels:       <none>
   Annotations:  <none>
   Role:
     Kind:  Role
     Name:  user-workload-monitoring-config-edit
   Subjects:
     Kind  Name   Namespace
     ----  ----   ---------
     User  user1 1
   ```

   - 1: In this example, `user1` is assigned to the `user-workload-monitoring-config-edit` role.
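As an optional extra check, you can impersonate the user and query the resulting permissions. A brief sketch, assuming the example user `user1`:

```
$ oc auth can-i update configmaps -n openshift-user-workload-monitoring --as user1
```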
4.1.3. Enabling alert routing for user-defined projects
In OpenShift Container Platform, an administrator can enable alert routing for user-defined projects. This process consists of the following steps:
1. Enable alert routing for user-defined projects, using one of the following options:
   - Use the default platform Alertmanager instance.
   - Use a separate Alertmanager instance only for user-defined projects.
2. Grant users permission to configure alert routing for user-defined projects.
After you complete these steps, developers and other users can configure custom alerts and alert routing for their user-defined projects.
4.1.3.1. Enabling the platform Alertmanager instance for user-defined alert routing
You can allow users to create user-defined alert routing configurations that use the main platform instance of Alertmanager.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Edit the `cluster-monitoring-config` `ConfigMap` object:

   ```
   $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
   ```

2. Add `enableUserAlertmanagerConfig: true` in the `alertmanagerMain` section under `data/config.yaml`:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: cluster-monitoring-config
     namespace: openshift-monitoring
   data:
     config.yaml: |
       # ...
       alertmanagerMain:
         enableUserAlertmanagerConfig: true 1
       # ...
   ```

   - 1: Set the `enableUserAlertmanagerConfig` value to `true` to allow users to create user-defined alert routing configurations that use the main platform instance of Alertmanager.

3. Save the file to apply the changes. The new configuration is applied automatically.
4.1.3.2. Enabling a separate Alertmanager instance for user-defined alert routing
In some clusters, you might want to deploy a dedicated Alertmanager instance for user-defined projects, which can help reduce the load on the default platform Alertmanager instance and can better separate user-defined alerts from default platform alerts. In these cases, you can optionally enable a separate instance of Alertmanager to send alerts for user-defined projects only.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role.
- You have enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Edit the `user-workload-monitoring-config` `ConfigMap` object:

   ```
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

2. Add `enabled: true` and `enableAlertmanagerConfig: true` in the `alertmanager` section under `data/config.yaml`:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       alertmanager:
         enabled: true 1
         enableAlertmanagerConfig: true 2
   ```

   - 1: Set the `enabled` value to `true` to enable a dedicated instance of Alertmanager for user-defined projects in a cluster. Set the value to `false` or omit the key entirely to disable the Alertmanager for user-defined projects. If you set this value to `false` or if the key is omitted, user-defined alerts are routed to the default platform Alertmanager instance.
   - 2: Set the `enableAlertmanagerConfig` value to `true` to enable users to define their own alert routing configurations with `AlertmanagerConfig` objects.

3. Save the file to apply the changes. The dedicated instance of Alertmanager for user-defined projects starts automatically.

Verification

Verify that the `user-workload` Alertmanager instance has started:

```
$ oc -n openshift-user-workload-monitoring get alertmanager
```

Example output

```
NAME            VERSION   REPLICAS   AGE
user-workload   0.24.0    2          100s
```
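You can also check that the corresponding Alertmanager pods are up; a simple, hedged follow-up that filters the pod list by name:

```
$ oc -n openshift-user-workload-monitoring get pods | grep alertmanager
```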
4.1.3.3. Granting users permission to configure alert routing for user-defined projects
You can grant users permission to configure alert routing for user-defined projects.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role.
- You have enabled monitoring for user-defined projects.
- The user account that you are assigning the role to already exists.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Assign the `alert-routing-edit` cluster role to a user in the user-defined project:

   ```
   $ oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1
   ```

   - 1: For `<namespace>`, substitute the namespace for the user-defined project, such as `ns1`. For `<user>`, substitute the username for the account to which you want to assign the role.
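Once granted the role, a user can create an `AlertmanagerConfig` object in the project. The following is a minimal sketch, assuming the example project `ns1`, a hypothetical webhook receiver URL, and the `monitoring.coreos.com/v1beta1` API version available in recent releases:

```yaml
apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: example-routing
  namespace: ns1
spec:
  route:
    receiver: webhook-receiver       # route all alerts from this project here
  receivers:
  - name: webhook-receiver
    webhookConfigs:
    - url: https://example.org/notify  # hypothetical endpoint
```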
4.1.4. Granting users permissions for monitoring for user-defined projects
As a cluster administrator, you can monitor all core OpenShift Container Platform and user-defined projects.
You can also grant developers and other users different permissions:
- Monitoring user-defined projects
- Configuring the components that monitor user-defined projects
- Configuring alert routing for user-defined projects
- Managing alerts and silences for user-defined projects
You can grant the permissions by assigning one of the following monitoring roles or cluster roles:

Role name | Description | Project
---|---|---
`user-workload-monitoring-config-edit` | Users with this role can edit the `user-workload-monitoring-config` `ConfigMap` object to configure the components that monitor user-defined projects. | `openshift-user-workload-monitoring`
`monitoring-alertmanager-api-reader` | Users with this role have read access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled. | `openshift-user-workload-monitoring`
`monitoring-alertmanager-api-writer` | Users with this role have read and write access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled. | `openshift-user-workload-monitoring`

Cluster role name | Description | Project
---|---|---
`monitoring-rules-view` | Users with this cluster role have read access to `PrometheusRule` custom resources for the project. | Can be bound with a `RoleBinding` to any user project.
`monitoring-rules-edit` | Users with this cluster role can create, modify, and delete `PrometheusRule` custom resources for the project. | Can be bound with a `RoleBinding` to any user project.
`monitoring-edit` | Users with this cluster role have the same privileges as users with the `monitoring-rules-edit` cluster role. Additionally, users can create, read, modify, and delete `ServiceMonitor` and `PodMonitor` resources for services or pods in the project. | Can be bound with a `RoleBinding` to any user project.
`alert-routing-edit` | Users with this cluster role can create, update, and delete `AlertmanagerConfig` custom resources for the project. | Can be bound with a `RoleBinding` to any user project.
4.1.4.1. Granting user permissions by using the web console
You can grant users permissions for the `openshift-monitoring` project or their own projects by using the OpenShift Container Platform web console.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role.
- The user account that you are assigning the role to already exists.
Procedure

1. In the Administrator perspective of the OpenShift Container Platform web console, go to User Management → RoleBindings → Create binding.
2. In the Binding Type section, select the Namespace Role Binding type.
3. In the Name field, enter a name for the role binding.
4. In the Namespace field, select the project where you want to grant the access.

   Important: The monitoring role or cluster role permissions that you grant to a user by using this procedure apply only to the project that you select in the Namespace field.

5. Select a monitoring role or cluster role from the Role Name list.
6. In the Subject section, select User.
7. In the Subject Name field, enter the name of the user.
8. Select Create to apply the role binding.
4.1.4.2. Granting user permissions by using the CLI
You can grant users permissions for the `openshift-monitoring` project or their own projects by using the OpenShift CLI (`oc`).
Whichever role or cluster role you choose, you must bind it against a specific project as a cluster administrator.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role.
- The user account that you are assigning the role to already exists.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. To assign a monitoring role to a user for a project, enter the following command:

   ```
   $ oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1
   ```

   - 1: Substitute `<role>` with the monitoring role you want to assign, `<user>` with the user to whom you want to assign the role, and `<namespace>` with the project where you want to grant the access.

2. To assign a monitoring cluster role to a user for a project, enter the following command:

   ```
   $ oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1
   ```

   - 1: Substitute `<cluster-role>` with the monitoring cluster role you want to assign, `<user>` with the user to whom you want to assign the cluster role, and `<namespace>` with the project where you want to grant the access.
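For example, to allow a hypothetical user `user1` to manage alerting rules in a project named `ns1`, you could bind the `monitoring-rules-edit` cluster role:

```
$ oc adm policy add-cluster-role-to-user monitoring-rules-edit user1 -n ns1
```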
4.1.5. Excluding a user-defined project from monitoring
Individual user-defined projects can be excluded from user workload monitoring. To do so, add the `openshift.io/user-monitoring` label to the project's namespace with a value of `false`.
Procedure

1. Add the label to the project namespace:

   ```
   $ oc label namespace my-project 'openshift.io/user-monitoring=false'
   ```

2. Optional: To re-enable monitoring, remove the label from the namespace:

   ```
   $ oc label namespace my-project 'openshift.io/user-monitoring-'
   ```

Note: If there were any active monitoring targets for the project, it might take a few minutes for Prometheus to stop scraping them after adding the label.
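To confirm the current state of the label, you can display the namespace labels; this assumes the example project name `my-project`:

```
$ oc get namespace my-project --show-labels
```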
4.1.6. Disabling monitoring for user-defined projects
After enabling monitoring for user-defined projects, you can disable it again by setting `enableUserWorkload: false` in the cluster monitoring `ConfigMap` object.

Alternatively, you can remove `enableUserWorkload: true` to disable monitoring for user-defined projects.
Procedure

1. Edit the `cluster-monitoring-config` `ConfigMap` object:

   ```
   $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
   ```

2. Set `enableUserWorkload:` to `false` under `data/config.yaml`:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: cluster-monitoring-config
     namespace: openshift-monitoring
   data:
     config.yaml: |
       enableUserWorkload: false
   ```

3. Save the file to apply the changes. Monitoring for user-defined projects is then disabled automatically.
4. Check that the `prometheus-operator`, `prometheus-user-workload`, and `thanos-ruler-user-workload` pods are terminated in the `openshift-user-workload-monitoring` project. This might take a short while:

   ```
   $ oc -n openshift-user-workload-monitoring get pod
   ```

   Example output

   ```
   No resources found in openshift-user-workload-monitoring project.
   ```
The `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project is not automatically deleted when monitoring for user-defined projects is disabled. This preserves any custom configurations that you might have created in the `ConfigMap` object.
4.2. Configuring performance and scalability for user workload monitoring
You can configure the monitoring stack to optimize the performance and scale of your clusters. The following documentation provides information about how to distribute the monitoring components and control the impact of the monitoring stack on CPU and memory resources.
4.2.1. Controlling the placement and distribution of monitoring components
You can move the monitoring stack components to specific nodes:

- Use the `nodeSelector` constraint with labeled nodes to move any of the monitoring stack components to specific nodes.
- Assign tolerations to enable moving components to tainted nodes.
By doing so, you control the placement and distribution of the monitoring components across a cluster.
By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies.
4.2.1.1. Moving monitoring components to different nodes
You can move any of the components that monitor workloads for user-defined projects to specific worker nodes.
It is not permitted to move components to control plane or infrastructure nodes.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components:

   ```
   $ oc label nodes <node_name> <node_label> 1
   ```

   - 1: Replace `<node_name>` with the name of the node where you want to add the label. Replace `<node_label>` with the name of the label that you want to add.

2. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project:

   ```
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

3. Specify the node labels for the `nodeSelector` constraint for the component under `data/config.yaml` (a concrete example follows this procedure):

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       # ...
       <component>: 1
         nodeSelector:
           <node_label_1> 2
           <node_label_2> 3
       # ...
   ```

   - 1: Substitute `<component>` with the appropriate monitoring stack component name.
   - 2: Substitute `<node_label_1>` with the label you added to the node.
   - 3: Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels.

   Note: If monitoring components remain in a `Pending` state after configuring the `nodeSelector` constraint, check the pod events for errors relating to taints and tolerations.

4. Save the file to apply the changes. The components specified in the new configuration are automatically moved to the new nodes, and the pods affected by the new configuration are redeployed.
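As a concrete illustration of the template above, the following sketch pins Thanos Ruler for user-defined projects to nodes labeled `monitoring: thanos`; the label is a hypothetical example, not a required value:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      nodeSelector:
        monitoring: thanos  # schedule only on nodes that carry this label
```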
4.2.1.2. Assigning tolerations to monitoring components
You can assign tolerations to the components that monitor user-defined projects, to enable moving them to tainted worker nodes. Scheduling is not permitted on control plane or infrastructure nodes.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:

   ```
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

2. Specify `tolerations` for the component:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       <component>:
         tolerations:
           <toleration_specification>
   ```

   Substitute `<component>` and `<toleration_specification>` accordingly.

   For example, `oc adm taint nodes node1 key1=value1:NoSchedule` adds a taint to `node1` with the key `key1` and the value `value1`. This prevents monitoring components from deploying pods on `node1` unless a toleration is configured for that taint. The following example configures the `thanosRuler` component to tolerate the example taint:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       thanosRuler:
         tolerations:
         - key: "key1"
           operator: "Equal"
           value: "value1"
           effect: "NoSchedule"
   ```

3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
Additional resources
- Enabling monitoring for user-defined projects
- Controlling pod placement using node taints
- Taints and Tolerations (Kubernetes documentation)
4.2.2. Managing CPU and memory resources for monitoring components
You can ensure that the containers that run monitoring components have enough CPU and memory resources by specifying values for resource limits and requests for those components.
You can configure these limits and requests for monitoring components that monitor user-defined projects in the `openshift-user-workload-monitoring` namespace.
4.2.2.1. Specifying limits and requests
To configure CPU and memory resources, specify values for resource limits and requests in the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` namespace.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:

   ```
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

2. Add values to define resource limits and requests for each component you want to configure.

   Important: Ensure that the value set for a limit is always higher than the value set for a request. Otherwise, an error will occur, and the container will not run.

   Example of setting resource limits and requests

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       alertmanager:
         resources:
           limits:
             cpu: 500m
             memory: 1Gi
           requests:
             cpu: 200m
             memory: 500Mi
       prometheus:
         resources:
           limits:
             cpu: 500m
             memory: 3Gi
           requests:
             cpu: 200m
             memory: 500Mi
       thanosRuler:
         resources:
           limits:
             cpu: 500m
             memory: 1Gi
           requests:
             cpu: 200m
             memory: 500Mi
   ```

3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
Additional resources
- About specifying limits and requests for monitoring components
- Kubernetes requests and limits documentation (Kubernetes documentation)
4.2.3. Controlling the impact of unbound metrics attributes in user-defined projects
Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects:
- Limit the number of samples that can be accepted per target scrape in user-defined projects
- Limit the number of scraped labels, the length of label names, and the length of label values
- Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped
Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations.
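Before setting limits, it can help to see which metrics contribute the most series. A hedged PromQL example you can run in the console query UI, using only standard functions, that lists the ten metric names with the highest series counts:

```
topk(10, count by (__name__) ({__name__=~".+"}))
```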
4.2.3.1. Setting scrape sample and label limits for user-defined projects
You can limit the number of samples that can be accepted per target scrape in user-defined projects. You can also limit the number of scraped labels, the length of label names, and the length of label values.
If you set sample or label limits, no further sample data is ingested for that target scrape after the limit is reached.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project:

   ```
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

2. Add the `enforcedSampleLimit` configuration to `data/config.yaml` to limit the number of samples that can be accepted per target scrape in user-defined projects:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       prometheus:
         enforcedSampleLimit: 50000 1
   ```

   - 1: A value is required if this parameter is specified. This `enforcedSampleLimit` example limits the number of samples that can be accepted per target scrape in user-defined projects to 50,000.

3. Add the `enforcedLabelLimit`, `enforcedLabelNameLengthLimit`, and `enforcedLabelValueLengthLimit` configurations to `data/config.yaml` to limit the number of scraped labels, the length of label names, and the length of label values in user-defined projects:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       prometheus:
         enforcedLabelLimit: 500 1
         enforcedLabelNameLengthLimit: 50 2
         enforcedLabelValueLengthLimit: 600 3
   ```

   - 1: Specifies the maximum number of labels per scrape. The default value is `0`, which specifies no limit.
   - 2: Specifies the maximum length in characters of a label name. The default value is `0`, which specifies no limit.
   - 3: Specifies the maximum length in characters of a label value. The default value is `0`, which specifies no limit.

4. Save the file to apply the changes. The limits are applied automatically.
4.2.3.2. Creating scrape sample alerts
You can create alerts that notify you when:

- The target cannot be scraped or is not available for the specified `for` duration
- A scrape sample threshold is reached or is exceeded for the specified `for` duration
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using `enforcedSampleLimit`.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called `monitoring-stack-alerts.yaml`:

   ```yaml
   apiVersion: monitoring.coreos.com/v1
   kind: PrometheusRule
   metadata:
     labels:
       prometheus: k8s
       role: alert-rules
     name: monitoring-stack-alerts 1
     namespace: ns1 2
   spec:
     groups:
     - name: general.rules
       rules:
       - alert: TargetDown 3
         annotations:
           message: '{{ printf "%.4g" $value }}% of the {{ $labels.job }}/{{ $labels.service }} targets in {{ $labels.namespace }} namespace are down.' 4
         expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10
         for: 10m 5
         labels:
           severity: warning 6
       - alert: ApproachingEnforcedSamplesLimit 7
         annotations:
           message: '{{ $labels.container }} container of the {{ $labels.pod }} pod in the {{ $labels.namespace }} namespace consumes {{ $value | humanizePercentage }} of the samples limit budget.' 8
         expr: scrape_samples_scraped/50000 > 0.8 9
         for: 10m 10
         labels:
           severity: warning 11
   ```

   - 1: Defines the name of the alerting rule.
   - 2: Specifies the user-defined project where the alerting rule is deployed.
   - 3: The `TargetDown` alert fires if the target cannot be scraped or is not available for the `for` duration.
   - 4: The message that is output when the `TargetDown` alert fires.
   - 5: The conditions for the `TargetDown` alert must be true for this duration before the alert is fired.
   - 6: Defines the severity for the `TargetDown` alert.
   - 7: The `ApproachingEnforcedSamplesLimit` alert fires when the defined scrape sample threshold is reached or exceeded for the specified `for` duration.
   - 8: The message that is output when the `ApproachingEnforcedSamplesLimit` alert fires.
   - 9: The threshold for the `ApproachingEnforcedSamplesLimit` alert. In this example, the alert fires when the number of samples per target scrape has exceeded 80% of the enforced sample limit of `50000`. The `for` duration must also have passed before the alert fires. The `<number>` in the expression `scrape_samples_scraped/<number> > <threshold>` must match the `enforcedSampleLimit` value defined in the `user-workload-monitoring-config` `ConfigMap` object.
   - 10: The conditions for the `ApproachingEnforcedSamplesLimit` alert must be true for this duration before the alert is fired.
   - 11: Defines the severity for the `ApproachingEnforcedSamplesLimit` alert.

2. Apply the configuration to the user-defined project:

   ```
   $ oc apply -f monitoring-stack-alerts.yaml
   ```
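To gauge how close existing targets already are to the limit before relying on the alert, you can inspect current per-target sample counts with a PromQL query such as the following, then compare the results against your `enforcedSampleLimit` value:

```
max by (namespace, job) (scrape_samples_scraped)
```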
4.2.4. Configuring pod topology spread constraints
You can configure pod topology spread constraints for all the pods for user-defined monitoring to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.
You can configure pod topology spread constraints for monitoring pods by using the `user-workload-monitoring-config` config map.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:

   ```
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

2. Add the following settings under the `data/config.yaml` field to configure pod topology spread constraints:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       <component>: 1
         topologySpreadConstraints:
         - maxSkew: <n> 2
           topologyKey: <key> 3
           whenUnsatisfiable: <value> 4
           labelSelector: 5
             <match_option>
   ```

   - 1: Specify a name of the component for which you want to set up pod topology spread constraints.
   - 2: Specify a numeric value for `maxSkew`, which defines the degree to which pods are allowed to be unevenly distributed.
   - 3: Specify a key of node labels for `topologyKey`. Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler tries to put a balanced number of pods into each domain.
   - 4: Specify a value for `whenUnsatisfiable`. Available options are `DoNotSchedule` and `ScheduleAnyway`. Specify `DoNotSchedule` if you want the `maxSkew` value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify `ScheduleAnyway` if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
   - 5: Specify `labelSelector` to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.

   Example configuration for Thanos Ruler

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       thanosRuler:
         topologySpreadConstraints:
         - maxSkew: 1
           topologyKey: monitoring
           whenUnsatisfiable: ScheduleAnyway
           labelSelector:
             matchLabels:
               app.kubernetes.io/name: thanos-ruler
   ```

3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
4.3. Storing and recording data for user workload monitoring
Store and record your metrics and alerting data, configure logs to specify which activities are recorded, control how long Prometheus retains stored data, and set the maximum amount of disk space for the data. These actions help you protect your data and use them for troubleshooting.
4.3.1. Configuring persistent storage
Run cluster monitoring with persistent storage to gain the following benefits:
- Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated.
- Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted.
For production environments, it is highly recommended to configure persistent storage.
In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability.
4.3.1.1. Persistent storage prerequisites

- Dedicate sufficient persistent storage to ensure that the disk does not become full.
- Use `Filesystem` as the storage type value for the `volumeMode` parameter when you configure the persistent volume.

  Important:
  - Do not use a raw block volume, which is described with `volumeMode: Block` in the `PersistentVolume` resource. Prometheus cannot use raw block volumes.
  - Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant.
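To review the volume mode of existing persistent volumes before using them for monitoring, you can print it with a custom column; a small sketch using standard `oc` output options:

```
$ oc get pv -o custom-columns=NAME:.metadata.name,VOLUMEMODE:.spec.volumeMode
```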
4.3.1.2. Configuring a persistent volume claim
To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC).
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:

   ```
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

2. Add your PVC configuration for the component under `data/config.yaml`:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       <component>: 1
         volumeClaimTemplate:
           spec:
             storageClassName: <storage_class> 2
             resources:
               requests:
                 storage: <amount_of_storage> 3
   ```

   - 1: Specify the monitoring component for which you want to configure the PVC.
   - 2: Specify the storage class for the PVC.
   - 3: Specify the amount of storage for the PVC.

   The following example configures a PVC that claims persistent storage for Thanos Ruler:

   Example PVC configuration

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       thanosRuler:
         volumeClaimTemplate:
           spec:
             storageClassName: my-storage-class
             resources:
               requests:
                 storage: 10Gi
   ```

   Note: Storage requirements for the `thanosRuler` component depend on the number of rules that are evaluated and how many samples each rule generates.

3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied.

   Warning: When you update the config map with a PVC configuration, the affected `StatefulSet` object is recreated, resulting in a temporary service outage.
Additional resources
- Understanding persistent storage
- PersistentVolumeClaims (Kubernetes documentation)
4.3.1.3. Resizing a persistent volume
You can resize a persistent volume (PV) for the instances of Prometheus, Thanos Ruler, and Alertmanager. You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured.
You can only expand the size of the PVC. Shrinking the storage size is not possible.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have configured at least one PVC for components that monitor user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes.
2. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:

   ```
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

3. Add a new storage size for the PVC configuration for the component under `data/config.yaml`:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       <component>: 1
         volumeClaimTemplate:
           spec:
             resources:
               requests:
                 storage: <amount_of_storage> 2
   ```

   - 1: Specify the monitoring component for which you want to change the storage size.
   - 2: Specify the new size for the storage volume. It must be greater than the previous value.

   The following example sets the new PVC request to 20 gigabytes for Thanos Ruler:

   Example storage configuration for `thanosRuler`

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       thanosRuler:
         volumeClaimTemplate:
           spec:
             resources:
               requests:
                 storage: 20Gi
   ```

   Note: Storage requirements for the `thanosRuler` component depend on the number of rules that are evaluated and how many samples each rule generates.

4. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

   Warning: When you update the config map with a new storage size, the affected `StatefulSet` object is recreated, resulting in a temporary service outage.
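For the manual expansion in step 1, one common approach is to patch the PVC directly. The following is a hedged sketch with a generic `<pvc_name>` placeholder; whether in-place expansion succeeds depends on the storage class supporting volume expansion:

```
$ oc -n openshift-user-workload-monitoring patch pvc <pvc_name> \
  --type merge --patch '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```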
4.3.2. Modifying retention time and size for Prometheus metrics data
By default, Prometheus retains metrics data for 24 hours for monitoring for user-defined projects. You can modify the retention time for the Prometheus instance to change when the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses.
Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the `retentionSize` limit. In such cases, the `KubePersistentVolumeFillingUp` alert fires until the space on a PV is lower than the `retentionSize` limit.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:

   ```
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

2. Add the retention time and size configuration under `data/config.yaml`:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       prometheus:
         retention: <time_specification> 1
         retentionSize: <size_specification> 2
   ```

   - 1: The retention time: a number directly followed by `ms` (milliseconds), `s` (seconds), `m` (minutes), `h` (hours), `d` (days), `w` (weeks), or `y` (years). You can also combine time values for specific times, such as `1h30m15s`.
   - 2: The retention size: a number directly followed by `B` (bytes), `KB` (kilobytes), `MB` (megabytes), `GB` (gigabytes), `TB` (terabytes), `PB` (petabytes), or `EB` (exabytes).

   The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance:

   Example of setting retention time for Prometheus

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       prometheus:
         retention: 24h
         retentionSize: 10GB
   ```

3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
4.3.2.1. Modifying the retention time for Thanos Ruler metrics data
By default, for user-defined projects, Thanos Ruler automatically retains metrics data for 24 hours. You can modify the retention time to change how long this data is retained by specifying a time value in the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` namespace.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project:

   ```
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

2. Add the retention time configuration under `data/config.yaml`:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       thanosRuler:
         retention: <time_specification> 1
   ```

   - 1: Specify the retention time in the following format: a number directly followed by `ms` (milliseconds), `s` (seconds), `m` (minutes), `h` (hours), `d` (days), `w` (weeks), or `y` (years). You can also combine time values for specific times, such as `1h30m15s`. The default is `24h`.

   The following example sets the retention time to 10 days for Thanos Ruler data:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       thanosRuler:
         retention: 10d
   ```

3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
4.3.3. Setting log levels for monitoring components
You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, and Thanos Ruler.
The following log levels can be applied to the relevant component in the `user-workload-monitoring-config` `ConfigMap` object:

- `debug`. Log debug, informational, warning, and error messages.
- `info`. Log informational, warning, and error messages.
- `warn`. Log warning and error messages only.
- `error`. Log error messages only.

The default log level is `info`.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:

   ```
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

2. Add `logLevel: <log_level>` for a component under `data/config.yaml`:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       <component>: 1
         logLevel: <log_level> 2
   ```

   - 1: The monitoring stack component for which you are setting a log level, such as `prometheus`, `alertmanager`, `prometheusOperator`, or `thanosRuler`.
   - 2: The log level to set for the component: `debug`, `info`, `warn`, or `error`. The default is `info`.

3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
4. Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level for the `prometheus-operator` deployment:

   ```
   $ oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level"
   ```

   Example output

   ```
   - --log-level=debug
   ```

5. Check that the pods for the component are running. The following example lists the status of pods:

   ```
   $ oc -n openshift-user-workload-monitoring get pods
   ```

   Note: If an unrecognized `logLevel` value is included in the `ConfigMap` object, the pods for the component might not restart successfully.
4.3.4. Enabling the query log file for Prometheus
You can configure Prometheus to write all queries that have been run by the engine to a log file.
Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the `ConfigMap` object to enable the feature.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure

1. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:

   ```
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

2. Add the `queryLogFile` parameter for Prometheus under `data/config.yaml`:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       prometheus:
         queryLogFile: <path> 1
   ```

   - 1: Add the full path to the file in which queries will be logged.

3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
4. Verify that the pods for the component are running. The following sample command lists the status of pods:

   ```
   $ oc -n openshift-user-workload-monitoring get pods
   ```

   Example output

   ```
   ...
   prometheus-operator-776fcbbd56-2nbfm   2/2     Running   0          132m
   prometheus-user-workload-0             5/5     Running   1          132m
   prometheus-user-workload-1             5/5     Running   1          132m
   thanos-ruler-user-workload-0           3/3     Running   0          132m
   thanos-ruler-user-workload-1           3/3     Running   0          132m
   ...
   ```

5. Read the query log:

   ```
   $ oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>
   ```

   Important: Revert the setting in the config map after you have examined the logged query information.
4.4. Configuring metrics for user workload monitoring
Configure the collection of metrics to monitor how cluster components and your own workloads are performing.
You can send ingested metrics to remote systems for long-term storage and add cluster ID labels to the metrics to identify the data coming from different clusters.
4.4.1. Configuring remote write storage
You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics.
Prerequisites

- You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
- You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature.

  Important: Red Hat only provides information for configuring remote write senders and does not offer guidance on configuring receiver endpoints. Customers are responsible for setting up their own endpoints that are remote-write compatible. Issues with endpoint receiver configurations are not included in Red Hat production support.

- You have set up authentication credentials in a `Secret` object for the remote write endpoint. You must create the secret in the `openshift-user-workload-monitoring` namespace.

  Warning: To reduce security risks, use HTTPS and authentication to send metrics to an endpoint.
Procedure

1. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:

   ```
   $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
   ```

2. Add a `remoteWrite:` section under `data/config.yaml/prometheus`, as shown in the following example:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       prometheus:
         remoteWrite:
         - url: "https://remote-write-endpoint.example.com" 1
           <endpoint_authentication_credentials> 2
   ```

   - 1: The URL of the remote write endpoint.
   - 2: The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an `Authorization` request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings for sample configurations of supported authentication methods.

3. Add write relabel configuration values after the authentication credentials:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       prometheus:
         remoteWrite:
         - url: "https://remote-write-endpoint.example.com"
           <endpoint_authentication_credentials>
           writeRelabelConfigs:
           - <your_write_relabel_configs> 1
   ```

   - 1: Add configuration for metrics that you want to send to the remote endpoint.

   Example of forwarding a single metric called `my_metric`

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       prometheus:
         remoteWrite:
         - url: "https://remote-write-endpoint.example.com"
           writeRelabelConfigs:
           - sourceLabels: [__name__]
             regex: 'my_metric'
             action: keep
   ```

   Example of forwarding metrics called `my_metric_1` and `my_metric_2` in the `my_namespace` namespace

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: user-workload-monitoring-config
     namespace: openshift-user-workload-monitoring
   data:
     config.yaml: |
       prometheus:
         remoteWrite:
         - url: "https://remote-write-endpoint.example.com"
           writeRelabelConfigs:
           - sourceLabels: [__name__,namespace]
             regex: '(my_metric_1|my_metric_2);my_namespace'
             action: keep
   ```
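If you need the inverse behavior, excluding a noisy metric while forwarding everything else, an `action: drop` rule works the same way. A brief sketch with a hypothetical metric name `my_noisy_metric`:

```yaml
writeRelabelConfigs:
- sourceLabels: [__name__]
  regex: 'my_noisy_metric'
  action: drop  # forward everything except this metric
```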
4. Save the file to apply the changes. The new configuration is applied automatically.
4.4.1.1. Supported remote write authentication settings
You can use different methods to authenticate with a remote write endpoint. Currently supported authentication methods are AWS Signature Version 4, basic authentication, authorization, OAuth 2.0, and TLS client. The following table provides details about supported authentication methods for use with remote write.
Authentication method | Config map field | Description
---|---|---
AWS Signature Version 4 | `sigv4` | This method uses AWS Signature Version 4 authentication to sign requests. You cannot use this method simultaneously with authorization, OAuth 2.0, or Basic authentication.
Basic authentication | `basicAuth` | Basic authentication sets the authorization header on every remote write request with the configured username and password.
authorization | `authorization` | Authorization sets the `Authorization` header on every remote write request with the configured token.
OAuth 2.0 | `oauth2` | An OAuth 2.0 configuration uses the client credentials grant type. Prometheus fetches an access token from `tokenUrl` with the specified client ID and client secret to access the remote write endpoint.
TLS client | `tlsConfig` | A TLS client configuration specifies the CA certificate, the client certificate, and the client key file information used to authenticate with the remote write endpoint server using TLS. The sample configuration assumes that you have already created a CA certificate file, a client certificate file, and a client key file.
4.4.1.2. Example remote write authentication settings
The following samples show different authentication settings you can use to connect to a remote write endpoint. Each sample also shows how to configure a corresponding `Secret` object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with monitoring for user-defined projects in the `openshift-user-workload-monitoring` namespace.
4.4.1.2.1. Sample YAML for AWS Signature Version 4 authentication
The following shows the settings for a `sigv4` secret named `sigv4-credentials` in the `openshift-user-workload-monitoring` namespace.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sigv4-credentials
  namespace: openshift-user-workload-monitoring
stringData:
  accessKey: <AWS_access_key> 1
  secretKey: <AWS_secret_key> 2
type: Opaque
```

- 1: The AWS API access key.
- 2: The AWS API secret key.

The following shows sample AWS Signature Version 4 remote write authentication settings that use a `Secret` object named `sigv4-credentials` in the `openshift-user-workload-monitoring` namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://authorization.example.com/api/write"
        sigv4:
          region: <AWS_region> 1
          accessKey:
            name: sigv4-credentials 2
            key: accessKey 3
          secretKey:
            name: sigv4-credentials 4
            key: secretKey 5
          profile: <AWS_profile_name> 6
          roleArn: <AWS_role_arn> 7
```

- 1: The AWS region.
- 2, 4: The name of the `Secret` object containing the AWS API access credentials.
- 3: The key that contains the AWS API access key in the specified `Secret` object.
- 5: The key that contains the AWS API secret key in the specified `Secret` object.
- 6: The name of the AWS profile that is being used to authenticate.
- 7: The unique identifier for the Amazon Resource Name (ARN) assigned to your role.
4.4.1.2.2. Sample YAML for Basic authentication
The following shows sample Basic authentication settings for a `Secret` object named `rw-basic-auth` in the `openshift-user-workload-monitoring` namespace:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: rw-basic-auth
  namespace: openshift-user-workload-monitoring
stringData:
  user: <basic_username> 1
  password: <basic_password> 2
type: Opaque
```

- 1: The username.
- 2: The password.

The following sample shows a `basicAuth` remote write configuration that uses a `Secret` object named `rw-basic-auth` in the `openshift-user-workload-monitoring` namespace. It assumes that you have already set up authentication credentials for the endpoint.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://basicauth.example.com/api/write"
        basicAuth:
          username:
            name: rw-basic-auth 1
            key: user 2
          password:
            name: rw-basic-auth 3
            key: password 4
```

- 1, 3: The name of the `Secret` object that contains the authentication credentials.
- 2: The key that contains the username in the specified `Secret` object.
- 4: The key that contains the password in the specified `Secret` object.
4.4.1.2.3. Sample YAML for authentication with a bearer token using a `Secret` object

The following shows bearer token settings for a `Secret` object named `rw-bearer-auth` in the `openshift-user-workload-monitoring` namespace:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: rw-bearer-auth
  namespace: openshift-user-workload-monitoring
stringData:
  token: <authentication_token> 1
type: Opaque
```

- 1: The authentication token.

The following shows sample bearer token config map settings that use a `Secret` object named `rw-bearer-auth` in the `openshift-user-workload-monitoring` namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
    prometheus:
      remoteWrite:
      - url: "https://authorization.example.com/api/write"
        authorization:
          type: Bearer 1
          credentials:
            name: rw-bearer-auth 2
            key: token 3
```

- 1: The authentication type of the request.
- 2: The name of the `Secret` object that contains the authentication credentials.
- 3: The key that contains the authentication token in the specified `Secret` object.
4.4.1.2.4. Sample YAML for OAuth 2.0 authentication
The following shows sample OAuth 2.0 settings for a Secret
object named oauth2-credentials
in the openshift-user-workload-monitoring
namespace:
apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-user-workload-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque
The following shows an `oauth2` remote write authentication sample configuration that uses a `Secret` object named `oauth2-credentials` in the `openshift-user-workload-monitoring` namespace:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://test.example.com/api/write"
        oauth2:
          clientId:
            secret:
              name: oauth2-credentials 1
              key: id 2
          clientSecret:
            name: oauth2-credentials 3
            key: secret 4
          tokenUrl: https://example.com/oauth2/token 5
          scopes: 6
          - <scope_1>
          - <scope_2>
          endpointParams: 7
            param1: <parameter_1>
            param2: <parameter_2>
- 1, 3: The name of the corresponding `Secret` object. Note that `clientId` can alternatively refer to a `ConfigMap` object, although `clientSecret` must refer to a `Secret` object.
- 2, 4: The key that contains the OAuth 2.0 credentials in the specified `Secret` object.
- 5: The URL used to fetch a token with the specified `clientId` and `clientSecret`.
- 6: The OAuth 2.0 scopes for the authorization request. These scopes limit what data the tokens can access.
- 7: The OAuth 2.0 authorization request parameters required for the authorization server.
4.4.1.2.5. Sample YAML for TLS client authentication
The following shows sample TLS client settings for a `tls` `Secret` object named `mtls-bundle` in the `openshift-user-workload-monitoring` namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mtls-bundle
  namespace: openshift-user-workload-monitoring
data:
  ca.crt: <ca_cert> 1
  client.crt: <client_cert> 2
  client.key: <client_key> 3
type: tls

- 1: The base64-encoded CA certificate for the endpoint.
- 2: The base64-encoded client certificate.
- 3: The base64-encoded client key.
The following sample shows a `tlsConfig` remote write authentication configuration that uses a TLS `Secret` object named `mtls-bundle`:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        tlsConfig:
          ca:
            secret:
              name: mtls-bundle 1
              key: ca.crt 2
          cert:
            secret:
              name: mtls-bundle 3
              key: client.crt 4
          keySecret:
            name: mtls-bundle 5
            key: client.key 6
- 1, 3, 5: The name of the corresponding `Secret` object that contains the TLS authentication credentials. Note that `ca` and `cert` can alternatively refer to a `ConfigMap` object, though `keySecret` must refer to a `Secret` object.
- 2: The key in the specified `Secret` object that contains the CA certificate for the endpoint.
- 4: The key in the specified `Secret` object that contains the client certificate for the endpoint.
- 6: The key in the specified `Secret` object that contains the client key secret.
4.4.1.3. Example remote write queue configuration
You can use the `queueConfig` object for remote write to tune the remote write queue parameters. The following example shows the queue parameters with their default values for monitoring for user-defined projects in the `openshift-user-workload-monitoring` namespace.
Example configuration of remote write parameters with default values
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        <endpoint_authentication_credentials>
        queueConfig:
          capacity: 10000 1
          minShards: 1 2
          maxShards: 50 3
          maxSamplesPerSend: 2000 4
          batchSendDeadline: 5s 5
          minBackoff: 30ms 6
          maxBackoff: 5s 7
          retryOnRateLimit: false 8
          sampleAgeLimit: 0s 9
- 1: The number of samples to buffer per shard before they are dropped from the queue.
- 2: The minimum number of shards.
- 3: The maximum number of shards.
- 4: The maximum number of samples per send.
- 5: The maximum time for a sample to wait in the buffer.
- 6: The initial time to wait before retrying a failed request. The wait time doubles for every retry, up to the `maxBackoff` time.
- 7: The maximum time to wait before retrying a failed request.
- 8: Set this parameter to `true` to retry a request after receiving a 429 status code from the remote write storage.
- 9: Samples older than the `sampleAgeLimit` limit are dropped from the queue. If the value is undefined or set to `0s`, the parameter is ignored.
Additional resources
- Prometheus REST API reference for remote write
- Setting up remote write compatible endpoints (Prometheus documentation)
- Tuning remote write settings (Prometheus documentation)
- Understanding secrets
4.4.2. Creating cluster ID labels for metrics
You can create cluster ID labels for metrics by adding the `write_relabel` settings for remote write storage in the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` namespace.
When Prometheus scrapes user workload targets that expose a `namespace` label, the system stores this label as `exported_namespace`. This behavior ensures that the final namespace label value is equal to the namespace of the target pod. You cannot override this default configuration by setting the value of the `honorLabels` field to `true` for `PodMonitor` or `ServiceMonitor` objects.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
- You have configured remote write storage.
Procedure
Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:

$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
In the `writeRelabelConfigs:` section under `data/config.yaml/prometheus/remoteWrite`, add cluster ID relabel configuration values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        <endpoint_authentication_credentials>
        writeRelabelConfigs: 1
        - <relabel_config> 2

- 1: Add a list of write relabel configurations for metrics that you want to send to the remote endpoint.
- 2: Substitute the label configuration for the metrics that you want to send to the remote endpoint.
The following sample shows how to forward a metric with the cluster ID label `cluster_id`:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        writeRelabelConfigs:
        - sourceLabels:
          - __tmp_openshift_cluster_id__ 1
          targetLabel: cluster_id 2
          action: replace 3
- 1: The system initially applies a temporary cluster ID source label named `__tmp_openshift_cluster_id__`. This temporary label gets replaced by the cluster ID label name that you specify.
- 2: Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use `__tmp_openshift_cluster_id__`. The final relabeling step removes labels that use this name.
- 3: The `replace` write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified.
- Save the file to apply the changes. The new configuration is applied automatically.
4.4.3. Setting up metrics collection for user-defined projects
You can create a `ServiceMonitor` resource to scrape metrics from a service endpoint in a user-defined project. This assumes that your application uses a Prometheus client library to expose metrics to the `/metrics` canonical name.
This section describes how to deploy a sample service in a user-defined project and then create a `ServiceMonitor` resource that defines how that service should be monitored.
4.4.3.1. Deploying a sample service
To test monitoring of a service in a user-defined project, you can deploy a sample service.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with administrative permissions for the namespace.
Procedure
- Create a YAML file for the service configuration. In this example, it is called `prometheus-example-app.yaml`. Add the following deployment and service configuration details to the file:
apiVersion: v1
kind: Namespace
metadata:
  name: ns1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: prometheus-example-app
  name: prometheus-example-app
  namespace: ns1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-example-app
  template:
    metadata:
      labels:
        app: prometheus-example-app
    spec:
      containers:
      - image: ghcr.io/rhobs/prometheus-example-app:0.4.2
        imagePullPolicy: IfNotPresent
        name: prometheus-example-app
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: prometheus-example-app
  name: prometheus-example-app
  namespace: ns1
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    name: web
  selector:
    app: prometheus-example-app
  type: ClusterIP
This configuration deploys a service named `prometheus-example-app` in the user-defined `ns1` project. This service exposes the custom `version` metric.

Apply the configuration to the cluster:
$ oc apply -f prometheus-example-app.yaml
It takes some time to deploy the service.
You can check that the pod is running:
$ oc -n ns1 get pod
Example output
NAME                                      READY   STATUS    RESTARTS   AGE
prometheus-example-app-7857545cb7-sbgwq   1/1     Running   0          81m
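Optionally, you can also confirm that the service exposes metrics before you configure monitoring. The following quick check is not part of the official procedure; it assumes that local port 8080 is free. Run the port-forward command in one terminal and the `curl` command in another:

$ oc -n ns1 port-forward svc/prometheus-example-app 8080:8080
$ curl http://localhost:8080/metrics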
4.4.3.2. Specifying how a service is monitored
To use the metrics exposed by your service, you must configure OpenShift Container Platform monitoring to scrape metrics from the `/metrics` endpoint. You can do this using a `ServiceMonitor` custom resource definition (CRD) that specifies how a service should be monitored, or a `PodMonitor` CRD that specifies how a pod should be monitored. The former requires a `Service` object, while the latter does not, allowing Prometheus to directly scrape metrics from the metrics endpoint exposed by a pod.
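For comparison, the following is a minimal `PodMonitor` sketch for the same sample application. It is an illustration only, not part of this procedure, and it assumes that the pod template declares a container port named `web`:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: prometheus-example-pod-monitor
  namespace: ns1
spec:
  podMetricsEndpoints:
  - interval: 30s
    port: web # must match a named port in the pod spec, not the Service
  selector:
    matchLabels:
      app: prometheus-example-app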
This procedure shows you how to create a `ServiceMonitor` resource for a service in a user-defined project.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role or the `monitoring-edit` cluster role.
- You have enabled monitoring for user-defined projects.
- For this example, you have deployed the `prometheus-example-app` sample service in the `ns1` project.

Note: The `prometheus-example-app` sample service does not support TLS authentication.
Procedure
- Create a new YAML configuration file named `example-app-service-monitor.yaml`.

Add a `ServiceMonitor` resource to the YAML file. The following example creates a service monitor named `prometheus-example-monitor` to scrape metrics exposed by the `prometheus-example-app` service in the `ns1` namespace:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1 1
spec:
  endpoints:
  - interval: 30s
    port: web 2
    scheme: http
  selector: 3
    matchLabels:
      app: prometheus-example-app

- 1: Specify a user-defined namespace where your service runs.
- 2: Specify the endpoint port to be scraped by Prometheus. This example uses the port named `web`.
- 3: Specify a selector that matches your service by its metadata labels.
Note: A `ServiceMonitor` resource in a user-defined namespace can only discover services in the same namespace. That is, the `namespaceSelector` field of the `ServiceMonitor` resource is always ignored.

Apply the configuration to the cluster:
$ oc apply -f example-app-service-monitor.yaml
It takes some time to deploy the `ServiceMonitor` resource.

Verify that the `ServiceMonitor` resource is running:

$ oc -n <namespace> get servicemonitor
Example output
NAME                         AGE
prometheus-example-monitor   81m
4.4.3.3. Example service endpoint authentication settings
You can configure authentication for service endpoints for user-defined project monitoring by using `ServiceMonitor` and `PodMonitor` custom resource definitions (CRDs).

The following samples show different authentication settings for a `ServiceMonitor` resource. Each sample shows how to configure a corresponding `Secret` object that contains authentication credentials and other relevant settings.
4.4.3.3.1. Sample YAML authentication with a bearer token
The following sample shows bearer token settings for a `Secret` object named `example-bearer-auth` in the `ns1` namespace:
Example bearer token secret
apiVersion: v1
kind: Secret
metadata:
name: example-bearer-auth
namespace: ns1
stringData:
token: <authentication_token> 1
- 1: Specify an authentication token.
The following sample shows bearer token authentication settings for a `ServiceMonitor` CRD. The example uses a `Secret` object named `example-bearer-auth`:
Example bearer token authentication settings
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - authorization:
      credentials:
        key: token 1
        name: example-bearer-auth 2
    port: web
  selector:
    matchLabels:
      app: prometheus-example-app

- 1: The key that contains the authentication token in the specified `Secret` object.
- 2: The name of the `Secret` object that contains the authentication credentials.
Do not use `bearerTokenFile` to configure a bearer token. If you use the `bearerTokenFile` configuration, the `ServiceMonitor` resource is rejected.
4.4.3.3.2. Sample YAML for Basic authentication
The following sample shows Basic authentication settings for a `Secret` object named `example-basic-auth` in the `ns1` namespace:
Example Basic authentication secret
apiVersion: v1
kind: Secret
metadata:
  name: example-basic-auth
  namespace: ns1
stringData:
  user: <basic_username> 1
  password: <basic_password> 2

- 1: The username for authentication.
- 2: The password for authentication.
The following sample shows Basic authentication settings for a `ServiceMonitor` CRD. The example uses a `Secret` object named `example-basic-auth`:
Example Basic authentication settings
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - basicAuth:
      username:
        key: user 1
        name: example-basic-auth 2
      password:
        key: password 3
        name: example-basic-auth 4
    port: web
  selector:
    matchLabels:
      app: prometheus-example-app

- 1: The key that contains the username in the specified `Secret` object.
- 2, 4: The name of the `Secret` object that contains the Basic authentication credentials.
- 3: The key that contains the password in the specified `Secret` object.
4.4.3.3.3. Sample YAML authentication with OAuth 2.0
The following sample shows OAuth 2.0 settings for a `Secret` object named `example-oauth2` in the `ns1` namespace:
Example OAuth 2.0 secret
apiVersion: v1
kind: Secret
metadata:
  name: example-oauth2
  namespace: ns1
stringData:
  id: <oauth2_id> 1
  secret: <oauth2_secret> 2

- 1: The OAuth 2.0 ID.
- 2: The OAuth 2.0 secret.
The following sample shows OAuth 2.0 authentication settings for a `ServiceMonitor` CRD. The example uses a `Secret` object named `example-oauth2`:
Example OAuth 2.0 authentication settings
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - oauth2:
      clientId:
        secret:
          key: id 1
          name: example-oauth2 2
      clientSecret:
        key: secret 3
        name: example-oauth2 4
      tokenUrl: https://example.com/oauth2/token 5
    port: web
  selector:
    matchLabels:
      app: prometheus-example-app
- 1: The key that contains the OAuth 2.0 ID in the specified `Secret` object.
- 2, 4: The name of the `Secret` object that contains the OAuth 2.0 credentials.
- 3: The key that contains the OAuth 2.0 secret in the specified `Secret` object.
- 5: The URL used to fetch a token with the specified `clientId` and `clientSecret`.
Additional resources
- Enabling monitoring for user-defined projects
- Scrape Prometheus metrics using TLS in ServiceMonitor configuration (Red Hat Customer Portal article)
- PodMonitor API
- ServiceMonitor API
4.5. Configuring alerts and notifications for user workload monitoring
You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata information.
4.5.1. Configuring external Alertmanager instances
The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus.
You can add external Alertmanager instances to route alerts for user-defined projects.
If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure
Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:

$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Add an `additionalAlertmanagerConfigs` section with configuration details under `data/config.yaml/<component>`:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    <component>: 1
      additionalAlertmanagerConfigs:
      - <alertmanager_specification> 2
- 1: Substitute `<component>` for one of two supported external Alertmanager components: `prometheus` or `thanosRuler`.
- 2: Substitute `<alertmanager_specification>` with authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (`bearerToken`) and client TLS (`tlsConfig`).
The following sample config map configures an additional Alertmanager for Thanos Ruler by using a bearer token with client TLS authentication:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      additionalAlertmanagerConfigs:
      - scheme: https
        pathPrefix: /
        timeout: "30s"
        apiVersion: v1
        bearerToken:
          name: alertmanager-bearer-token
          key: token
        tlsConfig:
          key:
            name: alertmanager-tls
            key: tls.key
          cert:
            name: alertmanager-tls
            key: tls.crt
          ca:
            name: alertmanager-tls
            key: tls.ca
        staticConfigs:
        - external-alertmanager1-remote.com
        - external-alertmanager1-remote2.com
- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
4.5.2. Configuring secrets for Alertmanager
The OpenShift Container Platform monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, you can configure Alertmanager to use a secret that contains authentication credentials for the receiver.
For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the `Secret` object rather than in the `ConfigMap` object.
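For example, a secret that holds a password for Basic HTTP authentication could be created as follows. This is a sketch only; the secret name `test-secret-basic-auth` and the key `password` are illustrative and must match what your receiver configuration expects:

$ oc -n openshift-user-workload-monitoring create secret generic test-secret-basic-auth --from-literal=password=<password>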
4.5.2.1. Adding a secret to the Alertmanager configuration
You can add secrets to the Alertmanager configuration by editing the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project.
After you add a secret to the config map, the secret is mounted as a volume at `/etc/alertmanager/secrets/<secret_name>` within the `alertmanager` container for the Alertmanager pods.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have created the secret to be configured in Alertmanager in the `openshift-user-workload-monitoring` project.
- You have installed the OpenShift CLI (`oc`).
Procedure
Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:

$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Add a `secrets:` section under `data/config.yaml/alertmanager` with the following configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    alertmanager:
      secrets: 1
      - <secret_name_1> 2
      - <secret_name_2>
- 1: This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object.
- 2: The name of the `Secret` object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line.
The following sample config map settings configure Alertmanager to use two `Secret` objects named `test-secret-basic-auth` and `test-secret-api-token`:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    alertmanager:
      secrets:
      - test-secret-basic-auth
      - test-secret-api-token
- Save the file to apply the changes. The new configuration is applied automatically.
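With the secrets mounted, the Alertmanager configuration itself (stored in the `alertmanager-user-workload` secret, as described later in this chapter) can reference the credential files by path. The following receiver fragment is a minimal sketch; the receiver name and URL are hypothetical, and it assumes that `test-secret-basic-auth` contains a key named `password`:

receivers:
- name: example-receiver
  webhook_configs:
  - url: https://webhook.example.com/
    http_config:
      basic_auth:
        username: admin
        # Each key in a mounted secret becomes a file named after the key.
        password_file: /etc/alertmanager/secrets/test-secret-basic-auth/password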
4.5.3. Attaching additional labels to your time series and alerts
You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure
Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:

$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Define labels you want to add for every metric under `data/config.yaml`:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      externalLabels:
        <key>: <value> 1
- 1: Substitute `<key>: <value>` with key-value pairs where `<key>` is a unique name for the new label and `<value>` is its value.
Warning:

- Do not use `prometheus` or `prometheus_replica` as key names, because they are reserved and will be overwritten.
- Do not use `cluster` or `managed_cluster` as key names. Using them can cause issues where you are unable to see data in the developer dashboards.

Note: In the `openshift-user-workload-monitoring` project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting `externalLabels` for `prometheus` in the `user-workload-monitoring-config` `ConfigMap` object will only configure external labels for metrics and not for any rules.

For example, to add metadata about the region and environment to all time series and alerts, use the following example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      externalLabels:
        region: eu
        environment: prod
- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
4.5.4. Configuring alert notifications
In OpenShift Container Platform, an administrator can enable alert routing for user-defined projects with one of the following methods:
- Use the default platform Alertmanager instance.
- Use a separate Alertmanager instance only for user-defined projects.
Developers and other users with the `alert-routing-edit` cluster role can configure custom alert notifications for their user-defined projects by configuring alert receivers.
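For example, a cluster administrator could grant that role to a hypothetical user `user1` in the `ns1` project with the following command; the user and project names are illustrative:

$ oc -n ns1 adm policy add-role-to-user alert-routing-edit user1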
Review the following limitations of alert routing for user-defined projects:
- User-defined alert routing is scoped to the namespace in which the resource is defined. For example, a routing configuration in namespace `ns1` only applies to `PrometheusRules` resources in the same namespace.
- When a namespace is excluded from user-defined monitoring, `AlertmanagerConfig` resources in the namespace cease to be part of the Alertmanager configuration.
Additional resources
- Understanding alert routing for user-defined projects
- Sending notifications to external systems
- PagerDuty (PagerDuty official site)
- Prometheus Integration Guide (PagerDuty official site)
- Support version matrix for monitoring components
- Enabling alert routing for user-defined projects
4.5.4.1. Configuring alert routing for user-defined projects
If you are a non-administrator user who has been given the `alert-routing-edit` cluster role, you can create or edit alert routing for user-defined projects.
Prerequisites
- A cluster administrator has enabled monitoring for user-defined projects.
- A cluster administrator has enabled alert routing for user-defined projects.
- You are logged in as a user that has the `alert-routing-edit` cluster role for the project for which you want to create alert routing.
- You have installed the OpenShift CLI (`oc`).
Procedure
- Create a YAML file for alert routing. The example in this procedure uses a file called `example-app-alert-routing.yaml`.

Add an `AlertmanagerConfig` YAML definition to the file. For example:

apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: example-routing
  namespace: ns1
spec:
  route:
    receiver: default
    groupBy: [job]
  receivers:
  - name: default
    webhookConfigs:
    - url: https://example.org/post
- Save the file.
Apply the resource to the cluster:
$ oc apply -f example-app-alert-routing.yaml
The configuration is automatically applied to the Alertmanager pods.
4.5.4.2. Configuring alert routing for user-defined projects with the Alertmanager secret
If you have enabled a separate instance of Alertmanager that is dedicated to user-defined alert routing, you can customize where and how the instance sends notifications by editing the `alertmanager-user-workload` secret in the `openshift-user-workload-monitoring` namespace.
All features of a supported version of upstream Alertmanager are also supported in an OpenShift Container Platform Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation).
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role.
- You have enabled a separate instance of Alertmanager for user-defined alert routing.
- You have installed the OpenShift CLI (`oc`).
Procedure
Print the currently active Alertmanager configuration into the file `alertmanager.yaml`:

$ oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml
Edit the configuration in `alertmanager.yaml`:

route:
  receiver: Default
  group_by:
  - name: Default
  routes:
  - matchers:
    - "service = prometheus-example-monitor" 1
    receiver: <receiver> 2
receivers:
- name: Default
- name: <receiver>
  <receiver_configuration> 3

- 1: A matcher for the alerts that this route applies to. This example matches alerts that have a `service` label with the value `prometheus-example-monitor`.
- 2: The name of the receiver to use for the matched alerts.
- 3: Substitute `<receiver_configuration>` with the configuration details for the receiver.
Apply the new configuration in the file:
$ oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=-
4.5.4.3. Configuring different alert receivers for default platform alerts and user-defined alerts
You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results:
- All default platform alerts are sent to a receiver owned by the team in charge of these alerts.
- All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts.
You can achieve this by using the `openshift_io_alert_source="platform"` label that is added by the Cluster Monitoring Operator to all platform alerts:

- Use the `openshift_io_alert_source="platform"` matcher to match default platform alerts.
- Use the `openshift_io_alert_source!="platform"` or `openshift_io_alert_source=""` matcher to match user-defined alerts.
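As an illustration, the following route fragment splits alerts between two hypothetical receivers, `platform-team` and `app-teams`, based on this label; the receiver names are placeholders:

route:
  receiver: platform-team
  routes:
  # Platform alerts carry the label added by the Cluster Monitoring Operator.
  - matchers:
    - 'openshift_io_alert_source="platform"'
    receiver: platform-team
  # Everything else is treated as a user-defined alert.
  - matchers:
    - 'openshift_io_alert_source!="platform"'
    receiver: app-teams
receivers:
- name: platform-team
- name: app-teams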
This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts.