Chapter 2. Configuring the monitoring stack
The OpenShift Container Platform 4 installation program provides only a small number of configuration options before installation. Configuring most OpenShift Container Platform framework components, including the cluster monitoring stack, happens after installation.
This section explains what configuration is supported, shows how to configure the monitoring stack, and demonstrates several common configuration scenarios.
2.1. Prerequisites
- The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources.
2.2. Maintenance and support for monitoring
The supported way of configuring OpenShift Container Platform Monitoring is by using the options described in this document. Do not use other configurations, because they are unsupported. Configuration paradigms might change across Prometheus releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this section, your changes will disappear, because the `cluster-monitoring-operator` reconciles any differences and resets everything to its defined state by default and by design.
2.2.1. Support considerations for monitoring
The following modifications are explicitly not supported:

- Creating additional `ServiceMonitor`, `PodMonitor`, and `PrometheusRule` objects in the `openshift-*` and `kube-*` projects.
- Modifying any resources or objects deployed in the `openshift-monitoring` or `openshift-user-workload-monitoring` projects. The resources created by the OpenShift Container Platform monitoring stack are not meant to be used by any other resources, because there are no guarantees about their backward compatibility.

  Note: The Alertmanager configuration is deployed as a secret resource in the `openshift-monitoring` project. To configure additional routes for Alertmanager, you need to decode, modify, and then encode that secret. This procedure is a supported exception to the preceding statement.
- Modifying resources of the stack. The OpenShift Container Platform monitoring stack ensures its resources are always in the state it expects them to be. If they are modified, the stack will reset them.
- Deploying user-defined workloads to the `openshift-*` and `kube-*` projects. These projects are reserved for Red Hat provided components and they should not be used for user-defined workloads.
- Modifying the monitoring stack Grafana instance.
- Installing custom Prometheus instances on OpenShift Container Platform. A custom instance is a Prometheus custom resource (CR) managed by the Prometheus Operator.
- Enabling symptom-based monitoring by using the `Probe` custom resource definition (CRD) in Prometheus Operator.
- Modifying Alertmanager configurations by using the `AlertmanagerConfig` CRD in Prometheus Operator.

Backward compatibility for metrics, recording rules, or alerting rules is not guaranteed.
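The supported Alertmanager exception noted above involves decoding, modifying, and re-encoding the secret data. As a minimal sketch of that flow (the secret itself would be fetched and replaced with `oc`; the sample payload below is hypothetical, not a real Alertmanager configuration):

```python
import base64

# Hypothetical secret payload; real data comes from the Alertmanager
# secret in the openshift-monitoring project.
encoded = base64.b64encode(b"route:\n  receiver: default\n").decode()

decoded = base64.b64decode(encoded).decode()              # 1. decode
modified = decoded + "  # additional routes go here\n"    # 2. modify
reencoded = base64.b64encode(modified.encode()).decode()  # 3. encode

# The re-encoded value is what would be written back into the secret.
assert base64.b64decode(reencoded).decode().startswith("route:")
```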
2.2.2. Support policy for monitoring Operators
Monitoring Operators ensure that OpenShift Container Platform monitoring resources function as designed and tested. If Cluster Version Operator (CVO) control of an Operator is overridden, the Operator does not respond to configuration changes, reconcile the intended state of cluster objects, or receive updates.
While overriding CVO control for an Operator can be helpful during debugging, this is unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades.
Overriding the Cluster Version Operator

The `spec.overrides` parameter can be added to the configuration for the CVO to allow administrators to provide a list of overrides to the behavior of the CVO for a component. Setting the `spec.overrides[].unmanaged` parameter to `true` for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set:

Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.
Setting a CVO override puts the entire cluster in an unsupported state and prevents the monitoring stack from being reconciled to its intended state. This impacts the reliability features built into Operators and prevents updates from being received. Reported issues must be reproduced after removing any overrides for support to proceed.
2.3. Preparing to configure the monitoring stack
You can configure the monitoring stack by creating and updating monitoring config maps.
2.3.1. Creating a cluster monitoring config map
To configure core OpenShift Container Platform monitoring components, you must create the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project.

When you save your changes to the `cluster-monitoring-config` `ConfigMap` object, some or all of the pods in the `openshift-monitoring` project might be redeployed.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
- You have installed the OpenShift CLI (`oc`).
Procedure
1. Check whether the `cluster-monitoring-config` `ConfigMap` object exists:

   ```
   $ oc -n openshift-monitoring get configmap cluster-monitoring-config
   ```

2. If the `ConfigMap` object does not exist:

   a. Create the following YAML manifest. In this example the file is called `cluster-monitoring-config.yaml`:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
      ```

   b. Apply the configuration to create the `ConfigMap` object:

      ```
      $ oc apply -f cluster-monitoring-config.yaml
      ```
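Before applying a manifest like the one above, it can help to check that it has the shape the monitoring stack expects. The following sketch (an illustrative helper, not part of OpenShift) validates the structure as a plain dictionary:

```python
def validate_monitoring_configmap(manifest: dict, name: str, namespace: str) -> bool:
    """Return True if the manifest has the required ConfigMap shape."""
    metadata = manifest.get("metadata", {})
    return (
        manifest.get("apiVersion") == "v1"
        and manifest.get("kind") == "ConfigMap"
        and metadata.get("name") == name
        and metadata.get("namespace") == namespace
        and "config.yaml" in manifest.get("data", {})
    )

# The manifest from the procedure above, expressed as a dictionary.
manifest = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "cluster-monitoring-config",
                 "namespace": "openshift-monitoring"},
    "data": {"config.yaml": ""},
}
assert validate_monitoring_configmap(
    manifest, "cluster-monitoring-config", "openshift-monitoring")
```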
2.3.2. Creating a user-defined workload monitoring config map
To configure the components that monitor user-defined projects, you must create the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project.

When you save your changes to the `user-workload-monitoring-config` `ConfigMap` object, some or all of the pods in the `openshift-user-workload-monitoring` project might be redeployed.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
- You have installed the OpenShift CLI (`oc`).
Procedure
1. Check whether the `user-workload-monitoring-config` `ConfigMap` object exists:

   ```
   $ oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config
   ```

2. If the `user-workload-monitoring-config` `ConfigMap` object does not exist:

   a. Create the following YAML manifest. In this example the file is called `user-workload-monitoring-config.yaml`:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
      ```

   b. Apply the configuration to create the `ConfigMap` object:

      ```
      $ oc apply -f user-workload-monitoring-config.yaml
      ```

      Note: Configurations applied to the `user-workload-monitoring-config` `ConfigMap` object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.
2.4. Configuring the monitoring stack
In OpenShift Container Platform 4.8, you can configure the monitoring stack by using the `cluster-monitoring-config` or `user-workload-monitoring-config` `ConfigMap` object.
Prerequisites
If you are configuring core OpenShift Container Platform monitoring components:
- You have access to the cluster as a user with the `cluster-admin` role.
- You have created the `cluster-monitoring-config` `ConfigMap` object.

If you are configuring components that monitor user-defined projects:

- You have access to the cluster as a user with the `cluster-admin` role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- You have created the `user-workload-monitoring-config` `ConfigMap` object.

In either case:

- You have installed the OpenShift CLI (`oc`).
Procedure
1. Edit the `ConfigMap` object.

   To configure core OpenShift Container Platform monitoring components:

   a. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project:

      ```
      $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      ```

   b. Add your configuration under `data/config.yaml` as a key-value pair `<component_name>: <component_configuration>`:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          <component>:
            <configuration_for_the_component>
      ```

      Substitute `<component>` and `<configuration_for_the_component>` accordingly.

      The following example `ConfigMap` object configures a persistent volume claim (PVC) for Prometheus. This relates to the Prometheus instance that monitors core OpenShift Container Platform components only:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          prometheusK8s:
            volumeClaimTemplate:
              spec:
                storageClassName: fast
                volumeMode: Filesystem
                resources:
                  requests:
                    storage: 40Gi
      ```

      Here, `prometheusK8s` defines the Prometheus component and the subsequent lines define its configuration.
To configure components that monitor user-defined projects:
   a. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project:

      ```
      $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      ```

   b. Add your configuration under `data/config.yaml` as a key-value pair `<component_name>: <component_configuration>`:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          <component>:
            <configuration_for_the_component>
      ```

      Substitute `<component>` and `<configuration_for_the_component>` accordingly.

      The following example `ConfigMap` object configures a data retention period and minimum container resource requests for Prometheus. This relates to the Prometheus instance that monitors user-defined projects only:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          prometheus:
            retention: 24h
            resources:
              requests:
                cpu: 200m
                memory: 2Gi
      ```

      In this example:

      - `prometheus` defines the Prometheus component and the subsequent lines define its configuration.
      - `retention: 24h` configures a twenty-four hour data retention period for the Prometheus instance that monitors user-defined projects.
      - `cpu: 200m` defines a minimum resource request of 200 millicores for the Prometheus container.
      - `memory: 2Gi` defines a minimum resource request of 2 GiB of memory for the Prometheus container.

      Note: The Prometheus config map component is called `prometheusK8s` in the `cluster-monitoring-config` `ConfigMap` object and `prometheus` in the `user-workload-monitoring-config` `ConfigMap` object.
2. Save the file to apply the changes to the `ConfigMap` object. The pods affected by the new configuration are restarted automatically.

   Note: Configurations applied to the `user-workload-monitoring-config` `ConfigMap` object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

   Warning: When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
2.5. Configurable monitoring components
This table shows the monitoring components you can configure and the keys used to specify the components in the `cluster-monitoring-config` and `user-workload-monitoring-config` `ConfigMap` objects:

| Component | cluster-monitoring-config config map key | user-workload-monitoring-config config map key |
|---|---|---|
| Prometheus Operator | `prometheusOperator` | `prometheusOperator` |
| Prometheus | `prometheusK8s` | `prometheus` |
| Alertmanager | `alertmanagerMain` | |
| kube-state-metrics | `kubeStateMetrics` | |
| openshift-state-metrics | `openshiftStateMetrics` | |
| Grafana | `grafana` | |
| Telemeter Client | `telemeterClient` | |
| Prometheus Adapter | `k8sPrometheusAdapter` | |
| Thanos Querier | `thanosQuerier` | |
| Thanos Ruler | | `thanosRuler` |

Note: The Prometheus key is called `prometheusK8s` in the `cluster-monitoring-config` `ConfigMap` object and `prometheus` in the `user-workload-monitoring-config` `ConfigMap` object.
2.6. Moving monitoring components to different nodes
You can move any of the monitoring stack components to specific nodes.
Prerequisites
If you are configuring core OpenShift Container Platform monitoring components:
- You have access to the cluster as a user with the `cluster-admin` role.
- You have created the `cluster-monitoring-config` `ConfigMap` object.

If you are configuring components that monitor user-defined projects:

- You have access to the cluster as a user with the `cluster-admin` role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- You have created the `user-workload-monitoring-config` `ConfigMap` object.

In either case:

- You have installed the OpenShift CLI (`oc`).
Procedure
1. Edit the `ConfigMap` object.

   To move a component that monitors core OpenShift Container Platform projects:

   a. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project:

      ```
      $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      ```

   b. Specify the `nodeSelector` constraint for the component under `data/config.yaml`:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          <component>:
            nodeSelector:
              <node_key>: <node_value>
              <node_key>: <node_value>
              <...>
      ```

      Substitute `<component>` accordingly and substitute `<node_key>: <node_value>` with the map of key-value pairs that specifies a group of destination nodes. Often, only a single key-value pair is used.

      The component can only run on nodes that have each of the specified key-value pairs as labels. The nodes can have additional labels as well.

      Important: Many of the monitoring components are deployed by using multiple pods across different nodes in the cluster to maintain high availability. When moving monitoring components to labeled nodes, ensure that enough matching nodes are available to maintain resilience for the component. If only one label is specified, ensure that enough nodes contain that label to distribute all of the pods for the component across separate nodes. Alternatively, you can specify multiple labels, each relating to individual nodes.

      Note: If monitoring components remain in a `Pending` state after configuring the `nodeSelector` constraint, check the pod logs for errors relating to taints and tolerations.

      For example, to move monitoring components for core OpenShift Container Platform projects to specific nodes that are labeled `nodename: controlplane1`, `nodename: worker1`, and `nodename: worker2`, use:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          prometheusOperator:
            nodeSelector:
              nodename: controlplane1
          prometheusK8s:
            nodeSelector:
              nodename: worker1
              nodename: worker2
          alertmanagerMain:
            nodeSelector:
              nodename: worker1
              nodename: worker2
          kubeStateMetrics:
            nodeSelector:
              nodename: worker1
          grafana:
            nodeSelector:
              nodename: worker1
          telemeterClient:
            nodeSelector:
              nodename: worker1
          k8sPrometheusAdapter:
            nodeSelector:
              nodename: worker1
              nodename: worker2
          openshiftStateMetrics:
            nodeSelector:
              nodename: worker1
          thanosQuerier:
            nodeSelector:
              nodename: worker1
              nodename: worker2
      ```
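The selection rule described above, where a component's pods land only on nodes whose labels include every `nodeSelector` pair, can be sketched as follows (an illustrative model of the matching logic, not the Kubernetes scheduler itself):

```python
def matches_node_selector(node_labels: dict, node_selector: dict) -> bool:
    """A node is eligible only if every selector pair appears among its
    labels; extra node labels do not disqualify the node."""
    return all(node_labels.get(key) == value
               for key, value in node_selector.items())

# A worker labeled nodename=worker1 (plus an extra label) matches the selector.
assert matches_node_selector({"nodename": "worker1", "zone": "a"},
                             {"nodename": "worker1"})
# A differently labeled node does not.
assert not matches_node_selector({"nodename": "worker2"},
                                 {"nodename": "worker1"})
```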
To move a component that monitors user-defined projects:
   a. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project:

      ```
      $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      ```

   b. Specify the `nodeSelector` constraint for the component under `data/config.yaml`:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          <component>:
            nodeSelector:
              <node_key>: <node_value>
              <node_key>: <node_value>
              <...>
      ```

      Substitute `<component>` accordingly and substitute `<node_key>: <node_value>` with the map of key-value pairs that specifies the destination nodes. Often, only a single key-value pair is used.

      The component can only run on nodes that have each of the specified key-value pairs as labels. The nodes can have additional labels as well.

      Important: Many of the monitoring components are deployed by using multiple pods across different nodes in the cluster to maintain high availability. When moving monitoring components to labeled nodes, ensure that enough matching nodes are available to maintain resilience for the component. If only one label is specified, ensure that enough nodes contain that label to distribute all of the pods for the component across separate nodes. Alternatively, you can specify multiple labels, each relating to individual nodes.

      Note: If monitoring components remain in a `Pending` state after configuring the `nodeSelector` constraint, check the pod logs for errors relating to taints and tolerations.

      For example, to move monitoring components for user-defined projects to specific worker nodes labeled `nodename: worker1` and `nodename: worker2`, use:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          prometheusOperator:
            nodeSelector:
              nodename: worker1
          prometheus:
            nodeSelector:
              nodename: worker1
              nodename: worker2
          thanosRuler:
            nodeSelector:
              nodename: worker1
              nodename: worker2
      ```
2. Save the file to apply the changes. The components affected by the new configuration are moved to the new nodes automatically.

   Note: Configurations applied to the `user-workload-monitoring-config` `ConfigMap` object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

   Warning: When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
2.7. Assigning tolerations to monitoring components
You can assign tolerations to any of the monitoring stack components to enable moving them to tainted nodes.
Prerequisites
If you are configuring core OpenShift Container Platform monitoring components:
- You have access to the cluster as a user with the `cluster-admin` role.
- You have created the `cluster-monitoring-config` `ConfigMap` object.

If you are configuring components that monitor user-defined projects:

- You have access to the cluster as a user with the `cluster-admin` role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- You have created the `user-workload-monitoring-config` `ConfigMap` object.

In either case:

- You have installed the OpenShift CLI (`oc`).
Procedure
1. Edit the `ConfigMap` object.

   To assign tolerations to a component that monitors core OpenShift Container Platform projects:

   a. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project:

      ```
      $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      ```

   b. Specify `tolerations` for the component:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          <component>:
            tolerations:
              <toleration_specification>
      ```

      Substitute `<component>` and `<toleration_specification>` accordingly.

      For example, `oc adm taint nodes node1 key1=value1:NoSchedule` adds a taint to `node1` with the key `key1` and the value `value1`. This prevents monitoring components from deploying pods on `node1` unless a toleration is configured for that taint. The following example configures the `alertmanagerMain` component to tolerate the example taint:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          alertmanagerMain:
            tolerations:
            - key: "key1"
              operator: "Equal"
              value: "value1"
              effect: "NoSchedule"
      ```
To assign tolerations to a component that monitors user-defined projects:
   a. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project:

      ```
      $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      ```

   b. Specify `tolerations` for the component:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          <component>:
            tolerations:
              <toleration_specification>
      ```

      Substitute `<component>` and `<toleration_specification>` accordingly.

      For example, `oc adm taint nodes node1 key1=value1:NoSchedule` adds a taint to `node1` with the key `key1` and the value `value1`. This prevents monitoring components from deploying pods on `node1` unless a toleration is configured for that taint. The following example configures the `thanosRuler` component to tolerate the example taint:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          thanosRuler:
            tolerations:
            - key: "key1"
              operator: "Equal"
              value: "value1"
              effect: "NoSchedule"
      ```
2. Save the file to apply the changes. The new component placement configuration is applied automatically.

   Note: Configurations applied to the `user-workload-monitoring-config` `ConfigMap` object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

   Warning: When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
2.8. Configuring persistent storage
Running cluster monitoring with persistent storage means that your metrics are stored to a persistent volume (PV) and can survive a pod being restarted or recreated. This is ideal if you require your metrics or alerting data to be guarded from data loss. For production environments, it is highly recommended to configure persistent storage. Because of the high I/O demands, it is advantageous to use local storage.
2.8.1. Persistent storage prerequisites
- Dedicate sufficient local persistent storage to ensure that the disk does not become full. How much storage you need depends on the number of pods. For information on system requirements for persistent storage, see Prometheus database storage requirements.
- Make sure you have a persistent volume (PV) ready to be claimed by the persistent volume claim (PVC), one PV for each replica. Because Prometheus has two replicas and Alertmanager has three replicas, you need five PVs to support the entire monitoring stack. The PVs should be available from the Local Storage Operator. This does not apply if you enable dynamically provisioned storage.
- Use `Filesystem` as the storage type value for the `volumeMode` parameter when you configure the persistent volume.
- Configure local persistent storage.

  Note: If you use a local volume for persistent storage, do not use a raw block volume, which is described with `volumeMode: Block` in the `LocalVolume` object. Prometheus cannot use raw block volumes.
2.8.2. Configuring a local persistent volume claim
For monitoring components to use a persistent volume (PV), you must configure a persistent volume claim (PVC).
Prerequisites
If you are configuring core OpenShift Container Platform monitoring components:
- You have access to the cluster as a user with the `cluster-admin` role.
- You have created the `cluster-monitoring-config` `ConfigMap` object.

If you are configuring components that monitor user-defined projects:

- You have access to the cluster as a user with the `cluster-admin` role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- You have created the `user-workload-monitoring-config` `ConfigMap` object.

In either case:

- You have installed the OpenShift CLI (`oc`).
Procedure
1. Edit the `ConfigMap` object.

   To configure a PVC for a component that monitors core OpenShift Container Platform projects:

   a. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project:

      ```
      $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      ```

   b. Add your PVC configuration for the component under `data/config.yaml`:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          <component>:
            volumeClaimTemplate:
              spec:
                storageClassName: <storage_class>
                resources:
                  requests:
                    storage: <amount_of_storage>
      ```

      See the Kubernetes documentation on PersistentVolumeClaims for information on how to specify `volumeClaimTemplate`.

      The following example configures a PVC that claims local persistent storage for the Prometheus instance that monitors core OpenShift Container Platform components:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          prometheusK8s:
            volumeClaimTemplate:
              spec:
                storageClassName: local-storage
                resources:
                  requests:
                    storage: 40Gi
      ```

      In the above example, the storage class created by the Local Storage Operator is called `local-storage`.

      The following example configures a PVC that claims local persistent storage for Alertmanager:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          alertmanagerMain:
            volumeClaimTemplate:
              spec:
                storageClassName: local-storage
                resources:
                  requests:
                    storage: 10Gi
      ```
To configure a PVC for a component that monitors user-defined projects:
   a. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project:

      ```
      $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      ```

   b. Add your PVC configuration for the component under `data/config.yaml`:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          <component>:
            volumeClaimTemplate:
              spec:
                storageClassName: <storage_class>
                resources:
                  requests:
                    storage: <amount_of_storage>
      ```

      See the Kubernetes documentation on PersistentVolumeClaims for information on how to specify `volumeClaimTemplate`.

      The following example configures a PVC that claims local persistent storage for the Prometheus instance that monitors user-defined projects:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          prometheus:
            volumeClaimTemplate:
              spec:
                storageClassName: local-storage
                resources:
                  requests:
                    storage: 40Gi
      ```

      In the above example, the storage class created by the Local Storage Operator is called `local-storage`.

      The following example configures a PVC that claims local persistent storage for Thanos Ruler:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          thanosRuler:
            volumeClaimTemplate:
              spec:
                storageClassName: local-storage
                resources:
                  requests:
                    storage: 10Gi
      ```

      Note: Storage requirements for the `thanosRuler` component depend on the number of rules that are evaluated and how many samples each rule generates.
2. Save the file to apply the changes. The pods affected by the new configuration are restarted automatically and the new storage configuration is applied.

   Note: Configurations applied to the `user-workload-monitoring-config` `ConfigMap` object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

   Warning: When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
2.8.3. Modifying the retention time for Prometheus metrics data
By default, the OpenShift Container Platform monitoring stack configures the retention time for Prometheus data to be 15 days. You can modify the retention time to change how soon the data is deleted.
Prerequisites
If you are configuring core OpenShift Container Platform monitoring components:
- You have access to the cluster as a user with the `cluster-admin` role.
- You have created the `cluster-monitoring-config` `ConfigMap` object.

If you are configuring components that monitor user-defined projects:

- You have access to the cluster as a user with the `cluster-admin` role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
- You have created the `user-workload-monitoring-config` `ConfigMap` object.

In either case:

- You have installed the OpenShift CLI (`oc`).
Procedure
1. Edit the `ConfigMap` object.

   To modify the retention time for the Prometheus instance that monitors core OpenShift Container Platform projects:

   a. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` project:

      ```
      $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
      ```

   b. Add your retention time configuration under `data/config.yaml`:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          prometheusK8s:
            retention: <time_specification>
      ```

      Substitute `<time_specification>` with a number directly followed by `ms` (milliseconds), `s` (seconds), `m` (minutes), `h` (hours), `d` (days), `w` (weeks), or `y` (years).

      The following example sets the retention time to 24 hours for the Prometheus instance that monitors core OpenShift Container Platform components:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          prometheusK8s:
            retention: 24h
      ```
To modify the retention time for the Prometheus instance that monitors user-defined projects:
   a. Edit the `user-workload-monitoring-config` `ConfigMap` object in the `openshift-user-workload-monitoring` project:

      ```
      $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
      ```

   b. Add your retention time configuration under `data/config.yaml`:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          prometheus:
            retention: <time_specification>
      ```

      Substitute `<time_specification>` with a number directly followed by `ms` (milliseconds), `s` (seconds), `m` (minutes), `h` (hours), `d` (days), `w` (weeks), or `y` (years).

      The following example sets the retention time to 24 hours for the Prometheus instance that monitors user-defined projects:

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          prometheus:
            retention: 24h
      ```
2. Save the file to apply the changes. The pods affected by the new configuration are restarted automatically.

   Note: Configurations applied to the `user-workload-monitoring-config` `ConfigMap` object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.

   Warning: When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
2.9. Controlling the impact of unbound metrics attributes in user-defined projects
Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a `customer_id` attribute is unbound because it has an infinite number of possible values.
Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space.
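The multiplicative effect described above can be illustrated with a small model: the number of time series for a metric is, at most, the product of the distinct values of each label.

```python
def series_count(label_values: dict) -> int:
    """Upper bound on time series: product of distinct values per label."""
    count = 1
    for values in label_values.values():
        count *= len(set(values))
    return count

# A bounded label is cheap; adding an unbound label such as a hypothetical
# customer_id multiplies the series count by every distinct customer.
bounded = series_count({"status": ["200", "404", "500"]})
unbound = series_count({"status": ["200", "404", "500"],
                        "customer_id": [str(i) for i in range(10_000)]})
assert bounded == 3
assert unbound == 30_000
```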
Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects:
- Limit the number of samples that can be accepted per target scrape in user-defined projects
- Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped
Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations.
2.9.1. Setting a scrape sample limit for user-defined projects
You can limit the number of samples that can be accepted per target scrape in user-defined projects.
If you set a sample limit, no further sample data is ingested for that target scrape after the limit is reached.
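The behavior described above can be modeled in a few lines. This is a hedged sketch of the documented semantics, not the Prometheus implementation: samples up to the limit are accepted, and anything past the limit for that scrape is discarded.

```python
def ingest_scrape(samples, sample_limit):
    """Accept samples from one target scrape up to sample_limit;
    samples past the limit are not ingested."""
    accepted = samples[:sample_limit]
    dropped = max(0, len(samples) - sample_limit)
    return accepted, dropped

# A hypothetical scrape returning 60,000 samples against a 50,000 limit.
samples = [("http_requests_total", i) for i in range(60_000)]
accepted, dropped = ingest_scrape(samples, sample_limit=50_000)
print(len(accepted), dropped)  # 50000 10000
```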
Prerequisites

- You have access to the cluster as a user with the cluster-admin role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- You have created the user-workload-monitoring-config ConfigMap object.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config

Add the enforcedSampleLimit configuration to data/config.yaml to limit the number of samples that can be accepted per target scrape in user-defined projects:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      enforcedSampleLimit: 50000 1
1 A value is required if this parameter is specified. This enforcedSampleLimit example limits the number of samples that can be accepted per target scrape in user-defined projects to 50,000.
Save the file to apply the changes. The limit is applied automatically.
Note: Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.
Warning: When changes are saved to the user-workload-monitoring-config ConfigMap object, the pods and other resources in the openshift-user-workload-monitoring project might be redeployed. The running monitoring processes in that project might also be restarted.
2.9.2. Creating scrape sample alerts
You can create alerts that notify you when:

- The target cannot be scraped or is not available for the specified for duration
- A scrape sample threshold is reached or is exceeded for the specified for duration
Prerequisites

- You have access to the cluster as a user with the cluster-admin role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- You have enabled monitoring for user-defined projects.
- You have created the user-workload-monitoring-config ConfigMap object.
- You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using enforcedSampleLimit.
- You have installed the OpenShift CLI (oc).
Procedure
Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: k8s
    role: alert-rules
  name: monitoring-stack-alerts 1
  namespace: ns1 2
spec:
  groups:
  - name: general.rules
    rules:
    - alert: TargetDown 3
      annotations:
        message: '{{ printf "%.4g" $value }}% of the {{ $labels.job }}/{{ $labels.service }} targets in {{ $labels.namespace }} namespace are down.' 4
      expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10
      for: 10m 5
      labels:
        severity: warning 6
    - alert: ApproachingEnforcedSamplesLimit 7
      annotations:
        message: '{{ $labels.container }} container of the {{ $labels.pod }} pod in the {{ $labels.namespace }} namespace consumes {{ $value | humanizePercentage }} of the samples limit budget.' 8
      expr: scrape_samples_scraped/50000 > 0.8 9
      for: 10m 10
      labels:
        severity: warning 11
1 Defines the name of the alerting rule.
2 Specifies the user-defined project where the alerting rule will be deployed.
3 The TargetDown alert will fire if the target cannot be scraped or is not available for the for duration.
4 The message that will be output when the TargetDown alert fires.
5 The conditions for the TargetDown alert must be true for this duration before the alert is fired.
6 Defines the severity for the TargetDown alert.
7 The ApproachingEnforcedSamplesLimit alert will fire when the defined scrape sample threshold is reached or exceeded for the specified for duration.
8 The message that will be output when the ApproachingEnforcedSamplesLimit alert fires.
9 The threshold for the ApproachingEnforcedSamplesLimit alert. In this example the alert will fire when the number of samples per target scrape has exceeded 80% of the enforced sample limit of 50000. The for duration must also have passed before the alert will fire. The <number> in the expression scrape_samples_scraped/<number> > <threshold> must match the enforcedSampleLimit value defined in the user-workload-monitoring-config ConfigMap object.
10 The conditions for the ApproachingEnforcedSamplesLimit alert must be true for this duration before the alert is fired.
11 Defines the severity for the ApproachingEnforcedSamplesLimit alert.
Apply the configuration to the user-defined project:
$ oc apply -f monitoring-stack-alerts.yaml
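The ApproachingEnforcedSamplesLimit expression is simple arithmetic: scrape_samples_scraped/50000 > 0.8 becomes true once a target scrape returns more than 40,000 samples, that is, more than 80% of the 50,000 sample limit. A quick sketch of that check:

```python
def approaching_limit(scrape_samples_scraped,
                      enforced_sample_limit=50_000,
                      threshold=0.8):
    """Mirror of the alert condition scrape_samples_scraped/<number> > <threshold>.
    The alert additionally requires this to stay true for the 'for' duration."""
    return scrape_samples_scraped / enforced_sample_limit > threshold

print(approaching_limit(35_000))  # False: 70% of the limit
print(approaching_limit(42_000))  # True: 84% of the limit
```

Note that exactly 40,000 samples does not trigger the condition, because the expression uses a strict greater-than comparison.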
2.10. Attaching additional labels to your time series and alerts
Using the external labels feature of Prometheus, you can attach custom labels to all time series and alerts leaving Prometheus.
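Conceptually, external labels are merged into the label set of every series and alert as it leaves Prometheus, with labels already present on a series taking precedence. A small sketch of that merge, under that assumption and with illustrative series data:

```python
def apply_external_labels(series_labels, external_labels):
    """Add each external label to a series unless the series
    already defines a label with that name."""
    merged = dict(external_labels)
    merged.update(series_labels)  # the series' own labels win on conflict
    return merged

series = {"__name__": "up", "job": "node-exporter"}
external = {"region": "eu", "environment": "prod"}
print(apply_external_labels(series, external))
```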
Prerequisites

If you are configuring core OpenShift Container Platform monitoring components:

- You have access to the cluster as a user with the cluster-admin role.
- You have created the cluster-monitoring-config ConfigMap object.

If you are configuring components that monitor user-defined projects:

- You have access to the cluster as a user with the cluster-admin role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- You have created the user-workload-monitoring-config ConfigMap object.

- You have installed the OpenShift CLI (oc).
Procedure
Edit the ConfigMap object:

To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors core OpenShift Container Platform projects:

Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Define a map of labels you want to add for every metric under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      externalLabels:
        <key>: <value> 1

1 Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value.

Warning: Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten.

For example, to add metadata about the region and environment to all time series and alerts, use:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      externalLabels:
        region: eu
        environment: prod
To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors user-defined projects:

Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config

Define a map of labels you want to add for every metric under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      externalLabels:
        <key>: <value> 1

1 Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value.

Warning: Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten.

Note: In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object will only configure external labels for metrics and not for any rules.

For example, to add metadata about the region and environment to all time series and alerts related to user-defined projects, use:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      externalLabels:
        region: eu
        environment: prod
Save the file to apply the changes. The new configuration is applied automatically.
Note: Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.
Warning: When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
2.11. Setting log levels for monitoring components
You can configure the log level for Prometheus Operator, Prometheus, Thanos Querier, and Thanos Ruler.
You cannot use this procedure to configure the log level for the Alertmanager component.
The following log levels can be applied to each of those components in the cluster-monitoring-config and user-workload-monitoring-config ConfigMap objects:

- debug. Log debug, informational, warning, and error messages.
- info. Log informational, warning, and error messages.
- warn. Log warning and error messages only.
- error. Log error messages only.

The default log level is info.
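The levels form a simple severity ordering: each level emits its own messages plus everything more severe. A minimal sketch of that filtering rule (the level ordering follows the list above; the helper name is made up):

```python
LEVELS = ["debug", "info", "warn", "error"]  # least to most severe

def visible(configured_level, message_level):
    """A message is logged when it is at least as severe
    as the configured logLevel."""
    return LEVELS.index(message_level) >= LEVELS.index(configured_level)

print(visible("info", "debug"))  # False: debug messages are hidden by default
print(visible("warn", "error"))  # True: errors always pass a warn filter
```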
Prerequisites
If you are setting a log level for Prometheus Operator, Prometheus, or Thanos Querier in the openshift-monitoring project:

- You have access to the cluster as a user with the cluster-admin role.
- You have created the cluster-monitoring-config ConfigMap object.

If you are setting a log level for Prometheus Operator, Prometheus, or Thanos Ruler in the openshift-user-workload-monitoring project:

- You have access to the cluster as a user with the cluster-admin role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- You have created the user-workload-monitoring-config ConfigMap object.

- You have installed the OpenShift CLI (oc).
Procedure
Edit the ConfigMap object:

To set a log level for a component in the openshift-monitoring project:

Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Add logLevel: <log_level> for a component under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    <component>: 1
      logLevel: <log_level> 2

To set a log level for a component in the openshift-user-workload-monitoring project:

Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config

Add logLevel: <log_level> for a component under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    <component>: 1
      logLevel: <log_level> 2
Save the file to apply the changes. The pods for the component restart automatically when you apply the log-level change.
Note: Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects.
Warning: When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level in the prometheus-operator deployment in the openshift-user-workload-monitoring project:

$ oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level"

Example output

- --log-level=debug

Check that the pods for the component are running. The following example lists the status of pods in the openshift-user-workload-monitoring project:

$ oc -n openshift-user-workload-monitoring get pods

Note: If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully.
2.12. Next steps
- Enabling monitoring for user-defined projects
- Learn about remote health reporting and, if necessary, opt out of it