This documentation is for a release that is no longer maintained
1.2. Configuring the monitoring stack
Prior to OpenShift Container Platform 4, the Prometheus Cluster Monitoring stack was configured through the Ansible inventory file. For that purpose, the stack exposed a subset of its available configuration options as Ansible variables. You configured the stack before you installed OpenShift Container Platform.
In OpenShift Container Platform 4, Ansible is no longer the primary technology for installing OpenShift Container Platform. The installation program provides only a limited number of configuration options before installation. Configuring most OpenShift framework components, including the Prometheus Cluster Monitoring stack, happens after installation.
This section explains what configuration is supported, shows how to configure the monitoring stack, and demonstrates several common configuration scenarios.
1.2.1. Prerequisites
- The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources.
1.2.2. Maintenance and support
The supported way of configuring OpenShift Container Platform Monitoring is by using the options described in this document. Do not use other configurations, as they are unsupported. Configuration paradigms might change across Prometheus releases, and such cases can be handled gracefully only if all configuration possibilities are controlled. If you use configurations other than those described in this section, your changes will disappear, because the cluster-monitoring-operator reconciles any differences. The operator resets everything to the defined state by default and by design.
Explicitly unsupported cases include:
- Creating additional ServiceMonitor objects in the openshift-* namespaces. This extends the targets the cluster monitoring Prometheus instance scrapes, which can cause collisions and load differences that cannot be accounted for. These factors might make the Prometheus setup unstable.
- Creating unexpected ConfigMap objects or PrometheusRule objects. This causes the cluster monitoring Prometheus instance to include additional alerting and recording rules.
- Modifying resources of the stack. The Prometheus Monitoring Stack ensures its resources are always in the state it expects them to be. If they are modified, the stack resets them.
- Using resources of the stack for your purposes. The resources created by the Prometheus Cluster Monitoring stack are not meant to be used by any other resources, as there are no guarantees about their backward compatibility.
- Stopping the Cluster Monitoring Operator from reconciling the monitoring stack.
- Adding new alerting rules.
- Modifying the monitoring stack Grafana instance.
1.2.3. Creating a cluster monitoring config map
To configure the OpenShift Container Platform monitoring stack, you must create the cluster monitoring ConfigMap object.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
1. Check whether the cluster-monitoring-config ConfigMap object exists:

   $ oc -n openshift-monitoring get configmap cluster-monitoring-config

2. If the ConfigMap object does not exist:
   a. Create a YAML manifest. In this example the file is called cluster-monitoring-config.yaml.
   b. Apply the configuration to create the ConfigMap object:

      $ oc apply -f cluster-monitoring-config.yaml
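A minimal manifest for this config map looks like the following sketch. An empty config.yaml value is valid and can be filled in later:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
```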
1.2.4. Configuring the cluster monitoring stack
You can configure the Prometheus Cluster Monitoring stack using config maps. Config maps configure the Cluster Monitoring Operator, which in turn configures components of the stack.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have created the cluster-monitoring-config ConfigMap object.
Procedure
1. Start editing the cluster-monitoring-config ConfigMap object:

   $ oc -n openshift-monitoring edit configmap cluster-monitoring-config

2. Put your configuration under data/config.yaml as the key-value pair <component>: <configuration_for_the_component>. Substitute <component> and <configuration_for_the_component> accordingly. For example, you can configure a Persistent Volume Claim (PVC) for Prometheus; in that case, prometheusK8s defines the Prometheus component and the lines nested under it define its configuration.
3. Save the file to apply the changes. The pods affected by the new configuration are restarted automatically.
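A sketch of the PVC example described in this procedure. The 40Gi request size is an illustrative assumption; choose a size that fits your retention needs:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 40Gi
```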
Additional resources
- See Creating a cluster monitoring config map to learn how to create the cluster-monitoring-config ConfigMap object.
1.2.5. Configurable monitoring components
This table shows the monitoring components you can configure and the keys used to specify the components in the config map:
Component | Key |
---|---|
Prometheus Operator | prometheusOperator |
Prometheus | prometheusK8s |
Alertmanager | alertmanagerMain |
kube-state-metrics | kubeStateMetrics |
openshift-state-metrics | openshiftStateMetrics |
Grafana | grafana |
Telemeter Client | telemeterClient |
Prometheus Adapter | k8sPrometheusAdapter |
Thanos Querier | thanosQuerier |
From this list, only Prometheus and Alertmanager have extensive configuration options. All other components usually provide only the nodeSelector field, which deploys them on a specified node.
1.2.6. Moving monitoring components to different nodes
You can move any of the monitoring stack components to specific nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have created the cluster-monitoring-config ConfigMap object.
Procedure
1. Start editing the cluster-monitoring-config ConfigMap object:

   $ oc -n openshift-monitoring edit configmap cluster-monitoring-config

2. Specify the nodeSelector constraint for the component under data/config.yaml. Substitute <component> accordingly, and substitute <node_key>: <node_value> with the map of key-value pairs that specify the destination node. Often, only a single key-value pair is used. The component can run only on a node that has each of the specified key-value pairs as labels; the node can have additional labels as well. For example, you can move components to a node that is labeled foo: bar.
3. Save the file to apply the changes. The components affected by the new configuration are moved to new nodes automatically.
Additional resources
- See Creating a cluster monitoring config map to learn how to create the cluster-monitoring-config ConfigMap object.
- See Placing pods on specific nodes using node selectors for more information about using node selectors.
- See the Kubernetes documentation for details on the nodeSelector constraint.
1.2.7. Assigning tolerations to monitoring components
You can assign tolerations to any of the monitoring stack components to enable moving them to tainted nodes.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have created the cluster-monitoring-config ConfigMap object.
Procedure
1. Start editing the cluster-monitoring-config ConfigMap object:

   $ oc -n openshift-monitoring edit configmap cluster-monitoring-config

2. Specify tolerations for the component. Substitute <component> and <toleration_specification> accordingly. For example, the taint added by oc adm taint nodes node1 key1=value1:NoSchedule prevents the scheduler from placing pods on node1. To make the alertmanagerMain component ignore that taint so that it can be placed on node1 normally, specify a matching toleration.
3. Save the file to apply the changes. The new component placement configuration is applied automatically.
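A sketch of a toleration matching the key1=value1:NoSchedule taint for the alertmanagerMain component:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      tolerations:
      - key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoSchedule"
```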
Additional resources
- See Creating a cluster monitoring config map to learn how to create the cluster-monitoring-config ConfigMap object.
- See the OpenShift Container Platform documentation on taints and tolerations.
- See the Kubernetes documentation on taints and tolerations.
1.2.8. Configuring persistent storage
Running cluster monitoring with persistent storage means that your metrics are stored in a persistent volume (PV) and can survive a pod being restarted or recreated. This is ideal if you require your metrics or alerting data to be protected against data loss. For production environments, configuring persistent storage is highly recommended. Because of the high I/O demands, it is advantageous to use local storage. If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Prometheus cannot use raw block volumes.
1.2.9. Prerequisites
- Dedicate sufficient local persistent storage to ensure that the disk does not become full. How much storage you need depends on the number of pods. For information on system requirements for persistent storage, see Prometheus database storage requirements.
- Make sure you have a persistent volume (PV) ready to be claimed by the persistent volume claim (PVC), one PV for each replica. Because Prometheus has two replicas and Alertmanager has three replicas, you need five PVs to support the entire monitoring stack. The PVs should be available from the Local Storage Operator. This does not apply if you enable dynamically provisioned storage.
- Use the block type of storage.
- Configure local persistent storage.
1.2.9.1. Configuring a local persistent volume claim
For Prometheus or Alertmanager to use a persistent volume (PV), you must first configure a persistent volume claim (PVC).
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have created the cluster-monitoring-config ConfigMap object.
Procedure
1. Edit the cluster-monitoring-config ConfigMap object:

   $ oc -n openshift-monitoring edit configmap cluster-monitoring-config

2. Put your PVC configuration for the component under data/config.yaml. See the Kubernetes documentation on PersistentVolumeClaims for information on how to specify volumeClaimTemplate. For example, you can configure a PVC that claims local persistent storage for Prometheus by referencing the storage class created by the Local Storage Operator, which in this example is called local-storage. You can configure a PVC that claims local persistent storage for Alertmanager in the same way.
3. Save the file to apply the changes. The pods affected by the new configuration are restarted automatically and the new storage configuration is applied.
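A sketch of local PVC configuration for both components, assuming the local-storage storage class mentioned in the procedure. The 40Gi sizes are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          storageClassName: local-storage
          resources:
            requests:
              storage: 40Gi
    alertmanagerMain:
      volumeClaimTemplate:
        spec:
          storageClassName: local-storage
          resources:
            requests:
              storage: 40Gi
```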
1.2.9.2. Modifying retention time for Prometheus metrics data
By default, the Prometheus Cluster Monitoring stack configures the retention time for Prometheus data to be 15 days. You can modify the retention time to change how soon the data is deleted.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have created the cluster-monitoring-config ConfigMap object.
Procedure
1. Start editing the cluster-monitoring-config ConfigMap object:

   $ oc -n openshift-monitoring edit configmap cluster-monitoring-config

2. Put your retention time configuration under data/config.yaml. Substitute <time_specification> with a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). For example, you can configure the retention time to be 24 hours by setting retention: 24h.
3. Save the file to apply the changes. The pods affected by the new configuration are restarted automatically.
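A sketch of the 24-hour retention example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 24h
```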
Additional resources
- See Creating a cluster monitoring config map to learn how to create the cluster-monitoring-config ConfigMap object.
- Understanding persistent storage
- Optimizing storage
1.2.10. Configuring Alertmanager
The Prometheus Alertmanager is a component that manages incoming alerts, including:
- Alert silencing
- Alert inhibition
- Alert aggregation
- Reliable deduplication of alerts
- Grouping alerts
- Sending grouped alerts as notifications through receivers such as email, PagerDuty, and HipChat
1.2.10.1. Alertmanager default configuration
The default configuration of the OpenShift Container Platform Monitoring Alertmanager cluster is as follows:
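The exact defaults can vary by release, but the shape of the default configuration is similar to this sketch: a single route that silences everything through empty receivers, plus a dedicated sub-route for the Watchdog alert:

```yaml
global:
  resolve_timeout: 5m
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  group_by: [job]
  receiver: default
  routes:
  - match:
      alertname: Watchdog
    repeat_interval: 5m
    receiver: watchdog
receivers:
- name: default
- name: watchdog
```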
OpenShift Container Platform monitoring ships with the Watchdog alert, which fires continuously. Alertmanager repeatedly sends notifications for the Watchdog alert to the notification provider, for example, to PagerDuty. The provider is usually configured to notify the administrator when it stops receiving the Watchdog alert. This mechanism helps ensure continuous operation of Prometheus as well as continuous communication between Alertmanager and the notification provider.
1.2.10.2. Applying custom Alertmanager configuration
You can overwrite the default Alertmanager configuration by editing the alertmanager-main secret in the openshift-monitoring namespace.
Prerequisites
- An installed jq tool for processing JSON data
Procedure
1. Print the currently active Alertmanager configuration into the file alertmanager.yaml:

   $ oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml

2. Change the configuration in the file alertmanager.yaml to your new configuration. For example, you can configure PagerDuty for notifications so that alerts of critical severity fired by the example-app service are sent through the team-frontend-page receiver, which means that these alerts are paged to a chosen person.
3. Apply the new configuration in the file:

   $ oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run -o=yaml | oc -n openshift-monitoring replace secret --filename=-
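A sketch of an alertmanager.yaml that adds the team-frontend-page PagerDuty receiver described in the procedure. The surrounding route settings are illustrative, and the service_key placeholder must be replaced with the key from your PagerDuty integration:

```yaml
global:
  resolve_timeout: 5m
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  group_by: [job]
  receiver: default
  routes:
  - match:
      alertname: Watchdog
    repeat_interval: 5m
    receiver: watchdog
  - match:
      service: example-app
      severity: critical
    receiver: team-frontend-page
receivers:
- name: default
- name: watchdog
- name: team-frontend-page
  pagerduty_configs:
  - service_key: "<your_pagerduty_service_key>"
```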
Additional resources
- See the PagerDuty official site for more information on PagerDuty.
- See the PagerDuty Prometheus Integration Guide to learn how to retrieve the service_key.
- See Alertmanager configuration for configuring alerting through different alert receivers.
1.2.10.3. Alerting rules
OpenShift Container Platform Cluster Monitoring by default ships with a set of pre-defined alerting rules.
Note that:
- The default alerting rules are used specifically for the OpenShift Container Platform cluster and nothing else. For example, you get alerts for a persistent volume in the cluster, but not for a persistent volume in your custom namespace.
- Currently you cannot add custom alerting rules.
- Some alerting rules have identical names. This is intentional. They send alerts about the same event with different thresholds, with different severity, or both.
- With the inhibition rules, the lower-severity alert is inhibited when the higher-severity alert is firing.
1.2.10.4. Listing acting alerting rules
You can list the alerting rules that currently apply to the cluster.
Procedure
1. Configure the necessary port forwarding:

   $ oc -n openshift-monitoring port-forward svc/prometheus-operated 9090

2. Fetch the JSON object that contains the acting alerting rules and their properties:

   $ curl -s http://localhost:9090/api/v1/rules | jq '[.data.groups[].rules[] | select(.type=="alerting")]'
Additional resources
- See also the Alertmanager documentation.
1.2.11. Attaching additional labels to your time series and alerts
Using the external labels feature of Prometheus, you can attach additional custom labels to all time series and alerts leaving the Prometheus cluster.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You have created the cluster-monitoring-config ConfigMap object.
Procedure
1. Start editing the cluster-monitoring-config ConfigMap object:

   $ oc -n openshift-monitoring edit configmap cluster-monitoring-config

2. Define a map of labels that you want to add for every metric under data/config.yaml. Substitute <key>: <value> with a map of key-value pairs, where <key> is a unique name for the new label and <value> is its value.

   Warning: Do not use prometheus or prometheus_replica as key names, because they are reserved and would be overwritten.

   For example, you can add metadata about the region and environment to all time series and alerts.
3. Save the file to apply the changes. The new configuration is applied automatically.
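A sketch of the external labels example, using illustrative values eu and prod for the region and environment labels:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      externalLabels:
        region: eu
        environment: prod
```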
Additional resources
- See Creating a cluster monitoring config map to learn how to create the cluster-monitoring-config ConfigMap object.
1.2.12. Next steps
- Manage cluster alerts.
- Learn about remote health reporting and, if necessary, opt out of it.