Monitoring
Monitoring projects on Red Hat OpenShift Service on AWS
Chapter 1. About monitoring
1.1. About Red Hat OpenShift Service on AWS monitoring
In Red Hat OpenShift Service on AWS, you can monitor your own projects in isolation from Red Hat Site Reliability Engineering (SRE) platform metrics. You can monitor your own projects without the need for an additional monitoring solution.
The Red Hat OpenShift Service on AWS (ROSA) monitoring stack is based on the Prometheus open source project and its wider ecosystem.
1.2. Monitoring stack architecture
You can learn about the monitoring stack architecture, which includes default monitoring components and components for monitoring user-defined projects.
1.2.1. Understanding the monitoring stack
The monitoring stack includes the following components:
- Default platform monitoring components
A set of platform monitoring components is installed in the openshift-monitoring project by default during a Red Hat OpenShift Service on AWS installation. These components provide monitoring for core cluster components, including Kubernetes services. The default monitoring stack also enables remote health monitoring for clusters.
- Components for monitoring user-defined projects
If you enable monitoring for user-defined projects, additional monitoring components are installed in the openshift-user-workload-monitoring project. These components provide optional monitoring for user-defined projects.
1.2.1.1. Default monitoring targets
In addition to the components of the stack itself, the default monitoring stack monitors additional platform components.
The following are examples of monitoring targets:
- CoreDNS
- etcd
- HAProxy
- Image registry
- Kubelets
- Kubernetes API server
- Kubernetes controller manager
- Kubernetes scheduler
- OpenShift API server
- OpenShift Controller Manager
- Operator Lifecycle Manager (OLM)
- The exact list of targets can vary depending on your cluster capabilities and installed components.
- Each Red Hat OpenShift Service on AWS component is responsible for its own monitoring configuration. For problems with the monitoring of a Red Hat OpenShift Service on AWS component, open a Jira issue against that component, not against the general monitoring component.
Other Red Hat OpenShift Service on AWS framework components might also expose metrics. For details, see their respective documentation.
1.2.2. Components for monitoring user-defined projects
Red Hat OpenShift Service on AWS includes an optional enhancement to the monitoring stack that helps you monitor services and pods in user-defined projects. This feature includes the following components:
Component | Description
---|---
Prometheus Operator | The Prometheus Operator in the openshift-user-workload-monitoring project creates, configures, and manages Prometheus and Thanos Ruler instances.
Prometheus | Prometheus is the monitoring system that provides monitoring for user-defined projects. Prometheus sends alerts to Alertmanager for processing.
Thanos Ruler | The Thanos Ruler is a rule evaluation engine for Prometheus that is deployed as a separate process. In Red Hat OpenShift Service on AWS, Thanos Ruler provides rule and alerting evaluation for the monitoring of user-defined projects.
Alertmanager | The Alertmanager service handles alerts received from Prometheus and Thanos Ruler. Alertmanager is also responsible for sending user-defined alerts to external notification systems. Deploying this service is optional.
The components in the preceding table are deployed after you enable monitoring for user-defined projects.
The monitoring stack monitors all components for user-defined projects. The components are automatically updated when Red Hat OpenShift Service on AWS is updated.
1.2.2.1. Monitoring targets for user-defined projects
When monitoring is enabled for user-defined projects, you can monitor:
- Metrics provided through service endpoints in user-defined projects.
- Pods running in user-defined projects.
1.2.3. The monitoring stack in high-availability clusters
By default, in multi-node clusters, the following components run in high-availability (HA) mode to prevent data loss and service interruption:
- Prometheus
- Alertmanager
- Thanos Ruler
- Thanos Querier
- Metrics Server
- Monitoring plugin
Each component is replicated across two pods, with each pod running on a separate node. This means that the monitoring stack can tolerate the loss of one pod.
- Prometheus in HA mode
- Both replicas independently scrape the same targets and evaluate the same rules.
- The replicas do not communicate with each other. Therefore, data might differ between the pods.
- Alertmanager in HA mode
- The two replicas synchronize notification and silence states with each other. This ensures that each notification is sent at least once.
- If the replicas fail to communicate or if there is an issue on the receiving side, notifications are still sent, but they might be duplicated.
Prometheus, Alertmanager, and Thanos Ruler are stateful components. To ensure high availability, you must configure them with persistent storage.
1.2.4. Glossary of common terms for Red Hat OpenShift Service on AWS monitoring
This glossary defines common terms that are used in Red Hat OpenShift Service on AWS monitoring.
- Alertmanager
- Alertmanager handles alerts received from Prometheus. Alertmanager is also responsible for sending the alerts to external notification systems.
- Alerting rules
- Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed.
- Cluster Monitoring Operator
- The Cluster Monitoring Operator (CMO) is a central component of the monitoring stack. It deploys and manages Prometheus instances, the Thanos Querier, the Telemeter Client, and metrics targets, and ensures that they are up to date. The CMO is deployed by the Cluster Version Operator (CVO).
- Cluster Version Operator
- The Cluster Version Operator (CVO) manages the lifecycle of cluster Operators, many of which are installed in Red Hat OpenShift Service on AWS by default.
- config map
- A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap. Applications running in a pod can use this data.
- Container
- A container is a lightweight and executable image that includes software and all its dependencies. Containers virtualize the operating system. As a result, you can run containers anywhere from a data center to a public or private cloud as well as a developer’s laptop.
- custom resource (CR)
- A CR is an extension of the Kubernetes API. You can create custom resources.
- etcd
- etcd is the key-value store for Red Hat OpenShift Service on AWS, which stores the state of all resource objects.
- Fluentd
- Fluentd is a log collector that resides on each Red Hat OpenShift Service on AWS node. It gathers application, infrastructure, and audit logs and forwards them to different outputs.
Note: Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.
- Kubelets
- The kubelet runs on nodes and reads the container manifests. It ensures that the defined containers have started and are running.
- Kubernetes API server
- Kubernetes API server validates and configures data for the API objects.
- Kubernetes controller manager
- Kubernetes controller manager governs the state of the cluster.
- Kubernetes scheduler
- Kubernetes scheduler allocates pods to nodes.
- labels
- Labels are key-value pairs that you can use to organize and select subsets of objects such as a pod.
- Metrics Server
- The Metrics Server monitoring component collects resource metrics and exposes them in the metrics.k8s.io Metrics API service for use by other tools and APIs. This frees the core platform Prometheus stack from handling this functionality.
- node
- A compute machine in the Red Hat OpenShift Service on AWS cluster. A node is either a virtual machine (VM) or a physical machine.
- Operator
- The preferred method of packaging, deploying, and managing a Kubernetes application in a Red Hat OpenShift Service on AWS cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers.
- Operator Lifecycle Manager (OLM)
- OLM helps you install, update, and manage the lifecycle of Kubernetes native applications. OLM is an open source toolkit designed to manage Operators in an effective, automated, and scalable way.
- Persistent storage
- Stores the data even after the device is shut down. Kubernetes uses persistent volumes to store the application data.
- Persistent volume claim (PVC)
- You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment.
- pod
- The pod is the smallest logical unit in Kubernetes. A pod is comprised of one or more containers to run in a worker node.
- Prometheus
- Prometheus is the monitoring system on which the Red Hat OpenShift Service on AWS monitoring stack is based. Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing.
- Prometheus Operator
- The Prometheus Operator in the openshift-monitoring project creates, configures, and manages platform Prometheus and Alertmanager instances. It also automatically generates monitoring target configurations based on Kubernetes label queries.
- Silences
- A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the underlying issue.
- storage
- Red Hat OpenShift Service on AWS supports many types of storage, for both on-premises and cloud providers. You can manage container storage for persistent and non-persistent data in a Red Hat OpenShift Service on AWS cluster.
- Thanos Ruler
- The Thanos Ruler is a rule evaluation engine for Prometheus that is deployed as a separate process. In Red Hat OpenShift Service on AWS, Thanos Ruler provides rule and alerting evaluation for the monitoring of user-defined projects.
- Vector
- Vector is a log collector that deploys to each Red Hat OpenShift Service on AWS node. It collects log data from each node, transforms the data, and forwards it to configured outputs.
- web console
- A user interface (UI) to manage Red Hat OpenShift Service on AWS.
1.3. Understanding the monitoring stack - key concepts
Get familiar with the Red Hat OpenShift Service on AWS monitoring concepts and terms. Learn about how you can improve performance and scale of your cluster, store and record data, manage metrics and alerts, and more.
1.3.1. About performance and scalability
You can optimize the performance and scale of your clusters. You can configure the monitoring stack by performing any of the following actions:
Control the placement and distribution of monitoring components:
- Use node selectors to move components to specific nodes.
- Assign tolerations to enable moving components to tainted nodes.
- Use pod topology spread constraints.
- Manage CPU and memory resources.
1.3.1.1. Using node selectors to move monitoring components
By using the nodeSelector constraint with labeled nodes, you can move any of the monitoring stack components to specific nodes. By doing so, you can control the placement and distribution of the monitoring components across a cluster.
By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies.
How node selectors work with other constraints
If you move monitoring components by using node selector constraints, be aware that other constraints to control pod scheduling might exist for a cluster:
- Topology spread constraints might be in place to control pod placement.
- Hard anti-affinity rules are in place for Prometheus, Alertmanager, and other monitoring components to ensure that multiple pods for these components are always spread across different nodes and are therefore always highly available.
When scheduling pods onto nodes, the pod scheduler tries to satisfy all existing constraints when determining pod placement. That is, all constraints compound when the pod scheduler determines which pods will be placed on which nodes.
Therefore, if you configure a node selector constraint but existing constraints cannot all be satisfied, the pod scheduler cannot match all constraints and will not schedule a pod for placement onto a node.
To maintain resilience and high availability for monitoring components, ensure that enough nodes are available and match all constraints when you configure a node selector constraint to move a component.
1.3.1.2. About pod topology spread constraints for monitoring
You can use pod topology spread constraints to control how the monitoring pods are spread across a network topology when Red Hat OpenShift Service on AWS pods are deployed in multiple availability zones.
Pod topology spread constraints are suitable for controlling pod scheduling within hierarchical topologies in which nodes are spread across different infrastructure levels, such as regions and zones within those regions. Additionally, by being able to schedule pods in different zones, you can improve network latency in certain scenarios.
You can configure pod topology spread constraints for all the pods deployed by the Cluster Monitoring Operator to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.
1.3.1.3. About specifying limits and requests for monitoring components
You can configure resource limits and requests for the following components that monitor user-defined projects:
- Alertmanager
- Prometheus
- Thanos Ruler
By defining the resource limits, you limit a container’s resource usage, which prevents the container from exceeding the specified maximum values for CPU and memory resources.
By defining the resource requests, you specify that a container can be scheduled only on a node that has enough CPU and memory resources available to match the requested resources.
1.3.2. About storing and recording data
You can store and record data to help you protect the data and use it for troubleshooting. You can configure the monitoring stack by performing any of the following actions:
Configure persistent storage:
- Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated.
- Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted.
- Modify the retention time and size for Prometheus and Thanos Ruler metrics data.
Configure logging to help you troubleshoot issues with your cluster:
- Set log levels for monitoring.
- Enable the query logging for Prometheus and Thanos Querier.
1.3.2.1. Retention time and size for Prometheus metrics
By default, Prometheus retains metrics data for the following durations:
- Core platform monitoring: 15 days
- Monitoring for user-defined projects: 24 hours
You can modify the retention time for the Prometheus instance to change how soon the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. If the data reaches this size limit, Prometheus deletes the oldest data first until the disk space used is again below the limit.
Note the following behaviors of these data retention settings:
- The size-based retention policy applies to all data block directories in the /prometheus directory, including persistent blocks, write-ahead log (WAL) data, and m-mapped chunks.
- Data in the /wal and /head_chunks directories counts toward the retention size limit, but Prometheus never purges data from these directories based on size- or time-based retention policies. Thus, if you set a retention size limit lower than the maximum size set for the /wal and /head_chunks directories, you have configured the system not to retain any data blocks in the /prometheus data directories.
- The size-based retention policy is applied only when Prometheus cuts a new data block, which occurs every two hours after the WAL contains at least three hours of data.
- If you do not explicitly define values for either retention or retentionSize, retention time defaults to 15 days for core platform monitoring and 24 hours for user-defined project monitoring. Retention size is not set.
- If you define values for both retention and retentionSize, both values apply. If any data blocks exceed the defined retention time or the defined size limit, Prometheus purges these data blocks.
- If you define a value for retentionSize and do not define retention, only the retentionSize value applies.
- If you do not define a value for retentionSize and only define a value for retention, only the retention value applies.
- If you set the retentionSize or retention value to 0, the default settings apply. The default settings set retention time to 15 days for core platform monitoring and 24 hours for user-defined project monitoring. By default, retention size is not set.
Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize limit. In such cases, the KubePersistentVolumeFillingUp alert fires until the space on a PV is lower than the retentionSize limit.
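For user-defined project monitoring, both values are set in the user-workload-monitoring-config config map. A minimal sketch, with illustrative values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      retention: 24h       # time-based retention
      retentionSize: 10GB  # size-based retention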
1.3.3. Understanding metrics
In Red Hat OpenShift Service on AWS 4, cluster components are monitored by scraping metrics exposed through service endpoints. You can also configure metrics collection for user-defined projects. Metrics enable you to monitor how cluster components and your own workloads are performing.
You can define the metrics that you want to provide for your own workloads by using Prometheus client libraries at the application level.
In Red Hat OpenShift Service on AWS, metrics are exposed through an HTTP service endpoint under the /metrics canonical name. You can list all available metrics for a service by running a curl query against http://<endpoint>/metrics. For instance, you can expose a route to the prometheus-example-app example application and then run the following to view all of its available metrics:
$ curl http://<example_app_endpoint>/metrics
Example output
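The output might look like the following for a simple example application; the metric names and values are illustrative:

# HELP http_requests_total Count of all HTTP requests
# TYPE http_requests_total counter
http_requests_total{code="200",method="get"} 4
http_requests_total{code="404",method="get"} 2
# HELP version Version information about this binary
# TYPE version gauge
version{version="v0.1.0"} 1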
1.3.3.1. Controlling the impact of unbound metrics attributes in user-defined projects
Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values.
Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space.
Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects:
- Limit the number of samples that can be accepted per target scrape in user-defined projects
- Limit the number of scraped labels, the length of label names, and the length of label values
- Configure the intervals between consecutive scrapes and between Prometheus rule evaluations
- Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped
Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations.
1.3.3.2. Adding cluster ID labels to metrics
If you manage multiple Red Hat OpenShift Service on AWS clusters and use the remote write feature to send metrics data from these clusters to an external storage location, you can add cluster ID labels to identify the metrics data coming from different clusters. You can then query these labels to identify the source cluster for a metric and distinguish that data from similar metrics data sent by other clusters.
This way, if you manage many clusters for multiple customers and send metrics data to a single centralized storage system, you can use cluster ID labels to query metrics for a particular cluster or customer.
Creating and using cluster ID labels involves three general steps:
- Configuring the write relabel settings for remote write storage.
- Adding cluster ID labels to the metrics.
- Querying these labels to identify the source cluster or customer for a metric.
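A sketch of the first step, assuming remote write is configured in the user-workload-monitoring-config config map. The cluster_id label name and the endpoint URL are illustrative; __tmp_openshift_cluster_id__ is the temporary label that exposes the cluster ID for relabeling:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"  # illustrative endpoint
        writeRelabelConfigs:
        - sourceLabels: [__tmp_openshift_cluster_id__]
          targetLabel: cluster_id  # label added to every outgoing metric
          action: replace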
1.3.4. About monitoring dashboards
Red Hat OpenShift Service on AWS provides a set of monitoring dashboards that help you understand the state of cluster components and user-defined workloads.
Starting with Red Hat OpenShift Service on AWS 4.19, the perspectives in the web console are unified. The Developer perspective is no longer enabled by default.
All users can interact with all Red Hat OpenShift Service on AWS web console features. However, if you are not the cluster owner, you might need to request permission to access certain features from the cluster owner.
You can still enable the Developer perspective. On the Getting Started pane in the web console, you can take a tour of the console, find information on setting up your cluster, view a quick start for enabling the Developer perspective, and follow links to explore new features and capabilities.
As an administrator, you can access dashboards for the core Red Hat OpenShift Service on AWS components, including the following items:
- API performance
- etcd
- Kubernetes compute resources
- Kubernetes network resources
- Prometheus
- USE method dashboards relating to cluster and node performance
- Node performance metrics
1.3.5. Managing alerts
In Red Hat OpenShift Service on AWS, the Alerting UI enables you to manage alerts, silences, and alerting rules.
- Alerting rules. Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed.
- Alerts. An alert is fired when the conditions defined in an alerting rule are true. Alerts provide a notification that a set of circumstances are apparent within an Red Hat OpenShift Service on AWS cluster.
- Silences. A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the issue.
The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. For example, if you are logged in as a user with the cluster-admin role, you can access all alerts, silences, and alerting rules.
1.3.5.1. Managing silences
You can create a silence for an alert in the Red Hat OpenShift Service on AWS web console. After you create a silence, you will not receive notifications about an alert when the alert fires.
Creating silences is useful in scenarios where you have received an initial alert notification, and you do not want to receive further notifications during the time in which you resolve the underlying issue causing the alert to fire.
When creating a silence, you must specify whether it becomes active immediately or at a later time. You must also set a duration period after which the silence expires.
After you create silences, you can view, edit, and expire them.
When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time.
1.3.5.2. Creating alerting rules for user-defined projects
In Red Hat OpenShift Service on AWS, you can create alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics.
If you create alerting rules for a user-defined project, consider the following key behaviors and important limitations when you define the new rules:
A user-defined alerting rule can include metrics exposed by its own project in addition to the default metrics from core platform monitoring. You cannot include metrics from another user-defined project.
For example, an alerting rule for the ns1 user-defined project can use metrics exposed by the ns1 project in addition to core platform metrics, such as CPU and memory metrics. However, the rule cannot include metrics from a different ns2 user-defined project.
- By default, when you create an alerting rule, the namespace label is enforced on it even if a rule with the same name exists in another project. To create alerting rules that are not bound to their project of origin, see "Creating cross-project alerting rules for user-defined projects".
- To reduce latency and to minimize the load on core platform monitoring components, you can add the openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus label to a rule. This label forces only the Prometheus instance deployed in the openshift-user-workload-monitoring project to evaluate the alerting rule and prevents the Thanos Ruler instance from doing so.
Important: If an alerting rule has this label, your alerting rule can use only those metrics exposed by your user-defined project. Alerting rules you create based on default platform metrics might not trigger alerts.
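As an illustration, a minimal PrometheusRule object for the ns1 project follows. The alert name, metric, and threshold are hypothetical:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alert
  namespace: ns1
spec:
  groups:
  - name: example
    rules:
    - alert: VersionAlert  # hypothetical alert name
      for: 1m              # optional delay before the alert fires
      expr: version{job="prometheus-example-app"} == 0  # hypothetical expression
      labels:
        severity: warning
      annotations:
        message: This is an example alert.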
1.3.5.3. Managing alerting rules for user-defined projects
In Red Hat OpenShift Service on AWS, you can view, edit, and remove alerting rules in user-defined projects.
Alerting rule considerations
- The default alerting rules are used specifically for the Red Hat OpenShift Service on AWS cluster.
- Some alerting rules intentionally have identical names. They send alerts about the same event with different thresholds, different severity, or both.
- Inhibition rules prevent notifications for lower severity alerts that are firing when a higher severity alert is also firing.
1.3.5.4. Optimizing alerting for user-defined projects
You can optimize alerting for your own projects by considering the following recommendations when creating alerting rules:
- Minimize the number of alerting rules that you create for your project. Create alerting rules that notify you of conditions that impact you. It is more difficult to notice relevant alerts if you generate many alerts for conditions that do not impact you.
- Create alerting rules for symptoms instead of causes. Create alerting rules that notify you of conditions regardless of the underlying cause. The cause can then be investigated. You will need many more alerting rules if each relates only to a specific cause. Some causes are then likely to be missed.
- Plan before you write your alerting rules. Determine what symptoms are important to you and what actions you want to take if they occur. Then build an alerting rule for each symptom.
- Provide clear alert messaging. State the symptom and recommended actions in the alert message.
- Include severity levels in your alerting rules. The severity of an alert depends on how you need to react if the reported symptom occurs. For example, a critical alert should be triggered if a symptom requires immediate attention by an individual or a critical response team.
1.3.5.5. Searching and filtering alerts, silences, and alerting rules
You can filter the alerts, silences, and alerting rules that are displayed in the Alerting UI. This section provides a description of each of the available filtering options.
1.3.5.5.1. Understanding alert filters
The Alerts page in the Alerting UI provides details about alerts relating to default Red Hat OpenShift Service on AWS and user-defined projects. The page includes a summary of severity, state, and source for each alert. The time at which an alert went into its current state is also shown.
You can filter by alert state, severity, and source. By default, only Platform alerts that are Firing are displayed. The following describes each alert filtering option:
State filters:
- Firing. The alert is firing because the alert condition is true and the optional for duration has passed. The alert continues to fire while the condition remains true.
- Pending. The alert is active but is waiting for the duration that is specified in the alerting rule before it fires.
- Silenced. The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications are not sent for alerts that match all the listed values or regular expressions.
Severity filters:
- Critical. The condition that triggered the alert could have a critical impact. The alert requires immediate attention when fired and is typically paged to an individual or to a critical response team.
- Warning. The alert provides a warning notification about something that might require attention to prevent a problem from occurring. Warnings are typically routed to a ticketing system for non-immediate review.
- Info. The alert is provided for informational purposes only.
- None. The alert has no defined severity.
- You can also create custom severity definitions for alerts relating to user-defined projects.
Source filters:
- Platform. Platform-level alerts relate only to default Red Hat OpenShift Service on AWS projects. These projects provide core Red Hat OpenShift Service on AWS functionality.
- User. User alerts relate to user-defined projects. These alerts are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads.
1.3.5.5.2. Understanding silence filters
The Silences page in the Alerting UI provides details about silences applied to alerts in default Red Hat OpenShift Service on AWS and user-defined projects. The page includes a summary of the state of each silence and the time at which a silence ends.
You can filter by silence state. By default, only Active and Pending silences are displayed. The following describes each silence state filter option:
State filters:
- Active. The silence is active and the alert will be muted until the silence expires.
- Pending. The silence has been scheduled and it is not yet active.
- Expired. The silence has expired and notifications will be sent if the conditions for an alert are true.
1.3.5.5.3. Understanding alerting rule filters
The Alerting rules page in the Alerting UI provides details about alerting rules relating to default Red Hat OpenShift Service on AWS and user-defined projects. The page includes a summary of the state, severity, and source for each alerting rule.
You can filter alerting rules by alert state, severity, and source. By default, only Platform alerting rules are displayed. The following describes each alerting rule filtering option:
Alert state filters:
- Firing. The alert is firing because the alert condition is true and the optional for duration has passed. The alert continues to fire while the condition remains true.
- Pending. The alert is active but is waiting for the duration that is specified in the alerting rule before it fires.
- Silenced. The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications are not sent for alerts that match all the listed values or regular expressions.
- Not Firing. The alert is not firing.
Severity filters:
- Critical. The conditions defined in the alerting rule could have a critical impact. When true, these conditions require immediate attention. Alerts relating to the rule are typically paged to an individual or to a critical response team.
- Warning. The conditions defined in the alerting rule might require attention to prevent a problem from occurring. Alerts relating to the rule are typically routed to a ticketing system for non-immediate review.
- Info. The alerting rule provides informational alerts only.
- None. The alerting rule has no defined severity.
- You can also create custom severity definitions for alerting rules relating to user-defined projects.
Source filters:
- Platform. Platform-level alerting rules relate only to default Red Hat OpenShift Service on AWS projects. These projects provide core Red Hat OpenShift Service on AWS functionality.
- User. User-defined workload alerting rules relate to user-defined projects. These alerting rules are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads.
1.3.6. Understanding alert routing for user-defined projects
As a cluster administrator, you can enable alert routing for user-defined projects. With this feature, you can allow users with the alert-routing-edit cluster role to configure alert notification routing and receivers for user-defined projects. These notifications are routed by the default Alertmanager instance or, if enabled, an optional Alertmanager instance dedicated to user-defined monitoring.
Users can then create and configure user-defined alert routing by creating or editing the AlertmanagerConfig objects for their user-defined projects without the help of an administrator.
After a user has defined alert routing for a user-defined project, user-defined alert notifications are routed as follows:
- To the alertmanager-main pods in the openshift-monitoring namespace if using the default platform Alertmanager instance.
- To the alertmanager-user-workload pods in the openshift-user-workload-monitoring namespace if you have enabled a separate instance of Alertmanager for user-defined projects.
Review the following limitations of alert routing for user-defined projects:
- For user-defined alerting rules, user-defined routing is scoped to the namespace in which the resource is defined. For example, a routing configuration in namespace ns1 only applies to PrometheusRules resources in the same namespace.
- When a namespace is excluded from user-defined monitoring, AlertmanagerConfig resources in the namespace cease to be part of the Alertmanager configuration.
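A sketch of an AlertmanagerConfig object for a user-defined project; the receiver name and webhook URL are hypothetical:

apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: example-routing
  namespace: ns1
spec:
  route:
    receiver: example-receiver  # route alerts from this namespace to the receiver below
  receivers:
  - name: example-receiver
    webhookConfigs:
    - url: https://example.org/post  # hypothetical webhook endpoint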
1.3.7. Sending notifications to external systems
In Red Hat OpenShift Service on AWS 4, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure Red Hat OpenShift Service on AWS to send alerts to the following receiver types:
- PagerDuty
- Webhook
- Slack
- Microsoft Teams
Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review.
Checking that alerting is operational by using the watchdog alert
Red Hat OpenShift Service on AWS monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider.
Chapter 2. Getting started
2.1. Maintenance and support for monitoring
Not all configuration options for the monitoring stack are exposed. The only supported way of configuring Red Hat OpenShift Service on AWS monitoring is by configuring the Cluster Monitoring Operator (CMO) using the options described in the Config map reference for the Cluster Monitoring Operator. Do not use other configurations, as they are unsupported.
Configuration paradigms might change across Prometheus releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in the Config map reference for the Cluster Monitoring Operator, your changes will disappear because the CMO automatically reconciles any differences and resets any unsupported changes back to the originally defined state by default and by design.
Installing another Prometheus instance is not supported by the Red Hat Site Reliability Engineers (SRE).
2.1.1. Support considerations for monitoring
Backward compatibility for metrics, recording rules, or alerting rules is not guaranteed.
The following modifications are explicitly not supported:
- Creating additional ServiceMonitor, PodMonitor, and PrometheusRule objects in the openshift-* and kube-* projects.
- Modifying any resources or objects deployed in the openshift-monitoring or openshift-user-workload-monitoring projects. The resources created by the Red Hat OpenShift Service on AWS monitoring stack are not meant to be used by any other resources, as there are no guarantees about their backward compatibility.
Note: The Alertmanager configuration is deployed as the alertmanager-main secret resource in the openshift-monitoring namespace. If you have enabled a separate Alertmanager instance for user-defined alert routing, an Alertmanager configuration is also deployed as the alertmanager-user-workload secret resource in the openshift-user-workload-monitoring namespace. To configure additional routes for any instance of Alertmanager, you need to decode, modify, and then encode that secret. This procedure is a supported exception to the preceding statement.
- Modifying resources of the stack. The Red Hat OpenShift Service on AWS monitoring stack ensures its resources are always in the state it expects them to be. If they are modified, the stack will reset them.
- Deploying user-defined workloads to openshift-* and kube-* projects. These projects are reserved for Red Hat provided components and they should not be used for user-defined workloads.
- Enabling symptom based monitoring by using the Probe custom resource definition (CRD) in Prometheus Operator.
- Manually deploying monitoring resources into namespaces that have the openshift.io/cluster-monitoring: "true" label.
- Adding the openshift.io/cluster-monitoring: "true" label to namespaces. This label is reserved only for the namespaces with core Red Hat OpenShift Service on AWS components and Red Hat certified components.
- Installing custom Prometheus instances on Red Hat OpenShift Service on AWS. A custom instance is a Prometheus custom resource (CR) managed by the Prometheus Operator.
2.1.2. Support version matrix for monitoring components
The following matrix contains information about versions of monitoring components for Red Hat OpenShift Service on AWS 4.12 and later releases:
Red Hat OpenShift Service on AWS | Prometheus Operator | Prometheus | Metrics Server | Alertmanager | kube-state-metrics agent | monitoring-plugin | node-exporter agent | Thanos |
---|---|---|---|---|---|---|---|---|
4.19 | 0.81.0 | 3.2.1 | 0.7.2 | 0.28.1 | 2.15.0 | 1.0.0 | 1.9.1 | 0.37.2 |
4.18 | 0.78.1 | 2.55.1 | 0.7.2 | 0.27.0 | 2.13.0 | 1.0.0 | 1.8.2 | 0.36.1 |
4.17 | 0.75.2 | 2.53.1 | 0.7.1 | 0.27.0 | 2.13.0 | 1.0.0 | 1.8.2 | 0.35.1 |
4.16 | 0.73.2 | 2.52.0 | 0.7.1 | 0.26.0 | 2.12.0 | 1.0.0 | 1.8.0 | 0.35.0 |
4.15 | 0.70.0 | 2.48.0 | 0.6.4 | 0.26.0 | 2.10.1 | 1.0.0 | 1.7.0 | 0.32.5 |
4.14 | 0.67.1 | 2.46.0 | N/A | 0.25.0 | 2.9.2 | 1.0.0 | 1.6.1 | 0.30.2 |
4.13 | 0.63.0 | 2.42.0 | N/A | 0.25.0 | 2.8.1 | N/A | 1.5.0 | 0.30.2 |
4.12 | 0.60.1 | 2.39.1 | N/A | 0.24.0 | 2.6.0 | N/A | 1.4.0 | 0.28.1 |
The openshift-state-metrics agent and Telemeter Client are OpenShift-specific components. Therefore, their versions correspond with the versions of Red Hat OpenShift Service on AWS.
2.2. Accessing monitoring for user-defined projects
When you install a Red Hat OpenShift Service on AWS cluster, monitoring for user-defined projects is enabled by default. With monitoring for user-defined projects enabled, you can monitor your own projects without the need for an additional monitoring solution.
The dedicated-admin user has default permissions to configure and access monitoring for user-defined projects.
Custom Prometheus instances and the Prometheus Operator installed through Operator Lifecycle Manager (OLM) can cause issues with user-defined project monitoring if it is enabled. Custom Prometheus instances are not supported.
Optionally, you can disable monitoring for user-defined projects during or after a cluster installation.
2.3. Disabling monitoring for user-defined projects
As a dedicated-admin, you can disable monitoring for user-defined projects. You can also exclude individual projects from user workload monitoring.
2.3.1. Disabling monitoring for user-defined projects
By default, monitoring for user-defined projects is enabled. If you do not want to use the built-in monitoring stack to monitor user-defined projects, you can disable it.
Prerequisites
- You have logged in to OpenShift Cluster Manager.
Procedure
- From the OpenShift Cluster Manager Hybrid Cloud Console, select a cluster.
- Click the Settings tab.
Click the Enable user workload monitoring check box to unselect the option, and then click Save.
User workload monitoring is disabled. The Prometheus, Prometheus Operator, and Thanos Ruler components are stopped in the openshift-user-workload-monitoring project.
2.3.2. Excluding a user-defined project from monitoring
Individual user-defined projects can be excluded from user workload monitoring. To do so, add the openshift.io/user-monitoring label to the project's namespace with a value of false.
Procedure
Add the label to the project namespace:
$ oc label namespace my-project 'openshift.io/user-monitoring=false'
To re-enable monitoring, remove the label from the namespace:
$ oc label namespace my-project 'openshift.io/user-monitoring-'
Note: If there were any active monitoring targets for the project, it may take a few minutes for Prometheus to stop scraping them after adding the label.
Chapter 3. Configuring user workload monitoring
3.1. Preparing to configure the user workload monitoring stack
This section explains which user-defined monitoring components can be configured and how to prepare for configuring the user workload monitoring stack.
- Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in the Config map reference for the Cluster Monitoring Operator are supported for configuration.
You cannot disable workload monitoring due to requirements for the Cluster Monitoring Operator.
3.1.1. Configurable monitoring components
This table shows the monitoring components you can configure and the keys used to specify the components in the user-workload-monitoring-config config map.
Component | user-workload-monitoring-config config map key
---|---
Prometheus Operator | prometheusOperator
Prometheus | prometheus
Alertmanager | alertmanager
Thanos Ruler | thanosRuler
Different configuration changes to the ConfigMap object result in different outcomes:
- The pods are not redeployed. Therefore, there is no service outage.
The affected pods are redeployed:
- For single-node clusters, this results in a temporary service outage.
- For multi-node clusters, because of high availability, the affected pods are gradually rolled out and the monitoring stack remains available.
- Configuring and resizing a persistent volume always results in a service outage, regardless of high availability.
Each procedure that requires a change in the config map includes its expected outcome.
3.1.2. Enabling alert routing for user-defined projects
In Red Hat OpenShift Service on AWS, an administrator can enable alert routing for user-defined projects. This process consists of the following steps:
- Enable alert routing for user-defined projects to use a separate Alertmanager instance.
- Grant users permission to configure alert routing for user-defined projects.
After you complete these steps, developers and other users can configure custom alerts and alert routing for their user-defined projects.
3.1.2.1. Enabling a separate Alertmanager instance for user-defined alert routing
In some clusters, you might want to deploy a dedicated Alertmanager instance for user-defined projects, which can help reduce the load on the default platform Alertmanager instance and can better separate user-defined alerts from default platform alerts. In these cases, you can optionally enable a separate instance of Alertmanager to send alerts for user-defined projects only.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config ConfigMap object:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Add enabled: true and enableAlertmanagerConfig: true in the alertmanager section under data/config.yaml, as shown in the following example:
to enable a dedicated instance of the Alertmanager for user-defined projects in a cluster. Set the value tofalse
or omit the key entirely to disable the Alertmanager for user-defined projects. If you set this value tofalse
or if the key is omitted, user-defined alerts are routed to the default platform Alertmanager instance. - 2
- Set the
enableAlertmanagerConfig
value totrue
to enable users to define their own alert routing configurations withAlertmanagerConfig
objects.
- Save the file to apply the changes. The dedicated instance of Alertmanager for user-defined projects starts automatically.
Verification
Verify that the user-workload Alertmanager instance has started:
$ oc -n openshift-user-workload-monitoring get alertmanager
Example output
NAME            VERSION   REPLICAS   AGE
user-workload   0.24.0    2          100s
3.1.2.2. Granting users permission to configure alert routing for user-defined projects
You can grant users permission to configure alert routing for user-defined projects.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have enabled monitoring for user-defined projects.
- The user account that you are assigning the role to already exists.
- You have installed the OpenShift CLI (oc).
Procedure
Assign the alert-routing-edit cluster role to a user in the user-defined project:
$ oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user>
For <namespace>, substitute the namespace for the user-defined project, such as ns1. For <user>, substitute the username for the account to which you want to assign the role.
3.2. Configuring performance and scalability for user workload monitoring
You can configure the monitoring stack to optimize the performance and scale of your clusters. The following documentation provides information about how to distribute the monitoring components and control the impact of the monitoring stack on CPU and memory resources.
3.2.1. Controlling the placement and distribution of monitoring components
You can move the monitoring stack components to specific nodes:
- Use the nodeSelector constraint with labeled nodes to move any of the monitoring stack components to specific nodes.
- Assign tolerations to enable moving components to tainted nodes.
By doing so, you control the placement and distribution of the monitoring components across a cluster.
By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies.
3.2.1.1. Moving monitoring components to different nodes
You can move any of the components that monitor workloads for user-defined projects to specific worker nodes.
It is not permitted to move components to control plane or infrastructure nodes.
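The node selector is set per component in the user-workload-monitoring-config config map. A minimal sketch with placeholders; the procedure below explains how to substitute them:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    <component>:
      nodeSelector:
        <node_label_1>
        <node_label_2>  # optional additional label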
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
Procedure
If you have not done so yet, add a label to the nodes on which you want to run the monitoring components:
$ oc label nodes <node_name> <node_label>
Replace <node_name> with the name of the node where you want to add the label. Replace <node_label> with the name of the wanted label.
Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Specify the node labels for the nodeSelector constraint for the component under data/config.yaml, following the sketch that precedes the prerequisites:
- Substitute <component> with the appropriate monitoring stack component name.
- Substitute <node_label_1> with the label you added to the node.
- Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels.
Note: If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations.
Save the file to apply the changes. The components specified in the new configuration are automatically moved to the new nodes, and the pods affected by the new configuration are redeployed.
3.2.1.2. Assigning tolerations to monitoring components
You can assign tolerations to the components that monitor user-defined projects, to enable moving them to tainted worker nodes. Scheduling is not permitted on control plane or infrastructure nodes.
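For example, the following sketch configures the thanosRuler component to tolerate a taint with key key1 and value value1; the procedure below shows how such a taint is created:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      tolerations:
      - key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoSchedule"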
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Specify tolerations for the component, substituting <component> and <toleration_specification> accordingly.
For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1. This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The sketch that precedes these prerequisites configures the thanosRuler component to tolerate that example taint.
Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
3.2.2. Managing CPU and memory resources for monitoring components
You can ensure that the containers that run monitoring components have enough CPU and memory resources by specifying values for resource limits and requests for those components.
You can configure these limits and requests for monitoring components that monitor user-defined projects in the openshift-user-workload-monitoring namespace.
3.2.2.1. Specifying limits and requests
To configure CPU and memory resources, specify values for resource limits and requests in the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring namespace.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Add values to define resource limits and requests for each component you want to configure.
Important: Ensure that the value set for a limit is always higher than the value set for a request. Otherwise, an error will occur, and the container will not run.
Example of setting resource limits and requests
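A sketch with illustrative values for the three configurable components:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    alertmanager:
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 200m
          memory: 500Mi
    prometheus:
      resources:
        limits:
          cpu: 500m
          memory: 3Gi
        requests:
          cpu: 200m
          memory: 500Mi
    thanosRuler:
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 200m
          memory: 500Mi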
- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
3.2.3. Controlling the impact of unbound metrics attributes in user-defined projects
Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects:
- Limit the number of samples that can be accepted per target scrape in user-defined projects
- Limit the number of scraped labels, the length of label names, and the length of label values
- Configure the intervals between consecutive scrapes and between Prometheus rule evaluations
Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations.
3.2.3.1. Setting scrape intervals, evaluation intervals, and enforced limits for user-defined projects
You can set the following scrape and label limits for user-defined projects:
- Limit the number of samples that can be accepted per target scrape
- Limit the number of scraped labels
- Limit the length of label names and label values
You can also set an interval between consecutive scrapes and between Prometheus rule evaluations.
If you set sample or label limits, no further sample data is ingested for that target scrape after the limit is reached.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Add the enforced limit and time interval configurations to data/config.yaml:
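A sketch of the expected shape; the values are illustrative and the callout numbers below explain each field:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      enforcedSampleLimit: 50000 # 1
      enforcedLabelLimit: 500 # 2
      enforcedLabelNameLengthLimit: 50 # 3
      enforcedLabelValueLengthLimit: 600 # 4
      scrapeInterval: 1m30s # 5
      evaluationInterval: 1m15s # 6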
1. A value is required if this parameter is specified. This enforcedSampleLimit example limits the number of samples that can be accepted per target scrape in user-defined projects to 50,000.
2. Specifies the maximum number of labels per scrape. The default value is 0, which specifies no limit.
3. Specifies the maximum character length for a label name. The default value is 0, which specifies no limit.
4. Specifies the maximum character length for a label value. The default value is 0, which specifies no limit.
5. Specifies the interval between consecutive scrapes. The interval must be set between 5 seconds and 5 minutes. The default value is 30s.
6. Specifies the interval between Prometheus rule evaluations. The interval must be set between 5 seconds and 5 minutes. The default value for Prometheus is 30s.
Note: You can also configure the evaluationInterval property for Thanos Ruler through the data/config.yaml/thanosRuler field. The default value for Thanos Ruler is 15s.
- Save the file to apply the changes. The limits are applied automatically.
3.2.4. Configuring pod topology spread constraints
You can configure pod topology spread constraints for all the pods for user-defined monitoring to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.
You can configure pod topology spread constraints for monitoring pods by using the user-workload-monitoring-config config map.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Add the following settings under the data/config.yaml field to configure pod topology spread constraints:
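A sketch of the general shape, with placeholders matching the callouts below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    <component>: # 1
      topologySpreadConstraints:
      - maxSkew: <n> # 2
        topologyKey: <key> # 3
        whenUnsatisfiable: <value> # 4
        labelSelector: # 5
          <match_option>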
1. Specify a name of the component for which you want to set up pod topology spread constraints.
2. Specify a numeric value for maxSkew, which defines the degree to which pods are allowed to be unevenly distributed.
3. Specify a key of node labels for topologyKey. Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler tries to put a balanced number of pods into each domain.
4. Specify a value for whenUnsatisfiable. Available options are DoNotSchedule and ScheduleAnyway. Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
5. Specify labelSelector to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.
Example configuration for Thanos Ruler
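A sketch, assuming a hypothetical node label key named monitoring as the topology key:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: monitoring # hypothetical node label key
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: thanos-ruler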
- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
3.3. Storing and recording data for user workload monitoring
Store and record your metrics and alerting data, configure logs to specify which activities are recorded, control how long Prometheus retains stored data, and set the maximum amount of disk space for the data. These actions help you protect your data and use them for troubleshooting.
3.3.1. Configuring persistent storage
Run cluster monitoring with persistent storage to gain the following benefits:
- Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated.
- Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted.
For production environments, it is highly recommended to configure persistent storage.
In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability.
3.3.1.1. Persistent storage prerequisites
- Dedicate sufficient persistent storage to ensure that the disk does not become full.
- Use Filesystem as the storage type value for the volumeMode parameter when you configure the persistent volume.
Important:
- Do not use a raw block volume, which is described with volumeMode: Block in the PersistentVolume resource. Prometheus cannot use raw block volumes.
- Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant.
3.3.1.2. Configuring a persistent volume claim
To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC).
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Add your PVC configuration for the component under data/config.yaml. The following example configures a PVC that claims persistent storage for Thanos Ruler:
Example PVC configuration
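A sketch; the storage class name is a placeholder for a class available in your cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      volumeClaimTemplate:
        spec:
          storageClassName: my-storage-class # placeholder storage class
          resources:
            requests:
              storage: 10Gi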
Note: Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates.
Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied.
Warning: When you update the config map with a PVC configuration, the affected StatefulSet object is recreated, resulting in a temporary service outage.
3.3.2. Modifying retention time and size for Prometheus metrics data
By default, Prometheus retains metrics data for 24 hours for monitoring for user-defined projects. You can modify the retention time for the Prometheus instance to change when the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses.
Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize limit. In such cases, the KubePersistentVolumeFillingUp alert fires until the space on a PV is lower than the retentionSize limit.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Add the retention time and size configuration under data/config.yaml:
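A sketch of the expected shape, with placeholders matching the callouts below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      retention: <time_specification> # 1
      retentionSize: <size_specification> # 2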
1. The retention time: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s.
2. The retention size: a number directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), or EB (exabytes).
The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance:
Example of setting retention time for Prometheus
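A sketch using the values described above:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      retention: 24h
      retentionSize: 10GB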
- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
3.3.2.1. Modifying the retention time for Thanos Ruler metrics data
By default, for user-defined projects, Thanos Ruler automatically retains metrics data for 24 hours. You can modify the retention time to change how long this data is retained by specifying a time value in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Add the retention time configuration under data/config.yaml:
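A sketch of the expected shape, with a placeholder matching the callout below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      retention: <time_specification> # 1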
1. Specify the retention time in the following format: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s. The default is 24h.
The following example sets the retention time to 10 days for Thanos Ruler data:
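A sketch using the value described above:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      retention: 10d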
- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
3.3.3. Setting log levels for monitoring components
You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, and Thanos Ruler.
The following log levels can be applied to the relevant component in the user-workload-monitoring-config ConfigMap object:
- debug. Log debug, informational, warning, and error messages.
- info. Log informational, warning, and error messages.
- warn. Log warning and error messages only.
- error. Log error messages only.
The default log level is info.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Add logLevel: <log_level> for a component under data/config.yaml:
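A sketch; the component name is a placeholder (for example, prometheus, alertmanager, prometheusOperator, or thanosRuler):
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    <component>:
      logLevel: <log_level>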
- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level for the prometheus-operator deployment:
$ oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level"
Example output
- --log-level=debug
Check that the pods for the component are running. The following example lists the status of pods:
$ oc -n openshift-user-workload-monitoring get pods
Note: If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully.
3.3.4. Enabling the query log file for Prometheus
You can configure Prometheus to write all queries that have been run by the engine to a log file.
Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Add the queryLogFile parameter for Prometheus under data/config.yaml:
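A sketch of the expected shape, with a placeholder matching the callout below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      queryLogFile: <path> # 1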
1. Add the full path to the file in which queries will be logged.
- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
Verify that the pods for the component are running. The following sample command lists the status of pods:
$ oc -n openshift-user-workload-monitoring get pods
Read the query log:
$ oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>
Important: Revert the setting in the config map after you have examined the logged query information.
3.4. Configuring metrics for user workload monitoring
Configure the collection of metrics to monitor how cluster components and your own workloads are performing.
You can send ingested metrics to remote systems for long-term storage and add cluster ID labels to the metrics to identify the data coming from different clusters.
3.4.1. Configuring remote write storage
You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
- You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature.
Important: Red Hat only provides information for configuring remote write senders and does not offer guidance on configuring receiver endpoints. Customers are responsible for setting up their own endpoints that are remote-write compatible. Issues with endpoint receiver configurations are not included in Red Hat production support.
- You have set up authentication credentials in a Secret object for the remote write endpoint. You must create the secret in the openshift-user-workload-monitoring namespace.
Warning: To reduce security risks, use HTTPS and authentication to send metrics to an endpoint.
Procedure
Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Add a remoteWrite: section under data/config.yaml/prometheus, as shown in the following example:
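A sketch; the endpoint URL is a placeholder:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com" # 1
        <endpoint_authentication_credentials> # 2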
1. The URL of the remote write endpoint.
2. The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings for sample configurations of supported authentication methods.
Add write relabel configuration values after the authentication credentials:
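A sketch of where the writeRelabelConfigs section fits (URL and credentials are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        <endpoint_authentication_credentials>
        writeRelabelConfigs:
        - <your_write_relabel_configs> # 1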
1. Add configuration for metrics that you want to send to the remote endpoint.
Example of forwarding a single metric called my_metric
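A sketch; the endpoint URL is a placeholder:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        writeRelabelConfigs:
        - sourceLabels: [__name__]
          regex: 'my_metric'
          action: keep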
Example of forwarding metrics called my_metric_1 and my_metric_2 in the my_namespace namespace
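A sketch; the metric names are matched together with the namespace label, and the endpoint URL is a placeholder:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        writeRelabelConfigs:
        - sourceLabels: [__name__, namespace]
          regex: '(my_metric_1|my_metric_2);my_namespace'
          action: keep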
- Save the file to apply the changes. The new configuration is applied automatically.
3.4.1.1. Supported remote write authentication settings
You can use different methods to authenticate with a remote write endpoint. Currently supported authentication methods are AWS Signature Version 4, basic authentication, authorization, OAuth 2.0, and TLS client. The following table provides details about supported authentication methods for use with remote write.
Authentication method | Config map field | Description
---|---|---
AWS Signature Version 4 | sigv4 | This method uses AWS Signature Version 4 authentication to sign requests. You cannot use this method simultaneously with authorization, OAuth 2.0, or Basic authentication.
Basic authentication | basicAuth | Basic authentication sets the authorization header on every remote write request with the configured username and password.
authorization | authorization | Authorization sets the Authorization header on every remote write request using the configured token.
OAuth 2.0 | oauth2 | An OAuth 2.0 configuration uses the client credentials grant type. Prometheus fetches an access token from the configured token URL with the specified client ID and client secret to access the remote write endpoint.
TLS client | tlsConfig | A TLS client configuration specifies the CA certificate, the client certificate, and the client key file information used to authenticate with the remote write endpoint server using TLS. The sample configuration assumes that you have already created a CA certificate file, a client certificate file, and a client key file.
3.4.1.2. Example remote write authentication settings
The following samples show different authentication settings you can use to connect to a remote write endpoint. Each sample also shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with monitoring for user-defined projects in the openshift-user-workload-monitoring namespace.
3.4.1.2.1. Sample YAML for AWS Signature Version 4 authentication
The following shows the settings for a sigv4 secret named sigv4-credentials in the openshift-user-workload-monitoring namespace:
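A sketch of the secret; the key values are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: sigv4-credentials
  namespace: openshift-user-workload-monitoring
stringData:
  accessKey: <AWS_access_key> # AWS API access key
  secretKey: <AWS_secret_key> # AWS API secret key
type: Opaque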
The following shows sample AWS Signature Version 4 remote write authentication settings that use a Secret object named sigv4-credentials in the openshift-user-workload-monitoring namespace:
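A sketch; the endpoint URL, region, profile, and role ARN are placeholders, and the callout numbers match the list below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://authorization.example.com/api/write"
        sigv4:
          region: <AWS_region> # 1
          accessKey:
            name: sigv4-credentials # 2
            key: accessKey # 3
          secretKey:
            name: sigv4-credentials # 4
            key: secretKey # 5
          profile: <AWS_profile_name> # 6
          roleArn: <AWS_role_arn> # 7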
1. The AWS region.
2 4. The name of the Secret object containing the AWS API access credentials.
3. The key that contains the AWS API access key in the specified Secret object.
5. The key that contains the AWS API secret key in the specified Secret object.
6. The name of the AWS profile that is being used to authenticate.
7. The unique identifier for the Amazon Resource Name (ARN) assigned to your role.
3.4.1.2.2. Sample YAML for Basic authentication
The following shows sample Basic authentication settings for a Secret object named rw-basic-auth in the openshift-user-workload-monitoring namespace:
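A sketch of the secret; the username and password are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: rw-basic-auth
  namespace: openshift-user-workload-monitoring
stringData:
  user: <basic_username>
  password: <basic_password>
type: Opaque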
The following sample shows a basicAuth remote write configuration that uses a Secret object named rw-basic-auth in the openshift-user-workload-monitoring namespace. It assumes that you have already set up authentication credentials for the endpoint:
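A sketch; the endpoint URL is a placeholder:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://basicauth.example.com/api/write"
        basicAuth:
          username:
            name: rw-basic-auth
            key: user
          password:
            name: rw-basic-auth
            key: password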
3.4.1.2.3. Sample YAML for authentication with a bearer token using a Secret object
The following shows bearer token settings for a Secret object named rw-bearer-auth in the openshift-user-workload-monitoring namespace:
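A sketch of the secret; the token value is a placeholder and the callout number matches the list below:
apiVersion: v1
kind: Secret
metadata:
  name: rw-bearer-auth
  namespace: openshift-user-workload-monitoring
stringData:
  token: <authentication_token> # 1
type: Opaque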
1. The authentication token.
The following shows sample bearer token config map settings that use a Secret object named rw-bearer-auth in the openshift-user-workload-monitoring namespace:
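A sketch; the endpoint URL is a placeholder:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://authorization.example.com/api/write"
        authorization:
          type: Bearer
          credentials:
            name: rw-bearer-auth
            key: token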
3.4.1.2.4. Sample YAML for OAuth 2.0 authentication
The following shows sample OAuth 2.0 settings for a Secret object named oauth2-credentials in the openshift-user-workload-monitoring namespace:
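A sketch of the secret; the ID and secret values are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: oauth2-credentials
  namespace: openshift-user-workload-monitoring
stringData:
  id: <oauth2_id>
  secret: <oauth2_secret>
type: Opaque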
The following shows an oauth2 remote write authentication sample configuration that uses a Secret object named oauth2-credentials in the openshift-user-workload-monitoring namespace:
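A sketch; the endpoint URL, token URL, scopes, and endpoint parameters are placeholders, and the callout numbers match the list below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://test.example.com/api/write"
        oauth2:
          clientId:
            secret:
              name: oauth2-credentials # 1
              key: id # 2
          clientSecret:
            name: oauth2-credentials # 3
            key: secret # 4
          tokenUrl: https://example.com/oauth2/token # 5
          scopes: # 6
          - <scope_1>
          - <scope_2>
          endpointParams: # 7
            param1: <parameter_1>
            param2: <parameter_2>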
1 3. The name of the corresponding Secret object. Note that ClientId can alternatively refer to a ConfigMap object, although clientSecret must refer to a Secret object.
2 4. The key that contains the OAuth 2.0 credentials in the specified Secret object.
5. The URL used to fetch a token with the specified clientId and clientSecret.
6. The OAuth 2.0 scopes for the authorization request. These scopes limit what data the tokens can access.
7. The OAuth 2.0 authorization request parameters required for the authorization server.
3.4.1.2.5. Sample YAML for TLS client authentication
The following shows sample TLS client settings for a tls Secret object named mtls-bundle in the openshift-user-workload-monitoring namespace:
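A sketch of the secret; the certificate and key values are placeholders for base64-encoded data:
apiVersion: v1
kind: Secret
metadata:
  name: mtls-bundle
  namespace: openshift-user-workload-monitoring
data:
  ca.crt: <ca_cert>
  client.crt: <client_cert>
  client.key: <client_key>
type: tls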
The following sample shows a tlsConfig remote write authentication configuration that uses a TLS Secret object named mtls-bundle:
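A sketch; the endpoint URL is a placeholder and the callout numbers match the list below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        tlsConfig:
          ca:
            secret:
              name: mtls-bundle # 1
              key: ca.crt # 2
          cert:
            secret:
              name: mtls-bundle # 3
              key: client.crt # 4
          keySecret:
            name: mtls-bundle # 5
            key: client.key # 6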
1 3 5. The name of the corresponding Secret object that contains the TLS authentication credentials. Note that ca and cert can alternatively refer to a ConfigMap object, though keySecret must refer to a Secret object.
2. The key in the specified Secret object that contains the CA certificate for the endpoint.
4. The key in the specified Secret object that contains the client certificate for the endpoint.
6. The key in the specified Secret object that contains the client key secret.
3.4.1.3. Example remote write queue configuration
You can use the queueConfig object for remote write to tune the remote write queue parameters. The following example shows the queue parameters with their default values for monitoring for user-defined projects in the openshift-user-workload-monitoring namespace.
Example configuration of remote write parameters with default values
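A sketch; the endpoint URL is a placeholder, the values shown are the documented defaults, and the callout numbers match the list below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        queueConfig:
          capacity: 10000 # 1
          minShards: 1 # 2
          maxShards: 50 # 3
          maxSamplesPerSend: 2000 # 4
          batchSendDeadline: 5s # 5
          minBackoff: 30ms # 6
          maxBackoff: 5s # 7
          retryOnRateLimit: false # 8
          sampleAgeLimit: 0s # 9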
1. The number of samples to buffer per shard before they are dropped from the queue.
2. The minimum number of shards.
3. The maximum number of shards.
4. The maximum number of samples per send.
5. The maximum time for a sample to wait in buffer.
6. The initial time to wait before retrying a failed request. The time gets doubled for every retry up to the maxBackoff time.
7. The maximum time to wait before retrying a failed request.
8. Set this parameter to true to retry a request after receiving a 429 status code from the remote write storage.
9. The samples that are older than the sampleAgeLimit limit are dropped from the queue. If the value is undefined or set to 0s, the parameter is ignored.
3.4.1.4. Table of remote write metrics
The following table contains remote write and remote write-adjacent metrics with further description to help solve issues during remote write configuration.
Metric | Description
---|---
prometheus_remote_storage_highest_timestamp_in_seconds | Shows the newest timestamp that Prometheus stored in the write-ahead log (WAL) for any sample.
prometheus_remote_storage_queue_highest_sent_timestamp_seconds | Shows the newest timestamp that the remote write queue successfully sent.
prometheus_remote_storage_samples_retried_total | The number of samples that remote write failed to send and had to resend to remote storage. A steady high rate for this metric indicates problems with the network or remote storage endpoint.
prometheus_remote_storage_shards | Shows how many shards are currently running for each remote endpoint.
prometheus_remote_storage_shards_desired | Shows the calculated needed number of shards based on the current write throughput and the rate of incoming versus sent samples.
prometheus_remote_storage_shards_max | Shows the maximum number of shards based on the current configuration.
prometheus_remote_storage_shards_min | Shows the minimum number of shards based on the current configuration.
prometheus_tsdb_wal_segment_current | The WAL segment file that Prometheus is currently writing new data to.
prometheus_wal_watcher_current_segment | The WAL segment file that each remote write instance is currently reading from.
3.4.2. Creating cluster ID labels for metrics
You can create cluster ID labels for metrics by adding the write_relabel settings for remote write storage in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace.
When Prometheus scrapes user workload targets that expose a namespace label, the system stores this label as exported_namespace. This behavior ensures that the final namespace label value is equal to the namespace of the target pod. You cannot override this default configuration by setting the value of the honorLabels field to true for PodMonitor or ServiceMonitor objects.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
- You have configured remote write storage.
Procedure
Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Copy to Clipboard Copied! Toggle word wrap Toggle overflow In the
writeRelabelConfigs:
section underdata/config.yaml/prometheus/remoteWrite
, add cluster ID relabel configuration values:Copy to Clipboard Copied! Toggle word wrap Toggle overflow The following sample shows how to forward a metric with the cluster ID label
cluster_id
:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
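A sketch; the endpoint URL is a placeholder and the callout numbers match the list below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        writeRelabelConfigs:
        - sourceLabels:
          - __tmp_openshift_cluster_id__ # 1
          targetLabel: cluster_id # 2
          action: replace # 3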
1. The system initially applies a temporary cluster ID source label named __tmp_openshift_cluster_id__. This temporary label gets replaced by the cluster ID label name that you specify.
2. Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use __tmp_openshift_cluster_id__. The final relabeling step removes labels that use this name.
3. The replace write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified.
- Save the file to apply the changes. The new configuration is applied automatically.
3.4.3. Setting up metrics collection for user-defined projects
You can create a ServiceMonitor resource to scrape metrics from a service endpoint in a user-defined project. This assumes that your application uses a Prometheus client library to expose metrics to the /metrics canonical name.
This section describes how to deploy a sample service in a user-defined project and then create a ServiceMonitor resource that defines how that service should be monitored.
3.4.3.1. Deploying a sample service
To test monitoring of a service in a user-defined project, you can deploy a sample service.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace.
Procedure
- Create a YAML file for the service configuration. In this example, it is called prometheus-example-app.yaml. Add the following deployment and service configuration details to the file:
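A sketch of the deployment and service, assuming the ns1 project already exists and using the upstream prometheus-example-app image:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: prometheus-example-app
  name: prometheus-example-app
  namespace: ns1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-example-app
  template:
    metadata:
      labels:
        app: prometheus-example-app
    spec:
      containers:
      - image: ghcr.io/rhobs/prometheus-example-app:0.4.2
        imagePullPolicy: IfNotPresent
        name: prometheus-example-app
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: prometheus-example-app
  name: prometheus-example-app
  namespace: ns1
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    name: web
  selector:
    app: prometheus-example-app
  type: ClusterIP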
This configuration deploys a service named prometheus-example-app in the user-defined ns1 project. This service exposes the custom version metric.
Apply the configuration to the cluster:
$ oc apply -f prometheus-example-app.yaml
It takes some time to deploy the service.
You can check that the pod is running:
$ oc -n ns1 get pod
Example output
NAME                                      READY   STATUS    RESTARTS   AGE
prometheus-example-app-7857545cb7-sbgwq   1/1     Running   0          81m
3.4.3.2. Specifying how a service is monitored
To use the metrics exposed by your service, you must configure Red Hat OpenShift Service on AWS monitoring to scrape metrics from the /metrics endpoint. You can do this using a ServiceMonitor custom resource definition (CRD) that specifies how a service should be monitored, or a PodMonitor CRD that specifies how a pod should be monitored. The former requires a Service object, while the latter does not, allowing Prometheus to directly scrape metrics from the metrics endpoint exposed by a pod.
This procedure shows you how to create a ServiceMonitor resource for a service in a user-defined project.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or the monitoring-edit cluster role.
- You have enabled monitoring for user-defined projects.
- For this example, you have deployed the prometheus-example-app sample service in the ns1 project.
Note: The prometheus-example-app sample service does not support TLS authentication.
Procedure
- Create a new YAML configuration file named example-app-service-monitor.yaml.
- Add a ServiceMonitor resource to the YAML file. The following example creates a service monitor named prometheus-example-monitor to scrape metrics exposed by the prometheus-example-app service in the ns1 namespace:
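A sketch of the ServiceMonitor resource, assuming the sample service above:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: prometheus-example-monitor
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - interval: 30s
    port: web
    scheme: http
  selector:
    matchLabels:
      app: prometheus-example-app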
Note: A ServiceMonitor resource in a user-defined namespace can only discover services in the same namespace. That is, the namespaceSelector field of the ServiceMonitor resource is always ignored.
Apply the configuration to the cluster:
$ oc apply -f example-app-service-monitor.yaml
It takes some time to deploy the ServiceMonitor resource.
Verify that the ServiceMonitor resource is running:
$ oc -n <namespace> get servicemonitor
Example output
NAME                         AGE
prometheus-example-monitor   81m
3.4.3.3. Example service endpoint authentication settings
You can configure authentication for service endpoints for user-defined project monitoring by using ServiceMonitor and PodMonitor custom resource definitions (CRDs).
The following samples show different authentication settings for a ServiceMonitor resource. Each sample shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings.
3.4.3.3.1. Sample YAML authentication with a bearer token
The following sample shows bearer token settings for a Secret object named example-bearer-auth in the ns1 namespace:
Example bearer token secret
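A sketch of the secret; the token value is a placeholder and the callout number matches the list below:
apiVersion: v1
kind: Secret
metadata:
  name: example-bearer-auth
  namespace: ns1
stringData:
  token: <authentication_token> # 1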
1. Specify an authentication token.
The following sample shows bearer token authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-bearer-auth:
Example bearer token authentication settings
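A sketch, assuming the sample service from the previous section:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - authorization:
      credentials:
        key: token
        name: example-bearer-auth
    port: web
  selector:
    matchLabels:
      app: prometheus-example-app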
Important: Do not use bearerTokenFile to configure bearer token. If you use the bearerTokenFile configuration, the ServiceMonitor resource is rejected.
3.4.3.3.2. Sample YAML for Basic authentication
The following sample shows Basic authentication settings for a Secret object named example-basic-auth in the ns1 namespace:
Example Basic authentication secret
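A sketch of the secret; the username and password are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: example-basic-auth
  namespace: ns1
stringData:
  user: <basic_username>
  password: <basic_password>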
The following sample shows Basic authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-basic-auth:
Example Basic authentication settings
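A sketch, assuming the sample service from the previous section:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - basicAuth:
      username:
        key: user
        name: example-basic-auth
      password:
        key: password
        name: example-basic-auth
    port: web
  selector:
    matchLabels:
      app: prometheus-example-app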
3.4.3.3.3. Sample YAML authentication with OAuth 2.0
The following sample shows OAuth 2.0 settings for a Secret object named example-oauth2 in the ns1 namespace:
Example OAuth 2.0 secret
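A sketch of the secret; the ID and secret values are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: example-oauth2
  namespace: ns1
stringData:
  id: <oauth2_id>
  secret: <oauth2_secret>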
The following sample shows OAuth 2.0 authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-oauth2:
Example OAuth 2.0 authentication settings
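A sketch; the token URL is a placeholder and the callout numbers match the list below:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - oauth2:
      clientId:
        secret:
          key: id # 1
          name: example-oauth2 # 2
      clientSecret:
        key: secret # 3
        name: example-oauth2 # 4
      tokenUrl: https://example.com/oauth2/token # 5
    port: web
  selector:
    matchLabels:
      app: prometheus-example-app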
1. The key that contains the OAuth 2.0 ID in the specified Secret object.
2 4. The name of the Secret object that contains the OAuth 2.0 credentials.
3. The key that contains the OAuth 2.0 secret in the specified Secret object.
5. The URL used to fetch a token with the specified clientId and clientSecret.
3.5. Configuring alerts and notifications for user workload monitoring
You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata information.
3.5.1. Configuring external Alertmanager instances
The Red Hat OpenShift Service on AWS monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus.
You can add external Alertmanager instances to route alerts for user-defined projects.
If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add an
additionalAlertmanagerConfigs
section with configuration details underdata/config.yaml/<component>
:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 2
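A sketch of the general shape, with placeholders matching the callouts below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    <component>: # 1
      additionalAlertmanagerConfigs:
      - <alertmanager_specification> # 2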
1. Substitute <component> for one of two supported external Alertmanager components: prometheus or thanosRuler.
2. Substitute <alertmanager_specification> with authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (bearerToken) and client TLS (tlsConfig).
The following sample config map configures an additional Alertmanager for Thanos Ruler by using a bearer token with client TLS authentication:
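A sketch; the secret names (alertmanager-bearer-token, alertmanager-tls) and Alertmanager hostnames are assumptions for illustration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      additionalAlertmanagerConfigs:
      - scheme: https
        pathPrefix: /
        timeout: "30s"
        apiVersion: v2
        bearerToken:
          name: alertmanager-bearer-token
          key: token
        tlsConfig:
          key:
            name: alertmanager-tls
            key: tls.key
          cert:
            name: alertmanager-tls
            key: tls.crt
          ca:
            name: alertmanager-tls
            key: tls.ca
        staticConfigs:
        - external-alertmanager1-remote.com
        - external-alertmanager1-remote2.com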
- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
3.5.2. Configuring secrets for Alertmanager
The Red Hat OpenShift Service on AWS monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, you can configure Alertmanager to use a secret that contains authentication credentials for the receiver.
For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the Secret object rather than in the ConfigMap object.
3.5.2.1. Adding a secret to the Alertmanager configuration
You can add secrets to the Alertmanager configuration by editing the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project.
After you add a secret to the config map, the secret is mounted as a volume at /etc/alertmanager/secrets/<secret_name> within the alertmanager container for the Alertmanager pods.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have created the secret to be configured in Alertmanager in the openshift-user-workload-monitoring project.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add a
secrets:
section underdata/config.yaml/alertmanager
with the following configuration:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
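A sketch of the general shape, with placeholders matching the callouts below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    alertmanager:
      secrets: # 1
      - <secret_name_1> # 2
      - <secret_name_2>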
1. This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object.
2. The name of the Secret object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line.
The following sample config map settings configure Alertmanager to use two Secret objects named test-secret-basic-auth and test-secret-api-token:
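A minimal sketch using the secret names above:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    alertmanager:
      secrets:
      - test-secret-basic-auth
      - test-secret-api-token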
- Save the file to apply the changes. The new configuration is applied automatically.
3.5.3. Attaching additional labels to your time series and alerts
You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Define labels you want to add for every metric under
data/config.yaml
:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
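A sketch of the expected shape, with a placeholder matching the callout below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      externalLabels:
        <key>: <value> # 1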
1. Substitute <key>: <value> with key-value pairs where <key> is a unique name for the new label and <value> is its value.
Warning:
- Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten.
- Do not use cluster or managed_cluster as key names. Using them can cause issues where you are unable to see data in the developer dashboards.
Note: In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object will only configure external labels for metrics and not for any rules.
For example, to add metadata about the region and environment to all time series and alerts, use the following configuration:
- Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
3.5.4. Configuring alert notifications
In Red Hat OpenShift Service on AWS, the dedicated-admin user can enable alert routing for user-defined projects by using a separate Alertmanager instance for user-defined projects.
Developers and other users with the alert-routing-edit cluster role can configure custom alert notifications for their user-defined projects by configuring alert receivers.
Review the following limitations of alert routing for user-defined projects:
- User-defined alert routing is scoped to the namespace in which the resource is defined. For example, a routing configuration in namespace ns1 only applies to PrometheusRules resources in the same namespace.
- When a namespace is excluded from user-defined monitoring, AlertmanagerConfig resources in the namespace cease to be part of the Alertmanager configuration.
3.5.4.1. Configuring alert routing for user-defined projects
If you are a non-administrator user who has been given the alert-routing-edit cluster role, you can create or edit alert routing for user-defined projects.
Prerequisites
- A cluster administrator has enabled monitoring for user-defined projects.
- A cluster administrator has enabled alert routing for user-defined projects.
- You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing.
- You have installed the OpenShift CLI (oc).
Procedure
- Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml.
- Add an AlertmanagerConfig YAML definition to the file. For example:
- Save the file.
- Apply the resource to the cluster:
$ oc apply -f example-app-alert-routing.yaml
The configuration is automatically applied to the Alertmanager pods.
3.5.4.2. Configuring alert routing for user-defined projects with the Alertmanager secret
If you have enabled a separate instance of Alertmanager that is dedicated to user-defined alert routing, you can customize where and how the instance sends notifications by editing the alertmanager-user-workload secret in the openshift-user-workload-monitoring namespace.
All features of a supported version of upstream Alertmanager are also supported in a Red Hat OpenShift Service on AWS Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation).
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have enabled a separate instance of Alertmanager for user-defined alert routing.
- You have installed the OpenShift CLI (oc).
Procedure
Print the currently active Alertmanager configuration into the file alertmanager.yaml:
$ oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml
Edit the configuration in alertmanager.yaml:
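A sketch of the Alertmanager configuration, with placeholders and callout numbers matching the list below:
global:
  http_config:
    proxy_from_environment: true # 1
route:
  receiver: Default
  group_by:
  - name
  routes:
  - matchers:
    - "service = prometheus-example-monitor" # 2
    receiver: example-receiver # 3
receivers:
- name: Default
- name: example-receiver
  <receiver_configuration> # 4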
- If you configured an HTTP cluster-wide proxy, set the
proxy_from_environment
parameter totrue
to enable proxying for all alert receivers. - 2
- Specify labels to match your alerts. This example targets all alerts that have the
service="prometheus-example-monitor"
label. - 3
- Specify the name of the receiver to use for the alerts group.
- 4
- Specify the receiver configuration.
Apply the new configuration in the file:
$ oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=-
3.5.4.3. Configuring different alert receivers for default platform alerts and user-defined alerts
You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results:
- All default platform alerts are sent to a receiver owned by the team in charge of these alerts.
- All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts.
You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts:
- Use the openshift_io_alert_source="platform" matcher to match default platform alerts.
- Use the openshift_io_alert_source!="platform" or 'openshift_io_alert_source=""' matcher to match user-defined alerts.
This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts.
Chapter 4. Accessing metrics
4.1. Accessing metrics as an administrator
You can access metrics to monitor the performance of cluster components and your workloads.
4.1.1. Querying metrics for all projects with the Red Hat OpenShift Service on AWS web console
You can use the Red Hat OpenShift Service on AWS metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring.
As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default Red Hat OpenShift Service on AWS and user-defined projects in the Metrics UI.
The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet queries for all projects. You can also run custom Prometheus Query Language (PromQL) queries.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects.
- You have installed the OpenShift CLI (oc).
Procedure
- In the Red Hat OpenShift Service on AWS web console, click Observe → Metrics.
To add one or more queries, perform any of the following actions:
Option | Description
---|---
Select an existing query. | From the Select query drop-down list, select an existing query.
Create a custom query. | Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item.
Add multiple queries. | Click Add query.
Duplicate an existing query. | Click the options menu next to the query, then choose Duplicate query.
Disable a query from being run. | Click the options menu next to the query and choose Disable query.
To run queries that you created, click Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note:
- When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
- By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query.
- Optional: Save the page URL to use this set of queries again in the future.
Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions:
Option | Description
---|---
Hide all metrics from a query. | Click the options menu for the query and click Hide all series.
Hide a specific metric. | Go to the query table and click the colored square near the metric name.
Zoom into the plot and change the time range. | Either visually select the time range by clicking and dragging on the plot horizontally, or use the menu to select the time range.
Reset the time range. | Click Reset zoom.
Display outputs for all queries at a specific point in time. | Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box.
Hide the plot. | Click Hide graph.
4.1.2. Getting detailed information about a metrics target
You can use the Red Hat OpenShift Service on AWS web console to view, search, and filter the endpoints that are currently targeted for scraping, which helps you to identify and troubleshoot problems. For example, you can view the current status of targeted endpoints to see when Red Hat OpenShift Service on AWS monitoring is not able to scrape metrics from a targeted component.
The Metrics targets page shows targets for default Red Hat OpenShift Service on AWS projects and for user-defined projects.
Prerequisites
- You have access to the cluster as an administrator for the project for which you want to view metrics targets.
Procedure
In the Red Hat OpenShift Service on AWS web console, go to Observe → Targets. The Metrics targets page opens with a list of all service endpoint targets that are being scraped for metrics.
This page shows details about targets for default Red Hat OpenShift Service on AWS and user-defined projects. This page lists the following information for each target:
- Service endpoint URL being scraped
- The ServiceMonitor resource being monitored
- Namespace
- Last scrape time
- Duration of the last scrape
Optional: To find a specific target, perform any of the following actions:
Option | Description
---|---
Filter the targets by status and source. | Choose filters in the Filter list. The following filtering options are available. Status filters: Up (the target is currently up and being actively scraped for metrics) or Down (the target is currently down and not being scraped for metrics). Source filters: Platform (platform-level targets relate only to default Red Hat OpenShift Service on AWS projects, which provide core Red Hat OpenShift Service on AWS functionality) or User (user targets relate to user-defined projects, which are user-created and can be customized).
Search for a target by name or label. | Enter a search term in the Text or Label field next to the search box.
Sort the targets. | Click one or more of the Endpoint Status, Namespace, Last Scrape, and Scrape Duration column headers.
Click the URL in the Endpoint column for a target to go to its Target details page. This page provides information about the target, including the following information:
- The endpoint URL being scraped for metrics
- The current Up or Down status of the target
- A link to the namespace
- A link to the ServiceMonitor resource details
- The most recent time that the target was scraped for metrics
4.1.3. Reviewing monitoring dashboards as a cluster administrator
As an administrator, you can view dashboards relating to core Red Hat OpenShift Service on AWS cluster components.
Starting with Red Hat OpenShift Service on AWS 4.19, the perspectives in the web console have unified. The Developer perspective is no longer enabled by default.
All users can interact with all Red Hat OpenShift Service on AWS web console features. However, if you are not the cluster owner, you might need to request permission to access certain features from the cluster owner.
You can still enable the Developer perspective. On the Getting Started pane in the web console, you can take a tour of the console, find information on setting up your cluster, view a quick start for enabling the Developer perspective, and follow links to explore new features and capabilities.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role.
Procedure
- In the Red Hat OpenShift Service on AWS web console, go to Observe → Dashboards.
- Choose a dashboard in the Dashboard list. Some dashboards, such as etcd and Prometheus dashboards, produce additional sub-menus when selected.
Optional: Select a time range for the graphs in the Time Range list.
- Select a predefined time period.
Set a custom time range by clicking Custom time range in the Time Range list.
- Input or select the From and To dates and times.
- Click Save to save the custom time range.
- Optional: Select a Refresh Interval.
- Hover over each of the graphs within a dashboard to display detailed information about specific items.
4.2. Accessing metrics as a developer
You can access metrics to monitor the performance of your cluster workloads.
4.2.1. Querying metrics for user-defined projects with the Red Hat OpenShift Service on AWS web console
You can use the Red Hat OpenShift Service on AWS metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about any user-defined workloads that you are monitoring.
As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.
The Metrics UI includes predefined queries, for example, CPU, memory, bandwidth, or network packet queries. These queries are restricted to the selected project. You can also run custom Prometheus Query Language (PromQL) queries for the project.
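For example, the following custom PromQL query returns the per-second HTTP request rate over the last five minutes for a project. This is a minimal illustration: the `ns1` project and the `http_requests_total` counter are placeholders for a workload that exposes a standard request counter:

```
sum(rate(http_requests_total{namespace="ns1"}[5m]))
```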
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
- You have enabled monitoring for user-defined projects.
- You have deployed a service in a user-defined project.
- You have created a `ServiceMonitor` custom resource (CR) for the service to define how the service is monitored.
Procedure
- In the Red Hat OpenShift Service on AWS web console, click Observe → Metrics.
To add one or more queries, perform any of the following actions:
Option | Description |
---|---|
Select an existing query. | From the Select query drop-down list, select an existing query. |
Create a custom query. | Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. Use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. Move your mouse pointer over a suggested item to view a brief description of that item. |
Add multiple queries. | Click Add query. |
Duplicate an existing query. | Click the options menu next to the query, then choose Duplicate query. |
Disable a query from being run. | Click the options menu next to the query and choose Disable query. |
To run queries that you created, click Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note:
- When drawing time series graphs, queries that operate on large amounts of data might time out or overload the browser. To avoid this, click Hide graph and calibrate your query by using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
- By default, the query table shows an expanded view that lists every metric and its current value. Click the ˅ down arrowhead to minimize the expanded view for a query.
- Optional: Save the page URL to use this set of queries again in the future.
Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. Select which metrics are shown by performing any of the following actions:
Option | Description |
---|---|
Hide all metrics from a query. | Click the options menu for the query and click Hide all series. |
Hide a specific metric. | Go to the query table and click the colored square near the metric name. |
Zoom into the plot and change the time range. | Visually select the time range by clicking and dragging on the plot horizontally, or use the menu to select the time range. |
Reset the time range. | Click Reset zoom. |
Display outputs for all queries at a specific point in time. | Hover over the plot at the point you are interested in. The query outputs appear in a pop-up box. |
Hide the plot. | Click Hide graph. |
4.2.2. Reviewing monitoring dashboards as a developer
As a developer, you can view dashboards relating to projects you have permissions for.
Starting with Red Hat OpenShift Service on AWS 4.19, the perspectives in the web console have unified. The Developer perspective is no longer enabled by default.
All users can interact with all Red Hat OpenShift Service on AWS web console features. However, if you are not the cluster owner, you might need to request permission to access certain features from the cluster owner.
You can still enable the Developer perspective. On the Getting Started pane in the web console, you can take a tour of the console, find information on setting up your cluster, view a quick start for enabling the Developer perspective, and follow links to explore new features and capabilities.
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing the dashboard for.
- A cluster administrator has enabled the Developer perspective in the web console.
Procedure
- In the Developer perspective of the Red Hat OpenShift Service on AWS web console, click Observe and go to the Dashboards tab.
- Select a project from the Project: drop-down list.
- Select a dashboard from the Dashboard drop-down list to see the filtered metrics.
Optional: Select a time range for the graphs in the Time Range list.
- Select a predefined time period.
Set a custom time range by clicking Custom time range in the Time Range list.
- Input or select the From and To dates and times.
- Click Save to save the custom time range.
- Optional: Select a Refresh Interval.
- Hover over each of the graphs within a dashboard to display detailed information about specific items.
4.3. Accessing monitoring APIs by using the CLI
In Red Hat OpenShift Service on AWS, you can access web service APIs for some monitoring components from the command-line interface (CLI).
In certain situations, accessing API endpoints can degrade the performance and scalability of your cluster, especially if you use endpoints to retrieve, send, or query large amounts of metrics data.
To avoid these issues, consider the following recommendations:
- Avoid querying endpoints frequently. Limit queries to a maximum of one every 30 seconds.
- Do not retrieve all metrics data through the `/federate` endpoint for Prometheus. Query the endpoint only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation.
4.3.1. About accessing monitoring web service APIs
You can directly access web service API endpoints from the command line for the following monitoring stack components:
- Prometheus
- Alertmanager
- Thanos Ruler
- Thanos Querier
To access Thanos Ruler and Thanos Querier service APIs, the requesting account must have `get` permission on the `namespaces` resource, which can be granted by binding the `cluster-monitoring-view` cluster role to the account.
When you access web service API endpoints for monitoring components, be aware of the following limitations:
- You can only use bearer token authentication to access API endpoints.
- You can only access endpoints in the `/api` path for a route. If you try to access an API endpoint in a web browser, an `Application is not available` error occurs. To access monitoring features in a web browser, use the Red Hat OpenShift Service on AWS web console to review monitoring dashboards.
4.3.2. Accessing a monitoring web service API
The following example shows how to query the service API receivers for the Alertmanager service used in core platform monitoring. You can use a similar method to access the `prometheus-k8s` service for core platform Prometheus and the `thanos-ruler` service for Thanos Ruler.
Prerequisites
- You are logged in to an account that is bound against the `monitoring-alertmanager-edit` role in the `openshift-monitoring` namespace.
- You are logged in to an account that has permission to get the Alertmanager API route.
Note: If your account does not have permission to get the Alertmanager API route, a cluster administrator can provide the URL for the route.
Procedure
Extract an authentication token by running the following command:
$ TOKEN=$(oc whoami -t)
Extract the `alertmanager-main` API route URL by running the following command:
$ HOST=$(oc -n openshift-monitoring get route alertmanager-main -ojsonpath='{.status.ingress[].host}')
Query the service API receivers for Alertmanager by running the following command:
$ curl -H "Authorization: Bearer $TOKEN" -k "https://$HOST/api/v2/receivers"
4.3.3. Querying metrics by using the federation endpoint for Prometheus
You can use the federation endpoint for Prometheus to scrape platform and user-defined metrics from a network location outside the cluster. To do so, access the Prometheus `/federate` endpoint for the cluster via a Red Hat OpenShift Service on AWS route.
A delay in retrieving metrics data occurs when you use federation. This delay can affect the accuracy and timeliness of the scraped metrics.
Using the federation endpoint can also degrade the performance and scalability of your cluster, especially if you use the federation endpoint to retrieve large amounts of metrics data. To avoid these issues, follow these recommendations:
- Do not try to retrieve all metrics data via the federation endpoint for Prometheus. Query it only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation.
- Avoid frequent querying of the federation endpoint for Prometheus. Limit queries to a maximum of one every 30 seconds.
If you need to forward large amounts of data outside the cluster, use remote write instead. For more information, see the Configuring remote write storage section.
Prerequisites
- You have installed the OpenShift CLI (`oc`).
- You have access to the cluster as a user with the `cluster-monitoring-view` cluster role or have obtained a bearer token with `get` permission on the `namespaces` resource.
Note: You can only use bearer token authentication to access the Prometheus federation endpoint.
- You are logged in to an account that has permission to get the Prometheus federation route.
Note: If your account does not have permission to get the Prometheus federation route, a cluster administrator can provide the URL for the route.
Procedure
Retrieve the bearer token by running the following command:
$ TOKEN=$(oc whoami -t)
Get the Prometheus federation route URL by running the following command:
$ HOST=$(oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath='{.status.ingress[].host}')
Query metrics from the `/federate` route. The following example command queries `up` metrics:
$ curl -G -k -H "Authorization: Bearer $TOKEN" https://$HOST/federate --data-urlencode 'match[]=up'
Example output

# TYPE up untyped
up{apiserver="kube-apiserver",endpoint="https",instance="10.0.143.148:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035322214
up{apiserver="kube-apiserver",endpoint="https",instance="10.0.148.166:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035338597
up{apiserver="kube-apiserver",endpoint="https",instance="10.0.173.16:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035343834
...
4.3.4. Accessing metrics from outside the cluster for custom applications
You can query Prometheus metrics from outside the cluster when monitoring your own services with user-defined projects. Access this data from outside the cluster by using the `thanos-querier` route.
This access only supports using a bearer token for authentication.
Prerequisites
- You have deployed your own service, following the "Enabling monitoring for user-defined projects" procedure.
- You are logged in to an account with the `cluster-monitoring-view` cluster role, which provides permission to access the Thanos Querier API.
- You are logged in to an account that has permission to get the Thanos Querier API route.
Note: If your account does not have permission to get the Thanos Querier API route, a cluster administrator can provide the URL for the route.
Procedure
Extract an authentication token to connect to Prometheus by running the following command:
$ TOKEN=$(oc whoami -t)
Extract the `thanos-querier` API route URL by running the following command:
$ HOST=$(oc -n openshift-monitoring get route thanos-querier -ojsonpath='{.status.ingress[].host}')
Set the namespace to the namespace in which your service is running by using the following command:
$ NAMESPACE=ns1
Query the metrics of your own services in the command line by running the following command:
$ curl -H "Authorization: Bearer $TOKEN" -k "https://$HOST/api/v1/query?" --data-urlencode "query=up{namespace='$NAMESPACE'}"
The output shows the status for each application pod that Prometheus is scraping; a sketch of the formatted output follows.
Note:
- The formatted example output uses a filtering tool, such as `jq`, to provide the formatted indented JSON. See the jq Manual (jq documentation) for more information about using `jq`.
- The command requests an instant query endpoint of the Thanos Querier service, which evaluates selectors at one point in time.
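A minimal sketch of the formatted response, assuming a hypothetical sample service named `prometheus-example-app` running in the `ns1` project:

```json
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "up",
          "endpoint": "web",
          "instance": "10.129.0.46:8080",
          "job": "prometheus-example-app",
          "namespace": "ns1",
          "pod": "prometheus-example-app-68d47c4fb6-jztp2",
          "service": "prometheus-example-app"
        },
        "value": [
          1591881154.748,
          "1"
        ]
      }
    ]
  }
}
```

A `value` of `"1"` means the pod is up; `"0"` means it is down.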
4.3.5. Resources reference for the Cluster Monitoring Operator
This section describes the following resources deployed and managed by the Cluster Monitoring Operator (CMO). Use this information when you want to configure API endpoint connections to retrieve, send, or query metrics data.
In certain situations, accessing endpoints can degrade the performance and scalability of your cluster, especially if you use endpoints to retrieve, send, or query large amounts of metrics data.
To avoid these issues, follow these recommendations:
- Avoid querying endpoints frequently. Limit queries to a maximum of one every 30 seconds.
- Do not try to retrieve all metrics data via the `/federate` endpoint. Query it only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation.
4.3.5.1. CMO routes resources
4.3.5.1.1. openshift-monitoring/alertmanager-main
Expose the `/api` endpoints of the `alertmanager-main` service via a router.
4.3.5.1.2. openshift-monitoring/prometheus-k8s
Expose the `/api` endpoints of the `prometheus-k8s` service via a router.
4.3.5.1.3. openshift-monitoring/prometheus-k8s-federate
Expose the `/federate` endpoint of the `prometheus-k8s` service via a router.
4.3.5.1.4. openshift-user-workload-monitoring/federate
Expose the `/federate` endpoint of the `prometheus-user-workload` service via a router.
4.3.5.1.5. openshift-monitoring/thanos-querier
Expose the `/api` endpoints of the `thanos-querier` service via a router.
4.3.5.1.6. openshift-user-workload-monitoring/thanos-ruler
Expose the `/api` endpoints of the `thanos-ruler` service via a router.
4.3.5.2. CMO services resources
4.3.5.2.1. openshift-monitoring/prometheus-operator-admission-webhook
Expose the admission webhook service that validates `PrometheusRules` and `AlertmanagerConfig` custom resources on port 8443.
4.3.5.2.2. openshift-user-workload-monitoring/alertmanager-user-workload
Expose the user-defined Alertmanager web server within the cluster on the following ports:
- Port 9095 provides access to the Alertmanager endpoints. Granting access requires binding a user to the `monitoring-alertmanager-api-reader` role (for read-only operations) or the `monitoring-alertmanager-api-writer` role in the `openshift-user-workload-monitoring` project.
- Port 9092 provides access to the Alertmanager endpoints restricted to a given project. Granting access requires binding a user to the `monitoring-rules-edit` cluster role or `monitoring-edit` cluster role in the project.
- Port 9097 provides access to the `/metrics` endpoint only. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.3. openshift-monitoring/alertmanager-main
Expose the Alertmanager web server within the cluster on the following ports:
- Port 9094 provides access to all the Alertmanager endpoints. Granting access requires binding a user to the `monitoring-alertmanager-view` role (for read-only operations) or the `monitoring-alertmanager-edit` role in the `openshift-monitoring` project.
- Port 9092 provides access to the Alertmanager endpoints restricted to a given project. Granting access requires binding a user to the `monitoring-rules-edit` cluster role or `monitoring-edit` cluster role in the project.
- Port 9097 provides access to the `/metrics` endpoint only. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.4. openshift-monitoring/kube-state-metrics
Expose kube-state-metrics `/metrics` endpoints within the cluster on the following ports:
- Port 8443 provides access to the Kubernetes resource metrics. This port is for internal use, and no other usage is guaranteed.
- Port 9443 provides access to the internal kube-state-metrics metrics. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.5. openshift-monitoring/metrics-server
Expose the metrics-server web server on port 443. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.6. openshift-monitoring/monitoring-plugin
Expose the monitoring plugin service on port 9443. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.7. openshift-monitoring/node-exporter
Expose the `/metrics` endpoint on port 9100. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.8. openshift-monitoring/openshift-state-metrics
Expose openshift-state-metrics `/metrics` endpoints within the cluster on the following ports:
- Port 8443 provides access to the OpenShift resource metrics. This port is for internal use, and no other usage is guaranteed.
- Port 9443 provides access to the internal `openshift-state-metrics` metrics. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.9. openshift-monitoring/prometheus-k8s
Expose the Prometheus web server within the cluster on the following ports:
- Port 9091 provides access to all the Prometheus endpoints. Granting access requires binding a user to the `cluster-monitoring-view` cluster role.
- Port 9092 provides access to the `/metrics` and `/federate` endpoints only. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.10. openshift-user-workload-monitoring/prometheus-operator
Expose the `/metrics` endpoint on port 8443. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.11. openshift-monitoring/prometheus-operator
Expose the `/metrics` endpoint on port 8443. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.12. openshift-user-workload-monitoring/prometheus-user-workload
Expose the Prometheus web server within the cluster on the following ports:
- Port 9091 provides access to the `/metrics` endpoint only. This port is for internal use, and no other usage is guaranteed.
- Port 9092 provides access to the `/federate` endpoint only. Granting access requires binding a user to the `cluster-monitoring-view` cluster role.
This also exposes the `/metrics` endpoint of the Thanos sidecar web server on port 10902. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.13. openshift-monitoring/telemeter-client
Expose the `/metrics` endpoint on port 8443. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.14. openshift-monitoring/thanos-querier
Expose the Thanos Querier web server within the cluster on the following ports:
- Port 9091 provides access to all the Thanos Querier endpoints. Granting access requires binding a user to the `cluster-monitoring-view` cluster role.
- Port 9092 provides access to the `/api/v1/query`, `/api/v1/query_range`, `/api/v1/labels`, `/api/v1/label/*/values`, and `/api/v1/series` endpoints restricted to a given project. Granting access requires binding a user to the `view` cluster role in the project.
- Port 9093 provides access to the `/api/v1/alerts` and `/api/v1/rules` endpoints restricted to a given project. Granting access requires binding a user to the `monitoring-rules-edit`, `monitoring-edit`, or `monitoring-rules-view` cluster role in the project.
- Port 9094 provides access to the `/metrics` endpoint only. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.15. openshift-user-workload-monitoring/thanos-ruler
Expose the Thanos Ruler web server within the cluster on the following ports:
- Port 9091 provides access to all Thanos Ruler endpoints. Granting access requires binding a user to the `cluster-monitoring-view` cluster role.
- Port 9092 provides access to the `/metrics` endpoint only. This port is for internal use, and no other usage is guaranteed.
This also exposes the gRPC endpoints on port 10901. This port is for internal use, and no other usage is guaranteed.
4.3.5.2.16. openshift-monitoring/cluster-monitoring-operator
Expose the `/metrics` and `/validate-webhook` endpoints on port 8443. This port is for internal use, and no other usage is guaranteed.
Chapter 5. Managing alerts
5.1. Managing alerts as an Administrator
In Red Hat OpenShift Service on AWS, the Alerting UI enables you to manage alerts, silences, and alerting rules.
Starting with Red Hat OpenShift Service on AWS 4.19, the perspectives in the web console have unified. The Developer perspective is no longer enabled by default.
All users can interact with all Red Hat OpenShift Service on AWS web console features. However, if you are not the cluster owner, you might need to request permission to access certain features from the cluster owner.
You can still enable the Developer perspective. On the Getting Started pane in the web console, you can take a tour of the console, find information on setting up your cluster, view a quick start for enabling the Developer perspective, and follow links to explore new features and capabilities.
The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. For example, if you are logged in as an administrator, you can access all alerts, silences, and alerting rules.
5.1.1. Accessing the Alerting UI
The Alerting UI is accessible in the Red Hat OpenShift Service on AWS web console.
- In the Red Hat OpenShift Service on AWS web console, go to Observe → Alerting. The three main pages in the Alerting UI are the Alerts, Silences, and Alerting rules pages.
5.1.2. Getting information about alerts, silences, and alerting rules
The Alerting UI provides detailed information about alerts and their governing alerting rules and silences.
Prerequisites
- You have access to the cluster as a user with view permissions for the project that you are viewing alerts for.
Procedure
To obtain information about alerts:
- In the Red Hat OpenShift Service on AWS web console, go to the Observe → Alerting → Alerts page.
- Optional: Search for alerts by name by using the Name field in the search list.
- Optional: Filter alerts by state, severity, and source by selecting filters in the Filter list.
- Optional: Sort the alerts by clicking one or more of the Name, Severity, State, and Source column headers.
Click the name of an alert to view its Alert details page. The page includes a graph that illustrates alert time series data. It also provides the following information about the alert:
- A description of the alert
- Messages associated with the alert
- A link to the runbook page on GitHub for the alert, if the page exists
- Labels attached to the alert
- A link to its governing alerting rule
- Silences for the alert, if any exist
To obtain information about silences:
- In the Red Hat OpenShift Service on AWS web console, go to the Observe → Alerting → Silences page.
- Optional: Filter the silences by name using the Search by name field.
- Optional: Filter silences by state by selecting filters in the Filter list. By default, Active and Pending filters are applied.
- Optional: Sort the silences by clicking one or more of the Name, Firing alerts, State, and Creator column headers.
Select the name of a silence to view its Silence details page. The page includes the following details:
- Alert specification
- Start time
- End time
- Silence state
- Number and list of firing alerts
To obtain information about alerting rules:
- In the Red Hat OpenShift Service on AWS web console, go to the Observe → Alerting → Alerting rules page.
- Optional: Filter alerting rules by state, severity, and source by selecting filters in the Filter list.
- Optional: Sort the alerting rules by clicking one or more of the Name, Severity, Alert state, and Source column headers.
Select the name of an alerting rule to view its Alerting rule details page. The page provides the following details about the alerting rule:
- Alerting rule name, severity, and description.
- The expression that defines the condition for firing the alert.
- The time for which the condition should be true for an alert to fire.
- A graph for each alert governed by the alerting rule, showing the value with which the alert is firing.
- A table of all alerts governed by the alerting rule.
5.1.3. Managing silences
You can create a silence for an alert in the Red Hat OpenShift Service on AWS web console. After you create silences, you can view, edit, and expire them. You also do not receive notifications about a silenced alert when the alert fires.
When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time.
5.1.3.1. Silencing alerts
You can silence a specific alert or silence alerts that match a specification that you define.
Prerequisites
- If you are a cluster administrator, you have access to the cluster as a user with the `cluster-admin` role.
- If you are a non-administrator user, you have access to the cluster as a user with the following user roles:
  - The `cluster-monitoring-view` cluster role, which allows you to access Alertmanager.
  - The `monitoring-alertmanager-edit` role, which permits you to create and silence alerts.
Procedure
To silence a specific alert:
- In the Red Hat OpenShift Service on AWS web console, go to Observe → Alerting → Alerts.
- For the alert that you want to silence, click the options menu and select Silence alert to open the Silence alert page with a default configuration for the chosen alert.
Optional: Change the default configuration details for the silence.
Note: You must add a comment before saving a silence.
- To save the silence, click Silence.
To silence a set of alerts:
- In the Red Hat OpenShift Service on AWS web console, go to Observe → Alerting → Silences.
- Click Create silence.
On the Create silence page, set the schedule, duration, and label details for an alert.
Note: You must add a comment before saving a silence.
- To create silences for alerts that match the labels that you entered, click Silence.
5.1.3.2. Editing silences
You can edit a silence, which expires the existing silence and creates a new one with the changed configuration.
Prerequisites
- If you are a cluster administrator, you have access to the cluster as a user with the `cluster-admin` role.
- If you are a non-administrator user, you have access to the cluster as a user with the following user roles:
  - The `cluster-monitoring-view` cluster role, which allows you to access Alertmanager.
  - The `monitoring-alertmanager-edit` role, which permits you to create and silence alerts.
Procedure
- In the Red Hat OpenShift Service on AWS web console, go to Observe → Alerting → Silences.
For the silence you want to modify, click the options menu and select Edit silence.
Alternatively, you can click Actions and select Edit silence on the Silence details page for a silence.
- On the Edit silence page, make changes and click Silence. Doing so expires the existing silence and creates one with the updated configuration.
5.1.3.3. Expiring silences
You can expire a single silence or multiple silences. Expiring a silence deactivates it permanently.
You cannot delete expired, silenced alerts. Expired silences older than 120 hours are garbage collected.
Prerequisites
- If you are a cluster administrator, you have access to the cluster as a user with the `cluster-admin` role.
- If you are a non-administrator user, you have access to the cluster as a user with the following user roles:
  - The `cluster-monitoring-view` cluster role, which allows you to access Alertmanager.
  - The `monitoring-alertmanager-edit` role, which permits you to create and silence alerts.
Procedure
- Go to Observe → Alerting → Silences.
- For the silence or silences you want to expire, select the checkbox in the corresponding row.
Click Expire 1 silence to expire a single selected silence or Expire <n> silences to expire multiple selected silences, where <n> is the number of silences you selected.
Alternatively, to expire a single silence you can click Actions and select Expire silence on the Silence details page for a silence.
5.1.4. Managing alerting rules for user-defined projects
In Red Hat OpenShift Service on AWS, you can create, view, edit, and remove alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics.
5.1.4.1. Creating alerting rules for user-defined projects
You can create alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics.
To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value.
Prerequisites
- You have enabled monitoring for user-defined projects.
- You are logged in as a cluster administrator or as a user that has the `monitoring-rules-edit` cluster role for the project where you want to create an alerting rule.
- You have installed the OpenShift CLI (`oc`).
Procedure
- Create a YAML file for alerting rules. In this example, it is called `example-app-alerting-rule.yaml`.
- Add an alerting rule configuration to the YAML file. The following example creates a new alerting rule named `example-alert`, which fires an alert when the `version` metric exposed by the sample service becomes `0`; a sketch of the rule is shown after this procedure.
- Apply the configuration file to the cluster:
  $ oc apply -f example-app-alerting-rule.yaml
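A minimal sketch of `example-app-alerting-rule.yaml`, assuming the sample service is a hypothetical `prometheus-example-app` deployment in the `ns1` project:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alert
  namespace: ns1   # the user-defined project that runs the sample service (assumption)
spec:
  groups:
  - name: example
    rules:
    - alert: VersionAlert
      for: 1m
      expr: version{job="prometheus-example-app"} == 0   # fires when the version metric becomes 0
      labels:
        severity: warning
      annotations:
        message: This is an example alert.
```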
5.1.4.2. Creating cross-project alerting rules for user-defined projects
You can create alerting rules that are not bound to their project of origin by configuring a project in the `user-workload-monitoring-config` config map. The `PrometheusRule` objects created in these projects are then applicable to all projects.
Therefore, you can have generic alerting rules that apply to multiple user-defined projects instead of having individual `PrometheusRule` objects in each user project. You can filter which projects are included or excluded from the alerting rule by using PromQL queries in the `PrometheusRule` object.
Prerequisites
- If you are a cluster administrator, you have access to the cluster as a user with the `cluster-admin` cluster role.
- If you are a non-administrator user, you have access to the cluster as a user with the following user roles:
  - The `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project to edit the `user-workload-monitoring-config` config map.
  - The `monitoring-rules-edit` cluster role for the project where you want to create an alerting rule.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure
Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Configure the projects in which you want to create alerting rules that are not bound to a specific project by listing them in the `namespacesWithoutLabelEnforcement` field, as sketched below. Prometheus and Thanos Ruler for user-defined monitoring do not enforce the `namespace` label in `PrometheusRule` objects created in these projects, which makes the `PrometheusRule` objects applicable to all projects.
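A minimal sketch of the edited config map, assuming a single hypothetical project named `ns1`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    namespacesWithoutLabelEnforcement:
    - ns1   # placeholder: projects listed here can hold cross-project PrometheusRule objects
```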
- Create a YAML file for alerting rules. In this example, it is called `example-cross-project-alerting-rule.yaml`.
- Add an alerting rule configuration to the YAML file. The following example creates a new cross-project alerting rule called `example-security`. The alerting rule fires when a user project does not enforce the restricted pod security policy; a sketch of the rule and an explanation of its fields follow.
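A minimal sketch of `example-cross-project-alerting-rule.yaml`. The PromQL expression is one possible way to express the check and assumes that kube-state-metrics exposes namespace labels through the `kube_namespace_labels` metric; adapt it to the metrics available in your cluster:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-security
  namespace: ns1   # must be a project listed in namespacesWithoutLabelEnforcement
spec:
  groups:
  - name: pod-security-policy
    rules:
    - alert: PodSecurityViolation
      for: 1m   # how long the condition must hold before the alert fires
      # Assumption: kube_namespace_labels carries the pod-security enforcement label.
      # The matcher on the namespace label filters which projects the rule covers.
      expr: kube_namespace_labels{namespace=~"ns.*",label_pod_security_kubernetes_io_enforce!="restricted"} == 1
      labels:
        severity: warning
      annotations:
        message: Project {{ $labels.namespace }} does not enforce the restricted pod security policy.
```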
In the example rule:
- Ensure that the `metadata.namespace` value specifies a project that you defined in the `namespacesWithoutLabelEnforcement` field.
- The `alert` field sets the name of the alerting rule you want to create.
- The `for` field sets the duration for which the condition should be true before an alert is fired.
- The `expr` field sets the PromQL query expression that defines the new rule. You can use label matchers on the `namespace` label to filter which projects are included or excluded from the alerting rule.
- The `message` annotation sets the message associated with the alert.
- The `severity` label sets the severity that the alerting rule assigns to the alert.
Important: Ensure that you create a specific cross-project alerting rule in only one of the projects that you specified in the `namespacesWithoutLabelEnforcement` field. If you create the same cross-project alerting rule in multiple projects, it results in repeated alerts.
Apply the configuration file to the cluster:
$ oc apply -f example-cross-project-alerting-rule.yaml
5.1.4.3. Listing alerting rules for all projects in a single view
As a cluster administrator, you can list alerting rules for core Red Hat OpenShift Service on AWS and user-defined projects together in a single view.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
- You have installed the OpenShift CLI (`oc`).
Procedure
- In the Red Hat OpenShift Service on AWS web console, go to Observe → Alerting → Alerting rules.
Select the Platform and User sources in the Filter drop-down menu.
Note: The Platform source is selected by default.
5.1.4.4. Removing alerting rules for user-defined projects
You can remove alerting rules for user-defined projects.
Prerequisites
- You have enabled monitoring for user-defined projects.
- You are logged in as a cluster administrator or as a user that has the `monitoring-rules-edit` cluster role for the project where you want to remove an alerting rule.
- You have installed the OpenShift CLI (`oc`).
Procedure
To remove rule `<alerting_rule>` in `<namespace>`, run the following:
$ oc -n <namespace> delete prometheusrule <alerting_rule>
5.1.4.5. Disabling cross-project alerting rules for user-defined projects
Creating cross-project alerting rules for user-defined projects is enabled by default. Cluster administrators can disable the capability in the `cluster-monitoring-config` config map for the following reasons:
- To prevent user-defined monitoring from overloading the cluster monitoring stack.
- To prevent buggy alerting rules from being applied to the cluster without having to identify the rule that causes the issue.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` cluster role.
- You have installed the OpenShift CLI (`oc`).
Procedure
Edit the `cluster-monitoring-config` config map in the `openshift-monitoring` project:
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Copy to Clipboard Copied! Toggle word wrap Toggle overflow In the
cluster-monitoring-config
config map, disable the option to create cross-project alerting rules by setting therulesWithoutLabelEnforcementAllowed
value underdata/config.yaml/userWorkload
tofalse
:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Save the file to apply the changes.
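A minimal sketch of the edited config map, assuming no other user-workload settings are present:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    userWorkload:
      rulesWithoutLabelEnforcementAllowed: false   # disables cross-project alerting rules
```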
5.2. Managing alerts as a Developer
In Red Hat OpenShift Service on AWS, the Alerting UI enables you to manage alerts, silences, and alerting rules.
Starting with Red Hat OpenShift Service on AWS 4.19, the perspectives in the web console have unified. The Developer perspective is no longer enabled by default.
All users can interact with all Red Hat OpenShift Service on AWS web console features. However, if you are not the cluster owner, you might need to request permission to access certain features from the cluster owner.
You can still enable the Developer perspective. On the Getting Started pane in the web console, you can take a tour of the console, find information on setting up your cluster, view a quick start for enabling the Developer perspective, and follow links to explore new features and capabilities.
The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to.
5.2.1. Accessing the Alerting UI
The Alerting UI is accessible in the Red Hat OpenShift Service on AWS web console.
- In the Red Hat OpenShift Service on AWS web console, go to Observe → Alerting. The three main pages in the Alerting UI are the Alerts, Silences, and Alerting rules pages.
5.2.2. Getting information about alerts, silences, and alerting rules
The Alerting UI provides detailed information about alerts and their governing alerting rules and silences.
Prerequisites
- You have access to the cluster as a user with view permissions for the project that you are viewing alerts for.
Procedure
To obtain information about alerts:
- In the Red Hat OpenShift Service on AWS web console, go to the Observe → Alerting → Alerts page.
- Optional: Search for alerts by name by using the Name field in the search list.
- Optional: Filter alerts by state, severity, and source by selecting filters in the Filter list.
- Optional: Sort the alerts by clicking one or more of the Name, Severity, State, and Source column headers.
Click the name of an alert to view its Alert details page. The page includes a graph that illustrates alert time series data. It also provides the following information about the alert:
- A description of the alert
- Messages associated with the alert
- A link to the runbook page on GitHub for the alert, if the page exists
- Labels attached to the alert
- A link to its governing alerting rule
- Silences for the alert, if any exist
To obtain information about silences:
- In the Red Hat OpenShift Service on AWS web console, go to the Observe → Alerting → Silences page.
- Optional: Filter the silences by name using the Search by name field.
- Optional: Filter silences by state by selecting filters in the Filter list. By default, Active and Pending filters are applied.
- Optional: Sort the silences by clicking one or more of the Name, Firing alerts, State, and Creator column headers.
Select the name of a silence to view its Silence details page. The page includes the following details:
- Alert specification
- Start time
- End time
- Silence state
- Number and list of firing alerts
To obtain information about alerting rules:
- In the Red Hat OpenShift Service on AWS web console, go to the Observe → Alerting → Alerting rules page.
- Optional: Filter alerting rules by state, severity, and source by selecting filters in the Filter list.
- Optional: Sort the alerting rules by clicking one or more of the Name, Severity, Alert state, and Source column headers.
Select the name of an alerting rule to view its Alerting rule details page. The page provides the following details about the alerting rule:
- Alerting rule name, severity, and description.
- The expression that defines the condition for firing the alert.
- The time for which the condition should be true for an alert to fire.
- A graph for each alert governed by the alerting rule, showing the value with which the alert is firing.
- A table of all alerts governed by the alerting rule.
5.2.3. Managing silences
You can create a silence for an alert in the Red Hat OpenShift Service on AWS web console. After you create silences, you can view, edit, and expire them. You also do not receive notifications about a silenced alert when the alert fires.
When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time.
5.2.3.1. Silencing alerts
You can silence a specific alert or silence alerts that match a specification that you define.
Prerequisites
- If you are a cluster administrator, you have access to the cluster as a user with the `cluster-admin` role.
- If you are a non-administrator user, you have access to the cluster as a user with the following user roles:
  - The `cluster-monitoring-view` cluster role, which allows you to access Alertmanager.
  - The `monitoring-alertmanager-edit` role, which permits you to create and silence alerts.
Procedure
To silence a specific alert:
- In the Red Hat OpenShift Service on AWS web console, go to Observe → Alerting → Alerts.
- For the alert that you want to silence, click the options menu and select Silence alert to open the Silence alert page with a default configuration for the chosen alert.
Optional: Change the default configuration details for the silence.
Note: You must add a comment before saving a silence.
- To save the silence, click Silence.
To silence a set of alerts:
- In the Red Hat OpenShift Service on AWS web console, go to Observe → Alerting → Silences.
- Click Create silence.
On the Create silence page, set the schedule, duration, and label details for an alert.
Note: You must add a comment before saving a silence.
- To create silences for alerts that match the labels that you entered, click Silence.
5.2.3.2. Editing silences
You can edit a silence, which expires the existing silence and creates a new one with the changed configuration.
Prerequisites
- If you are a cluster administrator, you have access to the cluster as a user with the `cluster-admin` role.
- If you are a non-administrator user, you have access to the cluster as a user with the following user roles:
  - The `cluster-monitoring-view` cluster role, which allows you to access Alertmanager.
  - The `monitoring-alertmanager-edit` role, which permits you to create and silence alerts.
Procedure
- In the Red Hat OpenShift Service on AWS web console, go to Observe → Alerting → Silences.
For the silence you want to modify, click the options menu and select Edit silence.
Alternatively, you can click Actions and select Edit silence on the Silence details page for a silence.
- On the Edit silence page, make changes and click Silence. Doing so expires the existing silence and creates one with the updated configuration.
5.2.3.3. Expiring silences
You can expire a single silence or multiple silences. Expiring a silence deactivates it permanently.
You cannot delete expired, silenced alerts. Expired silences older than 120 hours are garbage collected.
Prerequisites
- If you are a cluster administrator, you have access to the cluster as a user with the `cluster-admin` role.
- If you are a non-administrator user, you have access to the cluster as a user with the following user roles:
  - The `cluster-monitoring-view` cluster role, which allows you to access Alertmanager.
  - The `monitoring-alertmanager-edit` role, which permits you to create and silence alerts.
Procedure
- Go to Observe → Alerting → Silences.
- For the silence or silences you want to expire, select the checkbox in the corresponding row.
Click Expire 1 silence to expire a single selected silence or Expire <n> silences to expire multiple selected silences, where <n> is the number of silences you selected.
Alternatively, to expire a single silence you can click Actions and select Expire silence on the Silence details page for a silence.
5.2.4. Managing alerting rules for user-defined projects
In Red Hat OpenShift Service on AWS, you can create, view, edit, and remove alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics.
5.2.4.1. Creating alerting rules for user-defined projects
You can create alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics.
To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value.
Prerequisites
- You have enabled monitoring for user-defined projects.
- You are logged in as a cluster administrator or as a user that has the `monitoring-rules-edit` cluster role for the project where you want to create an alerting rule.
- You have installed the OpenShift CLI (`oc`).
Procedure
- Create a YAML file for alerting rules. In this example, it is called `example-app-alerting-rule.yaml`.
- Add an alerting rule configuration to the YAML file. The following example creates a new alerting rule named `example-alert`, which fires an alert when the `version` metric exposed by the sample service becomes `0`; a sketch of the rule is shown after this procedure.
- Apply the configuration file to the cluster:
  $ oc apply -f example-app-alerting-rule.yaml
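A minimal sketch of `example-app-alerting-rule.yaml`, assuming the sample service is a hypothetical `prometheus-example-app` deployment in the `ns1` project:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alert
  namespace: ns1   # the user-defined project that runs the sample service (assumption)
spec:
  groups:
  - name: example
    rules:
    - alert: VersionAlert
      for: 1m
      expr: version{job="prometheus-example-app"} == 0   # fires when the version metric becomes 0
      labels:
        severity: warning
      annotations:
        message: This is an example alert.
```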
5.2.4.2. Creating cross-project alerting rules for user-defined projects
You can create alerting rules that are not bound to their project of origin by configuring a project in the `user-workload-monitoring-config` config map. The `PrometheusRule` objects created in these projects are then applicable to all projects.
Therefore, you can have generic alerting rules that apply to multiple user-defined projects instead of having individual `PrometheusRule` objects in each user project. You can filter which projects are included or excluded from the alerting rule by using PromQL queries in the `PrometheusRule` object.
Prerequisites
- If you are a cluster administrator, you have access to the cluster as a user with the `cluster-admin` cluster role.
- If you are a non-administrator user, you have access to the cluster as a user with the following user roles:
  - The `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project to edit the `user-workload-monitoring-config` config map.
  - The `monitoring-rules-edit` cluster role for the project where you want to create an alerting rule.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (`oc`).
Procedure
Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
Configure the projects in which you want to create alerting rules that are not bound to a specific project by listing them in the `namespacesWithoutLabelEnforcement` field, as sketched below. Prometheus and Thanos Ruler for user-defined monitoring do not enforce the `namespace` label in `PrometheusRule` objects created in these projects, which makes the `PrometheusRule` objects applicable to all projects.
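A minimal sketch of the edited config map, assuming a single hypothetical project named `ns1`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    namespacesWithoutLabelEnforcement:
    - ns1   # placeholder: projects listed here can hold cross-project PrometheusRule objects
```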
- Create a YAML file for alerting rules. In this example, it is called `example-cross-project-alerting-rule.yaml`.
- Add an alerting rule configuration to the YAML file. The following example creates a new cross-project alerting rule called `example-security`. The alerting rule fires when a user project does not enforce the restricted pod security policy; a sketch of the rule and an explanation of its fields follow.
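A minimal sketch of `example-cross-project-alerting-rule.yaml`. The PromQL expression is one possible way to express the check and assumes that kube-state-metrics exposes namespace labels through the `kube_namespace_labels` metric; adapt it to the metrics available in your cluster:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-security
  namespace: ns1   # must be a project listed in namespacesWithoutLabelEnforcement
spec:
  groups:
  - name: pod-security-policy
    rules:
    - alert: PodSecurityViolation
      for: 1m   # how long the condition must hold before the alert fires
      # Assumption: kube_namespace_labels carries the pod-security enforcement label.
      # The matcher on the namespace label filters which projects the rule covers.
      expr: kube_namespace_labels{namespace=~"ns.*",label_pod_security_kubernetes_io_enforce!="restricted"} == 1
      labels:
        severity: warning
      annotations:
        message: Project {{ $labels.namespace }} does not enforce the restricted pod security policy.
```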
In the example rule:
- Ensure that the `metadata.namespace` value specifies a project that you defined in the `namespacesWithoutLabelEnforcement` field.
- The `alert` field sets the name of the alerting rule you want to create.
- The `for` field sets the duration for which the condition should be true before an alert is fired.
- The `expr` field sets the PromQL query expression that defines the new rule. You can use label matchers on the `namespace` label to filter which projects are included or excluded from the alerting rule.
- The `message` annotation sets the message associated with the alert.
- The `severity` label sets the severity that the alerting rule assigns to the alert.
Important: Ensure that you create a specific cross-project alerting rule in only one of the projects that you specified in the `namespacesWithoutLabelEnforcement` field. If you create the same cross-project alerting rule in multiple projects, it results in repeated alerts.
Apply the configuration file to the cluster:
$ oc apply -f example-cross-project-alerting-rule.yaml
5.2.4.3. Accessing alerting rules for user-defined projects
To list alerting rules for a user-defined project, you must have been assigned the `monitoring-rules-view` cluster role for the project.
Prerequisites
- You have enabled monitoring for user-defined projects.
- You are logged in as a user that has the `monitoring-rules-view` cluster role for your project.
- You have installed the OpenShift CLI (`oc`).
Procedure
To list alerting rules in `<project>`, run the following:
$ oc -n <project> get prometheusrule
To list the configuration of an alerting rule, run the following:
$ oc -n <project> get prometheusrule <rule> -o yaml
5.2.4.4. Removing alerting rules for user-defined projects
You can remove alerting rules for user-defined projects.
Prerequisites
- You have enabled monitoring for user-defined projects.
- You are logged in as a cluster administrator or as a user that has the `monitoring-rules-edit` cluster role for the project where you want to remove an alerting rule.
- You have installed the OpenShift CLI (`oc`).
Procedure
To remove rule `<alerting_rule>` in `<namespace>`, run the following:
$ oc -n <namespace> delete prometheusrule <alerting_rule>
Chapter 6. Troubleshooting monitoring issues
Find troubleshooting steps for common issues with user-defined project monitoring.
6.1. Determining why user-defined project metrics are unavailable
If metrics are not displaying when monitoring user-defined projects, follow these steps to troubleshoot the issue.
Procedure
Query the metric name and verify that the project is correct:
- In the Developer perspective of the web console, click Observe and go to the Metrics tab.
- Select the project that you want to view metrics for in the Project: list.
Select an existing query from the Select query list, or run a custom query by adding a PromQL query to the Expression field.
The metrics are displayed in a chart.
Queries must be done on a per-project basis. The metrics that are shown relate to the project that you have selected.
Verify that the pod that you want metrics from is actively serving metrics. Run the following `oc exec` command into a pod to target the `podIP`, `port`, and `/metrics` endpoint:
$ oc exec <sample_pod> -n <sample_namespace> -- curl <target_pod_IP>:<port>/metrics
Note: You must run the command on a pod that has `curl` installed.
A result with a valid version metric looks like the sketch below. An invalid output indicates that there is a problem with the corresponding application.
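A minimal sketch of valid output, assuming the pod exposes the sample `version` metric:

```
# HELP version Version information about this binary
# TYPE version gauge
version{version="v0.1.0"} 1
```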
- If you are using a PodMonitor CRD, verify that the PodMonitor CRD is configured to point to the correct pods using label matching. For more information, see the Prometheus Operator documentation.
- If you are using a ServiceMonitor CRD, and if the /metrics endpoint of the pod is showing metric data, follow these steps to verify the configuration:
Verify that the service points to the correct /metrics endpoint. The service labels in this output must match the ServiceMonitor labels and the /metrics endpoint defined by the service in the subsequent steps:
$ oc get service
Example output
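An illustrative sketch of the output, assuming a service named prometheus-example-app; the IP address and age are hypothetical:
NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
prometheus-example-app   ClusterIP   172.30.191.70   <none>        8080/TCP   11h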
Query the service IP, port, and /metrics endpoint to see whether the same metrics are returned as from the curl command that you ran on the pod previously:
Run the following command to find the service IP:
$ oc get service -n <target_namespace>
Query the /metrics endpoint:
$ oc exec <sample_pod> -n <sample_namespace> -- curl <service_IP>:<port>/metrics
Valid metrics are returned in the following example.
Example output
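If the service is wired correctly, this query returns the same metrics as the pod-level query, for example (illustrative):
# HELP version Version information about this binary
# TYPE version gauge
version{version="v0.4.1"} 1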
Use label matching to verify that the ServiceMonitor object is configured to point to the desired service. To do this, compare the Service object from the oc get service output to the ServiceMonitor object from the oc get servicemonitor output. The labels must match for the metrics to be displayed.
For example, from the previous steps, notice how the Service object has the app: prometheus-example-app label and the ServiceMonitor object has the same app: prometheus-example-app match label.
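For reference, the following is a minimal sketch of a matching Service and ServiceMonitor pair; the object names and port are hypothetical, but the label relationship is the point of comparison:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-example-app
  labels:
    app: prometheus-example-app    # label on the Service object
spec:
  ports:
  - name: web
    port: 8080
  selector:
    app: prometheus-example-app
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
spec:
  endpoints:
  - port: web                      # must match the port name defined by the Service
  selector:
    matchLabels:
      app: prometheus-example-app  # must match the label on the Service object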
- If everything looks valid and the metrics are still unavailable, contact the support team for further help.
6.2. Determining why Prometheus is consuming a lot of disk space
Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values.
Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space.
You can use the following measures when Prometheus consumes a lot of disk space:
- Check the time series database (TSDB) status using the Prometheus HTTP API for more information about which labels are creating the most time series data. Doing so requires cluster administrator privileges.
- Check the number of scrape samples that are being collected.
Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics.
Note
Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations.
- Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
- In the Red Hat OpenShift Service on AWS web console, go to Observe → Metrics.
Enter a Prometheus Query Language (PromQL) query in the Expression field. The following example queries help to identify high cardinality metrics that might result in high disk space consumption:
By running the following query, you can identify the ten jobs that have the highest number of scrape samples:
topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))
By running the following query, you can pinpoint time series churn by identifying the ten jobs that have created the most time series data in the last hour:
topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))
Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts:
- If the metrics relate to a user-defined project, review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels.
- If the metrics relate to a core Red Hat OpenShift Service on AWS project, create a Red Hat support case on the Red Hat Customer Portal.
Review the TSDB status using the Prometheus HTTP API by following these steps when logged in as a dedicated-admin:
Get the Prometheus API route URL by running the following command:
$ HOST=$(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}')
Extract an authentication token by running the following command:
$ TOKEN=$(oc whoami -t)
Query the TSDB status for Prometheus by running the following command:
$ curl -H "Authorization: Bearer $TOKEN" -k "https://$HOST/api/v1/status/tsdb"
Example output
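The API returns a JSON document. The following abridged sketch shows the general shape of the response; the metric names and counts are hypothetical:
{
  "status": "success",
  "data": {
    "headStats": {
      "numSeries": 507666,
      "numLabelPairs": 31414,
      "chunkCount": 946298,
      "minTime": 1712253600000,
      "maxTime": 1712257935346
    },
    "seriesCountByMetricName": [
      { "name": "apiserver_request_duration_seconds_bucket", "value": 25824 },
      { "name": "etcd_request_duration_seconds_bucket", "value": 19028 }
    ]
  }
}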
6.3. Resolving the KubePersistentVolumeFillingUp alert firing for Prometheus
As a cluster administrator, you can resolve the KubePersistentVolumeFillingUp alert being triggered for Prometheus.
The critical alert fires when a persistent volume (PV) claimed by a prometheus-k8s-* pod in the openshift-monitoring project has less than 3% total space remaining. This can cause Prometheus to function abnormally.
There are two KubePersistentVolumeFillingUp alerts:
- Critical alert: The alert with the severity="critical" label is triggered when the mounted PV has less than 3% total space remaining.
- Warning alert: The alert with the severity="warning" label is triggered when the mounted PV has less than 15% total space remaining and is expected to fill up within four days.
To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV.
Prerequisites
- You have access to the cluster as a user with the dedicated-admin role.
- You have installed the OpenShift CLI (oc).
Procedure
List the size of all TSDB blocks, sorted from oldest to newest, by running the following command:
$ oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \
  -c prometheus --image=$(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \
  -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \
  -- sh -c 'cd /prometheus/;du -hs $(ls -dtr */ | grep -Eo "[0-9|A-Z]{26}")'
Example output
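An illustrative sketch of the output; the block names (ULIDs) and sizes are hypothetical, with the oldest blocks listed first:
308M    01HVKMPKQWZYWS8WVDAYQHNMW6
316M    01HVKPF3M0FTT3HGYTK0ED4E0F
...
406M    01HVKT7A4B1Q5XTEV8YB2V1C9R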
Identify which blocks can be removed and how many, then remove the blocks. The following example command removes the three oldest Prometheus TSDB blocks from the prometheus-k8s-0 pod:
$ oc debug prometheus-k8s-0 -n openshift-monitoring \
  -c prometheus --image=$(oc get po -n openshift-monitoring prometheus-k8s-0 \
  -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \
  -- sh -c 'ls -latr /prometheus/ | egrep -o "[0-9|A-Z]{26}" | head -3 | \
  while read BLOCK; do rm -r /prometheus/$BLOCK; done'
Verify the usage of the mounted PV and ensure there is enough space available by running the following command:
$ oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \
  --image=$(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \
  -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') -- df -h /prometheus/
The following example output shows the mounted PV claimed by the prometheus-k8s-0 pod that has 63% of space remaining:
Example output
Starting pod/prometheus-k8s-0-debug-j82w4 ...
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/nvme0n1p4  40G   15G   40G    37%   /prometheus

Removing debug pod ...
Chapter 7. Config map reference for the Cluster Monitoring Operator
7.1. Cluster Monitoring Operator configuration reference
Parts of Red Hat OpenShift Service on AWS cluster monitoring are configurable. The API is accessible by setting parameters defined in various config maps.
- To configure monitoring components that monitor user-defined projects, edit the ConfigMap object named user-workload-monitoring-config in the openshift-user-workload-monitoring namespace. These configurations are defined by UserWorkloadConfiguration.
The configuration file is always defined under the config.yaml key in the config map data.
- Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in this reference are supported for configuration. For more information about supported configurations, see Maintenance and support for monitoring.
- Configuring cluster monitoring is optional.
- If a configuration does not exist or is empty, default values are used.
- If the configuration has invalid YAML data, or if it contains unsupported or duplicated fields that bypassed early validation, the Cluster Monitoring Operator stops reconciling the resources and reports the Degraded=True status in the status conditions of the Operator.
7.2. AdditionalAlertmanagerConfig
7.2.1. Description
The AdditionalAlertmanagerConfig resource defines settings for how a component communicates with additional Alertmanager instances.
7.2.2. Required
- apiVersion
Appears in: PrometheusK8sConfig, PrometheusRestrictedConfig, ThanosRulerConfig
Property | Type | Description |
---|---|---|
apiVersion | string | Defines the API version of Alertmanager. |
bearerToken | *v1.SecretKeySelector | Defines the secret key reference containing the bearer token to use when authenticating to Alertmanager. |
pathPrefix | string | Defines the path prefix to add in front of the push endpoint path. |
scheme | string | Defines the URL scheme to use when communicating with Alertmanager instances. Possible values are http or https. |
staticConfigs | []string | A list of statically configured Alertmanager endpoints in the form of <hosts>:<port>. |
timeout | *string | Defines the timeout value used when sending alerts. |
tlsConfig | TLSConfig | Defines the TLS settings to use for Alertmanager connections. |
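For orientation, the following is a minimal sketch of how these fields might be set for Thanos Ruler in the user-workload-monitoring-config config map; the endpoint host and secret names are hypothetical:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      additionalAlertmanagerConfigs:
      - scheme: https
        apiVersion: v2
        timeout: "30s"
        staticConfigs:
        - external-alertmanager.example.com:9093   # hypothetical endpoint
        tlsConfig:
          ca:
            name: alertmanager-ca-bundle           # hypothetical secret
            key: ca.crt
          insecureSkipVerify: false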
7.3. AlertmanagerMainConfig
7.3.1. Description
The AlertmanagerMainConfig resource defines settings for the Alertmanager component in the openshift-monitoring namespace.
Appears in: ClusterMonitoringConfiguration
Property | Type | Description |
---|---|---|
enabled | *bool | A Boolean flag that enables or disables the main Alertmanager instance in the openshift-monitoring namespace. The default value is true. |
enableUserAlertmanagerConfig | bool | A Boolean flag that enables or disables user-defined namespaces to be selected for AlertmanagerConfig lookups. This setting only applies if the user workload monitoring instance of Alertmanager is not enabled. The default value is false. |
logLevel | string | Defines the log level setting for Alertmanager. The possible values are error, warn, info, and debug. The default value is info. |
nodeSelector | map[string]string | Defines the nodes on which the Pods are scheduled. |
resources | *v1.ResourceRequirements | Defines resource requests and limits for the Alertmanager container. |
secrets | []string | Defines a list of secrets to be mounted into Alertmanager. The secrets must reside within the same namespace as the Alertmanager object. They are added as volumes named secret-<secret-name> and mounted at /etc/alertmanager/secrets/<secret-name> in the alertmanager container of the Alertmanager pods. |
tolerations | []v1.Toleration | Defines tolerations for the pods. |
topologySpreadConstraints | []v1.TopologySpreadConstraint | Defines a pod’s topology spread constraints. |
volumeClaimTemplate | *monv1.EmbeddedPersistentVolumeClaim | Defines persistent storage for Alertmanager. Use this setting to configure the persistent volume claim, including storage class, volume size, and name. |
7.4. AlertmanagerUserWorkloadConfig
7.4.1. Description
The AlertmanagerUserWorkloadConfig resource defines the settings for the Alertmanager instance used for user-defined projects.
Appears in: UserWorkloadConfiguration
Property | Type | Description |
---|---|---|
enabled | bool | A Boolean flag that enables or disables a dedicated instance of Alertmanager for user-defined alerts in the openshift-user-workload-monitoring namespace. The default value is false. |
enableAlertmanagerConfig | bool | A Boolean flag to enable or disable user-defined namespaces to be selected for AlertmanagerConfig lookups. The default value is false. |
logLevel | string | Defines the log level setting for Alertmanager for user workload monitoring. The possible values are error, warn, info, and debug. The default value is info. |
resources | *v1.ResourceRequirements | Defines resource requests and limits for the Alertmanager container. |
secrets | []string | Defines a list of secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object. They are added as volumes named secret-<secret-name> and mounted at /etc/alertmanager/secrets/<secret-name> in the alertmanager container of the Alertmanager pods. |
nodeSelector | map[string]string | Defines the nodes on which the pods are scheduled. |
tolerations | []v1.Toleration | Defines tolerations for the pods. |
topologySpreadConstraints | []v1.TopologySpreadConstraint | Defines a pod’s topology spread constraints. |
volumeClaimTemplate | *monv1.EmbeddedPersistentVolumeClaim | Defines persistent storage for Alertmanager. Use this setting to configure the persistent volume claim, including storage class, volume size, and name. |
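A minimal sketch that enables the dedicated Alertmanager instance for user-defined alerts, assuming you are editing the user-workload-monitoring-config config map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    alertmanager:
      enabled: true                   # deploy a dedicated Alertmanager for user-defined alerts
      enableAlertmanagerConfig: true  # allow AlertmanagerConfig lookups in user namespaces
      logLevel: info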
7.5. ClusterMonitoringConfiguration
7.5.1. Description
The ClusterMonitoringConfiguration resource defines settings that customize the default platform monitoring stack through the cluster-monitoring-config config map in the openshift-monitoring namespace.
Property | Type | Description |
---|---|---|
alertmanagerMain | *AlertmanagerMainConfig | Defines the settings for the Alertmanager component in the openshift-monitoring namespace. |
enableUserWorkload | *bool | A Boolean flag that enables monitoring for user-defined projects. The default value is false. |
userWorkload | *UserWorkloadConfig | Defines the settings for the monitoring of user-defined projects. |
kubeStateMetrics | *KubeStateMetricsConfig | Defines the settings for the kube-state-metrics agent. |
metricsServer | *MetricsServerConfig | Defines the settings for the Metrics Server component. |
prometheusK8s | *PrometheusK8sConfig | Defines the settings for the Prometheus component. |
prometheusOperator | *PrometheusOperatorConfig | Defines the settings for the Prometheus Operator component. |
prometheusOperatorAdmissionWebhook | *PrometheusOperatorAdmissionWebhookConfig | Defines the settings for the admission webhook workload for Prometheus Operator. |
openshiftStateMetrics | *OpenShiftStateMetricsConfig | Defines the settings for the openshift-state-metrics agent. |
telemeterClient | *TelemeterClientConfig | Defines the settings for the Telemeter Client component. |
thanosQuerier | *ThanosQuerierConfig | Defines the settings for the Thanos Querier component. |
nodeExporter | *NodeExporterConfig | Defines the settings for the node-exporter agent. |
monitoringPlugin | *MonitoringPluginConfig | Defines the settings for the web console plugin component in the openshift-monitoring namespace. |
7.6. KubeStateMetricsConfig
7.6.1. Description
The KubeStateMetricsConfig resource defines settings for the kube-state-metrics agent.
Appears in: ClusterMonitoringConfiguration
Property | Type | Description |
---|---|---|
nodeSelector | map[string]string | Defines the nodes on which the pods are scheduled. |
resources | *v1.ResourceRequirements | Defines resource requests and limits for the kube-state-metrics container. |
tolerations | []v1.Toleration | Defines tolerations for the pods. |
topologySpreadConstraints | []v1.TopologySpreadConstraint | Defines a pod’s topology spread constraints. |
7.7. MetricsServerConfig
7.7.1. Description
The MetricsServerConfig resource defines settings for the Metrics Server component.
Appears in: ClusterMonitoringConfiguration
Property | Type | Description |
---|---|---|
audit | *Audit | Defines the audit configuration used by the Metrics Server instance. Possible profile values are metadata, request, requestresponse, and none. The default value is metadata. |
nodeSelector | map[string]string | Defines the nodes on which the pods are scheduled. |
tolerations | []v1.Toleration | Defines tolerations for the pods. |
resources | *v1.ResourceRequirements | Defines resource requests and limits for the Metrics Server container. |
topologySpreadConstraints | []v1.TopologySpreadConstraint | Defines a pod’s topology spread constraints. |
7.8. MonitoringPluginConfig
7.8.1. Description
The MonitoringPluginConfig resource defines settings for the web console plugin component in the openshift-monitoring namespace.
Appears in: ClusterMonitoringConfiguration
Property | Type | Description |
---|---|---|
nodeSelector | map[string]string | Defines the nodes on which the pods are scheduled. |
resources | *v1.ResourceRequirements | Defines resource requests and limits for the console-plugin container. |
tolerations | []v1.Toleration | Defines tolerations for the pods. |
topologySpreadConstraints | []v1.TopologySpreadConstraint | Defines a pod’s topology spread constraints. |
7.9. NodeExporterCollectorBuddyInfoConfig
7.9.1. Description
The NodeExporterCollectorBuddyInfoConfig resource works as an on/off switch for the buddyinfo collector of the node-exporter agent. By default, the buddyinfo collector is disabled.
Appears in: NodeExporterCollectorConfig
Property | Type | Description |
---|---|---|
enabled | bool | A Boolean flag that enables or disables the buddyinfo collector. |
7.10. NodeExporterCollectorConfig
7.10.1. Description
The NodeExporterCollectorConfig resource defines settings for individual collectors of the node-exporter agent.
Appears in: NodeExporterConfig
Property | Type | Description |
---|---|---|
cpufreq | NodeExporterCollectorCpufreqConfig | Defines the configuration of the cpufreq collector. |
tcpstat | NodeExporterCollectorTcpStatConfig | Defines the configuration of the tcpstat collector. |
netdev | NodeExporterCollectorNetDevConfig | Defines the configuration of the netdev collector. |
netclass | NodeExporterCollectorNetClassConfig | Defines the configuration of the netclass collector. |
buddyinfo | NodeExporterCollectorBuddyInfoConfig | Defines the configuration of the buddyinfo collector. |
mountstats | NodeExporterCollectorMountStatsConfig | Defines the configuration of the mountstats collector. |
ksmd | NodeExporterCollectorKSMDConfig | Defines the configuration of the ksmd collector. |
processes | NodeExporterCollectorProcessesConfig | Defines the configuration of the processes collector. |
systemd | NodeExporterCollectorSystemdConfig | Defines the configuration of the systemd collector. |
7.11. NodeExporterCollectorCpufreqConfig
7.11.1. Description
Use the NodeExporterCollectorCpufreqConfig resource to enable or disable the cpufreq collector of the node-exporter agent. By default, the cpufreq collector is disabled. Under certain circumstances, enabling the cpufreq collector increases CPU usage on machines with many cores. If you enable this collector and have machines with many cores, monitor your systems closely for excessive CPU usage.
Appears in: NodeExporterCollectorConfig
Property | Type | Description |
---|---|---|
enabled | bool | A Boolean flag that enables or disables the cpufreq collector. |
7.12. NodeExporterCollectorKSMDConfig
7.12.1. Description
Use the NodeExporterCollectorKSMDConfig resource to enable or disable the ksmd collector of the node-exporter agent. By default, the ksmd collector is disabled.
Appears in: NodeExporterCollectorConfig
Property | Type | Description |
---|---|---|
enabled | bool | A Boolean flag that enables or disables the ksmd collector. |
7.13. NodeExporterCollectorMountStatsConfig
7.13.1. Description
Use the NodeExporterCollectorMountStatsConfig resource to enable or disable the mountstats collector of the node-exporter agent. By default, the mountstats collector is disabled. If you enable the collector, the following metrics become available: node_mountstats_nfs_read_bytes_total, node_mountstats_nfs_write_bytes_total, and node_mountstats_nfs_operations_requests_total. Be aware that these metrics can have a high cardinality. If you enable this collector, closely monitor any increases in memory usage for the prometheus-k8s pods.
Appears in: NodeExporterCollectorConfig
Property | Type | Description |
---|---|---|
enabled | bool | A Boolean flag that enables or disables the mountstats collector. |
7.14. NodeExporterCollectorNetClassConfig
7.14.1. Description
Use the NodeExporterCollectorNetClassConfig resource to enable or disable the netclass collector of the node-exporter agent. By default, the netclass collector is enabled. If you disable this collector, these metrics become unavailable: node_network_info, node_network_address_assign_type, node_network_carrier, node_network_carrier_changes_total, node_network_carrier_up_changes_total, node_network_carrier_down_changes_total, node_network_device_id, node_network_dormant, node_network_flags, node_network_iface_id, node_network_iface_link, node_network_iface_link_mode, node_network_mtu_bytes, node_network_name_assign_type, node_network_net_dev_group, node_network_speed_bytes, node_network_transmit_queue_length, and node_network_protocol_type.
Appears in: NodeExporterCollectorConfig
Property | Type | Description |
---|---|---|
enabled | bool | A Boolean flag that enables or disables the netclass collector. |
useNetlink | bool | A Boolean flag that activates the netlink implementation of the netclass collector. The default value is true, which activates the netlink mode. This implementation improves the performance of the netclass collector. |
7.15. NodeExporterCollectorNetDevConfig
7.15.1. Description
Use the NodeExporterCollectorNetDevConfig resource to enable or disable the netdev collector of the node-exporter agent. By default, the netdev collector is enabled. If disabled, these metrics become unavailable: node_network_receive_bytes_total, node_network_receive_compressed_total, node_network_receive_drop_total, node_network_receive_errs_total, node_network_receive_fifo_total, node_network_receive_frame_total, node_network_receive_multicast_total, node_network_receive_nohandler_total, node_network_receive_packets_total, node_network_transmit_bytes_total, node_network_transmit_carrier_total, node_network_transmit_colls_total, node_network_transmit_compressed_total, node_network_transmit_drop_total, node_network_transmit_errs_total, node_network_transmit_fifo_total, and node_network_transmit_packets_total.
Appears in: NodeExporterCollectorConfig
Property | Type | Description |
---|---|---|
enabled | bool | A Boolean flag that enables or disables the netdev collector. |
7.16. NodeExporterCollectorProcessesConfig
7.16.1. Description
Use the NodeExporterCollectorProcessesConfig resource to enable or disable the processes collector of the node-exporter agent. If the collector is enabled, the following metrics become available: node_processes_max_processes, node_processes_pids, node_processes_state, node_processes_threads, and node_processes_threads_state. The metrics node_processes_state and node_processes_threads_state can have up to five series each, depending on the state of the processes and threads. The possible states of a process or a thread are: D (UNINTERRUPTABLE_SLEEP), R (RUNNING & RUNNABLE), S (INTERRUPTABLE_SLEEP), T (STOPPED), or Z (ZOMBIE). By default, the processes collector is disabled.
Appears in: NodeExporterCollectorConfig
Property | Type | Description |
---|---|---|
enabled | bool | A Boolean flag that enables or disables the processes collector. |
7.17. NodeExporterCollectorSystemdConfig
7.17.1. Description
Use the NodeExporterCollectorSystemdConfig resource to enable or disable the systemd collector of the node-exporter agent. By default, the systemd collector is disabled. If enabled, the following metrics become available: node_systemd_system_running, node_systemd_units, and node_systemd_version. If the unit uses a socket, it also generates the following metrics: node_systemd_socket_accepted_connections_total, node_systemd_socket_current_connections, and node_systemd_socket_refused_connections_total. You can use the units parameter to select the systemd units to be included by the systemd collector. The selected units are used to generate the node_systemd_unit_state metric, which shows the state of each systemd unit. However, this metric’s cardinality might be high (at least five series per unit per node). If you enable this collector with a long list of selected units, closely monitor the prometheus-k8s deployment for excessive memory usage. Note that the node_systemd_timer_last_trigger_seconds metric is only shown if you have configured the value of the units parameter as logrotate.timer.
Appears in: NodeExporterCollectorConfig
Property | Type | Description |
---|---|---|
enabled | bool | A Boolean flag that enables or disables the systemd collector. |
units | []string | A list of regular expression (regex) patterns that match systemd units to be included by the systemd collector. By default, the list is empty, so the collector exposes no metrics for systemd units. |
7.18. NodeExporterCollectorTcpStatConfig
7.18.1. Description
The NodeExporterCollectorTcpStatConfig resource works as an on/off switch for the tcpstat collector of the node-exporter agent. By default, the tcpstat collector is disabled.
Appears in: NodeExporterCollectorConfig
Property | Type | Description |
---|---|---|
enabled | bool | A Boolean flag that enables or disables the tcpstat collector. |
7.19. NodeExporterConfig
7.19.1. Description
The NodeExporterConfig resource defines settings for the node-exporter agent.
Appears in: ClusterMonitoringConfiguration
Property | Type | Description |
---|---|---|
collectors | NodeExporterCollectorConfig | Defines which collectors are enabled and their additional configuration parameters. |
maxProcs | uint32 | The target number of CPUs on which the node-exporter’s process will run. The default value is 4. |
ignoredNetworkDevices | *[]string | A list of network devices, defined as regular expressions, that you want to exclude from the relevant collector configuration such as netdev and netclass. If the list is not set, the Cluster Monitoring Operator uses a predefined list of devices to be excluded to minimize the impact on memory usage. |
resources | *v1.ResourceRequirements | Defines resource requests and limits for the node-exporter container. |
7.20. OpenShiftStateMetricsConfig
7.20.1. Description
The OpenShiftStateMetricsConfig resource defines settings for the openshift-state-metrics agent.
Appears in: ClusterMonitoringConfiguration
Property | Type | Description |
---|---|---|
nodeSelector | map[string]string | Defines the nodes on which the pods are scheduled. |
resources | *v1.ResourceRequirements | Defines resource requests and limits for the openshift-state-metrics container. |
tolerations | []v1.Toleration | Defines tolerations for the pods. |
topologySpreadConstraints | []v1.TopologySpreadConstraint | Defines the pod’s topology spread constraints. |
7.21. PrometheusK8sConfig
7.21.1. Description
The PrometheusK8sConfig resource defines settings for the Prometheus component.
Appears in: ClusterMonitoringConfiguration
Property | Type | Description |
---|---|---|
additionalAlertmanagerConfigs | []AdditionalAlertmanagerConfig | Configures additional Alertmanager instances that receive alerts from the Prometheus component. By default, no additional Alertmanager instances are configured. |
enforcedBodySizeLimit | string | Enforces a body size limit for Prometheus scraped metrics. If a scraped target’s body response is larger than the limit, the scrape will fail. The following values are valid: an empty value to specify no limit, a numeric value in Prometheus size format (such as 64MB), or the string automatic, which indicates that the limit is automatically calculated based on cluster capabilities. The default value is empty, which indicates no limit. |
externalLabels | map[string]string | Defines labels to be added to any time series or alerts when communicating with external systems such as federation, remote storage, and Alertmanager. By default, no labels are added. |
logLevel | string | Defines the log level setting for Prometheus. The possible values are error, warn, info, and debug. The default value is info. |
nodeSelector | map[string]string | Defines the nodes on which the pods are scheduled. |
queryLogFile | string | Specifies the file to which PromQL queries are logged. This setting can be either a filename, in which case the queries are saved to an emptyDir volume at /var/log/prometheus, or a full path to a location where an emptyDir volume will be mounted and the queries saved. Writing to /dev/stderr, /dev/stdout, or /dev/null is supported, but writing to any other /dev/ path is not supported. Relative paths are also not supported. By default, PromQL queries are not logged. |
remoteWrite | []RemoteWriteSpec | Defines the remote write configuration, including URL, authentication, and relabeling settings. |
resources | *v1.ResourceRequirements | Defines resource requests and limits for the Prometheus container. |
retention | string | Defines the duration for which Prometheus retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s = seconds, m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 15d. |
retentionSize | string | Defines the maximum amount of disk space used by data blocks plus the write-ahead log (WAL). Supported values are B, KB, KiB, MB, MiB, GB, GiB, TB, TiB, PB, PiB, EB, and EiB. By default, no limit is set. |
tolerations | []v1.Toleration | Defines tolerations for the pods. |
topologySpreadConstraints | []v1.TopologySpreadConstraint | Defines the pod’s topology spread constraints. |
collectionProfile | CollectionProfile | Defines the metrics collection profile that Prometheus uses to collect metrics from the platform components. Supported values are full or minimal. The default value is full. |
volumeClaimTemplate | *monv1.EmbeddedPersistentVolumeClaim | Defines persistent storage for Prometheus. Use this setting to configure the persistent volume claim, including storage class, volume size, and name. |
7.22. PrometheusOperatorConfig
7.22.1. Description
The PrometheusOperatorConfig resource defines settings for the Prometheus Operator component.
Appears in: ClusterMonitoringConfiguration, UserWorkloadConfiguration
Property | Type | Description |
---|---|---|
logLevel | string | Defines the log level settings for Prometheus Operator. The possible values are error, warn, info, and debug. The default value is info. |
nodeSelector | map[string]string | Defines the nodes on which the pods are scheduled. |
resources | *v1.ResourceRequirements | Defines resource requests and limits for the prometheus-operator container. |
tolerations | []v1.Toleration | Defines tolerations for the pods. |
topologySpreadConstraints | []v1.TopologySpreadConstraint | Defines the pod’s topology spread constraints. |
7.23. PrometheusOperatorAdmissionWebhookConfig
7.23.1. Description
The PrometheusOperatorAdmissionWebhookConfig resource defines settings for the admission webhook workload for Prometheus Operator.
Appears in: ClusterMonitoringConfiguration
Property | Type | Description |
---|---|---|
resources | *v1.ResourceRequirements | Defines resource requests and limits for the prometheus-operator-admission-webhook container. |
topologySpreadConstraints | []v1.TopologySpreadConstraint | Defines a pod’s topology spread constraints. |
7.24. PrometheusRestrictedConfig
7.24.1. Description
The PrometheusRestrictedConfig resource defines the settings for the Prometheus component that monitors user-defined projects.
Appears in: UserWorkloadConfiguration
Property | Type | Description |
---|---|---|
scrapeInterval | string | Configures the default interval between consecutive scrapes in case the ServiceMonitor or PodMonitor resource does not specify any value. The interval must be set between 5 seconds and 5 minutes. The default value is 30s. |
evaluationInterval | string | Configures the default interval between rule evaluations in case the PrometheusRule resource does not specify any value. The interval must be set between 5 seconds and 5 minutes. The default value is 30s. |
additionalAlertmanagerConfigs | []AdditionalAlertmanagerConfig | Configures additional Alertmanager instances that receive alerts from the Prometheus component. By default, no additional Alertmanager instances are configured. |
enforcedLabelLimit | *uint64 | Specifies a per-scrape limit on the number of labels accepted for a sample. If the number of labels exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0, which means that no limit is set. |
enforcedLabelNameLengthLimit | *uint64 | Specifies a per-scrape limit on the length of a label name for a sample. If the length of a label name exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0, which means that no limit is set. |
enforcedLabelValueLengthLimit | *uint64 | Specifies a per-scrape limit on the length of a label value for a sample. If the length of a label value exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0, which means that no limit is set. |
enforcedSampleLimit | *uint64 | Specifies a global limit on the number of scraped samples that will be accepted. This setting overrides the SampleLimit value set in any user-defined ServiceMonitor or PodMonitor object. Administrators can use this setting to keep the overall number of samples under control. The default value is 0, which means that no limit is set. |
enforcedTargetLimit | *uint64 | Specifies a global limit on the number of scraped targets. This setting overrides the TargetLimit value set in any user-defined ServiceMonitor or PodMonitor object. Administrators can use this setting to keep the overall number of targets under control. The default value is 0, which means that no limit is set. |
externalLabels | map[string]string | Defines labels to be added to any time series or alerts when communicating with external systems such as federation, remote storage, and Alertmanager. By default, no labels are added. |
logLevel | string | Defines the log level setting for Prometheus. The possible values are error, warn, info, and debug. The default value is info. |
nodeSelector | map[string]string | Defines the nodes on which the pods are scheduled. |
queryLogFile | string | Specifies the file to which PromQL queries are logged. This setting can be either a filename, in which case the queries are saved to an emptyDir volume at /var/log/prometheus, or a full path to a location where an emptyDir volume will be mounted and the queries saved. Writing to /dev/stderr, /dev/stdout, or /dev/null is supported, but writing to any other /dev/ path is not supported. Relative paths are also not supported. By default, PromQL queries are not logged. |
remoteWrite | []RemoteWriteSpec | Defines the remote write configuration, including URL, authentication, and relabeling settings. |
resources | *v1.ResourceRequirements | Defines resource requests and limits for the Prometheus container. |
retention | string | Defines the duration for which Prometheus retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s = seconds, m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 24h. |
retentionSize | string | Defines the maximum amount of disk space used by data blocks plus the write-ahead log (WAL). Supported values are B, KB, KiB, MB, MiB, GB, GiB, TB, TiB, PB, PiB, EB, and EiB. By default, no limit is set. |
tolerations | []v1.Toleration | Defines tolerations for the pods. |
topologySpreadConstraints | []v1.TopologySpreadConstraint | Defines the pod’s topology spread constraints. |
volumeClaimTemplate | *monv1.EmbeddedPersistentVolumeClaim | Defines persistent storage for Prometheus. Use this setting to configure the storage class and size of a volume. |
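A minimal sketch showing how some of these fields might be combined in the user-workload-monitoring-config config map; the retention, size, and limit values are illustrative, not recommendations:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      logLevel: info
      retention: 24h               # keep data for one day
      retentionSize: 10GB          # cap data blocks plus WAL at 10GB
      enforcedSampleLimit: 50000   # global per-scrape sample cap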
7.25. RemoteWriteSpec
7.25.1. Description
The RemoteWriteSpec resource defines the settings for remote write storage.
7.25.2. Required
- url
Appears in: PrometheusK8sConfig, PrometheusRestrictedConfig
Property | Type | Description |
---|---|---|
authorization | *monv1.SafeAuthorization | Defines the authorization settings for remote write storage. |
basicAuth | *monv1.BasicAuth | Defines Basic authentication settings for the remote write endpoint URL. |
bearerTokenFile | string | Defines the file that contains the bearer token for the remote write endpoint. However, because you cannot mount secrets in a pod, in practice you can only reference the token of the service account. |
headers | map[string]string | Specifies the custom HTTP headers to be sent along with each remote write request. Headers set by Prometheus cannot be overwritten. |
metadataConfig | *monv1.MetadataConfig | Defines settings for sending series metadata to remote write storage. |
name | string | Defines the name of the remote write queue. This name is used in metrics and logging to differentiate queues. If specified, this name must be unique. |
oauth2 | *monv1.OAuth2 | Defines OAuth2 authentication settings for the remote write endpoint. |
proxyUrl | string | Defines an optional proxy URL. If the cluster-wide proxy is enabled, it replaces the proxyUrl setting. The cluster-wide proxy supports both HTTP and HTTPS proxies, with HTTPS taking precedence. |
queueConfig | *monv1.QueueConfig | Allows tuning configuration for remote write queue parameters. |
remoteTimeout | string | Defines the timeout value for requests to the remote write endpoint. |
sendExemplars | *bool | Enables sending exemplars via remote write. When enabled, this setting configures Prometheus to store a maximum of 100,000 exemplars in memory. This setting only applies to user-defined monitoring and is not applicable to core platform monitoring. |
sigv4 | *monv1.Sigv4 | Defines AWS Signature Version 4 authentication settings. |
tlsConfig | *monv1.SafeTLSConfig | Defines TLS authentication settings for the remote write endpoint. |
url | string | Defines the URL of the remote write endpoint to which samples will be sent. |
writeRelabelConfigs | []monv1.RelabelConfig | Defines the list of remote write relabel configurations. |
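A minimal sketch of a remote write entry for user workload monitoring; the endpoint URL and secret name are hypothetical:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write.example.com/api/v1/write"  # hypothetical endpoint
        basicAuth:
          username:
            name: remote-write-credentials                    # hypothetical secret
            key: username
          password:
            name: remote-write-credentials
            key: password
        remoteTimeout: "30s"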
7.26. TLSConfig
7.26.1. Description
The TLSConfig resource configures the settings for TLS connections.
7.26.2. Required
- insecureSkipVerify
Appears in: AdditionalAlertmanagerConfig
Property | Type | Description |
---|---|---|
ca | *v1.SecretKeySelector | Defines the secret key reference containing the Certificate Authority (CA) to use for the remote host. |
cert | *v1.SecretKeySelector | Defines the secret key reference containing the public certificate to use for the remote host. |
key | *v1.SecretKeySelector | Defines the secret key reference containing the private key to use for the remote host. |
serverName | string | Used to verify the hostname on the returned certificate. |
insecureSkipVerify | bool | When set to true, the certificate of the server is not verified. |
7.27. TelemeterClientConfig
7.27.1. Description
TelemeterClientConfig defines settings for the Telemeter Client component.
7.27.2. Required
- nodeSelector
- tolerations
Appears in: ClusterMonitoringConfiguration
Property | Type | Description |
---|---|---|
nodeSelector | map[string]string | Defines the nodes on which the pods are scheduled. |
resources | *v1.ResourceRequirements | Defines resource requests and limits for the telemeter-client container. |
tolerations | []v1.Toleration | Defines tolerations for the pods. |
topologySpreadConstraints | []v1.TopologySpreadConstraint | Defines the pod’s topology spread constraints. |
7.28. ThanosQuerierConfig
7.28.1. Description
The ThanosQuerierConfig resource defines settings for the Thanos Querier component.
Appears in: ClusterMonitoringConfiguration
Property | Type | Description |
---|---|---|
enableRequestLogging | bool | A Boolean flag that enables or disables request logging. The default value is false. |
logLevel | string | Defines the log level setting for Thanos Querier. The possible values are error, warn, info, and debug. The default value is info. |
enableCORS | bool | A Boolean flag that enables setting CORS headers. The headers allow access from any origin. The default value is false. |
nodeSelector | map[string]string | Defines the nodes on which the pods are scheduled. |
resources | *v1.ResourceRequirements | Defines resource requests and limits for the Thanos Querier container. |
tolerations | []v1.Toleration | Defines tolerations for the pods. |
topologySpreadConstraints | []v1.TopologySpreadConstraint | Defines the pod’s topology spread constraints. |
7.29. ThanosRulerConfig
7.29.1. Description
The ThanosRulerConfig resource defines configuration for the Thanos Ruler instance for user-defined projects.
Appears in: UserWorkloadConfiguration
Property | Type | Description |
---|---|---|
additionalAlertmanagerConfigs | []AdditionalAlertmanagerConfig | Configures how the Thanos Ruler component communicates with additional Alertmanager instances. The Cluster Monitoring Operator reads the cluster-wide proxy settings and configures the appropriate proxy URL for the Alertmanager endpoints. All Alertmanager endpoints in this group are expected to use the same proxy URL. Endpoints that bypass the cluster proxy should be placed in a separate group. By default, no additional Alertmanager instances are configured. |
evaluationInterval | string | Configures the default interval between Prometheus rule evaluations in case the PrometheusRule resource does not specify any value. The interval must be set between 5 seconds and 5 minutes. The default value is 15s. |
logLevel | string | Defines the log level setting for Thanos Ruler. The possible values are error, warn, info, and debug. The default value is info. |
nodeSelector | map[string]string | Defines the nodes on which the Pods are scheduled. |
resources | *v1.ResourceRequirements | Defines resource requests and limits for the Thanos Ruler container. |
retention | string | Defines the duration for which Thanos Ruler retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s = seconds, m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 24h. |
tolerations | []v1.Toleration | Defines tolerations for the pods. |
topologySpreadConstraints | []v1.TopologySpreadConstraint | Defines the pod’s topology spread constraints. |
volumeClaimTemplate | *monv1.EmbeddedPersistentVolumeClaim | Defines persistent storage for Thanos Ruler. Use this setting to configure the storage class and size of a volume. |
7.30. UserWorkloadConfig
7.30.1. Description
The UserWorkloadConfig resource defines settings for the monitoring of user-defined projects.
Appears in: ClusterMonitoringConfiguration
Property | Type | Description |
---|---|---|
rulesWithoutLabelEnforcementAllowed | *bool | A Boolean flag that enables or disables the ability to deploy user-defined PrometheusRule objects for which the namespace label is not enforced to the namespace of the object. Such objects must be created in a namespace configured under the namespacesWithoutLabelEnforcement property of the UserWorkloadConfiguration resource. The default value is true. |
7.31. UserWorkloadConfiguration
7.31.1. Description
The UserWorkloadConfiguration resource defines the settings responsible for user-defined projects in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. You can only enable UserWorkloadConfiguration after you have set enableUserWorkload to true in the cluster-monitoring-config config map under the openshift-monitoring namespace.
Property | Type | Description |
---|---|---|
alertmanager | *AlertmanagerUserWorkloadConfig | Defines the settings for the Alertmanager component in user workload monitoring. |
prometheus | *PrometheusRestrictedConfig | Defines the settings for the Prometheus component in user workload monitoring. |
prometheusOperator | *PrometheusOperatorConfig | Defines the settings for the Prometheus Operator component in user workload monitoring. |
thanosRuler | *ThanosRulerConfig | Defines the settings for the Thanos Ruler component in user workload monitoring. |
namespacesWithoutLabelEnforcement | []string | Defines the list of namespaces for which Prometheus and Thanos Ruler in user-defined monitoring do not enforce the namespace label value in PrometheusRule objects. The namespacesWithoutLabelEnforcement property allows users to define recording and alerting rules that can query across multiple projects. To make the resulting alerts and metrics visible to project users, the query expressions should return a namespace label. |
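A minimal sketch that allows cross-project rules for two hypothetical projects, ns1 and ns2:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    namespacesWithoutLabelEnforcement:
    - ns1
    - ns2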
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.