Chapter 7. Configuring user workload monitoring


7.1. Preparing to configure the user workload monitoring stack

This section explains which user-defined monitoring components can be configured, how to enable user workload monitoring, and how to prepare for configuring the user workload monitoring stack.

7.1.1. Configurable monitoring components

This table shows the monitoring components you can configure and the keys used to specify the components in the user-workload-monitoring-config config map.

Table 7.1. Configurable monitoring components for user-defined projects
Component               user-workload-monitoring-config config map key

Prometheus Operator     prometheusOperator
Prometheus              prometheus
Alertmanager            alertmanager
Thanos Ruler            thanosRuler
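
For example, a user-workload-monitoring-config ConfigMap object that touches each configurable component might use the keys as follows. This is a minimal sketch: the nested settings shown (such as logLevel) are illustrative, and you should consult the config map reference for the options supported in your version:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheusOperator: {}
        prometheus:
          logLevel: info
        alertmanager:
          enabled: true
        thanosRuler: {}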

Warning

Different configuration changes to the ConfigMap object result in different outcomes:

  • The pods are not redeployed. Therefore, there is no service outage.
  • The affected pods are redeployed:

    • For single-node clusters, this results in a temporary service outage.
    • For multi-node clusters, because of high availability, the affected pods are gradually rolled out and the monitoring stack remains available.

  • Configuring and resizing a persistent volume always results in a service outage, regardless of high availability.

Each procedure that requires a change in the config map includes its expected outcome.

7.1.2. Enabling monitoring for user-defined projects

In OpenShift Container Platform, you can enable monitoring for user-defined projects in addition to the default platform monitoring. You can monitor your own projects in OpenShift Container Platform without the need for an additional monitoring solution. Using this feature centralizes monitoring for core platform components and user-defined projects.

Note

Versions of Prometheus Operator installed using Operator Lifecycle Manager (OLM) are not compatible with user-defined monitoring. Therefore, custom Prometheus instances installed as a Prometheus custom resource (CR) managed by the OLM Prometheus Operator are not supported in OpenShift Container Platform.

7.1.2.1. Enabling monitoring for user-defined projects

Cluster administrators can enable monitoring for user-defined projects by setting the enableUserWorkload: true field in the cluster monitoring ConfigMap object.

Important

You must remove any custom Prometheus instances before enabling monitoring for user-defined projects.
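
One way to check for custom instances, assuming the Prometheus Operator custom resource definitions are installed, is to list Prometheus resources across all namespaces and review any that exist outside the openshift-monitoring and openshift-user-workload-monitoring projects:

    $ oc get prometheus --all-namespaces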

Note

You must have access to the cluster as a user with the cluster-admin cluster role to enable monitoring for user-defined projects in OpenShift Container Platform. Cluster administrators can then optionally grant users permission to configure the components that are responsible for monitoring user-defined projects.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have installed the OpenShift CLI (oc).
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. You can add configuration options to this ConfigMap object for the components that monitor user-defined projects.

    Note

    Every time you save configuration changes to the user-workload-monitoring-config ConfigMap object, the pods in the openshift-user-workload-monitoring project are redeployed. It might sometimes take a while for these components to redeploy.

Procedure

  1. Edit the cluster-monitoring-config ConfigMap object:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add enableUserWorkload: true under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        enableUserWorkload: true 1
    1
    When set to true, the enableUserWorkload parameter enables monitoring for user-defined projects in a cluster.
  3. Save the file to apply the changes. Monitoring for user-defined projects is then enabled automatically.

    Note

    If you enable monitoring for user-defined projects, the user-workload-monitoring-config ConfigMap object is created by default.
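
    You can confirm that this ConfigMap object exists by listing it:

    $ oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config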

  4. Verify that the prometheus-operator, prometheus-user-workload, and thanos-ruler-user-workload pods are running in the openshift-user-workload-monitoring project. It might take a short while for the pods to start:

    $ oc -n openshift-user-workload-monitoring get pod

    Example output

    NAME                                   READY   STATUS        RESTARTS   AGE
    prometheus-operator-6f7b748d5b-t7nbg   2/2     Running       0          3h
    prometheus-user-workload-0             4/4     Running       1          3h
    prometheus-user-workload-1             4/4     Running       1          3h
    thanos-ruler-user-workload-0           3/3     Running       0          3h
    thanos-ruler-user-workload-1           3/3     Running       0          3h

7.1.2.2. Granting users permission to configure monitoring for user-defined projects

As a cluster administrator, you can assign the user-workload-monitoring-config-edit role to a user. This grants the user permission to configure and manage monitoring for user-defined projects without granting permission to configure and manage core OpenShift Container Platform monitoring components.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • The user account that you are assigning the role to already exists.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring adm policy add-role-to-user \
      user-workload-monitoring-config-edit <user> \
      --role-namespace openshift-user-workload-monitoring
  2. Verify that the user is correctly assigned to the user-workload-monitoring-config-edit role by displaying the related role binding:

    $ oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring

    Example command

    $ oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring

    Example output

    Name:         user-workload-monitoring-config-edit
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind:  Role
      Name:  user-workload-monitoring-config-edit
    Subjects:
      Kind  Name  Namespace
      ----  ----  ---------
      User  user1           1

    1
    In this example, user1 is assigned to the user-workload-monitoring-config-edit role.

7.1.3. Enabling alert routing for user-defined projects

In OpenShift Container Platform, an administrator can enable alert routing for user-defined projects. This process consists of the following steps:

  • Enable alert routing for user-defined projects:

    • Use the default platform Alertmanager instance.
    • Use a separate Alertmanager instance only for user-defined projects.
  • Grant users permission to configure alert routing for user-defined projects.

After you complete these steps, developers and other users can configure custom alerts and alert routing for their user-defined projects.

7.1.3.1. Enabling the platform Alertmanager instance for user-defined alert routing

You can allow users to create user-defined alert routing configurations that use the main platform instance of Alertmanager.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config ConfigMap object:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add enableUserAlertmanagerConfig: true in the alertmanagerMain section under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        # ...
        alertmanagerMain:
          enableUserAlertmanagerConfig: true 1
        # ...
    1
    Set the enableUserAlertmanagerConfig value to true to allow users to create user-defined alert routing configurations that use the main platform instance of Alertmanager.
  3. Save the file to apply the changes. The new configuration is applied automatically.
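
With this setting enabled, users with the appropriate permissions can define alert routing for their projects by creating AlertmanagerConfig objects. The following sketch is illustrative only: the receiver name and webhook URL are placeholders, and the available API version might differ depending on your cluster version:

    apiVersion: monitoring.coreos.com/v1beta1
    kind: AlertmanagerConfig
    metadata:
      name: example-routing
      namespace: ns1
    spec:
      route:
        receiver: example-webhook
      receivers:
      - name: example-webhook
        webhookConfigs:
        - url: https://example.org/alerts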

7.1.3.2. Enabling a separate Alertmanager instance for user-defined alert routing

In some clusters, you might want to deploy a dedicated Alertmanager instance for user-defined projects, which can help reduce the load on the default platform Alertmanager instance and can better separate user-defined alerts from default platform alerts. In these cases, you can optionally enable a separate instance of Alertmanager to send alerts for user-defined projects only.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config ConfigMap object:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add enabled: true and enableAlertmanagerConfig: true in the alertmanager section under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        alertmanager:
          enabled: true 1
          enableAlertmanagerConfig: true 2
    1
    Set the enabled value to true to enable a dedicated instance of Alertmanager for user-defined projects in a cluster. Set the value to false or omit the key to disable the dedicated instance; in that case, user-defined alerts are routed to the default platform Alertmanager instance.
    2
    Set the enableAlertmanagerConfig value to true to enable users to define their own alert routing configurations with AlertmanagerConfig objects.
  3. Save the file to apply the changes. The dedicated instance of Alertmanager for user-defined projects starts automatically.

Verification

  • Verify that the user-workload Alertmanager instance has started:

    $ oc -n openshift-user-workload-monitoring get alertmanager

    Example output

    NAME            VERSION   REPLICAS   AGE
    user-workload   0.24.0    2          100s

7.1.3.3. Granting users permission to configure alert routing for user-defined projects

You can grant users permission to configure alert routing for user-defined projects.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have enabled monitoring for user-defined projects.
  • The user account that you are assigning the role to already exists.
  • You have installed the OpenShift CLI (oc).

Procedure

  • Assign the alert-routing-edit cluster role to a user in the user-defined project:

    $ oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1
    1
    For <namespace>, substitute the namespace for the user-defined project, such as ns1. For <user>, substitute the username for the account to which you want to assign the role.
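
    For example, the following command assigns the role to user1 in the ns1 project. Both names are illustrative:

    $ oc -n ns1 adm policy add-role-to-user alert-routing-edit user1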

Additional resources

Configuring alert notifications

7.1.4. Granting users permissions for monitoring for user-defined projects

As a cluster administrator, you can monitor all core OpenShift Container Platform and user-defined projects.

You can also grant developers and other users different permissions:

  • Monitoring user-defined projects
  • Configuring the components that monitor user-defined projects
  • Configuring alert routing for user-defined projects
  • Managing alerts and silences for user-defined projects

You can grant the permissions by assigning one of the following monitoring roles or cluster roles:

Table 7.2. Monitoring roles
user-workload-monitoring-config-edit (project: openshift-user-workload-monitoring)

    Users with this role can edit the user-workload-monitoring-config ConfigMap object to configure Prometheus, Prometheus Operator, Alertmanager, and Thanos Ruler for user-defined workload monitoring.

monitoring-alertmanager-api-reader (project: openshift-user-workload-monitoring)

    Users with this role have read access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled.

monitoring-alertmanager-api-writer (project: openshift-user-workload-monitoring)

    Users with this role have read and write access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled.

Table 7.3. Monitoring cluster roles
monitoring-rules-view (can be bound with a RoleBinding to any user project)

    Users with this cluster role have read access to PrometheusRule custom resources (CRs) for user-defined projects. They can also view the alerts and silences in the Developer perspective of the OpenShift Container Platform web console.

monitoring-rules-edit (can be bound with a RoleBinding to any user project)

    Users with this cluster role can create, modify, and delete PrometheusRule CRs for user-defined projects. They can also manage alerts and silences in the Developer perspective of the OpenShift Container Platform web console.

monitoring-edit (can be bound with a RoleBinding to any user project)

    Users with this cluster role have the same privileges as users with the monitoring-rules-edit cluster role. Additionally, users can create, read, modify, and delete ServiceMonitor and PodMonitor resources to scrape metrics from services and pods.

alert-routing-edit (can be bound with a RoleBinding to any user project)

    Users with this cluster role can create, update, and delete AlertmanagerConfig CRs for user-defined projects.

7.1.4.1. Granting user permissions by using the web console

You can grant users permissions for the openshift-monitoring project or their own projects by using the OpenShift Container Platform web console.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • The user account that you are assigning the role to already exists.

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, go to User Management → RoleBindings → Create binding.
  2. In the Binding Type section, select the Namespace Role Binding type.
  3. In the Name field, enter a name for the role binding.
  4. In the Namespace field, select the project where you want to grant the access.

    Important

    The monitoring role or cluster role permissions that you grant to a user by using this procedure apply only to the project that you select in the Namespace field.

  5. Select a monitoring role or cluster role from the Role Name list.
  6. In the Subject section, select User.
  7. In the Subject Name field, enter the name of the user.
  8. Select Create to apply the role binding.

7.1.4.2. Granting user permissions by using the CLI

You can grant users permissions for the openshift-monitoring project or their own projects by using the OpenShift CLI (oc).

Important

Whichever role or cluster role you choose, you must bind it against a specific project as a cluster administrator.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • The user account that you are assigning the role to already exists.
  • You have installed the OpenShift CLI (oc).

Procedure

  • To assign a monitoring role to a user for a project, enter the following command:

    $ oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1
    1
    Substitute <role> with the desired monitoring role, <user> with the user to whom you want to assign the role, and <namespace> with the project where you want to grant the access.
  • To assign a monitoring cluster role to a user for a project, enter the following command:

    $ oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1
    1
    Substitute <cluster-role> with the desired monitoring cluster role, <user> with the user to whom you want to assign the cluster role, and <namespace> with the project where you want to grant the access.
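
For example, the following illustrative commands grant user1 the monitoring-alertmanager-api-reader role in the openshift-user-workload-monitoring project and the monitoring-rules-edit cluster role in the ns1 project:

    $ oc adm policy add-role-to-user monitoring-alertmanager-api-reader user1 \
      -n openshift-user-workload-monitoring \
      --role-namespace openshift-user-workload-monitoring

    $ oc adm policy add-cluster-role-to-user monitoring-rules-edit user1 -n ns1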

7.1.5. Excluding a user-defined project from monitoring

Individual user-defined projects can be excluded from user workload monitoring. To do so, add the openshift.io/user-monitoring label to the project’s namespace with a value of false.

Procedure

  1. Add the label to the project namespace:

    $ oc label namespace my-project 'openshift.io/user-monitoring=false'
  2. To re-enable monitoring, remove the label from the namespace:

    $ oc label namespace my-project 'openshift.io/user-monitoring-'
    Note

    If there were any active monitoring targets for the project, it may take a few minutes for Prometheus to stop scraping them after adding the label.
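
Verification

  • To verify the current setting, display the labels on the namespace:

    $ oc get namespace my-project --show-labels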

7.1.6. Disabling monitoring for user-defined projects

After enabling monitoring for user-defined projects, you can disable it again by setting enableUserWorkload: false in the cluster monitoring ConfigMap object.

Note

Alternatively, you can remove enableUserWorkload: true to disable monitoring for user-defined projects.

Procedure

  1. Edit the cluster-monitoring-config ConfigMap object:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
    Set enableUserWorkload to false under data/config.yaml:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      data:
        config.yaml: |
          enableUserWorkload: false
  2. Save the file to apply the changes. Monitoring for user-defined projects is then disabled automatically.
  3. Check that the prometheus-operator, prometheus-user-workload, and thanos-ruler-user-workload pods are terminated in the openshift-user-workload-monitoring project. This might take a short while:

    $ oc -n openshift-user-workload-monitoring get pod

    Example output

    No resources found in openshift-user-workload-monitoring project.

Note

The user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project is not automatically deleted when monitoring for user-defined projects is disabled. This is to preserve any custom configurations that you may have created in the ConfigMap object.
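
If you do not want to keep the preserved configuration, you can delete the ConfigMap object manually:

    $ oc -n openshift-user-workload-monitoring delete configmap user-workload-monitoring-config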

7.2. Configuring performance and scalability for user workload monitoring

You can configure the monitoring stack to optimize the performance and scale of your clusters. The following documentation provides information about how to distribute the monitoring components and control the impact of the monitoring stack on CPU and memory resources.

7.2.1. Controlling the placement and distribution of monitoring components

You can move the monitoring stack components to specific nodes:

  • Use the nodeSelector constraint with labeled nodes to move any of the monitoring stack components to specific nodes.
  • Assign tolerations to enable moving components to tainted nodes.

By doing so, you control the placement and distribution of the monitoring components across a cluster.

By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies.

7.2.1.1. Moving monitoring components to different nodes

You can move any of the components that monitor workloads for user-defined projects to specific worker nodes.

Warning

You cannot move components to control plane or infrastructure nodes.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. If you have not done so yet, add a label to the nodes on which you want to run the monitoring components:

    $ oc label nodes <node_name> <node_label> 1
    1
    Replace <node_name> with the name of the node where you want to add the label. Replace <node_label> with the label that you want to add.
  2. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  3. Specify the node labels for the nodeSelector constraint for the component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        # ...
        <component>: 1
          nodeSelector:
            <node_label_1> 2
            <node_label_2> 3
        # ...
    1
    Substitute <component> with the appropriate monitoring stack component name.
    2
    Substitute <node_label_1> with the label you added to the node.
    3
    Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels.
    Note

    If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations.

  4. Save the file to apply the changes. The components specified in the new configuration are automatically moved to the new nodes, and the pods affected by the new configuration are redeployed.
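
For example, the following configuration schedules the Prometheus pods for user-defined projects onto nodes that carry a monitoring: "true" label. The label name and value are illustrative and must match a label that you added to your nodes, for example with oc label nodes <node_name> monitoring=true:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          nodeSelector:
            monitoring: "true"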

7.2.1.2. Assigning tolerations to monitoring components

You can assign tolerations to the components that monitor user-defined projects to enable moving them to tainted worker nodes. Scheduling is not permitted on control plane or infrastructure nodes.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Specify tolerations for the component:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        <component>:
          tolerations:
            <toleration_specification>

    Substitute <component> with the appropriate monitoring stack component name and <toleration_specification> with the toleration specification for the component.

    For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1. This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the thanosRuler component to tolerate the example taint:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        thanosRuler:
          tolerations:
          - key: "key1"
            operator: "Equal"
            value: "value1"
            effect: "NoSchedule"
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

7.2.2. Managing CPU and memory resources for monitoring components

You can ensure that the containers that run monitoring components have enough CPU and memory resources by specifying values for resource limits and requests for those components.

You can configure these limits and requests for monitoring components that monitor user-defined projects in the openshift-user-workload-monitoring namespace.

7.2.2.1. Specifying limits and requests

To configure CPU and memory resources, specify values for resource limits and requests in the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring namespace.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add values to define resource limits and requests for each component you want to configure.

    Important

    Ensure that the value set for a limit is always higher than the value set for a request. Otherwise, an error will occur, and the container will not run.

    Example of setting resource limits and requests

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        alertmanager:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 500Mi
        prometheus:
          resources:
            limits:
              cpu: 500m
              memory: 3Gi
            requests:
              cpu: 200m
              memory: 500Mi
        thanosRuler:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 500Mi

  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
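
After the pods are redeployed, you can spot-check the values that were applied. For example, the following command, which relies only on standard jsonpath output, prints the resource specification of the first container in each pod:

    $ oc -n openshift-user-workload-monitoring get pods \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources}{"\n"}{end}'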

7.2.3. Controlling the impact of unbound metrics attributes in user-defined projects

Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects:

  • Limit the number of samples that can be accepted per target scrape in user-defined projects
  • Limit the number of scraped labels, the length of label names, and the length of label values
  • Configure the intervals between consecutive scrapes and between Prometheus rule evaluations
  • Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped

Note

Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations.
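
As a hypothetical illustration, consider two label choices for the same counter. A label drawn from a small fixed set keeps the number of series bounded, whereas a label that carries unique identifiers creates a new series for every value it takes:

    http_requests_total{status_class="5xx"}    # bounded: at most a handful of series
    http_requests_total{user_id="8f1c2a"}      # unbounded: one series per user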

7.2.3.1. Setting scrape sample and label limits for user-defined projects

You can limit the number of samples that can be accepted per target scrape in user-defined projects. You can also limit the number of scraped labels, the length of label names, and the length of label values.

Warning

If you set sample or label limits, no further sample data is ingested for that target scrape after the limit is reached.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add the enforcedSampleLimit configuration to data/config.yaml to limit the number of samples that can be accepted per target scrape in user-defined projects:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          enforcedSampleLimit: 50000 1
    1
    A value is required if this parameter is specified. This enforcedSampleLimit example limits the number of samples that can be accepted per target scrape in user-defined projects to 50,000.
  3. Add the enforcedLabelLimit, enforcedLabelNameLengthLimit, and enforcedLabelValueLengthLimit configurations to data/config.yaml to limit the number of scraped labels, the length of label names, and the length of label values in user-defined projects:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          enforcedLabelLimit: 500 1
          enforcedLabelNameLengthLimit: 50 2
          enforcedLabelValueLengthLimit: 600 3
    1
    Specifies the maximum number of labels per scrape. The default value is 0, which specifies no limit.
    2
    Specifies the maximum length in characters of a label name. The default value is 0, which specifies no limit.
    3
    Specifies the maximum length in characters of a label value. The default value is 0, which specifies no limit.
  4. Save the file to apply the changes. The limits are applied automatically.
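
To see how close each target is to its enforced limit, you can run a query such as the following in the web console under Observe → Metrics. It uses the same scrape_sample_limit metric that the ApproachingEnforcedSamplesLimit alert in the next section relies on, and it returns a per-target ratio for targets that have a limit set:

    scrape_samples_post_metric_relabeling / (scrape_sample_limit > 0)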

7.2.3.2. Creating scrape sample alerts

You can create alerts that notify you when:

  • The target cannot be scraped or is not available for the specified for duration
  • A scrape sample threshold is reached or is exceeded for the specified for duration

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have limited the number of samples that can be accepted per target scrape in user-defined projects by using enforcedSampleLimit.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml:

    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      labels:
        prometheus: k8s
        role: alert-rules
      name: monitoring-stack-alerts 1
      namespace: ns1 2
    spec:
      groups:
      - name: general.rules
        rules:
        - alert: TargetDown 3
          annotations:
            message: '{{ printf "%.4g" $value }}% of the {{ $labels.job }}/{{ $labels.service
              }} targets in {{ $labels.namespace }} namespace are down.' 4
          expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job,
            namespace, service)) > 10
          for: 10m 5
          labels:
            severity: warning 6
        - alert: ApproachingEnforcedSamplesLimit 7
          annotations:
            message: '{{ $labels.container }} container of the {{ $labels.pod }} pod in the {{ $labels.namespace }} namespace consumes {{ $value | humanizePercentage }} of the samples limit budget.' 8
          expr: (scrape_samples_post_metric_relabeling / (scrape_sample_limit > 0)) > 0.9 9
          for: 10m 10
          labels:
            severity: warning 11
    1
    Defines the name of the alerting rule.
    2
    Specifies the user-defined project where the alerting rule is deployed.
    3
    The TargetDown alert fires if the target cannot be scraped and is not available for the for duration.
    4
    The message that is displayed when the TargetDown alert fires.
    5
    The conditions for the TargetDown alert must be true for this duration before the alert is fired.
    6
    Defines the severity for the TargetDown alert.
    7
    The ApproachingEnforcedSamplesLimit alert fires when the defined scrape sample threshold is exceeded and lasts for the specified for duration.
    8
    The message that is displayed when the ApproachingEnforcedSamplesLimit alert fires.
    9
    The threshold for the ApproachingEnforcedSamplesLimit alert. In this example, the alert fires when the number of ingested samples exceeds 90% of the configured limit.
    10
    The conditions for the ApproachingEnforcedSamplesLimit alert must be true for this duration before the alert is fired.
    11
    Defines the severity for the ApproachingEnforcedSamplesLimit alert.
  2. Apply the configuration to the user-defined project:

    $ oc apply -f monitoring-stack-alerts.yaml
  3. Additionally, you can check if a target has hit the configured limit:

    1. In the Administrator perspective of the web console, go to Observe → Targets and select an endpoint with a Down status that you want to check.

      The Scrape failed: sample limit exceeded message is displayed if the endpoint failed because of an exceeded sample limit.
