Chapter 8. Configuring pod topology spread constraints


You can configure pod topology spread constraints for all the pods of the monitoring components for user-defined projects to control how pod replicas are scheduled across nodes in different zones. This keeps the pods highly available and running efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.

You can configure pod topology spread constraints for monitoring pods by using the user-workload-monitoring-config config map.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add the following settings under the data/config.yaml field to configure pod topology spread constraints:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        <component>: 1
          topologySpreadConstraints:
          - maxSkew: <n> 2
            topologyKey: <key> 3
            whenUnsatisfiable: <value> 4
            labelSelector: 5
              <match_option>
    1
    Specify a name of the component for which you want to set up pod topology spread constraints.
    2
    Specify a numeric value for maxSkew, which defines the degree to which pods are allowed to be unevenly distributed.
    3
    Specify a key of node labels for topologyKey. Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler tries to put a balanced number of pods into each domain.
    4
    Specify a value for whenUnsatisfiable. Available options are DoNotSchedule and ScheduleAnyway. Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
    5
    Specify labelSelector to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.

    Example configuration for Thanos Ruler

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        thanosRuler:
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: monitoring
            whenUnsatisfiable: ScheduleAnyway
            labelSelector:
              matchLabels:
                app.kubernetes.io/name: thanos-ruler
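
    The following hedged variant enforces strict spreading of the Prometheus pods across individual nodes by using DoNotSchedule. The label value app.kubernetes.io/name: prometheus is an assumption; verify the labels on your prometheus-user-workload pods before using it.

    Example configuration for Prometheus

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: kubernetes.io/hostname
            whenUnsatisfiable: DoNotSchedule
            labelSelector:
              matchLabels:
                app.kubernetes.io/name: prometheus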

  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

8.1. Storing and recording data for user workload monitoring

Store and record your metrics and alerting data, configure logs to specify which activities are recorded, control how long Prometheus retains stored data, and set the maximum amount of disk space for the data. These actions help you protect your data and use it for troubleshooting.

8.1.1. Configuring persistent storage

Run cluster monitoring with persistent storage to gain the following benefits:

  • Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated.
  • Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted.

For production environments, it is highly recommended to configure persistent storage.

Important

In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability.

8.1.1.1. Persistent storage prerequisites

  • Dedicate sufficient persistent storage to ensure that the disk does not become full.
  • Use Filesystem as the storage type value for the volumeMode parameter when you configure the persistent volume.

    Important
    • Do not use a raw block volume, which is configured with volumeMode: Block in the PersistentVolume resource. Prometheus cannot use raw block volumes.
    • Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant.

8.1.1.2. Configuring a persistent volume claim

To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC).

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add your PVC configuration for the component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        <component>: 1
          volumeClaimTemplate:
            spec:
              storageClassName: <storage_class> 2
              resources:
                requests:
                  storage: <amount_of_storage> 3
    1
    Specify the monitoring component for which you want to configure the PVC.
    2
    Specify an existing storage class. If a storage class is not specified, the default storage class is used.
    3
    Specify the amount of required storage.

    The following example configures a PVC that claims persistent storage for Thanos Ruler:

    Example PVC configuration

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        thanosRuler:
          volumeClaimTemplate:
            spec:
              storageClassName: my-storage-class
              resources:
                requests:
                  storage: 10Gi

    Note

    Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates.

  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied.

    Warning

    When you update the config map with a PVC configuration, the affected StatefulSet object is recreated, resulting in a temporary service outage.
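
    After the pods are redeployed, you can confirm that the persistent volume claim was created and bound:

    $ oc -n openshift-user-workload-monitoring get pvc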


8.1.1.3. Resizing a persistent volume

You can resize a persistent volume (PV) for the instances of Prometheus, Thanos Ruler, and Alertmanager. You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured.

Important

You can only expand the size of the PVC. Shrinking the storage size is not possible.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have configured at least one PVC for components that monitor user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes.
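
    For example, the following hedged patch command assumes that the PVC is named thanos-ruler-thanos-ruler-user-workload-0; run oc -n openshift-user-workload-monitoring get pvc to find the actual name:

    $ oc -n openshift-user-workload-monitoring patch pvc thanos-ruler-thanos-ruler-user-workload-0 --patch '{"spec": {"resources": {"requests": {"storage": "20Gi"}}}}'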
  2. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  3. Add a new storage size for the PVC configuration for the component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        <component>: 1
          volumeClaimTemplate:
            spec:
              resources:
                requests:
                  storage: <amount_of_storage> 2
    1
    The component for which you want to change the storage size.
    2
    Specify the new size for the storage volume. It must be greater than the previous value.

    The following example sets the new PVC request to 20 gigabytes for Thanos Ruler:

    Example storage configuration for thanosRuler

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        thanosRuler:
          volumeClaimTemplate:
            spec:
              resources:
                requests:
                  storage: 20Gi

    Note

    Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates.

  4. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

    Warning

    When you update the config map with a new storage size, the affected StatefulSet object is recreated, resulting in a temporary service outage.

8.1.2. Modifying retention time and size for Prometheus metrics data

By default, Prometheus retains metrics data for 24 hours for monitoring for user-defined projects. You can modify the retention time for the Prometheus instance to change when the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses.

Note

Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize limit. In such cases, the KubePersistentVolumeFillingUp alert fires until the space on a PV is lower than the retentionSize limit.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add the retention time and size configuration under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          retention: <time_specification> 1
          retentionSize: <size_specification> 2
    1
    The retention time: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s.
    2
    The retention size: a number directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), or EB (exabytes).

    The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance:

    Example of setting retention time for Prometheus

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          retention: 24h
          retentionSize: 10GB

  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
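
    You can check the applied values on the Prometheus resource. This assumes the resource name user-workload, which is the default for user workload monitoring:

    $ oc -n openshift-user-workload-monitoring get prometheus user-workload -o jsonpath='{.spec.retention}{" "}{.spec.retentionSize}{"\n"}'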

8.1.2.1. Modifying the retention time for Thanos Ruler metrics data

By default, for user-defined projects, Thanos Ruler automatically retains metrics data for 24 hours. You can modify the retention time to change how long this data is retained by specifying a time value in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add the retention time configuration under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        thanosRuler:
          retention: <time_specification> 1
    1
    Specify the retention time in the following format: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s. The default is 24h.

    The following example sets the retention time to 10 days for Thanos Ruler data:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        thanosRuler:
          retention: 10d
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

8.1.3. Setting log levels for monitoring components

You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, and Thanos Ruler.

The following log levels can be applied to the relevant component in the user-workload-monitoring-config ConfigMap object:

  • debug. Log debug, informational, warning, and error messages.
  • info. Log informational, warning, and error messages.
  • warn. Log warning and error messages only.
  • error. Log error messages only.

The default log level is info.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add logLevel: <log_level> for a component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        <component>: 1
          logLevel: <log_level> 2
    1
    The monitoring stack component for which you are setting a log level. Available component values are prometheus, alertmanager, prometheusOperator, and thanosRuler.
    2
    The log level to set for the component. The available values are error, warn, info, and debug. The default value is info.
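
    For example, the following configuration sets the debug log level for the Prometheus Operator, which is the setting checked in the verification step that follows:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheusOperator:
          logLevel: debug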
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
  4. Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level for the prometheus-operator deployment:

    $ oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level"

    Example output

            - --log-level=debug

  5. Check that the pods for the component are running. The following example lists the status of pods:

    $ oc -n openshift-user-workload-monitoring get pods
    Note

    If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully.

8.1.4. Enabling the query log file for Prometheus

You can configure Prometheus to write all queries that have been run by the engine to a log file.

Important

Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add the queryLogFile parameter for Prometheus under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          queryLogFile: <path> 1
    1
    Add the full path to the file in which queries will be logged.
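
    For example, a hedged concrete setting; the path /tmp/promql-queries.log is illustrative and must point to a location that the Prometheus container can write to:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          queryLogFile: /tmp/promql-queries.log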
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
  4. Verify that the pods for the component are running. The following sample command lists the status of pods:

    $ oc -n openshift-user-workload-monitoring get pods

    Example output

    ...
    prometheus-operator-776fcbbd56-2nbfm   2/2     Running   0          132m
    prometheus-user-workload-0             5/5     Running   1          132m
    prometheus-user-workload-1             5/5     Running   1          132m
    thanos-ruler-user-workload-0           3/3     Running   0          132m
    thanos-ruler-user-workload-1           3/3     Running   0          132m
    ...

  5. Read the query log:

    $ oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>
    Important

    Revert the setting in the config map after you have examined the logged query information.

8.2. Configuring metrics for user workload monitoring

Configure the collection of metrics to monitor how cluster components and your own workloads are performing.

You can send ingested metrics to remote systems for long-term storage and add cluster ID labels to the metrics to identify the data coming from different clusters.


8.2.1. Configuring remote write storage

You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).
  • You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature.

    Important

    Red Hat only provides information for configuring remote write senders and does not offer guidance on configuring receiver endpoints. Customers are responsible for setting up their own endpoints that are remote-write compatible. Issues with endpoint receiver configurations are not included in Red Hat production support.

  • You have set up authentication credentials in a Secret object for the remote write endpoint. You must create the secret in the openshift-user-workload-monitoring namespace.

    Warning

    To reduce security risks, use HTTPS and authentication to send metrics to an endpoint.

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add a remoteWrite: section under data/config.yaml/prometheus, as shown in the following example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          remoteWrite:
          - url: "https://remote-write-endpoint.example.com" 1
            <endpoint_authentication_credentials> 2
    1
    The URL of the remote write endpoint.
    2
    The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings for sample configurations of supported authentication methods.
  3. Add write relabel configuration values after the authentication credentials:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          remoteWrite:
          - url: "https://remote-write-endpoint.example.com"
            <endpoint_authentication_credentials>
            writeRelabelConfigs:
            - <your_write_relabel_configs> 1
    1
    Add configuration for metrics that you want to send to the remote endpoint.

    Example of forwarding a single metric called my_metric

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          remoteWrite:
          - url: "https://remote-write-endpoint.example.com"
            writeRelabelConfigs:
            - sourceLabels: [__name__]
              regex: 'my_metric'
              action: keep

    Example of forwarding metrics called my_metric_1 and my_metric_2 in the my_namespace namespace

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          remoteWrite:
          - url: "https://remote-write-endpoint.example.com"
            writeRelabelConfigs:
            - sourceLabels: [__name__,namespace]
              regex: '(my_metric_1|my_metric_2);my_namespace'
              action: keep
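
    The drop action inverts this logic: matching series are excluded and everything else is forwarded. The following is a hedged variant; the metric name my_noisy_metric is illustrative.

    Example of excluding a single metric called my_noisy_metric

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          remoteWrite:
          - url: "https://remote-write-endpoint.example.com"
            writeRelabelConfigs:
            - sourceLabels: [__name__]
              regex: 'my_noisy_metric'
              action: drop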

  4. Save the file to apply the changes. The new configuration is applied automatically.

8.2.1.1. Supported remote write authentication settings

You can use different methods to authenticate with a remote write endpoint. Currently supported authentication methods are AWS Signature Version 4, Basic authentication, authorization, OAuth 2.0, and TLS client. The following entries provide details about each supported authentication method for use with remote write.

Authentication method: AWS Signature Version 4
Config map field: sigv4
Description: This method uses AWS Signature Version 4 authentication to sign requests. You cannot use this method simultaneously with authorization, OAuth 2.0, or Basic authentication.

Authentication method: Basic authentication
Config map field: basicAuth
Description: Basic authentication sets the authorization header on every remote write request with the configured username and password.

Authentication method: authorization
Config map field: authorization
Description: Authorization sets the Authorization header on every remote write request using the configured token.

Authentication method: OAuth 2.0
Config map field: oauth2
Description: An OAuth 2.0 configuration uses the client credentials grant type. Prometheus fetches an access token from tokenUrl with the specified client ID and client secret to access the remote write endpoint. You cannot use this method simultaneously with authorization, AWS Signature Version 4, or Basic authentication.

Authentication method: TLS client
Config map field: tlsConfig
Description: A TLS client configuration specifies the CA certificate, the client certificate, and the client key file information used to authenticate with the remote write endpoint server using TLS. The sample configuration assumes that you have already created a CA certificate file, a client certificate file, and a client key file.

8.2.1.2. Example remote write authentication settings

The following samples show different authentication settings you can use to connect to a remote write endpoint. Each sample also shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with monitoring for user-defined projects in the openshift-user-workload-monitoring namespace.

8.2.1.2.1. Sample YAML for AWS Signature Version 4 authentication

The following shows the settings for a sigv4 secret named sigv4-credentials in the openshift-user-workload-monitoring namespace.

apiVersion: v1
kind: Secret
metadata:
  name: sigv4-credentials
  namespace: openshift-user-workload-monitoring
stringData:
  accessKey: <AWS_access_key> 1
  secretKey: <AWS_secret_key> 2
type: Opaque
1
The AWS API access key.
2
The AWS API secret key.

The following shows sample AWS Signature Version 4 remote write authentication settings that use a Secret object named sigv4-credentials in the openshift-user-workload-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://authorization.example.com/api/write"
        sigv4:
          region: <AWS_region> 1
          accessKey:
            name: sigv4-credentials 2
            key: accessKey 3
          secretKey:
            name: sigv4-credentials 4
            key: secretKey 5
          profile: <AWS_profile_name> 6
          roleArn: <AWS_role_arn> 7
1
The AWS region.
2 4
The name of the Secret object containing the AWS API access credentials.
3
The key that contains the AWS API access key in the specified Secret object.
5
The key that contains the AWS API secret key in the specified Secret object.
6
The name of the AWS profile that is being used to authenticate.
7
The unique identifier for the Amazon Resource Name (ARN) assigned to your role.
8.2.1.2.2. Sample YAML for Basic authentication

The following shows sample Basic authentication settings for a Secret object named rw-basic-auth in the openshift-user-workload-monitoring namespace:

apiVersion: v1
kind: Secret
metadata:
  name: rw-basic-auth
  namespace: openshift-user-workload-monitoring
stringData:
  user: <basic_username> 1
  password: <basic_password> 2
type: Opaque
1
The username.
2
The password.

The following sample shows a basicAuth remote write configuration that uses a Secret object named rw-basic-auth in the openshift-user-workload-monitoring namespace. It assumes that you have already set up authentication credentials for the endpoint.

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://basicauth.example.com/api/write"
        basicAuth:
          username:
            name: rw-basic-auth 1
            key: user 2
          password:
            name: rw-basic-auth 3
            key: password 4
1 3
The name of the Secret object that contains the authentication credentials.
2
The key that contains the username in the specified Secret object.
4
The key that contains the password in the specified Secret object.
8.2.1.2.3. Sample YAML for authentication with a bearer token using a Secret object

The following shows bearer token settings for a Secret object named rw-bearer-auth in the openshift-user-workload-monitoring namespace:

apiVersion: v1
kind: Secret
metadata:
  name: rw-bearer-auth
  namespace: openshift-user-workload-monitoring
stringData:
  token: <authentication_token> 1
type: Opaque
1
The authentication token.

The following shows sample bearer token config map settings that use a Secret object named rw-bearer-auth in the openshift-user-workload-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
    prometheus:
      remoteWrite:
      - url: "https://authorization.example.com/api/write"
        authorization:
          type: Bearer 1
          credentials:
            name: rw-bearer-auth 2
            key: token 3
1
The authentication type of the request. The default value is Bearer.
2
The name of the Secret object that contains the authentication credentials.
3
The key that contains the authentication token in the specified Secret object.
8.2.1.2.4. Sample YAML for OAuth 2.0 authentication

The following shows sample OAuth 2.0 settings for a Secret object named oauth2-credentials in the openshift-user-workload-monitoring namespace:

apiVersion: v1
kind: Secret
metadata:
  name: oauth2-credentials
  namespace: openshift-user-workload-monitoring
stringData:
  id: <oauth2_id> 1
  secret: <oauth2_secret> 2
type: Opaque
1
The OAuth 2.0 ID.
2
The OAuth 2.0 secret.

The following shows an oauth2 remote write authentication sample configuration that uses a Secret object named oauth2-credentials in the openshift-user-workload-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://test.example.com/api/write"
        oauth2:
          clientId:
            secret:
              name: oauth2-credentials 1
              key: id 2
          clientSecret:
            name: oauth2-credentials 3
            key: secret 4
          tokenUrl: https://example.com/oauth2/token 5
          scopes: 6
          - <scope_1>
          - <scope_2>
          endpointParams: 7
            param1: <parameter_1>
            param2: <parameter_2>
1 3
The name of the corresponding Secret object. Note that clientId can alternatively refer to a ConfigMap object, although clientSecret must refer to a Secret object.
2 4
The key that contains the OAuth 2.0 credentials in the specified Secret object.
5
The URL used to fetch a token with the specified clientId and clientSecret.
6
The OAuth 2.0 scopes for the authorization request. These scopes limit what data the tokens can access.
7
The OAuth 2.0 authorization request parameters required for the authorization server.
8.2.1.2.5. Sample YAML for TLS client authentication

The following shows sample TLS client settings for a tls Secret object named mtls-bundle in the openshift-user-workload-monitoring namespace.

apiVersion: v1
kind: Secret
metadata:
  name: mtls-bundle
  namespace: openshift-user-workload-monitoring
data:
  ca.crt: <ca_cert> 1
  client.crt: <client_cert> 2
  client.key: <client_key> 3
type: tls
1
The CA certificate in the Prometheus container with which to validate the server certificate.
2
The client certificate for authentication with the server.
3
The client key.

The following sample shows a tlsConfig remote write authentication configuration that uses a TLS Secret object named mtls-bundle.

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        tlsConfig:
          ca:
            secret:
              name: mtls-bundle 1
              key: ca.crt 2
          cert:
            secret:
              name: mtls-bundle 3
              key: client.crt 4
          keySecret:
            name: mtls-bundle 5
            key: client.key 6
1 3 5
The name of the corresponding Secret object that contains the TLS authentication credentials. Note that ca and cert can alternatively refer to a ConfigMap object, though keySecret must refer to a Secret object.
2
The key in the specified Secret object that contains the CA certificate for the endpoint.
4
The key in the specified Secret object that contains the client certificate for the endpoint.
6
The key in the specified Secret object that contains the client key.

8.2.1.3. Example remote write queue configuration

You can use the queueConfig object for remote write to tune the remote write queue parameters. The following example shows the queue parameters with their default values for monitoring for user-defined projects in the openshift-user-workload-monitoring namespace.

Example configuration of remote write parameters with default values

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        <endpoint_authentication_credentials>
        queueConfig:
          capacity: 10000 1
          minShards: 1 2
          maxShards: 50 3
          maxSamplesPerSend: 2000 4
          batchSendDeadline: 5s 5
          minBackoff: 30ms 6
          maxBackoff: 5s 7
          retryOnRateLimit: false 8
          sampleAgeLimit: 0s 9

1
The number of samples to buffer per shard before they are dropped from the queue.
2
The minimum number of shards.
3
The maximum number of shards.
4
The maximum number of samples per send.
5
The maximum time for a sample to wait in the buffer.
6
The initial time to wait before retrying a failed request. The time gets doubled for every retry up to the maxBackoff time.
7
The maximum time to wait before retrying a failed request.
8
Set this parameter to true to retry a request after receiving a 429 status code from the remote write storage.
9
Samples that are older than the sampleAgeLimit value are dropped from the queue. If the value is undefined or set to 0s, the parameter is ignored.

8.2.2. Creating cluster ID labels for metrics

You can create cluster ID labels for metrics by adding the write_relabel settings for remote write storage in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace.

Note

When Prometheus scrapes user workload targets that expose a namespace label, the system stores this label as exported_namespace. This behavior ensures that the final namespace label value is equal to the namespace of the target pod. You cannot override this default configuration by setting the value of the honorLabels field to true for PodMonitor or ServiceMonitor objects.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
  • A cluster administrator has enabled monitoring for user-defined projects.
  • You have installed the OpenShift CLI (oc).
  • You have configured remote write storage.

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. In the writeRelabelConfigs: section under data/config.yaml/prometheus/remoteWrite, add cluster ID relabel configuration values:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          remoteWrite:
          - url: "https://remote-write-endpoint.example.com"
            <endpoint_authentication_credentials>
            writeRelabelConfigs: 1
              - <relabel_config> 2
    1
    Add a list of write relabel configurations for metrics that you want to send to the remote endpoint.
    2
    Substitute the label configuration for the metrics sent to the remote write endpoint.

    The following sample shows how to forward a metric with the cluster ID label cluster_id:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          remoteWrite:
          - url: "https://remote-write-endpoint.example.com"
            writeRelabelConfigs:
            - sourceLabels:
              - __tmp_openshift_cluster_id__ 1
              targetLabel: cluster_id 2
              action: replace 3
    1
    The system initially applies a temporary cluster ID source label named __tmp_openshift_cluster_id__. This temporary label gets replaced by the cluster ID label name that you specify.
    2
    Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use __tmp_openshift_cluster_id__. The final relabeling step removes labels that use this name.
    3
    The replace write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified.
  3. Save the file to apply the changes. The new configuration is applied automatically.
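
The temporary label is populated with your cluster ID. To look up that value, you can read it from the ClusterVersion resource:

$ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'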

8.2.3. Setting up metrics collection for user-defined projects

You can create a ServiceMonitor resource to scrape metrics from a service endpoint in a user-defined project. This assumes that your application uses a Prometheus client library to expose metrics to the /metrics canonical name.

This section describes how to deploy a sample service in a user-defined project and then create a ServiceMonitor resource that defines how that service should be monitored.

8.2.3.1. Deploying a sample service

To test monitoring of a service in a user-defined project, you can deploy a sample service.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace.

Procedure

  1. Create a YAML file for the service configuration. In this example, it is called prometheus-example-app.yaml.
  2. Add the following deployment and service configuration details to the file:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: ns1
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: prometheus-example-app
      name: prometheus-example-app
      namespace: ns1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: prometheus-example-app
      template:
        metadata:
          labels:
            app: prometheus-example-app
        spec:
          containers:
          - image: ghcr.io/rhobs/prometheus-example-app:0.4.2
            imagePullPolicy: IfNotPresent
            name: prometheus-example-app
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: prometheus-example-app
      name: prometheus-example-app
      namespace: ns1
    spec:
      ports:
      - port: 8080
        protocol: TCP
        targetPort: 8080
        name: web
      selector:
        app: prometheus-example-app
      type: ClusterIP

    This configuration deploys a service named prometheus-example-app in the user-defined ns1 project. This service exposes the custom version metric.

  3. Apply the configuration to the cluster:

    $ oc apply -f prometheus-example-app.yaml

    It takes some time to deploy the service.

  4. Check that the pod is running:

    $ oc -n ns1 get pod

    Example output

    NAME                                      READY     STATUS    RESTARTS   AGE
    prometheus-example-app-7857545cb7-sbgwq   1/1       Running   0          81m
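
    Optionally, you can inspect the raw metrics that the service exposes. The following hedged check forwards the service port locally and queries the /metrics endpoint; run the two commands in separate terminals:

    $ oc -n ns1 port-forward svc/prometheus-example-app 8080:8080
    $ curl http://localhost:8080/metrics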

8.2.3.2. Specifying how a service is monitored

To use the metrics exposed by your service, you must configure OpenShift Container Platform monitoring to scrape metrics from the /metrics endpoint. You can do this using a ServiceMonitor custom resource definition (CRD) that specifies how a service should be monitored, or a PodMonitor CRD that specifies how a pod should be monitored. The former requires a Service object, while the latter does not, allowing Prometheus to directly scrape metrics from the metrics endpoint exposed by a pod.
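
For comparison, a minimal PodMonitor sketch is shown below. It is illustrative only and assumes that the pod template declares a container port named web; the sample deployment in this chapter does not name its container port, so you would need to add that first.

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  podMetricsEndpoints:
  - interval: 30s
    port: web
    scheme: http
  selector:
    matchLabels:
      app: prometheus-example-app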

This procedure shows you how to create a ServiceMonitor resource for a service in a user-defined project.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or the monitoring-edit cluster role.
  • You have enabled monitoring for user-defined projects.
  • For this example, you have deployed the prometheus-example-app sample service in the ns1 project.

    Note

    The prometheus-example-app sample service does not support TLS authentication.

Procedure

  1. Create a new YAML configuration file named example-app-service-monitor.yaml.
  2. Add a ServiceMonitor resource to the YAML file. The following example creates a service monitor named prometheus-example-monitor to scrape metrics exposed by the prometheus-example-app service in the ns1 namespace:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: prometheus-example-monitor
      namespace: ns1 1
    spec:
      endpoints:
      - interval: 30s
        port: web 2
        scheme: http
      selector: 3
        matchLabels:
          app: prometheus-example-app
    1
    Specify a user-defined namespace where your service runs.
    2
    Specify endpoint ports to be scraped by Prometheus.
    3
    Configure a selector to match your service based on its metadata labels.
    Note

    A ServiceMonitor resource in a user-defined namespace can only discover services in the same namespace. That is, the namespaceSelector field of the ServiceMonitor resource is always ignored.

  3. Apply the configuration to the cluster:

    $ oc apply -f example-app-service-monitor.yaml

    It takes some time to deploy the ServiceMonitor resource.

  4. Verify that the ServiceMonitor resource is created:

    $ oc -n <namespace> get servicemonitor

    Example output

    NAME                         AGE
    prometheus-example-monitor   81m

8.2.3.3. Example service endpoint authentication settings

You can configure authentication for service endpoints for user-defined project monitoring by using ServiceMonitor and PodMonitor custom resource definitions (CRDs).

The following samples show different authentication settings for a ServiceMonitor resource. Each sample shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings.

8.2.3.3.1. Sample YAML authentication with a bearer token

The following sample shows bearer token settings for a Secret object named example-bearer-auth in the ns1 namespace:

Example bearer token secret

apiVersion: v1
kind: Secret
metadata:
  name: example-bearer-auth
  namespace: ns1
stringData:
  token: <authentication_token> 1

1
Specify an authentication token.

The following sample shows bearer token authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-bearer-auth:

Example bearer token authentication settings

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - authorization:
      credentials:
        key: token 1
        name: example-bearer-auth 2
    port: web
  selector:
    matchLabels:
      app: prometheus-example-app

1
The key that contains the authentication token in the specified Secret object.
2
The name of the Secret object that contains the authentication credentials.
Important

Do not use bearerTokenFile to configure a bearer token. If you use the bearerTokenFile configuration, the ServiceMonitor resource is rejected.

8.2.3.3.2. Sample YAML for Basic authentication

The following sample shows Basic authentication settings for a Secret object named example-basic-auth in the ns1 namespace:

Example Basic authentication secret

apiVersion: v1
kind: Secret
metadata:
  name: example-basic-auth
  namespace: ns1
stringData:
  user: <basic_username> 1
  password: <basic_password>  2

1
Specify a username for authentication.
2
Specify a password for authentication.

The following sample shows Basic authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-basic-auth:

Example Basic authentication settings

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - basicAuth:
      username:
        key: user 1
        name: example-basic-auth 2
      password:
        key: password 3
        name: example-basic-auth 4
    port: web
  selector:
    matchLabels:
      app: prometheus-example-app

1
The key that contains the username in the specified Secret object.
2 4
The name of the Secret object that contains the Basic authentication.
3
The key that contains the password in the specified Secret object.
8.2.3.3.3. Sample YAML authentication with OAuth 2.0

The following sample shows OAuth 2.0 settings for a Secret object named example-oauth2 in the ns1 namespace:

Example OAuth 2.0 secret

apiVersion: v1
kind: Secret
metadata:
  name: example-oauth2
  namespace: ns1
stringData:
  id: <oauth2_id> 1
  secret: <oauth2_secret> 2

1
Specify an OAuth 2.0 ID.
2
Specify an OAuth 2.0 secret.

The following sample shows OAuth 2.0 authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-oauth2:

Example OAuth 2.0 authentication settings

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - oauth2:
      clientId:
        secret:
          key: id 1
          name: example-oauth2 2
      clientSecret:
        key: secret 3
        name: example-oauth2 4
      tokenUrl: https://example.com/oauth2/token 5
    port: web
  selector:
    matchLabels:
      app: prometheus-example-app

1
The key that contains the OAuth 2.0 ID in the specified Secret object.
2 4
The name of the Secret object that contains the OAuth 2.0 credentials.
3
The key that contains the OAuth 2.0 secret in the specified Secret object.
5
The URL used to fetch a token with the specified clientId and clientSecret.

8.3. Configuring alerts and notifications for user workload monitoring

You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata information.
