Chapter 6. Using pod topology spread constraints for monitoring


You can use pod topology spread constraints to control how the pods for user-defined monitoring are spread across a network topology when OpenShift Dedicated pods are deployed in multiple availability zones.

Pod topology spread constraints are suitable for controlling pod scheduling within hierarchical topologies in which nodes are spread across different infrastructure levels, such as regions and zones within those regions. Additionally, by being able to schedule pods in different zones, you can reduce network latency in certain scenarios.
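
To see which topology labels the nodes in your cluster carry before you define a constraint, you can list them directly. The following command is a minimal sketch; it assumes that your nodes use the well-known topology.kubernetes.io/region and topology.kubernetes.io/zone labels, which is typical but not guaranteed on every cluster:

    $ oc get nodes -L topology.kubernetes.io/region -L topology.kubernetes.io/zone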

6.1. Configuring pod topology spread constraints

You can configure pod topology spread constraints for all the pods for user-defined monitoring to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.

You can configure pod topology spread constraints for monitoring pods by using the user-workload-monitoring-config config map.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • The user-workload-monitoring-config ConfigMap object exists. This object is created by default when the cluster is created.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add the following settings under the data/config.yaml field to configure pod topology spread constraints:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        <component>: 1
          topologySpreadConstraints:
          - maxSkew: <n> 2
            topologyKey: <key> 3
            whenUnsatisfiable: <value> 4
            labelSelector: 5
              <match_option>
    1
    Specify a name of the component for which you want to set up pod topology spread constraints.
    2
    Specify a numeric value for maxSkew, which defines the degree to which pods are allowed to be unevenly distributed. For example, a maxSkew of 1 means that the number of matching pods in any topology domain can exceed the global minimum by at most one.
    3
    Specify a key of node labels for topologyKey. Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler tries to put a balanced number of pods into each domain.
    4
    Specify a value for whenUnsatisfiable. Available options are DoNotSchedule and ScheduleAnyway. Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
    5
    Specify labelSelector to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.

    Example configuration for Thanos Ruler

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        thanosRuler:
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: monitoring
            whenUnsatisfiable: ScheduleAnyway
            labelSelector:
              matchLabels:
                app.kubernetes.io/name: thanos-ruler

  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
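  4. Optional: Verify that the constraints were applied by inspecting the workload that runs the component. The following sketch assumes the Thanos Ruler example above, where the pods belong to the thanos-ruler-user-workload stateful set; object names differ for other components:

    $ oc -n openshift-user-workload-monitoring get statefulset thanos-ruler-user-workload -o yaml | grep -A 7 topologySpreadConstraints

    You can also list the pods with their assigned nodes to confirm that replicas are spread across topology domains:

    $ oc -n openshift-user-workload-monitoring get pods -l app.kubernetes.io/name=thanos-ruler -o wide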

6.2. Setting log levels for monitoring components

You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, and Thanos Ruler.

The following log levels can be applied to the relevant component in the user-workload-monitoring-config ConfigMap objects:

  • debug. Log debug, informational, warning, and error messages.
  • info. Log informational, warning, and error messages.
  • warn. Log warning and error messages only.
  • error. Log error messages only.

The default log level is info.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • The user-workload-monitoring-config ConfigMap object exists. This object is created by default when the cluster is created.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the ConfigMap object:

    1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

      $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
    2. Add logLevel: <log_level> for a component under data/config.yaml:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          <component>: 1
            logLevel: <log_level> 2
      1
      The monitoring stack component for which you are setting a log level. For user workload monitoring, available component values are alertmanager, prometheus, prometheusOperator, and thanosRuler.
      2
      The log level to apply to the component. The available values are error, warn, info, and debug. The default value is info.
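
      Example configuration for Prometheus

      The following example is illustrative: it sets the debug level for the prometheus component, but any of the listed components and log levels can be substituted:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      data:
        config.yaml: |
          prometheus:
            logLevel: debug
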
  2. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
  3. Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level in the prometheus-operator deployment in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level"

    Example output

            - --log-level=debug

  4. Check that the pods for the component are running. The following example lists the status of pods in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring get pods
    Note

    If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully.
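
  5. Optional: Confirm the new verbosity by reading the component logs directly. The following sketch assumes that you set logLevel: debug for the prometheus component; the prometheus-user-workload-0 pod and prometheus container names are the defaults for user workload monitoring:

    $ oc -n openshift-user-workload-monitoring logs prometheus-user-workload-0 -c prometheus | grep "level=debug"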

6.3. Enabling the query log file for Prometheus

You can configure Prometheus to write all queries that have been run by the engine to a log file.

Important

Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature.

Prerequisites

  • You have access to the cluster as a user with the dedicated-admin role.
  • The user-workload-monitoring-config ConfigMap object exists. This object is created by default when the cluster is created.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
  2. Add queryLogFile: <path> for prometheus under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          queryLogFile: <path> 1
    1
    The full path to the file in which queries will be logged.
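
    Example configuration with an illustrative path

    The path in the following example is only an illustration; any location that is writable from inside the Prometheus container works, and a file under /tmp is a reasonable choice for a temporary troubleshooting file:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
        prometheus:
          queryLogFile: /tmp/promql-queries.log # illustrative path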
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
  4. Verify that the pods for the component are running. The following example command lists the status of pods in the openshift-user-workload-monitoring project:

    $ oc -n openshift-user-workload-monitoring get pods
  5. Read the query log:

    $ oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>
    Important

    Revert the setting in the config map after you have examined the logged query information.
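
  6. Optional: Because log rotation is not supported, check the size of the query log file periodically while the feature is enabled. The following sketch reuses the illustrative /tmp/promql-queries.log path from the example above; substitute your own path:

    $ oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- ls -lh /tmp/promql-queries.log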
