Chapter 4. Setting audit log levels for the Prometheus Adapter


In default platform monitoring, you can configure the audit log level for the Prometheus Adapter.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config ConfigMap object.

Procedure

You can set an audit log level for the Prometheus Adapter in the default openshift-monitoring project:

  1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add profile: in the k8sPrometheusAdapter/audit section under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        k8sPrometheusAdapter:
          audit:
            profile: <audit_log_level> 1
    1 The audit log level to apply to the Prometheus Adapter.
  3. Set the audit log level by using one of the following values for the profile: parameter:

    • None: Do not log events.
    • Metadata: Log only the metadata for the request, such as user, timestamp, and so forth. Do not log the request text and the response text. Metadata is the default audit log level.
    • Request: Log only the metadata and the request text but not the response text. This option does not apply to non-resource requests.
    • RequestResponse: Log event metadata, request text, and response text. This option does not apply to non-resource requests.
  4. Save the file to apply the changes. The pods for the Prometheus Adapter restart automatically when you apply the change; you can watch the restart by using the command shown after this procedure.

    Warning

    When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
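
To watch the restart from step 4 as it happens, you can follow the rollout of the prometheus-adapter deployment. This uses the standard oc rollout status command, which returns once the new pods are available:

    $ oc -n openshift-monitoring rollout status deployment/prometheus-adapter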

Verification

  1. In the config map, under k8sPrometheusAdapter/audit/profile, set the log level to Request and save the file.
  2. Confirm that the pods for the Prometheus Adapter are running. The following example lists the status of pods in the openshift-monitoring project:

    $ oc -n openshift-monitoring get pods
  3. Confirm that the audit log level and audit log file path are correctly configured (a jsonpath variant of this check follows the procedure):

    $ oc -n openshift-monitoring get deploy prometheus-adapter -o yaml

    Example output

    ...
      - --audit-policy-file=/etc/audit/request-profile.yaml
      - --audit-log-path=/var/log/adapter/audit.log

  4. Confirm that the correct log level has been applied in the prometheus-adapter deployment in the openshift-monitoring project:

    $ oc -n openshift-monitoring exec deploy/prometheus-adapter -c prometheus-adapter -- cat /etc/audit/request-profile.yaml

    Example output

    "apiVersion": "audit.k8s.io/v1"
    "kind": "Policy"
    "metadata":
      "name": "Request"
    "omitStages":
    - "RequestReceived"
    "rules":
    - "level": "Request"

    Note

    If you enter an unrecognized profile value for the Prometheus Adapter in the ConfigMap object, no changes are made to the Prometheus Adapter, and the Cluster Monitoring Operator logs an error.

  5. Review the audit log for the Prometheus Adapter (a sketch of the log format follows this procedure):

    $ oc -n openshift-monitoring exec <prometheus_adapter_pod_name> -c prometheus-adapter -- cat /var/log/adapter/audit.log
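
Each line in this audit log is a JSON event in the standard Kubernetes audit format. The following line is an illustrative sketch of a Request-level event with placeholder values; the exact fields and values vary by request:

    {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Request","auditID":"<audit_id>","stage":"ResponseComplete","requestURI":"/apis/metrics.k8s.io/v1beta1/nodes","verb":"list","user":{"username":"<user>"},"responseStatus":{"code":200}}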
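
As a jsonpath variant of the check in step 3, you can extract only the container arguments instead of reading the full deployment YAML. The expression below is a sketch that assumes the container name prometheus-adapter shown in step 4:

    $ oc -n openshift-monitoring get deploy prometheus-adapter -o jsonpath='{.spec.template.spec.containers[?(@.name=="prometheus-adapter")].args}'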

4.1. Disabling the default Grafana deployment

By default, a read-only Grafana instance is deployed with a collection of dashboards displaying cluster metrics. The Grafana instance is not user-configurable.

You can disable the Grafana deployment, which deletes the associated resources from the cluster. You might do this if you do not need the dashboards and want to conserve resources in your cluster. You can still view metrics and dashboards included in the web console, and you can safely enable Grafana again at any time.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have created the cluster-monitoring-config ConfigMap object.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add enabled: false for the grafana component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        grafana:
          enabled: false
  3. Save the file to apply the changes. The Grafana resources are removed automatically when you apply the change.

    Warning

    This change results in some components, including Prometheus and the Thanos Querier, being restarted. This might lead to previously collected metrics being lost if you have not yet followed the steps in the "Configuring persistent storage" section.

  4. Check that the Grafana pod is no longer running (a filtered variant of this check follows the procedure). The following example lists the status of pods in the openshift-monitoring project:

    $ oc -n openshift-monitoring get pods

    Note

    It might take a few minutes for these pods to terminate after you apply the change.
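
To narrow the listing from step 4 to Grafana pods only, you can filter the output. This assumes that the Grafana pod names contain the string grafana, which matches the default naming:

    $ oc -n openshift-monitoring get pods | grep grafana

No output indicates that the Grafana pods have been removed.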

4.2. Disabling the local Alertmanager

A local Alertmanager that routes alerts from Prometheus instances is enabled by default in the openshift-monitoring project of the OpenShift Container Platform monitoring stack.

If you do not need the local Alertmanager, you can disable it by configuring the cluster-monitoring-config config map in the openshift-monitoring project.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role.
  • You have created the cluster-monitoring-config config map.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add enabled: false for the alertmanagerMain component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          enabled: false
  3. Save the file to apply the changes. The Alertmanager instance is disabled automatically when you apply the change.
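
To confirm that the local Alertmanager is no longer running, you can check for its pods. This assumes the default naming, in which the pod names contain alertmanager-main:

    $ oc -n openshift-monitoring get pods | grep alertmanager-main

No output indicates that the Alertmanager pods have been removed.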
