Chapter 4. Setting audit log levels for the Prometheus Adapter
In default platform monitoring, you can configure the audit log level for the Prometheus Adapter.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config ConfigMap object.
Procedure
You can set an audit log level for the Prometheus Adapter in the default openshift-monitoring project:
Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Add profile: in the k8sPrometheusAdapter/audit section under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    k8sPrometheusAdapter:
      audit:
        profile: <audit_log_level> 1

1 The audit log level to apply to the Prometheus Adapter.
Set the audit log level by using one of the following values for the profile: parameter (a full example follows this list):
- None: Do not log events.
- Metadata: Log only the metadata for the request, such as user, timestamp, and so forth. Do not log the request text and the response text. Metadata is the default audit log level.
- Request: Log event metadata and the request text but not the response text. This option does not apply for non-resource requests.
- RequestResponse: Log event metadata, the request text, and the response text. This option does not apply for non-resource requests.
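For example, to capture request bodies but not response bodies, you would set the profile to Request. The following is a minimal sketch of the resulting config map; any other settings you already keep under config.yaml remain in place alongside it:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    k8sPrometheusAdapter:
      audit:
        profile: Request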
Save the file to apply the changes. The pods for the Prometheus Adapter restart automatically when you apply the change.
Warning: When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted.
Verification
In the config map, under k8sPrometheusAdapter/audit/profile, set the log level to Request and save the file.

Confirm that the pods for the Prometheus Adapter are running. The following example lists the status of pods in the openshift-monitoring project:

$ oc -n openshift-monitoring get pods
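The full pod listing for the openshift-monitoring project is long. To narrow it to the adapter, you can filter the output with standard shell tools; a small sketch, assuming the default prometheus-adapter-<hash> pod naming:

$ oc -n openshift-monitoring get pods | grep prometheus-adapter

Each listed pod should report a Running status.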
Confirm that the audit log level and audit log file path are correctly configured:
$ oc -n openshift-monitoring get deploy prometheus-adapter -o yaml
Example output
...
- --audit-policy-file=/etc/audit/request-profile.yaml
- --audit-log-path=/var/log/adapter/audit.log
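Rather than scanning the full deployment manifest for these two lines, you can filter it down to the audit flags; a sketch using grep (the -e option keeps grep from treating the pattern's leading dashes as options):

$ oc -n openshift-monitoring get deploy prometheus-adapter -o yaml | grep -e '--audit'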
Confirm that the correct log level has been applied in the prometheus-adapter deployment in the openshift-monitoring project:

$ oc -n openshift-monitoring exec deploy/prometheus-adapter -c prometheus-adapter -- cat /etc/audit/request-profile.yaml
Example output
"apiVersion": "audit.k8s.io/v1" "kind": "Policy" "metadata": "name": "Request" "omitStages": - "RequestReceived" "rules": - "level": "Request"
Note: If you enter an unrecognized profile value for the Prometheus Adapter in the ConfigMap object, no changes are made to the Prometheus Adapter, and the Cluster Monitoring Operator logs an error.

Review the audit log for the Prometheus Adapter:
$ oc -n openshift-monitoring exec <prometheus_adapter_pod_name> -c prometheus-adapter -- cat /var/log/adapter/audit.log
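To watch new audit entries as requests arrive instead of dumping the whole file, you can follow the log; a sketch, assuming the tail utility is available in the adapter's container image:

$ oc -n openshift-monitoring exec <prometheus_adapter_pod_name> -c prometheus-adapter -- tail -f /var/log/adapter/audit.log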
Additional resources
- See Preparing to configure the monitoring stack for steps to create monitoring config maps.
4.1. Disabling the default Grafana deployment
By default, a read-only Grafana instance is deployed with a collection of dashboards displaying cluster metrics. The Grafana instance is not user-configurable.
You can disable the Grafana deployment, causing the associated resources to be deleted from the cluster. You might do this if you do not need these dashboards and want to conserve resources in your cluster. You will still be able to view metrics and dashboards included in the web console. Grafana can be safely enabled again at any time.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have created the cluster-monitoring-config ConfigMap object.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Add enabled: false for the grafana component under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    grafana:
      enabled: false
Save the file to apply the changes. The resources begin to be removed automatically when you apply the change.
Warning: This change results in some components, including Prometheus and the Thanos Querier, being restarted. This might lead to previously collected metrics being lost if you have not yet followed the steps in the "Configuring persistent storage" section.
Check that the Grafana pod is no longer running. The following example lists the status of pods in the openshift-monitoring project:

$ oc -n openshift-monitoring get pods
Note: It may take a few minutes after applying the change for these pods to terminate.
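A quick way to confirm the removal is to filter the pod listing; a sketch, assuming the default grafana-<hash> pod naming:

$ oc -n openshift-monitoring get pods | grep grafana

Empty output indicates that the Grafana pods have terminated.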
Additional resources
- See Preparing to configure the monitoring stack for steps to create monitoring config maps.
4.2. Disabling the local Alertmanager
A local Alertmanager that routes alerts from Prometheus instances is enabled by default in the openshift-monitoring project of the OpenShift Container Platform monitoring stack.

If you do not need the local Alertmanager, you can disable it by configuring the cluster-monitoring-config config map in the openshift-monitoring project.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role.
- You have created the cluster-monitoring-config config map.
- You have installed the OpenShift CLI (oc).
Procedure
Edit the cluster-monitoring-config config map in the openshift-monitoring project:

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Add enabled: false for the alertmanagerMain component under data/config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      enabled: false
Save the file to apply the changes. The Alertmanager instance is disabled automatically when you apply the change.
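This procedure does not define a verification step, but you can confirm the result by filtering the pod listing; a sketch, assuming the default alertmanager-main-<ordinal> pod naming:

$ oc -n openshift-monitoring get pods | grep alertmanager-main

Empty output indicates that the local Alertmanager pods have terminated.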
4.3. Next steps
- Enabling monitoring for user-defined projects
- Learn about remote health reporting and, if necessary, opt out of it.