About OpenShift logging
Introduction to OpenShift logging.
Abstract
Chapter 1. Red Hat OpenShift Logging overview
The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding.
1.1. Inputs and outputs
Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster:
- application
- receiver
- infrastructure
- audit
You can also define custom inputs based on namespaces or pod labels to fine-tune log selection.
Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings.
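As an illustrative sketch, a custom input that narrows log selection to one namespace and a pod label might look like the following (the input name `myapp`, the namespace `my-project`, and the label `app: myapp` are assumptions, using the `observability.openshift.io/v1` API):

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  inputs:
  - name: myapp                 # custom input name, referenced by pipelines as an inputRef
    type: application
    application:
      includes:
      - namespace: my-project   # collect application logs only from this namespace
      selector:
        matchLabels:
          app: myapp            # further narrow selection by pod label
```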
1.2. Receiver input type
The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: `http` and `syslog`.
The `ReceiverSpec` field defines the configuration for a receiver input.
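A minimal sketch of an `http` receiver input follows; the input name `http-in`, the port, and the payload format are illustrative assumptions, not required values:

```yaml
inputs:
- name: http-in
  type: receiver
  receiver:
    type: http               # or syslog
    port: 8443               # port the collector listens on for incoming logs
    http:
      format: kubeAPIAudit   # expected format of the incoming payloads
```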
1.3. Pipelines and filters
Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages.
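For instance, a pipeline that applies a `drop` filter before forwarding could be sketched as follows (the names `drop-debug`, `app-pipeline`, and the output `my-lokistack` are illustrative; the output is assumed to be defined elsewhere in the same CR):

```yaml
filters:
- name: drop-debug
  type: drop
  drop:
  - test:
    - field: .level
      matches: "debug"   # records matching this test are dropped
pipelines:
- name: app-pipeline
  inputRefs:
  - application
  filterRefs:
  - drop-debug           # filters run in order; dropped records never reach the outputs
  outputRefs:
  - my-lokistack
```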
1.4. Operator behavior
The Cluster Logging Operator manages the deployment and configuration of the collector based on the `managementState` field:
- When set to `Managed` (default), the Operator actively manages the logging resources to match the configuration defined in the spec.
- When set to `Unmanaged`, the Operator does not take any action, allowing you to manually manage the logging components.
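As a sketch, assuming the `managementState` field lives on the `ClusterLogForwarder` CR (as in the 6.x `observability.openshift.io/v1` API) and a CR named `collector`:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  managementState: Unmanaged   # stop active reconciliation; Managed is the default
  serviceAccount:
    name: collector
```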
1.5. Validation
Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios.
Chapter 2. Cluster logging support
Only the configuration options described in this documentation are supported for logging.
Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences.
If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to `Unmanaged`. An unmanaged logging instance is not supported and does not receive updates until you return its status to `Managed`.
Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries.
For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short-term storage, for a maximum of 30 days.
Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems.
Logging is not:
- Security Information and Event Management (SIEM) compliant
- A "bring your own" (BYO) log collector configuration
- Historical or long-term log retention or storage
- A guaranteed log sink
- Secure storage - audit logs are not stored by default
2.1. Supported API custom resource definitions
The following table describes the supported Logging APIs.
| CustomResourceDefinition (CRD) | API version | Support state |
|---|---|---|
| LokiStack | loki.grafana.com/v1 | Supported from 5.5 |
| RulerConfig | loki.grafana.com/v1 | Supported from 5.7 |
| AlertingRule | loki.grafana.com/v1 | Supported from 5.7 |
| RecordingRule | loki.grafana.com/v1 | Supported from 5.7 |
| LogFileMetricExporter | logging.openshift.io/v1alpha1 | Supported from 5.8 |
| ClusterLogForwarder | observability.openshift.io/v1 | Supported from 6.0 |
2.2. Unsupported configurations
You must set the Red Hat OpenShift Logging Operator to the `Unmanaged` state to modify the following components:
- The collector configuration file
- The collector daemonset
Explicitly unsupported cases include:
- Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector.
- Configuring how the log collector normalizes logs. You cannot modify default log normalization.
2.3. Support policy for unmanaged Operators
The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates.
While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades.
An Operator can be set to an unmanaged state using the following methods:
Individual Operator configuration
Individual Operators have a `managementState` parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource.

Changing the `managementState` parameter to `Unmanaged` means that the Operator is not actively managing its resources and takes no action related to the component. Some Operators might not support this management state, as it might damage the cluster and require manual recovery.

Warning: Changing individual Operators to the `Unmanaged` state renders that particular component and functionality unsupported. Reported issues must be reproduced in the `Managed` state for support to proceed.

Cluster Version Operator (CVO) overrides

The `spec.overrides` parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the `spec.overrides[].unmanaged` parameter to `true` for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set:

```
Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.
```

Warning: Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed.
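A minimal sketch of such a CVO override follows; the target resource (a Deployment named `cluster-logging-operator` in `openshift-logging`) is illustrative, not a recommendation:

```yaml
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  overrides:
  - kind: Deployment                 # type of the resource to stop managing
    group: apps                      # API group of that resource
    name: cluster-logging-operator   # illustrative target
    namespace: openshift-logging     # illustrative namespace
    unmanaged: true                  # CVO no longer reconciles this resource
```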
2.4. Collecting logging data for Red Hat Support
When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.
You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. For prompt support, supply diagnostic information for both OpenShift Container Platform and logging.
2.4.1. About the must-gather tool
The `oc adm must-gather` CLI command collects the information from your cluster that is most likely needed for debugging issues.
For your logging, must-gather collects the following information:
- Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level
- Cluster-level resources, including nodes, roles, and role bindings at the cluster level
- OpenShift Logging resources in the `openshift-logging` and `openshift-operators-redhat` namespaces, including health status for the log collector, the log store, and the log visualizer
When you run `oc adm must-gather`, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with `must-gather.local`. This directory is created in the current working directory.
2.4.2. Collecting logging data
You can use the `oc adm must-gather` CLI command to collect information about logging.
Procedure
To collect logging information with must-gather:
1. Navigate to the directory where you want to store the `must-gather` information.
2. Run the `oc adm must-gather` command against the logging image:

   ```
   $ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')
   ```

   The `must-gather` tool creates a new directory that starts with `must-gather.local` within the current directory. For example: `must-gather.local.4157245944708210408`.
3. Create a compressed file from the `must-gather` directory that was just created. For example, on a computer that uses a Linux operating system, run the following command:

   ```
   $ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408
   ```

4. Attach the compressed file to your support case on the Red Hat Customer Portal.
Chapter 3. Visualization for logging
Visualization for logging is provided by the Logging UI Plugin of the Cluster Observability Operator (COO), which requires installing that Operator.
Chapter 4. Quick start
Prerequisites
- You have access to an OpenShift Container Platform cluster with `cluster-admin` permissions.
- You have installed the OpenShift CLI (`oc`).
- You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation.
Procedure
1. Install the `Red Hat OpenShift Logging Operator`, `Loki Operator`, and `Cluster Observability Operator (COO)` from OperatorHub.
2. Create a secret to access an existing object storage bucket:

   Example command for AWS:

   ```
   $ oc create secret generic logging-loki-s3 \
     --from-literal=bucketnames="<bucket_name>" \
     --from-literal=endpoint="<aws_bucket_endpoint>" \
     --from-literal=access_key_id="<aws_access_key_id>" \
     --from-literal=access_key_secret="<aws_access_key_secret>" \
     --from-literal=region="<aws_region_of_your_bucket>" \
     -n openshift-logging
   ```

3. Create a `LokiStack` custom resource (CR) in the `openshift-logging` namespace:

   ```yaml
   apiVersion: loki.grafana.com/v1
   kind: LokiStack
   metadata:
     name: logging-loki
     namespace: openshift-logging
   spec:
     managementState: Managed
     size: 1x.extra-small
     storage:
       schemas:
       - effectiveDate: '2022-06-01'
         version: v13
       secret:
         name: logging-loki-s3
         type: s3
     storageClassName: gp3-csi
     tenants:
       mode: openshift-logging
   ```

4. Create a service account for the collector:

   ```
   $ oc create sa collector -n openshift-logging
   ```

5. Bind the `ClusterRole` to the service account:

   ```
   $ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging
   ```

6. Create a `UIPlugin` CR to enable the Log section in the Observe tab:

   ```yaml
   apiVersion: observability.openshift.io/v1alpha1
   kind: UIPlugin
   metadata:
     name: logging
   spec:
     type: Logging
     logging:
       lokiStack:
         name: logging-loki
   ```

7. Add additional roles to the collector service account:

   ```
   $ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging
   $ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging
   $ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging
   ```

8. Create a `ClusterLogForwarder` CR to configure log forwarding:

   ```yaml
   apiVersion: observability.openshift.io/v1
   kind: ClusterLogForwarder
   metadata:
     name: collector
     namespace: openshift-logging
   spec:
     serviceAccount:
       name: collector
     outputs:
     - name: default-lokistack
       type: lokiStack
       lokiStack:
         target:
           name: logging-loki
           namespace: openshift-logging
         authentication:
           token:
             from: serviceAccount
       tls:
         ca:
           key: service-ca.crt
           configMapName: openshift-service-ca.crt
     pipelines:
     - name: default-logstore
       inputRefs:
       - application
       - infrastructure
       outputRefs:
       - default-lokistack
   ```
Verification
- Verify that logs are visible in the Log section of the Observe tab in the OpenShift Container Platform web console.