Logging
Configuring and using logging in OpenShift Container Platform
Abstract
Chapter 1. Logging 6.2
1.1. Support
Only the configuration options described in this documentation are supported for logging.
Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences.
If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed.
Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster.
Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems.
Logging is not:
- A high scale log collection system
- Security Information and Event Management (SIEM) compliant
- A "bring your own" (BYO) log collector configuration
- Historical or long-term log retention or storage
- A guaranteed log sink
- Secure storage - audit logs are not stored by default
1.1.1. Supported API custom resource definitions
The following table describes the supported Logging APIs.
| CustomResourceDefinition (CRD) | ApiVersion | Support state |
|---|---|---|
| LokiStack | lokistack.loki.grafana.com/v1 | Supported from 5.5 |
| RulerConfig | rulerconfig.loki.grafana.com/v1 | Supported from 5.7 |
| AlertingRule | alertingrule.loki.grafana.com/v1 | Supported from 5.7 |
| RecordingRule | recordingrule.loki.grafana.com/v1 | Supported from 5.7 |
| LogFileMetricExporter | logfilemetricexporter.logging.openshift.io/v1alpha1 | Supported from 5.8 |
| ClusterLogForwarder | clusterlogforwarder.observability.openshift.io/v1 | Supported from 6.0 |
1.1.2. Unsupported configurations
You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components:
- The collector configuration file
- The collector daemonset
Explicitly unsupported cases include:
- Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector.
- Configuring how the log collector normalizes logs. You cannot modify default log normalization.
1.1.3. Support policy for unmanaged Operators
The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates.
While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades.
An Operator can be set to an unmanaged state using the following methods:
Individual Operator configuration
Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource.

Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the component. Some Operators might not support this management state, as it might damage the cluster and require manual recovery.

Warning: Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in the Managed state for support to proceed.

Cluster Version Operator (CVO) overrides

The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set:

Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.

Warning: Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed.
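For illustration, a minimal sketch of such an override on the ClusterVersion resource; the placeholder object name and namespace are assumptions, not a recommendation:

apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  overrides:
  - kind: Deployment            # the kind of the CVO-managed object to stop managing
    group: apps                 # API group of that object
    name: <deployment_name>     # placeholder: the object's name
    namespace: <namespace>      # placeholder: the object's namespace
    unmanaged: true             # true blocks cluster upgrades and triggers the alert above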
1.1.4. Support exception for the Logging UI Plugin
Until the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), reaches General Availability (GA), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary because the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA.
1.1.5. Collecting logging data for Red Hat Support
When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.
You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. For prompt support, supply diagnostic information for both OpenShift Container Platform and logging.
1.1.5.1. About the must-gather tool
The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues.
For your logging, must-gather collects the following information:
- Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level
- Cluster-level resources, including nodes, roles, and role bindings at the cluster level
- OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer
When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory.
1.1.5.2. Collecting logging data
You can use the oc adm must-gather CLI command to collect information about logging.
Procedure
To collect logging information with must-gather:
- Navigate to the directory where you want to store the must-gather information.
- Run the oc adm must-gather command against the logging image:

$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')

  The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408.
- Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command:

$ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408

- Attach the compressed file to your support case on the Red Hat Customer Portal.
1.2. Logging 6.2
1.2.1. Logging 6.2.0 Release Notes
1.2.1.1. New Features and Enhancements
1.2.1.2. Technology Preview
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
1.2.1.3. Bug Fixes
1.2.1.4. CVEs
1.3. Logging 6.2
The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding.
1.3.1. Inputs and outputs
Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster:
- application
- receiver
- infrastructure
- audit
You can also define custom inputs based on namespaces or pod labels to fine-tune log selection.
Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings.
1.3.2. Receiver input type
The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog.
The ReceiverSpec field defines the configuration for a receiver input.
1.3.3. Pipelines and filters
Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages.
1.3.4. Operator behavior
The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource:
- When set to Managed (default), the Operator actively manages the logging resources to match the configuration defined in the spec.
- When set to Unmanaged, the Operator does not take any action, allowing you to manually manage the logging components.
1.3.5. Validation
Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios.
1.3.6. Quick start
OpenShift Logging supports two data models:
- ViaQ (General Availability)
- OpenTelemetry (Technology Preview)
You can select either of these data models based on your requirement by configuring the lokiStack.dataModel field in the ClusterLogForwarder. ViaQ is the default data model when forwarding logs to LokiStack.
In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry.
1.3.6.1. Quick start with ViaQ
To use the default ViaQ data model, follow these steps:
Prerequisites
- You have access to an OpenShift Container Platform cluster with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation.
Procedure
- Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from the software catalog.
- Create a LokiStack custom resource (CR) in the openshift-logging namespace, as in the sketch below.
  Note: Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see Secrets and TLS Configuration.
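A minimal sketch of such a LokiStack CR, assuming AWS S3 object storage, the gp3-csi storage class, and the 1x.extra-small size; adjust these values for your environment:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  managementState: Managed
  size: 1x.extra-small          # assumed sizing; choose a supported size for your cluster
  storage:
    schemas:
    - effectiveDate: "2024-10-01"
      version: v13
    secret:
      name: logging-loki-s3     # the object storage secret mentioned in the note above
      type: s3                  # assumed S3-compatible storage
  storageClassName: gp3-csi     # assumed storage class; use one available in your cluster
  tenants:
    mode: openshift-logging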
- Create a service account for the collector:

$ oc create sa collector -n openshift-logging
- Allow the collector's service account to write data to the LokiStack CR:

$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging

  Note: The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.
- To collect logs, use the service account of the collector by running the following commands:
$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging

$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging

$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging

  Note: The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment.
- Create a UIPlugin CR to enable the Log section in the Observe tab, for example:
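A minimal sketch of the UIPlugin CR, assuming the Logging plugin type and the logging-loki LokiStack created earlier:

apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki        # the LokiStack created in the earlier step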
- Create a ClusterLogForwarder CR to configure log forwarding, as in the sketch below.
  Note: The dataModel field is optional and left unset (dataModel: "") by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes.
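A sketch of a ClusterLogForwarder CR that forwards application and infrastructure logs to the LokiStack created above with the collector service account; the field layout assumes the observability.openshift.io/v1 API:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector             # the service account created earlier
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      authentication:
        token:
          from: serviceAccount
      target:
        name: logging-loki
        namespace: openshift-logging
      # dataModel: ViaQ         # optional; see the note above
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: default-logstore
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default-lokistack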
Verification
- Verify that logs are visible in the Log section of the Observe tab in the OpenShift Container Platform web console.
1.3.6.2. Quick start with OpenTelemetry
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps:
Prerequisites
- You have access to an OpenShift Container Platform cluster with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation.
Procedure
- Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from the software catalog.
- Create a LokiStack custom resource (CR) in the openshift-logging namespace. You can use the same LokiStack CR as in the ViaQ quick start.
  Note: Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration".
- Create a service account for the collector:
$ oc create sa collector -n openshift-logging

- Allow the collector's service account to write data to the LokiStack CR:
$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging

  Note: The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.
- To collect logs, use the service account of the collector by running the following commands:
$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging

$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging

$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging

  Note: The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment.
- Create a UIPlugin CR to enable the Log section in the Observe tab. You can use the same UIPlugin CR as in the ViaQ quick start.
- Create a ClusterLogForwarder CR to configure log forwarding, as in the sketch below.
  Note: You cannot use lokiStack.labelKeys when dataModel is Otel. To achieve similar functionality when dataModel is Otel, refer to "Configuring LokiStack for OTLP data ingestion".
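A sketch of a ClusterLogForwarder CR that selects the OpenTelemetry data model; the Technology Preview annotation name and the dataModel field placement are assumptions based on the observability.openshift.io/v1 API:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
  annotations:
    observability.openshift.io/tech-preview-otlp-output: "enabled"   # assumed annotation enabling the Technology Preview OTLP output
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: loki-otlp
    type: lokiStack
    lokiStack:
      target:
        name: logging-loki
        namespace: openshift-logging
      dataModel: Otel           # selects the OpenTelemetry data model
      authentication:
        token:
          from: serviceAccount
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: my-pipeline
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - loki-otlp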
Verification
To verify that OTLP is functioning correctly, complete the following steps:
- In the OpenShift web console, click Observe → OpenShift Logging → LokiStack → Writes.
- Check the Distributor - Structured Metadata section.
1.4. Configuring log forwarding
The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs.
Key Functions of the ClusterLogForwarder
- Selects log messages using inputs
- Forwards logs to external destinations using outputs
- Filters, transforms, and drops log messages using filters
- Defines log forwarding pipelines connecting inputs, filters and outputs
1.4.1. Setting up log collection
This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource.
The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.
Set up log collection by binding the required cluster roles to your service account.
1.4.1.1. Legacy service accounts
To use the existing legacy service account logcollector, create the following ClusterRoleBinding:
$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector

$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector
Additionally, create the following ClusterRoleBinding if collecting audit logs:
$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector
1.4.1.2. Creating service accounts
Prerequisites
- The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace.
- You have administrator permissions.
Procedure
- Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account.
Bind the appropriate cluster roles to the service account:
Example binding command
$ oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>
1.4.1.2.1. Cluster Role Binding for your Service Account
The role_binding.yaml file binds the Cluster Logging Operator's ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide.
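A sketch of what role_binding.yaml likely contains, reconstructed from the callouts below; the binding name is an assumption:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: manager-rolebinding            # assumed name of the binding
roleRef:                               # 1
  apiGroup: rbac.authorization.k8s.io  # 2
  kind: ClusterRole                    # 3
  name: cluster-logging-operator       # 4
subjects:                              # 5
- kind: ServiceAccount                 # 6
  name: cluster-logging-operator       # 7
  namespace: openshift-logging         # 8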
- 1
- roleRef: References the ClusterRole to which the binding applies.
- 2
- apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system.
- 3
- kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide.
- 4
- name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator.
- 5
- subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole.
- 6
- kind: Specifies that the subject is a ServiceAccount.
- 7
- Name: The name of the ServiceAccount being granted the permissions.
- 8
- namespace: Indicates the namespace where the ServiceAccount is located.
1.4.1.2.2. Writing application logs
The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application.
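A sketch of what write-application-logs-clusterrole.yaml likely contains, reconstructed from the callouts below; the ClusterRole name is an assumption:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-application-logs   # assumed name
rules:                       # 1
- apiGroups:                 # 2
  - loki.grafana.com         # 3
  resources:                 # 4
  - application              # 5
  resourceNames:             # 6
  - logs                     # 7
  verbs:                     # 8
  - create                   # 9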
- 1
- rules: Specifies the permissions granted by this ClusterRole.
- 2
- apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system.
- 3
- loki.grafana.com: The API group for managing Loki-related resources.
- 4
- resources: The resource type that the ClusterRole grants permission to interact with.
- 5
- application: Refers to the application resources within the Loki logging system.
- 6
- resourceNames: Specifies the names of resources that this role can manage.
- 7
- logs: Refers to the log resources that can be created.
- 8
- verbs: The actions allowed on the resources.
- 9
- create: Grants permission to create new logs in the Loki system.
1.4.1.2.3. Writing audit logs
The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system.
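A sketch of what write-audit-logs-clusterrole.yaml likely contains, reconstructed from the callouts below; the ClusterRole name is an assumption:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-audit-logs   # assumed name
rules:                       # 1
- apiGroups:                 # 2
  - loki.grafana.com         # 3
  resources:                 # 4
  - audit                    # 5
  resourceNames:             # 6
  - logs                     # 7
  verbs:                     # 8
  - create                   # 9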
- 1
- rules: Defines the permissions granted by this ClusterRole.
- 2
- apiGroups: Specifies the API group loki.grafana.com.
- 3
- loki.grafana.com: The API group responsible for Loki logging resources.
- 4
- resources: Refers to the resource type this role manages, in this case, audit.
- 5
- audit: Specifies that the role manages audit logs within Loki.
- 6
- resourceNames: Defines the specific resources that the role can access.
- 7
- logs: Refers to the logs that can be managed under this role.
- 8
- verbs: The actions allowed on the resources.
- 9
- create: Grants permission to create new audit logs.
1.4.1.2.4. Writing infrastructure logs
The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system.
Sample YAML
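A sketch of what write-infrastructure-logs-clusterrole.yaml likely contains, reconstructed from the callouts below; the ClusterRole name is an assumption:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-infrastructure-logs   # assumed name
rules:                       # 1
- apiGroups:                 # 2
  - loki.grafana.com         # 3
  resources:                 # 4
  - infrastructure           # 5
  resourceNames:             # 6
  - logs                     # 7
  verbs:                     # 8
  - create                   # 9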
- 1
- rules: Specifies the permissions this ClusterRole grants.
- 2
- apiGroups: Specifies the API group for Loki-related resources.
- 3
- loki.grafana.com: The API group managing the Loki logging system.
- 4
- resources: Defines the resource type that this role can interact with.
- 5
- infrastructure: Refers to infrastructure-related resources that this role manages.
- 6
- resourceNames: Specifies the names of resources this role can manage.
- 7
- logs: Refers to the log resources related to infrastructure.
- 8
- verbs: The actions permitted by this role.
- 9
- create: Grants permission to create infrastructure logs in the Loki system.
1.4.1.2.5. ClusterLogForwarder editor role
The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift.
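A sketch of what clusterlogforwarder-editor-role.yaml likely contains, reconstructed from the callouts below; the ClusterRole name is an assumption:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterlogforwarder-editor-role   # assumed name
rules:                             # 1
- apiGroups:                       # 2
  - observability.openshift.io     # 3
  resources:                       # 4
  - clusterlogforwarders           # 5
  verbs:                           # 6
  - create                         # 7
  - delete                         # 8
  - get                            # 9
  - list                           # 10
  - patch                          # 11
  - update                         # 12
  - watch                          # 13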
- 1
- rules: Specifies the permissions this ClusterRole grants.
- 2
- apiGroups: Refers to the OpenShift-specific API group.
- 3
- observability.openshift.io: The API group for managing observability resources, such as logging.
- 4
- resources: Specifies the resources this role can manage.
- 5
- clusterlogforwarders: Refers to the log forwarding resources in OpenShift.
- 6
- verbs: Specifies the actions allowed on the ClusterLogForwarders.
- 7
- create: Grants permission to create new ClusterLogForwarders.
- 8
- delete: Grants permission to delete existing ClusterLogForwarders.
- 9
- get: Grants permission to retrieve information about specific ClusterLogForwarders.
- 10
- list: Allows listing all ClusterLogForwarders.
- 11
- patch: Grants permission to partially modify ClusterLogForwarders.
- 12
- update: Grants permission to update existing ClusterLogForwarders.
- 13
- watch: Grants permission to monitor changes to ClusterLogForwarders.
1.4.2. Modifying log level in collector
To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to one of trace, debug, info, warn, error, or off.
Example log level annotation
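A minimal sketch showing the annotation on a ClusterLogForwarder CR; the resource name is an example:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
  annotations:
    observability.openshift.io/log-level: debug   # one of trace, debug, info, warn, error, off
spec:
  # ... the rest of the forwarder configuration is unchanged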
1.4.3. Managing the Operator
The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged:
- Managed
- (default) The operator will drive the logging resources to match the desired state in the CLF spec.
- Unmanaged
- The operator will not take any action related to the logging components.
This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged.
1.4.4. Structure of the ClusterLogForwarder
The CLF has a spec section that contains the following key components:
- Inputs
- Select log messages to be forwarded. The built-in input types application, infrastructure, and audit forward logs from different parts of the cluster. You can also define custom inputs.
application,infrastructureandauditforward logs from different parts of the cluster. You can also define custom inputs. - Outputs
- Define destinations to forward logs to. Each output has a unique name and type-specific configuration.
- Pipelines
- Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names.
- Filters
- Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline.
1.4.4.1. Inputs
Inputs are configured in an array under spec.inputs. There are three built-in input types:
- application
- Selects logs from all application containers, excluding those in infrastructure namespaces.
- infrastructure
- Selects logs from nodes and from infrastructure components running in the following namespaces:
  - default
  - kube
  - openshift
  - Namespaces containing the kube- or openshift- prefix
- audit
- Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd.
Users can define custom inputs of type application that select logs from specific namespaces or by using pod labels.
1.4.4.2. Outputs
Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are:
- azureMonitor
- Forwards logs to Azure Monitor.
- cloudwatch
- Forwards logs to AWS CloudWatch.
- elasticsearch
- Forwards logs to an external Elasticsearch instance.
- googleCloudLogging
- Forwards logs to Google Cloud Logging.
- http
- Forwards logs to a generic HTTP endpoint.
- kafka
- Forwards logs to a Kafka broker.
- loki
- Forwards logs to a Loki logging backend.
- lokistack
- Forwards logs to LokiStack, the logging-supported combination of Loki and a web proxy with OpenShift Container Platform authentication integration. The LokiStack proxy uses OpenShift Container Platform authentication to enforce multi-tenancy.
- otlp
- Forwards logs using the OpenTelemetry Protocol.
- splunk
- Forwards logs to Splunk.
- syslog
- Forwards logs to an external syslog server.
Each output type has its own configuration fields.
1.4.5. Configuring OTLP output
Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding.
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Procedure
Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation:

Example ClusterLogForwarder CR
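A sketch of such a CR, assuming the tech-preview annotation name and an otlp output with a URL placeholder:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: clf-otlp
  namespace: openshift-logging
  annotations:
    observability.openshift.io/tech-preview-otlp-output: "enabled"   # assumed annotation enabling the Technology Preview OTLP output
spec:
  serviceAccount:
    name: <service_account_name>
  outputs:
  - name: otlp
    type: otlp
    otlp:
      url: <otlp_endpoint_url>      # for example, an OTLP/HTTP logs endpoint
  pipelines:
  - name: otlp-logs
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - otlp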
The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework.
1.4.5.1. Pipelines
Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of:
- inputRefs
- Names of inputs whose logs should be forwarded to this pipeline.
- outputRefs
- Names of outputs to send logs to.
- filterRefs
- (optional) Names of filters to apply.
The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters.
1.4.5.2. Filters
Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them.
Administrators can configure the following types of filters:
1.4.6. Enabling multi-line exception detection
Enables multi-line error detection of container logs.
Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.
Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.
Example java exception
java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
at testjava.Main.handle(Main.java:47)
at testjava.Main.printMe(Main.java:19)
at testjava.Main.main(Main.java:10)
- To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder custom resource (CR) contains a detectMultilineErrors field under .spec.filters.
Example ClusterLogForwarder CR
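A minimal sketch, assuming the filter type detectMultilineException in the observability.openshift.io/v1 API:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: my-multiline-filter
    type: detectMultilineException   # reassembles stack traces into a single record
  pipelines:
  - name: my-pipeline
    inputRefs:
    - application
    filterRefs:
    - my-multiline-filter
    outputRefs:
    - <output_name>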
1.4.6.1. Details
When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.
The collector supports the following languages:
- Java
- JS
- Ruby
- Python
- Golang
- PHP
- Dart
1.4.7. Forwarding logs over HTTP
To enable forwarding logs over HTTP, specify http as the output type in the ClusterLogForwarder custom resource (CR).
Procedure
Create or edit the ClusterLogForwarder CR using the template below:

Example ClusterLogForwarder CR
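A sketch of an http output; the numbered comments correspond only roughly to the callouts below, and the field placement (for example, the authentication and TLS settings) is an assumption based on the observability.openshift.io/v1 API:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: openshift-logging
spec:
  outputs:
  - name: http-output
    type: http
    http:
      headers:                          # 1
        h1: v1
      proxyURL: <proxy_url>             # 2
      url: <destination_url>            # 3
      authentication:
        username:
          key: username
          secretName: <secret_name>     # 5
        password:
          key: password
          secretName: <secret_name>     # 5
    tls:
      insecureSkipVerify: false         # 4
  pipelines:
  - name: http-pipeline
    inputRefs:
    - application
    outputRefs:
    - http-output                       # 6
  serviceAccount:
    name: <service_account_name>        # 7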
- 1
- Additional headers to send with the log record.
- 2
- Optional: URL of the HTTP/HTTPS proxy that should be used to forward logs over http or https from this output. This setting overrides any default proxy settings for the cluster or the node.
- 3
- Destination address for logs.
- 4
- Values are either true or false.
- 5
- Secret name for destination credentials.
- 6
- This value should be the same as the output name.
- 7
- The name of your service account.
1.4.8. Forwarding logs using the syslog protocol
You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OpenShift Container Platform.
To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the ClusterLogForwarder CR object:
- 1
- Specify a name for the output.
- 2
- Optional: Specify the value for the APP-NAME part of the syslog message header. The value must conform with The Syslog Protocol. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value in curly brackets, and the value must be followed by a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: <value1>-{.<value2>||"none"}.
- 3
- Optional: Specify the value for the Facility part of the syslog-msg header.
- 4
- Optional: Specify the value for the MSGID part of the syslog-msg header. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 32 characters. You must encase a dynamic value in curly brackets, and the value must be followed by a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: <value1>-{.<value2>||"none"}.
- 5
- Optional: Specify the record field to use as the payload. The payloadKey value must be a single field path encased in single curly brackets {}. Example: {.<value>}.
- 6
- Optional: Specify the value for the PROCID part of the syslog message header. The value must conform with The Syslog Protocol. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value in curly brackets, and the value must be followed by a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: <value1>-{.<value2>||"none"}.
- 7
- Optional: Set the RFC that the generated messages conform to. The value can be RFC3164 or RFC5424.
- 8
- Optional: Set the severity level for the message. For more information, see The Syslog Protocol.
- 9
- Optional: Set the delivery mode for log forwarding. The value can be either AtLeastOnce or AtMostOnce.
- 10
- Specify the absolute URL with a scheme. Valid schemes are: tcp, tls, and udp. For example: tls://syslog-receiver.example.com:6514.
- 11
- Specify the settings for controlling options of the transport layer security (TLS) client connections.
- 12
- Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
- 13
- Specify a name for the pipeline.
- 14
- The name of your service account.
Create the CR object:
$ oc create -f <filename>.yaml
1.4.8.1. Adding log source information to the message output
You can add namespace_name, pod_name, and container_name elements to the message field of the record by adding the enrichment field to your ClusterLogForwarder custom resource (CR).
This configuration is compatible with both RFC3164 and RFC5424.
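A hedged fragment showing where the enrichment field might sit on a syslog output; the exact placement under the syslog stanza is an assumption:

outputs:
- name: rsyslog-east
  type: syslog
  syslog:
    url: tls://syslog-receiver.example.com:6514
    enrichment: KubernetesMinimal   # or None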
Example syslog message output with enrichment: None
2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: {...}
Example syslog message output with enrichment: KubernetesMinimal
2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...}
1.4.9. Configuring content filters to drop unwanted log records
When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration.
Procedure
Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions:

Example ClusterLogForwarder CR
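A sketch of a drop filter; the numbered comments correspond roughly to the callouts below, and the field names and regular expressions are examples only:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: drop-filter
    type: drop                               # 1
    drop:                                    # 2
    - test:                                  # 3
      - field: .kubernetes.namespace_name    # 4
        matches: "very-important"            # 5
      - field: .level
        notMatches: "warning|error"          # 6
  pipelines:
  - name: my-pipeline                        # 7
    inputRefs:
    - application
    filterRefs:
    - drop-filter
    outputRefs:
    - <output_name>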
- 1
- Specifies the type of filter. The drop filter drops log records that match the filter configuration.
- 2
- Specifies configuration options for applying the drop filter.
- 3
- Specifies the configuration for tests that are used to evaluate whether a log record is dropped.
- If all the conditions specified for a test are true, the test passes and the log record is dropped.
- When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped.
- If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false.
- 4
- Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alphanumeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied.
- 5
- Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both.
- 6
- Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both.
- 7
- Specifies the pipeline that the drop filter is applied to.
Apply the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
Additional examples
The following additional example shows how you can configure the drop filter to only keep higher priority log records:
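A sketch of such a filter under the same assumptions: records are dropped only when the message does not look critical and the severity is low, so higher priority records are kept:

filters:
- name: important
  type: drop
  drop:
  - test:
    - field: .message
      notMatches: "(?i)critical|error"   # message does not look critical
    - field: .level
      matches: "info|warning"            # and the severity is low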
In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true:
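A sketch illustrating the OR behavior under the same assumptions; the first test has a single condition, while the second test requires both of its conditions to be true:

filters:
- name: important
  type: drop
  drop:
  - test:                                # first test: one matching condition drops the record
    - field: .kubernetes.namespace_name
      matches: "^open"
  - test:                                # second test: both field specs must match
    - field: .log_type
      matches: "application"
    - field: .level
      matches: "error"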
1.4.10. Overview of API audit filter
OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field:
- None: The event is dropped.
- Metadata: Audit metadata is included, request and response bodies are removed.
- Request: Audit metadata and the request body are included, the response body is removed.
- RequestResponse: All data is included: metadata, request body, and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster.
The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions:
- Wildcards
- Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-* matches openshift-apiserver or openshift-authentication. Resource */status matches Pod/status or Deployment/status.
- Default Rules
Events that do not match any rule in the policy are filtered as follows:
- Read-only system events such as get, list, and watch are dropped.
- Service account write events that occur within the same namespace as the service account are dropped.
- All other events are forwarded, subject to any configured rate limits.
To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule.
- Omit Response Codes
- A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted.
The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site.
You must have the collect-audit-logs cluster role to collect the audit logs. The following example is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration.
Example audit policy
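A sketch of an audit policy filter, assuming the kubeAPIAudit filter type of the observability.openshift.io/v1 API; the rules shown are illustrative only:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: my-audit-policy
    type: kubeAPIAudit
    kubeAPIAudit:
      omitStages:
      - RequestReceived                    # do not generate events for the RequestReceived stage
      rules:
      - level: RequestResponse             # log pod changes with request and response bodies
        resources:
        - group: ""
          resources: ["pods"]
      - level: Metadata                    # log pod log and status access at the Metadata level
        resources:
        - group: ""
          resources: ["pods/log", "pods/status"]
      - level: None                        # drop authenticated requests to some non-resource URLs
        userGroups: ["system:authenticated"]
        nonResourceURLs: ["/api*", "/version"]
      - level: Metadata                    # catch-all rule: everything else at the Metadata level
  pipelines:
  - name: audit-pipeline
    inputRefs:
    - audit
    filterRefs:
    - my-audit-policy
    outputRefs:
    - <output_name>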
1.4.11. Filtering application logs at input by including the label expressions or a matching label key and values
You can include the application logs based on the label expressions or a matching label key and its values by using the input selector.
Procedure
Add a configuration for a filter to the input spec in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values:

Example ClusterLogForwarder CR
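A sketch of an application input with a label selector; the keys, values, and names are examples only:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs
    type: application
    application:
      selector:
        matchExpressions:
        - key: env                      # include pods whose env label is prod or qa
          operator: In
          values: ["prod", "qa"]
        - key: zone                     # and whose zone label is not east or west
          operator: NotIn
          values: ["east", "west"]
        matchLabels:                    # and that carry these exact labels
          app: one
          name: app1
  pipelines:
  - name: my-pipeline
    inputRefs:
    - mylogs
    outputRefs:
    - <output_name>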
Apply the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
1.4.12. Configuring content filters to prune log records
When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations.
Procedure
Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths:
Important: If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array.

Example ClusterLogForwarder CR
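A sketch of a prune filter; the numbered comments correspond roughly to the callouts below, and the field paths are examples only:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: prune-filter
    type: prune                         # 1
    prune:                              # 2
      in:                               # 3
      - .kubernetes.annotations
      - .kubernetes.namespace_id
      notIn:                            # 4
      - .kubernetes
      - .log_type
      - .message
  pipelines:
  - name: my-pipeline                   # 5
    inputRefs:
    - application
    filterRefs:
    - prune-filter
    outputRefs:
    - <output_name>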
- 1
- Specify the type of filter. The prune filter prunes log records by configured fields.
- 2
- Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alphanumeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz".
- 3
- Optional: Any fields that are specified in this array are removed from the log record.
- 4
- Optional: Any fields that are not specified in this array are removed from the log record.
- 5
- Specify the pipeline that the prune filter is applied to.
Note: The prune filter exempts the .log_type, .log_source, and .message fields.
Apply the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
1.4.13. Filtering the audit and infrastructure log inputs by source
You can define the list of audit and infrastructure sources to collect the logs by using the input selector.
Procedure
Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources:

Example ClusterLogForwarder CR
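A sketch of infrastructure and audit inputs restricted by source; the numbered comments correspond to the callouts below, and the input names are examples:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: infra-node
    type: infrastructure
    infrastructure:
      sources:                          # 1
      - node
  - name: audit-kubeapi
    type: audit
    audit:
      sources:                          # 2
      - kubeAPI
  pipelines:
  - name: my-pipeline
    inputRefs:
    - infra-node
    - audit-kubeapi
    outputRefs:
    - <output_name>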
- 1
- Specifies the list of infrastructure sources to collect. The valid sources include:
  - node: Journal log from the node
  - container: Logs from the workloads deployed in the namespaces
- 2
- Specifies the list of audit sources to collect. The valid sources include:
  - kubeAPI: Logs from the Kubernetes API servers
  - openshiftAPI: Logs from the OpenShift API servers
  - auditd: Logs from a node auditd service
  - ovn: Logs from an open virtual network service
Apply the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
1.4.14. Filtering application logs at input by including or excluding the namespace or container name
You can include or exclude the application logs based on the namespace and container name by using the input selector.
Procedure
Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names:

Example ClusterLogForwarder CR
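A sketch of an application input using includes and excludes; the namespace and container names are examples only:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs
    type: application
    application:
      includes:
      - namespace: "my-project"         # collect only from this namespace
        container: "my-container"       # and only from this container
      excludes:
      - namespace: "other-namespace"    # excludes take precedence over includes
        container: "other-container*"
  pipelines:
  - name: my-pipeline
    inputRefs:
    - mylogs
    outputRefs:
    - <output_name>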
Note: The excludes field takes precedence over the includes field.
Apply the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
1.5. Configuring the logging collector
Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. All supported modifications to the log collector are performed through the spec.collector stanza in the ClusterLogForwarder custom resource (CR).
1.5.1. Creating a LogFileMetricExporter resource
To generate metrics from the logs produced by running containers, you must create a LogFileMetricExporter custom resource (CR).
If you do not create the LogFileMetricExporter CR, you might see a No datapoints found message in the OpenShift Container Platform web console dashboard for Produced Logs.
Prerequisites
- You have administrator permissions.
- You have installed the Red Hat OpenShift Logging Operator.
- You have installed the OpenShift CLI (oc).
Procedure
Create a LogFileMetricExporter CR as a YAML file:

Example LogFileMetricExporter CR
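A minimal sketch of a LogFileMetricExporter CR; the resource requests and limits are placeholders, not documented defaults:

apiVersion: logging.openshift.io/v1alpha1
kind: LogFileMetricExporter
metadata:
  name: instance
  namespace: openshift-logging
spec:
  resources:
    requests:
      cpu: 200m          # placeholder values; size for your environment
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 256Mi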
Apply the LogFileMetricExporter CR by running the following command:

$ oc apply -f <filename>.yaml
1.5.2. Configure log collector CPU and memory limits
You can adjust the CPU and memory limits of the log collector.
Procedure
Edit the ClusterLogForwarder custom resource (CR):

$ oc -n openshift-logging edit ClusterLogForwarder <clusterlogforwarder_resource_name>
- 1
- Specify the CPU and memory limits and requests as needed. The values shown are the default values.
1.5.3. Configuring input receivers
The Red Hat OpenShift Logging Operator deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. For ClusterLogForwarder CR deployments, the service name is in the <clusterlogforwarder_resource_name>-<input_name> format.
1.5.3.1. Configuring the collector to receive audit logs as an HTTP server
You can configure your log collector to listen for HTTP connections to only receive audit logs by specifying http as a receiver input in the ClusterLogForwarder custom resource (CR).
HTTP receiver input is only supported for the following scenarios:
- Logging is installed on hosted control planes.
When logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:
- OpenShift Virtualization
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
- You have created a ClusterLogForwarder CR.
Procedure
Modify the ClusterLogForwarder CR to add configuration for the http receiver input:

Example ClusterLogForwarder CR
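A sketch of an http receiver input; the numbered comments correspond to the callouts below, and the format value kubeAPIAudit is an assumption for the kube-apiserver webhook format:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <clusterlogforwarder_resource_name>
  namespace: openshift-logging
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: http-receiver            # 1
    type: receiver
    receiver:
      type: http                   # 2
      port: 8443                   # 3
      http:
        format: kubeAPIAudit       # 4
  pipelines:                       # 5
  - name: http-pipeline
    inputRefs:
    - http-receiver
    outputRefs:
    - <output_name>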
- 1
- Specify a name for your input receiver.
- 2
- Specify the input receiver type as http.
- 3
- Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535. The default value is 8443.
- 4
- Currently, only the kube-apiserver webhook format is supported for http input receivers.
- 5
- Configure a pipeline for your input receiver.
Apply the changes to the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
Verification
Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

$ oc get svc

Example output

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
collector                 ClusterIP   172.30.85.239    <none>        24231/TCP   3m6s
collector-http-receiver   ClusterIP   172.30.205.160   <none>        8443/TCP    3m6s

In this example output, the service name is collector-http-receiver.
Extract the certificate authority (CA) certificate file by running the following command:
$ oc extract cm/openshift-service-ca.crt -n <namespace>

Use the curl command to send logs by running the following command:

$ curl --cacert <openshift_service_ca.crt> https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d '{"<prefix>":"<message>"}'

Replace <openshift_service_ca.crt> with the extracted CA certificate file.
Note: You can only forward logs within a cluster by following the verification steps.
1.5.3.2. Configuring the collector to listen for connections as a syslog server
You can configure your log collector to collect journal format infrastructure logs by specifying syslog as a receiver input in the ClusterLogForwarder custom resource (CR).
Syslog receiver input is only supported for the following scenarios:
- Logging is installed on hosted control planes.
When logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:
- Red Hat OpenStack Services on OpenShift (RHOSO)
- OpenShift Virtualization
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
- You have created a ClusterLogForwarder CR.
Procedure
Grant the collect-infrastructure-logs cluster role to the service account by running the following command:

Example binding command

$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logcollector

Modify the ClusterLogForwarder CR to add configuration for the syslog receiver input:

Example ClusterLogForwarder CR
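A minimal sketch follows, again assuming the observability.openshift.io/v1 input receiver API; the input, pipeline, and output names are placeholders.

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: logcollector
  inputs:
    - name: syslog-receiver
      type: receiver
      receiver:
        type: syslog
        port: 10514
  pipelines:
    - name: syslog-pipeline
      inputRefs:
        - syslog-receiver
      outputRefs:
        - <output_name>          # an output defined elsewhere in the CR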
Apply the changes to the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml

Verification

Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

$ oc get svc

Example output

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
collector                   ClusterIP   172.30.85.239    <none>        24231/TCP   33m
collector-syslog-receiver   ClusterIP   172.30.216.142   <none>        10514/TCP   2m20s

In this example output, the service name is collector-syslog-receiver.
1.6. Storing logs with LokiStack
You can configure a LokiStack custom resource (CR) to store application, audit, and infrastructure-related logs.
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster.
1.6.1. Loki deployment sizing
Sizing for Loki follows the format of 1x.<size>, where the value 1x is the number of instances and <size> specifies performance capabilities.
The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction.
Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs.
It is not possible to change the number 1x for the deployment size.
| | 1x.demo | 1x.pico [6.1+ only] | 1x.extra-small | 1x.small | 1x.medium |
|---|---|---|---|---|---|
| Data transfer | Demo use only | 50GB/day | 100GB/day | 500GB/day | 2TB/day |
| Queries per second (QPS) | Demo use only | 1-25 QPS at 200ms | 1-25 QPS at 200ms | 25-50 QPS at 200ms | 25-75 QPS at 200ms |
| Replication factor | None | 2 | 2 | 2 | 2 |
| Total CPU requests | None | 7 vCPUs | 14 vCPUs | 34 vCPUs | 54 vCPUs |
| Total CPU requests if using the ruler | None | 8 vCPUs | 16 vCPUs | 42 vCPUs | 70 vCPUs |
| Total memory requests | None | 17Gi | 31Gi | 67Gi | 139Gi |
| Total memory requests if using the ruler | None | 18Gi | 35Gi | 83Gi | 171Gi |
| Total disk requests | 40Gi | 590Gi | 430Gi | 430Gi | 590Gi |
| Total disk requests if using the ruler | 60Gi | 910Gi | 750Gi | 750Gi | 910Gi |
1.6.2. Prerequisites
- You have installed the Loki Operator by using the command-line interface (CLI) or web console.
- You have created a serviceAccount CR in the same namespace as the ClusterLogForwarder CR.
- You have assigned the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles to the serviceAccount CR.
1.6.3. Core setup and configuration
Use role-based access controls, basic monitoring, and pod placement to deploy Loki.
1.6.4. Authorizing LokiStack rules RBAC permissions
Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users.
The following cluster roles for alerting and recording rules are available for LokiStack:
| Rule name | Description |
|---|---|
| alertingrules.loki.grafana.com-v1-admin | Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch AlertingRule resources. |
| alertingrules.loki.grafana.com-v1-crdview | Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to AlertingRule resources, but cannot modify or manage the resources themselves. |
| alertingrules.loki.grafana.com-v1-edit | Users with this role have permission to create, update, and delete AlertingRule resources. |
| alertingrules.loki.grafana.com-v1-view | Users with this role can read AlertingRule resources. |
| recordingrules.loki.grafana.com-v1-admin | Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch RecordingRule resources. |
| recordingrules.loki.grafana.com-v1-crdview | Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to RecordingRule resources, but cannot modify or manage the resources themselves. |
| recordingrules.loki.grafana.com-v1-edit | Users with this role have permission to create, update, and delete RecordingRule resources. |
| recordingrules.loki.grafana.com-v1-view | Users with this role can read RecordingRule resources. |
1.6.4.1. Examples
To apply cluster roles for a user, you must bind an existing cluster role to a specific username.
Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster.
The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster:
Example cluster role binding command for alerting rule CRUD permissions in a specific namespace
$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>
The following command gives the specified user administrator permissions for alerting rules in all namespaces:
Example cluster role binding command for administrator permissions
$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>
1.6.5. Creating a log-based alerting rule with Loki
The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions:
- If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule.
- If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule.
- If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule.
- If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule.
- If none of the above applies, an alerting rule is considered valid.
| Tenant type | Valid namespaces for AlertingRule CRs |
|---|---|
| application | <your_application_namespace> |
| audit | openshift-logging |
| infrastructure | openshift-*, kube-*, default |
Procedure
Create an AlertingRule custom resource (CR):

Example infrastructure AlertingRule CR
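The following sketch assumes the loki.grafana.com/v1 AlertingRule API; the rule name, LogQL expression, and label values are illustrative only.

apiVersion: loki.grafana.com/v1
kind: AlertingRule
metadata:
  name: loki-operator-alerts
  namespace: openshift-operators-redhat   # 1
  labels:                                  # 2
    openshift.io/<label_name>: "true"
spec:
  tenantID: "infrastructure"               # 3
  groups:
    - name: LokiOperatorHighReconciliationError
      rules:
        - alert: HighPercentageError
          expr: |                          # 4
            sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job)
              /
            sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job)
              > 0.01
          for: 10s
          labels:
            severity: critical             # 5
          annotations:
            summary: High Loki Operator reconciliation errors       # 6
            description: High Loki Operator reconciliation errors   # 7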
1. The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition.
2. The labels block must match the LokiStack spec.rules.selector definition.
3. AlertingRule CRs for infrastructure tenants are only supported in the openshift-*, kube-*, or default namespaces.
4. The value for kubernetes_namespace_name: must match the value for metadata.namespace.
5. The value of this mandatory field must be critical, warning, or info.
6. This field is mandatory.
7. This field is mandatory.
Example application AlertingRule CR
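The following sketch uses the same assumed API; the namespace app-ns, the LogQL expression, and the alert values are placeholders.

apiVersion: loki.grafana.com/v1
kind: AlertingRule
metadata:
  name: app-user-workload
  namespace: app-ns            # 1
  labels:                      # 2
    openshift.io/<label_name>: "true"
spec:
  tenantID: "application"
  groups:
    - name: AppUserWorkloadHighError
      rules:
        - alert: HighPercentageError
          expr: |              # 3
            sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job)
              /
            sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"}[1m])) by (job)
              > 0.01
          for: 10s
          labels:
            severity: critical        # 4
          annotations:
            summary: <summary>        # 5
            description: <description>   # 6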
1. The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition.
2. The labels block must match the LokiStack spec.rules.selector definition.
3. The value for kubernetes_namespace_name: must match the value for metadata.namespace.
4. The value of this mandatory field must be critical, warning, or info.
5. The value of this mandatory field is a summary of the rule.
6. The value of this mandatory field is a detailed description of the rule.
Apply the AlertingRule CR by running the following command:

$ oc apply -f <filename>.yaml

1.6.6. Configuring Loki to tolerate memberlist creation failure
In an OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks.
As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command:
$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}'
Example LokiStack to include podIP
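A partial sketch of the resulting LokiStack CR, matching the patch above, might look like this:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  # ...other fields omitted
  hashRing:
    type: memberlist
    memberlist:
      instanceAddrType: podIP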
1.6.7. Enabling stream-based retention with Loki
You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules.
If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage.
Schema v13 is recommended.
Procedure
Create a LokiStack CR.

Enable stream-based retention globally as shown in the following example:

Example global stream-based retention for AWS
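The following sketch assumes the loki.grafana.com/v1 LokiStack limits API; the selectors and retention periods are illustrative.

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global:                       # 1
      retention:                  # 2
        days: 20
        streams:
          - days: 4
            priority: 1
            selector: '{kubernetes_namespace_name=~"test.+"}'   # 3
          - days: 1
            priority: 1
            selector: '{log_type="infrastructure"}'
  # ...size, storage, and tenants configuration omitted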
1. Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage.
2. Retention is enabled in the cluster when this block is added to the CR.
3. Contains the LogQL query used to define the log stream.
Enable stream-based retention on a per-tenant basis as shown in the following example:

Example per-tenant stream-based retention for AWS
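A partial sketch under the same assumptions, showing only the limits section:

spec:
  limits:
    global:
      retention:
        days: 20
    tenants:                      # 1
      application:
        retention:
          days: 1
          streams:
            - days: 4
              selector: '{kubernetes_namespace_name=~"test.+"}'   # 2
      infrastructure:
        retention:
          days: 5
          streams:
            - days: 1
              selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}'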
1. Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure.
2. Contains the LogQL query used to define the log stream.
Apply the LokiStack CR by running the following command:

$ oc apply -f <filename>.yaml

Note: This is not for managing the retention for stored logs. Global retention periods for stored logs, up to a supported maximum of 30 days, are configured with your object storage.

1.6.8. Loki pod placement
You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods.
You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node.
Example LokiStack CR with node selectors and tolerations
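The following sketch assumes the spec.template.<component> fields of the LokiStack API; only two components are shown, and the label and taint values are placeholders.

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  template:
    compactor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/infra
          value: reserved
        - effect: NoExecute
          key: node-role.kubernetes.io/infra
          value: reserved
    ingester:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/infra
          value: reserved
    # ...repeat for distributor, gateway, indexGateway, querier, queryFrontend, and ruler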
To configure the nodeSelector and tolerations fields of the LokiStack custom resource (CR), you can use the oc explain command to view the description and fields for a particular resource:

$ oc explain lokistack.spec.template
For more detailed information, you can add a specific field:

$ oc explain lokistack.spec.template.compactor
1.6.9. Enhanced reliability and performance
Use the following configurations to ensure reliability and efficiency of Loki in production.
1.6.10. Enabling authentication to cloud-based log stores using short-lived tokens
Workload identity federation enables authentication to cloud-based log stores using short-lived tokens.
Procedure
Use one of the following options to enable authentication:

- If you use the OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret.
- If you use the OpenShift CLI (oc) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated.

Example Azure subscription
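The following sketch assumes the standard OLM Subscription API; the channel and the environment variable values are placeholders that depend on your Azure account.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat
spec:
  channel: <channel>
  installPlanApproval: Automatic
  name: loki-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
      - name: CLIENTID
        value: <your_client_id>
      - name: TENANTID
        value: <your_tenant_id>
      - name: SUBSCRIPTIONID
        value: <your_subscription_id>
      - name: REGION
        value: <your_region>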
Example AWS subscription
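A sketch under the same assumptions, with ROLEARN as the assumed environment variable name:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat
spec:
  channel: <channel>
  installPlanApproval: Automatic
  name: loki-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
      - name: ROLEARN
        value: <role_arn>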
1.6.11. Configuring Loki to tolerate node failure
The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster.
Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node.
In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods.
The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components.
You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field:
Example user settings for the ingester component
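The following partial sketch assumes that podAntiAffinity is set per component under spec.template; the label selector is illustrative.

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  template:
    ingester:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app.kubernetes.io/component: ingester
            topologyKey: kubernetes.io/hostname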
1.6.12. LokiStack behavior during cluster restarts
When an OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions.
1.6.13. Advanced deployment and scalability
To configure high availability, scalability, and error handling, use the following information.
1.6.14. Zone aware data replication
The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2.
To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation.
Example LokiStack CR with zone replication enabled
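The following sketch assumes the replication fields of the LokiStack API; storage details are omitted and the size is a placeholder.

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  replicationFactor: 2          # 1
  replication:
    factor: 2                   # 2
    zones:
      - maxSkew: 1              # 3
        topologyKey: topology.kubernetes.io/zone   # 4
  size: 1x.small
  # ...storage and tenants configuration omitted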
1. Deprecated field, values entered are overwritten by replication.factor.
2. This value is automatically set when deployment size is selected at setup.
3. The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0.
4. Defines zones in the form of a topology key that corresponds to a node label.
1.6.15. Recovering Loki pods from failed zones
In OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss.
Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone.
The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating.
Prerequisites
- Verify your LokiStack CR has a replication factor greater than 1.
- Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration.
The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone.
Procedure
List the pods in Pending status by running the following command:

$ oc get pods --field-selector status.phase==Pending -n openshift-logging

Example oc get pods output

NAME                           READY   STATUS    RESTARTS   AGE
logging-loki-index-gateway-1   0/1     Pending   0          17m
logging-loki-ingester-1        0/1     Pending   0          16m
logging-loki-ruler-1           0/1     Pending   0          16m

These pods are in Pending status because their corresponding PVCs are in the failed zone.
List the PVCs in Pending status by running the following command:

$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r

Example oc get pvc output

storage-logging-loki-index-gateway-1
storage-logging-loki-ingester-1
wal-logging-loki-ingester-1
storage-logging-loki-ruler-1
wal-logging-loki-ruler-1
oc delete pvc <pvc_name> -n openshift-logging
$ oc delete pvc <pvc_name> -n openshift-loggingCopy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the pod(s) by running the following command:
oc delete pod <pod_name> -n openshift-logging
$ oc delete pod <pod_name> -n openshift-loggingCopy to Clipboard Copied! Toggle word wrap Toggle overflow Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone.
1.6.15.1. Troubleshooting PVC in a terminating state Link kopierenLink in die Zwischenablage kopiert!
The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers should allow the PVCs to delete successfully.
Remove the finalizer for each PVC by running the command below, then retry deletion.
oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-logging$ oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-loggingCopy to Clipboard Copied! Toggle word wrap Toggle overflow
1.6.16. Troubleshooting Loki rate limit errors Link kopierenLink in die Zwischenablage kopiert!
If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors.
These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention.
In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR).
The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers.
Conditions
- The Log Forwarder API is configured to forward logs to Loki.
Your system sends a block of messages that is larger than 2 MB to Loki.

After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages:

429 Too Many Requests Ingestion rate limit exceeded

Example Vector error message

2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true

The error is also visible on the receiving end. For example, in the LokiStack ingester pod:

Example Loki ingester error message

level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream
Procedure
Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR.
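A partial sketch of the LokiStack CR follows; the values are placeholders, not recommendations.

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global:
      ingestion:
        ingestionBurstSize: 16   # 1
        ingestionRate: 8         # 2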
1. The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted.
2. The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention.

1.7. OTLP data ingestion in Loki

You can ingest logs into Loki through an API endpoint that uses the OpenTelemetry Protocol (OTLP). Because OTLP is a standardized format that was not specifically designed for Loki, it requires additional Loki configuration to map the OpenTelemetry data format to the Loki data model. OTLP lacks concepts such as stream labels or structured metadata. Instead, OTLP provides metadata about log entries as attributes, grouped into the following three categories:
- Resource
- Scope
- Log
You can set metadata for multiple entries simultaneously or individually as needed.
1.7.1. Configuring LokiStack for OTLP data ingestion
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
To configure a LokiStack custom resource (CR) for OTLP ingestion, follow these steps:
Prerequisites
- Ensure that your Loki setup supports structured metadata, introduced in schema version 13 to enable OTLP log ingestion.
Procedure
Set the schema version:
When creating a new LokiStack CR, set version: v13 in the storage schema configuration.

Note: For existing configurations, add a new schema entry with version: v13 and an effectiveDate in the future. For more information on updating schema versions, see Upgrading Schemas (Grafana documentation).

Configure the storage schema as follows:

Example configure storage schema
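A partial sketch follows; the dates are placeholders.

spec:
  storage:
    schemas:
      - version: v12
        effectiveDate: "2024-03-10"
      - version: v13
        effectiveDate: "2024-10-15"   # a date in the future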
effectiveDatehas passed, the v13 schema takes effect, enabling yourLokiStackto store structured metadata.
1.7.2. Attribute mapping

When you set the Loki Operator to the openshift-logging mode, the Loki Operator automatically applies a default set of attribute mappings. These mappings align specific OTLP attributes with stream labels and structured metadata of Loki.
For typical setups, these default mappings are sufficient. However, you might need to customize attribute mapping in the following cases:
- Using a custom collector: If your setup includes a custom collector that generates additional attributes that you do not want to store, consider customizing the mapping to ensure these attributes are dropped by Loki.
- Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the logging process.
1.7.2.1. Custom attribute mapping for OpenShift

When using the Loki Operator in openshift-logging mode, attribute mappings follow OpenShift default values, but you can configure custom mappings to adjust default values. In the openshift-logging mode, you can configure custom attribute mappings globally for all tenants or for individual tenants as needed. When you define custom mappings, they are appended to the OpenShift default values. If you do not need default labels, you can disable them in the tenant configuration.
A major difference between the Loki Operator and Loki lies in inheritance handling. Loki copies only default_resource_attributes_as_index_labels to tenants by default, while the Loki Operator applies the entire global configuration to each tenant in the openshift-logging mode.
Within LokiStack, attribute mapping configuration is managed through the limits setting. You can use both global and per-tenant OTLP configurations for mapping attributes to stream labels. Stream labels derive only from resource-level attributes, which the LokiStack resource structure reflects.

You can drop attributes of type resource, scope, or log from the log entry, and you can use regular expressions by setting regex: true to apply a configuration to attributes with similar names. Avoid using regular expressions for stream labels, as this can increase data volume.

Attributes that are not explicitly set as stream labels or dropped from the entry are saved as structured metadata by default. See the sketch that follows.
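The following partial sketch illustrates the idea; the exact field names under otlp are assumptions and should be checked against the LokiStack API for your version.

spec:
  limits:
    global:                         # applies to all tenants
      otlp:
        streamLabels:
          resourceAttributes:
            - name: "k8s.namespace.name"
            - name: "k8s.pod.name"
        drop:                       # drop attributes instead of storing them
          resourceAttributes:
            - name: "process.command_line"
          logAttributes:
            - name: "k8s.pod.annotations.*"
              regex: true           # regex is acceptable here, but avoid it for stream labels
    tenants:
      application:                  # per-tenant overrides
        otlp:
          streamLabels:
            resourceAttributes:
              - name: "service.name"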
1.7.2.2. Customizing OpenShift defaults

In the openshift-logging mode, certain attributes are required and cannot be removed from the configuration due to their role in OpenShift functions. Other attributes, labeled recommended, might be dropped if performance is impacted. For information about the attributes, see OpenTelemetry data model attributes.

When using the openshift-logging mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or some attributes need to be dropped, use custom configuration. Custom configurations can merge with default configurations.

1.7.2.3. Removing recommended attributes

To reduce default attributes in the openshift-logging mode, disable recommended attributes:
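The following partial sketch assumes that the setting lives under spec.tenants.openshift.otlp; verify the exact location against the LokiStack API for your version.

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  tenants:
    mode: openshift-logging
    openshift:
      otlp:
        disableRecommendedAttributes: true   # 1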
1. Set disableRecommendedAttributes: true to remove recommended attributes, which limits default attributes to the required attributes or stream labels.

Note: This setting might negatively impact query performance, as it removes default stream labels. You must pair this option with a custom attribute configuration to retain attributes essential for queries.
1.8. Visualization for logging
Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation.
Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA.
Chapter 2. Logging 6.1

2.1. Support
Only the configuration options described in this documentation are supported for logging.
Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences.
If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed.
Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster.
Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems.
Logging is not:
- A high scale log collection system
- Security Information and Event Monitoring (SIEM) compliant
- A "bring your own" (BYO) log collector configuration
- Historical or long term log retention or storage
- A guaranteed log sink
- Secure storage - audit logs are not stored by default
2.1.1. Supported API custom resource definitions
The following table describes the supported Logging APIs.
| CustomResourceDefinition (CRD) | ApiVersion | Support state |
|---|---|---|
| LokiStack | lokistack.loki.grafana.com/v1 | Supported from 5.5 |
| RulerConfig | rulerconfig.loki.grafana/v1 | Supported from 5.7 |
| AlertingRule | alertingrule.loki.grafana/v1 | Supported from 5.7 |
| RecordingRule | recordingrule.loki.grafana/v1 | Supported from 5.7 |
| LogFileMetricExporter | LogFileMetricExporter.logging.openshift.io/v1alpha1 | Supported from 5.8 |
| ClusterLogForwarder | clusterlogforwarder.observability.openshift.io/v1 | Supported from 6.0 |
2.1.2. Unsupported configurations
You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components:
- The collector configuration file
- The collector daemonset
Explicitly unsupported cases include:
- Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector.
- Configuring how the log collector normalizes logs. You cannot modify default log normalization.
2.1.3. Support policy for unmanaged Operators
The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates.
While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades.
An Operator can be set to an unmanaged state using the following methods:
Individual Operator configuration
Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource.

Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and takes no action related to the component. Some Operators might not support this management state as it might damage the cluster and require manual recovery.

Warning: Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed.

Cluster Version Operator (CVO) overrides

The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set:

Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.

Warning: Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed.
2.1.4. Support exception for the Logging UI Plugin
Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA.
2.1.5. Collecting logging data for Red Hat Support
When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.
You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. For prompt support, supply diagnostic information for both OpenShift Container Platform and logging.
2.1.5.1. About the must-gather tool
The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues.
For your logging, must-gather collects the following information:
- Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level
- Cluster-level resources, including nodes, roles, and role bindings at the cluster level
- OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer

When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory.

2.1.5.2. Collecting logging data
You can use the oc adm must-gather CLI command to collect information about logging.
Procedure
To collect logging information with must-gather:
- Navigate to the directory where you want to store the must-gather information.
- Run the oc adm must-gather command against the logging image:

$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')

The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408.
- Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command:

$ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408

- Attach the compressed file to your support case on the Red Hat Customer Portal.
2.2. Logging 6.1

2.2.1. Logging 6.1.2 Release Notes

This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.2.

2.2.1.1. New Features and Enhancements

- This enhancement adds OTel semantic stream labels to the lokiStack output so that you can query logs by using both ViaQ and OTel stream labels. (LOG-6579)

2.2.1.2. Bug Fixes
- Before this update, the collector alerting rules contained summary and message fields. With this update, the collector alerting rules contain summary and description fields. (LOG-6126)
- Before this update, the collector metrics dashboard could get removed after an Operator upgrade due to a race condition during the transition from the old to the new pod deployment. With this update, labels are added to the dashboard ConfigMap to identify the upgraded deployment as the current owner so that it will not be removed. (LOG-6280)
- Before this update, when you included infrastructure namespaces in application inputs, their log_type would be set to application. With this update, the log_type of infrastructure namespaces included in application inputs is set to infrastructure. (LOG-6373)
- Before this update, the Cluster Logging Operator used a cached client to fetch the SecurityContextConstraint cluster resource, which could result in an error when the cache is invalid. With this update, the Operator now always retrieves data from the API server instead of using a cache. (LOG-6418)
- Before this update, the logging must-gather did not collect resources such as UIPlugin, ClusterLogForwarder, LogFileMetricExporter, and LokiStack. With this update, the must-gather now collects all of these resources and places them in their respective namespace directory instead of the cluster-logging directory. (LOG-6422)
- Before this update, the Vector startup script attempted to delete buffer lock files during startup. With this update, the Vector startup script no longer attempts to delete buffer lock files during startup. (LOG-6506)
- Before this update, the API documentation incorrectly claimed that lokiStack outputs would default the target namespace, which could prevent the collector from writing to that output. With this update, this claim has been removed from the API documentation and the Cluster Logging Operator now validates that a target namespace is present. (LOG-6573)
- Before this update, the Cluster Logging Operator could deploy the collector with output configurations that were not referenced by any inputs. With this update, a validation check for the ClusterLogForwarder resource prevents the Operator from deploying the collector. (LOG-6585)
2.2.1.3. CVEs

2.2.2. Logging 6.1.1 Release Notes

This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.1.

2.2.2.1. New Features and Enhancements

- With this update, the Loki Operator supports configuring the workload identity federation on the Google Cloud by using the Cluster Credential Operator (CCO) in OpenShift Container Platform 4.17 or later. (LOG-6420)

2.2.2.2. Bug Fixes
- Before this update, the collector was discarding longer audit log messages with the following error message: Internal log [Found line that exceeds max_line_bytes; discarding.]. With this update, the discarding of longer audit messages is avoided by increasing the audit configuration thresholds: The maximum line size, max_line_bytes, is 3145728 bytes. The maximum number of bytes read during a read cycle, max_read_bytes, is 262144 bytes. (LOG-6379)
- Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. (LOG-6383)
- Before this update, pipeline validation might have entered an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. (LOG-6405)
- Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. (LOG-6407)
- Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. (LOG-6449)
- Before this update, the ValidLokistackOTLPOutputs condition appeared in the status of the ClusterLogForwarder custom resource even when the output type is not LokiStack. With this update, the ValidLokistackOTLPOutputs condition is removed, and the validation messages for the existing output conditions are corrected. (LOG-6469)
- Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. (LOG-6484)
- Before this update, the must-gather script of the Red Hat OpenShift Logging Operator might have failed to gather the LokiStack data. With this update, the must-gather script is fixed, and the LokiStack data is gathered reliably. (LOG-6498)
- Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. (LOG-6533)
2.2.2.3. CVEs

2.2.3. Logging 6.1.0 Release Notes

This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.0.

2.2.3.1. New Features and Enhancements

2.2.3.1.1. Log Collection
- This enhancement adds the source iostream to the attributes sent from collected container logs. The value is set to either stdout or stderr based on how the collector received it. (LOG-5292)
- With this update, the default memory limit for the collector increases from 1024 Mi to 2048 Mi. Users should adjust resource limits based on their cluster's specific needs and specifications. (LOG-6072)
- With this update, users can now set the syslog output delivery mode of the ClusterLogForwarder CR to either AtLeastOnce or AtMostOnce. (LOG-6355)
2.2.3.1.2. Log Storage

- With this update, the new 1x.pico LokiStack size supports clusters with fewer workloads and lower log volumes (up to 50GB/day). (LOG-5939)
2.2.3.2. Technology Preview
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
- With this update, OpenTelemetry logs can now be forwarded using the OTel (OpenTelemetry) data model to a Red Hat Managed LokiStack instance. To enable this feature, add the observability.openshift.io/tech-preview-otlp-output: "enabled" annotation to your ClusterLogForwarder configuration. For additional configuration information, see OTLP Forwarding.
- With this update, a dataModel field has been added to the lokiStack output specification. Set the dataModel to Otel to configure log forwarding using the OpenTelemetry data format. The default is set to Viaq. For information about data mapping see OTLP Specification.
2.2.3.3. Bug Fixes

None.

2.2.3.4. CVEs

2.3. Logging 6.1

The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding.

2.3.1. Inputs and outputs
Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster:
- application
- receiver
- infrastructure
- audit
You can also define custom inputs based on namespaces or pod labels to fine-tune log selection.
Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings.
2.3.2. Receiver input type
The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog.
The ReceiverSpec field defines the configuration for a receiver input.
2.3.3. Pipelines and filters
Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages.
2.3.4. Operator behavior

The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource:

- When set to Managed (default), the Operator actively manages the logging resources to match the configuration defined in the spec.
- When set to Unmanaged, the Operator does not take any action, allowing you to manually manage the logging components.
2.3.5. Validation
Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios.
2.3.6. Quick start
OpenShift Logging supports two data models:
- ViaQ (General Availability)
- OpenTelemetry (Technology Preview)
You can select either of these data models based on your requirement by configuring the lokiStack.dataModel field in the ClusterLogForwarder. ViaQ is the default data model when forwarding logs to LokiStack.
In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry.
2.3.6.1. Quick start with ViaQ
To use the default ViaQ data model, follow these steps:
Prerequisites
- You have access to an OpenShift Container Platform cluster with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation.
Procedure
Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from the software catalog.

Create a LokiStack custom resource (CR) in the openshift-logging namespace. A minimal sketch follows.
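The size, storage class, and secret type in this sketch are placeholders that depend on your environment.

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  managementState: Managed
  size: 1x.extra-small
  storage:
    schemas:
      - effectiveDate: "2024-10-01"
        version: v13
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: <storage_class_name>
  tenants:
    mode: openshift-logging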
Note: Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see Secrets and TLS Configuration.

Create a service account for the collector:
$ oc create sa collector -n openshift-logging

Allow the collector’s service account to write data to the LokiStack CR:

$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging

Note: The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.

To collect logs, use the service account of the collector by running the following commands:

$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging

$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging

$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging

Note: The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment.
- Create a UIPlugin CR to enable the Log section in the Observe tab, for example:
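A minimal sketch of the UIPlugin CR, assuming a LokiStack named logging-loki:
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki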
- Create a ClusterLogForwarder CR to configure log forwarding, for example:
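The following ClusterLogForwarder CR is a minimal sketch that forwards application and infrastructure logs to the LokiStack created earlier. It assumes the collector service account and the logging-loki LokiStack from the previous steps; the output and pipeline names are illustrative.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      authentication:
        token:
          from: serviceAccount
      target:
        name: logging-loki
        namespace: openshift-logging
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: default-logstore
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default-lokistack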
Note: The dataModel field is optional and left unset (dataModel: "") by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes.
Verification
- Verify that logs are visible in the Log section of the Observe tab in the OpenShift Container Platform web console.
2.3.6.2. Quick start with OpenTelemetry
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps:
Prerequisites
- You have access to an OpenShift Container Platform cluster with cluster-admin permissions.
- You have installed the OpenShift CLI (oc).
- You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation.
Procedure
- Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from the software catalog.
- Create a LokiStack custom resource (CR) in the openshift-logging namespace, as shown in the LokiStack example in the ViaQ quick start.
Note: Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration".
- Create a service account for the collector:
$ oc create sa collector -n openshift-logging
- Allow the collector’s service account to write data to the LokiStack CR:
$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging
Note: The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.
- To collect logs, use the service account of the collector by running the following commands:
$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging
Note: The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment.
- Create a UIPlugin CR to enable the Log section in the Observe tab, as shown in the UIPlugin example in the ViaQ quick start.
- Create a ClusterLogForwarder CR to configure log forwarding with the OpenTelemetry data model, for example:
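The following sketch extends the ViaQ example by setting dataModel: Otel on the lokiStack output; the resource, output, and pipeline names are illustrative.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      target:
        name: logging-loki
        namespace: openshift-logging
      dataModel: Otel
      authentication:
        token:
          from: serviceAccount
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: my-pipeline
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default-lokistack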
Note: You cannot use lokiStack.labelKeys when dataModel is Otel. To achieve similar functionality when dataModel is Otel, refer to "Configuring LokiStack for OTLP data ingestion".
Verification
To verify that OTLP is functioning correctly, complete the following steps:
- In the OpenShift web console, click Observe → OpenShift Logging → LokiStack → Writes.
- Check the Distributor - Structured Metadata section.
2.4. Configuring log forwarding
The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs.
Key Functions of the ClusterLogForwarder
- Selects log messages using inputs
- Forwards logs to external destinations using outputs
- Filters, transforms, and drops log messages using filters
- Defines log forwarding pipelines connecting inputs, filters and outputs
2.4.1. Setting up log collection
This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource.
The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.
Set up log collection by binding the required cluster roles to your service account.
2.4.1.1. Legacy service accounts
To use the existing legacy service account logcollector, create the following ClusterRoleBinding objects:
$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector
Additionally, create the following ClusterRoleBinding if collecting audit logs:
$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector
2.4.1.2. Creating service accounts
Prerequisites
- The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace.
- You have administrator permissions.
Procedure
- Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account.
Bind the appropriate cluster roles to the service account:
Example binding command
$ oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>
2.4.1.2.1. Cluster Role Binding for your Service Account
The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide.
- 1
- roleRef: References the ClusterRole to which the binding applies.
- 2
- apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system.
- 3
- kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide.
- 4
- name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator.
- 5
- subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole.
- 6
- kind: Specifies that the subject is a ServiceAccount.
- 7
- Name: The name of the ServiceAccount being granted the permissions.
- 8
- namespace: Indicates the namespace where the ServiceAccount is located.
2.4.1.2.2. Writing application logs
The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application.
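The following sketch shows what such a ClusterRole might look like, matching the numbered callouts below; the metadata.name is illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-application-logs
rules:                        # 1
- apiGroups:                  # 2
  - loki.grafana.com          # 3
  resources:                  # 4
  - application               # 5
  resourceNames:              # 6
  - logs                      # 7
  verbs:                      # 8
  - create                    # 9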
- 1
- rules: Specifies the permissions granted by this ClusterRole.
- 2
- apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system.
- 3
- loki.grafana.com: The API group for managing Loki-related resources.
- 4
- resources: The resource type that the ClusterRole grants permission to interact with.
- 5
- application: Refers to the application resources within the Loki logging system.
- 6
- resourceNames: Specifies the names of resources that this role can manage.
- 7
- logs: Refers to the log resources that can be created.
- 8
- verbs: The actions allowed on the resources.
- 9
- create: Grants permission to create new logs in the Loki system.
2.4.1.2.3. Writing audit logs
The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system.
- 1
- rules: Defines the permissions granted by this ClusterRole.
- 2
- apiGroups: Specifies the API group loki.grafana.com.
- 3
- loki.grafana.com: The API group responsible for Loki logging resources.
- 4
- resources: Refers to the resource type this role manages, in this case, audit.
- 5
- audit: Specifies that the role manages audit logs within Loki.
- 6
- resourceNames: Defines the specific resources that the role can access.
- 7
- logs: Refers to the logs that can be managed under this role.
- 8
- verbs: The actions allowed on the resources.
- 9
- create: Grants permission to create new audit logs.
2.4.1.2.4. Writing infrastructure logs
The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system.
Sample YAML
- 1
- rules: Specifies the permissions this ClusterRole grants.
- 2
- apiGroups: Specifies the API group for Loki-related resources.
- 3
- loki.grafana.com: The API group managing the Loki logging system.
- 4
- resources: Defines the resource type that this role can interact with.
- 5
- infrastructure: Refers to infrastructure-related resources that this role manages.
- 6
- resourceNames: Specifies the names of resources this role can manage.
- 7
- logs: Refers to the log resources related to infrastructure.
- 8
- verbs: The actions permitted by this role.
- 9
- create: Grants permission to create infrastructure logs in the Loki system.
2.4.1.2.5. ClusterLogForwarder editor role
The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift.
- 1
- rules: Specifies the permissions this ClusterRole grants.
- 2
- apiGroups: Refers to the OpenShift-specific API group
- 3
- observability.openshift.io: The API group for managing observability resources, such as logging.
- 4
- resources: Specifies the resources this role can manage.
- 5
- clusterlogforwarders: Refers to the log forwarding resources in OpenShift.
- 6
- verbs: Specifies the actions allowed on the ClusterLogForwarders.
- 7
- create: Grants permission to create new ClusterLogForwarders.
- 8
- delete: Grants permission to delete existing ClusterLogForwarders.
- 9
- get: Grants permission to retrieve information about specific ClusterLogForwarders.
- 10
- list: Allows listing all ClusterLogForwarders.
- 11
- patch: Grants permission to partially modify ClusterLogForwarders.
- 12
- update: Grants permission to update existing ClusterLogForwarders.
- 13
- watch: Grants permission to monitor changes to ClusterLogForwarders.
2.4.2. Modifying log level in collector
To modify the log level in the collector, set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, or off.
Example log level annotation
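A sketch showing the annotation on a ClusterLogForwarder resource; the resource name is illustrative.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
  annotations:
    observability.openshift.io/log-level: debug
spec:
# ...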
2.4.3. Managing the Operator
The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged:
- Managed
- (default) The operator will drive the logging resources to match the desired state in the CLF spec.
- Unmanaged
- The operator will not take any action related to the logging components.
This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged.
2.4.4. Structure of the ClusterLogForwarder
The CLF has a spec section that contains the following key components:
- Inputs
- Select log messages to be forwarded. The built-in input types application, infrastructure, and audit forward logs from different parts of the cluster. You can also define custom inputs.
- Outputs
- Define destinations to forward logs to. Each output has a unique name and type-specific configuration.
- Pipelines
- Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names.
- Filters
- Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline.
2.4.4.1. Inputs
Inputs are configured in an array under spec.inputs. There are three built-in input types:
- application
- Selects logs from all application containers, excluding those in infrastructure namespaces.
- infrastructure
- Selects logs from nodes and from infrastructure components running in the following namespaces:
  - default
  - kube
  - openshift
  - Namespaces containing the kube- or openshift- prefix
- audit
- Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd.
Users can define custom inputs of type application that select logs from specific namespaces or by using pod labels, as shown in the following example.
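A minimal sketch of a custom application input; the input name, namespace, and label key/value are illustrative.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  inputs:
  - name: my-app-logs
    type: application
    application:
      includes:
      - namespace: my-namespace       # assumption: collect only from this namespace
      selector:
        matchLabels:
          environment: production      # assumption: collect only pods with this label
# ...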
2.4.4.2. Outputs
Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are:
- azureMonitor
- Forwards logs to Azure Monitor.
- cloudwatch
- Forwards logs to AWS CloudWatch.
- googleCloudLogging
- Forwards logs to Google Cloud Logging.
- http
- Forwards logs to a generic HTTP endpoint.
- kafka
- Forwards logs to a Kafka broker.
- loki
- Forwards logs to a Loki logging backend.
- lokistack
- Forwards logs to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack’s proxy uses OpenShift Container Platform authentication to enforce multi-tenancy.
- otlp
- Forwards logs using the OpenTelemetry Protocol.
- splunk
- Forwards logs to Splunk.
- syslog
- Forwards logs to an external syslog server.
Each output type has its own configuration fields.
2.4.5. Configuring OTLP output
Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding.
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Procedure
- Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation:
Example ClusterLogForwarder CR
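The following is a minimal sketch only; it assumes the tech-preview annotation shown here, an otlp output, and a reachable OTLP/HTTP endpoint. The resource name, output name, and URL are illustrative.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: clf-otlp
  namespace: openshift-logging
  annotations:
    observability.openshift.io/tech-preview-otlp-output: "enabled"   # assumption: annotation enabling the Technology Preview OTLP output
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: otlp
    type: otlp
    otlp:
      url: https://otlp-receiver.example.com:4318/v1/logs             # assumption: replace with your OTLP receiver endpoint
  pipelines:
  - name: otlp-logs
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - otlp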
The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework.
2.4.5.1. Pipelines
Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of:
- inputRefs
- Names of inputs whose logs should be forwarded to this pipeline.
- outputRefs
- Names of outputs to send logs to.
- filterRefs
- (optional) Names of filters to apply.
The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters.
2.4.5.2. Filters
Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them.
Administrators can configure the types of filters described in the following sections.
2.4.5.3. Enabling multi-line exception detection
Enables multi-line error detection of container logs.
Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.
Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.
Example java exception
java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
    at testjava.Main.handle(Main.java:47)
    at testjava.Main.printMe(Main.java:19)
    at testjava.Main.main(Main.java:10)
- To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder custom resource (CR) contains a detectMultilineErrors field under the .spec.filters.
Example ClusterLogForwarder CR
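A minimal sketch, assuming the detectMultilineException filter type of the observability.openshift.io/v1 API; the filter, pipeline, and output names are illustrative.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  filters:
  - name: multiline-exceptions
    type: detectMultilineException
  pipelines:
  - name: app-pipeline
    inputRefs:
    - application
    filterRefs:
    - multiline-exceptions
    outputRefs:
    - <output_name>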
2.4.5.3.1. Details
When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.
The collector supports the following languages:
- Java
- JS
- Ruby
- Python
- Golang
- PHP
- Dart
2.4.5.4. Configuring content filters to drop unwanted log records
When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration.
Procedure
- Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions:
Example ClusterLogForwarder CR
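A sketch of a drop filter consistent with the numbered callouts that follow; the filter, pipeline, output names, and regular expressions are illustrative.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  filters:
  - name: drop-unwanted
    type: drop                              # 1
    drop:                                   # 2
    - test:                                 # 3
      - field: .kubernetes.namespace_name   # 4
        matches: "very-important"           # 5
      - field: .level
        notMatches: "d.+"                   # 6
  pipelines:
  - name: app-pipeline
    inputRefs:
    - application
    filterRefs:
    - drop-unwanted                         # 7
    outputRefs:
    - <output_name>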
1. Specifies the type of filter. The drop filter drops log records that match the filter configuration.
2. Specifies configuration options for applying the drop filter.
3. Specifies the configuration for tests that are used to evaluate whether a log record is dropped.
   - If all the conditions specified for a test are true, the test passes and the log record is dropped.
   - When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped.
   - If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false.
4. Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied.
5. Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both.
6. Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both.
7. Specifies the pipeline that the drop filter is applied to.
- Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
Additional examples
The following additional example shows how you can configure the drop filter to only keep higher priority log records:
In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true:
2.4.5.5. Overview of API audit filter
OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field:
- None: The event is dropped.
- Metadata: Audit metadata is included, request and response bodies are removed.
- Request: Audit metadata and the request body are included, the response body is removed.
- RequestResponse: All data is included: metadata, request body, and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster.
The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions:
- Wildcards
- Names of users, groups, namespaces, and resources can have a leading or trailing asterisk (*) character. For example, the namespace openshift-* matches openshift-apiserver or openshift-authentication. Resource */status matches Pod/status or Deployment/status.
- Default Rules
Events that do not match any rule in the policy are filtered as follows:
- Read-only system events such as get, list, and watch are dropped.
- Service account write events that occur within the same namespace as the service account are dropped.
- All other events are forwarded, subject to any configured rate limits.
To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule.
- Omit Response Codes
- A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted.
The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site.
You must have the collect-audit-logs cluster role to collect the audit logs. The following example is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration.
Example audit policy
2.4.5.6. Filtering application logs at input by including the label expressions or a matching label key and values
You can include the application logs based on the label expressions or a matching label key and its values by using the input selector.
Procedure
- Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values:
Example ClusterLogForwarder CR
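A sketch of an application input with a label selector; the input name, label keys, and values are illustrative.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  inputs:
  - name: mylogs
    type: application
    application:
      selector:
        matchExpressions:
        - key: env
          operator: In
          values:
          - prod
          - qa
        - key: region
          operator: NotIn
          values:
          - east
        matchLabels:
          app: one
          name: app1
# ...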
- Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
2.4.5.7. Configuring content filters to prune log records
When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations.
Procedure
- Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths:
Important: If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array.
Example ClusterLogForwarder CR
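A sketch of a prune filter consistent with the numbered callouts that follow; the filter, pipeline, output names, and field paths are illustrative.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  filters:
  - name: my-prune
    type: prune                                                      # 1
    prune:                                                           # 2
      in: [.kubernetes.annotations, .kubernetes.namespace_labels]    # 3
      notIn: [.kubernetes, .log_type, .message, ."@timestamp"]       # 4
  pipelines:
  - name: app-pipeline
    inputRefs:
    - application
    filterRefs:
    - my-prune                                                       # 5
    outputRefs:
    - <output_name>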
1. Specify the type of filter. The prune filter prunes log records by configured fields.
2. Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz".
3. Optional: Any fields that are specified in this array are removed from the log record.
4. Optional: Any fields that are not specified in this array are removed from the log record.
5. Specify the pipeline that the prune filter is applied to.
Note: The prune filter exempts the log_type, .log_source, and .message fields.
- Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
2.4.6. Filtering the audit and infrastructure log inputs by source
You can define the list of audit and infrastructure sources to collect the logs by using the input selector.
Procedure
- Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources:
Example ClusterLogForwarder CR
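A sketch consistent with the numbered callouts that follow; the input names are illustrative.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  inputs:
  - name: mylogs1
    type: infrastructure
    infrastructure:
      sources:          # 1
      - node
  - name: mylogs2
    type: audit
    audit:
      sources:          # 2
      - kubeAPI
      - openshiftAPI
      - auditd
      - ovn
# ...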
1. Specifies the list of infrastructure sources to collect. The valid sources include:
   - node: Journal log from the node
   - container: Logs from the workloads deployed in the namespaces
2. Specifies the list of audit sources to collect. The valid sources include:
   - kubeAPI: Logs from the Kubernetes API servers
   - openshiftAPI: Logs from the OpenShift API servers
   - auditd: Logs from a node auditd service
   - ovn: Logs from an open virtual network service
- Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
2.4.7. Filtering application logs at input by including or excluding the namespace or container name
You can include or exclude the application logs based on the namespace and container name by using the input selector.
Procedure
- Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names:
Example ClusterLogForwarder CR
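A sketch of an application input with includes and excludes; the input name, namespaces, and container names are illustrative.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  inputs:
  - name: mylogs
    type: application
    application:
      includes:
      - namespace: "my-project"          # collect only from this namespace
        container: "my-container"        # collect only from this container
      excludes:
      - namespace: "other-project*"      # skip namespaces matching this pattern
        container: "other-container*"    # skip containers matching this pattern
# ...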
Note: The excludes field takes precedence over the includes field.
- Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
2.5. Configuring the logging collector
Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. All supported modifications to the log collector are performed through the spec.collection stanza in the ClusterLogForwarder custom resource (CR).
2.5.1. Creating a LogFileMetricExporter resource
To generate metrics from the logs produced by running containers, you must create a LogFileMetricExporter custom resource (CR).
If you do not create the LogFileMetricExporter CR, you might see a No datapoints found message in the OpenShift Container Platform web console dashboard for Produced Logs.
Prerequisites
- You have administrator permissions.
- You have installed the Red Hat OpenShift Logging Operator.
- You have installed the OpenShift CLI (oc).
Procedure
- Create a LogFileMetricExporter CR as a YAML file:
Example LogFileMetricExporter CR
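A minimal sketch; the resource values shown are placeholders and should be adjusted for your environment.
apiVersion: logging.openshift.io/v1alpha1
kind: LogFileMetricExporter
metadata:
  name: instance
  namespace: openshift-logging
spec:
  nodeSelector: {}
  resources:
    limits:
      cpu: 500m        # placeholder value
      memory: 256Mi    # placeholder value
    requests:
      cpu: 200m        # placeholder value
      memory: 128Mi    # placeholder value
  tolerations: []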
- Apply the LogFileMetricExporter CR by running the following command:
$ oc apply -f <filename>.yaml
2.5.2. Configure log collector CPU and memory limits
You can adjust the CPU and memory limits of the log collector by editing the ClusterLogForwarder custom resource (CR).
Procedure
- Edit the ClusterLogForwarder custom resource (CR):
$ oc -n openshift-logging edit ClusterLogForwarder instance
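A sketch of the spec.collector.resources stanza; the numbers shown are placeholders rather than guaranteed defaults, so check your cluster for the actual values.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collector:
    resources:          # 1
      limits:
        cpu: 500m       # placeholder value
        memory: 1Gi     # placeholder value
      requests:
        cpu: 100m       # placeholder value
        memory: 512Mi   # placeholder value
# ...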
1. Specify the CPU and memory limits and requests as needed.
2.5.3. Configuring input receivers
The Red Hat OpenShift Logging Operator deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. For log forwarder ClusterLogForwarder CR deployments, the service name is in the <clusterlogforwarder_resource_name>-<input_name> format.
2.5.3.1. Configuring the collector to receive audit logs as an HTTP server
You can configure your log collector to listen for HTTP connections to only receive audit logs by specifying http as a receiver input in the ClusterLogForwarder custom resource (CR).
HTTP receiver input is only supported for the following scenarios:
- Logging is installed on hosted control planes.
When logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:
- OpenShift Virtualization
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
- You have created a ClusterLogForwarder CR.
Procedure
- Modify the ClusterLogForwarder CR to add configuration for the http receiver input:
Example ClusterLogForwarder CR
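A sketch of an http receiver input consistent with the numbered callouts that follow; the resource, input, pipeline, and output names are illustrative.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: logging
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  inputs:
  - name: http-receiver          # 1
    type: receiver
    receiver:
      type: http                 # 2
      port: 8443                 # 3
      http:
        format: kubeAPIAudit     # 4
  pipelines:
  - name: http-pipeline          # 5
    inputRefs:
    - http-receiver
    outputRefs:
    - <output_name>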
1. Specify a name for your input receiver.
2. Specify the input receiver type as http.
3. Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535. The default value is 8443.
4. Currently, only the kube-apiserver webhook format is supported for http input receivers.
5. Configure a pipeline for your input receiver.
- Apply the changes to the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
Verification
- Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:
$ oc get svc
Example output
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
collector                 ClusterIP   172.30.85.239    <none>        24231/TCP   3m6s
collector-http-receiver   ClusterIP   172.30.205.160   <none>        8443/TCP    3m6s
In this example output, the service name is collector-http-receiver.
- Extract the certificate authority (CA) certificate file by running the following command:
$ oc extract cm/openshift-service-ca.crt -n <namespace>
- Use the curl command to send logs by running the following command:
$ curl --cacert <openshift_service_ca.crt> https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d '{"<prefix>":"<message>"}'
Replace <openshift_service_ca.crt> with the extracted CA certificate file.
Note: You can only forward logs within a cluster by following the verification steps.
2.5.3.2. Configuring the collector to listen for connections as a syslog server
You can configure your log collector to collect journal format infrastructure logs by specifying syslog as a receiver input in the ClusterLogForwarder custom resource (CR).
Syslog receiver input is only supported for the following scenarios:
- Logging is installed on hosted control planes.
When logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:
- Red Hat OpenStack Services on OpenShift (RHOSO)
- OpenShift Virtualization
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
- You have created a ClusterLogForwarder CR.
Procedure
- Grant the collect-infrastructure-logs cluster role to the service account by running the following command:
Example binding command
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logcollector
- Modify the ClusterLogForwarder CR to add configuration for the syslog receiver input:
Example ClusterLogForwarder CR
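A sketch of a syslog receiver input; the resource, input, pipeline, and output names are illustrative, and the port matches the service port shown in the verification output.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: logging
  namespace: openshift-logging
spec:
  serviceAccount:
    name: logcollector
  inputs:
  - name: syslog-receiver
    type: receiver
    receiver:
      type: syslog
      port: 10514
  pipelines:
  - name: syslog-pipeline
    inputRefs:
    - syslog-receiver
    outputRefs:
    - <output_name>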
- Apply the changes to the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
Verification
- Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:
$ oc get svc
Example output
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
collector                   ClusterIP   172.30.85.239    <none>        24231/TCP   33m
collector-syslog-receiver   ClusterIP   172.30.216.142   <none>        10514/TCP   2m20s
In this example output, the service name is collector-syslog-receiver.
2.6. Storing logs with LokiStack
You can configure a LokiStack CR to store application, audit, and infrastructure-related logs.
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster.
2.6.1. Loki deployment sizing
Sizing for Loki follows the format of 1x.<size>, where the value 1x is the number of instances and <size> specifies performance capabilities.
The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction.
Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs.
It is not possible to change the number 1x for the deployment size.
| 1x.demo | 1x.pico [6.1+ only] | 1x.extra-small | 1x.small | 1x.medium | |
|---|---|---|---|---|---|
| Data transfer | Demo use only | 50GB/day | 100GB/day | 500GB/day | 2TB/day |
| Queries per second (QPS) | Demo use only | 1-25 QPS at 200ms | 1-25 QPS at 200ms | 25-50 QPS at 200ms | 25-75 QPS at 200ms |
| Replication factor | None | 2 | 2 | 2 | 2 |
| Total CPU requests | None | 7 vCPUs | 14 vCPUs | 34 vCPUs | 54 vCPUs |
| Total CPU requests if using the ruler | None | 8 vCPUs | 16 vCPUs | 42 vCPUs | 70 vCPUs |
| Total memory requests | None | 17Gi | 31Gi | 67Gi | 139Gi |
| Total memory requests if using the ruler | None | 18Gi | 35Gi | 83Gi | 171Gi |
| Total disk requests | 40Gi | 590Gi | 430Gi | 430Gi | 590Gi |
| Total disk requests if using the ruler | 60Gi | 910Gi | 750Gi | 750Gi | 910Gi |
2.6.2. Prerequisites
- You have installed the Loki Operator by using the CLI or web console.
- You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder.
- The serviceAccount is assigned the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles.
2.6.3. Core Setup and Configuration
Role-based access controls, basic monitoring, and pod placement to deploy Loki.
2.6.4. Authorizing LokiStack rules RBAC permissions
Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users.
The following cluster roles for alerting and recording rules are available for LokiStack:
| Rule name | Description |
|---|---|
|
|
Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch |
|
|
Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to |
|
|
Users with this role have permission to create, update, and delete |
|
|
Users with this role can read |
|
|
Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch |
|
|
Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to |
|
|
Users with this role have permission to create, update, and delete |
|
|
Users with this role can read |
2.6.4.1. Examples
To apply cluster roles for a user, you must bind an existing cluster role to a specific username.
Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster.
The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster:
Example cluster role binding command for alerting rule CRUD permissions in a specific namespace
$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>
The following command gives the specified user administrator permissions for alerting rules in all namespaces:
Example cluster role binding command for administrator permissions
$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>
2.6.5. Creating a log-based alerting rule with Loki
The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions:
- If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule.
- If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule.
- If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule.
- If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule.
- If none of the above applies, an alerting rule is considered valid.
| Tenant type | Valid namespaces for AlertingRule CRs |
|---|---|
| application |
|
| audit |
|
| infrastructure |
|
Procedure
Create an
AlertingRulecustom resource (CR):Example infrastructure
AlertingRuleCRCopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The namespace where this
AlertingRuleCR is created must have a label matching the LokiStackspec.rules.namespaceSelectordefinition. - 2
- The
labelsblock must match the LokiStackspec.rules.selectordefinition. - 3
AlertingRuleCRs forinfrastructuretenants are only supported in theopenshift-*,kube-\*, ordefaultnamespaces.- 4
- The value for
kubernetes_namespace_name:must match the value formetadata.namespace. - 5
- The value of this mandatory field must be
critical,warning, orinfo. - 6
- This field is mandatory.
- 7
- This field is mandatory.
Example application
AlertingRuleCRCopy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- The namespace where this
AlertingRuleCR is created must have a label matching the LokiStackspec.rules.namespaceSelectordefinition. - 2
- The
labelsblock must match the LokiStackspec.rules.selectordefinition. - 3
- Value for
kubernetes_namespace_name:must match the value formetadata.namespace. - 4
- The value of this mandatory field must be
critical,warning, orinfo. - 5
- The value of this mandatory field is a summary of the rule.
- 6
- The value of this mandatory field is a detailed description of the rule.
- Apply the AlertingRule CR:
$ oc apply -f <filename>.yaml
2.6.6. Configuring Loki to tolerate memberlist creation failure
In an OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks.
As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command:
$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}'
Example LokiStack to include podIP
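A sketch of the resulting hashRing stanza in the LokiStack CR; other spec fields are omitted.
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  hashRing:
    type: memberlist
    memberlist:
      instanceAddrType: podIP
# ...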
2.6.7. Enabling stream-based retention with Loki
You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules.
If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage.
Schema v13 is recommended.
Procedure
- Create a LokiStack CR. Enable stream-based retention globally as shown in the following example:
Example global stream-based retention for AWS
- 1
- Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage.
- 2
- Retention is enabled in the cluster when this block is added to the CR.
- 3
- Contains the LogQL query used to define the log stream.
- Enable stream-based retention on a per-tenant basis as shown in the following example:
Example per-tenant stream-based retention for AWS
- 1
- Sets retention policy by tenant. Valid tenant types are
application,audit, andinfrastructure. - 2
- Contains the LogQL query used to define the log stream.
- Apply the LokiStack CR:
$ oc apply -f <filename>.yaml
Note: This is not for managing the retention for stored logs. Global retention periods for stored logs, up to a supported maximum of 30 days, are configured with your object storage.
2.6.8. Loki pod placement
You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods.
You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node.
Example LokiStack with node selectors
Example LokiStack CR with node selectors and tolerations
To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource:
oc explain lokistack.spec.template
$ oc explain lokistack.spec.template
Example output
For more detailed information, you can add a specific field:
oc explain lokistack.spec.template.compactor
$ oc explain lokistack.spec.template.compactor
Example output
2.6.9. Enhanced Reliability and Performance
Configurations to ensure Loki’s reliability and efficiency in production.
2.6.10. Enabling authentication to cloud-based log stores using short-lived tokens
Workload identity federation enables authentication to cloud-based log stores using short-lived tokens.
Procedure
Use one of the following options to enable authentication:
- If you use the OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret.
- If you use the OpenShift CLI (oc) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated.
Example Azure sample subscription
Example AWS sample subscription
2.6.11. Configuring Loki to tolerate node failure
The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster.
Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node.
In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods.
The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components.
You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field:
Example user settings for the ingester component
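A sketch of a required anti-affinity rule for the ingester component; the label selector assumes the app.kubernetes.io/component label that the Loki Operator applies to its pods.
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  template:
    ingester:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: ingester
          topologyKey: kubernetes.io/hostname
# ...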
2.6.12. LokiStack behavior during cluster restarts
When an OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions.
2.6.13. Advanced Deployment and Scalability
Specialized configurations for high availability, scalability, and error handling.
2.6.14. Zone aware data replication
The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2.
To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation.
Example LokiStack CR with zone replication enabled
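A sketch consistent with the numbered callouts that follow; other spec fields are omitted.
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  replicationFactor: 2                             # 1
  replication:
    factor: 2                                      # 2
    zones:
    - maxSkew: 1                                   # 3
      topologyKey: topology.kubernetes.io/zone     # 4
# ...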
- 1
- Deprecated field, values entered are overwritten by
replication.factor. - 2
- This value is automatically set when deployment size is selected at setup.
- 3
- The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0.
- 4
- Defines zones in the form of a topology key that corresponds to a node label.
2.6.15. Recovering Loki pods from failed zones
In OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss.
Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone.
The following procedure deletes the PVCs in the failed zone, and all data contained therein. To avoid complete data loss, the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating.
Prerequisites
-
Verify your
LokiStackCR has a replication factor greater than 1. - Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration.
The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone.
Procedure
- List the pods in Pending status by running the following command:
$ oc get pods --field-selector status.phase==Pending -n openshift-logging
Example oc get pods output
NAME                           READY   STATUS    RESTARTS   AGE
logging-loki-index-gateway-1   0/1     Pending   0          17m
logging-loki-ingester-1        0/1     Pending   0          16m
logging-loki-ruler-1           0/1     Pending   0          16m
These pods are in Pending status because their corresponding PVCs are in the failed zone.
- List the PVCs in Pending status by running the following command:
$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r
Example oc get pvc output
storage-logging-loki-index-gateway-1
storage-logging-loki-ingester-1
wal-logging-loki-ingester-1
storage-logging-loki-ruler-1
wal-logging-loki-ruler-1
- Delete the PVC(s) for a pod by running the following command:
oc delete pvc <pvc_name> -n openshift-logging
$ oc delete pvc <pvc_name> -n openshift-loggingCopy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the pod(s) by running the following command:
oc delete pod <pod_name> -n openshift-logging
$ oc delete pod <pod_name> -n openshift-loggingCopy to Clipboard Copied! Toggle word wrap Toggle overflow Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone.
2.6.15.1. Troubleshooting PVC in a terminating state
The PVCs might hang in the terminating state without being deleted if the PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers should allow the PVCs to delete successfully.
Remove the finalizer for each PVC by running the command below, then retry deletion.
$ oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-logging
2.6.16. Troubleshooting Loki rate limit errors
If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors.
These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention.
In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR).
The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers.
Conditions
- The Log Forwarder API is configured to forward logs to Loki.
Your system sends a block of messages that is larger than 2 MB to Loki.
After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages:

429 Too Many Requests Ingestion rate limit exceeded

Example Vector error message

2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true

The error is also visible on the receiving end. For example, in the LokiStack ingester pod:

Example Loki ingester error message

level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream
Procedure
Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR:
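A minimal sketch of this change is shown below. The field path under spec.limits.global.ingestion follows the LokiStack limits API, but the values 16 and 8 are illustrative assumptions that you should tune for your workload; the numbered comments correspond to the callouts that follow.

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki            # assumed instance name
  namespace: openshift-logging
spec:
  limits:
    global:
      ingestion:
        ingestionBurstSize: 16  # 1
        ingestionRate: 8        # 2
  # ... other spec fields unchanged
```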
- The
ingestionBurstSizefield defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than theingestionBurstSizevalue are not permitted. - 2
- The
ingestionRatefield is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention.
2.7. OTLP data ingestion in Loki
With Logging 6.1, you can use an API endpoint that accepts the OpenTelemetry Protocol (OTLP). Because OTLP is a standardized format not specifically designed for Loki, it requires additional Loki configuration to map the OpenTelemetry data format to the Loki data model. OTLP lacks concepts such as stream labels or structured metadata. Instead, OTLP provides metadata about log entries as attributes, grouped into the following three categories:
- Resource
- Scope
- Log
You can set metadata for multiple entries simultaneously or individually as needed.
2.7.1. Configuring LokiStack for OTLP data ingestion
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
To configure a LokiStack custom resource (CR) for OTLP ingestion, follow these steps:
Prerequisites
- Ensure that your Loki setup supports structured metadata, introduced in schema version 13 to enable OTLP log ingestion.
Procedure
1. Set the schema version:
   - When creating a new LokiStack CR, set version: v13 in the storage schema configuration.
     Note: For existing configurations, add a new schema entry with version: v13 and an effectiveDate in the future. For more information on updating schema versions, see Upgrading Schemas (Grafana documentation).
2. Configure the storage schema as follows:
   Example storage schema configuration
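A sketch of such a schema list follows. The dates are assumptions: keep any existing entry unchanged, and give the new v13 entry an effectiveDate in the future.

```yaml
# Partial LokiStack spec showing only the storage schema list
spec:
  storage:
    schemas:
    - effectiveDate: "2022-06-01"   # existing entry, if any
      version: v12
    - effectiveDate: "2024-10-01"   # assumed future date for the new entry
      version: v13
```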
Once the effectiveDate has passed, the v13 schema takes effect, enabling your LokiStack to store structured metadata.
2.7.2. Attribute mapping
When you set the Loki Operator to the openshift-logging mode, the Loki Operator automatically applies a default set of attribute mappings. These mappings align specific OTLP attributes with the stream labels and structured metadata of Loki.
For typical setups, these default mappings are sufficient. However, you might need to customize attribute mapping in the following cases:
- Using a custom collector: If your setup includes a custom collector that generates additional attributes, consider customizing the mapping to ensure these attributes are retained in Loki.
- Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the logging process.
Attributes that are not mapped to either stream labels or structured metadata are not stored in Loki.
2.7.2.1. Custom attribute mapping for OpenShift
When using the Loki Operator in openshift-logging mode, attribute mappings follow OpenShift default values, but you can configure custom mappings to adjust the defaults. In the openshift-logging mode, you can configure custom attribute mappings globally for all tenants or for individual tenants as needed. When you define custom mappings, they are appended to the OpenShift default values. If you do not need default labels, you can disable them in the tenant configuration.
A major difference between the Loki Operator and Loki lies in inheritance handling. Loki copies only default_resource_attributes_as_index_labels to tenants by default, while the Loki Operator applies the entire global configuration to each tenant in the openshift-logging mode.
Within LokiStack, attribute mapping configuration is managed through the limits setting. See the following example LokiStack configuration:
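The following sketch illustrates the shape of this configuration. The limits.global.otlp structure follows the Loki Operator API, but the specific attribute names listed here are assumptions chosen for illustration only.

```yaml
# Partial LokiStack spec; attribute names are illustrative
spec:
  limits:
    global:
      otlp:
        streamLabels:
          resourceAttributes:
          - name: "k8s.namespace.name"
          - name: "k8s.pod.name"
          - name: "k8s.container.name"
        structuredMetadata:
          resourceAttributes:
          - name: "k8s.pod.uid"
          logAttributes:
          - name: "log.iostream"
  tenants:
    mode: openshift-logging
```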
Both global and per-tenant OTLP configurations can map attributes to stream labels or structured metadata. At least one stream label is required to save a log entry to Loki storage, so ensure this configuration meets that requirement.
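As a sketch of the per-tenant variant, the same otlp block can be placed under limits.tenants, keyed by tenant name; the tenant name application and the attribute shown are assumptions to verify against your installed CRD version.

```yaml
# Partial LokiStack spec; per-tenant mapping is merged with the global configuration
spec:
  limits:
    tenants:
      application:              # assumed tenant name
        otlp:
          streamLabels:
            resourceAttributes:
            - name: "k8s.namespace.name"   # illustrative
```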
Stream labels derive only from resource-level attributes, which the LokiStack resource structure reflects:
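This is reflected in the API shape: under streamLabels, only a resourceAttributes list is available. A sketch follows; the attribute names are illustrative.

```yaml
# Partial LokiStack spec
spec:
  limits:
    global:
      otlp:
        streamLabels:
          resourceAttributes:
          - name: "k8s.namespace.name"   # illustrative
          - name: "openshift.log.type"   # illustrative
```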
Structured metadata, in contrast, can be generated from resource, scope or log-level attributes:
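A sketch of the corresponding structuredMetadata block, which accepts resource, scope, and log attribute lists; the names and the regex pattern are illustrative assumptions.

```yaml
# Partial LokiStack spec
spec:
  limits:
    global:
      otlp:
        structuredMetadata:
          resourceAttributes:
          - name: "process.command_line"      # illustrative
          - name: 'k8s\.pod\.labels\..+'      # illustrative regex mapping
            regex: true
          scopeAttributes:
          - name: "service.name"              # illustrative
          logAttributes:
          - name: "http.route"                # illustrative
```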
Use regular expressions by setting regex: true for attribute names when mapping similar attributes in Loki.
Avoid using regular expressions for stream labels, as this can increase data volume.
2.7.2.2. Customizing OpenShift defaults
In openshift-logging mode, certain attributes are required and cannot be removed from the configuration because of their role in OpenShift functions. Other attributes, labeled recommended, can be disabled if they impact performance.
When using the openshift-logging mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or structured metadata, use custom configuration. Custom configurations can merge with default configurations.
2.7.2.3. Removing recommended attributes
To reduce default attributes in openshift-logging mode, disable recommended attributes:
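A sketch of this setting follows. The placement under tenants.openshift follows the Loki Operator's openshift-logging tenancy configuration and should be treated as an assumption to verify against your installed CRD version; the numbered comment corresponds to the callout below.

```yaml
# Partial LokiStack spec
spec:
  tenants:
    mode: openshift-logging
    openshift:
      otlp:
        disableRecommendedAttributes: true  # 1
```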
1. Set disableRecommendedAttributes: true to remove recommended attributes, which limits default attributes to the required attributes.
This option is beneficial if the default attributes cause performance or storage issues. This setting might negatively impact query performance, as it removes default stream labels. You should pair this option with a custom attribute configuration to retain attributes essential for queries.
2.8. OpenTelemetry data model
This document outlines the protocol and semantic conventions for Red Hat OpenShift Logging’s OpenTelemetry support with Logging 6.1.
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
2.8.1. Forwarding and ingestion protocol
Red Hat OpenShift Logging collects and forwards logs to OpenTelemetry endpoints using the OTLP Specification. OTLP encodes, transports, and delivers telemetry data. You can also deploy Loki storage, which provides an OTLP endpoint to ingest log streams. This document defines the semantic conventions for the logs collected from various OpenShift cluster sources.
2.8.2. Semantic conventions
The log collector in this solution gathers the following log streams:
- Container logs
- Cluster node journal logs
- Cluster node auditd logs
- Kubernetes and OpenShift API server logs
- OpenShift Virtual Network (OVN) logs
You can forward these streams according to the semantic conventions defined by OpenTelemetry semantic attributes. The semantic conventions in OpenTelemetry define a resource as an immutable representation of the entity producing telemetry, identified by attributes. For example, a process running in a container includes attributes such as container_name, cluster_id, pod_name, namespace, and possibly deployment or app_name. These attributes are grouped under the resource object, which helps reduce repetition and optimizes log transmission as telemetry data.
In addition to resource attributes, logs might also contain scope attributes specific to instrumentation libraries and log attributes specific to each log entry. These attributes provide greater detail about each log entry and enhance filtering capabilities when querying logs in storage.
The following sections define the attributes that are generally forwarded.
2.8.2.1. Log entry structure
All log streams include the following log data fields:
The Applicable Sources column indicates which log sources each field applies to:
- all: This field is present in all logs.
- container: This field is present in Kubernetes container logs, both application and infrastructure.
- audit: This field is present in Kubernetes, OpenShift API, and OVN logs.
- auditd: This field is present in node auditd logs.
- journal: This field is present in node journal logs.
| Name | Applicable Sources | Comment |
|---|---|---|
| | all | |
| | all | |
| | all | |
| | container, journal | |
| | all | (Optional) Present when forwarding stream specific attributes |
2.8.2.2. Attributes
Log entries include a set of resource, scope, and log attributes based on their source, as described in the following table.
The Location column specifies the type of attribute:
- resource: Indicates a resource attribute
- scope: Indicates a scope attribute
- log: Indicates a log attribute
The Storage column indicates whether the attribute is stored in a LokiStack using the default openshift-logging mode and specifies where the attribute is stored:
- stream label:
  - Enables efficient filtering and querying based on specific labels.
  - Can be labeled as required if the Loki Operator enforces this attribute in the configuration.
- structured metadata:
  - Allows for detailed filtering and storage of key-value pairs.
  - Enables users to use direct labels for streamlined queries without requiring JSON parsing.
With OTLP, users can filter queries directly by labels rather than using JSON parsing, improving the speed and efficiency of queries.
| Name | Location | Applicable Sources | Storage (LokiStack) | Comment |
|---|---|---|---|---|
| | resource | all | required stream label | (DEPRECATED) Compatibility attribute, contains same information as |
| | resource | all | required stream label | (DEPRECATED) Compatibility attribute, contains same information as |
| | resource | container | stream label | (DEPRECATED) Compatibility attribute, contains same information as |
| | resource | all | stream label | (DEPRECATED) Compatibility attribute, contains same information as |
| | resource | container | required stream label | (DEPRECATED) Compatibility attribute, contains same information as |
| | resource | container | stream label | (DEPRECATED) Compatibility attribute, contains same information as |
| | resource | all | | (DEPRECATED) Compatibility attribute, contains same information as |
| | log | container, journal | | (DEPRECATED) Compatibility attribute, contains same information as |
| | resource | all | required stream label | |
| | resource | all | required stream label | |
| | resource | all | required stream label | |
| | resource | all | structured metadata | |
| | resource | all | stream label | |
| | resource | container | required stream label | |
| | resource | container | stream label | |
| | resource | container | structured metadata | |
| | resource | container | stream label | |
| | resource | container | structured metadata | |
| | resource | container | stream label | Conditionally forwarded based on creator of pod |
| | resource | container | stream label | Conditionally forwarded based on creator of pod |
| | resource | container | stream label | Conditionally forwarded based on creator of pod |
| | resource | container | stream label | Conditionally forwarded based on creator of pod |
| | resource | container | structured metadata | Conditionally forwarded based on creator of pod |
| | resource | container | stream label | Conditionally forwarded based on creator of pod |
| | log | container | structured metadata | |
| | log | audit | structured metadata | |
| | log | audit | structured metadata | |
| | log | audit | structured metadata | |
| | log | audit | structured metadata | |
| | log | audit | structured metadata | |
| | log | audit | structured metadata | |
| | log | audit | structured metadata | |
| | log | audit | structured metadata | |
| | log | audit | structured metadata | |
| | log | audit | structured metadata | |
| | log | audit | structured metadata | |
| | log | audit | structured metadata | |
| | log | audit | structured metadata | |
| | resource | journal | structured metadata | |
| | resource | journal | structured metadata | |
| | resource | journal | structured metadata | |
| | resource | journal | structured metadata | |
| | resource | journal | stream label | |
| | log | journal | structured metadata | |
| | log | journal | structured metadata | |
Attributes marked as Compatibility attribute support minimal backward compatibility with the ViaQ data model. These attributes are deprecated and function as a compatibility layer to ensure continued UI functionality. These attributes will remain supported until the Logging UI fully supports the OpenTelemetry counterparts in future releases.
Loki changes the attribute names when persisting them to storage. The names are lowercased, and all characters in the set (., /, -) are replaced by underscores (_). For example, k8s.namespace.name becomes k8s_namespace_name.
2.9. Visualization for logging
Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation.
Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA.
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.