Chapter 2. Logging 6.0
2.1. Release notes
2.1.1. Logging 6.0.3
This release includes RHBA-2024:10991.
2.1.1.1. New features and enhancements
- With this update, the Loki Operator supports configuring workload identity federation on Google Cloud by using the Cloud Credential Operator (CCO) in OpenShift Container Platform 4.17 or later. (LOG-6421)
2.1.1.2. Bug fixes
- Before this update, the collector used the default settings to collect audit logs, which did not account for back pressure from output receivers. With this update, the audit log collection is optimized for file handling and log reading. (LOG-6034)
- Before this update, any namespace containing openshift or kube was treated as an infrastructure namespace. With this update, only the following namespaces are treated as infrastructure namespaces: default, kube, openshift, and namespaces that begin with openshift- or kube-. (LOG-6204)
- Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. (LOG-6343)
- Before this update, pipeline validation might enter an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. (LOG-6352)
- Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. (LOG-6406)
- Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with a configured LokiStack output caused errors due to a nil pointer dereference. With this update, the Operator performs nil checks, preventing such errors. (LOG-6441)
- Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. (LOG-6486)
- Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume is correctly mounted, and the logs are collected as expected. (LOG-6543)
2.1.1.3. CVEs
2.1.2. Logging 6.0.2
This release includes RHBA-2024:10051.
2.1.2.1. Bug fixes
- Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. (LOG-5325)
- Before this update, the collector would discard audit log messages that exceeded the configured threshold. This update modifies the audit configuration thresholds for the maximum line size and the number of bytes read during a read cycle. (LOG-5998)
- Before this update, the Cluster Logging Operator did not watch and reconcile resources associated with an instance of a ClusterLogForwarder like it did in prior releases. This update modifies the operator to watch and reconcile all resources it owns and creates. (LOG-6264)
- Before this update, log events with an unknown severity level sent to Google Cloud Logging would trigger a warning in the vector collector, which would then default the severity to 'DEFAULT'. With this update, log severity levels are now standardized to match Google Cloud Logging specifications, and audit logs are assigned a severity of 'INFO'. (LOG-6296)
- Before this update, when infrastructure namespaces were included in application inputs, their log_type was set as application. With this update, the log_type of infrastructure namespaces included in application inputs is set to infrastructure. (LOG-6354)
- Before this update, specifying a value for the syslog.enrichment field of the ClusterLogForwarder added pod_name, namespace_name, and container_name to the messages of non-container logs. With this update, only container logs include pod_name, namespace_name, and container_name in their messages when syslog.enrichment is set. (LOG-6402)
2.1.2.2. CVEs
2.1.3. Logging 6.0.1
This release includes OpenShift Logging Bug Fix Release 6.0.1.
2.1.3.1. Bug fixes
- With this update, the default memory limit for the collector has been increased from 1024 Mi to 2048 Mi. However, users should always adjust their resource limits according to their cluster specifications and needs. (LOG-6180)
- Before this update, the Loki Operator failed to add the default namespace label to all AlertingRule resources, which caused the User-Workload-Monitoring Alertmanager to skip routing these alerts. This update adds the rule namespace as a label to all alerting and recording rules, resolving the issue and restoring proper alert routing in Alertmanager. (LOG-6151)
- Before this update, the LokiStack ruler component view did not initialize properly, causing an invalid field error when the ruler component was disabled. This update ensures that the component view initializes with an empty value, resolving the issue. (LOG-6129)
- Before this update, it was possible to set log_source in the prune filter, which could lead to inconsistent log data. With this update, the configuration is validated before being applied, and any configuration that includes log_source in the prune filter is rejected. (LOG-6202)
2.1.3.2. CVEs
2.1.4. Logging 6.0.0
This release includes Logging for Red Hat OpenShift Bug Fix Release 6.0.0.
Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
| logging Version | Component Version | | | | | |
|---|---|---|---|---|---|---|
| Operator | eventrouter | logfilemetricexporter | loki | lokistack-gateway | opa-openshift | vector |
| 6.0 | 0.4 | 1.1 | 3.1.0 | 0.1 | 0.1 | 0.37.1 |
2.1.5. Removal notice
- With this release, logging no longer supports the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io custom resources. Refer to the product documentation for details on the replacement features. (LOG-5803)
- With this release, logging no longer manages or deploys log storage (such as Elasticsearch), visualization (such as Kibana), or Fluentd-based log collectors. (LOG-5368)
To continue using Elasticsearch and Kibana managed by the elasticsearch-operator, the administrator must modify those objects' ownerRefs before deleting the ClusterLogging resource.
2.1.6. New features and enhancements
- This feature introduces a new architecture for logging for Red Hat OpenShift by shifting component responsibilities to their relevant Operators, such as those for storage, visualization, and collection. It introduces the ClusterLogForwarder.observability.openshift.io API for log collection and forwarding. Support for the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io APIs, along with the Red Hat managed Elastic stack (Elasticsearch and Kibana), is removed. Users are encouraged to migrate to the Red Hat LokiStack for log storage. Existing managed Elasticsearch deployments can be used for a limited time. Automated migration for log collection is not provided, so administrators need to create a new ClusterLogForwarder.observability.openshift.io specification to replace their previous custom resources. Refer to the official product documentation for more details. (LOG-3493)
- With this release, the responsibility for deploying the logging view plugin shifts from the Red Hat OpenShift Logging Operator to the Cluster Observability Operator (COO). For new log storage installations that need visualization, the Cluster Observability Operator and the associated UIPlugin resource must be deployed. Refer to the Cluster Observability Operator Overview product documentation for more details. (LOG-5461)
- This enhancement sets default requests and limits for Vector collector deployments' memory and CPU usage based on Vector documentation recommendations. (LOG-4745)
- This enhancement updates Vector to align with the upstream version v0.37.1. (LOG-5296)
- This enhancement introduces an alert that triggers when log collectors buffer logs to a node’s file system and use over 15% of the available space, indicating potential back pressure issues. (LOG-5381)
- This enhancement updates the selectors for all components to use common Kubernetes labels. (LOG-5906)
- This enhancement changes the collector configuration to deploy as a ConfigMap instead of a secret, allowing users to view and edit the configuration when the ClusterLogForwarder is set to Unmanaged. (LOG-5599)
- This enhancement adds the ability to configure the Vector collector log level using an annotation on the ClusterLogForwarder, with options including trace, debug, info, warn, error, or off. (LOG-5372)
- This enhancement adds validation to reject configurations where Amazon CloudWatch outputs use multiple AWS roles, preventing incorrect log routing. (LOG-5640)
- This enhancement removes the Log Bytes Collected and Log Bytes Sent graphs from the metrics dashboard. (LOG-5964)
- This enhancement updates the must-gather functionality to only capture information for inspecting Logging 6.0 components, including Vector deployments from ClusterLogForwarder.observability.openshift.io resources and the Red Hat managed LokiStack. (LOG-5949)
- This enhancement improves Azure storage secret validation by providing early warnings for specific error conditions. (LOG-4571)
- This enhancement updates the ClusterLogForwarder API to follow the Kubernetes standards. (LOG-5977)

Example of a new configuration in the ClusterLogForwarder custom resource for the updated API

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <name>
spec:
  outputs:
  - name: <output_name>
    type: <output_type>
    <output_type>:
      tuning:
        deliveryMode: AtMostOnce
2.1.7. Technology Preview features
- This release introduces a Technology Preview feature for log forwarding using OpenTelemetry. A new output type, OTLP, allows sending JSON-encoded log records using the OpenTelemetry data model and resource semantic conventions. (LOG-4225)
2.1.8. Bug fixes
- Before this update, the CollectorHighErrorRate and CollectorVeryHighErrorRate alerts were still present. With this update, both alerts are removed in the logging 6.0 release but might return in a future release. (LOG-3432)
2.1.9. CVEs
2.2. Logging 6.0
The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding.
2.2.1. Inputs and Outputs
Inputs specify the sources of logs to be forwarded. Logging provides the built-in input types application, infrastructure, and audit.
Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings.
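As a minimal sketch of how inputs, outputs, and a pipeline fit together (the names my-app-logs, my-http-output, and the URL are hypothetical, chosen only for illustration):

```yaml
# Hypothetical sketch: resource, input, and output names are illustrative.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  inputs:
  - name: my-app-logs            # selects application logs from one namespace
    type: application
    application:
      includes:
      - namespace: my-namespace
  outputs:
  - name: my-http-output         # destination-specific settings live under the output type
    type: http
    http:
      url: https://log-sink.example.com
  pipelines:
  - name: app-to-http            # connects the input to the output
    inputRefs:
    - my-app-logs
    outputRefs:
    - my-http-output
```

The pattern generalizes: each output type (lokiStack, elasticsearch, and so on) carries its own configuration block under a field named after the type.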
2.2.2. Receiver Input Type
The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog.
The ReceiverSpec field defines the configuration for a receiver input.
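For illustration, a sketch of an http receiver input (the input name and port here are arbitrary choices; the field layout follows the 6.0 receiver examples later in this document):

```yaml
# Illustrative fragment of a ClusterLogForwarder spec: the input name
# and port are arbitrary; kubeAPIAudit is the http receiver format
# shown elsewhere in this document.
spec:
  inputs:
  - name: http-audit-receiver
    type: receiver
    receiver:
      type: http
      port: 8443
      http:
        format: kubeAPIAudit
```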
2.2.3. Pipelines and Filters
Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. Filters can be used to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages.
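The ordering rule above can be sketched as follows; the filter names, the drop condition on .level, the label value, and the default-lokistack output are all hypothetical placeholders:

```yaml
# Sketch: filters run in the order listed in filterRefs, so the drop
# filter removes debug messages before the label filter ever sees them.
# All names and the .level condition are illustrative.
spec:
  filters:
  - name: drop-debug
    type: drop
    drop:
    - test:
      - field: .level
        matches: debug
  - name: add-labels
    type: openshiftLabels
    openshiftLabels:
      team: payments
  pipelines:
  - name: app-pipeline
    inputRefs:
    - application
    filterRefs:
    - drop-debug      # applied first; dropped messages never reach add-labels
    - add-labels
    outputRefs:
    - default-lokistack
```

Reversing the two filterRefs entries would label every message, including the debug messages that are subsequently dropped, which is why filter order matters.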
2.2.4. Operator Behavior
The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource:
- When set to Managed (default), the operator actively manages the logging resources to match the configuration defined in the spec.
- When set to Unmanaged, the operator does not take any action, allowing you to manually manage the logging components.
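For example, to pause reconciliation so manual changes to the collector are left in place, the state can be set directly in the resource (a sketch; the resource name is illustrative):

```yaml
# While Unmanaged, the operator stops reconciling, so manual edits to
# the logging components persist until the state is set back to Managed.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  managementState: Unmanaged
```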
2.2.5. Validation
Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values.
2.2.6. Quick Start
Prerequisites
- You have access to an OpenShift Container Platform cluster with cluster-admin permissions.
- You installed the OpenShift CLI (oc).
- You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation.
Procedure
- Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub.
- Create a secret to access an existing object storage bucket:

  Example command for AWS

  $ oc create secret generic logging-loki-s3 \
      --from-literal=bucketnames="<bucket_name>" \
      --from-literal=endpoint="<aws_bucket_endpoint>" \
      --from-literal=access_key_id="<aws_access_key_id>" \
      --from-literal=access_key_secret="<aws_access_key_secret>" \
      --from-literal=region="<aws_region_of_your_bucket>" \
      -n openshift-logging

- Create a LokiStack custom resource (CR) in the openshift-logging namespace:

  apiVersion: loki.grafana.com/v1
  kind: LokiStack
  metadata:
    name: logging-loki
    namespace: openshift-logging
  spec:
    managementState: Managed
    size: 1x.extra-small
    storage:
      schemas:
      - effectiveDate: '2022-06-01'
        version: v13
      secret:
        name: logging-loki-s3
        type: s3
    storageClassName: gp3-csi
    tenants:
      mode: openshift-logging

- Create a service account for the collector:

  $ oc create sa collector -n openshift-logging

- Bind the ClusterRole to the service account:

  $ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging

- Create a UIPlugin to enable the Log section in the Observe tab:

  apiVersion: observability.openshift.io/v1alpha1
  kind: UIPlugin
  metadata:
    name: logging
  spec:
    type: Logging
    logging:
      lokiStack:
        name: logging-loki

- Add additional roles to the collector service account:

  $ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging
  $ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging
  $ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging

- Create a ClusterLogForwarder CR to configure log forwarding:

  apiVersion: observability.openshift.io/v1
  kind: ClusterLogForwarder
  metadata:
    name: collector
    namespace: openshift-logging
  spec:
    serviceAccount:
      name: collector
    outputs:
    - name: default-lokistack
      type: lokiStack
      lokiStack:
        target:
          name: logging-loki
          namespace: openshift-logging
        authentication:
          token:
            from: serviceAccount
      tls:
        ca:
          key: service-ca.crt
          configMapName: openshift-service-ca.crt
    pipelines:
    - name: default-logstore
      inputRefs:
      - application
      - infrastructure
      outputRefs:
      - default-lokistack
Verification
- Verify that logs are visible in the Log section of the Observe tab in the OpenShift Container Platform web console.
2.3. Upgrading to Logging 6.0
Logging v6.0 is a significant upgrade from previous releases, achieving several longstanding goals of Cluster Logging:
- Introduction of distinct operators to manage logging components (e.g., collectors, storage, visualization).
- Removal of support for managed log storage and visualization based on Elastic products (i.e., Elasticsearch, Kibana).
- Deprecation of the Fluentd log collector implementation.
- Removal of support for ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io resources.
The cluster-logging-operator does not provide an automated upgrade process.
Given the various configurations for log collection, forwarding, and storage, no automated upgrade is provided by the cluster-logging-operator. This documentation assists administrators in converting existing ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io specifications to the new ClusterLogForwarder.observability.openshift.io API.
2.3.1. Using the oc explain command
The oc explain command is an essential tool in the OpenShift CLI (oc) that provides detailed descriptions of the fields within custom resources (CRs).
2.3.1.1. Resource Descriptions
oc explain offers detailed descriptions of all fields related to a resource.

To view the documentation for the outputs field of the ClusterLogForwarder resource, run the following command:
$ oc explain clusterlogforwarders.observability.openshift.io.spec.outputs
In place of clusterlogforwarder, you can use the short form obsclf.
This will display detailed information about these fields, including their types, default values, and any associated sub-fields.
2.3.1.2. Hierarchical Structure
The command displays the structure of resource fields in a hierarchical format, clarifying the relationships between different configuration options.
For instance, here is how you can drill down into the storage configuration for the LokiStack custom resource:
$ oc explain lokistacks.loki.grafana.com
$ oc explain lokistacks.loki.grafana.com.spec
$ oc explain lokistacks.loki.grafana.com.spec.storage
$ oc explain lokistacks.loki.grafana.com.spec.storage.schemas
Each command reveals a deeper level of the resource specification, making the structure clear.
2.3.1.3. Type Information
oc explain also shows the type of each field, such as integer, string, or boolean.
For example:
$ oc explain lokistacks.loki.grafana.com.spec.size
This output shows the type of the size field, along with its description.
2.3.1.4. Default Values
When applicable, the command shows the default values for fields, providing insights into what values will be used if none are explicitly specified.
Again using lokistacks.loki.grafana.com as an example:
$ oc explain lokistacks.spec.template.distributor.replicas
Example output
GROUP: loki.grafana.com
KIND: LokiStack
VERSION: v1
FIELD: replicas <integer>
DESCRIPTION:
Replicas defines the number of replica pods of the component.
2.3.2. Log Storage
The only managed log storage solution available in this release is a LokiStack, managed by the Loki Operator. This solution, previously available as the preferred alternative to the managed Elasticsearch offering, remains unchanged in its deployment process.
To continue using an existing Red Hat managed Elasticsearch or Kibana deployment provided by the Elasticsearch Operator, remove the owner references from the Elasticsearch resource named elasticsearch and the Kibana resource named kibana in the openshift-logging namespace before removing the ClusterLogging resource named instance in the same namespace.
- Temporarily set the ClusterLogging resource to the Unmanaged state by running the following command:

  $ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge

- Remove the ownerReferences parameter from the Elasticsearch resource by running the following command:

  The following command ensures that ClusterLogging no longer owns the Elasticsearch resource. Updates to the ClusterLogging resource's logStore field will no longer affect the Elasticsearch resource.

  $ oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge

- Remove the ownerReferences parameter from the Kibana resource.

  The following command ensures that ClusterLogging no longer owns the Kibana resource. Updates to the ClusterLogging resource's visualization field will no longer affect the Kibana resource.

  $ oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge

- Set the ClusterLogging resource back to the Managed state by running the following command:

  $ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Managed"}}' --type=merge
2.3.3. Log Visualization
The OpenShift console UI plugin for log visualization has been moved to the cluster-observability-operator from the cluster-logging-operator.
2.3.4. Log Collection and Forwarding
Log collection and forwarding configurations are now specified under the new ClusterLogForwarder.observability.openshift.io API, which is part of the observability.openshift.io API group.
Vector is the only supported collector implementation.
2.3.5. Management, Resource Allocation, and Workload Scheduling
Configuration for management state (e.g., Managed, Unmanaged), resource requests and limits, tolerations, and node selection is now part of the new ClusterLogForwarder API.
Previous Configuration
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
spec:
managementState: "Managed"
collection:
resources:
limits: {}
requests: {}
nodeSelector: {}
tolerations: {}
Current Configuration
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
managementState: Managed
collector:
resources:
limits: {}
requests: {}
nodeSelector: {}
tolerations: {}
2.3.6. Input Specifications
The input specification is an optional part of the ClusterLogForwarder specification. Administrators can continue to use the predefined values of application, infrastructure, and audit to collect these sources.
2.3.6.1. Application Inputs
Namespace and container inclusions and exclusions have been consolidated into a single field.
5.9 Application Input with Namespace and Container Includes and Excludes
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
spec:
inputs:
- name: application-logs
type: application
application:
namespaces:
- foo
- bar
includes:
- namespace: my-important
container: main
excludes:
- container: too-verbose
6.0 Application Input with Namespace and Container Includes and Excludes
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
inputs:
- name: application-logs
type: application
application:
includes:
- namespace: foo
- namespace: bar
- namespace: my-important
container: main
excludes:
- container: too-verbose
application, infrastructure, and audit are reserved words and cannot be used as names when defining an input.
2.3.6.2. Input Receivers
Changes to input receivers include:
- Explicit configuration of the type at the receiver level.
- Port settings moved to the receiver level.
5.9 Input Receivers
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
spec:
inputs:
- name: an-http
receiver:
http:
port: 8443
format: kubeAPIAudit
- name: a-syslog
receiver:
type: syslog
syslog:
port: 9442
6.0 Input Receivers
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
inputs:
- name: an-http
type: receiver
receiver:
type: http
port: 8443
http:
format: kubeAPIAudit
- name: a-syslog
type: receiver
receiver:
type: syslog
port: 9442
2.3.7. Output Specifications
High-level changes to output specifications include:
- URL settings moved to each output type specification.
- Tuning parameters moved to each output type specification.
- Separation of TLS configuration from authentication.
- Explicit configuration of keys and secret/configmap for TLS and authentication.
2.3.8. Secrets and TLS Configuration
Secrets and TLS configurations are now separated into authentication and TLS configuration for each output. They must be explicitly defined in the specification rather than relying on administrators to define secrets with recognized keys. Upgrading TLS and authorization configurations requires administrators to understand previously recognized keys to continue using existing secrets. Examples in the following sections provide details on how to configure ClusterLogForwarder secrets to forward to existing Red Hat managed log storage solutions.
2.3.9. Red Hat Managed Elasticsearch
v5.9 Forwarding to Red Hat Managed Elasticsearch
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
name: instance
namespace: openshift-logging
spec:
logStore:
type: elasticsearch
v6.0 Forwarding to Red Hat Managed Elasticsearch
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: instance
namespace: openshift-logging
spec:
serviceAccount:
name: <service_account_name>
managementState: Managed
outputs:
- name: audit-elasticsearch
type: elasticsearch
elasticsearch:
url: https://elasticsearch:9200
version: 6
index: audit-write
tls:
ca:
key: ca-bundle.crt
secretName: collector
certificate:
key: tls.crt
secretName: collector
key:
key: tls.key
secretName: collector
- name: app-elasticsearch
type: elasticsearch
elasticsearch:
url: https://elasticsearch:9200
version: 6
index: app-write
tls:
ca:
key: ca-bundle.crt
secretName: collector
certificate:
key: tls.crt
secretName: collector
key:
key: tls.key
secretName: collector
- name: infra-elasticsearch
type: elasticsearch
elasticsearch:
url: https://elasticsearch:9200
version: 6
index: infra-write
tls:
ca:
key: ca-bundle.crt
secretName: collector
certificate:
key: tls.crt
secretName: collector
key:
key: tls.key
secretName: collector
pipelines:
- name: app
inputRefs:
- application
outputRefs:
- app-elasticsearch
- name: audit
inputRefs:
- audit
outputRefs:
- audit-elasticsearch
- name: infra
inputRefs:
- infrastructure
outputRefs:
- infra-elasticsearch
2.3.10. Red Hat Managed LokiStack
v5.9 Forwarding to Red Hat Managed LokiStack
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
name: instance
namespace: openshift-logging
spec:
logStore:
type: lokistack
lokistack:
name: logging-loki
v6.0 Forwarding to Red Hat Managed LokiStack
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: instance
namespace: openshift-logging
spec:
serviceAccount:
name: <service_account_name>
outputs:
- name: default-lokistack
type: lokiStack
lokiStack:
target:
name: logging-loki
namespace: openshift-logging
authentication:
token:
from: serviceAccount
tls:
ca:
key: service-ca.crt
configMapName: openshift-service-ca.crt
pipelines:
- outputRefs:
- default-lokistack
- inputRefs:
- application
- infrastructure
2.3.11. Filters and Pipeline Configuration
Pipeline configurations now define only the routing of input sources to their output destinations, with any required transformations configured separately as filters. All attributes of pipelines from previous releases have been converted to filters in this release. Individual filters are defined in the filters section of the specification.
5.9 Filters
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
spec:
pipelines:
- name: application-logs
parse: json
labels:
foo: bar
detectMultilineErrors: true
6.0 Filter Configuration
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
spec:
filters:
- name: detectexception
type: detectMultilineException
- name: parse-json
type: parse
- name: labels
type: openshiftLabels
openshiftLabels:
foo: bar
pipelines:
- name: application-logs
filterRefs:
- detectexception
- labels
- parse-json
2.3.12. Validation and Status
Most validations are enforced when a resource is created or updated, providing immediate feedback. This is a departure from previous releases, where validation occurred post-creation and required inspecting the resource status. Some validation still occurs post-creation for cases where it is not possible to validate at creation or update time.
Instances of the ClusterLogForwarder.observability.openshift.io resource report their state in a set of status conditions.
6.0 Status Conditions
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
status:
conditions:
- lastTransitionTime: "2024-09-13T03:28:44Z"
message: 'permitted to collect log types: [application]'
reason: ClusterRolesExist
status: "True"
type: observability.openshift.io/Authorized
- lastTransitionTime: "2024-09-13T12:16:45Z"
message: ""
reason: ValidationSuccess
status: "True"
type: observability.openshift.io/Valid
- lastTransitionTime: "2024-09-13T12:16:45Z"
message: ""
reason: ReconciliationComplete
status: "True"
type: Ready
filterConditions:
- lastTransitionTime: "2024-09-13T13:02:59Z"
message: filter "detectexception" is valid
reason: ValidationSuccess
status: "True"
type: observability.openshift.io/ValidFilter-detectexception
- lastTransitionTime: "2024-09-13T13:02:59Z"
message: filter "parse-json" is valid
reason: ValidationSuccess
status: "True"
type: observability.openshift.io/ValidFilter-parse-json
inputConditions:
- lastTransitionTime: "2024-09-13T12:23:03Z"
message: input "application1" is valid
reason: ValidationSuccess
status: "True"
type: observability.openshift.io/ValidInput-application1
outputConditions:
- lastTransitionTime: "2024-09-13T13:02:59Z"
message: output "default-lokistack-application1" is valid
reason: ValidationSuccess
status: "True"
type: observability.openshift.io/ValidOutput-default-lokistack-application1
pipelineConditions:
- lastTransitionTime: "2024-09-13T03:28:44Z"
message: pipeline "default-before" is valid
reason: ValidationSuccess
status: "True"
type: observability.openshift.io/ValidPipeline-default-before
Conditions that are satisfied and applicable have a "status" value of "True". Conditions with a status other than "True" provide a reason and a message explaining the issue.
2.4. Configuring log forwarding
The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations.
Key Functions of the ClusterLogForwarder
- Selects log messages using inputs
- Forwards logs to external destinations using outputs
- Filters, transforms, and drops log messages using filters
- Defines log forwarding pipelines connecting inputs, filters, and outputs
2.4.1. Setting up log collection
This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource.
The Red Hat OpenShift Logging Operator provides the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.
Set up log collection by binding the required cluster roles to your service account.
2.4.1.1. Legacy service accounts
To use the existing legacy service account logcollector, create the following ClusterRoleBindings:
$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector
Additionally, create the following ClusterRoleBinding if collecting audit logs:
$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector
2.4.1.2. Creating service accounts
Prerequisites
- The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace.
- You have administrator permissions.
Procedure
- Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account.
Bind the appropriate cluster roles to the service account:
Example binding command
$ oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>
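For example, assuming a service account named collector in the openshift-logging namespace (both names are placeholders for this sketch, not prescribed values), the full setup might look like the following:

```shell
# Create the service account that the collector will run as (hypothetical name).
oc create serviceaccount collector -n openshift-logging

# Bind the three collection cluster roles provided by the Operator.
oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:collector
oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:collector
oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:collector
```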
2.4.1.2.1. Cluster Role Binding for your Service Account
The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: manager-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-logging-operator
subjects:
- kind: ServiceAccount
name: cluster-logging-operator
namespace: openshift-logging
- roleRef: References the ClusterRole to which the binding applies.
- apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of the Kubernetes RBAC system.
- kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide.
- name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator.
- subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole.
- kind: Specifies that the subject is a ServiceAccount.
- name: The name of the ServiceAccount being granted the permissions.
- namespace: Indicates the namespace where the ServiceAccount is located.
2.4.1.2.2. Writing application logs
The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-logging-write-application-logs
rules:
- apiGroups:
- loki.grafana.com
resources:
- application
resourceNames:
- logs
verbs:
- create
Annotations
- rules: Specifies the permissions granted by this ClusterRole.
- apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system.
- loki.grafana.com: The API group for managing Loki-related resources.
- resources: The resource type that the ClusterRole grants permission to interact with.
- application: Refers to the application resources within the Loki logging system.
- resourceNames: Specifies the names of resources that this role can manage.
- logs: Refers to the log resources that can be created.
- verbs: The actions allowed on the resources.
- create: Grants permission to create new logs in the Loki system.
2.4.1.2.3. Writing audit logs
The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-logging-write-audit-logs
rules:
- apiGroups:
- loki.grafana.com
resources:
- audit
resourceNames:
- logs
verbs:
- create
- rules: Defines the permissions granted by this ClusterRole.
- apiGroups: Specifies the API group loki.grafana.com.
- loki.grafana.com: The API group responsible for Loki logging resources.
- resources: Refers to the resource type this role manages, in this case, audit.
- audit: Specifies that the role manages audit logs within Loki.
- resourceNames: Defines the specific resources that the role can access.
- logs: Refers to the logs that can be managed under this role.
- verbs: The actions allowed on the resources.
- create: Grants permission to create new audit logs.
2.4.1.2.4. Writing infrastructure logs
The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system.
Sample YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-logging-write-infrastructure-logs
rules:
- apiGroups:
- loki.grafana.com
resources:
- infrastructure
resourceNames:
- logs
verbs:
- create
- rules: Specifies the permissions this ClusterRole grants.
- apiGroups: Specifies the API group for Loki-related resources.
- loki.grafana.com: The API group managing the Loki logging system.
- resources: Defines the resource type that this role can interact with.
- infrastructure: Refers to infrastructure-related resources that this role manages.
- resourceNames: Specifies the names of resources this role can manage.
- logs: Refers to the log resources related to infrastructure.
- verbs: The actions permitted by this role.
- create: Grants permission to create infrastructure logs in the Loki system.
2.4.1.2.5. ClusterLogForwarder editor role
The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: clusterlogforwarder-editor-role
rules:
- apiGroups:
- observability.openshift.io
resources:
- clusterlogforwarders
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- rules: Specifies the permissions this ClusterRole grants.
- apiGroups: Refers to the OpenShift-specific API group observability.openshift.io.
- observability.openshift.io: The API group for managing observability resources, like logging.
- resources: Specifies the resources this role can manage.
- clusterlogforwarders: Refers to the log forwarding resources in OpenShift.
- verbs: Specifies the actions allowed on the ClusterLogForwarders.
- create: Grants permission to create new ClusterLogForwarders.
- delete: Grants permission to delete existing ClusterLogForwarders.
- get: Grants permission to retrieve information about specific ClusterLogForwarders.
- list: Allows listing all ClusterLogForwarders.
- patch: Grants permission to partially modify ClusterLogForwarders.
- update: Grants permission to update existing ClusterLogForwarders.
- watch: Grants permission to monitor changes to ClusterLogForwarders.
2.4.2. Modifying log level in collector
To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to one of trace, debug, info, warn, error, or off.
Example log level annotation
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: collector
annotations:
observability.openshift.io/log-level: debug
# ...
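If the ClusterLogForwarder already exists, the annotation can also be applied with oc annotate instead of editing the YAML; the resource name collector and the namespace openshift-logging below are assumptions carried over from the example above:

```shell
# Set (or replace, via --overwrite) the log-level annotation on the collector CR.
oc annotate clusterlogforwarder collector -n openshift-logging \
  observability.openshift.io/log-level=debug --overwrite
```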
2.4.3. Managing the Operator
The managementState field of the ClusterLogForwarder resource controls whether the operator actively manages the logging components:
- Managed: (default) The operator drives the logging resources to match the desired state in the CLF spec.
- Unmanaged: The operator does not take any action related to the logging components.
This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged.
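As a minimal sketch, pausing log forwarding only requires setting this one field; the resource name and namespace are placeholders:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  managementState: Unmanaged   # set back to Managed to resume log forwarding
# ...
```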
2.4.4. Structure of the ClusterLogForwarder
The CLF has a spec section that contains the following key components:
- Inputs: Select log messages to be forwarded. Built-in input types application, infrastructure, and audit forward logs from different parts of the cluster. You can also define custom inputs.
- Outputs: Define destinations to forward logs to. Each output has a unique name and type-specific configuration.
- Pipelines: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output, and filter names.
- Filters: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline.
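A minimal sketch tying these components together might look like the following; every name, the URL, and the specific output and filter types chosen here are placeholders or assumptions for illustration, not a complete reference:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccount:
    name: <service_account_name>
  outputs:                       # destinations to forward logs to
  - name: my-output
    type: http
    http:
      url: https://log-receiver.example.com/logs
  filters:                       # transform or drop messages
  - name: my-filter
    type: detectMultilineException
  pipelines:                     # connect inputs, filters, and outputs
  - name: my-pipeline
    inputRefs:
    - application                # built-in input type
    filterRefs:
    - my-filter
    outputRefs:
    - my-output
```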
2.4.4.1. Inputs
Inputs are configured in an array under spec.inputs. There are three built-in input types:
- application: Selects logs from all application containers, excluding those in infrastructure namespaces.
- infrastructure: Selects logs from nodes and from infrastructure components running in the following namespaces: default, kube, openshift, and namespaces with the kube- or openshift- prefix.
- audit: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd.
Users can define custom inputs of type application that select application logs from a subset of namespaces or containers.
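For instance, a hypothetical custom input that narrows application logs to a single namespace might be declared like this; the names are placeholders, and the includes form mirrors the namespace and container filtering shown later in this chapter:

```yaml
spec:
  inputs:
  - name: my-app-input
    type: application
    application:
      includes:
      - namespace: "my-project"   # collect only from this namespace
```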
2.4.4.2. Outputs
Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. The available output types are:
- azureMonitor: Forwards logs to Azure Monitor.
- cloudwatch: Forwards logs to AWS CloudWatch.
- elasticsearch: Forwards logs to an external Elasticsearch instance.
- googleCloudLogging: Forwards logs to Google Cloud Logging.
- http: Forwards logs to a generic HTTP endpoint.
- kafka: Forwards logs to a Kafka broker.
- loki: Forwards logs to a Loki logging backend.
- lokistack: Forwards logs to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy.
- otlp: Forwards logs using the OpenTelemetry Protocol.
- splunk: Forwards logs to Splunk.
- syslog: Forwards logs to an external syslog server.
Each output type has its own configuration fields.
2.4.4.3. Pipelines
Pipelines are configured in an array under spec.pipelines. Each pipeline has a unique name and consists of the following fields:
- inputRefs: Names of inputs whose logs should be forwarded to this pipeline.
- outputRefs: Names of outputs to send logs to.
- filterRefs: (optional) Names of filters to apply.
The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters.
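Putting the three fields together, a pipeline entry is a named set of references to objects defined elsewhere in the spec; all names below are placeholders:

```yaml
spec:
  pipelines:
  - name: my-pipeline
    inputRefs:
    - application        # a built-in or custom input name
    filterRefs:          # optional; applied in the order listed
    - my-drop-filter
    outputRefs:
    - my-output
```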
2.4.4.4. Filters
Filters are configured in an array under spec.filters. Each filter has a unique name and a type.
Administrators can configure the following types of filters:
2.4.4.5. Enabling multi-line exception detection
Enables multi-line error detection of container logs.
Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.
Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.
Example Java exception
java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
at testjava.Main.handle(Main.java:47)
at testjava.Main.printMe(Main.java:19)
at testjava.Main.main(Main.java:10)
To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors filter under .spec.filters.
Example ClusterLogForwarder CR
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: <log_forwarder_name>
namespace: <log_forwarder_namespace>
spec:
serviceAccount:
name: <service_account_name>
filters:
- name: <name>
type: detectMultilineException
pipelines:
- inputRefs:
- <input-name>
name: <pipeline-name>
filterRefs:
- <filter-name>
outputRefs:
- <output-name>
2.4.4.5.1. Details
When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.
The collector supports the following languages:
- Java
- JS
- Ruby
- Python
- Golang
- PHP
- Dart
2.4.4.6. Configuring content filters to drop unwanted log records
When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding, and drops log records that match the specified configuration.
Procedure
Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions:
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: <filter_name>
    type: drop
    drop:
    - test:
      - field: .kubernetes.labels."foo-bar/baz"
        matches: .+
      - field: .kubernetes.pod_name
        notMatches: "my-pod"
  pipelines:
  - name: <pipeline_name>
    filterRefs: ["<filter_name>"]
# ...
- type: Specifies the type of filter. The drop filter drops log records that match the filter configuration.
- drop: Specifies configuration options for applying the drop filter.
- test: Specifies the configuration for tests that are used to evaluate whether a log record is dropped.
  - If all the conditions specified for a test are true, the test passes and the log record is dropped.
  - When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped.
  - If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false.
- field: Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied.
- matches: Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both.
- notMatches: Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both.
- pipelines: Specifies the pipeline that the drop filter is applied to.
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
Additional examples
The following additional example shows how you can configure the drop filter to only keep higher priority log records:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
serviceAccount:
name: <service_account_name>
filters:
- name: important
type: drop
drop:
- test:
- field: .message
notMatches: "(?i)critical|error"
- field: .level
matches: "info|warning"
# ...
In addition to including multiple field paths in a single test configuration, you can also include additional test configurations that are treated as OR checks. In the following example, records are dropped if any test configuration evaluates to true:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
serviceAccount:
name: <service_account_name>
filters:
- name: important
type: drop
drop:
- test:
- field: .kubernetes.namespace_name
matches: "^open"
- test:
- field: .log_type
matches: "application"
- field: .kubernetes.pod_name
notMatches: "my-pod"
# ...
2.4.4.7. Overview of API audit filter
OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field:
- None: The event is dropped.
- Metadata: Audit metadata is included, request and response bodies are removed.
- Request: Audit metadata and the request body are included, the response body is removed.
- RequestResponse: All data is included: metadata, request body, and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster.
The ClusterLogForwarder audit policy provides the following additional features compared to the standard Kubernetes audit policy:
- Wildcards: Names of users, groups, namespaces, and resources can have a leading or trailing * character. For example, the namespace openshift-* matches openshift-apiserver or openshift-authentication. The resource */status matches Pod/status or Deployment/status.
- Default Rules: Events that do not match any rule in the policy are filtered as follows:
  - Read-only system events such as get, list, and watch are dropped.
  - Service account write events that occur within the same namespace as the service account are dropped.
  - All other events are forwarded, subject to any configured rate limits.
  To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule.
- Omit Response Codes: You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], no status codes are omitted.
The ClusterLogForwarder audit filter acts in addition to the standard OpenShift Container Platform audit log policy; it changes only what the ClusterLogForwarder forwards. You must have the collect-audit-logs cluster role bound to your service account to collect the audit logs.
Example audit policy
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: <log_forwarder_name>
namespace: <log_forwarder_namespace>
spec:
serviceAccount:
name: <service_account_name>
pipelines:
- name: my-pipeline
inputRefs: audit
filterRefs: my-policy
filters:
- name: my-policy
type: kubeAPIAudit
kubeAPIAudit:
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
- "RequestReceived"
rules:
# Log pod changes at RequestResponse level
- level: RequestResponse
resources:
- group: ""
resources: ["pods"]
# Log "pods/log", "pods/status" at Metadata level
- level: Metadata
resources:
- group: ""
resources: ["pods/log", "pods/status"]
# Don't log requests to a configmap called "controller-leader"
- level: None
resources:
- group: ""
resources: ["configmaps"]
resourceNames: ["controller-leader"]
# Don't log watch requests by the "system:kube-proxy" on endpoints or services
- level: None
users: ["system:kube-proxy"]
verbs: ["watch"]
resources:
- group: "" # core API group
resources: ["endpoints", "services"]
# Don't log authenticated requests to certain non-resource URL paths.
- level: None
userGroups: ["system:authenticated"]
nonResourceURLs:
- "/api*" # Wildcard matching.
- "/version"
# Log the request body of configmap changes in kube-system.
- level: Request
resources:
- group: "" # core API group
resources: ["configmaps"]
# This rule only applies to resources in the "kube-system" namespace.
# The empty string "" can be used to select non-namespaced resources.
namespaces: ["kube-system"]
# Log configmap and secret changes in all other namespaces at the Metadata level.
- level: Metadata
resources:
- group: "" # core API group
resources: ["secrets", "configmaps"]
# Log all other resources in core and extensions at the Request level.
- level: Request
resources:
- group: "" # core API group
- group: "extensions" # Version of group should NOT be included.
# A catch-all rule to log all other requests at the Metadata level.
- level: Metadata
2.4.4.8. Filtering application logs at input by including the label expressions or a matching label key and values
You can include application logs based on label expressions or a matching label key and its values by using the input selector.
Procedure
Add a configuration for a filter to the input spec in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values:
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs
    application:
      selector:
        matchExpressions:
        - key: env
          operator: In
          values: ["prod", "qa"]
        - key: zone
          operator: NotIn
          values: ["east", "west"]
        matchLabels:
          app: one
          name: app1
    type: application
# ...
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
2.4.4.9. Configuring content filters to prune log records
When the prune filter is configured, the log collector prunes log records by removing configured fields before forwarding.
Procedure
Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths:
Important: If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array.
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: <filter_name>
    type: prune
    prune:
      in: [.kubernetes.annotations, .kubernetes.namespace_id]
      notIn: [.kubernetes,.log_type,.message,."@timestamp"]
  pipelines:
  - name: <pipeline_name>
    filterRefs: ["<filter_name>"]
# ...
- Specify the type of filter. The
prunefilter prunes log records by configured fields. - 2
- Specify configuration options for applying the
prunefilter. TheinandnotInfields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example,.kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example,.kubernetes.labels."foo.bar-bar/baz". - 3
- Optional: Any fields that are specified in this array are removed from the log record.
- 4
- Optional: Any fields that are not specified in this array are removed from the log record.
- 5
- Specify the pipeline that the
prunefilter is applied to.
Note: The filters exempt the .log_type, .log_source, and .message fields.
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
2.4.5. Filtering the audit and infrastructure log inputs by source
You can define the list of audit and infrastructure sources to collect logs from by using the input selector.
Procedure
Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources:
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs1
    type: infrastructure
    infrastructure:
      sources:
      - node
  - name: mylogs2
    type: audit
    audit:
      sources:
      - kubeAPI
      - openshiftAPI
      - ovn
# ...
- sources (infrastructure): Specifies the list of infrastructure sources to collect. The valid sources include:
  - node: Journal log from the node
  - container: Logs from the workloads deployed in the namespaces
- sources (audit): Specifies the list of audit sources to collect. The valid sources include:
  - kubeAPI: Logs from the Kubernetes API servers
  - openshiftAPI: Logs from the OpenShift API servers
  - auditd: Logs from a node auditd service
  - ovn: Logs from an open virtual network service
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
2.4.6. Filtering application logs at input by including or excluding the namespace or container name
You can include or exclude application logs based on the namespace and container name by using the input selector.
Procedure
Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names:
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs
    application:
      includes:
      - namespace: "my-project"
        container: "my-container"
      excludes:
      - container: "other-container*"
        namespace: "other-namespace"
    type: application
# ...
Note: The excludes field takes precedence over the includes field.
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
2.5. Storing logs with LokiStack
You can configure a LokiStack custom resource (CR) to store application, audit, and infrastructure-related logs.
2.5.1. Prerequisites
- You have installed the Loki Operator by using the CLI or web console.
- You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder.
- The serviceAccount is assigned the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles.
2.5.1.1. Core Setup and Configuration
Role-based access controls, basic monitoring, and pod placement to deploy Loki.
2.5.2. Authorizing LokiStack rules RBAC permissions
Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain the necessary role-based access control (RBAC) permissions for users.
The following cluster roles for alerting and recording rules are available for LokiStack:
| Rule name | Description |
|---|---|
| alertingrules.loki.grafana.com-v1-admin | Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch AlertingRule resources within the loki.grafana.com/v1 API group. |
| alertingrules.loki.grafana.com-v1-crdview | Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to AlertingRule resources, but do not have permissions for modifying or managing these resources. |
| alertingrules.loki.grafana.com-v1-edit | Users with this role have permission to create, update, and delete AlertingRule resources. |
| alertingrules.loki.grafana.com-v1-view | Users with this role can read AlertingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing alerting rules but cannot modify them. |
| recordingrules.loki.grafana.com-v1-admin | Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch RecordingRule resources within the loki.grafana.com/v1 API group. |
| recordingrules.loki.grafana.com-v1-crdview | Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to RecordingRule resources, but do not have permissions for modifying or managing these resources. |
| recordingrules.loki.grafana.com-v1-edit | Users with this role have permission to create, update, and delete RecordingRule resources. |
| recordingrules.loki.grafana.com-v1-view | Users with this role can read RecordingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing recording rules but cannot modify them. |
2.5.2.1. Examples
To apply cluster roles for a user, you must bind an existing cluster role to a specific username.
Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding is used, as with the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding is used, as with the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster.
The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster:
Example cluster role binding command for alerting rule CRUD permissions in a specific namespace
$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>
The following command gives the specified user administrator permissions for alerting rules in all namespaces:
Example cluster role binding command for administrator permissions
$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>
2.5.3. Creating a log-based alerting rule with Loki
The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. The webhook validation definition provides support for the following rule validation conditions:
- If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule.
- If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule.
- If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule.
- If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule.
- If none of the above applies, an alerting rule is considered valid.
| Tenant type | Valid namespaces for AlertingRule CRs |
|---|---|
| application | <your_application_namespace> |
| audit | openshift-logging |
| infrastructure | openshift-*, kube-*, default |
Procedure
Create an AlertingRule custom resource (CR):
Example infrastructure AlertingRule CR
apiVersion: loki.grafana.com/v1
kind: AlertingRule
metadata:
  name: loki-operator-alerts
  namespace: openshift-operators-redhat
  labels:
    openshift.io/<label_name>: "true"
spec:
  tenantID: "infrastructure"
  groups:
  - name: LokiOperatorHighReconciliationError
    rules:
    - alert: HighPercentageError
      expr: |
        sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job)
          /
        sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job)
          > 0.01
      for: 10s
      labels:
        severity: critical
      annotations:
        summary: High Loki Operator Reconciliation Errors
        description: High Loki Operator Reconciliation Errors
- The namespace where this
AlertingRuleCR is created must have a label matching the LokiStackspec.rules.namespaceSelectordefinition. - 2
- The
labelsblock must match the LokiStackspec.rules.selectordefinition. - 3
AlertingRuleCRs forinfrastructuretenants are only supported in theopenshift-*,kube-\*, ordefaultnamespaces.- 4
- The value for
kubernetes_namespace_name:must match the value formetadata.namespace. - 5
- The value of this mandatory field must be
critical,warning, orinfo. - 6
- This field is mandatory.
- 7
- This field is mandatory.
Example application AlertingRule CR
apiVersion: loki.grafana.com/v1
kind: AlertingRule
metadata:
  name: app-user-workload
  namespace: app-ns
  labels:
    openshift.io/<label_name>: "true"
spec:
  tenantID: "application"
  groups:
  - name: AppUserWorkloadHighError
    rules:
    - alert:
      expr: |
        sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job)
      for: 10s
      labels:
        severity: critical
      annotations:
        summary:
        description:
- The namespace where this
AlertingRuleCR is created must have a label matching the LokiStackspec.rules.namespaceSelectordefinition. - 2
- The
labelsblock must match the LokiStackspec.rules.selectordefinition. - 3
- Value for
kubernetes_namespace_name:must match the value formetadata.namespace. - 4
- The value of this mandatory field must be
critical,warning, orinfo. - 5
- The value of this mandatory field is a summary of the rule.
- 6
- The value of this mandatory field is a detailed description of the rule.
Apply the AlertingRule CR:
$ oc apply -f <filename>.yaml
2.5.4. Configuring Loki to tolerate memberlist creation failure
In an OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks.
As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command:
$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}'
Example LokiStack to include podIP
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki
namespace: openshift-logging
spec:
# ...
hashRing:
type: memberlist
memberlist:
instanceAddrType: podIP
# ...
2.5.5. Enabling stream-based retention with Loki
You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules.
If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage.
Schema v13 is recommended.
Procedure
Create a LokiStack CR, enabling stream-based retention globally as shown in the following example:
Example global stream-based retention for AWS
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global:
      retention:
        days: 20
        streams:
        - days: 4
          priority: 1
          selector: '{kubernetes_namespace_name=~"test.+"}'
        - days: 1
          priority: 1
          selector: '{log_type="infrastructure"}'
  managementState: Managed
  replicationFactor: 1
  size: 1x.small
  storage:
    schemas:
    - effectiveDate: "2020-10-11"
      version: v13
    secret:
      name: logging-loki-s3
      type: aws
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
- Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage.
- 2
- Retention is enabled in the cluster when this block is added to the CR.
- 3
- Contains the LogQL query used to define the log stream.spec: limits:
Enable stream-based retention on a per-tenant basis as shown in the following example:
Example per-tenant stream-based retention for AWS
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global:
      retention:
        days: 20
    tenants: # 1
      application:
        retention:
          days: 1
          streams:
          - days: 4
            selector: '{kubernetes_namespace_name=~"test.+"}' # 2
      infrastructure:
        retention:
          days: 5
          streams:
          - days: 1
            selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}'
  managementState: Managed
  replicationFactor: 1
  size: 1x.small
  storage:
    schemas:
    - effectiveDate: "2020-10-11"
      version: v13
    secret:
      name: logging-loki-s3
      type: aws
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging

1. Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure.
2. Contains the LogQL query used to define the log stream.
Apply the LokiStack CR:

$ oc apply -f <filename>.yaml

Note: This is not for managing the retention for stored logs. Global retention periods for stored logs, up to a supported maximum of 30 days, are configured with your object storage.
2.5.6. Loki pod placement
You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods.
You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node.
Example LokiStack with node selectors
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki
namespace: openshift-logging
spec:
# ...
template:
compactor:
nodeSelector:
node-role.kubernetes.io/infra: ""
distributor:
nodeSelector:
node-role.kubernetes.io/infra: ""
gateway:
nodeSelector:
node-role.kubernetes.io/infra: ""
indexGateway:
nodeSelector:
node-role.kubernetes.io/infra: ""
ingester:
nodeSelector:
node-role.kubernetes.io/infra: ""
querier:
nodeSelector:
node-role.kubernetes.io/infra: ""
queryFrontend:
nodeSelector:
node-role.kubernetes.io/infra: ""
ruler:
nodeSelector:
node-role.kubernetes.io/infra: ""
# ...
Example LokiStack CR with node selectors and tolerations
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki
namespace: openshift-logging
spec:
# ...
template:
compactor:
nodeSelector:
node-role.kubernetes.io/infra: ""
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/infra
value: reserved
- effect: NoExecute
key: node-role.kubernetes.io/infra
value: reserved
distributor:
nodeSelector:
node-role.kubernetes.io/infra: ""
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/infra
value: reserved
- effect: NoExecute
key: node-role.kubernetes.io/infra
value: reserved
indexGateway:
nodeSelector:
node-role.kubernetes.io/infra: ""
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/infra
value: reserved
- effect: NoExecute
key: node-role.kubernetes.io/infra
value: reserved
ingester:
nodeSelector:
node-role.kubernetes.io/infra: ""
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/infra
value: reserved
- effect: NoExecute
key: node-role.kubernetes.io/infra
value: reserved
querier:
nodeSelector:
node-role.kubernetes.io/infra: ""
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/infra
value: reserved
- effect: NoExecute
key: node-role.kubernetes.io/infra
value: reserved
queryFrontend:
nodeSelector:
node-role.kubernetes.io/infra: ""
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/infra
value: reserved
- effect: NoExecute
key: node-role.kubernetes.io/infra
value: reserved
ruler:
nodeSelector:
node-role.kubernetes.io/infra: ""
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/infra
value: reserved
- effect: NoExecute
key: node-role.kubernetes.io/infra
value: reserved
gateway:
nodeSelector:
node-role.kubernetes.io/infra: ""
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/infra
value: reserved
- effect: NoExecute
key: node-role.kubernetes.io/infra
value: reserved
# ...
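The scheduling effect of the taints and tolerations in the example above can be modeled as follows. This is a simplified sketch of the Kubernetes matching rules (equality matching only), not the scheduler itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Taint:
    key: str
    value: str
    effect: str  # "NoSchedule" or "NoExecute"

@dataclass(frozen=True)
class Toleration:
    key: str
    value: str
    effect: str

def tolerates(taints, tolerations):
    """True only if every taint on the node is matched by some pod toleration."""
    return all(
        any(t.key == tol.key and t.value == tol.value and t.effect == tol.effect
            for tol in tolerations)
        for t in taints
    )

# The infra node carries both taints from the example above.
node_taints = [Taint("node-role.kubernetes.io/infra", "reserved", "NoSchedule"),
               Taint("node-role.kubernetes.io/infra", "reserved", "NoExecute")]
loki_tolerations = [Toleration("node-role.kubernetes.io/infra", "reserved", "NoSchedule"),
                    Toleration("node-role.kubernetes.io/infra", "reserved", "NoExecute")]

print(tolerates(node_taints, loki_tolerations))  # True: Loki pods can schedule
print(tolerates(node_taints, []))                # False: untolerating workloads are repelled
```

This is why the combination works: the nodeSelector pulls the Loki pods onto the infra nodes, while the taints push every other workload away.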
To configure the nodeSelector and tolerations fields of the LokiStack custom resource (CR), you can use the oc explain command to view the description and fields for a particular resource:
$ oc explain lokistack.spec.template
Example output
KIND: LokiStack
VERSION: loki.grafana.com/v1
RESOURCE: template <Object>
DESCRIPTION:
Template defines the resource/limits/tolerations/nodeselectors per
component
FIELDS:
compactor <Object>
Compactor defines the compaction component spec.
distributor <Object>
Distributor defines the distributor component spec.
...
For more detailed information, you can add a specific field:
$ oc explain lokistack.spec.template.compactor
Example output
KIND: LokiStack
VERSION: loki.grafana.com/v1
RESOURCE: compactor <Object>
DESCRIPTION:
Compactor defines the compaction component spec.
FIELDS:
nodeSelector <map[string]string>
NodeSelector defines the labels required by a node to schedule the
component onto it.
...
2.5.6.1. Enhanced Reliability and Performance
Configurations to ensure Loki’s reliability and efficiency in production.
2.5.7. Enabling authentication to cloud-based log stores using short-lived tokens
Workload identity federation enables authentication to cloud-based log stores using short-lived tokens.
Procedure
Use one of the following options to enable authentication:
- If you use the OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret.
- If you use the OpenShift CLI (oc) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated.

Example Azure Subscription

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat
spec:
  channel: "stable-6.0"
  installPlanApproval: Manual
  name: loki-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
      - name: CLIENTID
        value: <your_client_id>
      - name: TENANTID
        value: <your_tenant_id>
      - name: SUBSCRIPTIONID
        value: <your_subscription_id>
      - name: REGION
        value: <your_region>

Example AWS Subscription

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat
spec:
  channel: "stable-6.0"
  installPlanApproval: Manual
  name: loki-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
      - name: ROLEARN
        value: <role_ARN>
2.5.8. Configuring Loki to tolerate node failure
The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster.
Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node.
In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods.
The Operator sets default, preferred podAntiAffinity rules for all Loki components, which include the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components.
You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field:
Example user settings for the ingester component
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki
namespace: openshift-logging
spec:
# ...
template:
ingester:
podAntiAffinity:
# ...
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app.kubernetes.io/component: ingester
topologyKey: kubernetes.io/hostname
# ...
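The required rule above reads as: no two pods labeled app.kubernetes.io/component: ingester may run on nodes that share the same kubernetes.io/hostname value. The check the scheduler enforces can be sketched like this (illustrative only):

```python
from collections import Counter

def violates_required_anti_affinity(placements, component):
    """placements: list of (pod_labels, topology_value) pairs, where
    topology_value is the node's kubernetes.io/hostname label.
    Returns True if two pods of the component share a topology domain."""
    hosts = [node for labels, node in placements
             if labels.get("app.kubernetes.io/component") == component]
    return any(count > 1 for count in Counter(hosts).values())

placements = [({"app.kubernetes.io/component": "ingester"}, "node-a"),
              ({"app.kubernetes.io/component": "ingester"}, "node-b"),
              ({"app.kubernetes.io/component": "querier"}, "node-a")]
print(violates_required_anti_affinity(placements, "ingester"))  # False: spread across nodes

# Placing a second ingester on node-a would violate the required rule,
# so the scheduler leaves that pod Pending instead.
placements.append(({"app.kubernetes.io/component": "ingester"}, "node-a"))
print(violates_required_anti_affinity(placements, "ingester"))  # True
```

Because the rule is required rather than preferred, a cluster with fewer schedulable nodes than ingester replicas leaves the surplus replicas Pending.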
2.5.9. LokiStack behavior during cluster restarts
When an OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions.
2.5.9.1. Advanced Deployment and Scalability
Specialized configurations for high availability, scalability, and error handling.
2.5.10. Zone aware data replication
The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2.
To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation.
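The zone requirement above can be expressed as a simple check (an illustrative model, not Operator code): a write succeeds only if each replica can land in a distinct zone that has a running instance.

```python
def can_replicate_across_zones(zone_instance_counts, replication_factor):
    """Each replica of a stream must land in a distinct zone with at least
    one running instance; fewer usable zones than the factor means writes fail."""
    zones_with_capacity = sum(1 for n in zone_instance_counts.values() if n > 0)
    return zones_with_capacity >= replication_factor

# Two healthy zones satisfy replication.factor: 2 ...
print(can_replicate_across_zones({"us-east-1a": 2, "us-east-1b": 2}, 2))  # True
# ... but if one zone loses all instances, factor-2 writes start failing.
print(can_replicate_across_zones({"us-east-1a": 2, "us-east-1b": 0}, 2))  # False
```

Zone names here are hypothetical examples. Having more zones than the factor adds headroom; equal instance counts per zone keep the spread constraint satisfiable.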
Example LokiStack CR with zone replication enabled
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki
namespace: openshift-logging
spec:
  replicationFactor: 2 # 1
  replication:
    factor: 2 # 2
    zones:
    - maxSkew: 1 # 3
      topologyKey: topology.kubernetes.io/zone # 4
1. The replicationFactor field is deprecated; values entered are overwritten by replication.factor.
2. The replication.factor value is automatically set when the deployment size is selected at setup.
3. maxSkew: the maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0.
4. topologyKey: defines zones in the form of a topology key that corresponds to a node label.
2.5.11. Recovering Loki pods from failed zones
In OpenShift Container Platform, a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss.
Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone.

The following procedure deletes the PVCs in the failed zone, and all data contained therein. To avoid complete data loss, the replication factor field of the LokiStack CR should always be set to a value greater than 1, so that Loki replicates the data to other zones.
Prerequisites
- Verify your LokiStack CR has a replication factor greater than 1.
- Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration.
The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone.
Procedure
List the pods in Pending status by running the following command:

$ oc get pods --field-selector status.phase==Pending -n openshift-logging

Example oc get pods output

NAME                           READY   STATUS    RESTARTS   AGE
logging-loki-index-gateway-1   0/1     Pending   0          17m
logging-loki-ingester-1        0/1     Pending   0          16m
logging-loki-ruler-1           0/1     Pending   0          16m

These pods are in Pending status because their corresponding PVCs are in the failed zone.
List the PVCs in Pending status by running the following command:

$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r

Example oc get pvc output

storage-logging-loki-index-gateway-1
storage-logging-loki-ingester-1
wal-logging-loki-ingester-1
storage-logging-loki-ruler-1
wal-logging-loki-ruler-1

Delete the PVC(s) for a pod by running the following command:
$ oc delete pvc <pvc_name> -n openshift-logging

Delete the pod(s) by running the following command:

$ oc delete pod <pod_name> -n openshift-logging

Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone.
2.5.11.1. Troubleshooting PVC in a terminating state
The PVCs might hang in the terminating state without being deleted if the PVC metadata finalizers are set to kubernetes.io/pv-protection. Remove the finalizer for each PVC by running the following command, then retry deletion.
$ oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-logging
2.5.12. Troubleshooting Loki rate limit errors
If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors.
These errors can occur during normal operation. For example, when adding logging to a cluster that already has some logs, rate limit errors might occur while logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention.
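The recovery behavior above (the backlog drains on its own whenever the new-log rate stays below the limit) can be sketched numerically; the figures here are illustrative, not measurements:

```python
def seconds_to_drain(backlog_mb, new_logs_mb_per_s, limit_mb_per_s):
    """Time until a historical backlog is ingested using spare capacity.
    Returns None if the new-log rate meets or exceeds the ingestion limit,
    in which case the backlog never drains and 429 errors persist."""
    spare = limit_mb_per_s - new_logs_mb_per_s
    if spare <= 0:
        return None  # no spare capacity: raise the limit instead
    return backlog_mb / spare

# 600 MB of pre-existing logs, 2 MB/s of new logs, 4 MB/s ingestion limit:
print(seconds_to_drain(600, 2, 4))  # 300.0 seconds until rate-limit errors stop
# New logs arriving at the limit: the backlog never drains without intervention.
print(seconds_to_drain(600, 4, 4))  # None
```

The None case corresponds to the situation addressed by the rest of this section, where the LokiStack limits themselves must be raised.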
In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR).

The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers.

Conditions
- The Log Forwarder API is configured to forward logs to Loki.
Your system sends a block of messages that is larger than 2 MB to Loki. For example:
"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]}After you enter
, the collector logs in your cluster show a line containing one of the following error messages:oc logs -n openshift-logging -l component=collector429 Too Many Requests Ingestion rate limit exceededExample Vector error message
2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=trueExample Fluentd error message
2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n"The error is also visible on the receiving end. For example, in the LokiStack ingester pod:
Example Loki ingester error message
level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream
Procedure
Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global:
      ingestion:
        ingestionBurstSize: 16 # 1
        ingestionRate: 8 # 2
# ...

1. The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted.
2. The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention.
2.6. Visualization for logging
Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation.
Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin (https://docs.redhat.com/en/documentation/red_hat_openshift_cluster_observability_operator/1-latest/html/ui_plugins_for_red_hat_openshift_cluster_observability_operator/logging-ui-plugin#coo-logging-ui-plugin-install_logging-ui-plugin) on OpenShift Container Platform 4.14 or later. This support exception is temporary, as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA.