Configuring logging


Red Hat OpenShift Logging 6.3

Configuring log forwarding and LokiStack

Red Hat OpenShift Documentation Team

Abstract

This document provides an overview of configuring OpenShift logging features including log forwarding and LokiStack.

Chapter 1. Configuring log forwarding

The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs.

Key Functions of the ClusterLogForwarder

  • Selects log messages using inputs
  • Forwards logs to external destinations using outputs
  • Filters, transforms, and drops log messages using filters
  • Defines log forwarding pipelines connecting inputs, filters and outputs

1.1. Setting up log collection

This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource.

The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.

Set up log collection by binding the required cluster roles to your service account.

1.1.1. Legacy service accounts

To use the existing legacy service account logcollector, create the following ClusterRoleBinding:

$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector

Additionally, create the following ClusterRoleBinding if collecting audit logs:

$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector

1.1.2. Creating service accounts

Prerequisites

  • The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace.
  • You have administrator permissions.

Procedure

  1. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. A combined example is shown after this procedure.
  2. Bind the appropriate cluster roles to the service account:

    Example binding command

    $ oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>
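
A combined example of steps 1 and 2, assuming a service account named collector in the openshift-logging namespace (both names are illustrative):

$ oc create serviceaccount collector -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:collector
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:collector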

1.1.2.1. Cluster role binding

The role_binding.yaml file binds the Cluster Logging Operator's ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: manager-rolebinding
roleRef:                                           1
  apiGroup: rbac.authorization.k8s.io              2
  kind: ClusterRole                                3
  name: cluster-logging-operator                   4
subjects:                                          5
  - kind: ServiceAccount                           6
    name: cluster-logging-operator                 7
    namespace: openshift-logging                   8
1 roleRef: References the ClusterRole to which the binding applies.
2 apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of the Kubernetes RBAC system.
3 kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide.
4 name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator.
5 subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole.
6 kind: Specifies that the subject is a ServiceAccount.
7 name: The name of the ServiceAccount being granted the permissions.
8 namespace: Indicates the namespace where the ServiceAccount is located.

1.1.2.2. Writing application logs

The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-application-logs
rules:                                              1
  - apiGroups:                                      2
      - loki.grafana.com                            3
    resources:                                      4
      - application                                 5
    resourceNames:                                  6
      - logs                                        7
    verbs:                                          8
      - create                                      9
1 rules: Specifies the permissions granted by this ClusterRole.
2 apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system.
3 loki.grafana.com: The API group for managing Loki-related resources.
4 resources: The resource type that the ClusterRole grants permission to interact with.
5 application: Refers to the application resources within the Loki logging system.
6 resourceNames: Specifies the names of resources that this role can manage.
7 logs: Refers to the log resources that can be created.
8 verbs: The actions allowed on the resources.
9 create: Grants permission to create new logs in the Loki system.

1.1.2.3. Writing audit logs

The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-audit-logs
rules:                                              1
  - apiGroups:                                      2
      - loki.grafana.com                            3
    resources:                                      4
      - audit                                       5
    resourceNames:                                  6
      - logs                                        7
    verbs:                                          8
      - create                                      9
1 rules: Defines the permissions granted by this ClusterRole.
2 apiGroups: Specifies the API group loki.grafana.com.
3 loki.grafana.com: The API group responsible for Loki logging resources.
4 resources: Refers to the resource type this role manages, in this case, audit.
5 audit: Specifies that the role manages audit logs within Loki.
6 resourceNames: Defines the specific resources that the role can access.
7 logs: Refers to the logs that can be managed under this role.
8 verbs: The actions allowed on the resources.
9 create: Grants permission to create new audit logs.

1.1.2.4. Writing infrastructure logs

The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system.

Sample YAML

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-infrastructure-logs
rules:                                              1
  - apiGroups:                                      2
      - loki.grafana.com                            3
    resources:                                      4
      - infrastructure                              5
    resourceNames:                                  6
      - logs                                        7
    verbs:                                          8
      - create                                      9

1 rules: Specifies the permissions this ClusterRole grants.
2 apiGroups: Specifies the API group for Loki-related resources.
3 loki.grafana.com: The API group managing the Loki logging system.
4 resources: Defines the resource type that this role can interact with.
5 infrastructure: Refers to infrastructure-related resources that this role manages.
6 resourceNames: Specifies the names of resources this role can manage.
7 logs: Refers to the log resources related to infrastructure.
8 verbs: The actions permitted by this role.
9 create: Grants permission to create infrastructure logs in the Loki system.

1.1.2.5. ClusterLogForwarder editor role

The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterlogforwarder-editor-role
rules:                                              1
  - apiGroups:                                      2
      - observability.openshift.io                  3
    resources:                                      4
      - clusterlogforwarders                        5
    verbs:                                          6
      - create                                      7
      - delete                                      8
      - get                                         9
      - list                                        10
      - patch                                       11
      - update                                      12
      - watch                                       13
1 rules: Specifies the permissions this ClusterRole grants.
2 apiGroups: Refers to the OpenShift-specific API group.
3 observability.openshift.io: The API group for managing observability resources, such as logging.
4 resources: Specifies the resources this role can manage.
5 clusterlogforwarders: Refers to the log forwarding resources in OpenShift.
6 verbs: Specifies the actions allowed on the ClusterLogForwarders.
7 create: Grants permission to create new ClusterLogForwarders.
8 delete: Grants permission to delete existing ClusterLogForwarders.
9 get: Grants permission to retrieve information about specific ClusterLogForwarders.
10 list: Allows listing all ClusterLogForwarders.
11 patch: Grants permission to partially modify ClusterLogForwarders.
12 update: Grants permission to update existing ClusterLogForwarders.
13 watch: Grants permission to monitor changes to ClusterLogForwarders.

1.2. Modifying log level in collector

To modify the log level in the collector, set the observability.openshift.io/log-level annotation to one of trace, debug, info, warn, error, or off.

Example log level annotation

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  annotations:
    observability.openshift.io/log-level: debug
# ...

1.3. Managing the Operator

The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged:

Managed
(default) The operator will drive the logging resources to match the desired state in the CLF spec.
Unmanaged
The operator will not take any action related to the logging components.

This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged.
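
For example, the following minimal snippet (the resource name is illustrative) pauses log forwarding by switching the field to Unmanaged:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  managementState: Unmanaged
# ...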

1.4. Structure of the ClusterLogForwarder

The CLF has a spec section that contains the following key components:

Inputs
Select log messages to be forwarded. Built-in input types application, infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs.
Outputs
Define destinations to forward logs to. Each output has a unique name and type-specific configuration.
Pipelines
Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names.
Filters
Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline.

1.4.1. Inputs

Inputs are configured in an array under spec.inputs. There are three built-in input types:

application
Selects logs from all application containers, excluding those in infrastructure namespaces.
infrastructure

Selects logs from nodes and from infrastructure components running in the following namespaces:

  • default
  • kube
  • openshift
  • Namespaces that begin with the kube- or openshift- prefix
audit
Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, OVN audit logs, and node audit logs from auditd.

Users can define custom inputs of type application that select logs from specific namespaces or using pod labels.
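
For example, a custom input that selects application logs from one namespace and one pod label might look like the following sketch (the input name, namespace, and label are illustrative):

spec:
  inputs:
    - name: my-app-input
      type: application
      application:
        includes:
          - namespace: my-project
        selector:
          matchLabels:
            app: my-app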

1.4.2. Outputs

Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are:

azureMonitor
Forwards logs to Azure Monitor.
cloudwatch
Forwards logs to AWS CloudWatch.
elasticsearch
Forwards logs to an external Elasticsearch instance.
googleCloudLogging
Forwards logs to Google Cloud Logging.
http
Forwards logs to a generic HTTP endpoint.
kafka
Forwards logs to a Kafka broker.
loki
Forwards logs to a Loki logging backend.
lokistack
Forwards logs to LokiStack, the logging-supported combination of Loki and a web proxy with OpenShift Container Platform authentication integration. The LokiStack proxy uses OpenShift Container Platform authentication to enforce multi-tenancy.
otlp
Forwards logs using the OpenTelemetry Protocol.
splunk
Forwards logs to Splunk.
syslog
Forwards logs to an external syslog server.

Each output type has its own configuration fields.
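
As a minimal sketch, a Loki output might look like the following (the output name and URL are illustrative); each type takes its configuration under a field named after that type, as in the examples later in this document:

spec:
  outputs:
    - name: my-loki
      type: loki
      loki:
        url: https://loki.example.com:3100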

1.4.3. Configuring OTLP output

Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding.

Important

The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Procedure

  • Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      annotations:
        observability.openshift.io/tech-preview-otlp-output: "enabled" 1
      name: clf-otlp
    spec:
      serviceAccount:
        name: <service_account_name>
      outputs:
      - name: otlp
        type: otlp
        otlp:
          tuning:
            compression: gzip
            deliveryMode: AtLeastOnce
            maxRetryDuration: 20
            maxWrite: 10M
            minRetryDuration: 5
          url: <otlp_url> 2
      pipelines:
      - inputRefs:
        - application
        - infrastructure
        - audit
        name: otlp-logs
        outputRefs:
        - otlp

    1 Use this annotation to enable the OpenTelemetry Protocol (OTLP) output, which is a Technology Preview feature.
    2 This URL must be absolute and is a placeholder for the OTLP endpoint where logs are sent.
Note

The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to OTLP and to the OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework.

1.4.4. Pipelines

Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of:

inputRefs
Names of inputs whose logs should be forwarded to this pipeline.
outputRefs
Names of outputs to send logs to.
filterRefs
(optional) Names of filters to apply.

The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters.
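
For example, a pipeline that applies two filters in order before sending application logs to a single output might look like the following sketch (the filter and output names are illustrative):

spec:
  pipelines:
    - name: my-pipeline
      inputRefs:
        - application
      filterRefs:
        - drop-filter         # applied first
        - my-project-labels   # applied second
      outputRefs:
        - external-es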

1.4.5. Filters

Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them.
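
As a sketch, a drop filter that discards records whose level field matches debug might look like the following (the filter name and field are illustrative; the drop filter is covered in detail later in this document):

spec:
  filters:
    - name: drop-debug
      type: drop
      drop:
        - test:
          - field: .level
            matches: debug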

1.5. About forwarding logs to third-party systems

To send logs to specific endpoints inside and outside your OpenShift Container Platform cluster, you specify a combination of outputs and pipelines in a ClusterLogForwarder custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes Secret object.

pipeline

Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:

  • application. Container logs generated by user applications running in the cluster, except infrastructure container applications.
  • infrastructure. Container logs from pods that run in the openshift*, kube*, or default projects, and journal logs sourced from the node file system.
  • audit. Audit logs generated by the node audit system, auditd, Kubernetes API server, OpenShift API server, and OVN network.

You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message.

input

Forwards the application logs associated with a specific project to a pipeline.

In the pipeline, you define which log types to forward using an inputRef parameter and where to forward the logs to using an outputRef parameter.

Secret
A key:value map that contains confidential data such as user credentials.

Note the following:

  • If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types, but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped.
  • You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols.

The following example forwards application and infrastructure logs to a secure external Elasticsearch instance.

Sample log forwarding outputs and pipelines

kind: ClusterLogForwarder
apiVersion: observability.openshift.io/v1
metadata:
  name: instance
  namespace: openshift-logging
spec:
  serviceAccount:
    name: logging-admin
  outputs:
    - name: external-es
      type: elasticsearch
      elasticsearch:
        url: 'https://example-elasticsearch-secure.com:9200'
        version: 8  1
        index: '{.log_type||"undefined"}'  2
        authentication:
          username:
            key: username
            secretName: es-secret  3
          password:
            key: password
            secretName: es-secret  4
      tls:
        ca:                        5
          key: ca-bundle.crt
          secretName: es-secret
        certificate:
          key: tls.crt
          secretName: es-secret
        key:
          key: tls.key
          secretName: es-secret
  pipelines:
    - name: my-logs
      inputRefs:
        - application
        - infrastructure
      outputRefs:
        - external-es

1 Forwarding to an external Elasticsearch of version 8.x or greater requires the version field to be specified.
2 index is set to read the field value .log_type and falls back to "undefined" if the field is not found.
3 4 Use username and password to authenticate to the server.
5 Enable mutual Transport Layer Security (mTLS) between the collector and Elasticsearch. The spec identifies the keys and the secret that holds the respective certificates.

Supported Authorization Keys

Common key types are provided here. Some output types support additional specialized keys, documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. OpenShift Logging does not attempt to verify a mismatch between authorization combinations. A sample secret that combines several of these keys is sketched after the following lists.

Transport Layer Security (TLS)

Using a TLS URL (https://... or ssl://...) without a secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a secret and setting the following optional fields:

  • passphrase: (string) Passphrase to decode an encoded TLS private key. Requires tls.key.
  • ca-bundle.crt: (string) File name of a customer CA for server authentication.

Username and Password

  • username: (string) Authentication user name. Requires password.
  • password: (string) Authentication password. Requires username.

Simple Authentication Security Layer (SASL)

  • sasl.enable: (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other sasl. keys are set.
  • sasl.mechanisms: (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used.
  • sasl.allow-insecure: (boolean) Allow mechanisms that send clear-text passwords. Defaults to false.
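
As a sketch, a secret that combines username and password authentication with a custom CA bundle might look like the following (all values are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: es-secret
  namespace: openshift-logging
type: Opaque
stringData:
  username: <your_username>
  password: <your_password>
  ca-bundle.crt: <your_ca_bundle_pem>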

1.5.1. Creating a Secret

You can create a secret in the directory that contains your certificate and key files by using the following command:

$ oc create secret generic -n <namespace> <secret_name> \
  --from-file=ca-bundle.crt=<your_bundle_file> \
  --from-literal=username=<your_username> \
  --from-literal=password=<your_password>
Note

Generic or opaque secrets are recommended for best results.

1.6. Creating a log forwarder

To create a log forwarder, create a ClusterLogForwarder custom resource (CR). This CR defines the service account, permissible input log types, pipelines, outputs, and any optional filters.

Important

You need administrator permissions for the namespace where you create the ClusterLogForwarder CR.

ClusterLogForwarder CR example

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  outputs:                       1
    - name: <output_name>
      type: <output_type>
  inputs:                        2
    - name: <input_name>
      type: <input_type>
  filters:                       3
    - name: <filter_name>
      type: <filter_type>
  pipelines:
    - inputRefs:
      - <input_name>             4
      outputRefs:
      - <output_name>            5
      filterRefs:
      - <filter_name>            6
  serviceAccount:
    name: <service_account_name> 7
# ...

1 The type of output that you want to forward logs to. The value of this field can be azureMonitor, cloudwatch, elasticsearch, googleCloudLogging, http, kafka, loki, lokistack, otlp, splunk, or syslog.
2 A list of inputs. The names application, audit, and infrastructure are reserved for the default inputs.
3 A list of filters to apply to records going through this pipeline. Each filter is applied in the order defined here. If a filter drops a record, subsequent filters are not applied.
4 This value should be the same as the input name. You can also use the default input names application, infrastructure, and audit.
5 This value should be the same as the output name.
6 This value should be the same as the filter name.
7 The name of your service account.

1.7. Tuning log payloads and delivery

The tuning spec in the ClusterLogForwarder custom resource (CR) provides a means of configuring your deployment to prioritize either throughput or durability of logs.

For example, if you need to reduce the possibility of log loss when the collector restarts, or you require collected log messages to survive a collector restart to support regulatory mandates, you can tune your deployment to prioritize log durability. If you use outputs that have hard limitations on the size of batches they can receive, you may want to tune your deployment to prioritize log throughput.

Important

To use this feature, your logging deployment must be configured to use the Vector collector. The tuning spec in the ClusterLogForwarder CR is not supported when using the Fluentd collector.

The following example shows the ClusterLogForwarder CR options that you can modify to tune log forwarder outputs:

Example ClusterLogForwarder CR tuning options

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  # ...
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      tuning:
        deliveryMode: AtLeastOnce 1
        compression: none 2
        maxWrite: <integer> 3
        minRetryDuration: 1s 4
        maxRetryDuration: 1s 5
# ...

1 Specify the delivery mode for log forwarding.
  • AtLeastOnce delivery means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash.
  • AtMostOnce delivery means that the log forwarder makes no effort to recover logs lost during a crash. This mode gives better throughput, but may result in greater log loss.
2 Specifying a compression configuration causes data to be compressed before it is sent over the network. Note that not all output types support compression, and if the specified compression type is not supported by the output, this results in an error. For more information, see "Supported compression types for tuning outputs".
3 Specifies a limit for the maximum payload of a single send operation to the output.
4 Specifies a minimum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (ms), seconds (s), or minutes (m).
5 Specifies a maximum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (ms), seconds (s), or minutes (m).
Table 1.1. Supported compression types for tuning outputs

  • gzip: supported by the Splunk, Amazon Cloudwatch, Elasticsearch 8, LokiStack, and HTTP outputs.
  • snappy: supported by the Amazon Cloudwatch, LokiStack, Apache Kafka, and HTTP outputs.
  • zlib: supported by the Amazon Cloudwatch, Elasticsearch 8, and HTTP outputs.
  • zstd: supported by the Amazon Cloudwatch, Apache Kafka, and HTTP outputs.
  • lz4: supported by the Apache Kafka output only.
  • The Syslog, Google Cloud, and Microsoft Azure Monitoring outputs do not support compression.

1.7.1. Enabling multi-line exception detection

Enables multi-line error detection of container logs.

Warning

Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.

Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.

Example java exception

java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
    at testjava.Main.handle(Main.java:47)
    at testjava.Main.printMe(Main.java:19)
    at testjava.Main.main(Main.java:10)

  • To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder custom resource (CR) contains a filter with type detectMultilineException under spec.filters, and that a pipeline references the filter.

Example ClusterLogForwarder CR

apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: <name>
    type: detectMultilineException
  pipelines:
    - inputRefs:
        - <input-name>
      name: <pipeline-name>
      filterRefs:
        - <filter-name>
      outputRefs:
        - <output-name>

1.7.1.1. Details

When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.

The collector supports the following languages:

  • Java
  • JS
  • Ruby
  • Python
  • Golang
  • PHP
  • Dart

1.8. Forwarding logs to Google Cloud Logging

You can forward logs to Google Cloud Logging.

Important

Forwarding logs to GCP is not supported on Red Hat OpenShift on AWS.

Prerequisites

  • Red Hat OpenShift Logging Operator has been installed.

Procedure

  1. Create a secret using your Google service account key.

    $ oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json=<your_service_account_key_file.json>
  2. Create a ClusterLogForwarder Custom Resource YAML using the template below:

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name>
      namespace: openshift-logging
    spec:
      serviceAccount:
        name: <service_account_name> 1
      outputs:
        - name: gcp-1
          type: googleCloudLogging
          googleCloudLogging:
            authentication:
              credentials:
                secretName: gcp-secret
                key: google-application-credentials.json
            id:
              type: project
              value: openshift-gce-devel 2
            logId: app-gcp 3
      pipelines:
        - name: test-app
          inputRefs: 4
            - application
          outputRefs:
            - gcp-1

1 The name of your service account.
2 Set a project, folder, organization, or billingAccount field and its corresponding value, depending on where you want to store your logs in the GCP resource hierarchy.
3 Set the value to add to the logName field of the log entry. The value can be a combination of static and dynamic values consisting of field paths followed by ||, followed by another field path or a static value. A dynamic value must be encased in single curly brackets {} and must end with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. For example: app-{.log_type||"unknown"}.
4 Specify the names of inputs, defined in the input.name field, for this pipeline. You can also use the built-in values application, infrastructure, and audit.

1.9. Forwarding logs to Splunk

You can forward logs to the Splunk HTTP Event Collector (HEC).

Prerequisites

  • Red Hat OpenShift Logging Operator has been installed
  • You have obtained a Base64 encoded Splunk HEC token.

Procedure

  1. Create a secret using your Base64 encoded Splunk HEC token.

    $ oc -n openshift-logging create secret generic splunk-secret --from-literal hecToken=<HEC_Token>
  2. Create or edit the ClusterLogForwarder Custom Resource (CR) using the template below:

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name>
      namespace: openshift-logging
    spec:
      serviceAccount:
        name: <service_account_name> 1
      outputs:
        - name: splunk-receiver 2
          type: splunk 3
          splunk:
            url: '<http://your.splunk.hec.url:8088>' 4
            authentication:
              token:
                secretName: splunk-secret
                key: hecToken 5
            index: '{.log_type||"undefined"}' 6
            source: '{.log_source||"undefined"}' 7
            indexedFields: ['.log_type', '.log_source'] 8
            payloadKey: '.kubernetes' 9
            tuning:
              compression: gzip 10
      pipelines:
        - name: my-logs
          inputRefs: 11
            - application
            - infrastructure
          outputRefs:
            - splunk-receiver 12

    1 The name of your service account.
    2 Specify a name for the output.
    3 Specify the output type as splunk.
    4 Specify the URL, including port, of your Splunk HEC.
    5 Specify the name of the secret that contains your HEC token.
    6 Specify the name of the index to send events to. If you do not specify an index, the default index of the Splunk server configuration is used. This is an optional field.
    7 Specify the source of events to be sent to this sink. You can configure dynamic per-event values. This field is optional.
    8 Specify the fields to be added to the Splunk index. This field is optional.
    9 Specify the record field to be used as the payload. This field is optional.
    10 Specify the compression configuration, which can be either gzip or none. The default value is none. This field is optional.
    11 Specify the input names.
    12 Specify the name of the output to use when forwarding logs with this pipeline.

1.10. Forwarding logs over HTTP

To enable forwarding logs over HTTP, specify http as the output type in the ClusterLogForwarder custom resource (CR).

Procedure

  • Create or edit the ClusterLogForwarder CR using the template below:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name>
      namespace: <log_forwarder_namespace>
    spec:
      managementState: Managed
      outputs:
      - name: <output_name>
        type: http
        http:
          headers: 1
            h1: v1
            h2: v2
          authentication:
            username:
              key: username
              secretName: <http_auth_secret>
            password:
              key: password
              secretName: <http_auth_secret>
          timeout: 300
          proxyURL: <proxy_url> 2
          url: <url> 3
        tls:
          insecureSkipVerify: <true_or_false> 4
          ca:
            key: <ca_certificate>
            secretName: <secret_name> 5
      pipelines:
        - inputRefs:
            - application
          name: pipe1
          outputRefs:
            - <output_name> 6
      serviceAccount:
        name: <service_account_name> 7

    1 Additional headers to send with the log record.
    2 Optional: URL of the HTTP/HTTPS proxy that should be used to forward logs over http or https from this output. This setting overrides any default proxy settings for the cluster or the node.
    3 Destination address for logs.
    4 Values are either true or false.
    5 Secret name for destination credentials.
    6 This value should be the same as the output name.
    7 The name of your service account.

1.11. Forwarding to Azure Monitor Logs

You can forward logs to Azure Monitor Logs. This functionality is provided by the Vector Azure Monitor Logs sink.

Prerequisites

  • You have basic familiarity with Azure services.
  • You have an Azure account configured for Azure Portal or Azure CLI access.
  • You have obtained your Azure Monitor Logs primary or secondary security key.
  • You have determined which log types to forward.
  • You installed the OpenShift CLI (oc).
  • You have installed Red Hat OpenShift Logging Operator.
  • You have administrator permissions.

Procedure

  1. Create a secret with your shared key to enable log forwarding to Azure Monitor Logs through the HTTP Data Collector API:

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-secret
      namespace: openshift-logging
    type: Opaque
    data:
      shared_key: <your_shared_key> 1

    1 Must contain a primary or secondary key for the Log Analytics workspace making the request.

    To obtain a shared key, you can use this command in Azure PowerShell:

    Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName "<resource_name>" -Name "<workspace_name>"

  2. Create or edit your ClusterLogForwarder CR using the template matching your log selection.

Forward all logs

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: openshift-logging
spec:
  serviceAccount:
    name: <service_account_name> 1
  outputs:
  - name: azure-monitor
    type: azureMonitor
    azureMonitor:
      customerId: my-customer-id 2
      logType: my_log_type 3
      authentication:
        sharedKey:
          secretName: my-secret
          key: shared_key
  pipelines:
    - name: app-pipeline
      inputRefs:
      - application
      outputRefs:
      - azure-monitor

1 The name of your service account.
2 Unique identifier for the Log Analytics workspace. Required field.
3 Record type of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters. For more information, see Azure record type in the Microsoft Azure documentation.

1.12. Forwarding application logs from specific projects

You can forward a copy of the application logs from specific projects to an external log aggregator, in addition to, or instead of, using the internal log store. You must also configure the external log aggregator to receive log data from OpenShift Container Platform.

To configure forwarding application logs from a project, you must create a ClusterLogForwarder custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs.

Prerequisites

  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure

  1. Create or edit a YAML file that defines the ClusterLogForwarder CR:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name>
      namespace: <log_forwarder_namespace>
    spec:
      serviceAccount:
        name: <service_account_name>
      outputs:
      - name: <output_name>
        type: <output_type>
      inputs:
      - name: my-app-logs    1
        type: application    2
        application:
          includes:  3
          - namespace: my-project
      filters:
      - name: my-project-labels
        type: openshiftLabels
        openshiftLabels:  4
          project: my-project
      - name: cluster-labels
        type: openshiftLabels
        openshiftLabels:
          clusterId: C1234
      pipelines:
      - name: <pipeline_name> 5
        inputRefs:
        - my-app-logs
        outputRefs:
        - <output_name>
        filterRefs:
        - my-project-labels
        - cluster-labels

    1 Specify the name for the input.
    2 Specify the type as application to collect logs from applications.
    3 Specify the set of namespaces and containers to include when collecting logs.
    4 Specify the labels to be applied to log records passing through this pipeline. These labels appear in the openshift.labels map in the log record.
    5 Specify a name for the pipeline.
  2. Apply the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml

As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector.

Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector.

To specify the pod labels, you use one or more matchLabels key-value pairs. If you specify multiple key-value pairs, the pods must match all of them to be selected.

Procedure

  1. Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the pod labels using simple equality-based selectors under inputs[].application.selector.matchLabels, as shown in the following example.

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name>
      namespace: <log_forwarder_namespace>
    spec:
      serviceAccount:
        name: <service_account_name> 1
      outputs:
        - <output_name>
        # ...
      inputs:
      - name: exampleAppLogData 2
        type: application 3
        application:
          includes: 4
          - namespace: app1
          - namespace: app2
          selector:
            matchLabels: 5
              environment: production
              app: nginx
      pipelines:
      - inputRefs:
        - exampleAppLogData
        outputRefs:
        # ...

    1 Specify the service account name.
    2 Specify a name for the input.
    3 Specify the type as application to collect logs from applications.
    4 Specify the set of namespaces to include when collecting logs.
    5 Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
  2. Optional: You can send log data from additional applications that have different pod labels to the same pipeline.

    1. For each unique combination of pod labels, create an additional inputs[].name section similar to the one shown.
    2. Update the selectors to match the pod labels of this application.
    3. Add the new inputs[].name value to inputRefs. For example:

      - inputRefs: [ myAppLogData, myOtherAppLogData ]
  3. Create the CR object:

    $ oc create -f <file-name>.yaml

1.13.1. Forwarding logs using the syslog protocol

You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OpenShift Container Platform.

To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.

Prerequisites

  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure

  1. Create or edit a YAML file that defines the ClusterLogForwarder CR object:

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: collector
    spec:
      managementState: Managed
      outputs:
      - name: rsyslog-east 1
        syslog:
          appName: <app_name> 2
          enrichment: KubernetesMinimal
          facility: <facility_value> 3
          msgId: <message_ID> 4
          payloadKey: <record_field> 5
          procId: <process_ID> 6
          rfc: <RFC3164_or_RFC5424> 7
          severity: informational 8
          tuning:
            deliveryMode: <AtLeastOnce_or_AtMostOnce> 9
          url: <url> 10
        tls: 11
          ca:
            key: ca-bundle.crt
            secretName: syslog-secret
        type: syslog
      pipelines:
      - inputRefs: 12
        - application
        name: syslog-east 13
        outputRefs:
        - rsyslog-east
      serviceAccount: 14
        name: logcollector

    1 Specify a name for the output.
    2 Optional: Specify the value for the APP-NAME part of the syslog message header. The value must conform with The Syslog Protocol. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value in curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: <value1>-{.<value2>||"none"}.
    3 Optional: Specify the value for the Facility part of the syslog-msg header.
    4 Optional: Specify the value for the MSGID part of the syslog-msg header. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 32 characters. You must encase a dynamic value in curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: <value1>-{.<value2>||"none"}.
    5 Optional: Specify the record field to use as the payload. The payloadKey value must be a single field path encased in single curly brackets {}. Example: {.<value>}.
    6 Optional: Specify the value for the PROCID part of the syslog message header. The value must conform with The Syslog Protocol. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value in curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: <value1>-{.<value2>||"none"}.
    7 Optional: Set the RFC that the generated messages conform to. The value can be RFC3164 or RFC5424.
    8 Optional: Set the severity level for the message. For more information, see The Syslog Protocol.
    9 Optional: Set the delivery mode for log forwarding. The value can be either AtLeastOnce, or AtMostOnce.
    10 Specify the absolute URL with a scheme. Valid schemes are: tcp, tls, and udp. For example: tls://syslog-receiver.example.com:6514.
    11 Specify the settings for controlling options of the transport layer security (TLS) client connections.
    12 Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    13 Specify a name for the pipeline.
    14 The name of your service account.
  2. Create the CR object:

    $ oc create -f <filename>.yaml

You can add namespace_name, pod_name, and container_name elements to the message field of the record by adding the enrichment field to your ClusterLogForwarder custom resource (CR).

# ...
  spec:
    outputs:
    - name: syslogout
      syslog:
        enrichment: KubernetesMinimal
        facility: user
        payloadKey: message
        rfc: RFC3164
        severity: debug
        url: tls://syslog-receiver.example.com:6514
      type: syslog
    pipelines:
    - inputRefs:
      - application
      name: test-app
      outputRefs:
      - syslogout
# ...
Note

This configuration is compatible with both RFC3164 and RFC5424.

Example syslog message output with enrichment: None

 2025-03-03T11:48:01+00:00  example-worker-x  syslogsyslogserverd846bb9b: {...}

Example syslog message output with enrichment: KubernetesMinimal

2025-03-03T11:48:01+00:00  example-worker-x  syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...}

1.14. Forwarding logs to Amazon CloudWatch

Amazon CloudWatch is a service that helps administrators observe and monitor resources and applications on Amazon Web Services (AWS). You can forward logs from OpenShift Logging to CloudWatch securely by leveraging AWS Identity and Access Management (IAM) Roles for Service Accounts (IRSA), which uses the AWS Security Token Service (STS).

The authentication with CloudWatch works as follows:

  1. The log collector requests temporary AWS credentials from Security Token Service (STS) by presenting its service account token to the OpenID Connect (OIDC) provider in AWS.
  2. AWS validates the token. Afterward, depending on the trust policy, AWS issues short-lived, temporary credentials, including an access key ID, secret access key, and session token, for the log collector to use.

On STS-enabled clusters such as Red Hat OpenShift Service on AWS, AWS roles are pre-configured with the required trust policies. This allows service accounts to assume the roles. Therefore, you can create a secret for AWS with STS that uses the IAM role. You can then create or update a ClusterLogForwarder custom resource (CR) that uses the secret to forward logs to CloudWatch output. Follow these procedures to create a secret and a ClusterLogForwarder CR if roles have been pre-configured:

  • Creating a secret for CloudWatch with an existing AWS role
  • Forwarding logs to Amazon CloudWatch from STS-enabled clusters

If you do not have an AWS IAM role pre-configured with trust policies, you must first create the role with the required trust policies. Complete the following procedures to create a secret, ClusterLogForwarder CR, and role.

1.14.1. Creating an AWS IAM role

Create an Amazon Web Services (AWS) IAM role that your service account can assume to securely access AWS resources.

The following procedure demonstrates creating an AWS IAM role by using the AWS CLI. You can alternatively use the Cloud Credential Operator (CCO) utility ccoctl. Using the ccoctl utility creates many fields in the IAM role policy that are not required by the ClusterLogForwarder custom resource (CR). These extra fields are ignored by the CR. However, the ccoctl utility provides a convenient way for configuring IAM roles. For more information see Manual mode with short-term credentials for components.

Prerequisites

  • You have access to a Red Hat OpenShift Logging cluster with Security Token Service (STS) enabled and configured for AWS.
  • You have administrator access to the AWS account.
  • You have installed the AWS CLI.

Procedure

  1. Create an IAM policy that grants permissions to write logs to CloudWatch.

    1. Create a file, for example cw-iam-role-policy.json, with the following content:

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "logs:PutLogEvents",
                      "logs:CreateLogGroup",
                      "logs:PutRetentionPolicy",
                      "logs:CreateLogStream",
                      "logs:DescribeLogGroups",
                      "logs:DescribeLogStreams"
                  ],
                  "Resource": "arn:aws:logs:*:*:*"
              }
          ]
      }
    2. Create the IAM policy based on the previous policy definition by running the following command:

      aws iam create-policy \
          --policy-name cluster-logging-allow \
          --policy-document file://cw-iam-role-policy.json

      Note the Arn value of the created policy.

  2. Create a trust policy to allow the logging service account to assume an IAM role:

    1. Create a file, for example cw-trust-policy.json, with the following content:

      {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Principal": {
                  "Federated": "arn:aws:iam::123456789012:oidc-provider/<OPENSHIFT_OIDC_PROVIDER_URL>" 1
              },
              "Action": "sts:AssumeRoleWithWebIdentity",
              "Condition": {
                  "StringEquals": {
                      "<OPENSHIFT_OIDC_PROVIDER_URL>:sub": "system:serviceaccount:openshift-logging:logcollector" 2
                  }
              }
          }
      ]
      }

      1 Replace <OPENSHIFT_OIDC_PROVIDER_URL> with the URL of your cluster's OIDC provider.
      2 The namespace and service account must match the namespace and service account that the log forwarder uses.
  3. Create an IAM role based on the previously defined trust policy by running the following command:

    $ aws iam create-role --role-name openshift-logger --assume-role-policy-document file://cw-trust-policy.json

    Note the Arn value of the created role.

  4. Attach the policy to the role by running the following command:

    $ aws iam put-role-policy \
          --role-name openshift-logger --policy-name cluster-logging-allow \
          --policy-document file://cw-iam-role-policy.json

Verification

  • Verify the role and the permissions policy by running the following command:

    $ aws iam get-role --role-name openshift-logger

    Example output

    ROLE	arn:aws:iam::123456789012:role/openshift-logger
    ASSUMEROLEPOLICYDOCUMENT	2012-10-17
    STATEMENT	sts:AssumeRoleWithWebIdentity	Allow
    STRINGEQUALS	system:serviceaccount:openshift-logging:openshift-logger
    PRINCIPAL	arn:aws:iam::123456789012:oidc-provider/<OPENSHIFT_OIDC_PROVIDER_URL>

1.14.2. Creating a secret for CloudWatch with an existing AWS role

Create a secret for Amazon Web Services (AWS) Security Token Service (STS) from the configured AWS IAM role by using the oc create secret --from-literal command.

Prerequisites

  • You have created an AWS IAM role.
  • You have administrator access to Red Hat OpenShift Logging.

Procedure

  • In the CLI, enter the following to generate a secret for AWS:

    $ oc create secret generic sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/openshift-logger

    Example Secret

    apiVersion: v1
    kind: Secret
    metadata:
      namespace: openshift-logging
      name: sts-secret
    stringData:
      role_arn: arn:aws:iam::123456789012:role/openshift-logger

1.14.3. Forwarding logs to Amazon CloudWatch from STS-enabled clusters

You can forward logs from logging for Red Hat OpenShift deployed on clusters with Amazon Web Services (AWS) Security Token Service (STS) enabled to Amazon CloudWatch. Amazon CloudWatch is a service that helps administrators observe and monitor resources and applications on AWS.

Prerequisites

  • Red Hat OpenShift Logging Operator has been installed.
  • You have configured a credential secret.
  • You have administrator access to Red Hat OpenShift Logging.

Procedure

  • Create or update a ClusterLogForwarder custom resource (CR):

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name>
      namespace: openshift-logging
    spec:
      serviceAccount:
        name: <service_account_name> 1
      outputs:
       - name: cw-output 2
         type: cloudwatch 3
         cloudwatch:
           groupName: 'cw-projected{.log_type||"missing"}' 4
           region: us-east-2 5
           authentication:
             type: iamRole 6
             iamRole:
               roleARN: 7
                 key: role_arn
                 secretName: sts-secret
               token: 8
                 from: serviceAccount
      pipelines:
        - name: to-cloudwatch
          inputRefs: 9
            - infrastructure
            - audit
            - application
          outputRefs: 10
            - cw-output

    1 Specify the service account.
    2 Specify a name for the output.
    3 Specify the cloudwatch type.
    4 Specify the group name for the log stream.
    5 Specify the AWS region.
    6 Specify iamRole as the authentication type for STS.
    7 Specify the name of the secret and the key where the role_arn resource is stored.
    8 Specify the service account token to use for authentication. To use the projected service account token, use from: serviceAccount.
    9 Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    10 Specify the names of the output to use when forwarding logs with this pipeline.

Collecting all cluster logs produces a large amount of data, which can be expensive to move and store. To reduce volume, you can configure the drop filter to exclude unwanted log records before forwarding. The log collector evaluates log streams against the filter and drops records that match specified conditions.

The drop filter uses the test field to define one or more conditions for evaluating log records. The filter applies the following rules to check whether to drop a record:

  • A test passes if all its specified conditions evaluate to true.
  • If a test passes, the filter drops the log record.
  • If you define several tests in the drop filter configuration, the filter drops the log record if any of the tests pass.
  • If there is an error evaluating a condition, for example, the referenced field is missing, that condition evaluates to false.

Prerequisites

  • You have installed the Red Hat OpenShift Logging Operator.
  • You have administrator permissions.
  • You have created a ClusterLogForwarder custom resource (CR).
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Extract the existing ClusterLogForwarder configuration and save it as a local file.

    $ oc get clusterlogforwarder <name> -n <namespace> -o yaml > <filename>.yaml
    Copy to Clipboard Toggle word wrap

    Where:

    • <name> is the name of the ClusterLogForwarder instance you want to configure.
    • <namespace> is the namespace where you created the ClusterLogForwarder instance, for example openshift-logging.
    • <filename> is the name of the local file where you save the configuration.
  2. Add a configuration to drop unwanted log records to the filters spec in the ClusterLogForwarder CR.

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      # ...
      filters:
      - name: drop-filter
        type: drop 
    1
    
        drop: 
    2
    
        - test: 
    3
    
          - field: .kubernetes.labels."app.version-1.2/beta" 
    4
    
            matches: .+ 
    5
    
          - field: .kubernetes.pod_name
            notMatches: "my-pod" 
    6
    
      pipelines:
      - name: my-pipeline 
    7
    
        filterRefs:
        - drop-filter
      # ...
    Copy to Clipboard Toggle word wrap

    1
    Specify the type of filter. The drop filter drops log records that match the filter configuration.
    2
    Specify configuration options for the drop filter.
    3
    Specify conditions for tests to evaluate whether the filter drops a log record.
    4
    Specify dot-delimited paths to fields in log records.
    • Each path segment can contain alphanumeric characters and underscores (a-z, A-Z, 0-9, _), for example, .kubernetes.namespace_name.
    • If segments contain different characters, the segment must be in quotes, for example, .kubernetes.labels."app.version-1.2/beta".
    • You can include several field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to apply.
    5
    Specify a regular expression. If log records match this regular expression, they are dropped.
    6
    Specify a regular expression. If log records do not match this regular expression, they are dropped.
    7
    Specify the pipeline that uses the drop filter.
    Note

    You can set either the matches or notMatches condition for a single field path, but not both.

    Example configuration that keeps only high-priority log records

    # ...
    filters:
    - name: important
      type: drop
      drop:
      - test:
        - field: .message
          notMatches: "(?i)critical|error"
        - field: .level
          matches: "info|warning"
    # ...
    Copy to Clipboard Toggle word wrap

    Example configuration with several tests

    # ...
    filters:
    - name: important
      type: drop
      drop:
      - test: 
    1
    
        - field: .kubernetes.namespace_name
          matches: "openshift.*"
      - test: 
    2
    
        - field: .log_type
          matches: "application"
        - field: .kubernetes.pod_name
          notMatches: "my-pod"
    # ...
    Copy to Clipboard Toggle word wrap

    1
    The filter drops logs from any namespace whose name starts with openshift.
    2
    The filter drops application logs that do not have my-pod in the pod name.
  3. Apply the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml
    Copy to Clipboard Toggle word wrap

1.14.5. API audit filter overview

OpenShift API servers generate audit events for every API call. These events include details about the request, the response, and the identity of the requester. This can lead to large volumes of data.

The API audit filter helps manage the audit trail by using rules to exclude non-essential events and to reduce the event size. Rules are checked in order, and checking stops at the first match. The amount of data in an event depends on the value of the level field:

  • None: The event is dropped.
  • Metadata: The event includes audit metadata and excludes request and response bodies.
  • Request: The event includes audit metadata and the request body, and excludes the response body.
  • RequestResponse: The event includes all data: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster.
Note

You can only use the API audit filter feature if the Vector collector is set up in your logging deployment.

The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy. The ClusterLogForwarder CR provides the following additional functions:

Wildcards
Names of users, groups, namespaces, and resources can have a leading or trailing asterisk (*) character. For example, the openshift-* namespace matches the openshift-apiserver or openshift-authentication namespaces. The */status resource matches Pod/status or Deployment/status resources.
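For example, a hypothetical rule that uses both wildcard forms might drop status updates across all openshift-* namespaces:

- level: None
  namespaces: ["openshift-*"]
  resources:
  - group: ""
    resources: ["*/status"]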
Default Rules

Events that do not match any rule in the policy are filtered as follows:

  • Read-only system events such as get, list, and watch are dropped.
  • Service account write events that occur within the same namespace as the service account are dropped.
  • All other events are forwarded, subject to any configured rate limits.

To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule.
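For example, ending your rules list with a level-only rule, as in the following minimal sketch, replaces the default behavior so that anything not matched by an earlier rule is logged at the Metadata level:

rules:
  # ... more specific rules ...
  - level: Metadata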

Omit Response Codes
A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], no status codes are omitted.
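For example, assuming the field is exposed as omitResponseCodes on the kubeAPIAudit filter, the following sketch sets an empty list so that events are created for all response codes:

filters:
  - name: my-policy
    type: kubeAPIAudit
    kubeAPIAudit:
      omitResponseCodes: []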

The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards, and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store, and a less detailed stream to a remote site.

Important
  • You must have the collect-audit-logs cluster role to collect the audit logs.
  • The following example is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration.

Example audit policy

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  serviceAccount:
    name: example-service-account
  pipelines:
    - name: my-pipeline
      inputRefs:
        - audit 
1

      filterRefs:
        - my-policy 
2

      outputRefs:
        - my-output
  filters:
    - name: my-policy
      type: kubeAPIAudit
      kubeAPIAudit:
        # Don't generate audit events for all requests in RequestReceived stage.
        omitStages:
          - "RequestReceived"

        rules:
          # Log pod changes at RequestResponse level
          - level: RequestResponse
            resources:
            - group: ""
              resources: ["pods"]

          # Log "pods/log", "pods/status" at Metadata level
          - level: Metadata
            resources:
            - group: ""
              resources: ["pods/log", "pods/status"]

          # Don't log requests to a configmap called "controller-leader"
          - level: None
            resources:
            - group: ""
              resources: ["configmaps"]
              resourceNames: ["controller-leader"]

          # Don't log watch requests by the "system:kube-proxy" on endpoints or services
          - level: None
            users: ["system:kube-proxy"]
            verbs: ["watch"]
            resources:
            - group: "" # core API group
              resources: ["endpoints", "services"]

          # Don't log authenticated requests to certain non-resource URL paths.
          - level: None
            userGroups: ["system:authenticated"]
            nonResourceURLs:
            - "/api*" # Wildcard matching.
            - "/version"

          # Log the request body of configmap changes in kube-system.
          - level: Request
            resources:
            - group: "" # core API group
              resources: ["configmaps"]
            # This rule only applies to resources in the "kube-system" namespace.
            # The empty string "" can be used to select non-namespaced resources.
            namespaces: ["kube-system"]

          # Log configmap and secret changes in all other namespaces at the Metadata level.
          - level: Metadata
            resources:
            - group: "" # core API group
              resources: ["secrets", "configmaps"]

          # Log all other resources in core and extensions at the Request level.
          - level: Request
            resources:
            - group: "" # core API group
            - group: "extensions" # Version of group should NOT be included.

          # A catch-all rule to log all other requests at the Metadata level.
          - level: Metadata
Copy to Clipboard Toggle word wrap

1
The collected log types. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that is defined for your application.
2
The name of your audit policy.

You can include application logs based on label expressions or on a matching label key and its values by using the input selector.

Procedure

  1. Add a configuration for a filter to the input spec in the ClusterLogForwarder CR.

    The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    # ...
    spec:
      serviceAccount:
        name: <service_account_name>
      inputs:
        - name: mylogs
          application:
            selector:
              matchExpressions:
              - key: env 
    1
    
                operator: In 
    2
    
                values: ["prod", "qa"] 
    3
    
              - key: zone
                operator: NotIn
                values: ["east", "west"]
              matchLabels: 
    4
    
                app: one
                name: app1
          type: application
    # ...
    Copy to Clipboard Toggle word wrap

    1
    Specifies the label key to match.
    2
    Specifies the operator. Valid values include: In, NotIn, Exists, and DoesNotExist.
    3
    Specifies an array of string values. If the operator value is either Exists or DoesNotExist, the value array must be empty.
    4
    Specifies an exact key or value mapping.
  2. Apply the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml
    Copy to Clipboard Toggle word wrap

If you configure the prune filter, the log collector evaluates log streams against the filters before forwarding. The collector prunes log records by removing low-value fields such as pod annotations.

Prerequisites

  • You have installed the Red Hat OpenShift Logging Operator.
  • You have administrator permissions.
  • You have created a ClusterLogForwarder custom resource (CR).
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Extract the existing ClusterLogForwarder configuration and save it as a local file.

    $ oc get clusterlogforwarder <name> -n <namespace> -o yaml > <filename>.yaml
    Copy to Clipboard Toggle word wrap

    Where:

    • <name> is the name of the ClusterLogForwarder instance you want to configure.
    • <namespace> is the namespace where you created the ClusterLogForwarder instance, for example openshift-logging.
    • <filename> is the name of the local file where you save the configuration.
  2. Add a configuration to prune log records to the filters spec in the ClusterLogForwarder CR.

    Important

    If you specify both in and notIn parameters, the notIn array takes precedence over in during pruning. After records are pruned by using the notIn array, they are then pruned by using the in array.

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      serviceAccount:
        name: my-account
      filters:
      - name: prune-filter
        type: prune 
    1
    
        prune: 
    2
    
          in: [.kubernetes.annotations, .kubernetes.namespace_id] 
    3
    
          notIn: [.kubernetes,.log_type,.message,."@timestamp",.log_source] 
    4
    
      pipelines:
      - name: my-pipeline 
    5
    
        filterRefs: ["prune-filter"]
      # ...
    Copy to Clipboard Toggle word wrap

    1
    Specify the type of filter. The prune filter prunes log records by configured fields.
    2
    Specify configuration options for the prune filter.
    • The in and notIn fields are arrays of dot-delimited paths to fields in log records.
    • Each path segment can contain alphanumeric characters and underscores (a-z, A-Z, 0-9, _), for example, .kubernetes.namespace_name.
    • If segments contain different characters, the segment must be in quotes, for example, .kubernetes.labels."app.version-1.2/beta".
    3
    Optional: Specify fields to remove from the log record. The log collector keeps all other fields.
    4
    Optional: Specify fields to keep in the log record. The log collector removes all other fields.
    5
    Specify the pipeline that the prune filter is applied to.
    Important
    • The filters cannot remove the .log_type, .log_source, .message fields from the log records. You must include them in the notIn field.
    • If you use the googleCloudLogging output, you must include .hostname in the notIn field.
  3. Apply the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml
    Copy to Clipboard Toggle word wrap

You can define the list of audit and infrastructure sources to collect the logs by using the input selector.

Procedure

  1. Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR.

    The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    # ...
    spec:
      serviceAccount:
        name: <service_account_name>
      inputs:
        - name: mylogs1
          type: infrastructure
          infrastructure:
            sources: 
    1
    
              - node
        - name: mylogs2
          type: audit
          audit:
            sources: 
    2
    
              - kubeAPI
              - openshiftAPI
              - ovn
    # ...
    Copy to Clipboard Toggle word wrap

    1
    Specifies the list of infrastructure sources to collect. The valid sources include:
    • node: Journal log from the node
    • container: Logs from the workloads deployed in the namespaces
    2
    Specifies the list of audit sources to collect. The valid sources include:
    • kubeAPI: Logs from the Kubernetes API servers
    • openshiftAPI: Logs from the OpenShift API servers
    • auditd: Logs from a node auditd service
    • ovn: Logs from an open virtual network service
  2. Apply the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml
    Copy to Clipboard Toggle word wrap

You can include or exclude the application logs based on the namespace and container name by using the input selector.

Procedure

  1. Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR.

    The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    # ...
    spec:
      serviceAccount:
        name: <service_account_name>
      inputs:
        - name: mylogs
          application:
            includes:
              - namespace: "my-project" 
    1
    
                container: "my-container" 
    2
    
            excludes:
              - container: "other-container*" 
    3
    
                namespace: "other-namespace" 
    4
    
          type: application
    # ...
    Copy to Clipboard Toggle word wrap

    1
    Specifies that the logs are only collected from these namespaces.
    2
    Specifies that the logs are only collected from these containers.
    3
    Specifies the pattern of namespaces to ignore when collecting the logs.
    4
    Specifies the set of containers to ignore when collecting the logs.
    Note

    The excludes field takes precedence over the includes field.

  2. Apply the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml
    Copy to Clipboard Toggle word wrap

Chapter 2. Configuring the logging collector

Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. All supported modifications to the log collector can be performed through the spec.collector stanza in the ClusterLogForwarder custom resource (CR).

2.1. Creating a LogFileMetricExporter resource

You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers, because it is not deployed with the collector by default.

Note

If you do not create the LogFileMetricExporter CR, you might see a No datapoints found message in the OpenShift Container Platform web console dashboard for the Produced Logs field.

Prerequisites

  • You have administrator permissions.
  • You have installed the Red Hat OpenShift Logging Operator.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a LogFileMetricExporter CR as a YAML file:

    Example LogFileMetricExporter CR

    apiVersion: logging.openshift.io/v1alpha1
    kind: LogFileMetricExporter
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      nodeSelector: {} 
    1
    
      resources: 
    2
    
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 128Mi
      tolerations: [] 
    3
    
    # ...
    Copy to Clipboard Toggle word wrap

    1
    Optional: The nodeSelector stanza defines which nodes the pods are scheduled on.
    2
    The resources stanza defines resource requirements for the LogFileMetricExporter CR.
    3
    Optional: The tolerations stanza defines the tolerations that the pods accept.
  2. Apply the LogFileMetricExporter CR by running the following command:

    $ oc apply -f <filename>.yaml
    Copy to Clipboard Toggle word wrap

Verification

  • Verify that the logfilesmetricexporter pods are running in the namespace where you have created the LogFileMetricExporter CR, by running the following command and observing the output:

    $ oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging
    Copy to Clipboard Toggle word wrap

    Example output

    NAME                           READY   STATUS    RESTARTS   AGE
    logfilesmetricexporter-9qbjj   1/1     Running   0          2m46s
    logfilesmetricexporter-cbc4v   1/1     Running   0          2m46s
    Copy to Clipboard Toggle word wrap

    A logfilesmetricexporter pod runs concurrently with a collector pod on each node.

2.2. Configure log collector CPU and memory limits

You can adjust both the CPU and memory limits for the log collector by editing the ClusterLogForwarder custom resource (CR).

Procedure

  • Edit the ClusterLogForwarder CR in the openshift-logging project:

    $ oc -n openshift-logging edit ClusterLogForwarder <clf_name>
    Copy to Clipboard Toggle word wrap
    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <clf_name> 
    1
    
      namespace: openshift-logging
    spec:
      collector:
        resources: 
    2
    
          requests:
            memory: 736Mi
          limits:
            cpu: 100m
            memory: 736Mi
    # ...
    Copy to Clipboard Toggle word wrap
    1
    Specify a name for the ClusterLogForwarder CR.
    2
    Specify the CPU and memory limits and requests as needed. The values shown are the default values.

2.3. Configuring input receivers

The Red Hat OpenShift Logging Operator deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. For log forwarder ClusterLogForwarder CR deployments, the service name is in the <clusterlogforwarder_resource_name>-<input_name> format.

You can configure your log collector to listen for HTTP connections to only receive audit logs by specifying http as a receiver input in the ClusterLogForwarder custom resource (CR).

Important

HTTP receiver input is only supported for the following scenarios:

  • Logging is installed on hosted control planes.
  • When logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:

    • OpenShift Virtualization

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift Logging Operator.

Procedure

  1. Modify the ClusterLogForwarder CR to add configuration for the http receiver input:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <clusterlogforwarder_name> 
    1
    
      namespace: <namespace>
    # ...
    spec:
      serviceAccount:
        name: <service_account_name>
      inputs:
      - name: http-receiver 
    2
    
        type: receiver
        receiver:
          type: http 
    3
    
          port: 8443 
    4
    
          http:
            format: kubeAPIAudit 
    5
    
      outputs:
      - name: <output_name>
        type: http
        http:
          url: <url>
      pipelines: 
    6
    
        - name: http-pipeline
          inputRefs:
            - http-receiver
          outputRefs:
            - <output_name>
    # ...
    Copy to Clipboard Toggle word wrap

    1
    Specify a name for the ClusterLogForwarder CR.
    2
    Specify a name for your input receiver.
    3
    Specify the input receiver type as http.
    4
    Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535. The default value is 8443 if this is not specified.
    5
    Currently, only the kube-apiserver webhook format is supported for http input receivers.
    6
    Configure a pipeline for your input receiver.
  2. Apply the changes to the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml
    Copy to Clipboard Toggle word wrap
  3. Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

    $ oc get svc
    Copy to Clipboard Toggle word wrap

    Example output

    NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)            AGE
    collector                 ClusterIP   172.30.85.239    <none>        24231/TCP          3m6s
    collector-http-receiver   ClusterIP   172.30.205.160   <none>        8443/TCP           3m6s
    Copy to Clipboard Toggle word wrap

    In the example, the service name is collector-http-receiver.

Verification

  1. Extract the certificate authority (CA) certificate file by running the following command:

    $ oc extract cm/openshift-service-ca.crt -n <namespace>
    Copy to Clipboard Toggle word wrap
    Note

    If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.

  2. As an example, use the curl command to send logs by running the following command:

    $ curl --cacert <openshift_service_ca.crt> https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d '{"<prefix>":"<message>"}'
    Copy to Clipboard Toggle word wrap

    Replace <openshift_service_ca.crt> with the extracted CA certificate file.
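    For example, assuming the forwarder runs in the openshift-logging namespace and the extracted CA file is named service-ca.crt, a test request might look like the following:

    $ curl --cacert service-ca.crt https://collector-http-receiver.openshift-logging.svc:8443 -XPOST -d '{"message":"test audit event"}'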

You can configure your log collector to collect journal format infrastructure logs by specifying syslog as a receiver input in the ClusterLogForwarder custom resource (CR).

Important

Syslog receiver input is only supported for the following scenarios:

  • Logging is installed on hosted control planes.
  • When logs originate from a Red Hat-supported product that is installed on the same cluster as the Red Hat OpenShift Logging Operator. For example:

    • Red Hat OpenStack Services on OpenShift (RHOSO)
    • OpenShift Virtualization

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift Logging Operator.

Procedure

  1. Grant the collect-infrastructure-logs cluster role to the service account by running the following command:

    Example binding command

    $ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logcollector
    Copy to Clipboard Toggle word wrap

  2. Modify the ClusterLogForwarder CR to add configuration for the syslog receiver input:

    Example ClusterLogForwarder CR

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <clusterlogforwarder_name> 
    1
    
      namespace: <namespace>
    # ...
    spec:
      serviceAccount:
        name: <service_account_name> 
    2
    
      inputs:
        - name: syslog-receiver 
    3
    
          type: receiver
          receiver:
            type: syslog 
    4
    
            port: 10514 
    5
    
      outputs:
      - name: <output_name>
        lokiStack:
          authentication:
            token:
              from: serviceAccount
          target:
            name: logging-loki
            namespace: openshift-logging
        tls: 
    6
    
          ca:
            key: service-ca.crt
            configMapName: openshift-service-ca.crt
        type: lokiStack
    # ...
      pipelines: 
    7
    
        - name: syslog-pipeline
          inputRefs:
            - syslog-receiver
          outputRefs:
            - <output_name>
    # ...
    Copy to Clipboard Toggle word wrap

    1 2
    Use the service account that you granted the collect-infrastructure-logs permission in the previous step.
    3
    Specify a name for your input receiver.
    4
    Specify the input receiver type as syslog.
    5
    Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535.
    6
    If TLS configuration is not set, the default certificates will be used. For more information, run the command oc explain clusterlogforwarders.spec.inputs.receiver.tls.
    7
    Configure a pipeline for your input receiver.
  3. Apply the changes to the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml
    Copy to Clipboard Toggle word wrap
  4. Verify that the collector is listening on the service that has a name in the <clusterlogforwarder_resource_name>-<input_name> format by running the following command:

    $ oc get svc
    Copy to Clipboard Toggle word wrap

    Example output

    NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)            AGE
    collector                   ClusterIP   172.30.85.239    <none>        24231/TCP          33m
    collector-syslog-receiver   ClusterIP   172.30.216.142   <none>        10514/TCP          2m20s
    Copy to Clipboard Toggle word wrap

    In this example output, the service name is collector-syslog-receiver.

Verification

  1. Extract the certificate authority (CA) certificate file by running the following command:

    $ oc extract cm/openshift-service-ca.crt -n <namespace>
    Copy to Clipboard Toggle word wrap
    Note

    If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.

  2. As an example, use the curl command to send logs by running the following command:

    $ curl --cacert <openshift_service_ca.crt> collector-syslog-receiver.<namespace>.svc:10514 "test message"
    Copy to Clipboard Toggle word wrap

    Replace <openshift_service_ca.crt> with the extracted CA certificate file.

Chapter 3. Configuring the log store

You can configure a LokiStack custom resource (CR) to store application, audit, and infrastructure-related logs.

Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries.

Important

For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days.

3.1. Loki deployment sizing

Sizing for Loki follows the format of 1x.<size>, where the value 1x is the number of instances and <size> specifies performance capabilities.

The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction.

Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs.

Important

It is not possible to change the number 1x for the deployment size.

Table 3.1. Loki sizing

|                                           | 1x.demo       | 1x.pico [6.1+ only] | 1x.extra-small    | 1x.small           | 1x.medium          |
|-------------------------------------------|---------------|---------------------|-------------------|--------------------|--------------------|
| Data transfer                             | Demo use only | 50GB/day            | 100GB/day         | 500GB/day          | 2TB/day            |
| Queries per second (QPS)                  | Demo use only | 1-25 QPS at 200ms   | 1-25 QPS at 200ms | 25-50 QPS at 200ms | 25-75 QPS at 200ms |
| Replication factor                        | None          | 2                   | 2                 | 2                  | 2                  |
| Total CPU requests                        | None          | 7 vCPUs             | 14 vCPUs          | 34 vCPUs           | 54 vCPUs           |
| Total CPU requests if using the ruler     | None          | 8 vCPUs             | 16 vCPUs          | 42 vCPUs           | 70 vCPUs           |
| Total memory requests                     | None          | 17Gi                | 31Gi              | 67Gi               | 139Gi              |
| Total memory requests if using the ruler  | None          | 18Gi                | 35Gi              | 83Gi               | 171Gi              |
| Total disk requests                       | 40Gi          | 590Gi               | 430Gi             | 430Gi              | 590Gi              |
| Total disk requests if using the ruler    | 60Gi          | 910Gi               | 750Gi             | 750Gi              | 910Gi              |

3.2. Loki object storage

The Loki Operator supports AWS S3, as well as other S3 compatible object stores such as Minio and OpenShift Data Foundation. Azure, GCS, and Swift are also supported.

The recommended nomenclature for Loki storage is logging-loki-<your_storage_provider>.

The following table shows the type values within the LokiStack custom resource (CR) for each storage provider. For more information, see the section on your storage provider.

Table 3.2. Secret type quick reference

| Storage provider           | Secret type value |
|----------------------------|-------------------|
| AWS                        | s3                |
| Azure                      | azure             |
| Google Cloud               | gcs               |
| Minio                      | s3                |
| OpenShift Data Foundation  | s3                |
| Swift                      | swift             |

3.2.1. AWS storage

You can create an object storage in Amazon Web Services (AWS) to store logs.

Prerequisites

  • You installed the Loki Operator.
  • You installed the OpenShift CLI (oc).
  • You created a bucket on AWS.

Procedure

  • Create an object storage secret with the name logging-loki-aws by running the following command:

    $ oc create secret generic logging-loki-aws \ 
    1
    
      --from-literal=bucketnames="<bucket_name>" \
      --from-literal=endpoint="<aws_bucket_endpoint>" \
      --from-literal=access_key_id="<aws_access_key_id>" \
      --from-literal=access_key_secret="<aws_access_key_secret>" \
      --from-literal=region="<aws_region_of_your_bucket>" \
      --from-literal=forcepathstyle="true" 
    2
    Copy to Clipboard Toggle word wrap
    1
    logging-loki-aws is the name of the secret.
    2
    AWS endpoints (those ending in .amazonaws.com) use a virtual-hosted style by default, which is equivalent to setting the forcepathstyle attribute to false. Conversely, non-AWS endpoints use a path style, equivalent to setting forcepathstyle attribute to true. If you need to use a virtual-hosted style with non-AWS S3 services, you must explicitly set forcepathstyle to false.
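After you create the secret, reference it from the LokiStack custom resource. The following sketch shows only the relevant storage stanza; the complete LokiStack example appears later in this document:

spec:
  storage:
    secret:
      name: logging-loki-aws
      type: s3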
3.2.1.1. AWS storage for STS enabled clusters

If your cluster has STS enabled, the Cloud Credential Operator (CCO) supports short-term authentication by using AWS tokens.

You can create the Loki object storage secret manually by running the following command:

$ oc -n openshift-logging create secret generic "logging-loki-aws" \
  --from-literal=bucketnames="<s3_bucket_name>" \
  --from-literal=region="<bucket_region>" \
  --from-literal=audience="<oidc_audience>" 
1
Copy to Clipboard Toggle word wrap
1
Optional; the default value is openshift.

3.2.2. Azure storage

Prerequisites

  • You installed the Loki Operator.
  • You installed the OpenShift CLI (oc).
  • You created a bucket on Azure.

Procedure

  • Create an object storage secret with the name logging-loki-azure by running the following command:

    $ oc create secret generic logging-loki-azure \
      --from-literal=container="<azure_container_name>" \
      --from-literal=environment="<azure_environment>" \ 
    1
    
      --from-literal=account_name="<azure_account_name>" \
      --from-literal=account_key="<azure_account_key>"
    Copy to Clipboard Toggle word wrap
    1
    Supported environment values are AzureGlobal, AzureChinaCloud, AzureGermanCloud, or AzureUSGovernment.

If your cluster has Microsoft Entra Workload ID enabled, the Cloud Credential Operator (CCO) supports short-term authentication using Workload ID.

You can create the Loki object storage secret manually by running the following command:

$ oc -n openshift-logging create secret generic logging-loki-azure \
  --from-literal=environment="<azure_environment>" \
  --from-literal=account_name="<storage_account_name>" \
  --from-literal=container="<container_name>"
Copy to Clipboard Toggle word wrap

3.2.3. Google Cloud Platform storage

Prerequisites

  • You installed the Loki Operator.
  • You installed the OpenShift CLI (oc).
  • You created a project on Google Cloud Platform (GCP).
  • You created a bucket in the same project.
  • You created a service account in the same project for GCP authentication.

Procedure

  1. Copy the service account credentials received from GCP into a file called key.json.
  2. Create an object storage secret with the name logging-loki-gcs by running the following command:

    $ oc create secret generic logging-loki-gcs \
      --from-literal=bucketname="<bucket_name>" \
      --from-file=key.json="<path/to/key.json>"
    Copy to Clipboard Toggle word wrap

3.2.4. Minio storage

You can create an object storage in Minio to store logs.

Prerequisites

  • You installed the Loki Operator.
  • You installed the OpenShift CLI (oc).
  • You have Minio deployed on your cluster.
  • You created a bucket on Minio.

Procedure

  • Create an object storage secret with the name logging-loki-minio by running the following command:

    $ oc create secret generic logging-loki-minio \ 
    1
    
      --from-literal=bucketnames="<bucket_name>" \
      --from-literal=endpoint="<minio_bucket_endpoint>" \
      --from-literal=access_key_id="<minio_access_key_id>" \
      --from-literal=access_key_secret="<minio_access_key_secret>" \
      --from-literal=forcepathstyle="true" 
    2
    Copy to Clipboard Toggle word wrap
    1
    logging-loki-minio is the name of the secret.
    2
    AWS endpoints (those ending in .amazonaws.com) use a virtual-hosted style by default, which is equivalent to setting the forcepathstyle attribute to false. Conversely, non-AWS endpoints use a path style, equivalent to setting forcepathstyle attribute to true. If you need to use a virtual-hosted style with non-AWS S3 services, you must explicitly set forcepathstyle to false.

3.2.5. OpenShift Data Foundation storage

You can create an object storage in OpenShift Data Foundation storage to store logs.

Prerequisites

  • You installed the Loki Operator.
  • You installed the OpenShift CLI (oc).
  • You deployed OpenShift Data Foundation and configured it for object storage.

Procedure

  1. Create an ObjectBucketClaim custom resource in the openshift-logging namespace:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: loki-bucket-odf
      namespace: openshift-logging
    spec:
      generateBucketName: loki-bucket-odf
      storageClassName: openshift-storage.noobaa.io
    Copy to Clipboard Toggle word wrap
  2. Get bucket properties from the associated ConfigMap object by running the following command:

    BUCKET_HOST=$(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}')
    BUCKET_NAME=$(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}')
    BUCKET_PORT=$(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')
    Copy to Clipboard Toggle word wrap
  3. Get bucket access key from the associated secret by running the following command:

    ACCESS_KEY_ID=$(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
    SECRET_ACCESS_KEY=$(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
    Copy to Clipboard Toggle word wrap
  4. Create an object storage secret with the name logging-loki-odf by running the following command:

    $ oc create -n openshift-logging secret generic logging-loki-odf \ 
    1
    
      --from-literal=access_key_id="<access_key_id>" \
      --from-literal=access_key_secret="<secret_access_key>" \
      --from-literal=bucketnames="<bucket_name>" \
      --from-literal=endpoint="https://<bucket_host>:<bucket_port>" \
      --from-literal=forcepathstyle="true" 
    2
    Copy to Clipboard Toggle word wrap
    1
    logging-loki-odf is the name of the secret.
    2
    AWS endpoints (those ending in .amazonaws.com) use a virtual-hosted style by default, which is equivalent to setting the forcepathstyle attribute to false. Conversely, non-AWS endpoints use a path style, equivalent to setting forcepathstyle attribute to true. If you need to use a virtual-hosted style with non-AWS S3 services, you must explicitly set forcepathstyle to false.
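    Because the previous steps capture the bucket properties and credentials in shell variables, you can also substitute them directly. The following sketch assumes you run it in the same shell session:

    $ oc create -n openshift-logging secret generic logging-loki-odf \
      --from-literal=access_key_id="${ACCESS_KEY_ID}" \
      --from-literal=access_key_secret="${SECRET_ACCESS_KEY}" \
      --from-literal=bucketnames="${BUCKET_NAME}" \
      --from-literal=endpoint="https://${BUCKET_HOST}:${BUCKET_PORT}" \
      --from-literal=forcepathstyle="true"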

3.2.6. Swift storage

Prerequisites

  • You installed the Loki Operator.
  • You installed the OpenShift CLI (oc).
  • You created a bucket on Swift.

Procedure

  • Create an object storage secret with the name logging-loki-swift by running the following command:

    $ oc create secret generic logging-loki-swift \
      --from-literal=auth_url="<swift_auth_url>" \
      --from-literal=username="<swift_usernameclaim>" \
      --from-literal=user_domain_name="<swift_user_domain_name>" \
      --from-literal=user_domain_id="<swift_user_domain_id>" \
      --from-literal=user_id="<swift_user_id>" \
      --from-literal=password="<swift_password>" \
      --from-literal=domain_id="<swift_domain_id>" \
      --from-literal=domain_name="<swift_domain_name>" \
      --from-literal=container_name="<swift_container_name>"
    Copy to Clipboard Toggle word wrap
  • You can optionally provide project-specific data, region, or both by running the following command:

    $ oc create secret generic logging-loki-swift \
      --from-literal=auth_url="<swift_auth_url>" \
      --from-literal=username="<swift_usernameclaim>" \
      --from-literal=user_domain_name="<swift_user_domain_name>" \
      --from-literal=user_domain_id="<swift_user_domain_id>" \
      --from-literal=user_id="<swift_user_id>" \
      --from-literal=password="<swift_password>" \
      --from-literal=domain_id="<swift_domain_id>" \
      --from-literal=domain_name="<swift_domain_name>" \
      --from-literal=container_name="<swift_container_name>" \
      --from-literal=project_id="<swift_project_id>" \
      --from-literal=project_name="<swift_project_name>" \
      --from-literal=project_domain_id="<swift_project_domain_id>" \
      --from-literal=project_domain_name="<swift_project_domain_name>" \
      --from-literal=region="<swift_region>"
    Copy to Clipboard Toggle word wrap

For some storage providers, you can use the Cloud Credential Operator utility (ccoctl) during installation to implement short-term credentials. These credentials are created and managed outside the OpenShift Container Platform cluster. For more information, see Manual mode with short-term credentials for components.

Note

Short-term credential authentication must be configured during a new installation of Loki Operator, on a cluster that uses this credentials strategy. You cannot configure an existing cluster that uses a different credentials strategy to use this feature.

You can use workload identity federation with short-lived tokens to authenticate to cloud-based log stores. With workload identity federation, you do not have to store long-lived credentials in your cluster, which reduces the risk of credential leaks and simplifies secret management.

Prerequisites

  • You have administrator permissions.

Procedure

  • Use one of the following options to enable authentication:

    • If you used the OpenShift Container Platform web console to install the Loki Operator, the system automatically detects clusters that use short-lived tokens. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret.
    • If you used the OpenShift CLI (oc) to install the Loki Operator, you must manually create a Subscription object. Use the appropriate template for your storage provider, as shown in the following samples. This authentication strategy supports only the storage providers indicated within the samples.

      Microsoft Azure sample subscription

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: loki-operator
        namespace: openshift-operators-redhat
      spec:
        channel: "stable-6.3"
        installPlanApproval: Manual
        name: loki-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
        config:
          env:
            - name: CLIENTID
              value: <your_client_id>
            - name: TENANTID
              value: <your_tenant_id>
            - name: SUBSCRIPTIONID
              value: <your_subscription_id>
            - name: REGION
              value: <your_region>
      Copy to Clipboard Toggle word wrap

      Amazon Web Services (AWS) sample subscription

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: loki-operator
        namespace: openshift-operators-redhat
      spec:
        channel: "stable-6.3"
        installPlanApproval: Manual
        name: loki-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
        config:
          env:
          - name: ROLEARN
            value: <role_ARN>
      Copy to Clipboard Toggle word wrap

      Google Cloud Platform (GCP) sample subscription

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: loki-operator
        namespace: openshift-operators-redhat
      spec:
        channel: "stable-6.3"
        installPlanApproval: Manual
        name: loki-operator
        source:  redhat-operators
        sourceNamespace: openshift-marketplace
        config:
          env:
          - name: PROJECT_NUMBER
            value: <your_project_number>
          - name: POOL_ID
            value: <your_pool_id>
          - name: PROVIDER_ID
            value: <your_provider_id>
          - name: SERVICE_ACCOUNT_EMAIL
            value: example@mydomain.iam.gserviceaccount.com
      Copy to Clipboard Toggle word wrap

You can create a LokiStack custom resource (CR) by using the OpenShift Container Platform web console.

Prerequisites

  • You have administrator permissions.
  • You have access to the OpenShift Container Platform web console.
  • You installed the Loki Operator.

Procedure

  1. Go to the Operators → Installed Operators page. Click the All instances tab.
  2. From the Create new drop-down list, select LokiStack.
  3. Select YAML view, and then use the following template to create a LokiStack CR:

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki 
    1
    
      namespace: openshift-logging
    spec:
      size: 1x.small 
    2
    
      storage:
        schemas:
          - effectiveDate: '2023-10-15'
            version: v13
        secret:
          name: logging-loki-s3 
    3
    
          type: s3 
    4
    
          credentialMode: 
    5
    
      storageClassName: <storage_class_name> 
    6
    
      tenants:
        mode: openshift-logging
    Copy to Clipboard Toggle word wrap
    1
    Use the name logging-loki.
    2
    Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium.
    3
    Specify the secret used for your log storage.
    4
    Specify the corresponding storage type.
    5
    Optional field, logging 5.9 and later. Supported user-configured values are as follows:
    • static: the default authentication mode, available for all supported object storage types, using credentials stored in a Secret.
    • token: short-lived tokens retrieved from a credential source. In this mode, the static configuration does not contain the credentials needed for the object storage. Instead, they are generated during runtime by using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types.
    • token-cco: the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters.
    6
    Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command.

To configure Loki object storage, you must create a secret. You can do this by using the OpenShift CLI (oc).

Prerequisites

  • You have administrator permissions.
  • You installed the Loki Operator.
  • You installed the OpenShift CLI (oc).

Procedure

  • Create a secret in the directory that contains your certificate and key files by running the following command:

    $ oc create secret generic -n openshift-logging <your_secret_name> \
     --from-file=tls.key=<your_key_file> \
     --from-file=tls.crt=<your_crt_file> \
     --from-file=ca-bundle.crt=<your_bundle_file> \
     --from-literal=username=<your_username> \
     --from-literal=password=<your_password>
    Copy to Clipboard Toggle word wrap
Note

Use generic or opaque secrets for best results.

Verification

  • Verify that a secret was created by running the following command:

    $ oc get secrets
    Copy to Clipboard Toggle word wrap

3.2.8. Fine grained access for Loki logs

The Red Hat OpenShift Logging Operator does not grant all users access to logs by default. As an administrator, you must configure your users' access unless the Operator was upgraded and prior configurations are in place. Depending on your configuration and needs, you can configure fine-grained access to logs by using the following:

  • Cluster wide policies
  • Namespace scoped policies
  • Creation of custom admin groups

As an administrator, you need to create the role bindings and cluster role bindings appropriate for your deployment. The Red Hat OpenShift Logging Operator provides the following cluster roles:

  • cluster-logging-application-view grants permission to read application logs.
  • cluster-logging-infrastructure-view grants permission to read infrastructure logs.
  • cluster-logging-audit-view grants permission to read audit logs.
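
For example, you can grant one of these roles cluster wide to a specific user with a standard RBAC binding command; the user name shown is a placeholder:

$ oc adm policy add-cluster-role-to-user cluster-logging-application-view <username>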

If you have upgraded from a prior version, an additional cluster role logging-application-logs-reader and associated cluster role binding logging-all-authenticated-application-logs-reader provide backward compatibility, allowing any authenticated user read access in their namespaces.

Note

Users with access by namespace must provide a namespace when querying application logs.
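For example, a namespaced user might include the namespace in a LogQL query similar to the following sketch; the label names reflect the OpenShift Loki tenant conventions and are an assumption here:

{ kubernetes_namespace_name="my-project", log_type="application" } | json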

3.2.8.1. Cluster wide access

Cluster role binding resources reference cluster roles, and set permissions cluster wide.

Example ClusterRoleBinding

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: logging-all-application-logs-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-logging-application-view 
1

subjects: 
2

- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
Copy to Clipboard Toggle word wrap

1
Additional ClusterRoles are cluster-logging-infrastructure-view, and cluster-logging-audit-view.
2
Specifies the users or groups this object applies to.
3.2.8.2. Namespaced access

You can use RoleBinding resources with ClusterRole objects to define the namespaces for which a user or group can access logs.

Example RoleBinding

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: allow-read-logs
  namespace: log-test-0 
1

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-logging-application-view
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: testuser-0
Copy to Clipboard Toggle word wrap

1
Specifies the namespace this RoleBinding applies to.
3.2.8.3. Custom admin group access

If you have a large deployment with several users who require broader permissions, you can create a custom group by using the adminGroups field. Users who are members of any group specified in the adminGroups field of the LokiStack CR are considered administrators.

Administrator users have access to all application logs in all namespaces, if they also get assigned the cluster-logging-application-view role.

Example LokiStack CR

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  tenants:
    mode: openshift-logging 
1

    openshift:
      adminGroups: 
2

      - cluster-admin
      - custom-admin-group 
3
Copy to Clipboard Toggle word wrap

1
Custom admin groups are only available in this mode.
2
Entering an empty list [] value for this field disables admin groups.
3
Overrides the default groups (system:cluster-admins, cluster-admin, dedicated-admin)
Important

Querying application logs for multiple namespaces as a cluster-admin user, where the sum total of characters of all of the namespaces in the cluster is greater than 5120, results in the error Parse error: input size too long (XXXX > 5120). For better control over access to logs in LokiStack, make the cluster-admin user a member of the cluster-admin group. If the cluster-admin group does not exist, create it and add the desired users to it.

Use the following procedure to create a new group for users with cluster-admin permissions.

Procedure

  1. Enter the following command to create a new group:

    $ oc adm groups new cluster-admin
    Copy to Clipboard Toggle word wrap
  2. Enter the following command to add the desired user to the cluster-admin group:

    $ oc adm groups add-users cluster-admin <username>
    Copy to Clipboard Toggle word wrap
  3. Enter the following command to add the cluster-admin role to the group:

    $ oc adm policy add-cluster-role-to-group cluster-admin cluster-admin
    Copy to Clipboard Toggle word wrap

3.3. Enhanced reliability and performance for Loki

Use the following configurations to ensure reliability and efficiency of Loki in production.

3.3.1. Loki pod placement

You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods.

You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node.
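
For example, to reserve a node for the log store with the same key and value that the tolerations in the following examples use, you could taint the node as shown in this sketch; the node name is a placeholder:

$ oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoSchedule
$ oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoExecute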

Example LokiStack with node selectors

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
# ...
  template:
    compactor: 
1

      nodeSelector:
        node-role.kubernetes.io/infra: "" 
2

    distributor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    gateway:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    indexGateway:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    ingester:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    querier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    queryFrontend:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    ruler:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
# ...
Copy to Clipboard Toggle word wrap

1
Specifies the component pod type that applies to the node selector.
2
Specifies the pods that are moved to nodes containing the defined label.

Example LokiStack CR with node selectors and tolerations

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
# ...
  template:
    compactor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    distributor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    indexGateway:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    ingester:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    querier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    queryFrontend:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    ruler:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    gateway:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
# ...

To configure the nodeSelector and tolerations fields of the LokiStack custom resource (CR), you can use the oc explain command to view the description and fields for a particular resource:

$ oc explain lokistack.spec.template

Example output

KIND:     LokiStack
VERSION:  loki.grafana.com/v1

RESOURCE: template <Object>

DESCRIPTION:
     Template defines the resource/limits/tolerations/nodeselectors per
     component

FIELDS:
   compactor	<Object>
     Compactor defines the compaction component spec.

   distributor	<Object>
     Distributor defines the distributor component spec.
...

For more detailed information, you can add a specific field:

$ oc explain lokistack.spec.template.compactor

Example output

KIND:     LokiStack
VERSION:  loki.grafana.com/v1

RESOURCE: compactor <Object>

DESCRIPTION:
     Compactor defines the compaction component spec.

FIELDS:
   nodeSelector	<map[string]string>
     NodeSelector defines the labels required by a node to schedule the
     component onto it.
...

3.3.2. Configuring Loki to tolerate node failure

The Loki Operator supports pod anti-affinity rules, which request that pods of the same component be scheduled on different available nodes in the cluster.

Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on the same node as other pods that match a label selector.

In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods.

The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components.

You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field:

Example user settings for the ingester component

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
# ...
  template:
    ingester:
      podAntiAffinity:
      # ...
        requiredDuringSchedulingIgnoredDuringExecution: # 1
        - labelSelector:
            matchLabels: # 2
              app.kubernetes.io/component: ingester
          topologyKey: kubernetes.io/hostname
# ...

1 The stanza to define a required rule.
2 The key-value pair (label) that must be matched to apply the rule.
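
After the pods are rescheduled, you can verify that the ingester pods, selected by the app.kubernetes.io/component=ingester label used in the rule, run on different nodes, for example:

$ oc get pods -l app.kubernetes.io/component=ingester -n openshift-logging -o wide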

3.3.3. Enabling stream-based retention with Loki

You can configure retention policies based on log streams. You can set retention rules globally, per-tenant, or both. If you configure both, tenant rules apply before global rules.

Important

If there is no retention period defined on the S3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the S3 bucket forever, which might fill up the S3 storage.

Note
  • Although logging version 5.9 and later supports schema v12, schema v13 is recommended for future compatibility.
  • For cost-effective log pruning, configure retention policies directly on the object storage provider. Use the lifecycle management features of the storage provider to ensure automatic deletion of old logs. This also avoids extra processing from Loki and delete requests to S3.

    If the object storage does not support lifecycle policies, you must configure LokiStack to enforce retention internally. The supported retention period is up to 30 days.
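
    For example, on AWS S3 you might configure a bucket lifecycle rule similar to the following sketch, which expires objects after 30 days. The bucket name is a placeholder, and the exact rule depends on your storage provider and bucket layout:

    $ aws s3api put-bucket-lifecycle-configuration --bucket <bucket_name> --lifecycle-configuration '{"Rules":[{"ID":"expire-logs","Status":"Enabled","Filter":{"Prefix":""},"Expiration":{"Days":30}}]}'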

Prerequisites

  • You have administrator permissions.
  • You have installed the Loki Operator.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. To enable stream-based retention, create a LokiStack CR and save it as a YAML file. In the following example, it is called lokistack.yaml.

    Example global stream-based retention for S3

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki
      namespace: openshift-logging
    spec:
      limits:
        global: # 1
          retention: # 2
            days: 20
            streams:
            - days: 4
              priority: 1
              selector: '{kubernetes_namespace_name=~"test.+"}' # 3
            - days: 1
              priority: 1
              selector: '{log_type="infrastructure"}'
      managementState: Managed
      replicationFactor: 1
      size: 1x.small
      storage:
        schemas:
        - effectiveDate: "2020-10-11"
          version: v13
        secret:
          name: logging-loki-s3
          type: s3
      storageClassName: gp3-csi
      tenants:
        mode: openshift-logging

    1 Set the retention policy for all log streams. This policy does not impact the retention period for stored logs in object storage.
    2 Enable retention in the cluster by adding the retention block to the CR.
    3 Specify the LogQL query to match log streams to the retention rule.

    Example per-tenant stream-based retention for S3

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki
      namespace: openshift-logging
    spec:
      limits:
        global:
          retention:
            days: 20
        tenants: # 1
          application:
            retention:
              days: 1
              streams:
                - days: 4
                  selector: '{kubernetes_namespace_name=~"test.+"}' # 2
          infrastructure:
            retention:
              days: 5
              streams:
                - days: 1
                  selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}'
      managementState: Managed
      replicationFactor: 1
      size: 1x.small
      storage:
        schemas:
        - effectiveDate: "2020-10-11"
          version: v13
        secret:
          name: logging-loki-s3
          type: s3
      storageClassName: gp3-csi
      tenants:
        mode: openshift-logging

    1 Set the retention policy per-tenant. Valid tenant types are application, audit, and infrastructure.
    2 Specify the LogQL query to match log streams to the retention rule.
  2. Apply the LokiStack CR:

    $ oc apply -f lokistack.yaml

3.3.4. Configuring Loki to tolerate memberlist creation failure

In an OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks.

As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command:

$ oc patch LokiStack logging-loki -n openshift-logging  --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}'

Example LokiStack to include podIP

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
# ...
  hashRing:
    type: memberlist
    memberlist:
      instanceAddrType: podIP
# ...

3.3.5. LokiStack behavior during cluster restarts

When an OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the CPU and memory resources available to the node. This means that there is no downtime for the LokiStack during OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions.
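
You can list the PodDisruptionBudget resources that the Loki Operator provisioned, for example:

$ oc get poddisruptionbudget -n openshift-logging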

3.4. Advanced deployment and scalability for Loki

You can configure high availability, scalability, and error handling for Loki.

3.4.1. Zone aware data replication

The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2.

To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation.

Example LokiStack CR with zone replication enabled

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  replicationFactor: 2 # 1
  replication:
    factor: 2 # 2
    zones:
    - maxSkew: 1 # 3
      topologyKey: topology.kubernetes.io/zone # 4

1 Deprecated field, values entered are overwritten by replication.factor.
2 This value is automatically set when deployment size is selected at setup.
3 The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0.
4 Defines zones in the form of a topology key that corresponds to a node label.
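
Because the topologyKey refers to the topology.kubernetes.io/zone node label, you can review how your nodes are distributed across zones, for example:

$ oc get nodes -L topology.kubernetes.io/zone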

3.4.2. Recovering Loki pods from failed zones

In OpenShift Container Platform, a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss.

Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone.

Warning

The following procedure deletes the PVCs in the failed zone, and all data contained therein. To avoid complete data loss, set the replication factor field of the LokiStack CR to a value greater than 1 so that Loki replicates data.

Prerequisites

  • Verify your LokiStack CR has a replication factor greater than 1.
  • A zone failure has been detected by the control plane, and the nodes in the failed zone have been marked by the cloud provider integration.

The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone.

Procedure

  1. List the pods in Pending status by running the following command:

    $ oc get pods --field-selector status.phase==Pending -n openshift-logging

    Example oc get pods output

    NAME                           READY   STATUS    RESTARTS   AGE
    logging-loki-index-gateway-1   0/1     Pending   0          17m
    logging-loki-ingester-1        0/1     Pending   0          16m
    logging-loki-ruler-1           0/1     Pending   0          16m

    These pods are in Pending status because their corresponding PVCs are in the failed zone.
  2. List the PVCs in Pending status by running the following command:

    $ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r

    Example oc get pvc output

    storage-logging-loki-index-gateway-1
    storage-logging-loki-ingester-1
    wal-logging-loki-ingester-1
    storage-logging-loki-ruler-1
    wal-logging-loki-ruler-1

  3. Delete the PVC(s) for a pod by running the following command:

    $ oc delete pvc <pvc_name> -n openshift-logging
  4. Delete the pod(s) by running the following command:

    $ oc delete pod <pod_name> -n openshift-logging

    Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone.

The PVCs might hang in a terminating state without being deleted if the PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers allows the PVCs to be deleted successfully.

  • Remove the finalizer for each PVC by running the command below, then retry deletion.

    $ oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-logging

3.4.3. Troubleshooting Loki rate limit errors

If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors.

These errors can occur during normal operation. For example, when adding logging to a cluster that already has some logs, rate limit errors might occur while logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention.

In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR).

Important

The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers.

Conditions

  • The Log Forwarder API is configured to forward logs to Loki.
  • Your system sends a block of messages that is larger than 2 MB to Loki. For example:

    "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\
    .......
    ......
    ......
    ......
    \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]}
  • After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages:

    429 Too Many Requests Ingestion rate limit exceeded

    Example Vector error message

    2023-08-25T16:08:49.301780Z  WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true

    The error is also visible on the receiving end. For example, in the LokiStack ingester pod:

    Example Loki ingester error message

    level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream

Procedure

  • Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR:

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki
      namespace: openshift-logging
    spec:
      limits:
        global:
          ingestion:
            ingestionBurstSize: 16 # 1
            ingestionRate: 8 # 2
    # ...

    1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted.
    2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention.
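
    Alternatively, you can apply the same values with a merge patch instead of editing the CR manually. This sketch uses the example values shown above:

    $ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec":{"limits":{"global":{"ingestion":{"ingestionBurstSize":16,"ingestionRate":8}}}}}'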

3.5. Log-based alerts for Loki

You can configure log-based alerts for Loki by creating an AlertingRule custom resource (CR).

3.5.1. Authorizing LokiStack rules RBAC permissions

Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users.

The following cluster roles for alerting and recording rules are available for LokiStack:

  • alertingrules.loki.grafana.com-v1-admin: Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch AlertingRule resources within the loki.grafana.com/v1 API group.
  • alertingrules.loki.grafana.com-v1-crdview: Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to AlertingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources.
  • alertingrules.loki.grafana.com-v1-edit: Users with this role have permission to create, update, and delete AlertingRule resources.
  • alertingrules.loki.grafana.com-v1-view: Users with this role can read AlertingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them.
  • recordingrules.loki.grafana.com-v1-admin: Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch RecordingRule resources within the loki.grafana.com/v1 API group.
  • recordingrules.loki.grafana.com-v1-crdview: Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to RecordingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources.
  • recordingrules.loki.grafana.com-v1-edit: Users with this role have permission to create, update, and delete RecordingRule resources.
  • recordingrules.loki.grafana.com-v1-view: Users with this role can read RecordingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing recording rules but cannot make any modifications to them.

3.5.1.1. Examples

To apply cluster roles for a user, you must bind an existing cluster role to a specific username.

Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster.

The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster:

Example cluster role binding command for alerting rule CRUD permissions in a specific namespace

$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>

The following command gives the specified user administrator permissions for alerting rules in all namespaces:

Example cluster role binding command for administrator permissions

$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>

3.5.2. Creating a log-based alerting rule with Loki

The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions:

  • If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule.
  • If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule.
  • If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule.
  • If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule.
  • If none of the above applies, an alerting rule is considered valid.
Table 3.3. AlertingRule definitions

Tenant type | Valid namespaces for AlertingRule CRs
application | <your_application_namespace>
audit | openshift-logging
infrastructure | openshift-*, kube-*, default

Procedure

  1. Create an AlertingRule custom resource (CR):

    Example infrastructure AlertingRule CR

      apiVersion: loki.grafana.com/v1
      kind: AlertingRule
      metadata:
        name: loki-operator-alerts
        namespace: openshift-operators-redhat # 1
        labels: # 2
          openshift.io/<label_name>: "true"
      spec:
        tenantID: "infrastructure" # 3
        groups:
          - name: LokiOperatorHighReconciliationError
            rules:
              - alert: HighPercentageError
                expr: | # 4
                  sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job)
                    /
                  sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job)
                    > 0.01
                for: 10s
                labels:
                  severity: critical # 5
                annotations:
                  summary: High Loki Operator Reconciliation Errors # 6
                  description: High Loki Operator Reconciliation Errors # 7

    1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition.
    2 The labels block must match the LokiStack spec.rules.selector definition.
    3 AlertingRule CRs for infrastructure tenants are only supported in the openshift-*, kube-*, or default namespaces.
    4 The value for kubernetes_namespace_name: must match the value for metadata.namespace.
    5 The value of this mandatory field must be critical, warning, or info.
    6 This field is mandatory.
    7 This field is mandatory.

    Example application AlertingRule CR

      apiVersion: loki.grafana.com/v1
      kind: AlertingRule
      metadata:
        name: app-user-workload
        namespace: app-ns # 1
        labels: # 2
          openshift.io/<label_name>: "true"
      spec:
        tenantID: "application"
        groups:
          - name: AppUserWorkloadHighError
            rules:
              - alert:
                expr: | # 3
                  sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job)
                for: 10s
                labels:
                  severity: critical # 4
                annotations:
                  summary: # 5
                  description: # 6

    1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition.
    2 The labels block must match the LokiStack spec.rules.selector definition.
    3 Value for kubernetes_namespace_name: must match the value for metadata.namespace.
    4 The value of this mandatory field must be critical, warning, or info.
    5 The value of this mandatory field is a summary of the rule.
    6 The value of this mandatory field is a detailed description of the rule.
  2. Apply the AlertingRule CR:

    $ oc apply -f <filename>.yaml
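
    To confirm that the rule was created, you can list AlertingRule resources in the target namespace, for example:

    $ oc get alertingrules.loki.grafana.com -n <namespace>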

Chapter 4. OTLP data ingestion in Loki

Logging provides an API endpoint that ingests logs by using the OpenTelemetry Protocol (OTLP). Because OTLP is a standardized format that was not designed specifically for Loki, it requires additional Loki configuration to map the OpenTelemetry data format to the Loki data model. OTLP lacks concepts such as stream labels or structured metadata. Instead, OTLP provides metadata about log entries as attributes, grouped into the following three categories:

  • Resource
  • Scope
  • Log

You can set metadata for multiple entries simultaneously or individually as needed.

4.1. Configuring LokiStack for OTLP data ingestion

Important

The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

To configure a LokiStack custom resource (CR) for OTLP ingestion, follow these steps:

Prerequisites

  • Ensure that your Loki setup supports structured metadata, which was introduced in schema version 13, to enable OTLP log ingestion.

Procedure

  1. Set the schema version:

    • When creating a new LokiStack CR, set version: v13 in the storage schema configuration.

      Note

      For existing configurations, add a new schema entry with version: v13 and an effectiveDate in the future. For more information on updating schema versions, see Upgrading Schemas (Grafana documentation).

  2. Configure the storage schema as follows:

    Example storage schema configuration

    # ...
    spec:
      storage:
        schemas:
        - version: v13
          effectiveDate: "2024-10-25"

    Once the effectiveDate has passed, the v13 schema takes effect, enabling your LokiStack to store structured metadata.
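
    For example, an existing LokiStack that still uses schema v12 can keep its current entry and add a new v13 entry with an effectiveDate in the future. This is a minimal sketch; the dates are placeholders:

    # ...
    spec:
      storage:
        schemas:
        - version: v12
          effectiveDate: "2020-10-11"
        - version: v13
          effectiveDate: "2024-10-25"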

4.2. Attribute mapping

When you set the Loki Operator to the openshift-logging mode, the Loki Operator automatically applies a default set of attribute mappings. These mappings align specific OTLP attributes with Loki stream labels and structured metadata.

For typical setups, these default mappings are sufficient. However, you might need to customize attribute mapping in the following cases:

  • Using a custom collector: If your setup includes a custom collector that generates additional attributes that you do not want to store, consider customizing the mapping to ensure these attributes are dropped by Loki.
  • Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the logging process.

4.2.1. Custom attribute mapping for OpenShift

When using the Loki Operator in openshift-logging mode, attribute mappings follow the OpenShift default values, but you can configure custom mappings to adjust the defaults. In the openshift-logging mode, you can configure custom attribute mappings globally for all tenants or for individual tenants as needed. When you define custom mappings, they are appended to the OpenShift default values. If you do not need default labels, you can disable them in the tenant configuration.

Note

A major difference between the Loki Operator and Loki lies in inheritance handling. Loki copies only default_resource_attributes_as_index_labels to tenants by default, while the Loki Operator applies the entire global configuration to each tenant in the openshift-logging mode.

Within LokiStack, attribute mapping configuration is managed through the limits setting. See the following example LokiStack configuration:

# ...
spec:
  limits:
    global:
      otlp: {} # 1
    tenants:
      application:
        otlp: {} # 2

1 Defines global OTLP attribute configuration.
2 Defines the OTLP attribute configuration for the application tenant within the openshift-logging mode. You can also configure infrastructure and audit tenants in addition to application tenants.
Note

You can use both global and per-tenant OTLP configurations for mapping attributes to stream labels.

Stream labels derive only from resource-level attributes, which the LokiStack resource structure reflects. See the following LokiStack example configuration:

spec:
  limits:
    global:
      otlp:
        streamLabels:
          resourceAttributes:
          - name: "k8s.namespace.name"
          - name: "k8s.pod.name"
          - name: "k8s.container.name"

You can drop attributes of type resource, scope, or log from the log entry.

# ...
spec:
  limits:
    global:
      otlp:
        streamLabels:
# ...
        drop:
          resourceAttributes:
          - name: "process.command_line"
          - name: "k8s\\.pod\\.labels\\..+"
            regex: true
          scopeAttributes:
          - name: "service.name"
          logAttributes:
          - name: "http.route"

You can use regular expressions by setting regex: true to apply a configuration for attributes with similar names.

Important

Avoid using regular expressions for stream labels, as this can increase data volume.

Attributes that are not explicitly set as stream labels or dropped from the entry are saved as structured metadata by default.

4.2.2. Customizing OpenShift defaults

In the openshift-logging mode, certain attributes are required and cannot be removed from the configuration due to their role in OpenShift functions. Other attributes, labeled recommended, might be dropped if performance is impacted. For information about the attributes, see OpenTelemetry data model attributes.

When using the openshift-logging mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or some attributes need to be dropped, use custom configuration. Custom configurations can merge with default configurations.

Chapter 5. OpenTelemetry data model

This document outlines the protocol and semantic conventions for OpenTelemetry support in Red Hat OpenShift Logging.

Important

The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

5.1. Forwarding and ingestion protocol

Red Hat OpenShift Logging collects and forwards logs to OpenTelemetry endpoints by using the OTLP specification. OTLP encodes, transports, and delivers telemetry data. You can also deploy Loki storage, which provides an OTLP endpoint to ingest log streams. This document defines the semantic conventions for the logs collected from various OpenShift cluster sources.

5.2. Semantic conventions

The log collector in this solution gathers the following log streams:

  • Container logs
  • Cluster node journal logs
  • Cluster node auditd logs
  • Kubernetes and OpenShift API server logs
  • OpenShift Virtual Network (OVN) logs

You can forward these streams according to the semantic conventions defined by OpenTelemetry semantic attributes. The semantic conventions in OpenTelemetry define a resource as an immutable representation of the entity producing telemetry, identified by attributes. For example, a process running in a container includes attributes such as container_name, cluster_id, pod_name, namespace, and possibly deployment or app_name. These attributes are grouped under the resource object, which helps reduce repetition and optimizes log transmission as telemetry data.

In addition to resource attributes, logs might also contain scope attributes specific to instrumentation libraries and log attributes specific to each log entry. These attributes provide greater detail about each log entry and enhance filtering capabilities when querying logs in storage.

The following sections define the attributes that are generally forwarded.

5.2.1. Log entry structure

All log streams include the following log data fields:

The Applicable Sources column indicates which log sources each field applies to:

  • all: This field is present in all logs.
  • container: This field is present in Kubernetes container logs, both application and infrastructure.
  • audit: This field is present in Kubernetes, OpenShift API, and OVN logs.
  • auditd: This field is present in node auditd logs.
  • journal: This field is present in node journal logs.
Name | Applicable Sources | Comment
body | all |
observedTimeUnixNano | all |
timeUnixNano | all |
severityText | container, journal |
attributes | all | (Optional) Present when forwarding stream specific attributes

5.2.2. Attributes

Log entries include a set of resource, scope, and log attributes based on their source, as described in the following table.

The Location column specifies the type of attribute:

  • resource: Indicates a resource attribute
  • scope: Indicates a scope attribute
  • log: Indicates a log attribute

The Storage column indicates whether the attribute is stored in a LokiStack using the default openshift-logging mode and specifies where the attribute is stored:

  • stream label:

    • Enables efficient filtering and querying based on specific labels.
    • Can be labeled as required if the Loki Operator enforces this attribute in the configuration.
  • structured metadata:

    • Allows for detailed filtering and storage of key-value pairs.
    • Enables users to use direct labels for streamlined queries without requiring JSON parsing.

With OTLP, users can filter queries directly by labels rather than using JSON parsing, improving the speed and efficiency of queries.

Name | Location | Applicable Sources | Storage (LokiStack) | Comment
log_source | resource | all | required stream label | (DEPRECATED) Compatibility attribute, contains same information as openshift.log.source
log_type | resource | all | required stream label | (DEPRECATED) Compatibility attribute, contains same information as openshift.log.type
kubernetes.container_name | resource | container | stream label | (DEPRECATED) Compatibility attribute, contains same information as k8s.container.name
kubernetes.host | resource | all | stream label | (DEPRECATED) Compatibility attribute, contains same information as k8s.node.name
kubernetes.namespace_name | resource | container | required stream label | (DEPRECATED) Compatibility attribute, contains same information as k8s.namespace.name
kubernetes.pod_name | resource | container | stream label | (DEPRECATED) Compatibility attribute, contains same information as k8s.pod.name
openshift.cluster_id | resource | all |  | (DEPRECATED) Compatibility attribute, contains same information as openshift.cluster.uid
level | log | container, journal |  | (DEPRECATED) Compatibility attribute, contains same information as severityText
openshift.cluster.uid | resource | all | required stream label |
openshift.log.source | resource | all | required stream label |
openshift.log.type | resource | all | required stream label |
openshift.labels.* | resource | all | structured metadata |
k8s.node.name | resource | all | stream label |
k8s.namespace.name | resource | container | required stream label |
k8s.container.name | resource | container | stream label |
k8s.pod.labels.* | resource | container | structured metadata |
k8s.pod.name | resource | container | stream label |
k8s.pod.uid | resource | container | structured metadata |
k8s.cronjob.name | resource | container | stream label | Conditionally forwarded based on creator of pod
k8s.daemonset.name | resource | container | stream label | Conditionally forwarded based on creator of pod
k8s.deployment.name | resource | container | stream label | Conditionally forwarded based on creator of pod
k8s.job.name | resource | container | stream label | Conditionally forwarded based on creator of pod
k8s.replicaset.name | resource | container | structured metadata | Conditionally forwarded based on creator of pod
k8s.statefulset.name | resource | container | stream label | Conditionally forwarded based on creator of pod
log.iostream | log | container | structured metadata |
k8s.audit.event.level | log | audit | structured metadata |
k8s.audit.event.stage | log | audit | structured metadata |
k8s.audit.event.user_agent | log | audit | structured metadata |
k8s.audit.event.request.uri | log | audit | structured metadata |
k8s.audit.event.response.code | log | audit | structured metadata |
k8s.audit.event.annotation.* | log | audit | structured metadata |
k8s.audit.event.object_ref.resource | log | audit | structured metadata |
k8s.audit.event.object_ref.name | log | audit | structured metadata |
k8s.audit.event.object_ref.namespace | log | audit | structured metadata |
k8s.audit.event.object_ref.api_group | log | audit | structured metadata |
k8s.audit.event.object_ref.api_version | log | audit | structured metadata |
k8s.user.username | log | audit | structured metadata |
k8s.user.groups | log | audit | structured metadata |
process.executable.name | resource | journal | structured metadata |
process.executable.path | resource | journal | structured metadata |
process.command_line | resource | journal | structured metadata |
process.pid | resource | journal | structured metadata |
service.name | resource | journal | stream label |
systemd.t.* | log | journal | structured metadata |
systemd.u.* | log | journal | structured metadata |
 
Note

Attributes marked as Compatibility attribute support minimal backward compatibility with the ViaQ data model. These attributes are deprecated and function as a compatibility layer to ensure continued UI functionality. These attributes will remain supported until the Logging UI fully supports the OpenTelemetry counterparts in future releases.

Loki changes the attribute names when persisting them to storage. The names are lowercased, and all characters in the set (., /, -) are replaced by underscores (_). For example, k8s.namespace.name becomes k8s_namespace_name.
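
For example, if k8s.namespace.name is configured as a stream label, a LogQL query against LokiStack references the normalized label name. The namespace value here is a placeholder:

{k8s_namespace_name="<namespace_name>"}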

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.