Chapter 1. Configuring log forwarding
The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs.
Key Functions of the ClusterLogForwarder
- Selects log messages using inputs
- Forwards logs to external destinations using outputs
- Filters, transforms, and drops log messages using filters
- Defines log forwarding pipelines connecting inputs, filters and outputs
1.1. Setting up log collection
This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource.
The Red Hat OpenShift Logging Operator provides the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.
Set up log collection by binding the required cluster roles to your service account.
1.1.1. Legacy service accounts
To use the existing legacy service account logcollector, create the following ClusterRoleBinding objects:
$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector
Additionally, create the following ClusterRoleBinding if collecting audit logs:
$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector
1.1.2. Creating service accounts
Prerequisites
- The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace.
- You have administrator permissions.
Procedure
1. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account.
2. Bind the appropriate cluster roles to the service account:
Example binding command
$ oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>
1.1.2.1. Cluster role binding for your service account
The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide.
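A minimal sketch of such a binding, reconstructed to match the annotations below; callout numbers appear as comments and the metadata name is illustrative:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: manager-rolebinding
roleRef:                                 # 1
  apiGroup: rbac.authorization.k8s.io    # 2
  kind: ClusterRole                      # 3
  name: cluster-logging-operator         # 4
subjects:                                # 5
  - kind: ServiceAccount                 # 6
    name: cluster-logging-operator       # 7
    namespace: openshift-logging         # 8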
1. roleRef: References the ClusterRole to which the binding applies.
2. apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system.
3. kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide.
4. name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator.
5. subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole.
6. kind: Specifies that the subject is a ServiceAccount.
7. name: The name of the ServiceAccount being granted the permissions.
8. namespace: Indicates the namespace where the ServiceAccount is located.
1.1.2.2. Writing application logs
The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application.
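A minimal sketch of this ClusterRole, reconstructed to match the annotations below; callout numbers appear as comments and the metadata name is illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-application-logs
rules:                      # 1
  - apiGroups:              # 2
      - loki.grafana.com    # 3
    resources:              # 4
      - application         # 5
    resourceNames:          # 6
      - logs                # 7
    verbs:                  # 8
      - create              # 9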
1. rules: Specifies the permissions granted by this ClusterRole.
2. apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system.
3. loki.grafana.com: The API group for managing Loki-related resources.
4. resources: The resource type that the ClusterRole grants permission to interact with.
5. application: Refers to the application resources within the Loki logging system.
6. resourceNames: Specifies the names of resources that this role can manage.
7. logs: Refers to the log resources that can be created.
8. verbs: The actions allowed on the resources.
9. create: Grants permission to create new logs in the Loki system.
1.1.2.3. Writing audit logs
The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system.
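A sketch of the audit variant, again reconstructed from the annotations below with an illustrative metadata name:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-audit-logs
rules:                      # 1
  - apiGroups:              # 2
      - loki.grafana.com    # 3
    resources:              # 4
      - audit               # 5
    resourceNames:          # 6
      - logs                # 7
    verbs:                  # 8
      - create              # 9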
1. rules: Defines the permissions granted by this ClusterRole.
2. apiGroups: Specifies the API group loki.grafana.com.
3. loki.grafana.com: The API group responsible for Loki logging resources.
4. resources: Refers to the resource type this role manages, in this case, audit.
5. audit: Specifies that the role manages audit logs within Loki.
6. resourceNames: Defines the specific resources that the role can access.
7. logs: Refers to the logs that can be managed under this role.
8. verbs: The actions allowed on the resources.
9. create: Grants permission to create new audit logs.
1.1.2.4. Writing infrastructure logs
The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system.
Sample YAML
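A sketch reconstructed from the annotations below; callout numbers appear as comments and the metadata name is illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-infrastructure-logs
rules:                      # 1
  - apiGroups:              # 2
      - loki.grafana.com    # 3
    resources:              # 4
      - infrastructure      # 5
    resourceNames:          # 6
      - logs                # 7
    verbs:                  # 8
      - create              # 9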
1. rules: Specifies the permissions this ClusterRole grants.
2. apiGroups: Specifies the API group for Loki-related resources.
3. loki.grafana.com: The API group managing the Loki logging system.
4. resources: Defines the resource type that this role can interact with.
5. infrastructure: Refers to infrastructure-related resources that this role manages.
6. resourceNames: Specifies the names of resources this role can manage.
7. logs: Refers to the log resources related to infrastructure.
8. verbs: The actions permitted by this role.
9. create: Grants permission to create infrastructure logs in the Loki system.
1.1.2.5. ClusterLogForwarder editor role
The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift.
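A sketch reconstructed from the annotations below; callout numbers appear as comments and the metadata name is illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterlogforwarder-editor-role
rules:                                # 1
  - apiGroups:                        # 2
      - observability.openshift.io    # 3
    resources:                        # 4
      - clusterlogforwarders          # 5
    verbs:                            # 6
      - create                        # 7
      - delete                        # 8
      - get                           # 9
      - list                          # 10
      - patch                         # 11
      - update                        # 12
      - watch                         # 13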
1. rules: Specifies the permissions this ClusterRole grants.
2. apiGroups: Refers to the OpenShift-specific API group.
3. observability.openshift.io: The API group for managing observability resources, like logging.
4. resources: Specifies the resources this role can manage.
5. clusterlogforwarders: Refers to the log forwarding resources in OpenShift.
6. verbs: Specifies the actions allowed on the ClusterLogForwarders.
7. create: Grants permission to create new ClusterLogForwarders.
8. delete: Grants permission to delete existing ClusterLogForwarders.
9. get: Grants permission to retrieve information about specific ClusterLogForwarders.
10. list: Allows listing all ClusterLogForwarders.
11. patch: Grants permission to partially modify ClusterLogForwarders.
12. update: Grants permission to update existing ClusterLogForwarders.
13. watch: Grants permission to monitor changes to ClusterLogForwarders.
1.2. Modifying log level in collector
To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, or off.
Example log level annotation
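A minimal sketch of the annotation set on the ClusterLogForwarder resource; the resource name collector is illustrative:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  annotations:
    observability.openshift.io/log-level: debug
# ...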
1.3. Managing the Operator
The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged:
- Managed: (default) The operator will drive the logging resources to match the desired state in the CLF spec.
- Unmanaged: The operator will not take any action related to the logging components.
This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged.
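For example, a sketch of pausing forwarding with a merge patch; the resource name collector is an assumption:

$ oc patch clusterlogforwarder/collector -n openshift-logging --type merge -p '{"spec":{"managementState":"Unmanaged"}}'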
1.4. Structure of the ClusterLogForwarder
The CLF has a spec section that contains the following key components:
- Inputs: Select log messages to be forwarded. Built-in input types application, infrastructure, and audit forward logs from different parts of the cluster. You can also define custom inputs.
- Outputs: Define destinations to forward logs to. Each output has a unique name and type-specific configuration.
- Pipelines: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output, and filter names.
- Filters: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline.
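A minimal sketch of how these components fit together, assuming the observability.openshift.io/v1 API used elsewhere in this chapter; all names and the Loki URL are illustrative:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: logcollector
  inputs:                          # select log messages
    - name: my-apps
      type: application
      application:
        includes:
          - namespace: my-project
  outputs:                         # define destinations
    - name: my-loki
      type: loki
      loki:
        url: https://loki.example.com:3100
  filters:                         # transform or drop messages
    - name: my-prune
      type: prune
      prune:
        in:
          - .kubernetes.annotations
  pipelines:                       # connect inputs, filters, and outputs
    - name: my-pipeline
      inputRefs:
        - my-apps
      filterRefs:
        - my-prune
      outputRefs:
        - my-loki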
1.4.1. Inputs
Inputs are configured in an array under spec.inputs. There are three built-in input types:
- application: Selects logs from all application containers, excluding those in infrastructure namespaces.
- infrastructure: Selects logs from nodes and from infrastructure components running in the following namespaces:
  - default
  - kube
  - openshift
  - Namespaces with the kube- or openshift- prefix
- audit: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd.
Users can define custom inputs of type application that select logs from specific namespaces or using pod labels.
1.4.2. Outputs
Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are:
- azureMonitor: Forwards logs to Azure Monitor.
- cloudwatch: Forwards logs to AWS CloudWatch.
- googleCloudLogging: Forwards logs to Google Cloud Logging.
- http: Forwards logs to a generic HTTP endpoint.
- kafka: Forwards logs to a Kafka broker.
- loki: Forwards logs to a Loki logging backend.
- lokiStack: Forwards logs to the logging-supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy.
- otlp: Forwards logs using the OpenTelemetry Protocol.
- splunk: Forwards logs to Splunk.
- syslog: Forwards logs to an external syslog server.
Each output type has its own configuration fields.
1.4.3. Pipelines
Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of:
- inputRefs: Names of inputs whose logs should be forwarded to this pipeline.
- outputRefs: Names of outputs to send logs to.
- filterRefs: (optional) Names of filters to apply.
The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters.
1.4.4. Filters
Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them.
Administrators can configure the following types of filters, which are described in the sections that follow:
1.4.5. Enabling multi-line exception detection
Enables multi-line error detection of container logs.
Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.
Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.
Example java exception
java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
at testjava.Main.handle(Main.java:47)
at testjava.Main.printMe(Main.java:19)
at testjava.Main.main(Main.java:10)
- To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under .spec.filters.
Example ClusterLogForwarder CR
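A minimal sketch, assuming the detection is declared as a filter of type detectMultilineException in the observability.openshift.io/v1 API; all names are illustrative:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  filters:
    - name: my-detect-errors
      type: detectMultilineException   # reassembles stack traces into one record
  pipelines:
    - name: my-app-logs
      inputRefs:
        - application
      filterRefs:
        - my-detect-errors
      outputRefs:
        - my-output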
1.4.5.1. Details
When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.
The collector supports the following languages:
- Java
- JS
- Ruby
- Python
- Golang
- PHP
- Dart
1.4.6. Configuring content filters to drop unwanted log records
Collecting all cluster logs produces a large amount of data, which can be expensive to move and store. To reduce volume, you can configure the drop filter to exclude unwanted log records before forwarding. The log collector evaluates log streams against the filter and drops records that match specified conditions.

The drop filter uses the test field to define one or more conditions for evaluating log records. The filter applies the following rules to check whether to drop a record:
- A test passes if all its specified conditions evaluate to true.
- If a test passes, the filter drops the log record.
- If you define several tests in the drop filter configuration, the filter drops the log record if any of the tests pass.
- If there is an error evaluating a condition, for example, the referenced field is missing, that condition evaluates to false.
Prerequisites
- You have installed the Red Hat OpenShift Logging Operator.
- You have administrator permissions.
- You have created a ClusterLogForwarder custom resource (CR).
- You have installed the OpenShift CLI (oc).
Procedure
1. Extract the existing ClusterLogForwarder configuration and save it as a local file:

$ oc get clusterlogforwarder <name> -n <namespace> -o yaml > <filename>.yaml

Where:
- <name> is the name of the ClusterLogForwarder instance you want to configure.
- <namespace> is the namespace where you created the ClusterLogForwarder instance, for example openshift-logging.
- <filename> is the name of the local file where you save the configuration.

2. Add a configuration to drop unwanted log records to the filters spec in the ClusterLogForwarder CR.

Example ClusterLogForwarder CR
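A minimal sketch with callout numbers shown as comments; the filter name, pipeline name, field values, and regular expressions are illustrative:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  filters:
    - name: my-drop-filter
      type: drop                                  # 1
      drop:                                       # 2
        - test:                                   # 3
            - field: .kubernetes.namespace_name   # 4
              matches: "my-namespace"             # 5
            - field: .level
              notMatches: "critical"              # 6
  pipelines:
    - name: my-pipeline                           # 7
      inputRefs:
        - application
      filterRefs:
        - my-drop-filter
      outputRefs:
        - my-output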
1. Specify the type of filter. The drop filter drops log records that match the filter configuration.
2. Specify configuration options for the drop filter.
3. Specify conditions for tests to evaluate whether the filter drops a log record.
4. Specify dot-delimited paths to fields in log records.
   - Each path segment can contain alphanumeric characters and underscores (a-z, A-Z, 0-9, _), for example, .kubernetes.namespace_name.
   - If segments contain different characters, the segment must be in quotes, for example, .kubernetes.labels."app.version-1.2/beta".
   - You can include several field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to apply.
5. Specify a regular expression. If log records match this regular expression, they are dropped.
6. Specify a regular expression. If log records do not match this regular expression, they are dropped.
7. Specify the pipeline that uses the drop filter.

Note: You can set either the matches or notMatches condition for a single field path, but not both.

Example configuration that keeps only high-priority log records
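A sketch of one way to express this; the regular expressions and filter name are illustrative. A record is dropped only when its message does not mention critical or error and its level is merely info or warning:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  filters:
    - name: important
      type: drop
      drop:
        - test:
            # Both conditions must be true for the record to be dropped.
            - field: .message
              notMatches: "(?i)critical|error"
            - field: .level
              matches: "info|warning"
# ...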
Example configuration with several tests
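A sketch with several tests; a record is dropped if either test passes. All names and patterns are illustrative:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  filters:
    - name: important
      type: drop
      drop:
        # First test: drop records from namespaces whose name starts with "open".
        - test:
            - field: .kubernetes.namespace_name
              matches: "^open"
        # Second test: drop application records unless they come from "my-pod".
        - test:
            - field: .log_type
              matches: "application"
            - field: .kubernetes.pod_name
              notMatches: "my-pod"
# ...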
3. Apply the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
1.4.7. API audit filter overview
OpenShift API servers generate audit events for every API call. These events include details about the request, the response, and the identity of the requester. This can lead to large volumes of data.
The API audit filter helps manage the audit trail by using rules to exclude non-essential events and to reduce the event size. Rules are checked in order, and checking stops at the first match. The amount of data in an event depends on the value of the level field:
- None: The event is dropped.
- Metadata: The event includes audit metadata and excludes request and response bodies.
- Request: The event includes audit metadata and the request body, and excludes the response body.
- RequestResponse: The event includes all data: metadata, request body, and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster.
You can only use the API audit filter feature if the Vector collector is set up in your logging deployment.
The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy. The ClusterLogForwarder CR provides the following additional functions:
- Wildcards: Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the openshift-* namespace matches the openshift-apiserver or openshift-authentication namespaces. The */status resource matches Pod/status or Deployment/status resources.
- Default Rules: Events that do not match any rule in the policy are filtered as follows:
  - Read-only system events such as get, list, and watch are dropped.
  - Service account write events that occur within the same namespace as the service account are dropped.
  - All other events are forwarded, subject to any configured rate limits.
  To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule.
- Omit Response Codes: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], no status codes are omitted.
The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards, and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store, and a less detailed stream to a remote site.
- You must have the collect-audit-logs cluster role to collect the audit logs.
- The following example is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration.
Example audit policy
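A sketch, assuming the policy is attached through a filter of type kubeAPIAudit, which accepts standard Kubernetes audit policy fields; all names and rules are illustrative:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  filters:
    - name: my-policy
      type: kubeAPIAudit
      kubeAPIAudit:
        # Don't generate audit events for requests in the RequestReceived stage.
        omitStages:
          - RequestReceived
        rules:
          # Log pod changes at the RequestResponse level.
          - level: RequestResponse
            resources:
              - group: ""
                resources: ["pods"]
          # Log "pods/log" and "pods/status" at the Metadata level.
          - level: Metadata
            resources:
              - group: ""
                resources: ["pods/log", "pods/status"]
          # Don't log authenticated requests to certain non-resource URLs.
          - level: None
            userGroups: ["system:authenticated"]
            nonResourceURLs: ["/api*", "/version"]
          # A catch-all rule to log everything else at the Metadata level.
          - level: Metadata
  pipelines:
    - name: my-pipeline
      inputRefs:
        - audit
      filterRefs:
        - my-policy
      outputRefs:
        - my-output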
1.4.8. Filtering application logs at input by including label expressions or a matching label key and values
You can include application logs based on label expressions or a matching label key and its values by using the input selector.
Procedure
1. Add a configuration for a filter to the input spec in the ClusterLogForwarder CR.

The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values:

Example ClusterLogForwarder CR
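A minimal sketch, assuming the input's application selector accepts standard Kubernetes label selector fields; all keys and values are illustrative:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  inputs:
    - name: mylogs
      type: application
      application:
        selector:
          matchExpressions:
            # Include logs only from pods where env is prod or qa ...
            - key: env
              operator: In
              values: ["prod", "qa"]
            # ... and zone is neither east nor west.
            - key: zone
              operator: NotIn
              values: ["east", "west"]
          matchLabels:
            # Also require exact key/value matches.
            app: one
            name: app1
# ...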
2. Apply the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
1.4.9. Configuring content filters to prune log records
If you configure the prune filter, the log collector evaluates log streams against the filters before forwarding. The collector prunes log records by removing low-value fields such as pod annotations.
Prerequisites
- You have installed the Red Hat OpenShift Logging Operator.
- You have administrator permissions.
- You have created a ClusterLogForwarder custom resource (CR).
- You have installed the OpenShift CLI (oc).
Procedure
1. Extract the existing ClusterLogForwarder configuration and save it as a local file:

$ oc get clusterlogforwarder <name> -n <namespace> -o yaml > <filename>.yaml

Where:
- <name> is the name of the ClusterLogForwarder instance you want to configure.
- <namespace> is the namespace where you created the ClusterLogForwarder instance, for example openshift-logging.
- <filename> is the name of the local file where you save the configuration.

2. Add a configuration to prune log records to the filters spec in the ClusterLogForwarder CR.

Important: If you specify both in and notIn parameters, the notIn array takes precedence over in during pruning. After records are pruned by using the notIn array, they are then pruned by using the in array.

Example ClusterLogForwarder CR
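A minimal sketch with callout numbers shown as comments; the filter name, pipeline name, and field paths are illustrative:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  filters:
    - name: my-prune
      type: prune                       # 1
      prune:                            # 2
        in:                             # 3
          - .kubernetes.annotations
          - .kubernetes.namespace_id
        notIn:                          # 4
          - .kubernetes
          - .log_type
          - .log_source
          - .message
  pipelines:
    - name: my-pipeline                 # 5
      inputRefs:
        - application
      filterRefs:
        - my-prune
      outputRefs:
        - my-output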
1. Specify the type of filter. The prune filter prunes log records by configured fields.
2. Specify configuration options for the prune filter.
   - The in and notIn fields are arrays of dot-delimited paths to fields in log records.
   - Each path segment can contain alphanumeric characters and underscores (a-z, A-Z, 0-9, _), for example, .kubernetes.namespace_name.
   - If segments contain different characters, the segment must be in quotes, for example, .kubernetes.labels."app.version-1.2/beta".
3. Optional: Specify fields to remove from the log record. The log collector keeps all other fields.
4. Optional: Specify fields to keep in the log record. The log collector removes all other fields.
5. Specify the pipeline that the prune filter is applied to.

Important:
- The filters cannot remove the .log_type, .log_source, and .message fields from the log records. You must include them in the notIn field.
- If you use the googleCloudLogging output, you must include .hostname in the notIn field.
3. Apply the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
1.5. Filtering the audit and infrastructure log inputs by source
You can define the list of audit and infrastructure sources to collect the logs by using the input selector.
Procedure
1. Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR.

The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources:

Example ClusterLogForwarder CR
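A minimal sketch with callout numbers shown as comments; the input names are illustrative:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  inputs:
    - name: mylogs1
      type: infrastructure
      infrastructure:
        sources:          # 1
          - node
    - name: mylogs2
      type: audit
      audit:
        sources:          # 2
          - kubeAPI
          - openshiftAPI
          - auditd
          - ovn
# ...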
1. Specifies the list of infrastructure sources to collect. The valid sources include:
   - node: Journal log from the node
   - container: Logs from the workloads deployed in the namespaces
2. Specifies the list of audit sources to collect. The valid sources include:
   - kubeAPI: Logs from the Kubernetes API servers
   - openshiftAPI: Logs from the OpenShift API servers
   - auditd: Logs from a node auditd service
   - ovn: Logs from an open virtual network service

2. Apply the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
1.6. Filtering application logs at input by including or excluding the namespace or container name
You can include or exclude the application logs based on the namespace and container name by using the input selector.
Procedure
1. Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR.

The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names:

Example ClusterLogForwarder CR
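A minimal sketch; the input name, namespaces, and container names are illustrative:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  inputs:
    - name: mylogs
      type: application
      application:
        includes:
          # Collect only from this namespace and container.
          - namespace: "my-project"
            container: "my-container"
        excludes:
          # Skip matching containers; trailing wildcards are allowed.
          - namespace: "other-namespace"
            container: "other-container*"
# ...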
Note: The excludes field takes precedence over the includes field.

2. Apply the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml