Chapter 1. Configuring log forwarding
The `ClusterLogForwarder` (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs.
Key Functions of the ClusterLogForwarder
- Selects log messages using inputs
- Forwards logs to external destinations using outputs
- Filters, transforms, and drops log messages using filters
- Defines log forwarding pipelines connecting inputs, filters, and outputs
1.1. Setting up log collection
This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with `ClusterLogForwarder`. This was not required in previous releases for the legacy logging scenario consisting of a `ClusterLogging` and, optionally, a `ClusterLogForwarder.logging.openshift.io` resource.
The Red Hat OpenShift Logging Operator provides the `collect-audit-logs`, `collect-application-logs`, and `collect-infrastructure-logs` cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.
Set up log collection by binding the required cluster roles to your service account.
1.1.1. Legacy service accounts
To use the existing legacy service account `logcollector`, create the following cluster role bindings:

$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector

Additionally, create the following cluster role binding if collecting audit logs:

$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector
1.1.2. Creating service accounts
Prerequisites
- The Red Hat OpenShift Logging Operator is installed in the `openshift-logging` namespace.
- You have administrator permissions.
Procedure
- Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account.
- Bind the appropriate cluster roles to the service account:

Example binding command

$ oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>
1.1.2.1. Cluster role binding for your service account
The `role_binding.yaml` file binds the ClusterLogging operator's `ClusterRole` to a specific `ServiceAccount`, allowing it to manage Kubernetes resources cluster-wide.
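A representative sketch of such a binding (metadata names are illustrative; check the file shipped with your release). The numbered comments correspond to the callouts below:

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: manager-rolebinding
roleRef:                                 # 1
  apiGroup: rbac.authorization.k8s.io    # 2
  kind: ClusterRole                      # 3
  name: cluster-logging-operator         # 4
subjects:                                # 5
  - kind: ServiceAccount                 # 6
    name: cluster-logging-operator       # 7
    namespace: openshift-logging         # 8
```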
1. `roleRef`: References the ClusterRole to which the binding applies.
2. `apiGroup`: Indicates the RBAC API group, specifying that the ClusterRole is part of the Kubernetes RBAC system.
3. `kind`: Specifies that the referenced role is a ClusterRole, which applies cluster-wide.
4. `name`: The name of the ClusterRole being bound to the ServiceAccount, here `cluster-logging-operator`.
5. `subjects`: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole.
6. `kind`: Specifies that the subject is a ServiceAccount.
7. `name`: The name of the ServiceAccount being granted the permissions.
8. `namespace`: Indicates the namespace where the ServiceAccount is located.
1.1.2.2. Writing application logs
The `write-application-logs-clusterrole.yaml` file defines a `ClusterRole` that grants permissions to write application logs to the Loki logging application.
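A sketch of such a role (the metadata name is illustrative); the numbered comments map to the callouts below:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-application-logs
rules:                        # 1
  - apiGroups:                # 2
      - loki.grafana.com      # 3
    resources:                # 4
      - application           # 5
    resourceNames:            # 6
      - logs                  # 7
    verbs:                    # 8
      - create                # 9
```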
1. `rules`: Specifies the permissions granted by this ClusterRole.
2. `apiGroups`: Refers to the API group `loki.grafana.com`, which relates to the Loki logging system.
3. `loki.grafana.com`: The API group for managing Loki-related resources.
4. `resources`: The resource type that the ClusterRole grants permission to interact with.
5. `application`: Refers to the application resources within the Loki logging system.
6. `resourceNames`: Specifies the names of resources that this role can manage.
7. `logs`: Refers to the log resources that can be created.
8. `verbs`: The actions allowed on the resources.
9. `create`: Grants permission to create new logs in the Loki system.
1.1.2.3. Writing audit logs
The `write-audit-logs-clusterrole.yaml` file defines a `ClusterRole` that grants permissions to create audit logs in the Loki logging system.
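A sketch of such a role (the metadata name is illustrative); the numbered comments map to the callouts below:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-audit-logs
rules:                        # 1
  - apiGroups:                # 2
      - loki.grafana.com      # 3
    resources:                # 4
      - audit                 # 5
    resourceNames:            # 6
      - logs                  # 7
    verbs:                    # 8
      - create                # 9
```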
1. `rules`: Defines the permissions granted by this ClusterRole.
2. `apiGroups`: Specifies the API group `loki.grafana.com`.
3. `loki.grafana.com`: The API group responsible for Loki logging resources.
4. `resources`: Refers to the resource type this role manages, in this case, `audit`.
5. `audit`: Specifies that the role manages audit logs within Loki.
6. `resourceNames`: Defines the specific resources that the role can access.
7. `logs`: Refers to the logs that can be managed under this role.
8. `verbs`: The actions allowed on the resources.
9. `create`: Grants permission to create new audit logs.
1.1.2.4. Writing infrastructure logs
The `write-infrastructure-logs-clusterrole.yaml` file defines a `ClusterRole` that grants permission to create infrastructure logs in the Loki logging system.
Sample YAML
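A sketch of such a role (the metadata name is illustrative); the numbered comments map to the callouts below:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-infrastructure-logs
rules:                        # 1
  - apiGroups:                # 2
      - loki.grafana.com      # 3
    resources:                # 4
      - infrastructure        # 5
    resourceNames:            # 6
      - logs                  # 7
    verbs:                    # 8
      - create                # 9
```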
1. `rules`: Specifies the permissions this ClusterRole grants.
2. `apiGroups`: Specifies the API group for Loki-related resources.
3. `loki.grafana.com`: The API group managing the Loki logging system.
4. `resources`: Defines the resource type that this role can interact with.
5. `infrastructure`: Refers to infrastructure-related resources that this role manages.
6. `resourceNames`: Specifies the names of resources this role can manage.
7. `logs`: Refers to the log resources related to infrastructure.
8. `verbs`: The actions permitted by this role.
9. `create`: Grants permission to create infrastructure logs in the Loki system.
1.1.2.5. ClusterLogForwarder editor role
The `clusterlogforwarder-editor-role.yaml` file defines a `ClusterRole` that allows users to manage `ClusterLogForwarders` in OpenShift.
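A sketch of such a role (the metadata name is illustrative); the numbered comments map to the callouts below:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterlogforwarder-editor-role
rules:                              # 1
  - apiGroups:                      # 2
      - observability.openshift.io  # 3
    resources:                      # 4
      - clusterlogforwarders        # 5
    verbs:                          # 6
      - create                      # 7
      - delete                      # 8
      - get                         # 9
      - list                        # 10
      - patch                       # 11
      - update                      # 12
      - watch                       # 13
```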
1. `rules`: Specifies the permissions this ClusterRole grants.
2. `apiGroups`: Refers to the OpenShift-specific API group.
3. `observability.openshift.io`: The API group for managing observability resources, such as logging.
4. `resources`: Specifies the resources this role can manage.
5. `clusterlogforwarders`: Refers to the log forwarding resources in OpenShift.
6. `verbs`: Specifies the actions allowed on the ClusterLogForwarders.
7. `create`: Grants permission to create new ClusterLogForwarders.
8. `delete`: Grants permission to delete existing ClusterLogForwarders.
9. `get`: Grants permission to retrieve information about specific ClusterLogForwarders.
10. `list`: Allows listing all ClusterLogForwarders.
11. `patch`: Grants permission to partially modify ClusterLogForwarders.
12. `update`: Grants permission to update existing ClusterLogForwarders.
13. `watch`: Grants permission to monitor changes to ClusterLogForwarders.
1.2. Modifying log level in collector
To modify the log level in the collector, you can set the `observability.openshift.io/log-level` annotation to `trace`, `debug`, `info`, `warn`, `error`, or `off`.

Example log level annotation
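A minimal sketch (the resource name is illustrative):

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  annotations:
    observability.openshift.io/log-level: debug
spec:
# ...
```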
1.3. Managing the Operator
The `ClusterLogForwarder` resource has a `managementState` field that controls whether the operator actively manages its resources or leaves them unmanaged:

- `Managed`: (default) The operator drives the logging resources to match the desired state in the CLF spec.
- `Unmanaged`: The operator does not take any action related to the logging components.

This allows administrators to temporarily pause log forwarding by setting `managementState` to `Unmanaged`.
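For example, a minimal sketch that pauses log forwarding:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  managementState: Unmanaged
# ...
```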
1.4. Structure of the ClusterLogForwarder
The CLF has a `spec` section that contains the following key components:

- Inputs: Select log messages to be forwarded. The built-in input types `application`, `infrastructure`, and `audit` forward logs from different parts of the cluster. You can also define custom inputs.
- Outputs: Define destinations to forward logs to. Each output has a unique name and type-specific configuration.
- Pipelines: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output, and filter names.
- Filters: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline.
1.4.1. Inputs
Inputs are configured in an array under `spec.inputs`. There are three built-in input types:

- `application`: Selects logs from all application containers, excluding those in infrastructure namespaces.
- `infrastructure`: Selects logs from nodes and from infrastructure components running in the following namespaces:
  - `default`
  - `kube`
  - `openshift`
  - Namespaces containing the `kube-` or `openshift-` prefix
- `audit`: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, OVN audit logs, and node audit logs from auditd.

Users can define custom inputs of type `application` that select logs from specific namespaces or by using pod labels.
1.4.2. Outputs
Outputs are configured in an array under `spec.outputs`. Each output must have a unique name and a type. Supported types are:

- `azureMonitor`: Forwards logs to Azure Monitor.
- `cloudwatch`: Forwards logs to AWS CloudWatch.
- `elasticsearch`: Forwards logs to an external Elasticsearch instance.
- `googleCloudLogging`: Forwards logs to Google Cloud Logging.
- `http`: Forwards logs to a generic HTTP endpoint.
- `kafka`: Forwards logs to a Kafka broker.
- `loki`: Forwards logs to a Loki logging backend.
- `lokistack`: Forwards logs to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy.
- `otlp`: Forwards logs using the OpenTelemetry Protocol.
- `splunk`: Forwards logs to Splunk.
- `syslog`: Forwards logs to an external syslog server.

Each output type has its own configuration fields.
1.4.3. Configuring OTLP output
Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding.
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Procedure

Create or edit a `ClusterLogForwarder` custom resource (CR) to enable forwarding using OTLP by adding the following annotation:

Example `ClusterLogForwarder` CR
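A sketch of such a CR, assuming the `observability.openshift.io/v1` API and the Technology Preview annotation name used by recent releases (verify against your version):

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: clf-otlp
  annotations:
    observability.openshift.io/tech-preview-otlp-output: "enabled"  # enables the OTLP output
spec:
  serviceAccount:
    name: <service_account_name>
  outputs:
  - name: otlp
    type: otlp
    otlp:
      url: <otlp_url>  # your OTLP receiver endpoint
  pipelines:
  - name: otlp-logs
    inputRefs:
    - application
    - infrastructure
    - audit
    outputRefs:
    - otlp
```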
The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework.
1.4.4. Pipelines
Pipelines are configured in an array under `spec.pipelines`. Each pipeline must have a unique name and consists of:

- `inputRefs`: Names of inputs whose logs should be forwarded to this pipeline.
- `outputRefs`: Names of outputs to send logs to.
- `filterRefs`: (optional) Names of filters to apply.

The order of `filterRefs` matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters.
1.4.5. Filters
Filters are configured in an array under `spec.filters`. They can match incoming log messages based on the value of structured fields and modify or drop them.
1.5. About forwarding logs to third-party systems
To send logs to specific endpoints inside and outside your OpenShift Container Platform cluster, you specify a combination of outputs and pipelines in a `ClusterLogForwarder` custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes `Secret` object.

- pipeline: Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:
  - `application`: Container logs generated by user applications running in the cluster, except infrastructure container applications.
  - `infrastructure`: Container logs from pods that run in the `openshift*`, `kube*`, or `default` projects, and journal logs sourced from the node file system.
  - `audit`: Audit logs generated by the node audit system, `auditd`, the Kubernetes API server, the OpenShift API server, and the OVN network.

  You can add labels to outbound log messages by using `key:value` pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message.

- input: Forwards the application logs associated with a specific project to a pipeline. In the pipeline, you define which log types to forward using an `inputRef` parameter and where to forward the logs to using an `outputRef` parameter.
- Secret: A `key:value` map that contains confidential data such as user credentials.

Note the following:

- If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the `application` and `audit` types, but do not specify a pipeline for the `infrastructure` type, `infrastructure` logs are dropped.
- You can use multiple types of outputs in the `ClusterLogForwarder` custom resource (CR) to send logs to servers that support different protocols.
The following example forwards the audit logs to a secure external Elasticsearch instance.
Sample log forwarding outputs and pipelines
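A sketch of such a configuration, assuming the `observability.openshift.io/v1` API (secret and host names are illustrative); the numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  serviceAccount:
    name: <service_account_name>
  outputs:
  - name: elasticsearch-secure
    type: elasticsearch
    elasticsearch:
      url: https://elasticsearch.secure.example.com:9200
      version: 8                          # 1
      index: '{.log_type||"unknown"}'     # 2
      authentication:
        username:                         # 3
          key: username
          secretName: es-secret
        password:                         # 4
          key: password
          secretName: es-secret
    tls:                                  # 5
      ca:
        key: ca-bundle.crt
        secretName: es-secret
      certificate:
        key: tls.crt
        secretName: es-secret
      key:
        key: tls.key
        secretName: es-secret
  pipelines:
  - name: audit-logs
    inputRefs:
    - audit
    outputRefs:
    - elasticsearch-secure
```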
1. Forwarding to an external Elasticsearch of version 8.x or greater requires the `version` field to be specified.
2. The `index` value reads the `.log_type` field and falls back to "unknown" if the field is not found.
3. 4. Use a username and password to authenticate to the server.
5. Enable Mutual Transport Layer Security (mTLS) between the collector and Elasticsearch. The spec identifies the keys and secrets for the respective certificates that they represent.
Supported authorization keys

Common key types are provided here. Some output types support additional specialized keys, documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. OpenShift Logging will not attempt to verify a mismatch between authorization combinations.

- Transport Layer Security (TLS): Using a TLS URL (`https://...` or `ssl://...`) without a secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a secret and setting the following optional fields:
  - `passphrase`: (string) Passphrase to decode an encoded TLS private key. Requires `tls.key`.
  - `ca-bundle.crt`: (string) File name of a customer CA for server authentication.
- Username and password:
  - `username`: (string) Authentication user name. Requires `password`.
  - `password`: (string) Authentication password. Requires `username`.
- Simple Authentication Security Layer (SASL):
  - `sasl.enable`: (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other `sasl.` keys are set.
  - `sasl.mechanisms`: (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used.
  - `sasl.allow-insecure`: (boolean) Allow mechanisms that send clear-text passwords. Defaults to false.
1.5.1. Creating a secret
You can create a secret in the directory that contains your certificate and key files by using the following command:

$ oc create secret generic -n <namespace> <secret_name> \
  --from-file=ca-bundle.crt=<your_bundle_file> \
  --from-literal=username=<your_username> \
  --from-literal=password=<your_password>
Generic or opaque secrets are recommended for best results.
1.6. Creating a log forwarder
To create a log forwarder, create a `ClusterLogForwarder` custom resource (CR). This CR defines the service account, permissible input log types, pipelines, outputs, and any optional filters.

You need administrator permissions for the namespace where you create the `ClusterLogForwarder` CR.

`ClusterLogForwarder` CR example
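A sketch showing the overall shape of the CR (placeholder names in angle brackets); the numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  outputs:
  - name: <output_name>
    type: <output_type>             # 1
  inputs:                           # 2
  - name: <input_name>
    type: application
    application:
      includes:
      - namespace: <namespace>
  filters:                          # 3
  - name: <filter_name>
    type: <filter_type>
  pipelines:
  - name: <pipeline_name>
    inputRefs:
    - <input_name>                  # 4
    outputRefs:
    - <output_name>                 # 5
    filterRefs:
    - <filter_name>                 # 6
  serviceAccount:
    name: <service_account_name>    # 7
```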
1. The type of output that you want to forward logs to. The value of this field can be `azureMonitor`, `cloudwatch`, `elasticsearch`, `googleCloudLogging`, `http`, `kafka`, `loki`, `lokistack`, `otlp`, `splunk`, or `syslog`.
2. A list of inputs. The names `application`, `audit`, and `infrastructure` are reserved for the default inputs.
3. A list of filters to apply to records going through this pipeline. Each filter is applied in the order defined here. If a filter drops a record, subsequent filters are not applied.
4. This value should be the same as the input name. You can also use the default input names `application`, `infrastructure`, and `audit`.
5. This value should be the same as the output name.
6. This value should be the same as the filter name.
7. The name of your service account.
1.7. Tuning log payloads and delivery
The `tuning` spec in the `ClusterLogForwarder` custom resource (CR) provides a means of configuring your deployment to prioritize either throughput or durability of logs.
For example, if you need to reduce the possibility of log loss when the collector restarts, or you require collected log messages to survive a collector restart to support regulatory mandates, you can tune your deployment to prioritize log durability. If you use outputs that have hard limitations on the size of batches they can receive, you may want to tune your deployment to prioritize log throughput.
To use this feature, your logging deployment must be configured to use the Vector collector. The `tuning` spec in the `ClusterLogForwarder` CR is not supported when using the Fluentd collector.

The following example shows the `ClusterLogForwarder` CR options that you can modify to tune log forwarder outputs:

Example `ClusterLogForwarder` CR tuning options
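A sketch of the tuning options, assuming the `observability.openshift.io/v1` API where `tuning` sits under the output type-specific configuration; the numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  outputs:
  - name: <output_name>
    type: <output_type>
    <output_type>:
      tuning:
        deliveryMode: AtLeastOnce   # 1
        compression: none           # 2
        maxWrite: <integer>         # 3
        minRetryDuration: 1s        # 4
        maxRetryDuration: 1s        # 5
# ...
```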
1. Specify the delivery mode for log forwarding.
   - `AtLeastOnce` delivery means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash.
   - `AtMostOnce` delivery means that the log forwarder makes no effort to recover logs lost during a crash. This mode gives better throughput, but may result in greater log loss.
2. Specifying a `compression` configuration causes data to be compressed before it is sent over the network. Note that not all output types support compression, and if the specified compression type is not supported by the output, this results in an error. For more information, see "Supported compression types for tuning outputs".
3. Specifies a limit for the maximum payload of a single send operation to the output.
4. Specifies a minimum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (`ms`), seconds (`s`), or minutes (`m`).
5. Specifies a maximum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (`ms`), seconds (`s`), or minutes (`m`).
Supported compression types for tuning outputs

| Compression algorithm | Splunk | Amazon Cloudwatch | Elasticsearch 8 | LokiStack | Apache Kafka | HTTP | Syslog | Google Cloud | Microsoft Azure Monitoring |
|---|---|---|---|---|---|---|---|---|---|
| `gzip` | X | X | X | X | | X | | | |
| `none` | | X | X | X | | X | | | |
| `snappy` | | X | | | X | X | | | |
| `zlib` | | X | X | | | X | | | |
| `lz4` | | | | | X | | | | |
1.7.1. Enabling multi-line exception detection
Enables multi-line error detection of container logs.
Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.
Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.
Example java exception
java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
at testjava.Main.handle(Main.java:47)
at testjava.Main.printMe(Main.java:19)
at testjava.Main.main(Main.java:10)
- To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the `ClusterLogForwarder` custom resource (CR) contains a `detectMultilineErrors` field under the `.spec.filters`.
Example ClusterLogForwarder CR
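A sketch of such a CR; the filter type name `detectMultilineException` follows the observability API and should be verified against your release:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: <filter_name>
    type: detectMultilineException
  pipelines:
  - name: <pipeline_name>
    inputRefs:
    - <input_name>
    filterRefs:
    - <filter_name>
    outputRefs:
    - <output_name>
```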
1.7.1.1. Details
When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.
The collector supports the following languages:
- Java
- JS
- Ruby
- Python
- Golang
- PHP
- Dart
1.8. Forwarding logs to Google Cloud Platform (GCP)
You can forward logs to Google Cloud Logging.
Forwarding logs to GCP is not supported on Red Hat OpenShift on AWS.
Prerequisites
- Red Hat OpenShift Logging Operator has been installed.
Procedure
Create a secret using your Google service account key.
$ oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json=<your_service_account_key_file.json>
Create a `ClusterLogForwarder` custom resource YAML using the template below:
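A sketch of such a CR (output and value names are illustrative); the numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: openshift-logging
spec:
  serviceAccount:
    name: <service_account_name>    # 1
  outputs:
  - name: gcp-1
    type: googleCloudLogging
    googleCloudLogging:
      authentication:
        credentials:
          key: google-application-credentials.json
          secretName: gcp-secret
      id:
        type: project               # 2
        value: <project_id>
      logId: app-gcp                # 3
  pipelines:
  - name: test-app
    inputRefs:                      # 4
    - application
    outputRefs:
    - gcp-1
```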
1. The name of your service account.
2. Set a `project`, `folder`, `organization`, or `billingAccount` field and its corresponding value, depending on where you want to store your logs in the GCP resource hierarchy.
3. Set the value to add to the `logName` field of the log entry. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, followed by another field path or a static value. A dynamic value must be encased in single curly brackets `{}` and must end with a static fallback value separated with `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes.
4. Specify the names of inputs, defined in the `input.name` field, for this pipeline. You can also use the built-in values `application`, `infrastructure`, and `audit`.
1.9. Forwarding logs to Splunk
You can forward logs to the Splunk HTTP Event Collector (HEC).
Prerequisites
- The Red Hat OpenShift Logging Operator has been installed.
- You have obtained a Base64 encoded Splunk HEC token.
Procedure
Create a secret using your Base64 encoded Splunk HEC token.
$ oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>
Create or edit the `ClusterLogForwarder` custom resource (CR) using the template below:

Example `ClusterLogForwarder` CR
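A sketch of such a CR (names and values are illustrative); the numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: openshift-logging
spec:
  serviceAccount:
    name: <service_account_name>          # 1
  outputs:
  - name: splunk-receiver                 # 2
    type: splunk                          # 3
    splunk:
      url: <http://example.com:8088>      # 4
      authentication:
        token:
          key: hecToken
          secretName: vector-splunk-secret  # 5
      index: <index_name>                 # 6
      source: <source>                    # 7
      indexedFields: [".log_type"]        # 8
      payloadKey: <record_field>          # 9
      tuning:
        compression: gzip                 # 10
  pipelines:
  - name: my-logs
    inputRefs:                            # 11
    - application
    - infrastructure
    outputRefs:                           # 12
    - splunk-receiver
```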
1. The name of your service account.
2. Specify a name for the output.
3. Specify the output type as `splunk`.
4. Specify the URL, including port, of your Splunk HEC.
5. Specify the name of the secret that contains your HEC token.
6. Specify the name of the index to send events to. If you do not specify an index, the default index from the Splunk server configuration is used. This field is optional.
7. Specify the source of events to be sent to this sink. You can configure dynamic per-event values. This field is optional.
8. Specify the fields to be added to the Splunk index. This field is optional.
9. Specify the record field to be used as the payload. This field is optional.
10. Specify the compression configuration, which can be either `gzip` or `none`. The default value is `none`. This field is optional.
11. Specify the input names.
12. Specify the name of the output to use when forwarding logs with this pipeline.
1.10. Forwarding logs over HTTP
To enable forwarding logs over HTTP, specify `http` as the output type in the `ClusterLogForwarder` custom resource (CR).

Procedure

Create or edit the `ClusterLogForwarder` CR using the template below:

Example `ClusterLogForwarder` CR
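A sketch of such a CR (header and secret names are illustrative); the numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
spec:
  outputs:
  - name: http-output
    type: http
    http:
      headers:                                    # 1
        h1: v1
      proxyURL: <http://proxy.example.com:3128>   # 2
      url: <http://example.com:8443>              # 3
      authentication:
        username:
          key: username
          secretName: http-secret                 # 5
        password:
          key: password
          secretName: http-secret
    tls:
      insecureSkipVerify: false                   # 4
  pipelines:
  - name: pipe1
    inputRefs:
    - application
    outputRefs:
    - http-output                                 # 6
  serviceAccount:
    name: <service_account_name>                  # 7
```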
1. Additional headers to send with the log record.
2. Optional: URL of the HTTP/HTTPS proxy that should be used to forward logs over HTTP or HTTPS from this output. This setting overrides any default proxy settings for the cluster or the node.
3. Destination address for logs.
4. Values are either `true` or `false`.
5. Secret name for destination credentials.
6. This value should be the same as the output name.
7. The name of your service account.
1.11. Forwarding to Azure Monitor Logs
You can forward logs to Azure Monitor Logs. This functionality is provided by the Vector Azure Monitor Logs sink.
Prerequisites
- You have basic familiarity with Azure services.
- You have an Azure account configured for Azure Portal or Azure CLI access.
- You have obtained your Azure Monitor Logs primary or the secondary security key.
- You have determined which log types to forward.
- You installed the OpenShift CLI (`oc`).
- You have administrator permissions.
Procedure
- Enable log forwarding to Azure Monitor Logs via the HTTP Data Collector API:
Create a secret with your shared key:
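A sketch of such a secret (the name is illustrative); the numbered comment corresponds to the callout below:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
  namespace: openshift-logging
type: Opaque
stringData:
  shared_key: <your_shared_key>    # 1
```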
1. Must contain a primary or secondary key for the Log Analytics workspace making the request.
- To obtain a shared key, you can use this command in Azure CLI:
Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName "<resource_name>" -Name "<workspace_name>"
- Create or edit your `ClusterLogForwarder` CR using the template matching your log selection.
Forward all logs
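A sketch of such a CR (names are illustrative); the numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  serviceAccount:
    name: <service_account_name>      # 1
  outputs:
  - name: azure-monitor
    type: azureMonitor
    azureMonitor:
      customerId: <customer_id>       # 2
      logType: my_log_type            # 3
      authentication:
        sharedKey:
          key: shared_key
          secretName: azure-secret
  pipelines:
  - name: app-pipeline
    inputRefs:
    - application
    outputRefs:
    - azure-monitor
```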
1. The name of your service account.
2. Unique identifier for the Log Analytics workspace. This is a required field.
3. Record type of the data being submitted. The value may only contain letters, numbers, and underscores (_), and may not exceed 100 characters. For more information, see Azure record type in the Microsoft Azure documentation.
1.12. Forwarding application logs from specific projects
You can forward a copy of the application logs from specific projects to an external log aggregator, in addition to, or instead of, using the internal log store. You must also configure the external log aggregator to receive log data from OpenShift Container Platform.
To configure forwarding application logs from a project, you must create a `ClusterLogForwarder` custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the `ClusterLogForwarder` CR:

Example `ClusterLogForwarder` CR
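A sketch of such a CR; the labels filter uses the `openshiftLabels` filter type, which is an assumption to be verified against your release. The numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: my-app-logs            # 1
    type: application            # 2
    application:
      includes:                  # 3
      - namespace: my-project
  filters:
  - name: my-labels              # 4
    type: openshiftLabels
    openshiftLabels:
      project: my-project
  pipelines:
  - name: forward-to-external    # 5
    inputRefs:
    - my-app-logs
    filterRefs:
    - my-labels
    outputRefs:
    - <output_name>
```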
1. Specify the name for the input.
2. Specify the type as `application` to collect logs from applications.
3. Specify the set of namespaces and containers to include when collecting logs.
4. Specify the labels to be applied to log records passing through this pipeline. These labels appear in the `openshift.labels` map in the log record.
5. Specify a name for the pipeline.
Apply the `ClusterLogForwarder` CR by running the following command:

$ oc apply -f <filename>.yaml
1.13. Forwarding application logs from specific pods
As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector.
Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector.
To specify the pod labels, you use one or more `matchLabels` key-value pairs. If you specify multiple key-value pairs, the pods must match all of them to be selected.
Procedure
Create or edit a YAML file that defines the `ClusterLogForwarder` CR object. In the file, specify the pod labels using simple equality-based selectors under `inputs[].name.application.selector.matchLabels`, as shown in the following example.
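A sketch of such a CR (namespace and label values are illustrative); the numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccount:
    name: <service_account_name>    # 1
  inputs:
  - name: myAppLogData              # 2
    type: application               # 3
    application:
      includes:                     # 4
      - namespace: project1
      - namespace: project2
      selector:
        matchLabels:                # 5
          environment: production
          app: nginx
  pipelines:
  - name: my-app-logs
    inputRefs:
    - myAppLogData
    outputRefs:
    - <output_name>
```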
1. Specify the service account name.
2. Specify a name for the input.
3. Specify the type as `application` to collect logs from applications.
4. Specify the set of namespaces to include when collecting logs.
5. Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
Optional: You can send log data from additional applications that have different pod labels to the same pipeline.

- For each unique combination of pod labels, create an additional `inputs[].name` section similar to the one shown.
- Update the `selectors` to match the pod labels of this application.
- Add the new `inputs[].name` value to `inputRefs`. For example:

  - inputRefs: [ myAppLogData, myOtherAppLogData ]

Create the CR object:

$ oc create -f <file-name>.yaml
1.13.1. Forwarding logs using the syslog protocol
You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OpenShift Container Platform.
To configure log forwarding using the syslog protocol, you must create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the `ClusterLogForwarder` CR object:
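A sketch of such a CR (output and secret names are illustrative); the numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
spec:
  outputs:
  - name: rsyslog-east                            # 1
    type: syslog
    syslog:
      appName: <app_name>                         # 2
      facility: local0                            # 3
      msgId: <message_ID>                         # 4
      payloadKey: '{.<value>}'                    # 5
      procId: <process_ID>                        # 6
      rfc: RFC5424                                # 7
      severity: informational                     # 8
      tuning:
        deliveryMode: AtLeastOnce                 # 9
      url: tls://syslog-receiver.example.com:6514 # 10
    tls:                                          # 11
      ca:
        key: ca-bundle.crt
        secretName: syslog-secret
  pipelines:
  - name: syslog-east                             # 13
    inputRefs:                                    # 12
    - application
    outputRefs:
    - rsyslog-east
  serviceAccount:
    name: <service_account_name>                  # 14
```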
1. Specify a name for the output.
2. Optional: Specify the value for the `APP-NAME` part of the syslog message header. The value must conform with The Syslog Protocol. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value in curly brackets, and the value must be followed by a static fallback value separated with `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: <value1>-{.<value2>||"none"}.
3. Optional: Specify the value for the `Facility` part of the syslog-msg header.
4. Optional: Specify the value for the `MSGID` part of the syslog-msg header. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, and then followed by another field path or a static value. The maximum length of the final values is truncated to 32 characters. You must encase a dynamic value in curly brackets, and the value must be followed by a static fallback value separated with `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: <value1>-{.<value2>||"none"}.
5. Optional: Specify the record field to use as the payload. The `payloadKey` value must be a single field path encased in single curly brackets `{}`. Example: {.<value>}.
6. Optional: Specify the value for the `PROCID` part of the syslog message header. The value must conform with The Syslog Protocol. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value in curly brackets, and the value must be followed by a static fallback value separated with `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes. Example value: <value1>-{.<value2>||"none"}.
7. Optional: Set the RFC that the generated messages conform to. The value can be `RFC3164` or `RFC5424`.
8. Optional: Set the severity level for the message. For more information, see The Syslog Protocol.
9. Optional: Set the delivery mode for log forwarding. The value can be either `AtLeastOnce` or `AtMostOnce`.
10. Specify the absolute URL with a scheme. Valid schemes are: `tcp`, `tls`, and `udp`. For example: `tls://syslog-receiver.example.com:6514`.
11. Specify the settings for controlling options of the transport layer security (TLS) client connections.
12. Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
13. Specify a name for the pipeline.
14. The name of your service account.
Create the CR object:

$ oc create -f <filename>.yaml
1.13.1.1. Adding log source information to the message output
You can add `namespace_name`, `pod_name`, and `container_name` elements to the `message` field of the record by adding the `enrichment` field to your `ClusterLogForwarder` custom resource (CR). This configuration is compatible with both RFC3164 and RFC5424.
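For example, a sketch of a syslog output with enrichment enabled (assuming the `enrichment` field accepts `None` or `KubernetesMinimal`):

```yaml
# ...
spec:
  outputs:
  - name: syslog-east
    type: syslog
    syslog:
      enrichment: KubernetesMinimal  # or None
      url: tls://syslog-receiver.example.com:6514
# ...
```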
Example syslog message output with enrichment: None

2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: {...}

Example syslog message output with enrichment: KubernetesMinimal

2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...}
1.14. Forwarding logs to Amazon CloudWatch from STS-enabled clusters
Amazon CloudWatch is a service that helps administrators observe and monitor resources and applications on Amazon Web Services (AWS). You can forward logs from OpenShift Logging to CloudWatch securely by leveraging AWS’s Identity and Access Management (IAM) Roles for Service Accounts (IRSA), which uses AWS Security Token Service (STS).
The authentication with CloudWatch works as follows:
- The log collector requests temporary AWS credentials from Security Token Service (STS) by presenting its service account token to the OpenID Connect (OIDC) provider in AWS.
- AWS validates the token. Afterward, depending on the trust policy, AWS issues short-lived, temporary credentials, including an access key ID, secret access key, and session token, for the log collector to use.
On STS-enabled clusters such as Red Hat OpenShift Service on AWS, AWS roles are pre-configured with the required trust policies. This allows service accounts to assume the roles. Therefore, you can create a secret for AWS with STS that uses the IAM role. You can then create or update a `ClusterLogForwarder` custom resource (CR) that uses the secret to forward logs to the CloudWatch output. Follow these procedures to create a secret and a `ClusterLogForwarder` CR if roles have been pre-configured:

- Creating a secret for CloudWatch with an existing AWS role
- Forwarding logs to Amazon CloudWatch from STS-enabled clusters

If you do not have an AWS IAM role pre-configured with trust policies, you must first create the role with the required trust policies. Complete the following procedures to create a secret, `ClusterLogForwarder` CR, and role.
1.14.1. Creating an AWS IAM role
Create an Amazon Web Services (AWS) IAM role that your service account can assume to securely access AWS resources.
The following procedure demonstrates creating an AWS IAM role by using the AWS CLI. You can alternatively use the Cloud Credential Operator (CCO) utility `ccoctl`. Using the `ccoctl` utility creates many fields in the IAM role policy that are not required by the `ClusterLogForwarder` custom resource (CR). These extra fields are ignored by the CR. However, the `ccoctl` utility provides a convenient way of configuring IAM roles. For more information, see Manual mode with short-term credentials for components.
Prerequisites

- You have access to a Red Hat OpenShift Logging cluster with Security Token Service (STS) enabled and configured for AWS.
- You have administrator access to the AWS account.
- You have installed the AWS CLI.
Procedure
Create an IAM policy that grants permissions to write logs to CloudWatch.
Create a file, for example `cw-iam-role-policy.json`, with the following content:
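A sketch of such a policy document; the listed CloudWatch Logs actions are a common minimal set and should be adjusted to your requirements:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:PutRetentionPolicy"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```

Create the IAM policy based on the previous policy definition by running the following command:

$ aws iam create-policy \
  --policy-name cluster-logging-allow \
  --policy-document file://cw-iam-role-policy.json

Note the `Arn` value of the created policy.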
Create a trust policy to allow the logging service account to assume an IAM role:
Create a file, for example `cw-trust-policy.json`, with the following content:
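A sketch of such a trust policy (the account ID and OIDC provider URL are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/<OPENSHIFT_OIDC_PROVIDER_URL>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<OPENSHIFT_OIDC_PROVIDER_URL>:sub": "system:serviceaccount:openshift-logging:openshift-logger"
        }
      }
    }
  ]
}
```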
Create an IAM role based on the previously defined trust policy by running the following command:

$ aws iam create-role --role-name openshift-logger --assume-role-policy-document file://cw-trust-policy.json

Note the `Arn` value of the created role.

Attach the policy to the role by running the following command:

$ aws iam put-role-policy \
  --role-name openshift-logger --policy-name cluster-logging-allow \
  --policy-document file://cw-iam-role-policy.json
Verification
Verify the role and the permissions policy by running the following command:

$ aws iam get-role --role-name openshift-logger

Example output

ROLE arn:aws:iam::123456789012:role/openshift-logger ASSUMEROLEPOLICYDOCUMENT 2012-10-17 STATEMENT sts:AssumeRoleWithWebIdentity Allow STRINGEQUALS system:serviceaccount:openshift-logging:openshift-logger PRINCIPAL arn:aws:iam::123456789012:oidc-provider/<OPENSHIFT_OIDC_PROVIDER_URL>
1.14.2. Creating a secret for AWS CloudWatch with an existing AWS role
Create a secret for Amazon Web Services (AWS) Security Token Service (STS) from the configured AWS IAM role by using the `oc create secret --from-literal` command.
Prerequisites
- You have created an AWS IAM role.
- You have administrator access to Red Hat OpenShift Logging.
Procedure
In the CLI, enter the following command to generate a secret for AWS:

$ oc create secret generic sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/openshift-logger

Example Secret
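A sketch of the resulting secret (the role ARN is illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sts-secret
  namespace: openshift-logging
stringData:
  role_arn: arn:aws:iam::123456789012:role/openshift-logger
```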
1.14.3. Forwarding logs to Amazon CloudWatch from STS-enabled clusters
You can forward logs from logging for Red Hat OpenShift deployed on AWS Security Token Service (STS)-enabled clusters to Amazon CloudWatch. Amazon CloudWatch is a service that helps administrators observe and monitor resources and applications on AWS.
Prerequisites
- Red Hat OpenShift Logging Operator has been installed.
- You have configured a credential secret.
- You have administrator access to Red Hat OpenShift Logging.
Procedure
Create or update a `ClusterLogForwarder` custom resource (CR):

Example `ClusterLogForwarder` CR
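A sketch of such a CR (output name, group name, and region are illustrative); the numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: openshift-logging
spec:
  serviceAccount:
    name: <service_account_name>                    # 1
  outputs:
  - name: cw-output                                 # 2
    type: cloudwatch                                # 3
    cloudwatch:
      groupName: 'cw-projected{.log_type||"missing"}'  # 4
      region: us-east-2                             # 5
      authentication:
        type: iamRole                               # 6
        iamRole:
          roleARN:
            key: role_arn
            secretName: sts-secret                  # 7
          token:
            from: serviceAccount                    # 8
  pipelines:
  - name: cw-pipeline
    inputRefs:                                      # 9
    - application
    outputRefs:                                     # 10
    - cw-output
```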
1. Specify the service account.
2. Specify a name for the output.
3. Specify the `cloudwatch` type.
4. Specify the group name for the log stream.
5. Specify the AWS region.
6. Specify `iamRole` as the authentication type for STS.
7. Specify the name of the secret and the key where the `role_arn` resource is stored.
8. Specify the service account token to use for authentication. To use the projected service account token, use `from: serviceAccount`.
9. Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
10. Specify the names of the outputs to use when forwarding logs with this pipeline.
1.14.4. Configuring content filters to drop unwanted log records
Collecting all cluster logs produces a large amount of data, which can be expensive to move and store. To reduce volume, you can configure the `drop` filter to exclude unwanted log records before forwarding. The log collector evaluates log streams against the filter and drops records that match specified conditions.

The `drop` filter uses the `test` field to define one or more conditions for evaluating log records. The filter applies the following rules to check whether to drop a record:

- A test passes if all its specified conditions evaluate to true.
- If a test passes, the filter drops the log record.
- If you define several tests in the `drop` filter configuration, the filter drops the log record if any of the tests pass.
- If there is an error evaluating a condition, for example, the referenced field is missing, that condition evaluates to false.
Prerequisites
- You have installed the Red Hat OpenShift Logging Operator.
- You have administrator permissions.
- You have created a `ClusterLogForwarder` custom resource (CR).
- You have installed the OpenShift CLI (`oc`).
Procedure
Extract the existing `ClusterLogForwarder` configuration and save it as a local file:

$ oc get clusterlogforwarder <name> -n <namespace> -o yaml > <filename>.yaml

Where:

- `<name>` is the name of the `ClusterLogForwarder` instance you want to configure.
- `<namespace>` is the namespace where you created the `ClusterLogForwarder` instance, for example `openshift-logging`.
- `<filename>` is the name of the local file where you save the configuration.
Add a configuration to drop unwanted log records to the `filters` spec in the `ClusterLogForwarder` CR.

Example `ClusterLogForwarder` CR
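A sketch of such a CR (field paths and expressions are illustrative); the numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: <filter_name>
    type: drop                                    # 1
    drop:                                         # 2
    - test:                                       # 3
      - field: .kubernetes.labels."foo-bar/baz"   # 4
        matches: .+                               # 5
      - field: .kubernetes.pod_name
        notMatches: "my-pod"                      # 6
  pipelines:
  - name: <pipeline_name>                         # 7
    filterRefs:
    - <filter_name>
# ...
```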
1. Specify the type of filter. The `drop` filter drops log records that match the filter configuration.
2. Specify configuration options for the `drop` filter.
3. Specify conditions for tests to evaluate whether the filter drops a log record.
4. Specify dot-delimited paths to fields in log records.
   - Each path segment can contain alphanumeric characters and underscores (`a-z`, `A-Z`, `0-9`, `_`), for example, `.kubernetes.namespace_name`.
   - If segments contain different characters, the segment must be in quotes, for example, `.kubernetes.labels."app.version-1.2/beta"`.
   - You can include several field paths in a single `test` configuration, but they must all evaluate to true for the test to pass and the `drop` filter to apply.
5. Specify a regular expression. If log records match this regular expression, they are dropped.
6. Specify a regular expression. If log records do not match this regular expression, they are dropped.
7. Specify the pipeline that uses the `drop` filter.

Note: You can set either the `matches` or `notMatches` condition for a single `field` path, but not both.

Example configuration that keeps only high-priority log records
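A sketch of such a configuration (the regular expressions are illustrative):

```yaml
# ...
spec:
  filters:
  - name: important
    type: drop
    drop:
    - test:
      - field: .message
        notMatches: "(?i)critical|error"
      - field: .level
        matches: "info|warning"
# ...
```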
Example configuration with several tests
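A sketch of such a configuration; the record is dropped if either test passes:

```yaml
# ...
spec:
  filters:
  - name: important
    type: drop
    drop:
    # First test: drop records from namespaces starting with "open".
    - test:
      - field: .kubernetes.namespace_name
        matches: "^open"
    # Second test: drop application records not from "my-pod".
    - test:
      - field: .log_type
        matches: "application"
      - field: .kubernetes.pod_name
        notMatches: "my-pod"
# ...
```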
Apply the `ClusterLogForwarder` CR by running the following command:

$ oc apply -f <filename>.yaml
1.14.5. API audit filter overview
OpenShift API servers generate audit events for every API call. These events include details about the request, the response, and the identity of the requester. This can lead to large volumes of data.

The API audit filter helps manage the audit trail by using rules to exclude non-essential events and to reduce the event size. Rules are checked in order, and checking stops at the first match. The amount of data in an event depends on the value of the `level` field:

- `None`: The event is dropped.
- `Metadata`: The event includes audit metadata and excludes request and response bodies.
- `Request`: The event includes audit metadata and the request body, and excludes the response body.
- `RequestResponse`: The event includes all data: metadata, request body, and response body. The response body can be very large. For example, `oc get pods -A` generates a response body containing the YAML description of every pod in the cluster.
You can only use the API audit filter feature if the Vector collector is set up in your logging deployment.
The `ClusterLogForwarder` custom resource (CR) uses the same format as the standard Kubernetes audit policy. The `ClusterLogForwarder` CR provides the following additional functions:

- Wildcards: Names of users, groups, namespaces, and resources can have a leading or trailing `*` asterisk character. For example, the `openshift-*` namespace matches the `openshift-apiserver` or `openshift-authentication` namespaces. The `*/status` resource matches `Pod/status` or `Deployment/status` resources.
- Default rules: Events that do not match any rule in the policy are filtered as follows:
  - Read-only system events such as `get`, `list`, and `watch` are dropped.
  - Service account write events that occur within the same namespace as the service account are dropped.
  - All other events are forwarded, subject to any configured rate limits.

  To disable these defaults, either end your rules list with a rule that has only a `level` field or add an empty rule.
- Omit response codes: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the `OmitResponseCodes` field, which lists HTTP status codes for which no events are created. The default value is `[404, 409, 422, 429]`. If the value is an empty list, `[]`, no status codes are omitted.

The `ClusterLogForwarder` CR audit policy acts in addition to the OpenShift Container Platform audit policy. The `ClusterLogForwarder` CR audit filter changes what the log collector forwards, and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store, and a less detailed stream to a remote site.

- You must have the `collect-audit-logs` cluster role to collect the audit logs.
- The following example is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration.
Example audit policy
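A sketch of such a policy, assuming the `kubeAPIAudit` filter type from the observability API (verify against your release):

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: my-policy
    type: kubeAPIAudit
    kubeAPIAudit:
      # Don't generate audit events for requests in the RequestReceived stage.
      omitStages:
      - RequestReceived
      rules:
      # Log pod changes at the RequestResponse level.
      - level: RequestResponse
        resources:
        - group: ""
          resources: ["pods"]
      # Log "pods/log" and "pods/status" at the Metadata level.
      - level: Metadata
        resources:
        - group: ""
          resources: ["pods/log", "pods/status"]
      # Don't log authenticated requests to certain non-resource URL paths.
      - level: None
        userGroups: ["system:authenticated"]
        nonResourceURLs:
        - "/api*"   # wildcard matching
        - "/version"
      # A catch-all rule to log all other requests at the Metadata level.
      - level: Metadata
  pipelines:
  - name: my-pipeline
    inputRefs: ["audit"]
    filterRefs: ["my-policy"]
    outputRefs: ["<output_name>"]
```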
1.14.6. Filtering application logs at input by including label expressions or a matching label key and values
You can include application logs based on label expressions or a matching label key and its values by using the `input` selector.

Procedure

Add a configuration for a filter to the `input` spec in the `ClusterLogForwarder` CR. The following example shows how to configure the `ClusterLogForwarder` CR to include logs based on label expressions or matched label key/values:

Example `ClusterLogForwarder` CR
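A sketch of such a CR (label keys and values are illustrative; `matchExpressions` follows standard Kubernetes selector semantics):

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs
    type: application
    application:
      selector:
        matchExpressions:
        - key: env                  # label key
          operator: In              # In, NotIn, Exists, or DoesNotExist
          values: ["prod", "qa"]
        - key: zone
          operator: NotIn
          values: ["east", "west"]
        matchLabels:                # matched label key/values
          app: one
          name: app1
# ...
```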
Apply the `ClusterLogForwarder` CR by running the following command:

$ oc apply -f <filename>.yaml
1.14.7. Configuring content filters to prune log records
If you configure the `prune` filter, the log collector evaluates log streams against the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations.
Prerequisites
- You have installed the Red Hat OpenShift Logging Operator.
- You have administrator permissions.
- You have created a `ClusterLogForwarder` custom resource (CR).
- You have installed the OpenShift CLI (`oc`).
Procedure
Extract the existing `ClusterLogForwarder` configuration and save it as a local file:

$ oc get clusterlogforwarder <name> -n <namespace> -o yaml > <filename>.yaml

Where:

- `<name>` is the name of the `ClusterLogForwarder` instance you want to configure.
- `<namespace>` is the namespace where you created the `ClusterLogForwarder` instance, for example `openshift-logging`.
- `<filename>` is the name of the local file where you save the configuration.
Add a configuration to prune log records to the `filters` spec in the `ClusterLogForwarder` CR.

Important: If you specify both `in` and `notIn` parameters, the `notIn` array takes precedence over `in` during pruning. After records are pruned by using the `notIn` array, they are then pruned by using the `in` array.

Example `ClusterLogForwarder` CR
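A sketch of such a CR (field paths are illustrative); the numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: <filter_name>
    type: prune                                               # 1
    prune:                                                    # 2
      in: [.kubernetes.annotations, .kubernetes.namespace_id] # 3
      notIn: [.kubernetes, .log_type, .log_source, .message]  # 4
  pipelines:
  - name: <pipeline_name>                                     # 5
    filterRefs:
    - <filter_name>
# ...
```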
1. Specify the type of filter. The `prune` filter prunes log records by configured fields.
2. Specify configuration options for the `prune` filter.
   - The `in` and `notIn` fields are arrays of dot-delimited paths to fields in log records.
   - Each path segment can contain alphanumeric characters and underscores (`a-z`, `A-Z`, `0-9`, `_`), for example, `.kubernetes.namespace_name`.
   - If segments contain different characters, the segment must be in quotes, for example, `.kubernetes.labels."app.version-1.2/beta"`.
3. Optional: Specify fields to remove from the log record. The log collector keeps all other fields.
4. Optional: Specify fields to keep in the log record. The log collector removes all other fields.
5. Specify the pipeline that the `prune` filter is applied to.

Important:
- The filters cannot remove the `.log_type`, `.log_source`, and `.message` fields from the log records. You must include them in the `notIn` field.
- If you use the `googleCloudLogging` output, you must include `.hostname` in the `notIn` field.
Apply the `ClusterLogForwarder` CR by running the following command:

$ oc apply -f <filename>.yaml
1.15. Filtering the audit and infrastructure log inputs by source
You can define the list of `audit` and `infrastructure` sources to collect the logs from by using the `input` selector.

Procedure

Add a configuration to define the `audit` and `infrastructure` sources in the `ClusterLogForwarder` CR. The following example shows how to configure the `ClusterLogForwarder` CR to define `audit` and `infrastructure` sources:

Example `ClusterLogForwarder` CR
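A sketch of such a CR (input names are illustrative); the numbered comments correspond to the callouts below:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs1
    type: infrastructure
    infrastructure:
      sources:        # 1
      - node
  - name: mylogs2
    type: audit
    audit:
      sources:        # 2
      - kubeAPI
      - openshiftAPI
      - ovn
# ...
```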
1. Specifies the list of infrastructure sources to collect. The valid sources include:
   - `node`: Journal logs from the node
   - `container`: Logs from the workloads deployed in the namespaces
2. Specifies the list of audit sources to collect. The valid sources include:
   - `kubeAPI`: Logs from the Kubernetes API servers
   - `openshiftAPI`: Logs from the OpenShift API servers
   - `auditd`: Logs from a node auditd service
   - `ovn`: Logs from an Open Virtual Network (OVN) service
Apply the `ClusterLogForwarder` CR by running the following command:

$ oc apply -f <filename>.yaml
1.16. Filtering application logs at input by including or excluding the namespace or container name
You can include or exclude the application logs based on the namespace and container name by using the `input` selector.

Procedure

Add a configuration to include or exclude the namespace and container names in the `ClusterLogForwarder` CR. The following example shows how to configure the `ClusterLogForwarder` CR to include or exclude namespaces and container names:

Example `ClusterLogForwarder` CR
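A sketch of such a CR (namespace and container names are illustrative):

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs
    type: application
    application:
      includes:
      - namespace: "my-project"      # collect only from this namespace
        container: "my-container"    # and this container
      excludes:
      - namespace: "other-project"   # drop logs from this namespace
        container: "other-container" # and this container
# ...
```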
Note: The `excludes` field takes precedence over the `includes` field.

Apply the `ClusterLogForwarder` CR by running the following command:

$ oc apply -f <filename>.yaml