Chapter 1. Configuring log forwarding
			The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs.
		
Key Functions of the ClusterLogForwarder
- Selects log messages using inputs
- Forwards logs to external destinations using outputs
- Filters, transforms, and drops log messages using filters
- Defines log forwarding pipelines connecting inputs, filters and outputs
1.1. Setting up log collection
This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource.
				The Red Hat OpenShift Logging Operator provides collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.
			
Set up log collection by binding the required cluster roles to your service account.
1.1.1. Legacy service accounts
To use the existing legacy service account logcollector, create the following ClusterRoleBindings:
				
$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector
Additionally, create the following ClusterRoleBinding if collecting audit logs:
$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector
1.1.2. Creating service accounts
Prerequisites
- 
The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace.
- You have administrator permissions.
Procedure
- Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account.
- Bind the appropriate cluster roles to the service account:
  Example binding command
  $ oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>
1.1.2.1. Cluster Role Binding for your Service Account
The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide.
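A minimal sketch of such a binding, matching the numbered callouts below; the metadata.name value and the ServiceAccount name and namespace are assumptions:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: manager-rolebinding            # assumed name
roleRef:                               # 1
  apiGroup: rbac.authorization.k8s.io  # 2
  kind: ClusterRole                    # 3
  name: cluster-logging-operator       # 4
subjects:                              # 5
  - kind: ServiceAccount               # 6
    name: cluster-logging-operator     # 7 (assumed)
    namespace: openshift-logging       # 8 (assumed)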
- 1
- roleRef: References the ClusterRole to which the binding applies.
- 2
- apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system.
- 3
- kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide.
- 4
- name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator.
- 5
- subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole.
- 6
- kind: Specifies that the subject is a ServiceAccount.
- 7
- name: The name of the ServiceAccount being granted the permissions.
- 8
- namespace: Indicates the namespace where the ServiceAccount is located.
1.1.2.2. Writing application logs
The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application.
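A minimal sketch of such a ClusterRole, matching the numbered callouts below; the metadata.name value is an assumption:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-application-logs   # assumed name
rules:                       # 1
  - apiGroups:               # 2
      - loki.grafana.com     # 3
    resources:               # 4
      - application          # 5
    resourceNames:           # 6
      - logs                 # 7
    verbs:                   # 8
      - create               # 9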
- 1
- rules: Specifies the permissions granted by this ClusterRole.
- 2
- apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system.
- 3
- loki.grafana.com: The API group for managing Loki-related resources.
- 4
- resources: The resource type that the ClusterRole grants permission to interact with.
- 5
- application: Refers to the application resources within the Loki logging system.
- 6
- resourceNames: Specifies the names of resources that this role can manage.
- 7
- logs: Refers to the log resources that can be created.
- 8
- verbs: The actions allowed on the resources.
- 9
- create: Grants permission to create new logs in the Loki system.
1.1.2.3. Writing audit logs
The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system.
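A minimal sketch of such a ClusterRole, matching the numbered callouts below; the metadata.name value is an assumption:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-audit-logs   # assumed name
rules:                       # 1
  - apiGroups:               # 2
      - loki.grafana.com     # 3
    resources:               # 4
      - audit                # 5
    resourceNames:           # 6
      - logs                 # 7
    verbs:                   # 8
      - create               # 9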
- 1
- rules: Defines the permissions granted by this ClusterRole.
- 2
- apiGroups: Specifies the API group loki.grafana.com.
- 3
- loki.grafana.com: The API group responsible for Loki logging resources.
- 4
- resources: Refers to the resource type this role manages, in this case, audit.
- 5
- audit: Specifies that the role manages audit logs within Loki.
- 6
- resourceNames: Defines the specific resources that the role can access.
- 7
- logs: Refers to the logs that can be managed under this role.
- 8
- verbs: The actions allowed on the resources.
- 9
- create: Grants permission to create new audit logs.
1.1.2.4. Writing infrastructure logs
The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system.
Sample YAML
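A minimal sketch of such a ClusterRole, matching the numbered callouts below; the metadata.name value is an assumption:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-infrastructure-logs   # assumed name
rules:                       # 1
  - apiGroups:               # 2
      - loki.grafana.com     # 3
    resources:               # 4
      - infrastructure       # 5
    resourceNames:           # 6
      - logs                 # 7
    verbs:                   # 8
      - create               # 9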
- 1
- rules: Specifies the permissions this ClusterRole grants.
- 2
- apiGroups: Specifies the API group for Loki-related resources.
- 3
- loki.grafana.com: The API group managing the Loki logging system.
- 4
- resources: Defines the resource type that this role can interact with.
- 5
- infrastructure: Refers to infrastructure-related resources that this role manages.
- 6
- resourceNames: Specifies the names of resources this role can manage.
- 7
- logs: Refers to the log resources related to infrastructure.
- 8
- verbs: The actions permitted by this role.
- 9
- create: Grants permission to create infrastructure logs in the Loki system.
1.1.2.5. ClusterLogForwarder editor role
The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift.
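A minimal sketch of such a ClusterRole, matching the numbered callouts below; the metadata.name value is an assumption:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterlogforwarder-editor-role   # assumed name
rules:                                    # 1
  - apiGroups:                            # 2
      - observability.openshift.io        # 3
    resources:                            # 4
      - clusterlogforwarders              # 5
    verbs:                                # 6
      - create                            # 7
      - delete                            # 8
      - get                               # 9
      - list                              # 10
      - patch                             # 11
      - update                            # 12
      - watch                             # 13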
- 1
- rules: Specifies the permissions this ClusterRole grants.
- 2
- apiGroups: Refers to the OpenShift-specific API group.
- 3
- observability.openshift.io: The API group for managing observability resources, such as logging.
- 4
- resources: Specifies the resources this role can manage.
- 5
- clusterlogforwarders: Refers to the log forwarding resources in OpenShift.
- 6
- verbs: Specifies the actions allowed on the ClusterLogForwarders.
- 7
- create: Grants permission to create new ClusterLogForwarders.
- 8
- delete: Grants permission to delete existing ClusterLogForwarders.
- 9
- get: Grants permission to retrieve information about specific ClusterLogForwarders.
- 10
- list: Allows listing all ClusterLogForwarders.
- 11
- patch: Grants permission to partially modify ClusterLogForwarders.
- 12
- update: Grants permission to update existing ClusterLogForwarders.
- 13
- watch: Grants permission to monitor changes to ClusterLogForwarders.
1.2. Modifying log level in collector
To modify the log level in the collector, set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, or off.
			
Example log level annotation
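A hedged sketch of the annotation on a ClusterLogForwarder resource; the metadata.name value is an assumption:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector            # assumed name
  annotations:
    observability.openshift.io/log-level: debug   # one of trace, debug, info, warn, error, off
spec:
  # ...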
1.3. Managing the Operator
				The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged:
			
- Managed
- (default) The operator will drive the logging resources to match the desired state in the CLF spec.
- Unmanaged
- The operator will not take any action related to the logging components.
				This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged.
			
1.4. Structure of the ClusterLogForwarder
				The CLF has a spec section that contains the following key components:
			
- Inputs
- 
Select log messages to be forwarded. The built-in input types application, infrastructure, and audit forward logs from different parts of the cluster. You can also define custom inputs.
- Outputs
- Define destinations to forward logs to. Each output has a unique name and type-specific configuration.
- Pipelines
- Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names.
- Filters
- Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline.
1.4.1. Inputs
					Inputs are configured in an array under spec.inputs. There are three built-in input types:
				
- application
- Selects logs from all application containers, excluding those in infrastructure namespaces.
- infrastructure
- Selects logs from nodes and from infrastructure components running in the following namespaces:
  - default
  - kube
  - openshift
  - Namespaces containing the kube- or openshift- prefix
- audit
- Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd.
					Users can define custom inputs of type application that select logs from specific namespaces or using pod labels.
				
1.4.2. Outputs
					Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are:
				
- azureMonitor
- Forwards logs to Azure Monitor.
- cloudwatch
- Forwards logs to AWS CloudWatch.
- elasticsearch
- Forwards logs to an external Elasticsearch instance.
- googleCloudLogging
- Forwards logs to Google Cloud Logging.
- http
- Forwards logs to a generic HTTP endpoint.
- kafka
- Forwards logs to a Kafka broker.
- loki
- Forwards logs to a Loki logging backend.
- lokistack
- Forwards logs to the logging-supported combination of Loki and a web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy.
- otlp
- Forwards logs using the OpenTelemetry Protocol.
- splunk
- Forwards logs to Splunk.
- syslog
- Forwards logs to an external syslog server.
Each output type has its own configuration fields.
1.4.3. Configuring OTLP output
Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding.
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Procedure
- Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation:
  Example ClusterLogForwarder CR
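A hedged sketch, assuming the Technology Preview enablement annotation observability.openshift.io/tech-preview-otlp-output; the output name and receiver URL are placeholders:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  annotations:
    observability.openshift.io/tech-preview-otlp-output: "enabled"   # assumed annotation key
spec:
  serviceAccount:
    name: collector
  outputs:
    - name: otlp-receiver                                    # placeholder name
      type: otlp
      otlp:
        url: https://otlp-receiver.example.com:4318/v1/logs  # placeholder URL
  pipelines:
    - name: all-logs-to-otlp
      inputRefs:
        - application
        - infrastructure
      outputRefs:
        - otlp-receiver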
The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework.
1.4.4. Pipelines
					Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of:
				
- inputRefs
- Names of inputs whose logs should be forwarded to this pipeline.
- outputRefs
- Names of outputs to send logs to.
- filterRefs
- (optional) Names of filters to apply.
The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters.
1.4.5. Filters
					Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them.
				
1.5. About forwarding logs to third-party systems
				To send logs to specific endpoints inside and outside your OpenShift Container Platform cluster, you specify a combination of outputs and pipelines in a ClusterLogForwarder custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes Secret object.
			
- pipeline
- Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:
  - application. Container logs generated by user applications running in the cluster, except infrastructure container applications.
  - infrastructure. Container logs from pods that run in the openshift*, kube*, or default projects and journal logs sourced from the node file system.
  - audit. Audit logs generated by the node audit system, auditd, the Kubernetes API server, the OpenShift API server, and the OVN network.
  You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message.
- input
- Forwards the application logs associated with a specific project to a pipeline. In the pipeline, you define which log types to forward using an inputRef parameter and where to forward the logs to using an outputRef parameter.
- Secret
- 
A key:value map that contains confidential data such as user credentials.
Note the following:
- 
If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types, but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped.
- 
You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols.
The following example forwards the audit logs to a secure external Elasticsearch instance.
Sample log forwarding outputs and pipelines
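A hedged sketch of the outputs and pipelines, matching the numbered callouts below; the output name, URL, and secret name are assumptions:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector
  outputs:
    - name: external-es                      # assumed output name
      type: elasticsearch
      elasticsearch:
        url: https://elasticsearch.example.com:9200
        version: 8                           # 1
        index: '{.log_type||"unknown"}'      # 2
        authentication:
          username:                          # 3
            key: username
            secretName: es-secret            # assumed secret name
          password:                          # 4
            key: password
            secretName: es-secret
      tls:                                   # 5
        ca:
          key: ca-bundle.crt
          secretName: es-secret
        certificate:
          key: tls.crt
          secretName: es-secret
        key:
          key: tls.key
          secretName: es-secret
  pipelines:
    - name: audit-to-es
      inputRefs:
        - audit
      outputRefs:
        - external-es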
- 1
- Forwarding to an external Elasticsearch of version 8.x or greater requires the version field to be specified.
- 2
- index is set to read the field value .log_type and falls back to "unknown" if the field is not found.
- 3 4
- Use the username and password to authenticate to the server.
- 5
- Enable Mutual Transport Layer Security (mTLS) between the collector and Elasticsearch. The spec identifies the keys and secret to the respective certificates that they represent.
Supported Authorization Keys
Common key types are provided here. Some output types support additional specialized keys, documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. OpenShift Logging will not attempt to verify a mismatch between authorization combinations.
- Transport Layer Security (TLS)
- Using a TLS URL (https://... or ssl://...) without a secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a secret and setting the following optional fields:
  - passphrase: (string) Passphrase to decode an encoded TLS private key. Requires tls.key.
  - ca-bundle.crt: (string) File name of a customer CA for server authentication.
- Username and Password
- 
username: (string) Authentication user name. Requires password.
- 
password: (string) Authentication password. Requires username.
- Simple Authentication Security Layer (SASL)
- 
sasl.enable: (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other sasl. keys are set.
- 
sasl.mechanisms: (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used.
- 
sasl.allow-insecure: (boolean) Allow mechanisms that send clear-text passwords. Defaults to false.
1.5.1. Creating a Secret
You can create a secret in the directory that contains your certificate and key files by using the following command:
$ oc create secret generic -n <namespace> <secret_name> \
  --from-file=ca-bundle.crt=<your_bundle_file> \
  --from-literal=username=<your_username> \
  --from-literal=password=<your_password>
Generic or opaque secrets are recommended for best results.
1.6. Creating a log forwarder
				To create a log forwarder, create a ClusterLogForwarder custom resource (CR). This CR defines the service account, permissible input log types, pipelines, outputs, and any optional filters.
			
					You need administrator permissions for the namespace where you create the ClusterLogForwarder CR.
				
ClusterLogForwarder CR example
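A hedged sketch of a complete CR, matching the numbered callouts below; the resource names, URL, and drop-filter condition are assumptions:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  outputs:
    - name: my-output
      type: http                        # 1
      http:
        url: https://log-receiver.example.com
  inputs:                               # 2
    - name: my-app-logs
      type: application
      application:
        includes:
          - namespace: my-project
  filters:                              # 3
    - name: drop-debug-logs
      type: drop
      drop:
        - test:
            - field: .level
              matches: debug
  pipelines:
    - name: my-pipeline
      inputRefs:
        - my-app-logs                   # 4
      outputRefs:
        - my-output                     # 5
      filterRefs:
        - drop-debug-logs               # 6
  serviceAccount:
    name: collector                     # 7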
- 1
- The type of output that you want to forward logs to. The value of this field can be azureMonitor, cloudwatch, elasticsearch, googleCloudLogging, http, kafka, loki, lokistack, otlp, splunk, or syslog.
- 2
- A list of inputs. The names application, audit, and infrastructure are reserved for the default inputs.
- 3
- A list of filters to apply to records going through this pipeline. Each filter is applied in the order defined here. If a filter drops a record, subsequent filters are not applied.
- 4
- This value should be the same as the input name. You can also use the default input names application, infrastructure, and audit.
- 5
- This value should be the same as the output name.
- 6
- This value should be the same as the filter name.
- 7
- The name of your service account.
1.7. Tuning log payloads and delivery
				The tuning spec in the ClusterLogForwarder custom resource (CR) provides a means of configuring your deployment to prioritize either throughput or durability of logs.
			
For example, if you need to reduce the possibility of log loss when the collector restarts, or you require collected log messages to survive a collector restart to support regulatory mandates, you can tune your deployment to prioritize log durability. If you use outputs that have hard limitations on the size of batches they can receive, you may want to tune your deployment to prioritize log throughput.
					To use this feature, your logging deployment must be configured to use the Vector collector. The tuning spec in the ClusterLogForwarder CR is not supported when using the Fluentd collector.
				
				The following example shows the ClusterLogForwarder CR options that you can modify to tune log forwarder outputs:
			
Example ClusterLogForwarder CR tuning options
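A hedged sketch of tuning options on an output, matching the numbered callouts below; the output type, URL, and values are placeholders:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector
  outputs:
    - name: my-output
      type: http
      http:
        url: https://log-receiver.example.com
      tuning:
        deliveryMode: AtLeastOnce       # 1
        compression: gzip               # 2
        maxWrite: 10M                   # 3
        minRetryDuration: 5s            # 4
        maxRetryDuration: 30s           # 5
  pipelines:
    - name: my-pipeline
      inputRefs:
        - application
      outputRefs:
        - my-output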
- 1
- Specify the delivery mode for log forwarding.
  - AtLeastOnce delivery means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash.
  - AtMostOnce delivery means that the log forwarder makes no effort to recover logs lost during a crash. This mode gives better throughput, but may result in greater log loss.
- 2
- Specifying a compression configuration causes data to be compressed before it is sent over the network. Note that not all output types support compression, and if the specified compression type is not supported by the output, this results in an error. For more information, see "Supported compression types for tuning outputs".
- 3
- Specifies a limit for the maximum payload of a single send operation to the output.
- 4
- Specifies a minimum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (ms), seconds (s), or minutes (m).
- 5
- Specifies a maximum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (ms), seconds (s), or minutes (m).
| Compression algorithm | Splunk | Amazon CloudWatch | Elasticsearch 8 | LokiStack | Apache Kafka | HTTP | Syslog | Google Cloud | Microsoft Azure Monitoring |
|---|---|---|---|---|---|---|---|---|---|
| gzip | X | X | X | X |  | X |  |  |  |
| snappy |  | X |  | X | X | X |  |  |  |
| zlib |  | X | X |  |  | X |  |  |  |
| zstd |  | X |  |  | X | X |  |  |  |
| lz4 |  |  |  |  | X |  |  |  |  |
1.7.1. Enabling multi-line exception detection
Enables multi-line error detection of container logs.
Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.
Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.
Example java exception
java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
    at testjava.Main.handle(Main.java:47)
    at testjava.Main.printMe(Main.java:19)
    at testjava.Main.main(Main.java:10)
- To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder custom resource (CR) contains a detectMultilineErrors field under the .spec.filters.
Example ClusterLogForwarder CR
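A hedged sketch of a CR that enables multi-line exception detection; the filter type string shown (detectMultilineException), the names, and the URL are assumptions to be checked against your installed API version:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector
  filters:
    - name: my-multiline-filter
      type: detectMultilineException    # assumed filter type for multi-line error detection
  outputs:
    - name: my-output
      type: http
      http:
        url: https://log-receiver.example.com
  pipelines:
    - name: my-pipeline
      inputRefs:
        - application
      outputRefs:
        - my-output
      filterRefs:
        - my-multiline-filter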
1.7.1.1. Details
When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.
The collector supports the following languages:
- Java
- JS
- Ruby
- Python
- Golang
- PHP
- Dart
1.8. Forwarding logs to Google Cloud Platform (GCP)
You can forward logs to Google Cloud Logging.
Forwarding logs to GCP is not supported on Red Hat OpenShift Service on AWS.
Prerequisites
- Red Hat OpenShift Logging Operator has been installed.
Procedure
- Create a secret using your Google service account key.
  $ oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json=<your_service_account_key_file.json>
- Create a ClusterLogForwarder custom resource (CR) YAML using the template below:
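A hedged sketch matching the numbered callouts below; the project ID, log ID, and names are placeholders, and the googleCloudLogging field layout is an assumption:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector                            # 1
  outputs:
    - name: gcp-logs
      type: googleCloudLogging
      googleCloudLogging:
        authentication:
          credentials:
            key: google-application-credentials.json
            secretName: gcp-secret
        id:
          type: project                        # 2
          value: my-gcp-project                # placeholder project ID
        logId: app-gcp                         # 3
  pipelines:
    - name: gcp-app-logs
      inputRefs:                               # 4
        - application
      outputRefs:
        - gcp-logs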
- 1
- The name of your service account.
- 2
- Set a project, folder, organization, or billingAccount field and its corresponding value, depending on where you want to store your logs in the GCP resource hierarchy.
- 3
- Set the value to add to the logName field of the log entry. The value can be a combination of static and dynamic values consisting of field paths followed by ||, followed by another field path or a static value. A dynamic value must be encased in single curly brackets {} and must end with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes.
- 4
- Specify the names of inputs, defined in the input.name field, for this pipeline. You can also use the built-in values application, infrastructure, and audit.
1.9. Forwarding logs to Splunk
You can forward logs to the Splunk HTTP Event Collector (HEC).
Prerequisites
- Red Hat OpenShift Logging Operator has been installed
- You have obtained a Base64 encoded Splunk HEC token.
Procedure
- Create a secret using your Base64 encoded Splunk HEC token.
  $ oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>
- Create or edit the ClusterLogForwarder custom resource (CR) using the template below:
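A hedged sketch matching the numbered callouts below; the URL, index, source, and field names such as indexedFields and payloadKey are assumptions:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector                              # 1
  outputs:
    - name: splunk-receiver                      # 2
      type: splunk                               # 3
      splunk:
        url: https://splunk.example.com:8088     # 4
        authentication:
          token:
            key: hecToken
            secretName: vector-splunk-secret     # 5
        index: main                              # 6
        source: openshift                        # 7
        indexedFields:                           # 8
          - .log_type
        payloadKey: .message                     # 9
      tuning:
        compression: gzip                        # 10
  pipelines:
    - name: splunk-pipeline
      inputRefs:                                 # 11
        - application
        - infrastructure
      outputRefs:                                # 12
        - splunk-receiver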
- 1
- The name of your service account.
- 2
- Specify a name for the output.
- 3
- Specify the output type as splunk.
- 5
- Specify the name of the secret that contains your HEC token.
- 4
- Specify the URL, including port, of your Splunk HEC.
- 6
- Specify the name of the index to send events to. If you do not specify an index, the default index defined in the Splunk server configuration is used. This is an optional field.
- 7
- Specify the source of events to be sent to this sink. You can configure dynamic per-event values. This field is optional.
- 8
- Specify the fields to be added to the Splunk index. This field is optional.
- 9
- Specify the record field to be used as the payload. This field is optional.
- 10
- Specify the compression configuration, which can be either gzip or none. The default value is none. This field is optional.
- 11
- Specify the input names.
- 12
- Specify the name of the output to use when forwarding logs with this pipeline.
 
1.10. Forwarding logs over HTTP
				To enable forwarding logs over HTTP, specify http as the output type in the ClusterLogForwarder custom resource (CR).
			
Procedure
- Create or edit the ClusterLogForwarder CR using the template below:
  Example ClusterLogForwarder CR
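A hedged sketch matching the numbered callouts below; the header values, proxy URL, destination URL, and secret name are placeholders:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  outputs:
    - name: http-receiver
      type: http
      http:
        headers:                                     # 1
          h1: v1
          h2: v2
        proxyURL: http://proxy.example.com:3128      # 2
        url: https://log-receiver.example.com/logs   # 3
        authentication:
          username:
            key: username
            secretName: http-secret                  # 5
          password:
            key: password
            secretName: http-secret
      tls:
        insecureSkipVerify: false                    # 4
  pipelines:
    - name: http-pipeline
      inputRefs:
        - application
      outputRefs:
        - http-receiver                              # 6
  serviceAccount:
    name: collector                                  # 7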
- 1
- Additional headers to send with the log record.
- 2
- Optional: URL of the HTTP/HTTPS proxy that should be used to forward logs over http or https from this output. This setting overrides any default proxy settings for the cluster or the node.
- 3
- Destination address for logs.
- 4
- Values are either true or false.
- 5
- Secret name for destination credentials.
- 6
- This value should be the same as the output name.
- 7
- The name of your service account.
 
1.11. Forwarding to Azure Monitor Logs
You can forward logs to Azure Monitor Logs. This functionality is provided by the Vector Azure Monitor Logs sink.
Prerequisites
- You have basic familiarity with Azure services.
- You have an Azure account configured for Azure Portal or Azure CLI access.
- You have obtained your Azure Monitor Logs primary or the secondary security key.
- You have determined which log types to forward.
- 
						You installed the OpenShift CLI (oc).
- You have installed Red Hat OpenShift Logging Operator.
- You have administrator permissions.
Procedure
- Enable log forwarding to Azure Monitor Logs via the HTTP Data Collector API:
Create a secret with your shared key:
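A hedged sketch of the secret, matching the callout below; the secret name and key name are assumptions:
apiVersion: v1
kind: Secret
metadata:
  name: azure-monitor-secret         # assumed name
  namespace: openshift-logging
stringData:
  shared_key: <your_shared_key>      # 1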
- 1
- Must contain a primary or secondary key for the Log Analytics workspace making the request. To obtain a shared key, you can use this command in Azure PowerShell:
 
Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName "<resource_name>" -Name "<workspace_name>"
- Create or edit your ClusterLogForwarder CR using the template matching your log selection.
Forward all logs
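A hedged sketch of a CR that forwards all log types, matching the numbered callouts below; the workspace ID, record type, and secret name are placeholders:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector                       # 1
  outputs:
    - name: azure-monitor
      type: azureMonitor
      azureMonitor:
        customerId: my-customer-id        # 2
        logType: my_log_type              # 3
        authentication:
          sharedKey:
            key: shared_key
            secretName: azure-monitor-secret
  pipelines:
    - name: all-logs
      inputRefs:
        - application
        - infrastructure
        - audit
      outputRefs:
        - azure-monitor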
- 1
- The name of your service account.
- 2
- Unique identifier for the Log Analytics workspace. Required field.
- 3
- Record type of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters. For more information, see Azure record type in the Microsoft Azure documentation.
1.12. Forwarding application logs from specific projects
You can forward a copy of the application logs from specific projects to an external log aggregator, in addition to, or instead of, using the internal log store. You must also configure the external log aggregator to receive log data from OpenShift Container Platform.
				To configure forwarding application logs from a project, you must create a ClusterLogForwarder custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs.
			
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
- Create or edit a YAML file that defines the ClusterLogForwarder CR:
  Example ClusterLogForwarder CR
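A hedged sketch matching the numbered callouts below; the input name, namespace, container, label, and output are placeholders, and the openshiftLabels filter type used to attach the labels from callout 4 is an assumption:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector
  inputs:
    - name: my-app-logs                   # 1
      type: application                   # 2
      application:
        includes:                         # 3
          - namespace: my-project
            container: my-container
  filters:
    - name: my-labels
      type: openshiftLabels
      openshiftLabels:                    # 4
        project: my-project
  outputs:
    - name: external-receiver
      type: http
      http:
        url: https://log-receiver.example.com
  pipelines:
    - name: my-app-pipeline               # 5
      inputRefs:
        - my-app-logs
      outputRefs:
        - external-receiver
      filterRefs:
        - my-labels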
- 1
- Specify the name for the input.
- 2
- Specify the type as application to collect logs from applications.
- 3
- Specify the set of namespaces and containers to include when collecting logs.
- 4
- Specify the labels to be applied to log records passing through this pipeline. These labels appear in the openshift.labels map in the log record.
- 5
- Specify a name for the pipeline.
 
- Apply the ClusterLogForwarder CR by running the following command:
  $ oc apply -f <filename>.yaml
1.13. Forwarding application logs from specific pods
As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector.
Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector.
				To specify the pod labels, you use one or more matchLabels key-value pairs. If you specify multiple key-value pairs, the pods must match all of them to be selected.
			
Procedure
- Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the pod labels using simple equality-based selectors under inputs[].application.selector.matchLabels, as shown in the following example.
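A hedged sketch matching the numbered callouts below; the namespaces, labels, and output are placeholders:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector                       # 1
  inputs:
    - name: myAppLogData                  # 2
      type: application                   # 3
      application:
        includes:                         # 4
          - namespace: project1
          - namespace: project2
        selector:
          matchLabels:                    # 5
            environment: production
            app: nginx
  outputs:
    - name: external-receiver
      type: http
      http:
        url: https://log-receiver.example.com
  pipelines:
    - name: my-app-pipeline
      inputRefs:
        - myAppLogData
      outputRefs:
        - external-receiver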
- 1
- Specify the service account name.
- 2
- Specify a name for the input.
- 3
- Specify the type as application to collect logs from applications.
- 4
- Specify the set of namespaces to include when collecting logs.
- 5
- Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
 
- Optional: You can send log data from additional applications that have different pod labels to the same pipeline.
  - For each unique combination of pod labels, create an additional inputs[].name section similar to the one shown.
  - Update the selectors to match the pod labels of this application.
  - Add the new inputs[].name value to inputRefs. For example:
    - inputRefs: [ myAppLogData, myOtherAppLogData ]
- Create the CR object:
  $ oc create -f <file-name>.yaml
1.13.1. Forwarding logs using the syslog protocol
You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OpenShift Container Platform.
					To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.
				
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
- Create or edit a YAML file that defines the ClusterLogForwarder CR object:
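A hedged sketch matching the numbered callouts below; the URL, header values, and secret name are placeholders, and the syslog field names follow the callouts but should be checked against your installed API version:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  outputs:
    - name: rsyslog-east                               # 1
      type: syslog
      syslog:
        appName: <app_name>                            # 2
        facility: local0                               # 3
        msgId: <message_ID>                            # 4
        payloadKey: '{.message}'                       # 5
        procId: <process_ID>                           # 6
        rfc: RFC5424                                   # 7
        severity: informational                        # 8
        url: tls://syslog-receiver.example.com:6514    # 10
      tuning:
        deliveryMode: AtLeastOnce                      # 9
      tls:                                             # 11
        ca:
          key: ca-bundle.crt
          secretName: syslog-secret
  pipelines:
    - name: syslog-pipeline                            # 13
      inputRefs:                                       # 12
        - application
        - infrastructure
      outputRefs:
        - rsyslog-east
  serviceAccount:
    name: collector                                    # 14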
- 1
- Specify a name for the output.
- 2
- Optional: Specify the value for the APP-NAME part of the syslog message header. The value must conform with The Syslog Protocol. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value in curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: <value1>-{.<value2>||"none"}.
- 3
- Optional: Specify the value for the Facility part of the syslog-msg header.
- 4
- Optional: Specify the value for the MSGID part of the syslog-msg header. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 32 characters. You must encase a dynamic value in curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: <value1>-{.<value2>||"none"}.
- 5
- Optional: Specify the record field to use as the payload. The payloadKey value must be a single field path encased in single curly brackets {}. Example: {.<value>}.
- 6
- Optional: Specify the value for the PROCID part of the syslog message header. The value must conform with The Syslog Protocol. The value can be a combination of static and dynamic values consisting of field paths followed by ||, and then followed by another field path or a static value. The maximum length of the final values is truncated to 48 characters. You must encase a dynamic value in curly brackets and the value must be followed with a static fallback value separated with ||. Static values can only contain alphanumeric characters along with dashes, underscores, dots and forward slashes. Example value: <value1>-{.<value2>||"none"}.
- 7
- Optional: Set the RFC that the generated messages conform to. The value can be RFC3164 or RFC5424.
- 8
- Optional: Set the severity level for the message. For more information, see The Syslog Protocol.
- 9
- Optional: Set the delivery mode for log forwarding. The value can be either AtLeastOnce or AtMostOnce.
- 10
- Specify the absolute URL with a scheme. Valid schemes are: tcp, tls, and udp. For example: tls://syslog-receiver.example.com:6514.
- 11
- Specify the settings for controlling options of the transport layer security (TLS) client connections.
- 12
- Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
- 13
- Specify a name for the pipeline.
- 14
- The name of your service account.
 
- Create the CR object:
  $ oc create -f <filename>.yaml
1.13.1.1. Adding log source information to the message output
						You can add namespace_name, pod_name, and container_name elements to the message field of the record by adding the enrichment field to your ClusterLogForwarder custom resource (CR).
					
This configuration is compatible with both RFC3164 and RFC5424.
Example syslog message output with enrichment: None
2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: {...}
Example syslog message output with enrichment: KubernetesMinimal
2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...}
1.14. Forwarding logs to Amazon CloudWatch from STS-enabled clusters
Amazon CloudWatch is a service that helps administrators observe and monitor resources and applications on Amazon Web Services (AWS). You can forward logs from OpenShift Logging to CloudWatch securely by leveraging AWS’s Identity and Access Management (IAM) Roles for Service Accounts (IRSA), which uses AWS Security Token Service (STS).
The authentication with CloudWatch works as follows:
- The log collector requests temporary AWS credentials from Security Token Service (STS) by presenting its service account token to the OpenID Connect (OIDC) provider in AWS.
- AWS validates the token. Afterward, depending on the trust policy, AWS issues short-lived, temporary credentials, including an access key ID, secret access key, and session token, for the log collector to use.
				On STS-enabled clusters such as Red Hat OpenShift Service on AWS, AWS roles are pre-configured with the required trust policies. This allows service accounts to assume the roles. Therefore, you can create a secret for AWS with STS that uses the IAM role. You can then create or update a ClusterLogForwarder custom resource (CR) that uses the secret to forward logs to CloudWatch output. Follow these procedures to create a secret and a ClusterLogForwarder CR if roles have been pre-configured:
			
- Creating a secret for CloudWatch with an existing AWS role
- Forwarding logs to Amazon CloudWatch from STS-enabled clusters
				If you do not have an AWS IAM role pre-configured with trust policies, you must first create the role with the required trust policies. Complete the following procedures to create a secret, ClusterLogForwarder CR, and role.
			
1.14.1. Creating an AWS IAM role
Create an Amazon Web Services (AWS) IAM role that your service account can assume to securely access AWS resources.
					The following procedure demonstrates creating an AWS IAM role by using the AWS CLI. You can alternatively use the Cloud Credential Operator (CCO) utility ccoctl. Using the ccoctl utility creates many fields in the IAM role policy that are not required by the ClusterLogForwarder custom resource (CR). These extra fields are ignored by the CR. However, the ccoctl utility provides a convenient way for configuring IAM roles. For more information see Manual mode with short-term credentials for components.
				
Prerequisites
- You have access to a Red Hat OpenShift Logging cluster with Security Token Service (STS) enabled and configured for AWS.
- You have administrator access to the AWS account.
- You have installed the AWS CLI.
Procedure
- Create an IAM policy that grants permissions to write logs to CloudWatch.
  Create a file, for example cw-iam-role-policy.json, with the following content:
- Create the IAM policy based on the previous policy definition by running the following command:
  $ aws iam create-policy \
    --policy-name cluster-logging-allow \
    --policy-document file://cw-iam-role-policy.json
  Note the Arn value of the created policy.
 
- Create a trust policy to allow the logging service account to assume an IAM role:
  Create a file, for example cw-trust-policy.json, with the following content:
 
- Create an IAM role based on the previously defined trust policy by running the following command:
  $ aws iam create-role --role-name openshift-logger --assume-role-policy-document file://cw-trust-policy.json
  Note the Arn value of the created role.
- Attach the policy to the role by running the following command:
  $ aws iam put-role-policy \
    --role-name openshift-logger --policy-name cluster-logging-allow \
    --policy-document file://cw-role-policy.json
Verification
- Verify the role and the permissions policy by running the following command:
  $ aws iam get-role --role-name openshift-logger
  Example output
  ROLE arn:aws:iam::123456789012:role/openshift-logger ASSUMEROLEPOLICYDOCUMENT 2012-10-17 STATEMENT sts:AssumeRoleWithWebIdentity Allow STRINGEQUALS system:serviceaccount:openshift-logging:openshift-logger PRINCIPAL arn:aws:iam::123456789012:oidc-provider/<OPENSHIFT_OIDC_PROVIDER_URL>
1.14.2. Creating a secret for AWS CloudWatch with an existing AWS role
					Create a secret for Amazon Web Services (AWS) Security Token Service (STS) from the configured AWS IAM role by using the oc create secret --from-literal command.
				
Prerequisites
- You have created an AWS IAM role.
- You have administrator access to Red Hat OpenShift Logging.
Procedure
- In the CLI, enter the following to generate a secret for AWS:
  $ oc create secret generic sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/openshift-logger
  Example Secret
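A hedged sketch of the resulting secret; the role ARN matches the example role created earlier:
apiVersion: v1
kind: Secret
metadata:
  name: sts-secret
  namespace: openshift-logging
stringData:
  role_arn: arn:aws:iam::123456789012:role/openshift-logger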
1.14.3. Forwarding logs to Amazon CloudWatch from STS-enabled clusters
You can forward logs from logging for Red Hat OpenShift deployed on clusters with Amazon Web Services (AWS) Security Token Service (STS)-enabled to Amazon CloudWatch. Amazon CloudWatch is a service that helps administrators observe and monitor resources and applications on AWS.
Prerequisites
- Red Hat OpenShift Logging Operator has been installed.
- You have configured a credential secret.
- You have administrator access to Red Hat OpenShift Logging.
Procedure
- Create or update a ClusterLogForwarder custom resource (CR):
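A hedged sketch matching the numbered callouts below; the group name, region, and names are placeholders:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: openshift-logger                              # 1
  outputs:
    - name: cw-output                                   # 2
      type: cloudwatch                                  # 3
      cloudwatch:
        groupName: 'my-cluster-{.log_type||"unknown"}'  # 4
        region: us-east-1                               # 5
        authentication:
          type: iamRole                                 # 6
          iamRole:
            roleARN:
              key: role_arn
              secretName: sts-secret                    # 7
            token:
              from: serviceAccount                      # 8
  pipelines:
    - name: to-cloudwatch
      inputRefs:                                        # 9
        - application
        - infrastructure
        - audit
      outputRefs:                                       # 10
        - cw-output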
- 1
- Specify the service account.
- 2
- Specify a name for the output.
- 3
- Specify the cloudwatch type.
- 4
- Specify the group name for the log stream.
- 5
- Specify the AWS region.
- 6
- Specify iamRole as the authentication type for STS.
- 7
- Specify the name of the secret and the key where the role_arn resource is stored.
- 8
- Specify the service account token to use for authentication. To use the projected service account token, use from: serviceAccount.
- 9
- Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
- 10
- Specify the names of the outputs to use when forwarding logs with this pipeline.
 
1.14.4. Configuring content filters to drop unwanted log records
					Collecting all cluster logs produces a large amount of data, which can be expensive to move and store. To reduce volume, you can configure the drop filter to exclude unwanted log records before forwarding. The log collector evaluates log streams against the filter and drops records that match specified conditions.
				
					The drop filter uses the test field to define one or more conditions for evaluating log records. The filter applies the following rules to check whether to drop a record:
				
- A test passes if all its specified conditions evaluate to true.
- If a test passes, the filter drops the log record.
- 
If you define several tests in the drop filter configuration, the filter drops the log record if any of the tests pass.
- If there is an error evaluating a condition, for example, the referenced field is missing, that condition evaluates to false.
Prerequisites
- You have installed the Red Hat OpenShift Logging Operator.
- You have administrator permissions.
- 
You have created a ClusterLogForwarder custom resource (CR).
- 
							You have installed the OpenShift CLI (oc).
Procedure
- Extract the existing ClusterLogForwarder configuration and save it as a local file.
  $ oc get clusterlogforwarder <name> -n <namespace> -o yaml > <filename>.yaml
  Where:
  - <name> is the name of the ClusterLogForwarder instance you want to configure.
  - <namespace> is the namespace where you created the ClusterLogForwarder instance, for example openshift-logging.
  - <filename> is the name of the local file where you save the configuration.
- Add a configuration to drop unwanted log records to the filters spec in the ClusterLogForwarder CR.
  Example ClusterLogForwarder CR
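A hedged sketch matching the numbered callouts below; the field paths, regular expressions, and output are placeholders:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector
  outputs:
    - name: my-output
      type: http
      http:
        url: https://log-receiver.example.com
  filters:
    - name: drop-unwanted
      type: drop                                        # 1
      drop:                                             # 2
        - test:                                         # 3
            - field: .kubernetes.namespace_name         # 4
              matches: "^my-noisy-namespace$"           # 5
            - field: .level
              notMatches: "error|critical"              # 6
  pipelines:
    - name: my-pipeline
      inputRefs:
        - application
      outputRefs:
        - my-output
      filterRefs:
        - drop-unwanted                                 # 7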
- 1
- Specify the type of filter. The drop filter drops log records that match the filter configuration.
- 2
- Specify configuration options for the drop filter.
- 3
- Specify conditions for tests to evaluate whether the filter drops a log record.
- 4
- Specify dot-delimited paths to fields in log records.
  - Each path segment can contain alphanumeric characters and underscores, a-z, A-Z, 0-9, _, for example, .kubernetes.namespace_name.
  - If segments contain different characters, the segment must be in quotes, for example, .kubernetes.labels."app.version-1.2/beta".
  - You can include several field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to apply.
- 5
- Specify a regular expression. If log records match this regular expression, they are dropped.
- 6
- Specify a regular expression. If log records do not match this regular expression, they are dropped.
- 7
- Specify the pipeline that uses the drop filter.
Note: You can set either the matches or notMatches condition for a single field path, but not both.
Example configuration that keeps only high-priority log records
Example configuration with several tests
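Hedged sketches of the two configurations named above, shown as filters fragments of the CR spec; the field paths and regular expressions are placeholders:
filters:
  - name: important                        # Example: keep only high-priority records
    type: drop
    drop:
      - test:
          - field: .message
            notMatches: "(?i)critical|error"
          - field: .level
            matches: "info|warning"
filters:
  - name: several-tests                    # Example: several tests; a record is dropped if any one test passes
    type: drop
    drop:
      - test:
          - field: .kubernetes.namespace_name
            matches: "^open"
      - test:
          - field: .log_type
            matches: "application"
          - field: .kubernetes.pod_name
            notMatches: "my-pod"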
- Apply the ClusterLogForwarder CR by running the following command:
  $ oc apply -f <filename>.yaml
1.14.5. API audit filter overview
OpenShift API servers generate audit events for every API call. These events include details about the request, the response, and the identity of the requester. This can lead to large volumes of data.
					The API audit filter helps manage the audit trail by using rules to exclude non-essential events and to reduce the event size. Rules are checked in order, and checking stops at the first match. The amount of data in an event depends on the value of the level field:
				
- 
							None: The event is dropped.
- 
							Metadata: The event includes audit metadata and excludes request and response bodies.
- 
							Request: The event includes audit metadata and the request body, and excludes the response body.
- 
RequestResponse: The event includes all data: metadata, request body, and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster.
You can only use the API audit filter feature if the Vector collector is set up in your logging deployment.
					The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy. The ClusterLogForwarder CR provides the following additional functions:
				
- Wildcards
- 
Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the openshift-* namespace matches the openshift-apiserver or openshift-authentication namespaces. The */status resource matches Pod/status or Deployment/status resources.
- Default Rules
- Events that do not match any rule in the policy are filtered as follows:
  - Read-only system events such as get, list, and watch are dropped.
  - Service account write events that occur within the same namespace as the service account are dropped.
  - All other events are forwarded, subject to any configured rate limits.
  To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule.
- Omit Response Codes
- 
A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the omitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], no status codes are omitted.
					The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards, and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store, and a less detailed stream to a remote site.
				
- 
You must have the collect-audit-logs cluster role to collect the audit logs.
- The following example is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration.
Example audit policy
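A hedged sketch of an audit policy filter; the kubeAPIAudit filter type, names, URL, and rule contents are assumptions intended only to illustrate the rule format:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector
  outputs:
    - name: my-output
      type: http
      http:
        url: https://log-receiver.example.com
  filters:
    - name: my-audit-policy
      type: kubeAPIAudit                      # assumed filter type
      kubeAPIAudit:
        omitResponseCodes:
          - 404
          - 409
        rules:
          # Log requests about pods at the RequestResponse level
          - level: RequestResponse
            resources:
              - resources: ["pods"]
          # Log pod log and status subresources at the Metadata level
          - level: Metadata
            resources:
              - resources: ["pods/log", "pods/status"]
          # Do not log authenticated requests to certain non-resource URLs
          - level: None
            userGroups: ["system:authenticated"]
            nonResourceURLs: ["/api*", "/version"]
          # Catch-all: log everything else at the Metadata level
          - level: Metadata
  pipelines:
    - name: my-pipeline
      inputRefs:
        - audit
      outputRefs:
        - my-output
      filterRefs:
        - my-audit-policy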
1.14.6. Filtering application logs at input by including the label expressions or a matching label key and values
					You can include the application logs based on the label expressions or a matching label key and its values by using the input selector.
				
Procedure
- Add a configuration for a filter to the input spec in the ClusterLogForwarder CR.
  The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values:
  Example ClusterLogForwarder CR
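A hedged sketch of an input selector that uses both label expressions and matched label key/values; the keys, values, and output are placeholders:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector
  inputs:
    - name: mylogs
      type: application
      application:
        selector:
          matchExpressions:
            - key: env
              operator: In
              values: ["prod", "qa"]
            - key: zone
              operator: NotIn
              values: ["east", "west"]
          matchLabels:
            app: one
            name: app1
  outputs:
    - name: external-receiver
      type: http
      http:
        url: https://log-receiver.example.com
  pipelines:
    - name: my-pipeline
      inputRefs:
        - mylogs
      outputRefs:
        - external-receiver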
- Apply the ClusterLogForwarder CR by running the following command:
  $ oc apply -f <filename>.yaml
1.14.7. Configuring content filters to prune log records
					If you configure the prune filter, the log collector evaluates log streams against the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations.
				
Prerequisites
- You have installed the Red Hat OpenShift Logging Operator.
- You have administrator permissions.
- 
You have created a ClusterLogForwarder custom resource (CR).
- 
							You have installed the OpenShift CLI (oc).
Procedure
- Extract the existing ClusterLogForwarder configuration and save it as a local file.
  $ oc get clusterlogforwarder <name> -n <namespace> -o yaml > <filename>.yaml
  Where:
  - <name> is the name of the ClusterLogForwarder instance you want to configure.
  - <namespace> is the namespace where you created the ClusterLogForwarder instance, for example openshift-logging.
  - <filename> is the name of the local file where you save the configuration.
- Add a configuration to prune log records to the filters spec in the ClusterLogForwarder CR.
  Important: If you specify both in and notIn parameters, the notIn array takes precedence over in during pruning. After records are pruned by using the notIn array, they are then pruned by using the in array.
  Example ClusterLogForwarder CR
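A hedged sketch matching the numbered callouts below; the field paths and output are placeholders:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector
  outputs:
    - name: my-output
      type: http
      http:
        url: https://log-receiver.example.com
  filters:
    - name: prune-fields
      type: prune                                                                   # 1
      prune:                                                                        # 2
        in: [.kubernetes.annotations, .kubernetes.labels."app.version-1.2/beta"]    # 3
        notIn: [.log_type, .log_source, .message, .kubernetes.container_name]       # 4
  pipelines:
    - name: my-pipeline
      inputRefs:
        - application
      outputRefs:
        - my-output
      filterRefs:
        - prune-fields                                                              # 5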
- 1
- Specify the type of filter. The prune filter prunes log records by configured fields.
- 2
- Specify configuration options for the prune filter.
  - The in and notIn fields are arrays of dot-delimited paths to fields in log records.
  - Each path segment can contain alphanumeric characters and underscores, a-z, A-Z, 0-9, _, for example, .kubernetes.namespace_name.
  - If segments contain different characters, the segment must be in quotes, for example, .kubernetes.labels."app.version-1.2/beta".
- 3
- Optional: Specify fields to remove from the log record. The log collector keeps all other fields.
- 4
- Optional: Specify fields to keep in the log record. The log collector removes all other fields.
- 5
- Specify the pipeline that the prune filter is applied to.
  Important:
  - The filters cannot remove the .log_type, .log_source, and .message fields from the log records. You must include them in the notIn field.
  - If you use the googleCloudLogging output, you must include .hostname in the notIn field.
- Apply the ClusterLogForwarder CR by running the following command:
  $ oc apply -f <filename>.yaml
1.15. Filtering the audit and infrastructure log inputs by source
				You can define the list of audit and infrastructure sources to collect the logs by using the input selector.
			
Procedure
- Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR.
  The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources:
  Example ClusterLogForwarder CR
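A hedged sketch matching the numbered callouts below; the input names and output are placeholders:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector
  inputs:
    - name: infra-node-logs
      type: infrastructure
      infrastructure:
        sources:                        # 1
          - node
    - name: audit-logs
      type: audit
      audit:
        sources:                        # 2
          - kubeAPI
          - openshiftAPI
          - auditd
          - ovn
  outputs:
    - name: external-receiver
      type: http
      http:
        url: https://log-receiver.example.com
  pipelines:
    - name: my-pipeline
      inputRefs:
        - infra-node-logs
        - audit-logs
      outputRefs:
        - external-receiver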
- 1
- Specifies the list of infrastructure sources to collect. The valid sources include:
  - node: Journal logs from the node
  - container: Logs from the workloads deployed in the namespaces
- 2
- Specifies the list of audit sources to collect. The valid sources include:
  - kubeAPI: Logs from the Kubernetes API servers
  - openshiftAPI: Logs from the OpenShift API servers
  - auditd: Logs from a node auditd service
  - ovn: Logs from an open virtual network service
- Apply the ClusterLogForwarder CR by running the following command:
  $ oc apply -f <filename>.yaml
1.16. Filtering application logs at input by including or excluding the namespace or container name
				You can include or exclude the application logs based on the namespace and container name by using the input selector.
			
Procedure
- Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR.
  The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names:
  Note: The excludes field takes precedence over the includes field.
  Example ClusterLogForwarder CR
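A hedged sketch of an input that includes and excludes namespaces and containers; the names and output are placeholders:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  serviceAccount:
    name: collector
  inputs:
    - name: mylogs
      type: application
      application:
        includes:
          - namespace: "my-project"
            container: "my-container"
        excludes:
          - namespace: "other-namespace"
            container: "other-container*"
  outputs:
    - name: external-receiver
      type: http
      http:
        url: https://log-receiver.example.com
  pipelines:
    - name: my-pipeline
      inputRefs:
        - mylogs
      outputRefs:
        - external-receiver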
- Apply the ClusterLogForwarder CR by running the following command:
  $ oc apply -f <filename>.yaml