Chapter 7. Forwarding logs to external third-party logging systems
By default, the logging subsystem sends container and infrastructure logs to the default internal Elasticsearch log store defined in the ClusterLogging
custom resource. However, it does not send audit logs to the internal store because the internal store does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.
To send logs to other log aggregators, you use the OpenShift Container Platform Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. In addition, you can send different types of logs to various systems so that various individuals can access each type. You can also enable Transport Layer Security (TLS) support to send logs securely, as required by your organization.
To send audit logs to the default internal Elasticsearch log store, use the Cluster Log Forwarder as described in Forward audit logs to the log store.
When you forward logs externally, the logging subsystem creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.
You cannot use the config map methods and the Cluster Log Forwarder in the same cluster.
7.1. About forwarding logs to third-party systems
To send logs to specific endpoints inside and outside your OpenShift Container Platform cluster, you specify a combination of outputs and pipelines in a ClusterLogForwarder
custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes Secret object.
- output
  The destination for log data that you define, or where you want the logs sent. An output can be one of the following types:
  - elasticsearch. An external Elasticsearch instance. The elasticsearch output can use a TLS connection.
  - fluentdForward. An external log aggregation solution that supports Fluentd. This option uses the Fluentd forward protocols. The fluentdForward output can use a TCP or TLS connection and supports shared-key authentication by providing a shared_key field in a secret. Shared-key authentication can be used with or without TLS.
  - syslog. An external log aggregation solution that supports the syslog RFC3164 or RFC5424 protocols. The syslog output can use a UDP, TCP, or TLS connection.
  - cloudwatch. Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS).
  - loki. Loki, a horizontally scalable, highly available, multi-tenant log aggregation system.
  - kafka. A Kafka broker. The kafka output can use a TCP or TLS connection.
  - default. The internal OpenShift Container Platform Elasticsearch instance. You are not required to configure the default output. If you do configure a default output, you receive an error message because the default output is reserved for the Red Hat OpenShift Logging Operator.
- pipeline
  Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:
  - application. Container logs generated by user applications running in the cluster, except infrastructure container applications.
  - infrastructure. Container logs from pods that run in the openshift*, kube*, or default projects, and journal logs sourced from the node file system.
  - audit. Audit logs generated by the node audit system, auditd, the Kubernetes API server, the OpenShift API server, and the OVN network.
  You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers, or label the logs by type. Labels that are added to objects are also forwarded with the log message.
- input
  Forwards the application logs associated with a specific project to a pipeline.
  In the pipeline, you define which log types to forward by using an inputRef parameter and where to forward the logs by using an outputRef parameter.
- Secret
  A key:value map that contains confidential data such as user credentials.
Note the following:
- If a ClusterLogForwarder CR object exists, logs are not forwarded to the default Elasticsearch instance unless there is a pipeline with the default output.
- By default, the logging subsystem sends container and infrastructure logs to the default internal Elasticsearch log store defined in the ClusterLogging custom resource. However, it does not send audit logs to the internal store because the internal store does not provide secure storage. If this default configuration meets your needs, do not configure the Log Forwarding API.
- If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped.
- You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols.
- The internal OpenShift Container Platform Elasticsearch instance does not provide secure storage for audit logs. We recommend that you ensure the system to which you forward audit logs is compliant with your organizational and governmental regulations and is properly secured. The logging subsystem does not comply with those regulations.
The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the my-project project to the internal Elasticsearch instance.
Sample log forwarding outputs and pipelines
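The following ClusterLogForwarder CR is a representative sketch of such a configuration. The output names, URLs, secret names, Kafka topic, and the my-project namespace are illustrative placeholders, and the numbered comments correspond to the callouts that follow.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                              # 1
  namespace: openshift-logging                # 2
spec:
  outputs:
   - name: elasticsearch-secure               # 3
     type: elasticsearch
     url: https://elasticsearch.secure.com:9200
     secret:
        name: elasticsearch
   - name: elasticsearch-insecure             # 4
     type: elasticsearch
     url: http://elasticsearch.insecure.com:9200
   - name: kafka-app                          # 5
     type: kafka
     url: tls://kafka.secure.com:9093/app-topic
  inputs:                                     # 6
   - name: my-app-logs
     application:
        namespaces:
        - my-project
  pipelines:
   - name: audit-logs                         # 7
     inputRefs:
      - audit
     outputRefs:
      - elasticsearch-secure
      - default
     parse: json                              # 8
     labels:
       secure: "true"                         # 9
       datacenter: "east"
   - name: infrastructure-logs                # 10
     inputRefs:
      - infrastructure
     outputRefs:
      - elasticsearch-insecure
     labels:
       datacenter: "west"
   - name: my-app                             # 11
     inputRefs:
      - my-app-logs
     outputRefs:
      - default
   - inputRefs:                               # 12
      - application
     outputRefs:
      - kafka-app
     labels:
       datacenter: "south"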
1. The name of the ClusterLogForwarder CR must be instance.
2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
3. Configuration for a secure Elasticsearch output using a secret with a secure URL:
   - A name to describe the output.
   - The type of output: elasticsearch.
   - The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
   - The secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project.
4. Configuration for an insecure Elasticsearch output:
   - A name to describe the output.
   - The type of output: elasticsearch.
   - The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
5. Configuration for a Kafka output that uses client-authenticated TLS communication over a secure URL:
   - A name to describe the output.
   - The type of output: kafka.
   - The URL and port of the Kafka broker as a valid absolute URL, including the prefix.
6. Configuration for an input to filter application logs from the my-project namespace.
7. Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance:
   - A name to describe the pipeline.
   - The inputRefs is the log type, in this example audit.
   - The outputRefs is the name of the output to use, in this example elasticsearch-secure to forward to the secure Elasticsearch instance and default to forward to the internal Elasticsearch instance.
   - Optional: Labels to add to the logs.
8. Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
9. Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
10. Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
11. Configuration for a pipeline to send logs from the my-project project to the internal Elasticsearch instance:
    - A name to describe the pipeline.
    - The inputRefs is a specific input: my-app-logs.
    - The outputRefs is default.
    - Optional: String. One or more labels to add to the logs.
12. Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
    - The inputRefs is the log type, in this example application.
    - The outputRefs is the name of the output to use.
    - Optional: String. One or more labels to add to the logs.
Fluentd log handling when the external log aggregator is unavailable
If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. OpenShift Container Platform rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods.
Supported Authorization Keys
Common key types are provided here. Some output types support additional specialized keys, which are documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. OpenShift Logging will not attempt to verify a mismatch between authorization combinations.
- Transport Layer Security (TLS)
  Using a TLS URL ('https://...' or 'ssl://...') without a Secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a Secret and setting the following optional fields:
  - tls.crt: (string) File name containing a client certificate. Enables mutual authentication. Requires tls.key.
  - tls.key: (string) File name containing the private key to unlock the client certificate. Requires tls.crt.
  - passphrase: (string) Passphrase to decode an encoded TLS private key. Requires tls.key.
  - ca-bundle.crt: (string) File name of a customer CA for server authentication.
- Username and Password
  - username: (string) Authentication user name. Requires password.
  - password: (string) Authentication password. Requires username.
- Simple Authentication Security Layer (SASL)
  - sasl.enable: (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other sasl. keys are set.
  - sasl.mechanisms: (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used.
  - sasl.allow-insecure: (boolean) Allow mechanisms that send clear-text passwords. Defaults to false.
7.1.1. Creating a Secret
You can create a secret in the directory that contains your certificate and key files by using the following command:
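For example, a command of the following form creates a generic secret from local certificate and key files. The secret name and file names are placeholders; include only the keys that your output requires.
$ oc create secret generic <secret_name> \
    --from-file=tls.crt=<your_certificate_file> \
    --from-file=tls.key=<your_key_file> \
    --from-file=ca-bundle.crt=<your_ca_bundle_file> \
    -n openshift-logging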
Generic or opaque secrets are recommended for best results.
7.2. Supported log data output types in OpenShift Logging 5.1
Red Hat OpenShift Logging 5.1 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
| Output types | Protocols | Tested with |
|---|---|---|
| elasticsearch | elasticsearch | Elasticsearch 6.8.1, Elasticsearch 6.8.4, Elasticsearch 7.12.2 |
| fluentdForward | fluentd forward v1 | fluentd 1.7.4, logstash 7.10.1 |
| kafka | kafka 0.11 | kafka 2.4.1, kafka 2.7.0 |
| syslog | RFC-3164, RFC-5424 | rsyslog-8.39.0 |
Previously, the syslog output supported only RFC-3164. The current syslog output adds support for RFC-5424.
7.3. Supported log data output types in OpenShift Logging 5.2
Red Hat OpenShift Logging 5.2 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
| Output types | Protocols | Tested with |
|---|---|---|
| Amazon CloudWatch | REST over HTTPS | The current version of Amazon CloudWatch |
| elasticsearch | elasticsearch | Elasticsearch 6.8.1, Elasticsearch 6.8.4, Elasticsearch 7.12.2 |
| fluentdForward | fluentd forward v1 | fluentd 1.7.4, logstash 7.10.1 |
| Loki | REST over HTTP and HTTPS | Loki 2.3.0 deployed on OCP and Grafana labs |
| kafka | kafka 0.11 | kafka 2.4.1, kafka 2.7.0 |
| syslog | RFC-3164, RFC-5424 | rsyslog-8.39.0 |
7.4. Supported log data output types in OpenShift Logging 5.3
Red Hat OpenShift Logging 5.3 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
| Output types | Protocols | Tested with |
|---|---|---|
| Amazon CloudWatch | REST over HTTPS | The current version of Amazon CloudWatch |
| elasticsearch | elasticsearch | Elasticsearch 7.10.1 |
| fluentdForward | fluentd forward v1 | fluentd 1.7.4, logstash 7.10.1 |
| Loki | REST over HTTP and HTTPS | Loki 2.2.1 deployed on OCP |
| kafka | kafka 0.11 | kafka 2.7.0 |
| syslog | RFC-3164, RFC-5424 | rsyslog-8.39.0 |
7.5. Supported log data output types in OpenShift Logging 5.4
Red Hat OpenShift Logging 5.4 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
| Output types | Protocols | Tested with |
|---|---|---|
| Amazon CloudWatch | REST over HTTPS | The current version of Amazon CloudWatch |
| elasticsearch | elasticsearch | Elasticsearch 7.10.1 |
| fluentdForward | fluentd forward v1 | fluentd 1.14.5, logstash 7.10.1 |
| Loki | REST over HTTP and HTTPS | Loki 2.2.1 deployed on OCP |
| kafka | kafka 0.11 | kafka 2.7.0 |
| syslog | RFC-3164, RFC-5424 | rsyslog-8.39.0 |
7.6. Supported log data output types in OpenShift Logging 5.5
Red Hat OpenShift Logging 5.5 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
| Output types | Protocols | Tested with |
|---|---|---|
| Amazon CloudWatch | REST over HTTPS | The current version of Amazon CloudWatch |
| elasticsearch | elasticsearch | Elasticsearch 7.10.1 |
| fluentdForward | fluentd forward v1 | fluentd 1.14.6, logstash 7.10.1 |
| Loki | REST over HTTP and HTTPS | Loki 2.5.0 deployed on OCP |
| kafka | kafka 0.11 | kafka 2.7.0 |
| syslog | RFC-3164, RFC-5424 | rsyslog-8.39.0 |
7.7. Supported log data output types in OpenShift Logging 5.6
Red Hat OpenShift Logging 5.6 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
| Output types | Protocols | Tested with |
|---|---|---|
| Amazon CloudWatch | REST over HTTPS | The current version of Amazon CloudWatch |
| elasticsearch | elasticsearch | Elasticsearch 6.8.23, Elasticsearch 7.10.1, Elasticsearch 8.6.1 |
| fluentdForward | fluentd forward v1 | fluentd 1.14.6, logstash 7.10.1 |
| Loki | REST over HTTP and HTTPS | Loki 2.5.0 deployed on OCP |
| kafka | kafka 0.11 | kafka 2.7.0 |
| syslog | RFC-3164, RFC-5424 | rsyslog-8.39.0 |
Fluentd does not support Elasticsearch 8 as of version 5.6.2. Vector does not support fluentd, logstash, or rsyslog before version 5.7.0.
7.8. Forwarding logs to an external Elasticsearch instance
You can optionally forward logs to an external Elasticsearch instance in addition to, or instead of, the internal OpenShift Container Platform Elasticsearch instance. You are responsible for configuring the external log aggregator to receive log data from OpenShift Container Platform.
To configure log forwarding to an external Elasticsearch instance, you must create a ClusterLogForwarder
custom resource (CR) with an output to that instance, and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection.
To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the default
output to forward logs to the internal instance. You do not need to create a default
output. If you do configure a default
output, you receive an error message because the default
output is reserved for the Red Hat OpenShift Logging Operator.
If you want to forward logs to only the internal OpenShift Container Platform Elasticsearch instance, you do not need to create a ClusterLogForwarder
CR.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the
ClusterLogForwarder
CR object:
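The following is a representative sketch of such a CR. The output names, URLs, secret name, and label values are placeholders, and the numbered comments correspond to the callouts below.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                                   # 1
  namespace: openshift-logging                     # 2
spec:
  outputs:
   - name: elasticsearch-insecure                  # 3
     type: "elasticsearch"                         # 4
     url: http://elasticsearch.insecure.com:9200   # 5
   - name: elasticsearch-secure
     type: "elasticsearch"
     url: https://elasticsearch.secure.com:9200    # 6
     secret:
        name: es-secret                            # 7
  pipelines:
   - name: application-logs                        # 8
     inputRefs:                                    # 9
     - application
     - audit
     outputRefs:
     - elasticsearch-secure                        # 10
     - default                                     # 11
     parse: json                                   # 12
     labels:
       myLabel: "myValue"                          # 13
   - name: infrastructure-audit-logs               # 14
     inputRefs:
     - infrastructure
     outputRefs:
     - elasticsearch-insecure
     labels:
       logs: "audit-infra"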
1. The name of the ClusterLogForwarder CR must be instance.
2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
3. Specify a name for the output.
4. Specify the elasticsearch type.
5. Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
6. For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret.
7. For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have tls.crt, tls.key, and ca-bundle.crt keys that point to the respective certificates that they represent. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting a secret that contains a username and password."
8. Optional: Specify a name for the pipeline.
9. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
10. Specify the name of the output to use when forwarding logs with this pipeline.
11. Optional: Specify the default output to send the logs to the internal Elasticsearch instance.
12. Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
13. Optional: String. One or more labels to add to the logs.
14. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
    - A name to describe the pipeline.
    - The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.
    - The outputRefs is the name of the output to use.
    - Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
Example: Setting a secret that contains a username and password
You can use a secret that contains a username and password to authenticate a secure connection to an external Elasticsearch instance.
For example, if you cannot use mutual TLS (mTLS) keys because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password.
Create a Secret YAML file similar to the following example. Use base64-encoded values for the username and password fields. The secret type is opaque by default.
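A minimal sketch, assuming the secret is named openshift-test-secret; replace the placeholder values with your own base64-encoded credentials:
apiVersion: v1
kind: Secret
metadata:
  name: openshift-test-secret
  namespace: openshift-logging
data:
  username: <base64_encoded_username>
  password: <base64_encoded_password>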
Create the secret:
$ oc create -f openshift-test-secret.yaml
Specify the name of the secret in the
ClusterLogForwarder
CR:
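For example, the relevant output section might look like the following sketch; the output name and URL are placeholders:
  outputs:
   - name: elasticsearch
     type: "elasticsearch"
     url: https://elasticsearch.secure.com:9200
     secret:
        name: openshift-test-secret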
Note: In the value of the url field, the prefix can be http or https.
Create the CR object:
$ oc create -f <file-name>.yaml
7.9. Forwarding logs using the Fluentd forward protocol
You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from OpenShift Container Platform.
To configure log forwarding using the forward protocol, you must create a ClusterLogForwarder
custom resource (CR) with one or more outputs to the Fluentd servers, and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection.
Alternately, you can use a config map to forward logs using the forward protocols. However, this method is deprecated in OpenShift Container Platform and will be removed in a future release.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the
ClusterLogForwarder
CR object:
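The following is a representative sketch. The output names, URLs, secret name, and label values are placeholders, and the numbered comments correspond to the callouts below.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                                            # 1
  namespace: openshift-logging                              # 2
spec:
  outputs:
   - name: fluentd-server-secure                            # 3
     type: fluentdForward                                   # 4
     url: 'tls://fluentdserver.security.example.com:24224'  # 5
     secret:                                                # 6
        name: fluentd-secret
   - name: fluentd-server-insecure
     type: fluentdForward
     url: 'tcp://fluentdserver.home.example.com:24224'
  pipelines:
   - name: forward-to-fluentd-secure                        # 7
     inputRefs:                                             # 8
     - application
     - audit
     outputRefs:
     - fluentd-server-secure                                # 9
     - default                                              # 10
     parse: json                                            # 11
     labels:
       clusterId: "C1234"                                   # 12
   - name: forward-to-fluentd-insecure                      # 13
     inputRefs:
     - infrastructure
     outputRefs:
     - fluentd-server-insecure
     labels:
       clusterId: "C1234"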
1. The name of the ClusterLogForwarder CR must be instance.
2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
3. Specify a name for the output.
4. Specify the fluentdForward type.
5. Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
6. If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have tls.crt, tls.key, and ca-bundle.crt keys that point to the respective certificates that they represent. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting a secret that contains a username and password."
7. Optional: Specify a name for the pipeline.
8. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
9. Specify the name of the output to use when forwarding logs with this pipeline.
10. Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
11. Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
12. Optional: String. One or more labels to add to the logs.
13. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
    - A name to describe the pipeline.
    - The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.
    - The outputRefs is the name of the output to use.
    - Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
7.9.1. Enabling nanosecond precision for Logstash to ingest data from fluentd
For Logstash to ingest log data from fluentd, you must enable nanosecond precision in the Logstash configuration file.
Procedure
- In the Logstash configuration file, set nanosecond_precision to true.
Example Logstash configuration file
input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } }
filter { }
output { stdout { codec => rubydebug } }
7.10. Forwarding logs using the syslog protocol
You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OpenShift Container Platform.
To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder
custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.
Alternately, you can use a config map to forward logs using the syslog RFC3164 protocols. However, this method is deprecated in OpenShift Container Platform and will be removed in a future release.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the
ClusterLogForwarder
CR object:
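The following is a representative sketch. The output names, URLs, secret name, syslog parameter values, and label values are placeholders, and the numbered comments correspond to the callouts below.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                                       # 1
  namespace: openshift-logging                         # 2
spec:
  outputs:
   - name: rsyslog-east                                # 3
     type: syslog                                      # 4
     syslog:                                           # 5
       facility: local0
       rfc: RFC3164
       severity: informational
     url: 'tls://rsyslogserver.east.example.com:514'   # 6
     secret:                                           # 7
        name: syslog-secret
   - name: rsyslog-west
     type: syslog
     syslog:
       appName: myapp
       facility: user
       msgID: mymsg
       procID: myproc
       rfc: RFC5424
       severity: debug
     url: 'udp://rsyslogserver.west.example.com:514'
  pipelines:
   - name: syslog-east                                 # 8
     inputRefs:                                        # 9
     - audit
     - application
     outputRefs:
     - rsyslog-east                                    # 10
     - default                                         # 11
     parse: json                                       # 12
     labels:
       secure: "true"                                  # 13
       syslog: "east"
   - name: syslog-west                                 # 14
     inputRefs:
     - infrastructure
     outputRefs:
     - rsyslog-west
     labels:
       syslog: "west"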
1. The name of the ClusterLogForwarder CR must be instance.
2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
3. Specify a name for the output.
4. Specify the syslog type.
5. Optional: Specify the syslog parameters, listed below.
6. Specify the URL and port of the external syslog instance. You can use the udp (insecure), tcp (insecure), or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
7. If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have tls.crt, tls.key, and ca-bundle.crt keys that point to the respective certificates that they represent.
8. Optional: Specify a name for the pipeline.
9. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
10. Specify the name of the output to use when forwarding logs with this pipeline.
11. Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
12. Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
13. Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
14. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
    - A name to describe the pipeline.
    - The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.
    - The outputRefs is the name of the output to use.
    - Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
7.10.1. Adding log source information to message output
You can add namespace_name, pod_name, and container_name elements to the message field of the record by adding the AddLogSource field to your ClusterLogForwarder custom resource (CR).
This configuration is compatible with both RFC3164 and RFC5424.
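For example, a syslog output with the source fields enabled might look like the following sketch; the output name, URL, and other parameter values are placeholders, and the relevant setting is addLogSource: true.
spec:
  outputs:
  - name: syslogout
    type: syslog
    syslog:
      addLogSource: true
      facility: user
      rfc: RFC3164
      severity: debug
    url: 'tls://syslog-receiver.example.com:6514'
  pipelines:
  - name: test-app
    inputRefs:
    - application
    outputRefs:
    - syslogout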
Example syslog message output without AddLogSource
<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56}
Example syslog message output with AddLogSource
<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76}
7.10.2. Syslog parameters
You can configure the following for the syslog
outputs. For more information, see the syslog RFC3164 or RFC5424 RFC.
- facility: The syslog facility. The value can be a decimal integer or a case-insensitive keyword:
  - 0 or kern for kernel messages
  - 1 or user for user-level messages, the default
  - 2 or mail for the mail system
  - 3 or daemon for system daemons
  - 4 or auth for security/authentication messages
  - 5 or syslog for messages generated internally by syslogd
  - 6 or lpr for the line printer subsystem
  - 7 or news for the network news subsystem
  - 8 or uucp for the UUCP subsystem
  - 9 or cron for the clock daemon
  - 10 or authpriv for security authentication messages
  - 11 or ftp for the FTP daemon
  - 12 or ntp for the NTP subsystem
  - 13 or security for the syslog audit log
  - 14 or console for the syslog alert log
  - 15 or solaris-cron for the scheduling daemon
  - 16–23 or local0–local7 for locally used facilities
- Optional: payloadKey: The record field to use as payload for the syslog message.
  Note: Configuring the payloadKey parameter prevents other parameters from being forwarded to the syslog.
- rfc: The RFC to be used for sending logs using syslog. The default is RFC5424.
- severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:
  - 0 or Emergency for messages indicating the system is unusable
  - 1 or Alert for messages indicating action must be taken immediately
  - 2 or Critical for messages indicating critical conditions
  - 3 or Error for messages indicating error conditions
  - 4 or Warning for messages indicating warning conditions
  - 5 or Notice for messages indicating normal but significant conditions
  - 6 or Informational for messages indicating informational messages
  - 7 or Debug for messages indicating debug-level messages, the default
- tag: Tag specifies a record field to use as a tag on the syslog message.
- trimPrefix: Remove the specified prefix from the tag.
7.10.3. Additional RFC5424 syslog parameters
The following parameters apply to RFC5424:
- appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for RFC5424.
- msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for RFC5424.
- procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for RFC5424.
7.11. Forwarding logs to Amazon CloudWatch
You can forward logs to Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). You can forward logs to CloudWatch in addition to, or instead of, the default logging subsystem managed Elasticsearch log store.
To configure log forwarding to CloudWatch, you must create a ClusterLogForwarder
custom resource (CR) with an output for CloudWatch, and a pipeline that uses the output.
Procedure
Create a Secret YAML file that uses the aws_access_key_id and aws_secret_access_key fields to specify your base64-encoded AWS credentials. For example:
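A minimal sketch, assuming the secret is named cw-secret; replace the placeholder values with your own base64-encoded credentials:
apiVersion: v1
kind: Secret
metadata:
  name: cw-secret
  namespace: openshift-logging
data:
  aws_access_key_id: <base64_encoded_access_key_id>
  aws_secret_access_key: <base64_encoded_secret_access_key>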
Create the secret. For example:
$ oc apply -f cw-secret.yaml
Create or edit a YAML file that defines the
ClusterLogForwarder
CR object. In the file, specify the name of the secret. For example:
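The following is a representative sketch; the output name, group prefix, and region are placeholders, and the numbered comments correspond to the callouts below.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                    # 1
  namespace: openshift-logging      # 2
spec:
  outputs:
   - name: cw                       # 3
     type: cloudwatch               # 4
     cloudwatch:
       groupBy: logType             # 5
       groupPrefix: <group_prefix>  # 6
       region: us-east-2            # 7
     secret:
        name: cw-secret             # 8
  pipelines:
    - name: infra-logs              # 9
      inputRefs:                    # 10
        - infrastructure
        - audit
        - application
      outputRefs:
        - cw                        # 11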
1. The name of the ClusterLogForwarder CR must be instance.
2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
3. Specify a name for the output.
4. Specify the cloudwatch type.
5. Optional: Specify how to group the logs:
   - logType creates log groups for each log type.
   - namespaceName creates a log group for each application namespace. It also creates separate log groups for infrastructure and audit logs.
   - namespaceUUID creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs.
6. Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups.
7. Specify the AWS region.
8. Specify the name of the secret that contains your AWS credentials.
9. Optional: Specify a name for the pipeline.
10. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
11. Specify the name of the output to use when forwarding logs with this pipeline.
Create the CR object:
$ oc create -f <file-name>.yaml
Example: Using ClusterLogForwarder with Amazon CloudWatch
Here, you see an example ClusterLogForwarder
custom resource (CR) and the log data that it outputs to Amazon CloudWatch.
Suppose that you are running an OpenShift Container Platform cluster named mycluster
. The following command returns the cluster’s infrastructureName
, which you will use to compose aws
commands later on:
$ oc get Infrastructure/cluster -ojson | jq .status.infrastructureName
"mycluster-7977k"
To generate log data for this example, you run a busybox
pod in a namespace called app
. The busybox
pod writes a message to stdout every three seconds:
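For example, you could start such a pod and watch its output with commands like the following, assuming the app namespace already exists; the message text is arbitrary:
$ oc run busybox --image=busybox -n app -- sh -c 'while true; do echo "My life is my message"; sleep 3; done'
$ oc logs -f busybox -n app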
You can look up the UUID of the app
namespace where the busybox
pod runs:
$ oc get ns/app -ojson | jq .metadata.uid
"794e1e1a-b9f5-4958-a190-e76a9b53d7bf"
In your ClusterLogForwarder
custom resource (CR), you configure the infrastructure
, audit
, and application
log types as inputs to the all-logs
pipeline. You also connect this pipeline to cw
output, which forwards the logs to a CloudWatch instance in the us-east-2
region:
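A sketch of that configuration; the secret name is a placeholder:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
   - name: cw
     type: cloudwatch
     cloudwatch:
       groupBy: logType
       region: us-east-2
     secret:
        name: cw-secret
  pipelines:
    - name: all-logs
      inputRefs:
        - infrastructure
        - audit
        - application
      outputRefs:
        - cw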
Each region in CloudWatch contains three levels of objects:
- log group
- log stream
- log event
With groupBy: logType
in the ClusterLogForwarder CR, the three log types in the inputRefs produce three log groups in Amazon CloudWatch:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.application"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
Each of the log groups contains log streams:
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName
"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log"
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log"
...
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log"
...
Each log stream contains log events. To see a log event from the busybox
Pod, you specify its log stream from the application
log group:
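For example, a command of the following form retrieves the events from that stream, using the stream name returned above:
$ aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log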
Example: Customizing the prefix in log group names
In the log group names, you can replace the default infrastructureName
prefix, mycluster-7977k
, with an arbitrary string like demo-group-prefix
. To make this change, you update the groupPrefix
field in the ClusterLogForwarder CR:
cloudwatch:
    groupBy: logType
    groupPrefix: demo-group-prefix
    region: us-east-2
The value of groupPrefix
replaces the default infrastructureName
prefix:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"demo-group-prefix.application"
"demo-group-prefix.audit"
"demo-group-prefix.infrastructure"
Example: Naming log groups after application namespace names
For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the name of the application namespace.
If you delete an application namespace object and create a new one that has the same name, CloudWatch continues using the same log group as before.
If you consider successive application namespace objects that have the same name as equivalent to each other, use the approach described in this example. Otherwise, if you need to distinguish the resulting log groups from each other, see the following "Naming log groups for application namespace UUIDs" section instead.
To create application log groups whose names are based on the names of the application namespaces, you set the value of the groupBy
field to namespaceName
in the ClusterLogForwarder
CR:
cloudwatch:
    groupBy: namespaceName
    region: us-east-2
Setting groupBy
to namespaceName
affects the application log group only. It does not affect the audit
and infrastructure
log groups.
In Amazon CloudWatch, the namespace name appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.app
log group instead of mycluster-7977k.application
:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.app"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
If the cluster in this example had contained multiple application namespaces, the output would show multiple log groups, one for each namespace.
The groupBy
field affects the application log group only. It does not affect the audit
and infrastructure
log groups.
Example: Naming log groups after application namespace UUIDs
For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the UUID of the application namespace.
If you delete an application namespace object and create a new one, CloudWatch creates a new log group.
If you consider successive application namespace objects with the same name as different from each other, use the approach described in this example. Otherwise, see the preceding "Example: Naming log groups for application namespace names" section instead.
To name log groups after application namespace UUIDs, you set the value of the groupBy
field to namespaceUUID
in the ClusterLogForwarder
CR:
cloudwatch:
    groupBy: namespaceUUID
    region: us-east-2
In Amazon CloudWatch, the namespace UUID appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf
log group instead of mycluster-7977k.application
:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf" // uid of the "app" namespace
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
The groupBy
field affects the application log group only. It does not affect the audit
and infrastructure
log groups.
7.12. Forwarding logs to Loki
You can forward logs to an external Loki logging system in addition to, or instead of, the internal default OpenShift Container Platform Elasticsearch instance.
To configure log forwarding to Loki, you must create a ClusterLogForwarder
custom resource (CR) with an output to Loki, and a pipeline that uses the output. The output to Loki can use the HTTP (insecure) or HTTPS (secure HTTP) connection.
Prerequisites
- You must have a Loki logging system running at the URL you specify with the url field in the CR.
Procedure
Create or edit a YAML file that defines the
ClusterLogForwarder
CR object:
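The following is a representative sketch. The output names, URLs, and secret name are placeholders, and the numbered comments correspond to the callouts below.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                                  # 1
  namespace: openshift-logging                    # 2
spec:
  outputs:
   - name: loki-insecure                          # 3
     type: "loki"                                 # 4
     url: http://loki.insecure.com:3100           # 5
   - name: loki-secure
     type: "loki"
     url: https://loki.secure.com:3100            # 6
     secret:
        name: loki-secret                         # 7
     loki:
       tenantKey: kubernetes.namespace_name       # 11
       labelKeys: [log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host]   # 12
  pipelines:
   - name: application-logs                       # 8
     inputRefs:                                   # 9
     - application
     - audit
     outputRefs:
     - loki-secure                                # 10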
1. The name of the ClusterLogForwarder CR must be instance.
2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
3. Specify a name for the output.
4. Specify the type as "loki".
5. Specify the URL and port of the Loki system as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
6. For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret.
7. For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must have tls.crt, tls.key, and ca-bundle.crt keys that point to the respective certificates that they represent. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting a secret that contains a username and password."
8. Optional: Specify a name for the pipeline.
9. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
10. Specify the name of the output to use when forwarding logs with this pipeline.
11. Optional: Specify a meta-data key field to generate values for the TenantID field in Loki. For example, setting tenantKey: kubernetes.namespace_name uses the names of the Kubernetes namespaces as values for tenant IDs in Loki. To see which other log record fields you can specify, see the "Log Record Fields" link in the following "Additional resources" section.
12. Optional: Specify a list of meta-data field keys to replace the default Loki labels. Loki label names must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*. Illegal characters in meta-data keys are replaced with _ to form the label name. For example, the kubernetes.labels.foo meta-data key becomes the Loki label kubernetes_labels_foo. If you do not set labelKeys, the default value is [log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host]. Keep the set of labels small because Loki limits the size and number of labels allowed. See Configuring Loki, limits_config. You can still query based on any log record field by using query filters.
Note: Because Loki requires log streams to be correctly ordered by timestamp, labelKeys always includes the kubernetes_host label set, even if you do not specify it. This inclusion ensures that each stream originates from a single host, which prevents timestamps from becoming disordered due to clock differences on different hosts.
Create the CR object:
$ oc create -f <file-name>.yaml
7.12.1. Troubleshooting Loki "entry out of order" errors
If your Fluentd forwards a large block of messages to a Loki logging system that exceeds the rate limit, Loki generates "entry out of order" errors. To fix this issue, you update some values in the Loki server configuration file, loki.yaml.
loki.yaml
is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers.
Conditions
- The ClusterLogForwarder custom resource is configured to forward logs to Loki.
- Your system sends a block of messages that is larger than 2 MB to Loki.
- When you enter oc logs -c fluentd, the Fluentd logs in your OpenShift Logging cluster show the following messages:
  429 Too Many Requests Ingestion rate limit exceeded (limit: 8388608 bytes/sec) while attempting to ingest '2140' lines totaling '3285284' bytes 429 Too Many Requests Ingestion rate limit exceeded' or '500 Internal Server Error rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5277702 vs. 4194304)'
- When you open the logs on the Loki server, they display entry out of order messages like these:
  ,\nentry with timestamp 2021-08-18 05:58:55.061936 +0000 UTC ignored, reason: 'entry out of order' for stream: {fluentd_thread=\"flush_thread_0\", log_type=\"audit\"},\nentry with timestamp 2021-08-18 06:01:18.290229 +0000 UTC ignored, reason: 'entry out of order' for stream: {fluentd_thread="flush_thread_0", log_type="audit"}
Procedure
- Update the following fields in the loki.yaml configuration file on the Loki server with the values shown here:
  - grpc_server_max_recv_msg_size: 8388608
  - chunk_target_size: 8388608
  - ingestion_rate_mb: 8
  - ingestion_burst_size_mb: 16
- Apply the changes in loki.yaml to the Loki server.
Example loki.yaml file
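An abbreviated sketch showing only where the settings above typically live; a real loki.yaml contains many more settings:
auth_enabled: false
server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  grpc_server_max_recv_msg_size: 8388608
ingester:
  chunk_target_size: 8388608
limits_config:
  ingestion_rate_mb: 8
  ingestion_burst_size_mb: 16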
7.13. Forwarding application logs from specific projects
You can use the Cluster Log Forwarder to send a copy of the application logs from specific projects to an external log aggregator. You can do this in addition to, or instead of, using the default Elasticsearch log store. You must also configure the external log aggregator to receive log data from OpenShift Container Platform.
To configure forwarding application logs from a project, you must create a ClusterLogForwarder
custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the
ClusterLogForwarder
CR object:
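The following is a representative sketch. The output names, URLs, secret name, project name, and label values are placeholders, and the numbered comments correspond to the callouts below.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                                            # 1
  namespace: openshift-logging                              # 2
spec:
  outputs:
   - name: fluentd-server-secure                            # 3
     type: fluentdForward                                   # 4
     url: 'tls://fluentdserver.security.example.com:24224'  # 5
     secret:                                                # 6
        name: fluentd-secret
   - name: fluentd-server-insecure
     type: fluentdForward
     url: 'tcp://fluentdserver.home.example.com:24224'
  inputs:                                                   # 7
   - name: my-app-logs
     application:
        namespaces:
        - my-project
  pipelines:
   - name: forward-to-fluentd-insecure                      # 8
     inputRefs:
     - my-app-logs                                          # 9
     outputRefs:
     - fluentd-server-insecure                              # 10
     parse: json                                            # 11
     labels:
       project: "my-project"                                # 12
   - name: forward-to-fluentd-secure                        # 13
     inputRefs:
     - application
     - audit
     - infrastructure
     outputRefs:
     - fluentd-server-secure
     - default
     labels:
       clusterId: "C1234"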
1. The name of the ClusterLogForwarder CR must be instance.
2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
3. Specify a name for the output.
4. Specify the output type: elasticsearch, fluentdForward, syslog, or kafka.
5. Specify the URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
6. If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and have tls.crt, tls.key, and ca-bundle.crt keys that each point to the certificates they represent.
7. Configuration for an input to filter application logs from the specified projects.
8. Configuration for a pipeline to use the input to send project application logs to an external Fluentd instance.
9. The my-app-logs input.
10. The name of the output to use.
11. Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
12. Optional: String. One or more labels to add to the logs.
13. Configuration for a pipeline to send logs to other log aggregators:
    - Optional: Specify a name for the pipeline.
    - Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    - Specify the name of the output to use when forwarding logs with this pipeline.
    - Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
    - Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
7.14. Forwarding application logs from specific pods
As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector.
Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector.
To specify the pod labels, you use one or more matchLabels
key-value pairs. If you specify multiple key-value pairs, the pods must match all of them to be selected.
Procedure
Create or edit a YAML file that defines the
ClusterLogForwarder
CR object. In the file, specify the pod labels by using simple equality-based selectors under inputs[].name.application.selector.matchLabels, as shown in the following example.
Example ClusterLogForwarder CR YAML file
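The following is a representative sketch; the input name, label keys and values, and namespaces are placeholders, and the numbered comments correspond to the callouts below.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                  # 1
  namespace: openshift-logging    # 2
spec:
  pipelines:
    - inputRefs: [ myAppLogData ] # 3
      outputRefs: [ default ]     # 4
      parse: json                 # 5
  inputs:
    - name: myAppLogData          # 6
      application:
        selector:
          matchLabels:            # 7
            environment: production
            app: nginx
        namespaces:               # 8
        - app1
        - app2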
1. The name of the ClusterLogForwarder CR must be instance.
2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
3. Specify one or more comma-separated values from inputs[].name.
4. Specify one or more comma-separated values from outputs[].
5. Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
6. Define a unique inputs[].name for each application that has a unique set of pod labels.
7. Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
8. Optional: Specify one or more namespaces.
9. Specify one or more outputs to forward your log data to. The optional default output shown here sends log data to the internal Elasticsearch instance.
-
Optional: To restrict the gathering of log data to specific namespaces, use
inputs[].name.application.namespaces
, as shown in the preceding example. Optional: You can send log data from additional applications that have different pod labels to the same pipeline.
-
For each unique combination of pod labels, create an additional
inputs[].name
section similar to the one shown. -
Update the
selectors
to match the pod labels of this application. Add the new
inputs[].name
value toinputRefs
. For example:- inputRefs: [ myAppLogData, myOtherAppLogData ]
Create the CR object:
$ oc create -f <file-name>.yaml
7.15. Troubleshooting log forwarding
When you create a ClusterLogForwarder
custom resource (CR), if the Red Hat OpenShift Logging Operator does not redeploy the Fluentd pods automatically, you can delete the Fluentd pods to force them to redeploy.
Prerequisites
- You have created a ClusterLogForwarder custom resource (CR) object.
Procedure
Delete the Fluentd pods to force them to redeploy.
$ oc delete pod --selector logging-infra=collector