Chapter 7. Forwarding logs to external third-party logging systems
By default, OpenShift Logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the `ClusterLogging` custom resource.
To send logs to other log aggregators, you use the OpenShift Container Platform Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. In addition, you can send different types of logs to various systems so that various individuals can access each type. You can also enable Transport Layer Security (TLS) support to send logs securely, as required by your organization.
To send audit logs to the default internal Elasticsearch log store, use the Cluster Log Forwarder as described in Forward audit logs to the log store.
7.1. About forwarding logs to third-party systems
Forwarding cluster logs to external third-party systems requires a combination of outputs and pipelines specified in a `ClusterLogForwarder` custom resource (CR).
An output is the destination for log data that you define, or where you want the logs sent. An output can be one of the following types:
- `elasticsearch`. An external Elasticsearch instance. The `elasticsearch` output can use a TLS connection.
- `fluentdForward`. An external log aggregation solution that supports Fluentd. This option uses the Fluentd forward protocols. The `fluentdForward` output can use a TCP or TLS connection and supports shared-key authentication by providing a `shared_key` field in a secret. Shared-key authentication can be used with or without TLS.
- `syslog`. An external log aggregation solution that supports the syslog RFC3164 or RFC5424 protocols. The `syslog` output can use a UDP, TCP, or TLS connection.
- `cloudwatch`. Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS).
- `loki`. Loki, a horizontally scalable, highly available, multi-tenant log aggregation system.
- `kafka`. A Kafka broker. The `kafka` output can use a TCP or TLS connection.
- `default`. The internal OpenShift Container Platform Elasticsearch instance. You are not required to configure the default output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the Red Hat OpenShift Logging Operator.
If the output URL scheme requires TLS (HTTPS, TLS, or UDPS), then TLS server-side authentication is enabled. To also enable client authentication, the output must name a secret in the `openshift-logging` project. The secret must have keys of `tls.crt`, `tls.key`, and `ca-bundle.crt` that point to the respective certificates that they represent.
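As a sketch, a client-authentication secret carrying these keys might look like the following. The secret name and the base64 placeholders are illustrative, not values from this document:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Illustrative name; any name works as long as the output's
  # "secret.name" field references it.
  name: es-client-secret
  namespace: openshift-logging
data:
  # Each value is the base64-encoded content of the corresponding file.
  tls.crt: <base64-encoded client certificate>
  tls.key: <base64-encoded private key>
  ca-bundle.crt: <base64-encoded CA bundle>
```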
A pipeline defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:
- `application`. Container logs generated by user applications running in the cluster, except infrastructure container applications.
- `infrastructure`. Container logs from pods that run in the `openshift*`, `kube*`, or `default` projects and journal logs sourced from the node file system.
- `audit`. Logs generated by the node audit system (`auditd`), the Kubernetes API server, the OpenShift API server, and the OVN network.
You can add labels to outbound log messages by using `key:value` pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers, or label the logs by type. Labels that are added to objects are also forwarded with the log message.
- An input forwards the application logs associated with a specific project to a pipeline.
In the pipeline, you define which log types to forward by using an `inputRef` parameter and where to forward the logs by using an `outputRef` parameter.
Note the following:
- If a `ClusterLogForwarder` CR object exists, logs are not forwarded to the default Elasticsearch instance, unless there is a pipeline with the `default` output.
- By default, OpenShift Logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, do not configure the Log Forwarding API.
- If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the `application` and `audit` types, but do not specify a pipeline for the `infrastructure` type, `infrastructure` logs are dropped.
- You can use multiple types of outputs in the `ClusterLogForwarder` custom resource (CR) to send logs to servers that support different protocols.
- The internal OpenShift Container Platform Elasticsearch instance does not provide secure storage for audit logs. Ensure that the system to which you forward audit logs is compliant with your organizational and governmental regulations, and is properly secured. OpenShift Logging does not comply with those regulations.
- You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration.
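As noted above, any log type without a pipeline is dropped. A minimal sketch of a pipeline that covers all three log types might look like this (the pipeline name is illustrative; `default` forwards to the internal log store):

```yaml
spec:
  pipelines:
  # A single pipeline naming all three log types ensures none are dropped.
  - name: all-logs
    inputRefs:
    - application
    - infrastructure
    - audit
    outputRefs:
    - default
```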
The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the `my-app-logs` project to the internal Elasticsearch instance.
Sample log forwarding outputs and pipelines
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance
namespace: openshift-logging
spec:
outputs:
- name: elasticsearch-secure
type: "elasticsearch"
url: https://elasticsearch.secure.com:9200
secret:
name: elasticsearch
- name: elasticsearch-insecure
type: "elasticsearch"
url: http://elasticsearch.insecure.com:9200
- name: kafka-app
type: "kafka"
url: tls://kafka.secure.com:9093/app-topic
inputs:
- name: my-app-logs
application:
namespaces:
- my-project
pipelines:
- name: audit-logs
inputRefs:
- audit
outputRefs:
- elasticsearch-secure
- default
parse: json
labels:
secure: "true"
datacenter: "east"
- name: infrastructure-logs
inputRefs:
- infrastructure
outputRefs:
- elasticsearch-insecure
labels:
datacenter: "west"
- name: my-app
inputRefs:
- my-app-logs
outputRefs:
- default
- inputRefs:
- application
outputRefs:
- kafka-app
labels:
datacenter: "south"
1. The name of the `ClusterLogForwarder` CR must be `instance`.
2. The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
3. Configuration for a secure Elasticsearch output using a secret with a secure URL:
   - A name to describe the output.
   - The type of output: `elasticsearch`.
   - The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
   - The secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project.
4. Configuration for an insecure Elasticsearch output:
   - A name to describe the output.
   - The type of output: `elasticsearch`.
   - The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
5. Configuration for a Kafka output using client-authenticated TLS communication over a secure URL:
   - A name to describe the output.
   - The type of output: `kafka`.
   - Specify the URL and port of the Kafka broker as a valid absolute URL, including the prefix.
6. Configuration for an input to filter application logs from the `my-project` namespace.
7. Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance:
   - A name to describe the pipeline.
   - The `inputRefs` is the log type, in this example `audit`.
   - The `outputRefs` is the name of the output to use, in this example `elasticsearch-secure` to forward to the secure Elasticsearch instance and `default` to forward to the internal Elasticsearch instance.
   - Optional: Labels to add to the logs.
8. Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
9. Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
10. Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
11. Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance:
    - A name to describe the pipeline.
    - The `inputRefs` is a specific input: `my-app-logs`.
    - The `outputRefs` is `default`.
    - Optional: String. One or more labels to add to the logs.
12. Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
    - The `inputRefs` is the log type, in this example `application`.
    - The `outputRefs` is the name of the output to use.
    - Optional: String. One or more labels to add to the logs.
Fluentd log handling when the external log aggregator is unavailable
If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. OpenShift Container Platform rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods.
Supported Authorization Keys
Common key types are provided here. Some output types support additional specialized keys, which are documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. OpenShift Logging does not attempt to verify a mismatch between authorization combinations.
- Transport Layer Security (TLS)

  Using a TLS URL (`https://...` or `tls://...`) without a secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a secret and setting the following optional fields:
  - `tls.crt`: (string) File name containing a client certificate. Enables mutual authentication. Requires `tls.key`.
  - `tls.key`: (string) File name containing the private key to unlock the client certificate. Requires `tls.crt`.
  - `passphrase`: (string) Passphrase to decode an encoded TLS private key. Requires `tls.key`.
  - `ca-bundle.crt`: (string) File name of a customer CA for server authentication.
- Username and Password
  - `username`: (string) Authentication user name. Requires `password`.
  - `password`: (string) Authentication password. Requires `username`.
- Simple Authentication Security Layer (SASL)
  - `sasl.enable`: (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other `sasl.` keys are set.
  - `sasl.mechanisms`: (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used.
  - `sasl.allow-insecure`: (boolean) Allow mechanisms that send clear-text passwords. Defaults to false.
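A secret enabling SASL for, say, a Kafka output might be sketched as follows. The secret name and the encoded values are illustrative assumptions, not values from this document:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kafka-sasl-secret   # illustrative name
  namespace: openshift-logging
data:
  # base64-encoded values; "dXNlcg==" is "user", "cGFzcw==" is "pass",
  # "dHJ1ZQ==" is "true" -- placeholders only.
  username: dXNlcg==
  password: cGFzcw==
  sasl.enable: dHJ1ZQ==
```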
7.1.1. Creating a Secret
In the directory that contains your certificate and key files, you can create a secret by using the following command:
$ oc create secret generic -n openshift-logging <my-secret> \
  --from-file=tls.key=<your_key_file> \
  --from-file=tls.crt=<your_crt_file> \
  --from-file=ca-bundle.crt=<your_bundle_file> \
  --from-literal=username=<your_username> \
  --from-literal=password=<your_password>
Generic or opaque secrets are recommended for best results.
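Once created, the secret is referenced by name from an output. A sketch, with an illustrative output name and URL:

```yaml
spec:
  outputs:
  - name: fluentd-server-secure     # illustrative output name
    type: fluentdForward
    url: 'tls://fluentdserver.example.com:24224'
    secret:
      name: <my-secret>             # the secret created above
```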
7.2. Supported log data output types in OpenShift Logging 5.1

Red Hat OpenShift Logging 5.1 provides the following output types and protocols for sending log data to target log collectors.

Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
| Output types | Protocols | Tested with |
|---|---|---|
| elasticsearch | elasticsearch | Elasticsearch 6.8.1 Elasticsearch 6.8.4 Elasticsearch 7.12.2 |
| fluentdForward | fluentd forward v1 | fluentd 1.7.4 logstash 7.10.1 |
| kafka | kafka 0.11 | kafka 2.4.1 kafka 2.7.0 |
| syslog | RFC-3164, RFC-5424 | rsyslog-8.39.0 |
Previously, the syslog output supported only RFC-3164. The current syslog output adds support for RFC-5424.
7.3. Supported log data output types in OpenShift Logging 5.2

Red Hat OpenShift Logging 5.2 provides the following output types and protocols for sending log data to target log collectors.

Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
| Output types | Protocols | Tested with |
|---|---|---|
| Amazon CloudWatch | REST over HTTPS | The current version of Amazon CloudWatch |
| elasticsearch | elasticsearch | Elasticsearch 6.8.1 Elasticsearch 6.8.4 Elasticsearch 7.12.2 |
| fluentdForward | fluentd forward v1 | fluentd 1.7.4 logstash 7.10.1 |
| Loki | REST over HTTP and HTTPS | Loki 2.3.0 deployed on OCP and Grafana labs |
| kafka | kafka 0.11 | kafka 2.4.1 kafka 2.7.0 |
| syslog | RFC-3164, RFC-5424 | rsyslog-8.39.0 |
7.4. Supported log data output types in OpenShift Logging 5.3

Red Hat OpenShift Logging 5.3 provides the following output types and protocols for sending log data to target log collectors.

Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
| Output types | Protocols | Tested with |
|---|---|---|
| Amazon CloudWatch | REST over HTTPS | The current version of Amazon CloudWatch |
| elasticsearch | elasticsearch | Elasticsearch 7.10.1 |
| fluentdForward | fluentd forward v1 | fluentd 1.7.4 logstash 7.10.1 |
| Loki | REST over HTTP and HTTPS | Loki 2.2.1 deployed on OCP |
| kafka | kafka 0.11 | kafka 2.7.0 |
| syslog | RFC-3164, RFC-5424 | rsyslog-8.39.0 |
7.5. Supported log data output types in OpenShift Logging 5.4

Red Hat OpenShift Logging 5.4 provides the following output types and protocols for sending log data to target log collectors.

Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
| Output types | Protocols | Tested with |
|---|---|---|
| Amazon CloudWatch | REST over HTTPS | The current version of Amazon CloudWatch |
| elasticsearch | elasticsearch | Elasticsearch 7.10.1 |
| fluentdForward | fluentd forward v1 | fluentd 1.14.5 logstash 7.10.1 |
| Loki | REST over HTTP and HTTPS | Loki 2.2.1 deployed on OCP |
| kafka | kafka 0.11 | kafka 2.7.0 |
| syslog | RFC-3164, RFC-5424 | rsyslog-8.39.0 |
7.6. Supported log data output types in OpenShift Logging 5.5

Red Hat OpenShift Logging 5.5 provides the following output types and protocols for sending log data to target log collectors.

Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
| Output types | Protocols | Tested with |
|---|---|---|
| Amazon CloudWatch | REST over HTTPS | The current version of Amazon CloudWatch |
| elasticsearch | elasticsearch | Elasticsearch 7.10.1 |
| fluentdForward | fluentd forward v1 | fluentd 1.14.6 logstash 7.10.1 |
| Loki | REST over HTTP and HTTPS | Loki 2.5.0 deployed on OCP |
| kafka | kafka 0.11 | kafka 2.7.0 |
| syslog | RFC-3164, RFC-5424 | rsyslog-8.39.0 |
7.7. Supported log data output types in OpenShift Logging 5.6

Red Hat OpenShift Logging 5.6 provides the following output types and protocols for sending log data to target log collectors.

Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
| Output types | Protocols | Tested with |
|---|---|---|
| Amazon CloudWatch | REST over HTTPS | The current version of Amazon CloudWatch |
| elasticsearch | elasticsearch | Elasticsearch 6.8.23 Elasticsearch 7.10.1 Elasticsearch 8.6.1 |
| fluentdForward | fluentd forward v1 | fluentd 1.14.6 logstash 7.10.1 |
| Loki | REST over HTTP and HTTPS | Loki 2.5.0 deployed on OCP |
| kafka | kafka 0.11 | kafka 2.7.0 |
| syslog | RFC-3164, RFC-5424 | rsyslog-8.39.0 |
Fluentd does not support Elasticsearch 8 as of OpenShift Logging 5.6.2. Vector does not support the fluentd, logstash, or rsyslog outputs before 5.7.0.
7.8. Forwarding logs to an external Elasticsearch instance
You can optionally forward logs to an external Elasticsearch instance in addition to, or instead of, the internal OpenShift Container Platform Elasticsearch instance. You are responsible for configuring the external log aggregator to receive log data from OpenShift Container Platform.
To configure log forwarding to an external Elasticsearch instance, you must create a `ClusterLogForwarder` custom resource (CR) with an output to that instance, and a pipeline that uses the output.
To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance, and a pipeline that uses the `default` output to forward logs to the internal instance. You do not need to create a `default` output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the Red Hat OpenShift Logging Operator.
If you want to forward logs to only the internal OpenShift Container Platform Elasticsearch instance, you do not need to create a `ClusterLogForwarder` CR.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the `ClusterLogForwarder` CR object:

apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: elasticsearch-insecure
    type: "elasticsearch"
    url: http://elasticsearch.insecure.com:9200
  - name: elasticsearch-secure
    type: "elasticsearch"
    url: https://elasticsearch.secure.com:9200
    secret:
      name: es-secret
  pipelines:
  - name: application-logs
    inputRefs:
    - application
    - audit
    outputRefs:
    - elasticsearch-secure
    - default
    parse: json
    labels:
      myLabel: "myValue"
  - name: infrastructure-audit-logs
    inputRefs:
    - infrastructure
    outputRefs:
    - elasticsearch-insecure
    labels:
      logs: "audit-infra"
1. The name of the `ClusterLogForwarder` CR must be `instance`.
2. The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
3. Specify a name for the output.
4. Specify the `elasticsearch` type.
5. Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the `http` (insecure) or `https` (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
6. For a secure connection, you can specify an `https` or `http` URL that you authenticate by specifying a `secret`.
7. For an `https` prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of `tls.crt`, `tls.key`, and `ca-bundle.crt` that point to the respective certificates that they represent. Otherwise, for `http` and `https` prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting a secret that contains a username and password."
8. Optional: Specify a name for the pipeline.
9. Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
10. Specify the name of the output to use when forwarding logs with this pipeline.
11. Optional: Specify the `default` output to send the logs to the internal Elasticsearch instance.
12. Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
13. Optional: String. One or more labels to add to the logs.
14. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
    - A name to describe the pipeline.
    - The `inputRefs` is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
    - The `outputRefs` is the name of the output to use.
    - Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
Example: Setting a secret that contains a username and password
You can use a secret that contains a username and password to authenticate a secure connection to an external Elasticsearch instance.
For example, if you cannot use mutual TLS (mTLS) keys because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password.
Create a `Secret` YAML file similar to the following example. Use base64-encoded values for the `username` and `password` fields. The secret type is opaque by default.

apiVersion: v1
kind: Secret
metadata:
  name: openshift-test-secret
data:
  username: dGVzdHVzZXJuYW1lCg==
  password: dGVzdHBhc3N3b3JkCg==

Create the secret:
$ oc create -f openshift-test-secret.yaml -n openshift-logging

Specify the name of the secret in the `ClusterLogForwarder` CR:

kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: elasticsearch
    type: "elasticsearch"
    url: https://elasticsearch.secure.com:9200
    secret:
      name: openshift-test-secret

Note: In the value of the `url` field, the prefix can be `http` or `https`.

Create the CR object:
$ oc create -f <file-name>.yaml
7.9. Forwarding logs using the Fluentd forward protocol
You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from OpenShift Container Platform.
To configure log forwarding by using the forward protocol, you must create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the Fluentd servers, and pipelines that use those outputs.
Alternately, you can use a config map to forward logs using the forward protocols. However, this method is deprecated in OpenShift Container Platform and will be removed in a future release.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the `ClusterLogForwarder` CR object:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: fluentd-server-secure
    type: fluentdForward
    url: 'tls://fluentdserver.security.example.com:24224'
    secret:
      name: fluentd-secret
  - name: fluentd-server-insecure
    type: fluentdForward
    url: 'tcp://fluentdserver.home.example.com:24224'
  pipelines:
  - name: forward-to-fluentd-secure
    inputRefs:
    - application
    - audit
    outputRefs:
    - fluentd-server-secure
    - default
    parse: json
    labels:
      clusterId: "C1234"
  - name: forward-to-fluentd-insecure
    inputRefs:
    - infrastructure
    outputRefs:
    - fluentd-server-insecure
    labels:
      clusterId: "C1234"
1. The name of the `ClusterLogForwarder` CR must be `instance`.
2. The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
3. Specify a name for the output.
4. Specify the `fluentdForward` type.
5. Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the `tcp` (insecure) or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
6. If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of `tls.crt`, `tls.key`, and `ca-bundle.crt` that point to the respective certificates that they represent. Otherwise, for `http` and `https` prefixes, you can specify a secret that contains a username and password. For more information, see "Example: Setting a secret that contains a username and password."
7. Optional: Specify a name for the pipeline.
8. Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
9. Specify the name of the output to use when forwarding logs with this pipeline.
10. Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
11. Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
12. Optional: String. One or more labels to add to the logs.
13. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
    - A name to describe the pipeline.
    - The `inputRefs` is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
    - The `outputRefs` is the name of the output to use.
    - Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
7.9.1. Enabling nanosecond precision for Logstash to ingest data from fluentd
For Logstash to ingest log data from fluentd, you must enable nanosecond precision in the Logstash configuration file.
Procedure
- In the Logstash configuration file, set `nanosecond_precision` to `true`.
Example Logstash configuration file
input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } }
filter { }
output { stdout { codec => rubydebug } }
7.10. Forwarding logs using the syslog protocol
You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OpenShift Container Platform.
To configure log forwarding by using the syslog protocol, you must create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs.
Alternately, you can use a config map to forward logs using the syslog RFC3164 protocols. However, this method is deprecated in OpenShift Container Platform and will be removed in a future release.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the `ClusterLogForwarder` CR object:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: rsyslog-east
    type: syslog
    syslog:
      facility: local0
      rfc: RFC3164
      payloadKey: message
      severity: informational
    url: 'tls://rsyslogserver.east.example.com:514'
    secret:
      name: syslog-secret
  - name: rsyslog-west
    type: syslog
    syslog:
      appName: myapp
      facility: user
      msgID: mymsg
      procID: myproc
      rfc: RFC5424
      severity: debug
    url: 'udp://rsyslogserver.west.example.com:514'
  pipelines:
  - name: syslog-east
    inputRefs:
    - audit
    - application
    outputRefs:
    - rsyslog-east
    - default
    parse: json
    labels:
      secure: "true"
      syslog: "east"
  - name: syslog-west
    inputRefs:
    - infrastructure
    outputRefs:
    - rsyslog-west
    - default
    labels:
      syslog: "west"
1. The name of the `ClusterLogForwarder` CR must be `instance`.
2. The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
3. Specify a name for the output.
4. Specify the `syslog` type.
5. Optional: Specify the syslog parameters, listed below.
6. Specify the URL and port of the external syslog instance. You can use the `udp` (insecure), `tcp` (insecure), or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
7. If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of `tls.crt`, `tls.key`, and `ca-bundle.crt` that point to the respective certificates that they represent.
8. Optional: Specify a name for the pipeline.
9. Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
10. Specify the name of the output to use when forwarding logs with this pipeline.
11. Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
12. Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
13. Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
14. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
    - A name to describe the pipeline.
    - The `inputRefs` is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
    - The `outputRefs` is the name of the output to use.
    - Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
7.10.1. Adding log source information to message output
You can add `namespace_name`, `pod_name`, and `container_name` elements to the `message` field of the record by adding the `AddLogSource` field to your `ClusterLogForwarder` custom resource (CR).
spec:
outputs:
- name: syslogout
syslog:
addLogSource: true
facility: user
payloadKey: message
rfc: RFC3164
severity: debug
tag: mytag
type: syslog
url: tls://syslog-receiver.openshift-logging.svc:24224
pipelines:
- inputRefs:
- application
name: test-app
outputRefs:
- syslogout
This configuration is compatible with both RFC3164 and RFC5424.
Example syslog message output without AddLogSource
<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56}
Example syslog message output with AddLogSource
<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76}
7.10.2. Syslog parameters
You can configure the following parameters for the `syslog` outputs:
facility: The syslog facility. The value can be a decimal integer or a case-insensitive keyword:
- kern or 0 for kernel messages
- user or 1 for user-level messages, the default
- mail or 2 for the mail system
- daemon or 3 for system daemons
- auth or 4 for security/authentication messages
- syslog or 5 for messages generated internally by syslogd
- lpr or 6 for the line printer subsystem
- news or 7 for the network news subsystem
- uucp or 8 for the UUCP subsystem
- cron or 9 for the clock daemon
- authpriv or 10 for security authentication messages
- ftp or 11 for the FTP daemon
- ntp or 12 for the NTP subsystem
- security or 13 for the syslog audit log
- console or 14 for the syslog alert log
- solaris-cron or 15 for the scheduling daemon
- local0 through local7 or 16 through 23 for locally used facilities
- Optional: payloadKey: The record field to use as payload for the syslog message.
  Note: Configuring the payloadKey parameter prevents other parameters from being forwarded to the syslog.
- rfc: The RFC to be used for sending logs using syslog. The default is RFC5424.
severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:
- Emergency or 0 for messages indicating the system is unusable
- Alert or 1 for messages indicating action must be taken immediately
- Critical or 2 for messages indicating critical conditions
- Error or 3 for messages indicating error conditions
- Warning or 4 for messages indicating warning conditions
- Notice or 5 for messages indicating normal but significant conditions
- Informational or 6 for informational messages
- Debug or 7 for debug-level messages, the default
- tag: Tag specifies a record field to use as a tag on the syslog message.
- trimPrefix: Remove the specified prefix from the tag.
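The facility and severity settings above determine the PRI value at the start of each syslog line (PRI = facility × 8 + severity, per the syslog RFCs). A minimal sketch, with the keyword tables taken from the lists above:

```python
# Sketch: derive the syslog PRI value (the number in "<15>...") from the
# facility and severity keywords documented above. PRI = facility * 8 + severity.
FACILITIES = {"kern": 0, "user": 1, "mail": 2, "daemon": 3, "auth": 4,
              "syslog": 5, "lpr": 6, "news": 7, "uucp": 8, "cron": 9,
              "authpriv": 10, "ftp": 11, "ntp": 12, "security": 13,
              "console": 14, "solaris-cron": 15}
SEVERITIES = {"emergency": 0, "alert": 1, "critical": 2, "error": 3,
              "warning": 4, "notice": 5, "informational": 6, "debug": 7}

def pri(facility: str, severity: str) -> int:
    return FACILITIES[facility.lower()] * 8 + SEVERITIES[severity.lower()]

# The example output "<15>1 ..." corresponds to facility: user, severity: debug.
print(pri("user", "debug"))  # 15
```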
7.10.3. Additional RFC5424 syslog parameters
The following parameters apply to RFC5424:
- appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for RFC5424.
- msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for RFC5424.
- procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for RFC5424.
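For orientation, the RFC5424 header layout that these parameters fill in can be sketched as follows. This is an illustrative sketch of the standard header field order, not the forwarder's code; unset fields appear as "-", as in the example outputs in the previous section:

```python
# Sketch of the RFC5424 line layout:
# <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG
def rfc5424_line(pri, timestamp, hostname, app_name="-", proc_id="-",
                 msg_id="-", structured_data="-", msg=""):
    line = (f"<{pri}>1 {timestamp} {hostname} {app_name} "
            f"{proc_id} {msg_id} {structured_data} {msg}")
    return line.rstrip()

line = rfc5424_line(15, "2020-11-15T17:06:14+00:00", "fluentd-9hkb4", "mytag")
print(line)  # <15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - -
```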
7.11. Forwarding logs to Amazon CloudWatch
You can forward logs to Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). You can forward logs to CloudWatch in addition to, or instead of, the default OpenShift Logging-managed Elasticsearch log store.
To configure log forwarding to CloudWatch, you must create a ClusterLogForwarder custom resource (CR) with an output for CloudWatch, and a pipeline that uses the output.
Procedure
Create a Secret YAML file that uses the aws_access_key_id and aws_secret_access_key fields to specify your base64-encoded AWS credentials. For example:

apiVersion: v1
kind: Secret
metadata:
  name: cw-secret
  namespace: openshift-logging
data:
  aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK
  aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=

Create the secret. For example:
$ oc apply -f cw-secret.yaml

Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the name of the secret. For example:

apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: instance # 1
  namespace: openshift-logging # 2
spec:
  outputs:
    - name: cw # 3
      type: cloudwatch # 4
      cloudwatch:
        groupBy: logType # 5
        groupPrefix: <group prefix> # 6
        region: us-east-2 # 7
      secret:
        name: cw-secret # 8
  pipelines:
    - name: infra-logs # 9
      inputRefs: # 10
        - infrastructure
        - audit
        - application
      outputRefs:
        - cw # 11
1. The name of the ClusterLogForwarder CR must be instance.
2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
3. Specify a name for the output.
4. Specify the cloudwatch type.
5. Optional: Specify how to group the logs:
   - logType creates log groups for each log type.
   - namespaceName creates a log group for each application namespace. It also creates separate log groups for infrastructure and audit logs.
   - namespaceUUID creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs.
6. Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups.
7. Specify the AWS region.
8. Specify the name of the secret that contains your AWS credentials.
9. Optional: Specify a name for the pipeline.
10. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
11. Specify the name of the output to use when forwarding logs with this pipeline.
Create the CR object:
$ oc create -f <file-name>.yaml
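The data values in the Secret above are plain base64 encodings of the credential strings. A quick way to generate them, sketched with Python's standard library (the access key shown is the well-known AWS documentation example key, with a trailing newline, as in the Secret above):

```python
# Sketch: produce the base64-encoded value for the aws_access_key_id field
# of the cw-secret Secret shown earlier.
import base64

access_key_id = base64.b64encode(b"AKIAIOSFODNN7EXAMPLE\n").decode()
print(access_key_id)  # QUtJQUlPU0ZPRE5ON0VYQU1QTEUK
```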
Example: Using ClusterLogForwarder with Amazon CloudWatch
Here, you see an example ClusterLogForwarder custom resource (CR) and the log data that it outputs to Amazon CloudWatch.
Suppose that you are running an OpenShift Container Platform cluster named mycluster. The following command returns the cluster's infrastructureName, which you will use to compose aws commands later on:
$ oc get Infrastructure/cluster -ojson | jq .status.infrastructureName
"mycluster-7977k"
To generate log data for this example, you run a busybox pod in a namespace called app. The busybox pod writes a message to stdout every three seconds:
$ oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done'
$ oc logs -f busybox
My life is my message
My life is my message
My life is my message
...
You can look up the UUID of the app namespace where the busybox pod runs:
$ oc get ns/app -ojson | jq .metadata.uid
"794e1e1a-b9f5-4958-a190-e76a9b53d7bf"
In your ClusterLogForwarder custom resource (CR), you configure the infrastructure, audit, and application log types as inputs to the all-logs pipeline. You also connect this pipeline to the cw output, which forwards the logs to a CloudWatch instance in the us-east-2 region:
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance
namespace: openshift-logging
spec:
outputs:
- name: cw
type: cloudwatch
cloudwatch:
groupBy: logType
region: us-east-2
secret:
name: cw-secret
pipelines:
- name: all-logs
inputRefs:
- infrastructure
- audit
- application
outputRefs:
- cw
Each region in CloudWatch contains three levels of objects:
- log group
- log stream
- log event
With groupBy: logType in the ClusterLogForwarder CR, the three log types in inputRefs produce three log groups in Amazon CloudWatch:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.application"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
Each of the log groups contains log streams:
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName
"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log"
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log"
...
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log"
...
Each log stream contains log events. To see a log event from the busybox pod, you specify its log stream from the application log group:
$ aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log
{
"events": [
{
"timestamp": 1629422704178,
"message": "{\"docker\":{\"container_id\":\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\"},\"kubernetes\":{\"container_name\":\"busybox\",\"namespace_name\":\"app\",\"pod_name\":\"busybox\",\"container_image\":\"docker.io/library/busybox:latest\",\"container_image_id\":\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\",\"pod_id\":\"870be234-90a3-4258-b73f-4f4d6e2777c7\",\"host\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"labels\":{\"run\":\"busybox\"},\"master_url\":\"https://kubernetes.default.svc\",\"namespace_id\":\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\",\"namespace_labels\":{\"kubernetes_io/metadata_name\":\"app\"}},\"message\":\"My life is my message\",\"level\":\"unknown\",\"hostname\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"pipeline_metadata\":{\"collector\":{\"ipaddr4\":\"10.0.216.3\",\"inputname\":\"fluent-plugin-systemd\",\"name\":\"fluentd\",\"received_at\":\"2021-08-20T01:25:08.085760+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-20T01:25:04.178986+00:00\",\"viaq_index_name\":\"app-write\",\"viaq_msg_id\":\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\",\"log_type\":\"application\",\"time\":\"2021-08-20T01:25:04+00:00\"}",
"ingestionTime": 1629422744016
},
...
Example: Customizing the prefix in log group names
In the log group names, you can replace the default infrastructureName prefix, mycluster-7977k, with an arbitrary string like demo-group-prefix. To make this change, you update the groupPrefix field in the ClusterLogForwarder CR:
cloudwatch:
groupBy: logType
groupPrefix: demo-group-prefix
region: us-east-2
The value of groupPrefix replaces the default infrastructureName prefix:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"demo-group-prefix.application"
"demo-group-prefix.audit"
"demo-group-prefix.infrastructure"
Example: Naming log groups after application namespace names
For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the name of the application namespace.
If you delete an application namespace object and create a new one that has the same name, CloudWatch continues using the same log group as before.
If you consider successive application namespace objects that have the same name as equivalent to each other, use the approach described in this example. Otherwise, if you need to distinguish the resulting log groups from each other, see the following "Naming log groups for application namespace UUIDs" section instead.
To create application log groups whose names are based on the names of the application namespaces, you set the value of the groupBy field to namespaceName in the ClusterLogForwarder CR:
cloudwatch:
groupBy: namespaceName
region: us-east-2
Setting groupBy to namespaceName affects the application log group only. It does not affect the audit and infrastructure log groups.
In Amazon CloudWatch, the namespace name appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.app log group instead of mycluster-7977k.application:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.app"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
If the cluster in this example had contained multiple application namespaces, the output would show multiple log groups, one for each namespace.
The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups.
Example: Naming log groups after application namespace UUIDs
For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the UUID of the application namespace.
If you delete an application namespace object and create a new one, CloudWatch creates a new log group.
If you consider successive application namespace objects with the same name as different from each other, use the approach described in this example. Otherwise, see the preceding "Example: Naming log groups for application namespace names" section instead.
To name log groups after application namespace UUIDs, you set the value of the groupBy field to namespaceUUID in the ClusterLogForwarder CR:
cloudwatch:
groupBy: namespaceUUID
region: us-east-2
In Amazon CloudWatch, the namespace UUID appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf log group instead of mycluster-7977k.application:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf" // uid of the "app" namespace
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups.
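The naming behavior shown in the three preceding examples can be summarized in a short sketch. This is illustrative only (the real naming is done by the log forwarder), using the sample cluster and namespace from these examples:

```python
# Sketch of how the CloudWatch log group names in the examples above are
# composed from groupBy, groupPrefix, and the cluster infrastructureName.
def log_group_name(log_type, group_by="logType", group_prefix=None,
                   infrastructure_name="mycluster-7977k",
                   namespace_name=None, namespace_uuid=None):
    prefix = group_prefix if group_prefix is not None else infrastructure_name
    # groupBy affects the application log group only; audit and
    # infrastructure log groups are always named after the log type.
    if log_type == "application":
        if group_by == "namespaceName":
            return f"{prefix}.{namespace_name}"
        if group_by == "namespaceUUID":
            return f"{prefix}.{namespace_uuid}"
    return f"{prefix}.{log_type}"

print(log_group_name("application"))  # mycluster-7977k.application
print(log_group_name("application", group_prefix="demo-group-prefix"))  # demo-group-prefix.application
print(log_group_name("application", "namespaceName", namespace_name="app"))  # mycluster-7977k.app
```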
7.12. Forwarding logs to Loki
You can forward logs to an external Loki logging system in addition to, or instead of, the internal default OpenShift Container Platform Elasticsearch instance.
To configure log forwarding to Loki, you must create a ClusterLogForwarder custom resource (CR) with an output to Loki, and a pipeline that uses the output.
Prerequisites
- You must have a Loki logging system running at the URL you specify with the url field in the CR.
Procedure
Create or edit a YAML file that defines the ClusterLogForwarder CR object:

apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: instance # 1
  namespace: openshift-logging # 2
spec:
  outputs:
    - name: loki-insecure # 3
      type: "loki" # 4
      url: http://loki.insecure.com:3100 # 5
    - name: loki-secure
      type: "loki"
      url: https://loki.secure.com:3100 # 6
      secret:
        name: loki-secret # 7
  pipelines:
    - name: application-logs # 8
      inputRefs: # 9
        - application
        - audit
      outputRefs:
        - loki-secure # 10
      loki:
        tenantKey: kubernetes.namespace_name # 11
        labelKeys: kubernetes.labels.foo # 12
1. The name of the ClusterLogForwarder CR must be instance.
2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
3. Specify a name for the output.
4. Specify the type as "loki".
5. Specify the URL and port of the Loki system as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
6. For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret.
7. For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project, and must have keys of: tls.crt, tls.key, and ca-bundle.crt that point to the respective certificates that they represent. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting a secret that contains a username and password."
8. Optional: Specify a name for the pipeline.
9. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
10. Specify the name of the output to use when forwarding logs with this pipeline.
11. Optional: Specify a meta-data key field to generate values for the TenantID field in Loki. For example, setting tenantKey: kubernetes.namespace_name uses the names of the Kubernetes namespaces as values for tenant IDs in Loki. To see which other log record fields you can specify, see the "Log Record Fields" link in the following "Additional resources" section.
12. Optional: Specify a list of meta-data field keys to replace the default Loki labels. Loki label names must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*. Illegal characters in meta-data keys are replaced with _ to form the label name. For example, the kubernetes.labels.foo meta-data key becomes Loki label kubernetes_labels_foo. If you do not set labelKeys, the default value is: [log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host]. Keep the set of labels small because Loki limits the size and number of labels allowed. See Configuring Loki, limits_config. You can still query based on any log record field using query filters.
Note: Because Loki requires log streams to be correctly ordered by timestamp, labelKeys always includes the kubernetes_host label set, even if you do not specify it. This inclusion ensures that each stream originates from a single host, which prevents timestamps from becoming disordered due to clock differences on different hosts.

Create the CR object:
$ oc create -f <file-name>.yaml
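The label-name sanitization described in callout 12 can be sketched in a few lines. This is illustrative, not the forwarder's code; it only mirrors the stated rule that illegal characters are replaced with an underscore:

```python
# Sketch: replace characters that are not legal in a Loki label name with "_",
# as described for the labelKeys field above.
import re

def loki_label(meta_key: str) -> str:
    label = re.sub(r"[^a-zA-Z0-9_:]", "_", meta_key)
    # The result must match the Loki label regex [a-zA-Z_:][a-zA-Z0-9_:]*
    assert re.fullmatch(r"[a-zA-Z_:][a-zA-Z0-9_:]*", label)
    return label

print(loki_label("kubernetes.labels.foo"))  # kubernetes_labels_foo
```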
7.12.1. Troubleshooting Loki "entry out of order" errors
If your Fluentd forwards a large block of messages to a Loki logging system that exceeds the rate limit, Loki generates "entry out of order" errors. To fix this issue, you update some values in the Loki server configuration file, loki.yaml. Note: loki.yaml is not available on Grafana-hosted Loki servers.
Conditions
- The ClusterLogForwarder custom resource is configured to forward logs to Loki.
"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]}When you enter
, the Fluentd logs in your OpenShift Logging cluster show the following messages:oc logs -c fluentd429 Too Many Requests Ingestion rate limit exceeded (limit: 8388608 bytes/sec) while attempting to ingest '2140' lines totaling '3285284' bytes 429 Too Many Requests Ingestion rate limit exceeded' or '500 Internal Server Error rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5277702 vs. 4194304)'When you open the logs on the Loki server, they display
messages like these:entry out of order,\nentry with timestamp 2021-08-18 05:58:55.061936 +0000 UTC ignored, reason: 'entry out of order' for stream: {fluentd_thread=\"flush_thread_0\", log_type=\"audit\"},\nentry with timestamp 2021-08-18 06:01:18.290229 +0000 UTC ignored, reason: 'entry out of order' for stream: {fluentd_thread="flush_thread_0", log_type="audit"}
Procedure
Update the following fields in the loki.yaml configuration file on the Loki server with the values shown here:

- grpc_server_max_recv_msg_size: 8388608
- chunk_target_size: 8388608
- ingestion_rate_mb: 8
- ingestion_burst_size_mb: 16
Apply the changes in loki.yaml to the Loki server.
Example loki.yaml file
auth_enabled: false
server:
http_listen_port: 3100
grpc_listen_port: 9096
grpc_server_max_recv_msg_size: 8388608
ingester:
wal:
enabled: true
dir: /tmp/wal
lifecycler:
address: 127.0.0.1
ring:
kvstore:
store: inmemory
replication_factor: 1
final_sleep: 0s
chunk_idle_period: 1h # Any chunk not receiving new logs in this time will be flushed
chunk_target_size: 8388608
max_chunk_age: 1h # All chunks will be flushed when they hit this age, default is 1h
chunk_retain_period: 30s # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
max_transfer_retries: 0 # Chunk transfers disabled
schema_config:
configs:
- from: 2020-10-24
store: boltdb-shipper
object_store: filesystem
schema: v11
index:
prefix: index_
period: 24h
storage_config:
boltdb_shipper:
active_index_directory: /tmp/loki/boltdb-shipper-active
cache_location: /tmp/loki/boltdb-shipper-cache
cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space
shared_store: filesystem
filesystem:
directory: /tmp/loki/chunks
compactor:
working_directory: /tmp/loki/boltdb-shipper-compactor
shared_store: filesystem
limits_config:
reject_old_samples: true
reject_old_samples_max_age: 12h
ingestion_rate_mb: 8
ingestion_burst_size_mb: 16
chunk_store_config:
max_look_back_period: 0s
table_manager:
retention_deletes_enabled: false
retention_period: 0s
ruler:
storage:
type: local
local:
directory: /tmp/loki/rules
rule_path: /tmp/loki/rules-temp
alertmanager_url: http://localhost:9093
ring:
kvstore:
store: inmemory
enable_api: true
7.13. Forwarding application logs from specific projects
You can use the Cluster Log Forwarder to send a copy of the application logs from specific projects to an external log aggregator. You can do this in addition to, or instead of, using the default Elasticsearch log store. You must also configure the external log aggregator to receive log data from OpenShift Container Platform.
To configure forwarding application logs from a project, you must create a ClusterLogForwarder custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the ClusterLogForwarder CR object:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance # 1
  namespace: openshift-logging # 2
spec:
  outputs:
    - name: fluentd-server-secure # 3
      type: fluentdForward # 4
      url: 'tls://fluentdserver.security.example.com:24224' # 5
      secret: # 6
        name: fluentd-secret
    - name: fluentd-server-insecure
      type: fluentdForward
      url: 'tcp://fluentdserver.home.example.com:24224'
  inputs: # 7
    - name: my-app-logs
      application:
        namespaces:
          - my-project
  pipelines:
    - name: forward-to-fluentd-insecure # 8
      inputRefs: # 9
        - my-app-logs
      outputRefs: # 10
        - fluentd-server-insecure
      parse: json # 11
      labels:
        project: "my-project" # 12
    - name: forward-to-fluentd-secure # 13
      inputRefs:
        - application
        - audit
        - infrastructure
      outputRefs:
        - fluentd-server-secure
        - default
      labels:
        clusterId: "C1234"
1. The name of the ClusterLogForwarder CR must be instance.
2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
3. Specify a name for the output.
4. Specify the output type: elasticsearch, fluentdForward, syslog, or kafka.
5. Specify the URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
6. If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and have tls.crt, tls.key, and ca-bundle.crt keys that each point to the certificates they represent.
7. Configuration for an input to filter application logs from the specified projects.
8. Configuration for a pipeline to use the input to send project application logs to an external Fluentd instance.
9. The my-app-logs input.
10. The name of the output to use.
11. Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
12. Optional: String. One or more labels to add to the logs.
13. Configuration for a pipeline to send logs to other log aggregators.
    - Optional: Specify a name for the pipeline.
    - Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    - Specify the name of the output to use when forwarding logs with this pipeline.
    - Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
    - Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
7.14. Forwarding application logs from specific pods
As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector.
Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector.
To specify the pod labels, you use one or more matchLabels key-value pairs. If you specify multiple matchLabels key-value pairs, the pods must match all of them to be selected.
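The equality-based selector semantics can be sketched as follows (illustrative only; the actual matching is done by the log collector):

```python
# Sketch: a pod is selected only if its labels include every key-value pair
# in matchLabels.
def matches(pod_labels: dict, match_labels: dict) -> bool:
    return all(pod_labels.get(k) == v for k, v in match_labels.items())

selector = {"environment": "production", "app": "nginx"}
print(matches({"environment": "production", "app": "nginx", "tier": "web"},
              selector))  # True: all selector pairs are present
print(matches({"environment": "production"}, selector))  # False: "app" missing
```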
Procedure
Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the pod labels using simple equality-based selectors under inputs[].name.application.selector.matchLabels, as shown in the following example.

Example ClusterLogForwarder CR YAML file

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance # 1
  namespace: openshift-logging # 2
spec:
  pipelines:
    - inputRefs: [ myAppLogData ] # 3
      outputRefs: [ default ] # 4
      parse: json # 5
  inputs: # 6
    - name: myAppLogData
      application:
        selector:
          matchLabels: # 7
            environment: production
            app: nginx
        namespaces: # 8
          - app1
          - app2
  outputs: # 9
    - default
...

1. The name of the ClusterLogForwarder CR must be instance.
2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
3. Specify one or more comma-separated values from inputs[].name.
4. Specify one or more comma-separated values from outputs[].
5. Optional: Specify whether to forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x.
6. Define a unique inputs[].name for each application that has a unique set of pod labels.
7. Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and a value, not just a key. To be selected, the pods must match all the key-value pairs.
8. Optional: Specify one or more namespaces.
9. Specify one or more outputs to forward your log data to. The optional default output shown here sends log data to the internal Elasticsearch instance.

Optional: To restrict the gathering of log data to specific namespaces, use inputs[].name.application.namespaces, as shown in the preceding example.

Optional: You can send log data from additional applications that have different pod labels to the same pipeline.
- For each unique combination of pod labels, create an additional inputs[].name section similar to the one shown.
- Update the selectors to match the pod labels of this application.
- Add the new inputs[].name value to inputRefs. For example:
  - inputRefs: [ myAppLogData, myOtherAppLogData ]
Create the CR object:
$ oc create -f <file-name>.yaml
7.15. Collecting OVN network policy audit logs
You can collect the OVN network policy audit logs from the /var/log/ovn/acl-audit-log.log file on cluster nodes and forward them to logging servers.
Prerequisites
- You are using OpenShift Container Platform version 4.8 or later.
- You are using Cluster Logging 5.2 or later.
- You have already set up a ClusterLogForwarder custom resource (CR) object.
Often, logging servers that store audit data must meet organizational and governmental requirements for compliance and security.
Procedure
- Create or edit a YAML file that defines the ClusterLogForwarder CR object as described in other topics on forwarding logs to third-party systems.

In the YAML file, add the audit log type to the inputRefs element in a pipeline. For example:

pipelines:
  - name: audit-logs
    inputRefs:
      - audit
    outputRefs:
      - secure-logging-server

Recreate the updated CR object:
$ oc create -f <file-name>.yaml
Verification
Verify that audit log entries from the nodes that you are monitoring are present among the log data gathered by the logging server.
Find an original audit log entry in /var/log/ovn/acl-audit-log.log on one of the nodes. For example, an original log entry in /var/log/ovn/acl-audit-log.log might look like this:
2021-07-06T08:26:58.687Z|00004|acl_log(ovn_pinctrl0)|INFO|name="verify-audit-
logging_deny-all", verdict=drop, severity=alert:
icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:12,dl_dst=0a:58:0a:81:02:14,nw_src=10
.129.2.18,nw_dst=10.129.2.20,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0
And the corresponding OVN audit log entry you find on the logging server might look like this:
{
"@timestamp" : "2021-07-06T08:26:58..687000+00:00",
"hostname":"ip.abc.iternal",
"level":"info",
"message" : "2021-07-06T08:26:58.687Z|00004|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:12,dl_dst=0a:58:0a:81:02:14,nw_src=10.129.2.18,nw_dst=10.129.2.20,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0"
}
Where:
- @timestamp is the timestamp of the log entry.
- hostname is the node from which the log originated.
- level is the log level of the entry.
- message is the original audit log message.
On an Elasticsearch server, look for log entries whose indices begin with audit-00000.
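A spot check of a forwarded record can be scripted: the message field should carry the original acl-audit line, from which fields such as the verdict and severity can be extracted. A sketch, using a record abbreviated from the example above:

```python
# Sketch: extract the verdict and severity from the "message" field of a
# forwarded OVN audit record (abbreviated from the example entry above).
import re

record = {
    "message": ('2021-07-06T08:26:58.687Z|00004|acl_log(ovn_pinctrl0)|INFO|'
                'name="verify-audit-logging_deny-all", verdict=drop, '
                'severity=alert: icmp,...')
}
verdict = re.search(r"verdict=(\w+)", record["message"]).group(1)
severity = re.search(r"severity=(\w+)", record["message"]).group(1)
print(verdict, severity)  # drop alert
```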
Troubleshooting
- Verify that your OpenShift Container Platform cluster meets all the prerequisites.
- Verify that you have completed the procedure.
- Verify that the nodes generating OVN logs have audit logging enabled and have /var/log/ovn/acl-audit-log.log files.
7.16. Troubleshooting log forwarding
When you create a ClusterLogForwarder custom resource (CR), if the Red Hat OpenShift Logging Operator does not redeploy the Fluentd pods automatically, you can delete the Fluentd pods to force them to redeploy.
Prerequisites
- You have created a ClusterLogForwarder custom resource (CR) object.
Procedure
Delete the Fluentd pods to force them to redeploy.
$ oc delete pod --selector logging-infra=collector