Chapter 9. Log collection and forwarding
9.1. About log collection and forwarding
The Red Hat OpenShift Logging Operator deploys a collector based on the `ClusterLogForwarder` resource specification. There are two collector options supported by this Operator: the legacy Fluentd collector and the Vector collector.
Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.
9.1.1. Log collection
The log collector is a daemon set that deploys pods to each OpenShift Container Platform node to collect container and node logs.
By default, the log collector uses the following sources:
- `journald` for system and infrastructure logs generated by the operating system, the container runtime, and OpenShift Container Platform.
- `/var/log/containers/*.log` for all container logs.

If you configure the log collector to collect audit logs, it collects them from `/var/log/audit/audit.log`.
The log collector collects the logs from these sources and forwards them internally or externally depending on your logging configuration.
9.1.1.1. Log collector types
Vector is a log collector offered as an alternative to Fluentd for the logging.
You can configure which log collector type your cluster uses by modifying the `collection` spec of the `ClusterLogging` custom resource (CR).
Example ClusterLogging CR that configures Vector as the collector
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      type: vector
      vector: {}
# ...
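If you edit this manifest as a file rather than through the web console, you can apply it in the usual way; the file name below is a placeholder:

$ oc apply -f <filename>.yaml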
9.1.1.2. Log collection limitations
The container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. This information is not sufficient to uniquely identify the source of the logs. If a pod with a given name and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations, might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source. This limitation means that log collection and normalization are considered best effort.
The available container runtimes provide minimal information to identify the source of log messages and do not guarantee unique individual log messages or that these messages can be traced to their source.
9.1.1.3. Log collector features by type
| Feature | Fluentd | Vector |
|---|---|---|
| App container logs | ✓ | ✓ |
| App-specific routing | ✓ | ✓ |
| App-specific routing by namespace | ✓ | ✓ |
| Infra container logs | ✓ | ✓ |
| Infra journal logs | ✓ | ✓ |
| Kube API audit logs | ✓ | ✓ |
| OpenShift API audit logs | ✓ | ✓ |
| Open Virtual Network (OVN) audit logs | ✓ | ✓ |
| Feature | Fluentd | Vector |
|---|---|---|
| Elasticsearch certificates | ✓ | ✓ |
| Elasticsearch username / password | ✓ | ✓ |
| Cloudwatch keys | ✓ | ✓ |
| Cloudwatch STS | ✓ | ✓ |
| Kafka certificates | ✓ | ✓ |
| Kafka username / password | ✓ | ✓ |
| Kafka SASL | ✓ | ✓ |
| Loki bearer token | ✓ | ✓ |
| Feature | Fluentd | Vector |
|---|---|---|
| Viaq data model - app | ✓ | ✓ |
| Viaq data model - infra | ✓ | ✓ |
| Viaq data model - infra(journal) | ✓ | ✓ |
| Viaq data model - Linux audit | ✓ | ✓ |
| Viaq data model - kube-apiserver audit | ✓ | ✓ |
| Viaq data model - OpenShift API audit | ✓ | ✓ |
| Viaq data model - OVN | ✓ | ✓ |
| Loglevel Normalization | ✓ | ✓ |
| JSON parsing | ✓ | ✓ |
| Structured Index | ✓ | ✓ |
| Multiline error detection | ✓ | ✓ |
| Multicontainer / split indices | ✓ | ✓ |
| Flatten labels | ✓ | ✓ |
| CLF static labels | ✓ | ✓ |
| Feature | Fluentd | Vector |
|---|---|---|
| Fluentd readlinelimit | ✓ | |
| Fluentd buffer | ✓ | |
| - chunklimitsize | ✓ | |
| - totallimitsize | ✓ | |
| - overflowaction | ✓ | |
| - flushthreadcount | ✓ | |
| - flushmode | ✓ | |
| - flushinterval | ✓ | |
| - retrywait | ✓ | |
| - retrytype | ✓ | |
| - retrymaxinterval | ✓ | |
| - retrytimeout | ✓ |
| Feature | Fluentd | Vector |
|---|---|---|
| Metrics | ✓ | ✓ |
| Dashboard | ✓ | ✓ |
| Alerts | ✓ | ✓ |
| Feature | Fluentd | Vector |
|---|---|---|
| Global proxy support | ✓ | ✓ |
| x86 support | ✓ | ✓ |
| ARM support | ✓ | ✓ |
| PowerPC support | ✓ | ✓ |
| IBM Z support | ✓ | ✓ |
| IPv6 support | ✓ | ✓ |
| Log event buffering | ✓ | |
| Disconnected Cluster | ✓ | ✓ |
9.1.1.4. Collector outputs
The following collector outputs are supported:
| Feature | Fluentd | Vector |
|---|---|---|
| Elasticsearch v6-v8 | ✓ | ✓ |
| Fluent forward | ✓ | |
| Syslog RFC3164 | ✓ | ✓ (Logging 5.7+) |
| Syslog RFC5424 | ✓ | ✓ (Logging 5.7+) |
| Kafka | ✓ | ✓ |
| Cloudwatch | ✓ | ✓ |
| Cloudwatch STS | ✓ | ✓ |
| Loki | ✓ | ✓ |
| HTTP | ✓ | ✓ (Logging 5.7+) |
| Google Cloud Logging | ✓ | ✓ |
| Splunk | | ✓ (Logging 5.6+) |
9.1.2. Log forwarding
Administrators can create `ClusterLogForwarder` resources that specify which logs are collected and where they are forwarded to. `ClusterLogForwarder` resources can be used to forward container, infrastructure, and audit logs to specific endpoints within or outside of a cluster.
Administrators can also authorize RBAC permissions that define which service accounts and users can access and forward which types of logs.
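For illustration only, and assuming the collector cluster roles provided by the Red Hat OpenShift Logging Operator (for example, `collect-application-logs`), such a permission could be granted to a service account with a command like the following; the role and service account names are placeholders for your environment:

$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector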
9.2. Log output types
Outputs define the destination where logs are sent from a log forwarder. You can configure multiple types of outputs in the `ClusterLogForwarder` custom resource (CR) to send logs to servers that support different protocols.
9.2.1. Supported log forwarding outputs
Outputs can be any of the following types:
| Output type | Protocol | Tested with | Logging versions | Supported collector type |
|---|---|---|---|---|
| Elasticsearch v6 | HTTP 1.1 | 6.8.1, 6.8.23 | 5.6+ | Fluentd, Vector |
| Elasticsearch v7 | HTTP 1.1 | 7.12.2, 7.17.7, 7.10.1 | 5.6+ | Fluentd, Vector |
| Elasticsearch v8 | HTTP 1.1 | 8.4.3, 8.6.1 | 5.6+ | Fluentd [1], Vector |
| Fluent Forward | Fluentd forward v1 | Fluentd 1.14.6, Logstash 7.10.1, Fluentd 1.14.5 | 5.4+ | Fluentd |
| Google Cloud Logging | REST over HTTPS | Latest | 5.7+ | Vector |
| HTTP | HTTP 1.1 | Fluentd 1.14.6, Vector 0.21 | 5.7+ | Fluentd, Vector |
| Kafka | Kafka 0.11 | Kafka 2.4.1, 2.7.0, 3.3.1 | 5.4+ | Fluentd, Vector |
| Loki | REST over HTTP and HTTPS | 2.3.0, 2.5.0, 2.7, 2.2.1 | 5.4+ | Fluentd, Vector |
| Splunk | HEC | 8.2.9, 9.0.0 | 5.7+ | Vector |
| Syslog | RFC3164, RFC5424 | Rsyslog 8.37.0-9.el7, rsyslog-8.39.0 | 5.4+ | Fluentd, Vector [2] |
| Amazon CloudWatch | REST over HTTPS | Latest | 5.4+ | Fluentd, Vector |
- Fluentd does not support Elasticsearch 8 in the logging version 5.6.2.
- Vector supports Syslog in the logging version 5.7 and higher.
9.2.2. Output type descriptions
- default: The on-cluster, Red Hat managed log store. You are not required to configure the default output.
  Note: If you configure a `default` output, you receive an error message, because the `default` output name is reserved for referencing the on-cluster, Red Hat managed log store.
- loki: Loki, a horizontally scalable, highly available, multi-tenant log aggregation system.
- kafka: A Kafka broker. The `kafka` output can use a TCP or TLS connection.
- elasticsearch: An external Elasticsearch instance. The `elasticsearch` output can use a TLS connection.
- fluentdForward: An external log aggregation solution that supports Fluentd. This option uses the Fluentd `forward` protocols. The `fluentdForward` output can use a TCP or TLS connection and supports shared-key authentication by providing a `shared_key` field in a secret. Shared-key authentication can be used with or without TLS.
  Important: The `fluentdForward` output is only supported if you are using the Fluentd collector. It is not supported if you are using the Vector collector. If you are using the Vector collector, you can forward logs to Fluentd by using the `http` output.
- syslog: An external log aggregation solution that supports the syslog RFC3164 or RFC5424 protocols. The `syslog` output can use a UDP, TCP, or TLS connection.
- cloudwatch: Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS).
9.3. Enabling JSON log forwarding
You can configure the Log Forwarding API to parse JSON strings into a structured object.
9.3.1. Parsing JSON logs
You can use a `ClusterLogForwarder` custom resource (CR) to parse JSON logs into a structured object and forward them to a supported output.
To illustrate how this works, suppose that you have the following structured JSON log entry:
Example structured JSON log entry
{"level":"info","name":"fred","home":"bedrock"}
To enable parsing JSON logs, you add `parse: json` to a pipeline in the `ClusterLogForwarder` CR.
Example snippet showing parse: json
pipelines:
- inputRefs: [ application ]
  outputRefs: myFluentd
  parse: json
When you enable parsing JSON logs by using `parse: json`, the CR copies the JSON-structured log entry into a `structured` field, as shown in the following example.
Example structured output containing the structured JSON log entry
{"structured": { "level": "info", "name": "fred", "home": "bedrock" },
"more fields..."}
If the log entry does not contain valid structured JSON, the `structured` field is absent.
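For contrast, here is a sketch of a record whose message is not valid JSON: the original text stays in the ordinary `message` field and no `structured` field is added (field layout abbreviated, as in the example above).

{"message": "starting fluentd worker pid=21631 ppid=21618 worker=0",
 "more fields..."}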
9.3.2. Configuring JSON log data for Elasticsearch
If your JSON logs follow more than one schema, storing them in a single index might cause type conflicts and cardinality problems. To avoid that, you must configure the `ClusterLogForwarder` custom resource (CR) to group each schema into a single output definition.
If you forward JSON logs to the default Elasticsearch instance managed by OpenShift Logging, it generates new indices based on your configuration. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas.
Structure types
You can use the following structure types in the `ClusterLogForwarder` CR:

- `structuredTypeKey` is the name of a message field. The value of that field is used to construct the index name. You can use the following as the key:
  - `kubernetes.labels.<key>` is the Kubernetes pod label whose value is used to construct the index name.
  - `openshift.labels.<key>` is the `pipeline.label.<key>` element in the `ClusterLogForwarder` CR whose value is used to construct the index name.
  - `kubernetes.container_name` uses the container name to construct the index name.
- `structuredTypeName`: If the `structuredTypeKey` field is not set or its key is not present, the `structuredTypeName` value is used as the structured type. When you use both the `structuredTypeKey` field and the `structuredTypeName` field together, the `structuredTypeName` value provides a fallback index name if the key in the `structuredTypeKey` field is missing from the JSON log data.
Although you can set the value of `structuredTypeKey` to any log record field, the fields shown in the preceding list of structure types are the most useful.
A structuredTypeKey: kubernetes.labels.<key> example
Suppose the following:
- Your cluster is running application pods that produce JSON logs in two different formats, "apache" and "google".
- The user labels these application pods with `logFormat=apache` and `logFormat=google`.
- You use the following snippet in your `ClusterLogForwarder` CR YAML file.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
# ...
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.logFormat
      structuredTypeName: nologformat
  pipelines:
  - inputRefs:
    - application
    outputRefs:
    - default
    parse: json
In that case, the following structured log record goes to the `app-apache-write` index:
{
"structured":{"name":"fred","home":"bedrock"},
"kubernetes":{"labels":{"logFormat": "apache", ...}}
}
And the following structured log record goes to the `app-google-write` index:
{
"structured":{"name":"wilma","home":"bedrock"},
"kubernetes":{"labels":{"logFormat": "google", ...}}
}
A structuredTypeKey: openshift.labels.<key> example
Suppose that you use the following snippet in your `ClusterLogForwarder` CR YAML file.
outputDefaults:
  elasticsearch:
    structuredTypeKey: openshift.labels.myLabel
    structuredTypeName: nologformat
pipelines:
- name: application-logs
  inputRefs:
  - application
  - audit
  outputRefs:
  - elasticsearch-secure
  - default
  parse: json
  labels:
    myLabel: myValue
In that case, the following structured log record goes to the `app-myValue-write` index:
{
"structured":{"name":"fred","home":"bedrock"},
"openshift":{"labels":{"myLabel": "myValue", ...}}
}
Additional considerations
- The Elasticsearch index for structured records is formed by prepending "app-" to the structured type and appending "-write".
- Unstructured records are not sent to the structured index. They are indexed as usual in the application, infrastructure, or audit indices.
- If there is no non-empty structured type, forward an unstructured record with no `structured` field.
It is important not to overload Elasticsearch with too many indices. Only use distinct structured types for distinct log formats, not for each application or namespace. For example, most Apache applications use the same JSON log format and structured type, such as `LogApache`.
9.3.3. Forwarding JSON logs to the Elasticsearch log store
For an Elasticsearch log store, if your JSON log entries follow different schemas, configure the `ClusterLogForwarder` custom resource (CR) to group each JSON schema into a single output definition. This way, the logs for each schema are forwarded to a separate index.
Because forwarding different schemas to the same index can cause type conflicts and cardinality problems, you must perform this configuration before you forward data to the Elasticsearch store.
To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas.
Procedure
Add the following snippet to your `ClusterLogForwarder` CR YAML file.

outputDefaults:
  elasticsearch:
    structuredTypeKey: <log record field>
    structuredTypeName: <name>
pipelines:
- inputRefs:
  - application
  outputRefs: default
  parse: json

- Use the `structuredTypeKey` field to specify one of the log record fields.
- Use the `structuredTypeName` field to specify a name.
  Important: To parse JSON logs, you must set both the `structuredTypeKey` and `structuredTypeName` fields.
- For `inputRefs`, specify which log types to forward by using that pipeline, such as `application`, `infrastructure`, or `audit`.
- Add the `parse: json` element to pipelines.

Create the CR object:

$ oc create -f <filename>.yaml

The Red Hat OpenShift Logging Operator redeploys the collector pods. However, if they do not redeploy, delete the collector pods to force them to redeploy.
$ oc delete pod --selector logging-infra=collector
9.3.4. Forwarding JSON logs from containers in the same pod to separate indices
You can forward structured logs from different containers within the same pod to different indices. To use this feature, you must configure the pipeline with multi-container support and annotate the pods. Logs are written to indices with a prefix of `app-`.
JSON formatting of logs varies by application. Because creating too many indices impacts performance, limit your use of this feature to creating indices for logs that have incompatible JSON formats. Use queries to separate logs from different namespaces, or applications with compatible JSON formats.
Prerequisites
- Logging for Red Hat OpenShift: 5.5
Procedure
Create or edit a YAML file that defines the `ClusterLogForwarder` CR object:

apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.logFormat
      structuredTypeName: nologformat
      enableStructuredContainerLogs: true
  pipelines:
  - inputRefs:
    - application
    name: application-logs
    outputRefs:
    - default
    parse: json

Create or edit a YAML file that defines the `Pod` CR object:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    containerType.logging.openshift.io/heavy: heavy
    containerType.logging.openshift.io/low: low
spec:
  containers:
  - name: heavy
    image: heavyimage
  - name: low
    image: lowimage
This configuration might significantly increase the number of shards on the cluster.
Additional Resources
9.4. Configuring log forwarding
By default, the logging sends container and infrastructure logs to the default internal log store defined in the `ClusterLogging` custom resource.
To send audit logs to the internal Elasticsearch log store, use the Cluster Log Forwarder as described in Forwarding audit logs to the log store.
9.4.1. About forwarding logs to third-party systems
To send logs to specific endpoints inside and outside your OpenShift Container Platform cluster, you specify a combination of outputs and pipelines in a `ClusterLogForwarder` custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an output.
- pipeline
  Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:
  - `application`. Container logs generated by user applications running in the cluster, except infrastructure container applications.
  - `infrastructure`. Container logs from pods that run in the `openshift*`, `kube*`, or `default` projects and journal logs sourced from the node file system.
  - `audit`. Audit logs generated by the node audit system, `auditd`, the Kubernetes API server, the OpenShift API server, and the OVN network.
  You can add labels to outbound log messages by using `key:value` pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message.
- input
  Forwards the application logs associated with a specific project to a pipeline.
  In the pipeline, you define which log types to forward using an `inputRef` parameter and where to forward the logs to using an `outputRef` parameter.
- Secret
  A `key:value map` that contains confidential data such as user credentials.
Note the following:
- If a `ClusterLogForwarder` CR object exists, logs are not forwarded to the default Elasticsearch instance, unless there is a pipeline with the `default` output.
- By default, the logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, do not configure the Log Forwarding API.
- If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the `application` and `audit` types, but do not specify a pipeline for the `infrastructure` type, `infrastructure` logs are dropped.
- You can use multiple types of outputs in the `ClusterLogForwarder` custom resource (CR) to send logs to servers that support different protocols.
- The internal OpenShift Container Platform Elasticsearch instance does not provide secure storage for audit logs. We recommend you ensure that the system to which you forward audit logs is compliant with your organizational and governmental regulations and is properly secured. The logging does not comply with those regulations.
The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the `my-project` namespace (the `my-app-logs` input) to the internal Elasticsearch instance.
Sample log forwarding outputs and pipelines
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance
namespace: openshift-logging
spec:
outputs:
- name: elasticsearch-secure
type: "elasticsearch"
url: https://elasticsearch.secure.com:9200
secret:
name: elasticsearch
- name: elasticsearch-insecure
type: "elasticsearch"
url: http://elasticsearch.insecure.com:9200
- name: kafka-app
type: "kafka"
url: tls://kafka.secure.com:9093/app-topic
inputs:
- name: my-app-logs
application:
namespaces:
- my-project
pipelines:
- name: audit-logs
inputRefs:
- audit
outputRefs:
- elasticsearch-secure
- default
labels:
secure: "true"
datacenter: "east"
- name: infrastructure-logs
inputRefs:
- infrastructure
outputRefs:
- elasticsearch-insecure
labels:
datacenter: "west"
- name: my-app
inputRefs:
- my-app-logs
outputRefs:
- default
- inputRefs:
- application
outputRefs:
- kafka-app
labels:
datacenter: "south"
1. The name of the `ClusterLogForwarder` CR must be `instance`.
2. The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
3. Configuration for a secure Elasticsearch output using a secret with a secure URL:
   - A name to describe the output.
   - The type of output: `elasticsearch`.
   - The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
   - The secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project.
4. Configuration for an insecure Elasticsearch output:
   - A name to describe the output.
   - The type of output: `elasticsearch`.
   - The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
5. Configuration for a Kafka output using client-authenticated TLS communication over a secure URL:
   - A name to describe the output.
   - The type of output: `kafka`.
   - Specify the URL and port of the Kafka broker as a valid absolute URL, including the prefix.
6. Configuration for an input to filter application logs from the `my-project` namespace.
7. Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance:
   - A name to describe the pipeline.
   - The `inputRefs` is the log type, in this example `audit`.
   - The `outputRefs` is the name of the output to use, in this example `elasticsearch-secure` to forward to the secure Elasticsearch instance and `default` to forward to the internal Elasticsearch instance.
   - Optional: Labels to add to the logs.
8. Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
9. Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
10. Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance:
    - A name to describe the pipeline.
    - The `inputRefs` is a specific input: `my-app-logs`.
    - The `outputRefs` is `default`.
    - Optional: String. One or more labels to add to the logs.
11. Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
    - The `inputRefs` is the log type, in this example `application`.
    - The `outputRefs` is the name of the output to use.
    - Optional: String. One or more labels to add to the logs.
Fluentd log handling when the external log aggregator is unavailable
If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. OpenShift Container Platform rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods.
Supported Authorization Keys
Common key types are provided here. Some output types support additional specialized keys, documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. OpenShift Logging does not attempt to verify mismatches between authorization combinations.
- Transport Layer Security (TLS)
  Using a TLS URL (`https://...` or `ssl://...`) without a secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a secret and setting the following optional fields:
  - `passphrase`: (string) Passphrase to decode an encoded TLS private key. Requires `tls.key`.
  - `ca-bundle.crt`: (string) File name of a customer CA for server authentication.
- Username and Password
  - `username`: (string) Authentication user name. Requires `password`.
  - `password`: (string) Authentication password. Requires `username`.
- Simple Authentication Security Layer (SASL)
  - `sasl.enable`: (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other `sasl.` keys are set.
  - `sasl.mechanisms`: (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used.
  - `sasl.allow-insecure`: (boolean) Allow mechanisms that send clear-text passwords. Defaults to false.
9.4.1.1. Creating a Secret
You can create a secret in the directory that contains your certificate and key files by using the following command:
$ oc create secret generic -n <namespace> <secret_name> \
--from-file=ca-bundle.crt=<your_bundle_file> \
--from-literal=username=<your_username> \
--from-literal=password=<your_password>
Generic or opaque secrets are recommended for best results.
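As a further sketch, a secret carrying the TLS keys described under "Supported Authorization Keys" (the file names are placeholders, and the secret name here simply matches the `fluentd-secret` used in later examples) might be created as follows:

$ oc create secret generic -n openshift-logging fluentd-secret \
  --from-file=tls.crt=<your_client_certificate_file> \
  --from-file=tls.key=<your_private_key_file> \
  --from-file=ca-bundle.crt=<your_bundle_file>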
9.4.2. Creating a log forwarder
To create a log forwarder, you must create a `ClusterLogForwarder` custom resource (CR) that specifies the log input types and outputs. The `ClusterLogForwarder` CR must be named `instance` and must be created in the `openshift-logging` namespace. You need administrator permissions for the `openshift-logging` namespace.
ClusterLogForwarder resource example
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: <log_forwarder_name>
namespace: <log_forwarder_namespace>
spec:
# ...
pipelines:
- inputRefs:
- <log_type>
outputRefs:
- <output_name>
outputs:
- name: <output_name>
type: <output_type>
url: <log_output_url>
# ...
1. The CR name must be `instance`.
2. The CR namespace must be `openshift-logging`.
3. The log types that are collected. The value for this field can be `audit` for audit logs, `application` for application logs, `infrastructure` for infrastructure logs, or a named input that has been defined for your application.
4, 5. A name for the output that you want to forward logs to.
6. The type of output that you want to forward logs to. The value of this field can be `default`, `loki`, `kafka`, `elasticsearch`, `fluentdForward`, `syslog`, or `cloudwatch`.
7. The URL of the output that you want to forward logs to.
9.4.3. Enabling multi-line exception detection
Enables multi-line error detection of container logs.
Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.
Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.
Example java exception
java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
at testjava.Main.handle(Main.java:47)
at testjava.Main.printMe(Main.java:19)
at testjava.Main.main(Main.java:10)
To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the `ClusterLogForwarder` Custom Resource (CR) contains a `detectMultilineErrors` field with a value of `true`.
Example ClusterLogForwarder CR
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: my-app-logs
    inputRefs:
    - application
    outputRefs:
    - default
    detectMultilineErrors: true
9.4.3.1. Details
When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.
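As an illustration only (not literal collector output), the Java exception shown earlier would be forwarded as one record whose message holds the whole stack trace, rather than as four separate records:

{"message": "java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null\n    at testjava.Main.handle(Main.java:47)\n    at testjava.Main.printMe(Main.java:19)\n    at testjava.Main.main(Main.java:10)",
 "more fields..."}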
| Language | Fluentd | Vector |
|---|---|---|
| Java | ✓ | ✓ |
| JS | ✓ | ✓ |
| Ruby | ✓ | ✓ |
| Python | ✓ | ✓ |
| Golang | ✓ | ✓ |
| PHP | ✓ | |
| Dart | ✓ | ✓ |
9.4.3.2. Troubleshooting
When enabled, the collector configuration will include a new section with type: `detect_exceptions`.
Example vector configuration section
[transforms.detect_exceptions_app-logs]
type = "detect_exceptions"
inputs = ["application"]
languages = ["All"]
group_by = ["kubernetes.namespace_name","kubernetes.pod_name","kubernetes.container_name"]
expire_after_ms = 2000
multiline_flush_interval_ms = 1000
Example fluentd config section
<label @MULTILINE_APP_LOGS>
<match kubernetes.**>
@type detect_exceptions
remove_tag_prefix 'kubernetes'
message message
force_line_breaks true
multiline_flush_interval .2
</match>
</label>
9.4.4. Forwarding logs to Google Cloud Platform (GCP)
You can forward logs to Google Cloud Logging in addition to, or instead of, the internal default OpenShift Container Platform log store.
Using this feature with Fluentd is not supported.
Prerequisites
- Red Hat OpenShift Logging Operator 5.5.1 and later
Procedure
Create a secret using your Google service account key.

$ oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json=<your_service_account_key_file.json>

Create a `ClusterLogForwarder` Custom Resource YAML using the template below:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  outputs:
  - name: gcp-1
    type: googleCloudLogging
    secret:
      name: gcp-secret
    googleCloudLogging:
      projectId: "openshift-gce-devel"
      logId: "app-gcp"
  pipelines:
  - name: test-app
    inputRefs:
    - application
    outputRefs:
    - gcp-1

- Set either a `projectId`, `folderId`, `organizationId`, or `billingAccountId` field and its corresponding value, depending on where you want to store your logs in the GCP resource hierarchy.
- Set the value to add to the `logName` field of the Log Entry.
- Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
9.4.5. Forwarding logs to Splunk
You can forward logs to the Splunk HTTP Event Collector (HEC) in addition to, or instead of, the internal default OpenShift Container Platform log store.
Using this feature with Fluentd is not supported.
Prerequisites
- Red Hat OpenShift Logging Operator 5.6 or later
-
A instance with
ClusterLoggingspecified as the collectorvector - Base64 encoded Splunk HEC token
Procedure
Create a secret using your Base64 encoded Splunk HEC token.
$ oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>Create or edit the
Custom Resource (CR) using the template below:ClusterLogForwarderapiVersion: "logging.openshift.io/v1" kind: "ClusterLogForwarder" metadata: name: "instance"1 namespace: "openshift-logging"2 spec: outputs: - name: splunk-receiver3 secret: name: vector-splunk-secret4 type: splunk5 url: <http://your.splunk.hec.url:8088>6 pipelines:7 - inputRefs: - application - infrastructure name:8 outputRefs: - splunk-receiver9 - 1
- The name of the ClusterLogForwarder CR must be
instance. - 2
- The namespace for the ClusterLogForwarder CR must be
openshift-logging. - 3
- Specify a name for the output.
- 4
- Specify the name of the secret that contains your HEC token.
- 5
- Specify the output type as
splunk. - 6
- Specify the URL (including port) of your Splunk HEC.
- 7
- Specify which log types to forward by using the pipeline:
application,infrastructure, oraudit. - 8
- Optional: Specify a name for the pipeline.
- 9
- Specify the name of the output to use when forwarding logs with this pipeline.
9.4.6. Forwarding logs over HTTP
Forwarding logs over HTTP is supported for both the Fluentd and Vector log collectors. To enable, specify `http` as the output type in the `ClusterLogForwarder` custom resource (CR).
Procedure
Create or edit the `ClusterLogForwarder` CR using the template below:

Example ClusterLogForwarder CR

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  outputs:
  - name: httpout-app
    type: http
    url: <destination_url>
    http:
      headers:
        h1: v1
        h2: v2
      method: POST
    secret:
      name: <secret_name>
    tls:
      insecureSkipVerify: <true_or_false>
  pipelines:
  - name: <pipeline_name>
    inputRefs:
    - application
    outputRefs:
    - httpout-app
9.4.7. Forwarding application logs from specific projects
You can forward a copy of the application logs from specific projects to an external log aggregator, in addition to, or instead of, using the internal log store. You must also configure the external log aggregator to receive log data from OpenShift Container Platform.
To configure forwarding application logs from a project, you must create a `ClusterLogForwarder` custom resource (CR) with at least one pipeline.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the
CR:ClusterLogForwarderExample
ClusterLogForwarderCRapiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance1 namespace: openshift-logging2 spec: outputs: - name: fluentd-server-secure3 type: fluentdForward4 url: 'tls://fluentdserver.security.example.com:24224'5 secret:6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs:7 - name: my-app-logs application: namespaces: - my-project8 pipelines: - name: forward-to-fluentd-insecure9 inputRefs:10 - my-app-logs outputRefs:11 - fluentd-server-insecure labels: project: "my-project"12 - name: forward-to-fluentd-secure13 inputRefs: - application14 - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: "C1234"- 1
- The name of the
ClusterLogForwarderCR must beinstance. - 2
- The namespace for the
ClusterLogForwarderCR must beopenshift-logging. - 3
- The name of the output.
- 4
- The output type:
elasticsearch,fluentdForward,syslog, orkafka. - 5
- The URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
- 6
- If using a
tlsprefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in theopenshift-loggingproject and have tls.crt, tls.key, and ca-bundle.crt keys that each point to the certificates they represent. - 7
- The configuration for an input to filter application logs from the specified projects.
- 8
- If no namespace is specified, logs are collected from all namespaces.
- 9
- The pipeline configuration directs logs from a named input to a named output. In this example, a pipeline named
forward-to-fluentd-insecureforwards logs from an input namedmy-app-logsto an output namedfluentd-server-insecure. - 10
- A list of inputs.
- 11
- The name of the output to use.
- 12
- Optional: String. One or more labels to add to the logs.
- 13
- Configuration for a pipeline to send logs to other log aggregators.
- Optional: Specify a name for the pipeline.
-
Specify which log types to forward by using the pipeline:
application,, orinfrastructure.audit - Specify the name of the output to use when forwarding logs with this pipeline.
-
Optional: Specify the output to forward logs to the default log store.
default - Optional: String. One or more labels to add to the logs.
- 14
- Note that application logs from all namespaces are collected when using this configuration.
Apply the
CR by running the following command:ClusterLogForwarder$ oc apply -f <filename>.yaml
9.4.8. Forwarding application logs from specific pods
As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector.
Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector.
To specify the pod labels, you use one or more `matchLabels` key-value pairs.
Procedure
Create or edit a YAML file that defines the
CR object. In the file, specify the pod labels using simple equality-based selectors underClusterLogForwarder, as shown in the following example.inputs[].name.application.selector.matchLabelsExample
ClusterLogForwarderCR YAML fileapiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance1 namespace: openshift-logging2 spec: pipelines: - inputRefs: [ myAppLogData ]3 outputRefs: [ default ]4 inputs:5 - name: myAppLogData application: selector: matchLabels:6 environment: production app: nginx namespaces:7 - app1 - app2 outputs:8 - default ...- 1
- The name of the
ClusterLogForwarderCR must beinstance. - 2
- The namespace for the
ClusterLogForwarderCR must beopenshift-logging. - 3
- Specify one or more comma-separated values from
inputs[].name. - 4
- Specify one or more comma-separated values from
outputs[]. - 5
- Define a unique
inputs[].namefor each application that has a unique set of pod labels. - 6
- Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
- 7
- Optional: Specify one or more namespaces.
- 8
- Specify one or more outputs to forward your log data to. The optional
defaultoutput shown here sends log data to the internal Elasticsearch instance.
-
Optional: To restrict the gathering of log data to specific namespaces, use , as shown in the preceding example.
inputs[].name.application.namespaces Optional: You can send log data from additional applications that have different pod labels to the same pipeline.
-
For each unique combination of pod labels, create an additional section similar to the one shown.
inputs[].name -
Update the to match the pod labels of this application.
selectors Add the new
value toinputs[].name. For example:inputRefs- inputRefs: [ myAppLogData, myOtherAppLogData ]
-
For each unique combination of pod labels, create an additional
Create the CR object:
$ oc create -f <file-name>.yaml
9.4.9. Forwarding logs to an external Loki logging system
You can forward logs to an external Loki logging system in addition to, or instead of, the default log store.
To configure log forwarding to Loki, you must create a `ClusterLogForwarder` custom resource (CR) with an output to Loki, and a pipeline that uses the output.
Prerequisites
- You must have a Loki logging system running at the URL you specify with the `url` field in the CR.
Procedure
Create or edit a YAML file that defines the
CR object:ClusterLogForwarderapiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance1 namespace: openshift-logging2 spec: outputs: - name: loki-insecure3 type: "loki"4 url: http://loki.insecure.com:31005 loki: tenantKey: kubernetes.namespace_name labelKeys: - kubernetes.labels.foo - name: loki-secure6 type: "loki" url: https://loki.secure.com:3100 secret: name: loki-secret7 loki: tenantKey: kubernetes.namespace_name8 labelKeys: - kubernetes.labels.foo9 pipelines: - name: application-logs10 inputRefs:11 - application - audit outputRefs:12 - loki-secure- 1
- The name of the
ClusterLogForwarderCR must beinstance. - 2
- The namespace for the
ClusterLogForwarderCR must beopenshift-logging. - 3
- Specify a name for the output.
- 4
- Specify the type as
"loki". - 5
- Specify the URL and port of the Loki system as a valid absolute URL. You can use the
http(insecure) orhttps(secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address. Loki’s default port for HTTP(S) communication is 3100. - 6
- For a secure connection, you can specify an
httpsorhttpURL that you authenticate by specifying asecret. - 7
- For an
httpsprefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in theopenshift-loggingproject and must contain aca-bundle.crtkey that points to the certificate it represents. Otherwise, forhttpandhttpsprefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting secret that contains a username and password." - 8
- Optional: Specify a meta-data key field to generate values for the
TenantIDfield in Loki. For example, settingtenantKey: kubernetes.namespace_nameuses the names of the Kubernetes namespaces as values for tenant IDs in Loki. To see which other log record fields you can specify, see the "Log Record Fields" link in the following "Additional resources" section. - 9
- Optional: Specify a list of meta-data field keys to replace the default Loki labels. Loki label names must match the regular expression
[a-zA-Z_:][a-zA-Z0-9_:]*. Illegal characters in meta-data keys are replaced with_to form the label name. For example, thekubernetes.labels.foometa-data key becomes Loki labelkubernetes_labels_foo. If you do not setlabelKeys, the default value is:[log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host]. Keep the set of labels small because Loki limits the size and number of labels allowed. See Configuring Loki, limits_config. You can still query based on any log record field using query filters. - 10
- Optional: Specify a name for the pipeline.
- 11
- Specify which log types to forward by using the pipeline:
application,infrastructure, oraudit. - 12
- Specify the name of the output to use when forwarding logs with this pipeline.
NoteBecause Loki requires log streams to be correctly ordered by timestamp,
always includes thelabelKeyslabel set, even if you do not specify it. This inclusion ensures that each stream originates from a single host, which prevents timestamps from becoming disordered due to clock differences on different hosts.kubernetes_hostApply the
CR object by running the following command:ClusterLogForwarder$ oc apply -f <filename>.yaml
9.4.10. Forwarding logs to an external Elasticsearch instance
You can forward logs to an external Elasticsearch instance in addition to, or instead of, the internal log store. You are responsible for configuring the external log aggregator to receive log data from OpenShift Container Platform.
To configure log forwarding to an external Elasticsearch instance, you must create a `ClusterLogForwarder` custom resource (CR) with an output to that instance, and a pipeline that uses the output.
To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the `default` output to forward logs to the internal instance.
If you only want to forward logs to an internal Elasticsearch instance, you do not need to create a `ClusterLogForwarder` CR.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the
CR:ClusterLogForwarderExample
ClusterLogForwarderCRapiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance1 namespace: openshift-logging2 spec: outputs: - name: elasticsearch-example3 type: elasticsearch4 elasticsearch: version: 85 url: http://elasticsearch.example.com:92006 secret: name: es-secret7 pipelines: - name: application-logs8 inputRefs:9 - application - audit outputRefs: - elasticsearch-example10 - default11 labels: myLabel: "myValue"12 # ...- 1
- The name of the
ClusterLogForwarderCR must beinstance. - 2
- The namespace for the
ClusterLogForwarderCR must beopenshift-logging. - 3
- Specify a name for the output.
- 4
- Specify the
elasticsearchtype. - 5
- Specify the Elasticsearch version. This can be
6,7, or8. - 6
- Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the
http(insecure) orhttps(secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address. - 7
- For an
httpsprefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain aca-bundle.crtkey that points to the certificate it represents. Otherwise, forhttpandhttpsprefixes, you can specify a secret that contains a username and password. The secret must exist in theopenshift-loggingproject. For more information, see the following "Example: Setting a secret that contains a username and password." - 8
- Optional: Specify a name for the pipeline.
- 9
- Specify which log types to forward by using the pipeline:
application,infrastructure, oraudit. - 10
- Specify the name of the output to use when forwarding logs with this pipeline.
- 11
- Optional: Specify the
defaultoutput to send the logs to the internal Elasticsearch instance. - 12
- Optional: String. One or more labels to add to the logs.
Apply the
CR:ClusterLogForwarder$ oc apply -f <filename>.yaml
Example: Setting a secret that contains a username and password
You can use a secret that contains a username and password to authenticate a secure connection to an external Elasticsearch instance.
For example, if you cannot use mutual TLS (mTLS) keys because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password.
Create a
YAML file similar to the following example. Use base64-encoded values for theSecretandusernamefields. The secret type is opaque by default.passwordapiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: <username> password: <password> # ...Create the secret:
$ oc create secret -n openshift-logging openshift-test-secret.yamlSpecify the name of the secret in the
CR:ClusterLogForwarderkind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: "elasticsearch" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret # ...NoteIn the value of the
field, the prefix can beurlorhttp.httpsApply the CR object:
$ oc apply -f <filename>.yaml
9.4.11. Forwarding logs using the Fluentd forward protocol
You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from OpenShift Container Platform.
To configure log forwarding using the forward protocol, you must create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the Fluentd servers, and pipelines that use those outputs.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the
CR object:ClusterLogForwarderapiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance1 namespace: openshift-logging2 spec: outputs: - name: fluentd-server-secure3 type: fluentdForward4 url: 'tls://fluentdserver.security.example.com:24224'5 secret:6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure7 inputRefs:8 - application - audit outputRefs: - fluentd-server-secure9 - default10 labels: clusterId: "C1234"11 - name: forward-to-fluentd-insecure12 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: "C1234"- 1
- The name of the
ClusterLogForwarderCR must beinstance. - 2
- The namespace for the
ClusterLogForwarderCR must beopenshift-logging. - 3
- Specify a name for the output.
- 4
- Specify the
fluentdForwardtype. - 5
- Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the
tcp(insecure) ortls(secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. - 6
- If you are using a
tlsprefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in theopenshift-loggingproject and must contain aca-bundle.crtkey that points to the certificate it represents. - 7
- Optional: Specify a name for the pipeline.
- 8
- Specify which log types to forward by using the pipeline:
application,infrastructure, oraudit. - 9
- Specify the name of the output to use when forwarding logs with this pipeline.
- 10
- Optional: Specify the
defaultoutput to forward logs to the internal Elasticsearch instance. - 11
- Optional: String. One or more labels to add to the logs.
- 12
- Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
- A name to describe the pipeline.
-
The is the log type to forward by using the pipeline:
inputRefsapplication,, orinfrastructure.audit -
The is the name of the output to use.
outputRefs - Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
9.4.11.1. Enabling nanosecond precision for Logstash to ingest data from fluentd
For Logstash to ingest log data from fluentd, you must enable nanosecond precision in the Logstash configuration file.
Procedure
- In the Logstash configuration file, set `nanosecond_precision` to `true`.
Example Logstash configuration file
input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } }
filter { }
output { stdout { codec => rubydebug } }
9.4.12. Forwarding logs using the syslog protocol
You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OpenShift Container Platform.
To configure log forwarding using the syslog protocol, you must create a `ClusterLogForwarder` custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the
CR object:ClusterLogForwarderapiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance1 namespace: openshift-logging2 spec: outputs: - name: rsyslog-east3 type: syslog4 syslog:5 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514'6 secret:7 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'tcp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east8 inputRefs:9 - audit - application outputRefs:10 - rsyslog-east - default11 labels: secure: "true"12 syslog: "east" - name: syslog-west13 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: "west"- 1
- The name of the
ClusterLogForwarderCR must beinstance. - 2
- The namespace for the
ClusterLogForwarderCR must beopenshift-logging. - 3
- Specify a name for the output.
- 4
- Specify the
syslogtype. - 5
- Optional: Specify the syslog parameters, listed below.
- 6
- Specify the URL and port of the external syslog instance. You can use the
udp(insecure),tcp(insecure) ortls(secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. - 7
- If using a
tlsprefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in theopenshift-loggingproject and must contain aca-bundle.crtkey that points to the certificate it represents. - 8
- Optional: Specify a name for the pipeline.
- 9
- Specify which log types to forward by using the pipeline:
application,infrastructure, oraudit. - 10
- Specify the name of the output to use when forwarding logs with this pipeline.
- 11
- Optional: Specify the
defaultoutput to forward logs to the internal Elasticsearch instance. - 12
- Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
- 13
- Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
- A name to describe the pipeline.
-
The is the log type to forward by using the pipeline:
inputRefsapplication,, orinfrastructure.audit -
The is the name of the output to use.
outputRefs - Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <filename>.yaml
9.4.12.1. Adding log source information to message output
You can add `namespace_name`, `pod_name`, and `container_name` elements to the `message` field of the record by adding the `AddLogSource` field to your `ClusterLogForwarder` custom resource (CR).
spec:
  outputs:
  - name: syslogout
    syslog:
      addLogSource: true
      facility: user
      payloadKey: message
      rfc: RFC3164
      severity: debug
      tag: mytag
    type: syslog
    url: tls://syslog-receiver.openshift-logging.svc:24224
  pipelines:
  - inputRefs:
    - application
    name: test-app
    outputRefs:
    - syslogout
This configuration is compatible with both RFC3164 and RFC5424.
Example syslog message output without AddLogSource
<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56}
Example syslog message output with AddLogSource
<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76}
9.4.12.2. Syslog parameters
You can configure the following for the `syslog` outputs:
- facility: The syslog facility. The value can be a decimal integer or a case-insensitive keyword:
  - `kern` or `0` for kernel messages
  - `user` or `1` for user-level messages, the default
  - `mail` or `2` for the mail system
  - `daemon` or `3` for system daemons
  - `auth` or `4` for security/authentication messages
  - `syslog` or `5` for messages generated internally by syslogd
  - `lpr` or `6` for the line printer subsystem
  - `news` or `7` for the network news subsystem
  - `uucp` or `8` for the UUCP subsystem
  - `cron` or `9` for the clock daemon
  - `authpriv` or `10` for security authentication messages
  - `ftp` or `11` for the FTP daemon
  - `ntp` or `12` for the NTP subsystem
  - `security` or `13` for the syslog audit log
  - `console` or `14` for the syslog alert log
  - `solaris-cron` or `15` for the scheduling daemon
  - `local0` to `local7` or `16` to `23` for locally used facilities
- Optional: payloadKey: The record field to use as payload for the syslog message.
  Note: Configuring the `payloadKey` parameter prevents other parameters from being forwarded to the syslog.
- rfc: The RFC to be used for sending logs using syslog. The default is RFC5424.
- severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:
  - `Emergency` or `0` for messages indicating the system is unusable
  - `Alert` or `1` for messages indicating action must be taken immediately
  - `Critical` or `2` for messages indicating critical conditions
  - `Error` or `3` for messages indicating error conditions
  - `Warning` or `4` for messages indicating warning conditions
  - `Notice` or `5` for messages indicating normal but significant conditions
  - `Informational` or `6` for messages indicating informational messages
  - `Debug` or `7` for messages indicating debug-level messages, the default
- tag: Tag specifies a record field to use as a tag on the syslog message.
- trimPrefix: Remove the specified prefix from the tag.
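Putting several of these parameters together, a sketch of a syslog output that mirrors the values used in the earlier example might look like this:

outputs:
- name: rsyslog-example
  type: syslog
  syslog:
    facility: local0
    rfc: RFC5424
    severity: informational
    payloadKey: message
  url: 'tls://rsyslogserver.example.com:514'
  secret:
    name: syslog-secret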
9.4.12.3. Additional RFC5424 syslog parameters
The following parameters apply to RFC5424:
- appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for `RFC5424`.
- msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for `RFC5424`.
- procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for `RFC5424`.
9.4.13. Forwarding logs to a Kafka broker
You can forward logs to an external Kafka broker in addition to, or instead of, the default log store.
To configure log forwarding to an external Kafka instance, you must create a `ClusterLogForwarder` custom resource (CR) with an output to that instance, and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default.
Procedure
Create or edit a YAML file that defines the
CR object:ClusterLogForwarderapiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance1 namespace: openshift-logging2 spec: outputs: - name: app-logs3 type: kafka4 url: tls://kafka.example.devlab.com:9093/app-topic5 secret: name: kafka-secret6 - name: infra-logs type: kafka url: tcp://kafka.devlab2.example.com:9093/infra-topic7 - name: audit-logs type: kafka url: tls://kafka.qelab.example.com:9093/audit-topic secret: name: kafka-secret-qe pipelines: - name: app-topic8 inputRefs:9 - application outputRefs:10 - app-logs labels: logType: "application"11 - name: infra-topic12 inputRefs: - infrastructure outputRefs: - infra-logs labels: logType: "infra" - name: audit-topic inputRefs: - audit outputRefs: - audit-logs - default13 labels: logType: "audit"- 1
- 1: The name of the ClusterLogForwarder CR must be instance.
- 2: The namespace for the ClusterLogForwarder CR must be openshift-logging.
- 3: Specify a name for the output.
- 4: Specify the kafka type.
- 5: Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
- 6: If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must contain a ca-bundle.crt key that points to the certificate it represents.
- 7: Optional: To send an insecure output, use a tcp prefix in front of the URL. Also omit the secret key and its name from this output.
- 8: Optional: Specify a name for the pipeline.
- 9: Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
- 10: Specify the name of the output to use when forwarding logs with this pipeline.
- 11: Optional: String. One or more labels to add to the logs.
- 12: Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
  - A name to describe the pipeline.
  - The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.
  - The outputRefs is the name of the output to use.
  - Optional: String. One or more labels to add to the logs.
- 13: Optional: Specify default to forward logs to the internal Elasticsearch instance.
Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in the following example:
# ...
spec:
  outputs:
  - name: app-logs
    type: kafka
    secret:
      name: kafka-secret-dev
    kafka:
      brokers:
      - tls://kafka-broker1.example.com:9093/
      - tls://kafka-broker2.example.com:9093/
      topic: app-topic
# ...
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
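If your Kafka endpoint requires TLS, the secret referenced by the output must exist in the openshift-logging project before you apply the CR. The following is a minimal sketch for creating it; the local certificate file name ca.crt is an illustrative assumption:
$ oc create secret generic kafka-secret -n openshift-logging \
    --from-file=ca-bundle.crt=ca.crt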
9.4.14. Forwarding logs to Amazon CloudWatch
You can forward logs to Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). You can forward logs to CloudWatch in addition to, or instead of, the default log store.
To configure log forwarding to CloudWatch, you must create a Secret that contains your AWS credentials and a ClusterLogForwarder custom resource (CR) with an output for CloudWatch and a pipeline that uses the output.
Procedure
Create a Secret YAML file that uses the aws_access_key_id and aws_secret_access_key fields to specify your base64-encoded AWS credentials. For example:
apiVersion: v1
kind: Secret
metadata:
  name: cw-secret
  namespace: openshift-logging
data:
  aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK
  aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=
Create the secret. For example:
$ oc apply -f cw-secret.yaml
Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the name of the secret. For example:
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: instance 1
  namespace: openshift-logging 2
spec:
  outputs:
  - name: cw 3
    type: cloudwatch 4
    cloudwatch:
      groupBy: logType 5
      groupPrefix: <group prefix> 6
      region: us-east-2 7
    secret:
      name: cw-secret 8
  pipelines:
  - name: infra-logs 9
    inputRefs: 10
    - infrastructure
    - audit
    - application
    outputRefs:
    - cw 11
- 1: The name of the ClusterLogForwarder CR must be instance.
- 2: The namespace for the ClusterLogForwarder CR must be openshift-logging.
- 3: Specify a name for the output.
- 4: Specify the cloudwatch type.
- 5: Optional: Specify how to group the logs:
  - logType creates log groups for each log type.
  - namespaceName creates a log group for each application namespace. It also creates separate log groups for infrastructure and audit logs.
  - namespaceUUID creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs.
- 6: Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups.
- 7: Specify the AWS region.
- 8: Specify the name of the secret that contains your AWS credentials.
- 9: Optional: Specify a name for the pipeline.
- 10: Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
- 11: Specify the name of the output to use when forwarding logs with this pipeline.
Create the CR object:
$ oc create -f <file-name>.yaml
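After the CR exists, you can optionally confirm that the collector pods are running and have picked up the new configuration. This verification step is a suggestion rather than part of the documented procedure:
$ oc get pods -l component=collector -n openshift-logging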
Example: Using ClusterLogForwarder with Amazon CloudWatch
Here, you see an example ClusterLogForwarder custom resource (CR) and the log data that it generates in Amazon CloudWatch.
Suppose that you are running an OpenShift Container Platform cluster named mycluster. The following command returns the cluster's infrastructureName, which you will use to compose aws commands later on:
$ oc get Infrastructure/cluster -ojson | jq .status.infrastructureName
"mycluster-7977k"
To generate log data for this example, you run a busybox pod in a namespace called app. The busybox pod writes a message to stdout every three seconds:
$ oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done'
$ oc logs -f busybox
My life is my message
My life is my message
My life is my message
...
You can look up the UUID of the app namespace where the busybox pod runs:
$ oc get ns/app -ojson | jq .metadata.uid
"794e1e1a-b9f5-4958-a190-e76a9b53d7bf"
In your ClusterLogForwarder CR, you configure the infrastructure, audit, and application log types as inputs to the all-logs pipeline. You also configure this pipeline to send the logs to the cw output, which forwards them to CloudWatch in the us-east-2 region:
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: cw
    type: cloudwatch
    cloudwatch:
      groupBy: logType
      region: us-east-2
    secret:
      name: cw-secret
  pipelines:
  - name: all-logs
    inputRefs:
    - infrastructure
    - audit
    - application
    outputRefs:
    - cw
Each region in CloudWatch contains three levels of objects:
- log group
- log stream
- log event
With groupBy: logType in the ClusterLogForwarding CR, the three log types in the inputRefs produce three log groups in Amazon CloudWatch:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.application"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
Each of the log groups contains log streams:
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName
"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log"
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log"
...
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log"
...
Each log stream contains log events. To see a log event from the busybox pod, you query its log stream from the application log group:
$ aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log
{
"events": [
{
"timestamp": 1629422704178,
"message": "{\"docker\":{\"container_id\":\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\"},\"kubernetes\":{\"container_name\":\"busybox\",\"namespace_name\":\"app\",\"pod_name\":\"busybox\",\"container_image\":\"docker.io/library/busybox:latest\",\"container_image_id\":\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\",\"pod_id\":\"870be234-90a3-4258-b73f-4f4d6e2777c7\",\"host\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"labels\":{\"run\":\"busybox\"},\"master_url\":\"https://kubernetes.default.svc\",\"namespace_id\":\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\",\"namespace_labels\":{\"kubernetes_io/metadata_name\":\"app\"}},\"message\":\"My life is my message\",\"level\":\"unknown\",\"hostname\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"pipeline_metadata\":{\"collector\":{\"ipaddr4\":\"10.0.216.3\",\"inputname\":\"fluent-plugin-systemd\",\"name\":\"fluentd\",\"received_at\":\"2021-08-20T01:25:08.085760+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-20T01:25:04.178986+00:00\",\"viaq_index_name\":\"app-write\",\"viaq_msg_id\":\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\",\"log_type\":\"application\",\"time\":\"2021-08-20T01:25:04+00:00\"}",
"ingestionTime": 1629422744016
},
...
Example: Customizing the prefix in log group names
In the log group names, you can replace the default infrastructureName prefix, mycluster-7977k, with an arbitrary string such as demo-group-prefix. To make this change, you update the groupPrefix field in the ClusterLogForwarding CR:
cloudwatch:
    groupBy: logType
    groupPrefix: demo-group-prefix
    region: us-east-2
The value of groupPrefix replaces the default infrastructureName prefix:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"demo-group-prefix.application"
"demo-group-prefix.audit"
"demo-group-prefix.infrastructure"
Example: Naming log groups after application namespace names
For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the name of the application namespace.
If you delete an application namespace object and create a new one that has the same name, CloudWatch continues using the same log group as before.
If you consider successive application namespace objects that have the same name as equivalent to each other, use the approach described in this example. Otherwise, if you need to distinguish the resulting log groups from each other, see the following "Naming log groups for application namespace UUIDs" section instead.
To create application log groups whose names are based on the names of the application namespaces, you set the value of the groupBy field to namespaceName in the ClusterLogForwarder CR:
cloudwatch:
    groupBy: namespaceName
    region: us-east-2
Setting groupBy to namespaceName affects the application log group only. It does not affect the audit and infrastructure log groups.
In Amazon CloudWatch, the namespace name appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.app log group instead of mycluster-7977k.application:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.app"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
If the cluster in this example had contained multiple application namespaces, the output would show multiple log groups, one for each namespace.
The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups.
Example: Naming log groups after application namespace UUIDs
For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the UUID of the application namespace.
If you delete an application namespace object and create a new one, CloudWatch creates a new log group.
If you consider successive application namespace objects with the same name as different from each other, use the approach described in this example. Otherwise, see the preceding "Example: Naming log groups for application namespace names" section instead.
To name log groups after application namespace UUIDs, you set the value of the groupBy field to namespaceUUID in the ClusterLogForwarder CR:
cloudwatch:
    groupBy: namespaceUUID
    region: us-east-2
In Amazon CloudWatch, the namespace UUID appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf log group instead of mycluster-7977k.application:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf" // uid of the "app" namespace
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups.
9.4.15. Forwarding logs to Amazon CloudWatch from STS enabled clusters
For clusters with AWS Security Token Service (STS) enabled, you can create an AWS service account manually or create a credentials request by using the Cloud Credential Operator (CCO) utility ccoctl.
Prerequisites
- Logging for Red Hat OpenShift: 5.5 and later
Procedure
Create a CredentialsRequest custom resource YAML by using the template below:
CloudWatch credentials request template
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <your_role_name>-credrequest
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
      - action:
          - logs:PutLogEvents
          - logs:CreateLogGroup
          - logs:PutRetentionPolicy
          - logs:CreateLogStream
          - logs:DescribeLogGroups
          - logs:DescribeLogStreams
        effect: Allow
        resource: arn:aws:logs:*:*:*
  secretRef:
    name: <your_role_name>
    namespace: openshift-logging
  serviceAccountNames:
    - logcollector
Use the ccoctl command to create a role for AWS using your CredentialsRequest CR. With the CredentialsRequest object, this ccoctl command creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy that grants permissions to perform operations on CloudWatch resources. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/openshift-logging-<your_role_name>-credentials.yaml. This secret file contains the role_arn key/value used during authentication with the AWS IAM identity provider.
$ ccoctl aws create-iam-roles \
    --name=<name> \
    --region=<aws_region> \
    --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \
    --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com 1
- 1: <name> is the name used to tag your cloud resources and should match the name used during your STS cluster installation.
Apply the secret created:
$ oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml
Create or edit a ClusterLogForwarder custom resource:
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: instance 1
  namespace: openshift-logging 2
spec:
  outputs:
  - name: cw 3
    type: cloudwatch 4
    cloudwatch:
      groupBy: logType 5
      groupPrefix: <group prefix> 6
      region: us-east-2 7
    secret:
      name: <your_role_name> 8
  pipelines:
  - name: to-cloudwatch 9
    inputRefs: 10
    - infrastructure
    - audit
    - application
    outputRefs:
    - cw 11
- 1: The name of the ClusterLogForwarder CR must be instance.
- 2: The namespace for the ClusterLogForwarder CR must be openshift-logging.
- 3: Specify a name for the output.
- 4: Specify the cloudwatch type.
- 5: Optional: Specify how to group the logs:
  - logType creates log groups for each log type.
  - namespaceName creates a log group for each application namespace. Infrastructure and audit logs are unaffected, remaining grouped by logType.
  - namespaceUUID creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs.
- 6: Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups.
- 7: Specify the AWS region.
- 8: Specify the name of the secret that contains your AWS credentials.
- 9: Optional: Specify a name for the pipeline.
- 10: Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
- 11: Specify the name of the output to use when forwarding logs with this pipeline.
9.4.16. Creating a secret for AWS CloudWatch with an existing AWS role
If you have an existing role for AWS, you can create a secret for AWS with STS by using the oc create secret --from-literal command:
$ oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions
Example Secret
apiVersion: v1
kind: Secret
metadata:
  namespace: openshift-logging
  name: my-secret-name
stringData:
  role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions
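To use the secret, reference its name in the cloudwatch output of your ClusterLogForwarder CR, in the same way as the cw-secret shown earlier in this chapter. The following is a minimal sketch, assuming the cw-sts-secret name created by the command above and an output named cw:
spec:
  outputs:
  - name: cw
    type: cloudwatch
    cloudwatch:
      groupBy: logType
      region: us-east-2
    secret:
      name: cw-sts-secret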
9.5. Configuring the logging collector
Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata.
You can configure the CPU and memory limits for the log collector and move the log collector pods to specific nodes. All supported modifications to the log collector can be performed through the spec.collection.log.fluentd stanza in the ClusterLogging custom resource (CR).
9.5.1. Configuring the log collector
You can configure which log collector type your logging uses by modifying the ClusterLogging custom resource (CR).
Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
- You have created a ClusterLogging CR.
Procedure
Modify the collection spec in the ClusterLogging CR:
ClusterLogging CR example
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
# ...
spec:
# ...
  collection:
    type: <log_collector_type> 1
    resources: {}
    tolerations: {}
# ...
- 1: The log collector type you want to use for the logging. This can be vector or fluentd.
Apply the ClusterLogging CR by running the following command:
$ oc apply -f <filename>.yaml
9.5.2. Viewing logging collector pods
You can view the logging collector pods and the corresponding nodes that they are running on.
Procedure
Run the following command in a project to view the logging collector pods and their details:
$ oc get pods --selector component=collector -o wide -n <project_name>
Example output
NAME              READY   STATUS    RESTARTS   AGE    IP            NODE                  NOMINATED NODE   READINESS GATES
collector-8d69v   1/1     Running   0          134m   10.130.2.30   master1.example.com   <none>           <none>
collector-bd225   1/1     Running   0          134m   10.131.1.11   master2.example.com   <none>           <none>
collector-cvrzs   1/1     Running   0          134m   10.130.0.21   master3.example.com   <none>           <none>
collector-gpqg2   1/1     Running   0          134m   10.128.2.27   worker1.example.com   <none>           <none>
collector-l9j7j   1/1     Running   0          134m   10.129.2.31   worker2.example.com   <none>           <none>
9.5.3. Configure log collector CPU and memory limits
The log collector allows for adjustments to both the CPU and memory limits.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:
$ oc -n openshift-logging edit ClusterLogging instance
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    type: fluentd
    resources:
      limits: 1
        memory: 736Mi
      requests:
        cpu: 100m
        memory: 736Mi
# ...
- 1: Specify the CPU and memory limits and requests as needed. The values shown are the default values.
9.5.4. Advanced configuration for the Fluentd log forwarder
Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.
Logging includes multiple Fluentd parameters that you can use for tuning the performance of the Fluentd log forwarder. With these parameters, you can change the following Fluentd behaviors:
- Chunk and chunk buffer sizes
- Chunk flushing behavior
- Chunk forwarding retry behavior
Fluentd collects log data in a single blob called a chunk. When Fluentd creates a chunk, the chunk is considered to be in the stage, where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the queue, where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured.
By default in OpenShift Container Platform, Fluentd uses the exponential backoff method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the periodic retry method instead, which retries flushing the chunks at a specified interval.
These parameters can help you determine the trade-offs between latency and throughput.
- To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries, as in the sketch after this list. Be aware that larger buffers require more space on the node file system.
- To optimize for low latency, you could use the parameters to send data as soon as possible, avoid the build-up of batches, have shorter queues and buffers, and use more frequent flush and retries.
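For example, a throughput-oriented tuning of the buffer might look like the following sketch. The field names match the table later in this section, but the values are illustrative assumptions rather than recommendations:
spec:
  collection:
    fluentd:
      buffer:
        chunkLimitSize: 32m       # larger chunks, fewer flushes
        totalLimitSize: 64m       # larger overall buffer on the node file system
        flushMode: interval
        flushInterval: 30s        # flush less frequently
        retryType: exponential_backoff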
You can configure the chunking and flushing behavior by using the following parameters in the ClusterLogging custom resource (CR).
These parameters are:
- Not relevant to most users. The default settings should give good general performance.
- Only for advanced users with detailed knowledge of Fluentd configuration and performance.
- Only for performance tuning. They have no effect on functional aspects of logging.
| Parameter | Description | Default |
|---|---|---|
| chunkLimitSize | The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk. | |
| totalLimitSize | The maximum size of the buffer, which is the total size of the stage and the queue. If the buffer size exceeds this value, Fluentd stops adding data to chunks and fails with an error. All data not in chunks is lost. | Approximately 15% of the node disk distributed across all outputs. |
| flushInterval | The interval between chunk flushes. | |
| flushMode | The method to perform flushes: lazy, interval, or immediate. | |
| flushThreadCount | The number of threads that perform chunk flushing. Increasing the number of threads improves the flush throughput, which hides network latency. | |
| overflowAction | The chunking behavior when the queue is full: throw_exception, block, or drop_oldest_chunk. | |
| retryMaxInterval | The maximum time in seconds for the exponential_backoff retry method. | |
| retryType | The retry method when flushing fails: exponential_backoff or periodic. | |
| retryTimeout | The maximum time interval to attempt retries before the record is discarded. | |
| retryWait | The time in seconds before the next chunk flush. | |
For more information on the Fluentd chunk lifecycle, see Buffer Plugins in the Fluentd documentation.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:
$ oc edit ClusterLogging instance
Add or modify any of the following parameters:
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    fluentd:
      buffer:
        chunkLimitSize: 8m 1
        flushInterval: 5s 2
        flushMode: interval 3
        flushThreadCount: 3 4
        overflowAction: throw_exception 5
        retryMaxInterval: "300s" 6
        retryType: periodic 7
        retryWait: 1s 8
        totalLimitSize: 32m 9
# ...
- 1: Specify the maximum size of each chunk before it is queued for flushing.
- 2: Specify the interval between chunk flushes.
- 3: Specify the method to perform chunk flushes: lazy, interval, or immediate.
- 4: Specify the number of threads to use for chunk flushes.
- 5: Specify the chunking behavior when the queue is full: throw_exception, block, or drop_oldest_chunk.
- 6: Specify the maximum interval in seconds for the exponential_backoff chunk flushing method.
- 7: Specify the retry type when chunk flushing fails: exponential_backoff or periodic.
- 8: Specify the time in seconds before the next chunk flush.
- 9: Specify the maximum size of the chunk buffer.
Verify that the Fluentd pods are redeployed:
$ oc get pods -l component=collector -n openshift-logging
Check that the new values are in the fluentd config map:
$ oc extract configmap/collector --confirm
Example fluentd.conf
<buffer>
  @type file
  path '/var/lib/fluentd/default'
  flush_mode interval
  flush_interval 5s
  flush_thread_count 3
  retry_type periodic
  retry_wait 1s
  retry_max_interval 300s
  retry_timeout 60m
  queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}"
  total_limit_size "#{ENV['TOTAL_LIMIT_SIZE_PER_BUFFER'] || '8589934592'}"
  chunk_limit_size 8m
  overflow_action throw_exception
  disable_chunk_backup true
</buffer>
9.6. Collecting and storing Kubernetes events
The OpenShift Container Platform Event Router is a pod that watches Kubernetes events and logs them for collection by the logging. You must manually deploy the Event Router.
The Event Router collects events from all projects and writes them to STDOUT. The collector then forwards those events to the store defined in the ClusterLogForwarder custom resource (CR).
The Event Router adds additional load to Fluentd and can impact the number of other log messages that can be processed.
9.6.1. Deploying and configuring the Event Router
Use the following steps to deploy the Event Router into your cluster. You should always deploy the Event Router to the openshift-logging project to ensure that it collects events from across the cluster.
The Event Router image is not a part of the Red Hat OpenShift Logging Operator and must be downloaded separately.
The following Template object creates the service account, cluster role, cluster role binding, config map, and deployment required for the Event Router.
Prerequisites
- You need proper permissions to create service accounts and update cluster role bindings. For example, you can run the following template with a user that has the cluster-admin role.
- The Red Hat OpenShift Logging Operator must be installed.
Procedure
Create a template for the Event Router:
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: eventrouter-template
  annotations:
    description: "A pod forwarding kubernetes events to OpenShift Logging stack."
    tags: "events,EFK,logging,cluster-logging"
objects:
  - kind: ServiceAccount 1
    apiVersion: v1
    metadata:
      name: eventrouter
      namespace: ${NAMESPACE}
  - kind: ClusterRole 2
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: event-reader
    rules:
    - apiGroups: [""]
      resources: ["events"]
      verbs: ["get", "watch", "list"]
  - kind: ClusterRoleBinding 3
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: event-reader-binding
    subjects:
    - kind: ServiceAccount
      name: eventrouter
      namespace: ${NAMESPACE}
    roleRef:
      kind: ClusterRole
      name: event-reader
  - kind: ConfigMap 4
    apiVersion: v1
    metadata:
      name: eventrouter
      namespace: ${NAMESPACE}
    data:
      config.json: |-
        {
          "sink": "stdout"
        }
  - kind: Deployment 5
    apiVersion: apps/v1
    metadata:
      name: eventrouter
      namespace: ${NAMESPACE}
      labels:
        component: "eventrouter"
        logging-infra: "eventrouter"
        provider: "openshift"
    spec:
      selector:
        matchLabels:
          component: "eventrouter"
          logging-infra: "eventrouter"
          provider: "openshift"
      replicas: 1
      template:
        metadata:
          labels:
            component: "eventrouter"
            logging-infra: "eventrouter"
            provider: "openshift"
          name: eventrouter
        spec:
          serviceAccount: eventrouter
          containers:
            - name: kube-eventrouter
              image: ${IMAGE}
              imagePullPolicy: IfNotPresent
              resources:
                requests:
                  cpu: ${CPU}
                  memory: ${MEMORY}
              volumeMounts:
              - name: config-volume
                mountPath: /etc/eventrouter
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
          securityContext:
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
          volumes:
          - name: config-volume
            configMap:
              name: eventrouter
parameters:
  - name: IMAGE 6
    displayName: Image
    value: "registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.4"
  - name: CPU 7
    displayName: CPU
    value: "100m"
  - name: MEMORY 8
    displayName: Memory
    value: "128Mi"
  - name: NAMESPACE
    displayName: Namespace
    value: "openshift-logging" 9
- 1: Creates a Service Account in the openshift-logging project for the Event Router.
- 2: Creates a ClusterRole to monitor for events in the cluster.
- 3: Creates a ClusterRoleBinding to bind the ClusterRole to the service account.
- 4: Creates a config map in the openshift-logging project to generate the required config.json file.
- 5: Creates a deployment in the openshift-logging project to generate and configure the Event Router pod.
- 6: Specifies the image, identified by a tag such as v0.4.
- 7: Specifies the minimum amount of CPU to allocate to the Event Router pod. Defaults to 100m.
- 8: Specifies the minimum amount of memory to allocate to the Event Router pod. Defaults to 128Mi.
- 9: Specifies the openshift-logging project to install objects in.
Use the following command to process and apply the template:
$ oc process -f <templatefile> | oc apply -n openshift-logging -f -
For example:
$ oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -
Example output
serviceaccount/eventrouter created
clusterrole.rbac.authorization.k8s.io/event-reader created
clusterrolebinding.rbac.authorization.k8s.io/event-reader-binding created
configmap/eventrouter created
deployment.apps/eventrouter created
Validate that the Event Router installed in the openshift-logging project:
View the new Event Router pod:
$ oc get pods --selector component=eventrouter -o name -n openshift-logging
Example output
pod/cluster-logging-eventrouter-d649f97c8-qvv8r
View the events collected by the Event Router:
$ oc logs <cluster_logging_eventrouter_pod> -n openshift-logging
For example:
$ oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging
Example output
{"verb":"ADDED","event":{"metadata":{"name":"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","namespace":"openshift-service-catalog-removed","selfLink":"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","uid":"787d7b26-3d2f-4017-b0b0-420db4ae62c0","resourceVersion":"21399","creationTimestamp":"2020-09-08T15:40:26Z"},"involvedObject":{"kind":"Job","namespace":"openshift-service-catalog-removed","name":"openshift-service-catalog-controller-manager-remover","uid":"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f","apiVersion":"batch/v1","resourceVersion":"21280"},"reason":"Completed","message":"Job completed","source":{"component":"job-controller"},"firstTimestamp":"2020-09-08T15:40:26Z","lastTimestamp":"2020-09-08T15:40:26Z","count":1,"type":"Normal"}}You can also use Kibana to view events by creating an index pattern using the Elasticsearch
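Because the Event Router emits each event as one JSON object per line, you can optionally filter the output with a tool such as jq. The following sketch keeps only Warning events; the field names follow the example output above, and the pod name is the one from this example:
$ oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging | jq 'select(.event.type == "Warning")'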
index.infra