Chapter 10. Log collection and forwarding
10.1. About log collection and forwarding
The Red Hat OpenShift Logging Operator deploys a collector based on the ClusterLogForwarder resource specification. There are two supported collector options: the legacy Fluentd collector and the Vector collector.
Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.
10.1.1. Log collection
The log collector is a daemon set that deploys pods to each OpenShift Container Platform node to collect container and node logs.
By default, the log collector uses the following sources:
- System and infrastructure logs generated by journald log messages from the operating system, the container runtime, and OpenShift Container Platform.
- /var/log/containers/*.log for all container logs.
If you configure the log collector to collect audit logs, it collects them from /var/log/audit/audit.log.
The log collector collects the logs from these sources and forwards them internally or externally depending on your logging configuration.
10.1.1.1. Log collector types
Vector is a log collector offered as an alternative to Fluentd for the logging.
You can configure which logging collector type your cluster uses by modifying the ClusterLogging custom resource (CR) collection spec, as shown in the following example.
Example ClusterLogging CR that configures Vector as the collector
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      type: vector
      vector: {}
# ...
10.1.1.2. Log collection limitations
The container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. This information is not sufficient to uniquely identify the source of the logs. If a pod with a given name and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations, might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source. This limitation means that log collection and normalization are considered best effort.
The available container runtimes provide minimal information to identify the source of log messages and do not guarantee unique individual log messages or that these messages can be traced to their source.
10.1.1.3. Log collector features by type
| Feature | Fluentd | Vector |
|---|---|---|
| App container logs | ✓ | ✓ |
| App-specific routing | ✓ | ✓ |
| App-specific routing by namespace | ✓ | ✓ |
| Infra container logs | ✓ | ✓ |
| Infra journal logs | ✓ | ✓ |
| Kube API audit logs | ✓ | ✓ |
| OpenShift API audit logs | ✓ | ✓ |
| Open Virtual Network (OVN) audit logs | ✓ | ✓ |
| Feature | Fluentd | Vector |
|---|---|---|
| Elasticsearch certificates | ✓ | ✓ |
| Elasticsearch username / password | ✓ | ✓ |
| Amazon Cloudwatch keys | ✓ | ✓ |
| Amazon Cloudwatch STS | ✓ | ✓ |
| Kafka certificates | ✓ | ✓ |
| Kafka username / password | ✓ | ✓ |
| Kafka SASL | ✓ | ✓ |
| Loki bearer token | ✓ | ✓ |
| Feature | Fluentd | Vector |
|---|---|---|
| Viaq data model - app | ✓ | ✓ |
| Viaq data model - infra | ✓ | ✓ |
| Viaq data model - infra(journal) | ✓ | ✓ |
| Viaq data model - Linux audit | ✓ | ✓ |
| Viaq data model - kube-apiserver audit | ✓ | ✓ |
| Viaq data model - OpenShift API audit | ✓ | ✓ |
| Viaq data model - OVN | ✓ | ✓ |
| Loglevel Normalization | ✓ | ✓ |
| JSON parsing | ✓ | ✓ |
| Structured Index | ✓ | ✓ |
| Multiline error detection | ✓ | ✓ |
| Multicontainer / split indices | ✓ | ✓ |
| Flatten labels | ✓ | ✓ |
| CLF static labels | ✓ | ✓ |
| Feature | Fluentd | Vector |
|---|---|---|
| Fluentd readlinelimit | ✓ | |
| Fluentd buffer | ✓ | |
| - chunklimitsize | ✓ | |
| - totallimitsize | ✓ | |
| - overflowaction | ✓ | |
| - flushthreadcount | ✓ | |
| - flushmode | ✓ | |
| - flushinterval | ✓ | |
| - retrywait | ✓ | |
| - retrytype | ✓ | |
| - retrymaxinterval | ✓ | |
| - retrytimeout | ✓ |
| Feature | Fluentd | Vector |
|---|---|---|
| Metrics | ✓ | ✓ |
| Dashboard | ✓ | ✓ |
| Alerts | ✓ | ✓ |
| Feature | Fluentd | Vector |
|---|---|---|
| Global proxy support | ✓ | ✓ |
| x86 support | ✓ | ✓ |
| ARM support | ✓ | ✓ |
| IBM Power® support | ✓ | ✓ |
| IBM Z® support | ✓ | ✓ |
| IPv6 support | ✓ | ✓ |
| Log event buffering | ✓ | |
| Disconnected Cluster | ✓ | ✓ |
10.1.1.4. Collector outputs
The following collector outputs are supported:
| Feature | Fluentd | Vector |
|---|---|---|
| Elasticsearch v6-v8 | ✓ | ✓ |
| Fluent forward | ✓ | |
| Syslog RFC3164 | ✓ | ✓ (Logging 5.7+) |
| Syslog RFC5424 | ✓ | ✓ (Logging 5.7+) |
| Kafka | ✓ | ✓ |
| Amazon Cloudwatch | ✓ | ✓ |
| Amazon Cloudwatch STS | ✓ | ✓ |
| Loki | ✓ | ✓ |
| HTTP | ✓ | ✓ (Logging 5.7+) |
| Google Cloud Logging | ✓ | ✓ |
| Splunk | | ✓ (Logging 5.6+) |
10.1.2. Log forwarding
Administrators can create ClusterLogForwarder resources that specify which logs are collected, how they are transformed, and where they are forwarded to. ClusterLogForwarder resources can be used to forward container, infrastructure, and audit logs to specific endpoints within or outside of a cluster.
Administrators can also authorize RBAC permissions that define which service accounts and users can access and forward which types of logs.
10.1.2.1. Log forwarding implementations
There are two log forwarding implementations available: the legacy implementation, and the multi log forwarder feature.
Only the Vector collector is supported for use with the multi log forwarder feature. The Fluentd collector can only be used with legacy implementations.
10.1.2.1.1. Legacy implementation
In legacy implementations, you can only use one log forwarder in your cluster. The ClusterLogForwarder resource in this mode must be named instance, and must be created in the openshift-logging namespace. The ClusterLogForwarder resource also requires a corresponding ClusterLogging resource named instance in the openshift-logging namespace.
10.1.2.1.2. Multi log forwarder feature
The multi log forwarder feature is available in logging 5.8 and later, and provides the following functionality:
- Administrators can control which users are allowed to define log collection and which logs they are allowed to collect.
- Users who have the required permissions are able to specify additional log collection configurations.
- Administrators who are migrating from the deprecated Fluentd collector to the Vector collector can deploy a new log forwarder separately from their existing deployment. The existing and new log forwarders can operate simultaneously while workloads are being migrated.
In multi log forwarder implementations, you are not required to create a corresponding ClusterLogging resource. You can use any name for the ClusterLogForwarder resource in any namespace, with the following exceptions:
- You cannot create a ClusterLogForwarder resource named instance in the openshift-logging namespace, because this is reserved for a log forwarder that supports the legacy workflow using the Fluentd collector.
- You cannot create a ClusterLogForwarder resource named collector in the openshift-logging namespace, because this is reserved for the collector.
10.1.2.2. Enabling the multi log forwarder feature for a cluster
To use the multi log forwarder feature, you must create a service account and cluster role bindings for that service account. You can then reference the service account in the ClusterLogForwarder resource to control access permissions.
In order to support multi log forwarding in additional namespaces other than the openshift-logging namespace, you must update the Red Hat OpenShift Logging Operator to watch all namespaces.
10.1.2.2.1. Authorizing log collection RBAC permissions
In logging 5.8 and later, the Red Hat OpenShift Logging Operator provides the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.
You can authorize RBAC permissions for log collection by binding the required cluster roles to a service account.
Prerequisites
- The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace.
- You have administrator permissions.
Procedure
- Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account.
Bind the appropriate cluster roles to the service account:
Example binding command
$ oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>
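As a minimal sketch, the following commands create a hypothetical service account named collector in the openshift-logging namespace and bind the cluster roles named above to it; the service account name is an assumption, and you only need the bindings for the log types you intend to collect:
$ oc create sa collector -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:collector
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:collector
$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:collector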
10.2. Log output types
Outputs define the destination where logs are sent to from a log forwarder. You can configure multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols.
10.2.1. Supported log forwarding outputs
Outputs can be any of the following types:
| Output type | Protocol | Tested with | Logging versions | Supported collector type |
|---|---|---|---|---|
| Elasticsearch v6 | HTTP 1.1 | 6.8.1, 6.8.23 | 5.6+ | Fluentd, Vector |
| Elasticsearch v7 | HTTP 1.1 | 7.12.2, 7.17.7, 7.10.1 | 5.6+ | Fluentd, Vector |
| Elasticsearch v8 | HTTP 1.1 | 8.4.3, 8.6.1 | 5.6+ | Fluentd [1], Vector |
| Fluent Forward | Fluentd forward v1 | Fluentd 1.14.6, Logstash 7.10.1, Fluentd 1.14.5 | 5.4+ | Fluentd |
| Google Cloud Logging | REST over HTTPS | Latest | 5.7+ | Vector |
| HTTP | HTTP 1.1 | Fluentd 1.14.6, Vector 0.21 | 5.7+ | Fluentd, Vector |
| Kafka | Kafka 0.11 | Kafka 2.4.1, 2.7.0, 3.3.1 | 5.4+ | Fluentd, Vector |
| Loki | REST over HTTP and HTTPS | 2.3.0, 2.5.0, 2.7, 2.2.1 | 5.4+ | Fluentd, Vector |
| Splunk | HEC | 8.2.9, 9.0.0 | 5.6+ | Vector |
| Syslog | RFC3164, RFC5424 | Rsyslog 8.37.0-9.el7, rsyslog-8.39.0 | 5.4+ | Fluentd, Vector [2] |
| Amazon CloudWatch | REST over HTTPS | Latest | 5.4+ | Fluentd, Vector |
- Fluentd does not support Elasticsearch 8 in the logging version 5.6.2.
- Vector supports Syslog in the logging version 5.7 and higher.
10.2.2. Output type descriptions
- default: The on-cluster, Red Hat managed log store. You are not required to configure the default output.
  Note: If you configure a default output, you receive an error message, because the default output name is reserved for referencing the on-cluster, Red Hat managed log store.
- loki: Loki, a horizontally scalable, highly available, multi-tenant log aggregation system.
- kafka: A Kafka broker. The kafka output can use a TCP or TLS connection.
- elasticsearch: An external Elasticsearch instance. The elasticsearch output can use a TLS connection.
- fluentdForward: An external log aggregation solution that supports Fluentd. This option uses the Fluentd forward protocols. The fluentdForward output can use a TCP or TLS connection and supports shared-key authentication by providing a shared_key field in a secret. Shared-key authentication can be used with or without TLS.
  Important: The fluentdForward output is only supported if you are using the Fluentd collector. It is not supported if you are using the Vector collector. If you are using the Vector collector, you can forward logs to Fluentd by using the http output.
- syslog: An external log aggregation solution that supports the syslog RFC3164 or RFC5424 protocols. The syslog output can use a UDP, TCP, or TLS connection.
- cloudwatch: Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS).
10.3. Enabling JSON log forwarding
You can configure the Log Forwarding API to parse JSON strings into a structured object.
10.3.1. Parsing JSON logs
You can use a ClusterLogForwarder object to parse JSON logs into a structured object and forward them to a supported output.
To illustrate how this works, suppose that you have the following structured JSON log entry:
Example structured JSON log entry
{"level":"info","name":"fred","home":"bedrock"}
To enable parsing JSON logs, you add parse: json to a pipeline in the ClusterLogForwarder CR, as shown in the following example.
Example snippet showing parse: json
pipelines:
- inputRefs: [ application ]
  outputRefs: myFluentd
  parse: json
When you enable parsing JSON logs by using parse: json, the CR copies the JSON-structured log entries in a structured field, as shown in the following example.
Example structured output containing the structured JSON log entry
{"structured": { "level": "info", "name": "fred", "home": "bedrock" },
"more fields..."}
If the log entry does not contain valid structured JSON, the structured field is absent.
10.3.2. Configuring JSON log data for Elasticsearch
If your JSON logs follow more than one schema, storing them in a single index might cause type conflicts and cardinality problems. To avoid that, you must configure the ClusterLogForwarder custom resource (CR) so that each schema is forwarded to its own index.
If you forward JSON logs to the default Elasticsearch instance managed by OpenShift Logging, it generates new indices based on your configuration. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas.
Structure types
You can use the following structure types in the ClusterLogForwarder CR to construct index names for the Elasticsearch log store:
- structuredTypeKey is the name of a message field. The value of that field is used to construct the index name.
  - kubernetes.labels.<key> is the Kubernetes pod label whose value is used to construct the index name.
  - openshift.labels.<key> is the pipeline.label.<key> element in the ClusterLogForwarder CR whose value is used to construct the index name.
  - kubernetes.container_name uses the container name to construct the index name.
- structuredTypeName: If the structuredTypeKey field is not set or its key is not present, the structuredTypeName value is used as the structured type. When you use both the structuredTypeKey field and the structuredTypeName field together, the structuredTypeName value provides a fallback index name if the key in the structuredTypeKey field is missing from the JSON log data.
Although you can set the value of structuredTypeKey to other log record fields, the fields in the preceding list are the most useful.
A structuredTypeKey: kubernetes.labels.<key> example
Suppose the following:
- Your cluster is running application pods that produce JSON logs in two different formats, "apache" and "google".
- The user labels these application pods with logFormat=apache and logFormat=google.
- You use the following snippet in your ClusterLogForwarder CR YAML file.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
# ...
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.logFormat
      structuredTypeName: nologformat
  pipelines:
  - inputRefs:
    - application
    outputRefs:
    - default
    parse: json
In that case, the following structured log record goes to the app-apache-write index:
{
"structured":{"name":"fred","home":"bedrock"},
"kubernetes":{"labels":{"logFormat": "apache", ...}}
}
And the following structured log record goes to the app-google-write index:
{
"structured":{"name":"wilma","home":"bedrock"},
"kubernetes":{"labels":{"logFormat": "google", ...}}
}
A structuredTypeKey: openshift.labels.<key> example
Suppose that you use the following snippet in your ClusterLogForwarder CR YAML file.
outputDefaults:
  elasticsearch:
    structuredTypeKey: openshift.labels.myLabel
    structuredTypeName: nologformat
pipelines:
- name: application-logs
  inputRefs:
  - application
  - audit
  outputRefs:
  - elasticsearch-secure
  - default
  parse: json
  labels:
    myLabel: myValue
In that case, the following structured log record goes to the app-myValue-write index:
{
"structured":{"name":"fred","home":"bedrock"},
"openshift":{"labels":{"myLabel": "myValue", ...}}
}
Additional considerations
- The Elasticsearch index for structured records is formed by prepending "app-" to the structured type and appending "-write".
- Unstructured records are not sent to the structured index. They are indexed as usual in the application, infrastructure, or audit indices.
- If there is no non-empty structured type, forward an unstructured record with no structured field.
It is important not to overload Elasticsearch with too many indices. Only use distinct structured types for distinct log formats, not for each application or namespace. For example, most Apache applications use the same JSON log format and structured type, such as LogApache.
10.3.3. Forwarding JSON logs to the Elasticsearch log store
For an Elasticsearch log store, if your JSON log entries follow different schemas, configure the ClusterLogForwarder custom resource (CR) to forward each JSON schema to a different index.
Because forwarding different schemas to the same index can cause type conflicts and cardinality problems, you must perform this configuration before you forward data to the Elasticsearch store.
To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas.
Procedure
Add the following snippet to your ClusterLogForwarder CR YAML file.
outputDefaults:
  elasticsearch:
    structuredTypeKey: <log record field>
    structuredTypeName: <name>
pipelines:
- inputRefs:
  - application
  outputRefs: default
  parse: json
- Use the structuredTypeKey field to specify one of the log record fields.
- Use the structuredTypeName field to specify a name.
  Important: To parse JSON logs, you must set both the structuredTypeKey and structuredTypeName fields.
- For inputRefs, specify which log types to forward by using that pipeline, such as application, infrastructure, or audit.
- Add the parse: json element to pipelines.
Create the CR object:
$ oc create -f <filename>.yaml
The Red Hat OpenShift Logging Operator redeploys the collector pods. However, if they do not redeploy, delete the collector pods to force them to redeploy.
$ oc delete pod --selector logging-infra=collector
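For reference, a complete CR that combines the snippet above might look like the following sketch; the structuredTypeKey value shown here is one example choice, not a requirement:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.logFormat
      structuredTypeName: nologformat
  pipelines:
  - inputRefs:
    - application
    outputRefs:
    - default
    parse: json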
10.3.4. Forwarding JSON logs from containers in the same pod to separate indices
You can forward structured logs from different containers within the same pod to different indices. To use this feature, you must configure the pipeline with multi-container support and annotate the pods. Logs are written to indices with a prefix of app-.
JSON formatting of logs varies by application. Because creating too many indices impacts performance, limit your use of this feature to creating indices for logs that have incompatible JSON formats. Use queries to separate logs from different namespaces, or applications with compatible JSON formats.
Prerequisites
- Logging for Red Hat OpenShift: 5.5
Procedure
Create or edit a YAML file that defines the ClusterLogForwarder CR object:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.logFormat
      structuredTypeName: nologformat
      enableStructuredContainerLogs: true
  pipelines:
  - inputRefs:
    - application
    name: application-logs
    outputRefs:
    - default
    parse: json
Create or edit a YAML file that defines the Pod CR object:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    containerType.logging.openshift.io/heavy: heavy
    containerType.logging.openshift.io/low: low
spec:
  containers:
  - name: heavy
    image: heavyimage
  - name: low
    image: lowimage
This configuration might significantly increase the number of shards on the cluster.
10.4. Configuring log forwarding
In a logging deployment, container and infrastructure logs are forwarded to the internal log store defined in the ClusterLogging custom resource (CR).
Audit logs are not forwarded to the internal log store by default because this does not provide secure storage. You are responsible for ensuring that the system to which you forward audit logs is compliant with your organizational and governmental regulations, and is properly secured.
If this default configuration meets your needs, you do not need to configure a ClusterLogForwarder CR. If a ClusterLogForwarder CR exists, logs are not forwarded to the internal log store unless a pipeline is defined that contains the default output.
10.4.1. About forwarding logs to third-party systems
To send logs to specific endpoints inside and outside your OpenShift Container Platform cluster, you specify a combination of outputs and pipelines in a ClusterLogForwarder custom resource (CR).
- pipeline
Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:
- application. Container logs generated by user applications running in the cluster, except infrastructure container applications.
- infrastructure. Container logs from pods that run in the openshift*, kube*, or default projects and journal logs sourced from node file system.
- audit. Audit logs generated by the node audit system, auditd, the Kubernetes API server, the OpenShift API server, and the OVN network.
You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message.
- input
Forwards the application logs associated with a specific project to a pipeline.
In the pipeline, you define which log types to forward using an inputRef parameter and where to forward the logs to using an outputRef parameter.
- Secret
A key:value map that contains confidential data such as user credentials.
Note the following:
- If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types, but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped.
- You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols.
The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the my-apps-logs project to the internal Elasticsearch instance.
Sample log forwarding outputs and pipelines
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccountName: <service_account_name>
  outputs:
  - name: elasticsearch-secure
    type: "elasticsearch"
    url: https://elasticsearch.secure.com:9200
    secret:
      name: elasticsearch
  - name: elasticsearch-insecure
    type: "elasticsearch"
    url: http://elasticsearch.insecure.com:9200
  - name: kafka-app
    type: "kafka"
    url: tls://kafka.secure.com:9093/app-topic
  inputs:
  - name: my-app-logs
    application:
      namespaces:
      - my-project
  pipelines:
  - name: audit-logs
    inputRefs:
    - audit
    outputRefs:
    - elasticsearch-secure
    - default
    labels:
      secure: "true"
      datacenter: "east"
  - name: infrastructure-logs
    inputRefs:
    - infrastructure
    outputRefs:
    - elasticsearch-insecure
    labels:
      datacenter: "west"
  - name: my-app
    inputRefs:
    - my-app-logs
    outputRefs:
    - default
  - inputRefs:
    - application
    outputRefs:
    - kafka-app
    labels:
      datacenter: "south"
- 1
- In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
- 2
- In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
- 3
- The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
- 4
- Configuration for a secure Elasticsearch output using a secret with a secure URL.
- A name to describe the output.
- The type of output: elasticsearch.
- The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
- The secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project.
- 5
- Configuration for an insecure Elasticsearch output:
- A name to describe the output.
- The type of output: elasticsearch.
- The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
- 6
- Configuration for a Kafka output using a client-authenticated TLS communication over a secure URL:
- A name to describe the output.
- The type of output: kafka.
- Specify the URL and port of the Kafka broker as a valid absolute URL, including the prefix.
- 7
- Configuration for an input to filter application logs from the my-project namespace.
- 8
- Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance:
- A name to describe the pipeline.
- The inputRefs is the log type, in this example audit.
- The outputRefs is the name of the output to use, in this example elasticsearch-secure to forward to the secure Elasticsearch instance and default to forward to the internal Elasticsearch instance.
- Optional: Labels to add to the logs.
- 9
- Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
- 10
- Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
- 11
- Configuration for a pipeline to send logs from the my-project project to the internal Elasticsearch instance:
- A name to describe the pipeline.
- The inputRefs is a specific input: my-app-logs.
- The outputRefs is default.
- Optional: String. One or more labels to add to the logs.
- 12
- Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
- The inputRefs is the log type, in this example application.
- The outputRefs is the name of the output to use.
- Optional: String. One or more labels to add to the logs.
Fluentd log handling when the external log aggregator is unavailable
If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. OpenShift Container Platform rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods.
Supported Authorization Keys
Common key types are provided here. Some output types support additional specialized keys, documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. OpenShift Logging will not attempt to verify a mismatch between authorization combinations.
- Transport Layer Security (TLS)
Using a TLS URL (ssl://... or http://...) without a secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a secret and setting the following optional fields:
- passphrase: (string) Passphrase to decode an encoded TLS private key. Requires tls.key.
- ca-bundle.crt: (string) File name of a customer CA for server authentication.
- Username and Password
- username: (string) Authentication user name. Requires password.
- password: (string) Authentication password. Requires username.
- Simple Authentication Security Layer (SASL)
- sasl.enable: (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other sasl. keys are set.
- sasl.mechanisms: (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used.
- sasl.allow-insecure: (boolean) Allow mechanisms that send clear-text passwords. Defaults to false.
10.4.1.1. Creating a Secret
You can create a secret in the directory that contains your certificate and key files by using the following command:
$ oc create secret generic -n <namespace> <secret_name> \
--from-file=ca-bundle.crt=<your_bundle_file> \
--from-literal=username=<your_username> \
--from-literal=password=<your_password>
Generic or opaque secrets are recommended for best results.
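For example, a secret that carries only a username and password might look like the following sketch; the secret name and namespace are illustrative assumptions, and the data values must be base64 encoded:
apiVersion: v1
kind: Secret
metadata:
  name: es-secret
  namespace: openshift-logging
type: Opaque
data:
  username: <base64_encoded_username>
  password: <base64_encoded_password>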
10.4.2. Creating a log forwarder
To create a log forwarder, you must create a ClusterLogForwarder CR that specifies the log input types that the service account can collect. You must also specify which outputs the logs can be forwarded to.
If you are using the multi log forwarder feature on your cluster, you can create ClusterLogForwarder custom resources (CRs) in any namespace, using any name. If you are using a legacy implementation, the ClusterLogForwarder CR must be named instance, and must be created in the openshift-logging namespace.
You need administrator permissions for the namespace where you create the ClusterLogForwarder CR.
ClusterLogForwarder resource example
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccountName: <service_account_name>
  pipelines:
  - inputRefs:
    - <log_type>
    outputRefs:
    - <output_name>
  outputs:
  - name: <output_name>
    type: <output_type>
    url: <log_output_url>
# ...
- 1
- In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
- 2
- In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
- 3
- The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
- 4
- The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application.
- 5 7
- The type of output that you want to forward logs to. The value of this field can be default, loki, kafka, elasticsearch, fluentdForward, syslog, or cloudwatch. Note: The default output type is not supported in multi log forwarder implementations.
- 6
- A name for the output that you want to forward logs to.
- 8
- The URL of the output that you want to forward logs to.
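As a concrete illustration of this template, the following sketch forwards application logs to a hypothetical external Loki instance; the CR name, namespace, service account, output name, and URL are all assumptions chosen for the example:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: my-forwarder
  namespace: my-logging-project
spec:
  serviceAccountName: collector
  pipelines:
  - inputRefs:
    - application
    outputRefs:
    - loki-example
  outputs:
  - name: loki-example
    type: loki
    url: https://loki.example.com:3100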
10.4.3. Tuning log payloads and delivery
In logging 5.9 and newer versions, the tuning spec in the ClusterLogForwarder custom resource (CR) provides a means of configuring your deployment to prioritize either throughput or durability of logs.
For example, if you need to reduce the possibility of log loss when the collector restarts, or you require collected log messages to survive a collector restart to support regulatory mandates, you can tune your deployment to prioritize log durability. If you use outputs that have hard limitations on the size of batches they can receive, you may want to tune your deployment to prioritize log throughput.
To use this feature, your logging deployment must be configured to use the Vector collector. The tuning spec in the ClusterLogForwarder CR is not supported when using the Fluentd collector.
The following example shows the ClusterLogForwarder CR options that you can modify to tune log forwarder outputs:
Example ClusterLogForwarder CR tuning options
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  tuning:
    delivery: AtLeastOnce
    compression: none
    maxWrite: <integer>
    minRetryDuration: 1s
    maxRetryDuration: 1s
# ...
- 1
- Specify the delivery mode for log forwarding.
  - AtLeastOnce delivery means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash.
  - AtMostOnce delivery means that the log forwarder makes no effort to recover logs lost during a crash. This mode gives better throughput, but may result in greater log loss.
- 2
- Specifying a compression configuration causes data to be compressed before it is sent over the network. Note that not all output types support compression, and if the specified compression type is not supported by the output, this results in an error. The possible values for this configuration are none for no compression, gzip, snappy, zlib, or zstd. lz4 compression is also available if you are using a Kafka output. See the table "Supported compression types for tuning outputs" for more information.
- 3
- Specifies a limit for the maximum payload of a single send operation to the output.
- 4
- Specifies a minimum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (ms), seconds (s), or minutes (m).
- 5
- Specifies a maximum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (ms), seconds (s), or minutes (m).
Supported compression types for tuning outputs
| Compression algorithm | Splunk | Amazon Cloudwatch | Elasticsearch 8 | LokiStack | Apache Kafka | HTTP | Syslog | Google Cloud | Microsoft Azure Monitoring |
|---|---|---|---|---|---|---|---|---|---|
| gzip | X | X | X | X | | X | | | |
| snappy | | X | | X | X | X | | | |
| zlib | | X | X | | | X | | | |
| zstd | | X | | | X | X | | | |
| lz4 | | | | | X | | | | |
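For example, a deployment that favors throughput over durability might use a tuning sketch like the following, mirroring the example above; the specific delivery mode, compression type, and retry values here are illustrative assumptions, not recommendations:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  tuning:
    delivery: AtMostOnce
    compression: gzip
    minRetryDuration: 5s
    maxRetryDuration: 30s
# ...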
10.4.4. Enabling multi-line exception detection
Enables multi-line error detection of container logs.
Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.
Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.
Example java exception
java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
at testjava.Main.handle(Main.java:47)
at testjava.Main.printMe(Main.java:19)
at testjava.Main.main(Main.java:10)
- To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field, with a value of true.
Example ClusterLogForwarder CR
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: my-app-logs
    inputRefs:
    - application
    outputRefs:
    - default
    detectMultilineErrors: true
10.4.4.1. Details
When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.
| Language | Fluentd | Vector |
|---|---|---|
| Java | ✓ | ✓ |
| JS | ✓ | ✓ |
| Ruby | ✓ | ✓ |
| Python | ✓ | ✓ |
| Golang | ✓ | ✓ |
| PHP | ✓ | ✓ |
| Dart | ✓ | ✓ |
10.4.4.2. Troubleshooting
When enabled, the collector configuration will include a new section with type: detect_exceptions.
Example vector configuration section
[transforms.detect_exceptions_app-logs]
type = "detect_exceptions"
inputs = ["application"]
languages = ["All"]
group_by = ["kubernetes.namespace_name","kubernetes.pod_name","kubernetes.container_name"]
expire_after_ms = 2000
multiline_flush_interval_ms = 1000
Example fluentd config section
<label @MULTILINE_APP_LOGS>
<match kubernetes.**>
@type detect_exceptions
remove_tag_prefix 'kubernetes'
message message
force_line_breaks true
multiline_flush_interval .2
</match>
</label>
10.4.5. Forwarding logs to Google Cloud
You can forward logs to Google Cloud Logging in addition to, or instead of, the internal default OpenShift Container Platform log store.
Using this feature with Fluentd is not supported.
Prerequisites
- Red Hat OpenShift Logging Operator 5.5.1 and later
Procedure
Create a secret using your Google service account key.
$ oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json=<your_service_account_key_file.json>
Create a ClusterLogForwarder Custom Resource YAML using the template below:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccountName: <service_account_name>
  outputs:
  - name: gcp-1
    type: googleCloudLogging
    secret:
      name: gcp-secret
    googleCloudLogging:
      projectId: "openshift-gce-devel"
      logId: "app-gcp"
  pipelines:
  - name: test-app
    inputRefs:
    - application
    outputRefs:
    - gcp-1
- 1
- In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
- 2
- In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
- 3
- The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
- 4
- Set a projectId, folderId, organizationId, or billingAccountId field and its corresponding value, depending on where you want to store your logs in the Google Cloud resource hierarchy.
- 5
- Set the value to add to the logName field of the Log Entry.
- 6
- Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
10.4.6. Forwarding logs to Splunk
You can forward logs to the Splunk HTTP Event Collector (HEC) in addition to, or instead of, the internal default OpenShift Container Platform log store.
Using this feature with Fluentd is not supported.
Prerequisites
- Red Hat OpenShift Logging Operator 5.6 or later
- A ClusterLogging instance with vector specified as the collector
- Base64 encoded Splunk HEC token
Procedure
Create a secret using your Base64 encoded Splunk HEC token.
$ oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>
Create or edit the ClusterLogForwarder Custom Resource (CR) using the template below:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccountName: <service_account_name>
  outputs:
  - name: splunk-receiver
    secret:
      name: vector-splunk-secret
    type: splunk
    url: <http://your.splunk.hec.url:8088>
  pipelines:
  - inputRefs:
    - application
    - infrastructure
    name:
    outputRefs:
    - splunk-receiver
- 1
- In legacy implementations, the CR name must be
instance. In multi log forwarder implementations, you can use any name. - 2
- In legacy implementations, the CR namespace must be
openshift-logging. In multi log forwarder implementations, you can use any namespace. - 3
- The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the
openshift-loggingnamespace. - 4
- Specify a name for the output.
- 5
- Specify the name of the secret that contains your HEC token.
- 6
- Specify the output type as
splunk. - 7
- Specify the URL (including port) of your Splunk HEC.
- 8
- Specify which log types to forward by using the pipeline:
application,infrastructure, oraudit. - 9
- Optional: Specify a name for the pipeline.
- 10
- Specify the name of the output to use when forwarding logs with this pipeline.
10.4.7. Forwarding logs over HTTP
Forwarding logs over HTTP is supported for both the Fluentd and Vector log collectors. To enable, specify http as the output type in the ClusterLogForwarder custom resource (CR).
Procedure
Create or edit the ClusterLogForwarder CR using the template below:
Example ClusterLogForwarder CR
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccountName: <service_account_name>
  outputs:
  - name: httpout-app
    type: http
    url:
    http:
      headers:
        h1: v1
        h2: v2
      method: POST
    secret:
      name:
    tls:
      insecureSkipVerify:
  pipelines:
  - name:
    inputRefs:
    - application
    outputRefs:
    - httpout-app
- 1
- In legacy implementations, the CR name must be
instance. In multi log forwarder implementations, you can use any name. - 2
- In legacy implementations, the CR namespace must be
openshift-logging. In multi log forwarder implementations, you can use any namespace. - 3
- The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the
openshift-loggingnamespace. - 4
- Destination address for logs.
- 5
- Additional headers to send with the log record.
- 6
- Secret name for destination credentials.
- 7
- Values are either
trueorfalse. - 8
- This value should be the same as the output name.
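A filled-in sketch of this template might look like the following; the destination URL, header values, pipeline name, and secret name are hypothetical placeholders introduced only for illustration:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: httpout-app
    type: http
    url: https://httpreceiver.example.com:8443/logs
    http:
      headers:
        x-cluster: my-cluster
      method: POST
    secret:
      name: http-secret
    tls:
      insecureSkipVerify: false
  pipelines:
  - name: http-app-logs
    inputRefs:
    - application
    outputRefs:
    - httpout-app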
10.4.8. Forwarding to Azure Monitor Logs
With logging 5.9 and later, you can forward logs to Azure Monitor Logs in addition to, or instead of, the default log store. This functionality is provided by the Vector Azure Monitor Logs sink.
Prerequisites
- You are familiar with how to administer and create a ClusterLogging custom resource (CR) instance.
- You are familiar with how to administer and create a ClusterLogForwarder CR instance.
- You understand the ClusterLogForwarder CR specifications.
ClusterLogForwarder - You have basic familiarity with Azure services.
- You have an Azure account configured for Azure Portal or Azure CLI access.
- You have obtained your Azure Monitor Logs primary or the secondary security key.
- You have determined which log types to forward.
To enable log forwarding to Azure Monitor Logs via the HTTP Data Collector API:
Create a secret with your shared key:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: openshift-logging
type: Opaque
data:
  shared_key: <your_shared_key>
- 1
- Must contain a primary or secondary key for the Log Analytics workspace making the request.
To obtain a shared key, you can use this command in Azure PowerShell:
Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName "<resource_name>" -Name "<workspace_name>"
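Alternatively, you can let oc handle the base64 encoding by creating the secret from a literal value, as in the following sketch; the secret name matches the example above and the key value is a placeholder:
$ oc -n openshift-logging create secret generic my-secret --from-literal=shared_key=<your_shared_key>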
Create or edit your ClusterLogForwarder custom resource (CR):
Forward all logs
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: azure-monitor
    type: azureMonitor
    azureMonitor:
      customerId: my-customer-id
      logType: my_log_type
    secret:
      name: my-secret
  pipelines:
  - name: app-pipeline
    inputRefs:
    - application
    outputRefs:
    - azure-monitor
- 1
- Unique identifier for the Log Analytics workspace. Required field.
- 2
- Azure record type of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters.
Forward application and infrastructure logs
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
name: instance
namespace: openshift-logging
spec:
outputs:
- name: azure-monitor-app
type: azureMonitor
azureMonitor:
customerId: my-customer-id
logType: application_log
secret:
name: my-secret
- name: azure-monitor-infra
type: azureMonitor
azureMonitor:
customerId: my-customer-id
logType: infra_log
secret:
name: my-secret
pipelines:
- name: app-pipeline
inputRefs:
- application
outputRefs:
- azure-monitor-app
- name: infra-pipeline
inputRefs:
- infrastructure
outputRefs:
- azure-monitor-infra
- 1
- Azure record type of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters.
Advanced configuration options
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
name: instance
namespace: openshift-logging
spec:
outputs:
- name: azure-monitor
type: azureMonitor
azureMonitor:
customerId: my-customer-id
logType: my_log_type
azureResourceId: "/subscriptions/111111111"
host: "ods.opinsights.azure.com"
secret:
name: my-secret
pipelines:
- name: app-pipeline
inputRefs:
- application
outputRefs:
- azure-monitor
10.4.9. Forwarding application logs from specific projects
You can forward a copy of the application logs from specific projects to an external log aggregator, in addition to, or instead of, using the internal log store. You must also configure the external log aggregator to receive log data from OpenShift Container Platform.
To configure forwarding application logs from a project, you must create a ClusterLogForwarder custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the ClusterLogForwarder CR:
Example ClusterLogForwarder CR
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: fluentd-server-secure
    type: fluentdForward
    url: 'tls://fluentdserver.security.example.com:24224'
    secret:
      name: fluentd-secret
  - name: fluentd-server-insecure
    type: fluentdForward
    url: 'tcp://fluentdserver.home.example.com:24224'
  inputs:
  - name: my-app-logs
    application:
      namespaces:
      - my-project
  pipelines:
  - name: forward-to-fluentd-insecure
    inputRefs:
    - my-app-logs
    outputRefs:
    - fluentd-server-insecure
    labels:
      project: "my-project"
  - name: forward-to-fluentd-secure
    inputRefs:
    - application
    - audit
    - infrastructure
    outputRefs:
    - fluentd-server-secure
    - default
    labels:
      clusterId: "C1234"
- 1
- The name of the ClusterLogForwarder CR must be instance.
- 2
- The namespace for the ClusterLogForwarder CR must be openshift-logging.
- 3
- The name of the output.
- 4
- The output type: elasticsearch, fluentdForward, syslog, or kafka.
- 5
- The URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
- 6
- If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and have tls.crt, tls.key, and ca-bundle.crt keys that each point to the certificates they represent.
- 7
- The configuration for an input to filter application logs from the specified projects.
- 8
- If no namespace is specified, logs are collected from all namespaces.
- 9
- The pipeline configuration directs logs from a named input to a named output. In this example, a pipeline named forward-to-fluentd-insecure forwards logs from an input named my-app-logs to an output named fluentd-server-insecure.
- 10
- A list of inputs.
- 11
- The name of the output to use.
- 12
- Optional: String. One or more labels to add to the logs.
- 13
- Configuration for a pipeline to send logs to other log aggregators.
- Optional: Specify a name for the pipeline.
- Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
- Specify the name of the output to use when forwarding logs with this pipeline.
- Optional: Specify the default output to forward logs to the default log store.
- Optional: String. One or more labels to add to the logs.
- 14
- Note that application logs from all namespaces are collected when using this configuration.
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
10.4.10. Forwarding application logs from specific pods
As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector.
Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector.
To specify the pod labels, you use one or more matchLabels key-value pairs.
Procedure
Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the pod labels using simple equality-based selectors under inputs[].name.application.selector.matchLabels, as shown in the following example.
Example ClusterLogForwarder CR YAML file
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  pipelines:
  - inputRefs: [ myAppLogData ]
    outputRefs: [ default ]
  inputs:
  - name: myAppLogData
    application:
      selector:
        matchLabels:
          environment: production
          app: nginx
      namespaces:
      - app1
      - app2
  outputs:
  - <output_name>
  ...
- 1
- In legacy implementations, the CR name must be
instance. In multi log forwarder implementations, you can use any name. - 2
- In legacy implementations, the CR namespace must be
openshift-logging. In multi log forwarder implementations, you can use any namespace. - 3
- Specify one or more comma-separated values from
inputs[].name. - 4
- Specify one or more comma-separated values from
outputs[]. - 5
- Define a unique inputs[].name for each application that has a unique set of pod labels.
- Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
- 7
- Optional: Specify one or more namespaces.
- 8
- Specify one or more outputs to forward your log data to.
- Optional: To restrict the gathering of log data to specific namespaces, use inputs[].name.application.namespaces, as shown in the preceding example.
- Optional: You can send log data from additional applications that have different pod labels to the same pipeline.
  - For each unique combination of pod labels, create an additional inputs[].name section similar to the one shown.
  - Update the selectors to match the pod labels of this application.
  - Add the new inputs[].name value to inputRefs. For example:
    - inputRefs: [ myAppLogData, myOtherAppLogData ]
- Create the CR object:
$ oc create -f <file-name>.yaml
10.4.11. Overview of API audit filter
OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, checking stops at the first match. How much data is included in an event is determined by the value of the level field:
- None: The event is dropped.
- Metadata: Audit metadata is included, request and response bodies are removed.
- Request: Audit metadata and the request body are included, the response body is removed.
- RequestResponse: All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster.
You can use this feature only if the Vector collector is set up in your logging deployment.
In logging 5.8 and later, the ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions:
- Wildcards
- Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, namespace openshift-* matches openshift-apiserver or openshift-authentication. Resource */status matches Pod/status or Deployment/status.
- Default Rules
Events that do not match any rule in the policy are filtered as follows:
- Read-only system events such as get, list, and watch are dropped.
- Service account write events that occur within the same namespace as the service account are dropped.
- All other events are forwarded, subject to any configured rate limits.
To disable these defaults, either end your rules list with a rule that has only a level field, or add an empty rule.
- Omit Response Codes
- A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, a list of HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted, as in the sketch that follows.
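For example, a filter that creates audit events even for these status codes might set the field to an empty list, as in the following sketch; it assumes the camel-cased field name omitResponseCodes in the kubeAPIAudit filter spec, and the filter name is an example:
filters:
- name: my-policy
  type: kubeAPIAudit
  kubeAPIAudit:
    omitResponseCodes: []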
The following ClusterLogForwarder custom resource (CR) example specifies an audit policy by using the kubeAPIAudit filter type.
The example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration.
Example audit policy
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: instance
namespace: openshift-logging
spec:
pipelines:
- name: my-pipeline
inputRefs: audit
filterRefs: my-policy
outputRefs: default
filters:
- name: my-policy
type: kubeAPIAudit
kubeAPIAudit:
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
- "RequestReceived"
rules:
# Log pod changes at RequestResponse level
- level: RequestResponse
resources:
- group: ""
resources: ["pods"]
# Log "pods/log", "pods/status" at Metadata level
- level: Metadata
resources:
- group: ""
resources: ["pods/log", "pods/status"]
# Don't log requests to a configmap called "controller-leader"
- level: None
resources:
- group: ""
resources: ["configmaps"]
resourceNames: ["controller-leader"]
# Don't log watch requests by the "system:kube-proxy" on endpoints or services
- level: None
users: ["system:kube-proxy"]
verbs: ["watch"]
resources:
- group: "" # core API group
resources: ["endpoints", "services"]
# Don't log authenticated requests to certain non-resource URL paths.
- level: None
userGroups: ["system:authenticated"]
nonResourceURLs:
- "/api*" # Wildcard matching.
- "/version"
# Log the request body of configmap changes in kube-system.
- level: Request
resources:
- group: "" # core API group
resources: ["configmaps"]
# This rule only applies to resources in the "kube-system" namespace.
# The empty string "" can be used to select non-namespaced resources.
namespaces: ["kube-system"]
# Log configmap and secret changes in all other namespaces at the Metadata level.
- level: Metadata
resources:
- group: "" # core API group
resources: ["secrets", "configmaps"]
# Log all other resources in core and extensions at the Request level.
- level: Request
resources:
- group: "" # core API group
- group: "extensions" # Version of group should NOT be included.
# A catch-all rule to log all other requests at the Metadata level.
- level: Metadata
10.4.12. Forwarding logs to an external Loki logging system
You can forward logs to an external Loki logging system in addition to, or instead of, the default log store.
To configure log forwarding to Loki, you must create a ClusterLogForwarder custom resource (CR) with an output to Loki, and a pipeline that uses the output.
Prerequisites
- You must have a Loki logging system running at the URL you specify with the url field in the CR.
Procedure
Create or edit a YAML file that defines the ClusterLogForwarder CR object:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccountName: <service_account_name>
  outputs:
  - name: loki-insecure
    type: "loki"
    url: http://loki.insecure.com:3100
    loki:
      tenantKey: kubernetes.namespace_name
      labelKeys:
      - kubernetes.labels.foo
  - name: loki-secure
    type: "loki"
    url: https://loki.secure.com:3100
    secret:
      name: loki-secret
    loki:
      tenantKey: kubernetes.namespace_name
      labelKeys:
      - kubernetes.labels.foo
  pipelines:
  - name: application-logs
    inputRefs:
    - application
    - audit
    outputRefs:
    - loki-secure
- 1
- In legacy implementations, the CR name must be
instance. In multi log forwarder implementations, you can use any name. - 2
- In legacy implementations, the CR namespace must be
openshift-logging. In multi log forwarder implementations, you can use any namespace. - 3
- The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the
openshift-loggingnamespace. - 4
- Specify a name for the output.
- 5
- Specify the type as
"loki". - 6
- Specify the URL and port of the Loki system as a valid absolute URL. You can use the
http(insecure) orhttps(secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address. Loki’s default port for HTTP(S) communication is 3100. - 7
- For a secure connection, you can specify an
httpsorhttpURL that you authenticate by specifying asecret. - 8
- For an
httpsprefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain aca-bundle.crtkey that points to the certificates it represents. Otherwise, forhttpandhttpsprefixes, you can specify a secret that contains a username and password. In legacy implementations, the secret must exist in theopenshift-loggingproject. For more information, see the following "Example: Setting a secret that contains a username and password." - 9
- Optional: Specify a metadata key field to generate values for the
TenantIDfield in Loki. For example, settingtenantKey: kubernetes.namespace_nameuses the names of the Kubernetes namespaces as values for tenant IDs in Loki. To see which other log record fields you can specify, see the "Log Record Fields" link in the following "Additional resources" section. - 10
- Optional: Specify a list of metadata field keys to replace the default Loki labels. Loki label names must match the regular expression
[a-zA-Z_:][a-zA-Z0-9_:]*. Illegal characters in metadata keys are replaced with_to form the label name. For example, thekubernetes.labels.foometadata key becomes Loki labelkubernetes_labels_foo. If you do not setlabelKeys, the default value is:[log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host]. Keep the set of labels small because Loki limits the size and number of labels allowed. See Configuring Loki, limits_config. You can still query based on any log record field using query filters. - 11
- Optional: Specify a name for the pipeline.
- 12
- Specify which log types to forward by using the pipeline:
application,infrastructure, oraudit. - 13
- Specify the name of the output to use when forwarding logs with this pipeline.
NoteBecause Loki requires log streams to be correctly ordered by timestamp,
always includes thelabelKeyslabel set, even if you do not specify it. This inclusion ensures that each stream originates from a single host, which prevents timestamps from becoming disordered due to clock differences on different hosts.kubernetes_hostApply the
CR object by running the following command:ClusterLogForwarder$ oc apply -f <filename>.yaml
10.4.13. Forwarding logs to an external Elasticsearch instance
You can forward logs to an external Elasticsearch instance in addition to, or instead of, the internal log store. You are responsible for configuring the external log aggregator to receive log data from OpenShift Container Platform.
To configure log forwarding to an external Elasticsearch instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance and a pipeline that uses the output.
To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the default output to forward logs to the internal instance.
If you only want to forward logs to an internal Elasticsearch instance, you do not need to create a ClusterLogForwarder CR.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the ClusterLogForwarder CR:

Example ClusterLogForwarder CR

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name> # 1
  namespace: <log_forwarder_namespace> # 2
spec:
  serviceAccountName: <service_account_name> # 3
  outputs:
  - name: elasticsearch-example # 4
    type: elasticsearch # 5
    elasticsearch:
      version: 8 # 6
    url: http://elasticsearch.example.com:9200 # 7
    secret:
      name: es-secret # 8
  pipelines:
  - name: application-logs # 9
    inputRefs: # 10
    - application
    - audit
    outputRefs:
    - elasticsearch-example # 11
    - default # 12
    labels:
      myLabel: "myValue" # 13
# ...

1. In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
2. In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
3. The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
4. Specify a name for the output.
5. Specify the elasticsearch type.
6. Specify the Elasticsearch version. This can be 6, 7, or 8.
7. Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
8. For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. In legacy implementations, the secret must exist in the openshift-logging project. For more information, see the following "Example: Setting a secret that contains a username and password."
9. Optional: Specify a name for the pipeline.
10. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
11. Specify the name of the output to use when forwarding logs with this pipeline.
12. Optional: Specify the default output to send the logs to the internal Elasticsearch instance.
13. Optional: String. One or more labels to add to the logs.
Apply the ClusterLogForwarder CR:

$ oc apply -f <filename>.yaml
Example: Setting a secret that contains a username and password
You can use a secret that contains a username and password to authenticate a secure connection to an external Elasticsearch instance.
For example, if you cannot use mutual TLS (mTLS) keys because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password.
Create a Secret YAML file similar to the following example. Use base64-encoded values for the username and password fields. The secret type is opaque by default.

apiVersion: v1
kind: Secret
metadata:
  name: openshift-test-secret
data:
  username: <username>
  password: <password>
# ...
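To produce the base64-encoded values for the username and password fields, you can encode each plain-text value with the base64 utility; the -n flag prevents a trailing newline from being included in the encoded value:

$ echo -n "<username>" | base64
$ echo -n "<password>" | base64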
Create the secret:

$ oc create -n openshift-logging -f openshift-test-secret.yaml

Specify the name of the secret in the ClusterLogForwarder CR:

kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: elasticsearch
    type: "elasticsearch"
    url: https://elasticsearch.secure.com:9200
    secret:
      name: openshift-test-secret
# ...

Note: In the value of the url field, the prefix can be http or https.

Apply the CR object:
$ oc apply -f <filename>.yaml
10.4.14. Forwarding logs using the Fluentd forward protocol
You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from OpenShift Container Platform.
To configure log forwarding using the forward protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the Fluentd servers and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the ClusterLogForwarder CR object:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance # 1
  namespace: openshift-logging # 2
spec:
  outputs:
  - name: fluentd-server-secure # 3
    type: fluentdForward # 4
    url: 'tls://fluentdserver.security.example.com:24224' # 5
    secret: # 6
      name: fluentd-secret
  - name: fluentd-server-insecure
    type: fluentdForward
    url: 'tcp://fluentdserver.home.example.com:24224'
  pipelines:
  - name: forward-to-fluentd-secure # 7
    inputRefs: # 8
    - application
    - audit
    outputRefs:
    - fluentd-server-secure # 9
    - default # 10
    labels:
      clusterId: "C1234" # 11
  - name: forward-to-fluentd-insecure # 12
    inputRefs:
    - infrastructure
    outputRefs:
    - fluentd-server-insecure
    labels:
      clusterId: "C1234"

1. The name of the ClusterLogForwarder CR must be instance.
2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
3. Specify a name for the output.
4. Specify the fluentdForward type.
5. Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
6. If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must contain a ca-bundle.crt key that points to the certificate it represents.
7. Optional: Specify a name for the pipeline.
8. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
9. Specify the name of the output to use when forwarding logs with this pipeline.
10. Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
11. Optional: String. One or more labels to add to the logs.
12. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
    - A name to describe the pipeline.
    - The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.
    - The outputRefs is the name of the output to use.
    - Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <file-name>.yaml
10.4.14.1. Enabling nanosecond precision for Logstash to ingest data from fluentd
For Logstash to ingest log data from fluentd, you must enable nanosecond precision in the Logstash configuration file.
Procedure
- In the Logstash configuration file, set nanosecond_precision to true.
Example Logstash configuration file
input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } }
filter { }
output { stdout { codec => rubydebug } }
10.4.15. Forwarding logs using the syslog protocol
You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OpenShift Container Platform.
To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.
Prerequisites
- You must have a logging server that is configured to receive the logging data using the specified protocol or format.
Procedure
Create or edit a YAML file that defines the ClusterLogForwarder CR object:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name> # 1
  namespace: <log_forwarder_namespace> # 2
spec:
  serviceAccountName: <service_account_name> # 3
  outputs:
  - name: rsyslog-east # 4
    type: syslog # 5
    syslog: # 6
      facility: local0
      rfc: RFC3164
      payloadKey: message
      severity: informational
    url: 'tls://rsyslogserver.east.example.com:514' # 7
    secret: # 8
      name: syslog-secret
  - name: rsyslog-west
    type: syslog
    syslog:
      appName: myapp
      facility: user
      msgID: mymsg
      procID: myproc
      rfc: RFC5424
      severity: debug
    url: 'tcp://rsyslogserver.west.example.com:514'
  pipelines:
  - name: syslog-east # 9
    inputRefs: # 10
    - audit
    - application
    outputRefs: # 11
    - rsyslog-east
    - default # 12
    labels:
      secure: "true" # 13
      syslog: "east"
  - name: syslog-west # 14
    inputRefs:
    - infrastructure
    outputRefs:
    - rsyslog-west
    - default
    labels:
      syslog: "west"

1. In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
2. In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
3. The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
4. Specify a name for the output.
5. Specify the syslog type.
6. Optional: Specify the syslog parameters, listed below.
7. Specify the URL and port of the external syslog instance. You can use the udp (insecure), tcp (insecure), or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
8. If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. In legacy implementations, the secret must exist in the openshift-logging project.
9. Optional: Specify a name for the pipeline.
10. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
11. Specify the name of the output to use when forwarding logs with this pipeline.
12. Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
13. Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
14. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
    - A name to describe the pipeline.
    - The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.
    - The outputRefs is the name of the output to use.
    - Optional: String. One or more labels to add to the logs.
Create the CR object:
$ oc create -f <filename>.yaml
10.4.15.1. Adding log source information to message output
You can add namespace_name, pod_name, and container_name elements to the message field of the record by adding the AddLogSource field to your ClusterLogForwarder CR.
spec:
outputs:
- name: syslogout
syslog:
addLogSource: true
facility: user
payloadKey: message
rfc: RFC3164
severity: debug
tag: mytag
type: syslog
url: tls://syslog-receiver.openshift-logging.svc:24224
pipelines:
- inputRefs:
- application
name: test-app
outputRefs:
- syslogout
This configuration is compatible with both RFC3164 and RFC5424.
Example syslog message output without AddLogSource
<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56}
Example syslog message output with AddLogSource
<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76}
10.4.15.2. Syslog parameters
You can configure the following parameters for the syslog outputs:
facility: The syslog facility. The value can be a decimal integer or a case-insensitive keyword:
- kern or 0 for kernel messages
- user or 1 for user-level messages, the default
- mail or 2 for the mail system
- daemon or 3 for system daemons
- auth or 4 for security/authentication messages
- syslog or 5 for messages generated internally by syslogd
- lpr or 6 for the line printer subsystem
- news or 7 for the network news subsystem
- uucp or 8 for the UUCP subsystem
- cron or 9 for the clock daemon
- authpriv or 10 for security authentication messages
- ftp or 11 for the FTP daemon
- ntp or 12 for the NTP subsystem
- security or 13 for the syslog audit log
- console or 14 for the syslog alert log
- solaris-cron or 15 for the scheduling daemon
- local0 through local7 or 16 through 23 for locally used facilities
- Optional: payloadKey: The record field to use as the payload for the syslog message. Note: Configuring the payloadKey parameter prevents other parameters from being forwarded to the syslog (see the sketch after this list).
- rfc: The RFC to be used for sending logs using syslog. The default is RFC5424.
- severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:
  - Emergency or 0 for messages indicating the system is unusable
  - Alert or 1 for messages indicating action must be taken immediately
  - Critical or 2 for messages indicating critical conditions
  - Error or 3 for messages indicating error conditions
  - Warning or 4 for messages indicating warning conditions
  - Notice or 5 for messages indicating normal but significant conditions
  - Informational or 6 for informational messages
  - Debug or 7 for debug-level messages, the default
- tag: Tag specifies a record field to use as a tag on the syslog message.
- trimPrefix: Remove the specified prefix from the tag.
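As a worked example of these parameters, the following minimal sketch of an output's syslog stanza uses keyword values from the lists above; the decimal equivalents (facility: 16, severity: 6) would also be accepted:

syslog:
  facility: local0        # keyword form of facility 16
  severity: informational # keyword form of severity 6
  rfc: RFC3164
  payloadKey: message     # forwards only this record field; other parameters are not forwarded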
10.4.15.3. Additional RFC5424 syslog parameters
The following parameters apply to RFC5424:
- appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for RFC5424.
- msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for RFC5424.
- procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for RFC5424.
10.4.16. Forwarding logs to a Kafka broker
You can forward logs to an external Kafka broker in addition to, or instead of, the default log store.
To configure log forwarding to an external Kafka instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default. The Kafka output can use a TCP (insecure) or TLS (secure TCP) connection.
Procedure
Create or edit a YAML file that defines the ClusterLogForwarder CR object:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name> # 1
  namespace: <log_forwarder_namespace> # 2
spec:
  serviceAccountName: <service_account_name> # 3
  outputs:
  - name: app-logs # 4
    type: kafka # 5
    url: tls://kafka.example.devlab.com:9093/app-topic # 6
    secret:
      name: kafka-secret # 7
  - name: infra-logs
    type: kafka
    url: tcp://kafka.devlab2.example.com:9093/infra-topic # 8
  - name: audit-logs
    type: kafka
    url: tls://kafka.qelab.example.com:9093/audit-topic
    secret:
      name: kafka-secret-qe
  pipelines:
  - name: app-topic # 9
    inputRefs: # 10
    - application
    outputRefs: # 11
    - app-logs
    labels:
      logType: "application" # 12
  - name: infra-topic # 13
    inputRefs:
    - infrastructure
    outputRefs:
    - infra-logs
    labels:
      logType: "infra"
  - name: audit-topic
    inputRefs:
    - audit
    outputRefs:
    - audit-logs
    labels:
      logType: "audit"

1. In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
2. In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
3. The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
4. Specify a name for the output.
5. Specify the kafka type.
6. Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
7. If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. In legacy implementations, the secret must exist in the openshift-logging project.
8. Optional: To send an insecure output, use a tcp prefix in front of the URL. Also omit the secret key and its name from this output.
9. Optional: Specify a name for the pipeline.
10. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
11. Specify the name of the output to use when forwarding logs with this pipeline.
12. Optional: String. One or more labels to add to the logs.
13. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
    - A name to describe the pipeline.
    - The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.
    - The outputRefs is the name of the output to use.
    - Optional: String. One or more labels to add to the logs.
Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in the following example:
# ...
spec:
  outputs:
  - name: app-logs
    type: kafka
    secret:
      name: kafka-secret-dev
    kafka: # 1
      brokers: # 2
      - tls://kafka-broker1.example.com:9093/
      - tls://kafka-broker2.example.com:9093/
      topic: app-topic # 3
# ...

1. Use the kafka key instead of the url key.
2. Specify a list of Kafka brokers.
3. Specify the topic that receives the logs.

Apply the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
10.4.17. Forwarding logs to Amazon CloudWatch
You can forward logs to Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). You can forward logs to CloudWatch in addition to, or instead of, the default log store.
To configure log forwarding to CloudWatch, you must create a ClusterLogForwarder custom resource (CR) with an output for CloudWatch and a pipeline that uses the output.
Procedure
Create a Secret YAML file that uses the aws_access_key_id and aws_secret_access_key fields to specify your base64-encoded AWS credentials. For example:

apiVersion: v1
kind: Secret
metadata:
  name: cw-secret
  namespace: openshift-logging
data:
  aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK
  aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=

Create the secret. For example:
$ oc apply -f cw-secret.yaml

Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the name of the secret. For example:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name> # 1
  namespace: <log_forwarder_namespace> # 2
spec:
  serviceAccountName: <service_account_name> # 3
  outputs:
  - name: cw # 4
    type: cloudwatch # 5
    cloudwatch:
      groupBy: logType # 6
      groupPrefix: <group prefix> # 7
      region: us-east-2 # 8
    secret:
      name: cw-secret # 9
  pipelines:
  - name: infra-logs # 10
    inputRefs: # 11
    - infrastructure
    - audit
    - application
    outputRefs:
    - cw # 12
1. In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
2. In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
3. The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
4. Specify a name for the output.
5. Specify the cloudwatch type.
6. Optional: Specify how to group the logs:
   - logType creates log groups for each log type.
   - namespaceName creates a log group for each application namespace. It also creates separate log groups for infrastructure and audit logs.
   - namespaceUUID creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs.
7. Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups.
8. Specify the AWS region.
9. Specify the name of the secret that contains your AWS credentials.
10. Optional: Specify a name for the pipeline.
11. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
12. Specify the name of the output to use when forwarding logs with this pipeline.
Create the CR object:
$ oc create -f <file-name>.yaml
Example: Using ClusterLogForwarder with Amazon CloudWatch
Here, you see an example ClusterLogForwarder custom resource (CR) and the log data that it outputs to Amazon CloudWatch.
Suppose that you are running an OpenShift Container Platform cluster named mycluster. The following command returns the cluster's infrastructureName, which you will use to compose aws commands later on:
$ oc get Infrastructure/cluster -ojson | jq .status.infrastructureName
"mycluster-7977k"
To generate log data for this example, you run a busybox pod in a namespace called app. The busybox pod writes a message to stdout every three seconds:
$ oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done'
$ oc logs -f busybox
Example output
My life is my message
My life is my message
My life is my message
...
You can look up the UUID of the app namespace where the busybox pod runs:
$ oc get ns/app -ojson | jq .metadata.uid
"794e1e1a-b9f5-4958-a190-e76a9b53d7bf"
In your ClusterLogForwarder custom resource (CR), you configure the infrastructure, audit, and application log types as inputs to the all-logs pipeline. You also connect this pipeline to the cw output, which forwards the logs to a CloudWatch instance in the us-east-2 region:
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance
namespace: openshift-logging
spec:
outputs:
- name: cw
type: cloudwatch
cloudwatch:
groupBy: logType
region: us-east-2
secret:
name: cw-secret
pipelines:
- name: all-logs
inputRefs:
- infrastructure
- audit
- application
outputRefs:
- cw
Each region in CloudWatch contains three levels of objects:
- log group
- log stream
- log event
With groupBy: logType in the ClusterLogForwarder CR, the three log types in the inputRefs produce three log groups in Amazon CloudWatch:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.application"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
Each of the log groups contains log streams:
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName
"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log"
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log"
...
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log"
...
Each log stream contains log events. To see a log event from the application log group of the busybox pod, run the following command:
$ aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log
{
"events": [
{
"timestamp": 1629422704178,
"message": "{\"docker\":{\"container_id\":\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\"},\"kubernetes\":{\"container_name\":\"busybox\",\"namespace_name\":\"app\",\"pod_name\":\"busybox\",\"container_image\":\"docker.io/library/busybox:latest\",\"container_image_id\":\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\",\"pod_id\":\"870be234-90a3-4258-b73f-4f4d6e2777c7\",\"host\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"labels\":{\"run\":\"busybox\"},\"master_url\":\"https://kubernetes.default.svc\",\"namespace_id\":\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\",\"namespace_labels\":{\"kubernetes_io/metadata_name\":\"app\"}},\"message\":\"My life is my message\",\"level\":\"unknown\",\"hostname\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"pipeline_metadata\":{\"collector\":{\"ipaddr4\":\"10.0.216.3\",\"inputname\":\"fluent-plugin-systemd\",\"name\":\"fluentd\",\"received_at\":\"2021-08-20T01:25:08.085760+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-20T01:25:04.178986+00:00\",\"viaq_index_name\":\"app-write\",\"viaq_msg_id\":\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\",\"log_type\":\"application\",\"time\":\"2021-08-20T01:25:04+00:00\"}",
"ingestionTime": 1629422744016
},
...
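If you want just the original container messages rather than the full JSON envelope, you can pipe the same query through jq; this is a sketch that assumes the log group and log stream names shown above:

$ aws logs get-log-events \
    --log-group-name mycluster-7977k.application \
    --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log \
  | jq -r '.events[].message | fromjson | .message'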
Example: Customizing the prefix in log group names
In the log group names, you can replace the default infrastructureName prefix, mycluster-7977k, with an arbitrary string like demo-group-prefix. To make this change, you update the groupPrefix field in the ClusterLogForwarder CR:
cloudwatch:
groupBy: logType
groupPrefix: demo-group-prefix
region: us-east-2
The value of groupPrefix replaces the default infrastructureName prefix:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"demo-group-prefix.application"
"demo-group-prefix.audit"
"demo-group-prefix.infrastructure"
Example: Naming log groups after application namespace names
For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the name of the application namespace.
If you delete an application namespace object and create a new one that has the same name, CloudWatch continues using the same log group as before.
If you consider successive application namespace objects that have the same name as equivalent to each other, use the approach described in this example. Otherwise, if you need to distinguish the resulting log groups from each other, see the following "Naming log groups for application namespace UUIDs" section instead.
To create application log groups whose names are based on the names of the application namespaces, you set the value of the groupBy field to namespaceName in the ClusterLogForwarder CR:
cloudwatch:
groupBy: namespaceName
region: us-east-2
Setting the value of the groupBy field to namespaceName affects the application log group only. It does not affect the audit and infrastructure log groups.
In Amazon CloudWatch, the namespace name appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.app log group instead of mycluster-7977k.application:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.app"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
If the cluster in this example had contained multiple application namespaces, the output would show multiple log groups, one for each namespace.
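For instance, if the cluster also ran a second, hypothetical application namespace named app2, the listing would resemble the following sketch:

"mycluster-7977k.app"
"mycluster-7977k.app2"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"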
The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups.
Example: Naming log groups after application namespace UUIDs
For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the UUID of the application namespace.
If you delete an application namespace object and create a new one, CloudWatch creates a new log group.
If you consider successive application namespace objects with the same name as different from each other, use the approach described in this example. Otherwise, see the preceding "Example: Naming log groups for application namespace names" section instead.
To name log groups after application namespace UUIDs, you set the value of the groupBy field to namespaceUUID in the ClusterLogForwarder CR:
cloudwatch:
groupBy: namespaceUUID
region: us-east-2
In Amazon CloudWatch, the namespace UUID appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf log group instead of mycluster-7977k.application:
$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf" // uid of the "app" namespace
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"
The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups.
10.4.18. Creating a secret for AWS CloudWatch with an existing AWS role
If you have an existing role for AWS, you can create a secret for AWS with STS using the oc create secret --from-literal command.
Procedure
In the CLI, enter the following to generate a secret for AWS:
$ oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions

Example Secret
apiVersion: v1
kind: Secret
metadata:
  namespace: openshift-logging
  name: my-secret-name
stringData:
  role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions
10.4.19. Forwarding logs to Amazon CloudWatch from STS enabled clusters
For clusters with AWS Security Token Service (STS) enabled, you can create an AWS service account manually or create a credentials request by using the Cloud Credential Operator (CCO) utility ccoctl.

Prerequisites

- Logging for Red Hat OpenShift: 5.5 and later
Procedure
Create a CredentialsRequest custom resource YAML by using the template below:

CloudWatch credentials request template

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <your_role_name>-credrequest
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
      - action:
          - logs:PutLogEvents
          - logs:CreateLogGroup
          - logs:PutRetentionPolicy
          - logs:CreateLogStream
          - logs:DescribeLogGroups
          - logs:DescribeLogStreams
        effect: Allow
        resource: arn:aws:logs:*:*:*
  secretRef:
    name: <your_role_name>
    namespace: openshift-logging
  serviceAccountNames:
    - logcollector
command to create a role for AWS using yourccoctlCR. With theCredentialsRequestobject, thisCredentialsRequestcommand creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy that grants permissions to perform operations on CloudWatch resources. This command also creates a YAML configuration file inccoctl. This secret file contains the/<path_to_ccoctl_output_dir>/manifests/openshift-logging-<your_role_name>-credentials.yamlkey/value used during authentication with the AWS IAM identity provider.role_arn$ ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com1 - 1
1. <name> is the name used to tag your cloud resources and should match the name used during your STS cluster install.
Apply the secret created:
$ oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml

Create or edit a ClusterLogForwarder custom resource:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name> # 1
  namespace: <log_forwarder_namespace> # 2
spec:
  serviceAccountName: clf-collector # 3
  outputs:
  - name: cw # 4
    type: cloudwatch # 5
    cloudwatch:
      groupBy: logType # 6
      groupPrefix: <group prefix> # 7
      region: us-east-2 # 8
    secret:
      name: <your_secret_name> # 9
  pipelines:
  - name: to-cloudwatch # 10
    inputRefs: # 11
    - infrastructure
    - audit
    - application
    outputRefs:
    - cw # 12
1. In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
2. In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
3. Specify the clf-collector service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
4. Specify a name for the output.
5. Specify the cloudwatch type.
6. Optional: Specify how to group the logs:
   - logType creates log groups for each log type.
   - namespaceName creates a log group for each application namespace. Infrastructure and audit logs are unaffected, remaining grouped by logType.
   - namespaceUUID creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs.
7. Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups.
8. Specify the AWS region.
9. Specify the name of the secret that contains your AWS credentials.
10. Optional: Specify a name for the pipeline.
11. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
12. Specify the name of the output to use when forwarding logs with this pipeline.
10.5. Configuring the logging collector
Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. All supported modifications to the log collector can be performed through the spec.collection stanza in the ClusterLogging custom resource (CR).
10.5.1. Configuring the log collector
You can configure which log collector type your logging uses by modifying the ClusterLogging custom resource (CR).
Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
- You have created a ClusterLogging CR.
Modify the
CRClusterLoggingspec:collectionClusterLoggingCR exampleapiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... collection: type: <log_collector_type>1 resources: {} tolerations: {} # ...- 1
- The log collector type you want to use for the logging. This can be
vectororfluentd.
Apply the ClusterLogging CR by running the following command:

$ oc apply -f <filename>.yaml
10.5.2. Creating a LogFileMetricExporter resource
In logging version 5.8 and newer versions, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers.
If you do not create the LogFileMetricExporter CR, you may see a "No datapoints found" message in the OpenShift Container Platform web console dashboard for produced logs.
Prerequisites
- You have administrator permissions.
- You have installed the Red Hat OpenShift Logging Operator.
- You have installed the OpenShift CLI (oc).
Procedure
Create a LogFileMetricExporter CR as a YAML file:

Example LogFileMetricExporter CR

apiVersion: logging.openshift.io/v1alpha1
kind: LogFileMetricExporter
metadata:
  name: instance
  namespace: openshift-logging
spec:
  nodeSelector: {} # 1
  resources: # 2
    limits:
      cpu: 500m
      memory: 256Mi
    requests:
      cpu: 200m
      memory: 128Mi
  tolerations: [] # 3
# ...

1. Optional: The nodeSelector stanza defines which nodes the pods are scheduled on.
2. The resources stanza defines resource requirements for the LogFileMetricExporter CR.
3. Optional: The tolerations stanza defines the tolerations that the pods accept.

Apply the LogFileMetricExporter CR by running the following command:

$ oc apply -f <filename>.yaml
Verification
A logfilesmetricexporter pod runs concurrently with a collector pod on each node.
Verify that the logfilesmetricexporter pods are running in the namespace where you have created the LogFileMetricExporter CR, by running the following command and observing the output:

$ oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging

Example output

NAME                           READY   STATUS    RESTARTS   AGE
logfilesmetricexporter-9qbjj   1/1     Running   0          2m46s
logfilesmetricexporter-cbc4v   1/1     Running   0          2m46s
10.5.3. Configure log collector CPU and memory limits
The log collector allows for adjustments to both the CPU and memory limits.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc -n openshift-logging edit ClusterLogging instance

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    type: fluentd
    resources:
      limits: # 1
        memory: 736Mi
      requests:
        cpu: 100m
        memory: 736Mi
# ...

1. Specify the CPU and memory limits and requests as needed. The values shown are the default values.
10.5.4. Configuring input receivers
The Red Hat OpenShift Logging Operator deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. The service name is generated based on the following:
- For multi log forwarder ClusterLogForwarder CR deployments, the service name is in the format <ClusterLogForwarder_CR_name>-<input_name>. For example, example-http-receiver.
- For legacy ClusterLogForwarder CR deployments, meaning those named instance and located in the openshift-logging namespace, the service name is in the format collector-<input_name>. For example, collector-http-receiver.
10.5.4.1. Configuring the collector to receive audit logs as an HTTP server
You can configure your log collector to listen for HTTP connections and receive audit logs as an HTTP server by specifying http as a receiver input in the ClusterLogForwarder custom resource (CR). This allows you to use a common log collector for audit logs that are collected from both inside and outside of your OpenShift Container Platform cluster.
Prerequisites
- You have administrator permissions.
- You have installed the OpenShift CLI (oc).
- You have installed the Red Hat OpenShift Logging Operator.
- You have created a ClusterLogForwarder CR.
Procedure
Modify the ClusterLogForwarder CR to add configuration for the http receiver input:

Example ClusterLogForwarder CR if you are using a multi log forwarder deployment

apiVersion: logging.openshift.io/v1beta1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccountName: <service_account_name>
  inputs:
  - name: http-receiver # 1
    receiver:
      type: http # 2
      http:
        format: kubeAPIAudit # 3
        port: 8443 # 4
  pipelines: # 5
  - name: http-pipeline
    inputRefs:
    - http-receiver
# ...

1. Specify a name for your input receiver.
2. Specify the input receiver type as http.
3. Currently, only the kube-apiserver webhook format is supported for http input receivers.
4. Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535. The default value is 8443 if this is not specified.
5. Configure a pipeline for your input receiver.
Example ClusterLogForwarder CR if you are using a legacy deployment

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
  - name: http-receiver # 1
    receiver:
      type: http # 2
      http:
        format: kubeAPIAudit # 3
        port: 8443 # 4
  pipelines: # 5
  - inputRefs:
    - http-receiver
    name: http-pipeline
# ...

1. Specify a name for your input receiver.
2. Specify the input receiver type as http.
3. Currently, only the kube-apiserver webhook format is supported for http input receivers.
4. Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535. The default value is 8443 if this is not specified.
5. Configure a pipeline for your input receiver.
Apply the changes to the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
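To verify the receiver, you can send a test audit event over HTTPS to the generated service from inside the cluster. The following is a sketch that assumes a legacy deployment, the default port 8443, and a hypothetical file audit-event.json in the kube-apiserver webhook format; -k skips TLS verification and is for testing only:

$ curl -k -X POST \
    -H "Content-Type: application/json" \
    -d @audit-event.json \
    https://collector-http-receiver.openshift-logging.svc:8443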
10.5.5. Advanced configuration for the Fluentd log forwarder
Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.
Logging includes multiple Fluentd parameters that you can use for tuning the performance of the Fluentd log forwarder. With these parameters, you can change the following Fluentd behaviors:
- Chunk and chunk buffer sizes
- Chunk flushing behavior
- Chunk forwarding retry behavior
Fluentd collects log data in a single blob called a chunk. When Fluentd creates a chunk, the chunk is considered to be in the stage, where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the queue, where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured.
By default in OpenShift Container Platform, Fluentd uses the exponential backoff method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the periodic retry method instead, which retries flushing the chunks at a specified interval.
These parameters can help you determine the trade-offs between latency and throughput.
- To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system.
- To optimize for low latency, you could use the parameters to send data as soon as possible, avoid the build-up of batches, have shorter queues and buffers, and use more frequent flush and retries.
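For example, a throughput-oriented configuration along the lines described above might raise the chunk size and delay flushes and retries. The following values are illustrative only, not recommendations; the parameters themselves are documented in the table below:

spec:
  collection:
    fluentd:
      buffer:
        chunkLimitSize: 32m   # larger chunks mean fewer, bigger writes
        flushInterval: 30s    # delay flushes to batch more data
        flushMode: interval
        retryType: periodic
        retryWait: 10s        # longer pauses between retry attempts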
You can configure the chunking and flushing behavior using the following parameters in the ClusterLogging custom resource (CR).
These parameters are:
- Not relevant to most users. The default settings should give good general performance.
- Only for advanced users with detailed knowledge of Fluentd configuration and performance.
- Only for performance tuning. They have no effect on functional aspects of logging.
| Parameter | Description | Default |
|---|---|---|
| chunkLimitSize | The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk. | 8m |
| totalLimitSize | The maximum size of the buffer, which is the total size of the stage and the queue. If the buffer size exceeds this value, Fluentd stops adding data to chunks and fails with an error. All data not in chunks is lost. | Approximately 15% of the node disk distributed across all outputs. |
| flushInterval | The interval between chunk flushes. You can use s (seconds), m (minutes), h (hours), or ms (milliseconds). | 1s |
| flushMode | The method to perform flushes: lazy (flush chunks based on the timekey parameter), interval (flush chunks based on the flushInterval parameter), or immediate (flush chunks immediately after data is added to the chunks). | interval |
| flushThreadCount | The number of threads that perform chunk flushing. Increasing the number of threads improves the flush throughput, which hides network latency. | 2 |
| overflowAction | The chunking behavior when the queue is full: throw_exception (raise an exception to show in the log), block (stop data chunking until the full buffer issue is resolved), or drop_oldest_chunk (drop the oldest chunk to accept new incoming chunks). | block |
| retryMaxInterval | The maximum time in seconds for the exponential_backoff retry method. | 300s |
| retryType | The retry method when flushing fails: exponential_backoff (increase the time between flush retries) or periodic (retry flushes periodically, based on the retryWait parameter). | exponential_backoff |
| retryTimeout | The maximum time interval to attempt retries before the record is discarded. | 60m |
| retryWait | The time in seconds before the next chunk flush. | 1s |
For more information on the Fluentd chunk lifecycle, see Buffer Plugins in the Fluentd documentation.
Procedure
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc edit ClusterLogging instance
apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: fluentd: buffer: chunkLimitSize: 8m1 flushInterval: 5s2 flushMode: interval3 flushThreadCount: 34 overflowAction: throw_exception5 retryMaxInterval: "300s"6 retryType: periodic7 retryWait: 1s8 totalLimitSize: 32m9 # ...- 1
- Specify the maximum size of each chunk before it is queued for flushing.
- 2
- Specify the interval between chunk flushes.
- 3
- Specify the method to perform chunk flushes:
lazy,interval, orimmediate. - 4
- Specify the number of threads to use for chunk flushes.
- 5
- Specify the chunking behavior when the queue is full:
throw_exception,block, ordrop_oldest_chunk. - 6
- Specify the maximum interval in seconds for the
exponential_backoffchunk flushing method. - 7
- Specify the retry type when chunk flushing fails:
exponential_backofforperiodic. - 8
- Specify the time in seconds before the next chunk flush.
- 9
- Specify the maximum size of the chunk buffer.
Verify that the Fluentd pods are redeployed:
$ oc get pods -l component=collector -n openshift-logging

Check that the new values are in the fluentd config map:

$ oc extract configmap/collector-config --confirm

Example fluentd.conf
<buffer>
  @type file
  path '/var/lib/fluentd/default'
  flush_mode interval
  flush_interval 5s
  flush_thread_count 3
  retry_type periodic
  retry_wait 1s
  retry_max_interval 300s
  retry_timeout 60m
  queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}"
  total_limit_size "#{ENV['TOTAL_LIMIT_SIZE_PER_BUFFER'] || '8589934592'}"
  chunk_limit_size 8m
  overflow_action throw_exception
  disable_chunk_backup true
</buffer>
10.6. Collecting and storing Kubernetes events
The OpenShift Container Platform Event Router is a pod that watches Kubernetes events and logs them for collection by the log collector. You must manually deploy the Event Router.
The Event Router collects events from all projects and writes them to STDOUT. The collector then forwards those events to the store defined in the ClusterLogForwarder custom resource (CR).
The Event Router adds additional load to Fluentd and can impact the number of other log messages that can be processed.
10.6.1. Deploying and configuring the Event Router
Use the following steps to deploy the Event Router into your cluster. You should always deploy the Event Router to the openshift-logging project to ensure that it collects events from across the cluster.
The Event Router image is not a part of the Red Hat OpenShift Logging Operator and must be downloaded separately.
The following Template object creates the service account, cluster role, and cluster role binding required for the Event Router. The template also configures and deploys the Event Router pod.
Prerequisites
- You need proper permissions to create service accounts and update cluster role bindings. For example, you can run the following template with a user that has the cluster-admin role.
- The Red Hat OpenShift Logging Operator must be installed.
Procedure
Create a template for the Event Router:
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: eventrouter-template
  annotations:
    description: "A pod forwarding kubernetes events to OpenShift Logging stack."
    tags: "events,EFK,logging,cluster-logging"
objects:
  - kind: ServiceAccount # 1
    apiVersion: v1
    metadata:
      name: eventrouter
      namespace: ${NAMESPACE}
  - kind: ClusterRole # 2
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: event-reader
    rules:
    - apiGroups: [""]
      resources: ["events"]
      verbs: ["get", "watch", "list"]
  - kind: ClusterRoleBinding # 3
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: event-reader-binding
    subjects:
    - kind: ServiceAccount
      name: eventrouter
      namespace: ${NAMESPACE}
    roleRef:
      kind: ClusterRole
      name: event-reader
  - kind: ConfigMap # 4
    apiVersion: v1
    metadata:
      name: eventrouter
      namespace: ${NAMESPACE}
    data:
      config.json: |-
        {
          "sink": "stdout"
        }
  - kind: Deployment # 5
    apiVersion: apps/v1
    metadata:
      name: eventrouter
      namespace: ${NAMESPACE}
      labels:
        component: "eventrouter"
        logging-infra: "eventrouter"
        provider: "openshift"
    spec:
      selector:
        matchLabels:
          component: "eventrouter"
          logging-infra: "eventrouter"
          provider: "openshift"
      replicas: 1
      template:
        metadata:
          labels:
            component: "eventrouter"
            logging-infra: "eventrouter"
            provider: "openshift"
          name: eventrouter
        spec:
          serviceAccount: eventrouter
          containers:
            - name: kube-eventrouter
              image: ${IMAGE}
              imagePullPolicy: IfNotPresent
              resources:
                requests:
                  cpu: ${CPU}
                  memory: ${MEMORY}
              volumeMounts:
              - name: config-volume
                mountPath: /etc/eventrouter
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
          securityContext:
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
          volumes:
          - name: config-volume
            configMap:
              name: eventrouter
parameters:
  - name: IMAGE # 6
    displayName: Image
    value: "registry.redhat.io/openshift-logging/eventrouter-rhel9:v0.4"
  - name: CPU # 7
    displayName: CPU
    value: "100m"
  - name: MEMORY # 8
    displayName: Memory
    value: "128Mi"
  - name: NAMESPACE
    displayName: Namespace
    value: "openshift-logging" # 9

1. Creates a Service Account in the openshift-logging project for the Event Router.
2. Creates a ClusterRole to monitor for events in the cluster.
3. Creates a ClusterRoleBinding to bind the ClusterRole to the service account.
4. Creates a config map in the openshift-logging project to generate the required config.json file.
5. Creates a deployment in the openshift-logging project to generate and configure the Event Router pod.
6. Specifies the image, identified by a tag such as v0.4.
7. Specifies the minimum amount of CPU to allocate to the Event Router pod. Defaults to 100m.
8. Specifies the minimum amount of memory to allocate to the Event Router pod. Defaults to 128Mi.
9. Specifies the openshift-logging project to install objects in.
Use the following command to process and apply the template:
$ oc process -f <templatefile> | oc apply -n openshift-logging -f -

For example:

$ oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -

Example output

serviceaccount/eventrouter created
clusterrole.rbac.authorization.k8s.io/event-reader created
clusterrolebinding.rbac.authorization.k8s.io/event-reader-binding created
configmap/eventrouter created
deployment.apps/eventrouter created

Validate that the Event Router installed in the openshift-logging project:

View the new Event Router pod:

$ oc get pods --selector component=eventrouter -o name -n openshift-logging

Example output

pod/cluster-logging-eventrouter-d649f97c8-qvv8r

View the events collected by the Event Router:

$ oc logs <cluster_logging_eventrouter_pod> -n openshift-logging

For example:

$ oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging

Example output

{"verb":"ADDED","event":{"metadata":{"name":"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","namespace":"openshift-service-catalog-removed","selfLink":"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","uid":"787d7b26-3d2f-4017-b0b0-420db4ae62c0","resourceVersion":"21399","creationTimestamp":"2020-09-08T15:40:26Z"},"involvedObject":{"kind":"Job","namespace":"openshift-service-catalog-removed","name":"openshift-service-catalog-controller-manager-remover","uid":"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f","apiVersion":"batch/v1","resourceVersion":"21280"},"reason":"Completed","message":"Job completed","source":{"component":"job-controller"},"firstTimestamp":"2020-09-08T15:40:26Z","lastTimestamp":"2020-09-08T15:40:26Z","count":1,"type":"Normal"}}

You can also use Kibana to view events by creating an index pattern using the Elasticsearch infra index.