
Chapter 10. Log collection and forwarding


10.1. About log collection and forwarding

The Red Hat OpenShift Logging Operator deploys a collector based on the ClusterLogForwarder resource specification. There are two collector options supported by this Operator: the legacy Fluentd collector, and the Vector collector.

Note

Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.

10.1.1. Log collection

The log collector is a daemon set that deploys pods to each OpenShift Container Platform node to collect container and node logs.

By default, the log collector uses the following sources:

  • System and infrastructure logs generated by journald log messages from the operating system, the container runtime, and OpenShift Container Platform.
  • /var/log/containers/*.log for all container logs.

If you configure the log collector to collect audit logs, it collects them from /var/log/audit/audit.log.

The log collector collects the logs from these sources and forwards them internally or externally depending on your logging configuration.

10.1.1.1. Log collector types

Vector is a log collector offered as an alternative to Fluentd for the logging subsystem.

You can configure which logging collector type your cluster uses by modifying the collection spec of the ClusterLogging custom resource (CR):

Example ClusterLogging CR that configures Vector as the collector

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      type: vector
      vector: {}
# ...

10.1.1.2. Log collection limitations

The container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. This information is not sufficient to uniquely identify the source of the logs. If a pod with a given name and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations, might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source. This limitation means that log collection and normalization are considered best effort.

Important

The available container runtimes provide minimal information to identify the source of log messages and do not guarantee unique individual log messages or that these messages can be traced to their source.

10.1.1.3. Log collector features by type

Table 10.1. Log Sources

Feature                               | Fluentd | Vector
App container logs                    | ✓       | ✓
App-specific routing                  | ✓       | ✓
App-specific routing by namespace     | ✓       | ✓
Infra container logs                  | ✓       | ✓
Infra journal logs                    | ✓       | ✓
Kube API audit logs                   | ✓       | ✓
OpenShift API audit logs              | ✓       | ✓
Open Virtual Network (OVN) audit logs | ✓       | ✓

Table 10.2. Authorization and Authentication

Feature                           | Fluentd | Vector
Elasticsearch certificates        | ✓       | ✓
Elasticsearch username / password | ✓       | ✓
Amazon Cloudwatch keys            | ✓       | ✓
Amazon Cloudwatch STS             | ✓       | ✓
Kafka certificates                | ✓       | ✓
Kafka username / password         | ✓       | ✓
Kafka SASL                        | ✓       | ✓
Loki bearer token                 | ✓       | ✓

Table 10.3. Normalizations and Transformations

Feature                                | Fluentd | Vector
Viaq data model - app                  | ✓       | ✓
Viaq data model - infra                | ✓       | ✓
Viaq data model - infra(journal)       | ✓       | ✓
Viaq data model - Linux audit          | ✓       | ✓
Viaq data model - kube-apiserver audit | ✓       | ✓
Viaq data model - OpenShift API audit  | ✓       | ✓
Viaq data model - OVN                  | ✓       | ✓
Loglevel Normalization                 | ✓       | ✓
JSON parsing                           | ✓       | ✓
Structured Index                       | ✓       | ✓
Multiline error detection              | ✓       | ✓
Multicontainer / split indices         | ✓       | ✓
Flatten labels                         | ✓       | ✓
CLF static labels                      | ✓       | ✓

Table 10.4. Tuning

Feature               | Fluentd | Vector
Fluentd readlinelimit | ✓       |
Fluentd buffer        | ✓       |
- chunklimitsize      | ✓       |
- totallimitsize      | ✓       |
- overflowaction      | ✓       |
- flushthreadcount    | ✓       |
- flushmode           | ✓       |
- flushinterval       | ✓       |
- retrywait           | ✓       |
- retrytype           | ✓       |
- retrymaxinterval    | ✓       |
- retrytimeout        | ✓       |
Table 10.5. Visibility

Feature   | Fluentd | Vector
Metrics   | ✓       | ✓
Dashboard | ✓       | ✓
Alerts    | ✓       | ✓

Table 10.6. Miscellaneous

Feature              | Fluentd | Vector
Global proxy support | ✓       | ✓
x86 support          | ✓       | ✓
ARM support          | ✓       | ✓
IBM Power® support   | ✓       | ✓
IBM Z® support       | ✓       | ✓
IPv6 support         | ✓       | ✓
Log event buffering  | ✓       |
Disconnected Cluster | ✓       | ✓

10.1.1.4. Collector outputs

The following collector outputs are supported:

Table 10.7. Supported outputs

Feature               | Fluentd | Vector
Elasticsearch v6-v8   | ✓       | ✓
Fluent forward        | ✓       |
Syslog RFC3164        | ✓       | ✓ (Logging 5.7+)
Syslog RFC5424        | ✓       | ✓ (Logging 5.7+)
Kafka                 | ✓       | ✓
Amazon Cloudwatch     | ✓       | ✓
Amazon Cloudwatch STS | ✓       | ✓
Loki                  | ✓       | ✓
HTTP                  | ✓       | ✓ (Logging 5.7+)
Google Cloud Logging  |         | ✓
Splunk                |         | ✓ (Logging 5.6+)

10.1.2. Log forwarding

Administrators can create ClusterLogForwarder resources that specify which logs are collected, how they are transformed, and where they are forwarded to.

ClusterLogForwarder resources can be used to forward container, infrastructure, and audit logs to specific endpoints within or outside of a cluster. Transport Layer Security (TLS) is supported, so that log forwarders can be configured to send logs securely.

Administrators can also authorize RBAC permissions that define which service accounts and users can access and forward which types of logs.

10.1.2.1. Log forwarding implementations

There are two log forwarding implementations available: the legacy implementation, and the multi log forwarder feature.

Important

Only the Vector collector is supported for use with the multi log forwarder feature. The Fluentd collector can only be used with legacy implementations.

10.1.2.1.1. Legacy implementation

In legacy implementations, you can only use one log forwarder in your cluster. The

ClusterLogForwarder
resource in this mode must be named
instance
, and must be created in the
openshift-logging
namespace. The
ClusterLogForwarder
resource also requires a corresponding
ClusterLogging
resource named
instance
in the
openshift-logging
namespace.

10.1.2.1.2. Multi log forwarder feature

The multi log forwarder feature is available in logging 5.8 and later, and provides the following functionality:

  • Administrators can control which users are allowed to define log collection and which logs they are allowed to collect.
  • Users who have the required permissions are able to specify additional log collection configurations.
  • Administrators who are migrating from the deprecated Fluentd collector to the Vector collector can deploy a new log forwarder separately from their existing deployment. The existing and new log forwarders can operate simultaneously while workloads are being migrated.

In multi log forwarder implementations, you are not required to create a corresponding ClusterLogging resource for your ClusterLogForwarder resource. You can create multiple ClusterLogForwarder resources using any name, in any namespace, with the following exceptions:

  • You cannot create a ClusterLogForwarder resource named instance in the openshift-logging namespace, because this name is reserved for a log forwarder that supports the legacy workflow using the Fluentd collector.
  • You cannot create a ClusterLogForwarder resource named collector in the openshift-logging namespace, because this name is reserved for the collector.

10.1.2.2. Enabling the multi log forwarder feature for a cluster

To use the multi log forwarder feature, you must create a service account and cluster role bindings for that service account. You can then reference the service account in the ClusterLogForwarder resource to control access permissions.

Important

To support multi log forwarding in namespaces other than the openshift-logging namespace, you must update the Red Hat OpenShift Logging Operator to watch all namespaces. This functionality is supported by default in new Red Hat OpenShift Logging Operator version 5.8 installations.

10.1.2.2.1. Authorizing log collection RBAC permissions

In logging 5.8 and later, the Red Hat OpenShift Logging Operator provides the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.

You can authorize RBAC permissions for log collection by binding the required cluster roles to a service account.

Prerequisites

  • The Red Hat OpenShift Logging Operator is installed in the
    openshift-logging
    namespace.
  • You have administrator permissions.

Procedure

  1. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account.
  2. Bind the appropriate cluster roles to the service account:

    Example binding command

    $ oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>
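For example, to allow one service account to collect all three log types, you can bind each cluster role in turn. The namespace and service account names below are placeholders, not values required by the product:

```shell
# Placeholder names: substitute your own namespace and service account.
SA_NAMESPACE=openshift-logging
SA_NAME=collector-sa

# Bind each log collection cluster role to the service account.
for role in collect-application-logs collect-infrastructure-logs collect-audit-logs; do
  oc adm policy add-cluster-role-to-user "$role" \
    "system:serviceaccount:${SA_NAMESPACE}:${SA_NAME}"
done
```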

10.2. Log output types

Outputs define the destination where logs are sent from a log forwarder. You can configure multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols.

10.2.1. Supported log forwarding outputs

Outputs can be any of the following types:

Table 10.8. Supported log output types

Output type          | Protocol                 | Tested with                                     | Logging versions | Supported collector type
Elasticsearch v6     | HTTP 1.1                 | 6.8.1, 6.8.23                                   | 5.6+             | Fluentd, Vector
Elasticsearch v7     | HTTP 1.1                 | 7.12.2, 7.17.7, 7.10.1                          | 5.6+             | Fluentd, Vector
Elasticsearch v8     | HTTP 1.1                 | 8.4.3, 8.6.1                                    | 5.6+             | Fluentd [1], Vector
Fluent Forward       | Fluentd forward v1       | Fluentd 1.14.6, Logstash 7.10.1, Fluentd 1.14.5 | 5.4+             | Fluentd
Google Cloud Logging | REST over HTTPS          | Latest                                          | 5.7+             | Vector
HTTP                 | HTTP 1.1                 | Fluentd 1.14.6, Vector 0.21                     | 5.7+             | Fluentd, Vector
Kafka                | Kafka 0.11               | Kafka 2.4.1, 2.7.0, 3.3.1                       | 5.4+             | Fluentd, Vector
Loki                 | REST over HTTP and HTTPS | 2.3.0, 2.5.0, 2.7, 2.2.1                        | 5.4+             | Fluentd, Vector
Splunk               | HEC                      | 8.2.9, 9.0.0                                    | 5.6+             | Vector
Syslog               | RFC3164, RFC5424         | Rsyslog 8.37.0-9.el7, rsyslog-8.39.0            | 5.4+             | Fluentd, Vector [2]
Amazon CloudWatch    | REST over HTTPS          | Latest                                          | 5.4+             | Fluentd, Vector

  1. Fluentd does not support Elasticsearch 8 in the logging version 5.6.2.
  2. Vector supports Syslog in the logging version 5.7 and higher.

10.2.2. Output type descriptions

default

The on-cluster, Red Hat managed log store. You are not required to configure the default output.

Note

If you configure a default output, you receive an error message, because the default output name is reserved for referencing the on-cluster, Red Hat managed log store.

loki
Loki, a horizontally scalable, highly available, multi-tenant log aggregation system.
kafka
A Kafka broker. The kafka output can use a TCP or TLS connection.
elasticsearch
An external Elasticsearch instance. The elasticsearch output can use a TLS connection.
fluentdForward

An external log aggregation solution that supports Fluentd. This option uses the Fluentd forward protocol. The fluentdForward output can use a TCP or TLS connection and supports shared-key authentication by providing a shared_key field in a secret. Shared-key authentication can be used with or without TLS.

Important

The fluentdForward output is only supported if you are using the Fluentd collector. It is not supported if you are using the Vector collector. If you are using the Vector collector, you can forward logs to Fluentd by using the http output.

syslog
An external log aggregation solution that supports the syslog RFC3164 or RFC5424 protocols. The syslog output can use a UDP, TCP, or TLS connection.
cloudwatch
Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS).

10.3. Enabling JSON log forwarding

You can configure the Log Forwarding API to parse JSON strings into a structured object.

10.3.1. Parsing JSON logs

You can use a ClusterLogForwarder object to parse JSON logs into a structured object and forward them to a supported output.

To illustrate how this works, suppose that you have the following structured JSON log entry:

Example structured JSON log entry

{"level":"info","name":"fred","home":"bedrock"}

To enable parsing of JSON logs, add parse: json to a pipeline in the ClusterLogForwarder CR, as shown in the following example:

Example snippet showing parse: json

pipelines:
- inputRefs: [ application ]
  outputRefs: [ myFluentd ]
  parse: json

When you enable parsing JSON logs by using parse: json, the CR copies the JSON-structured log entry into a structured field, as shown in the following example:

Example structured output containing the structured JSON log entry

{"structured": { "level": "info", "name": "fred", "home": "bedrock" },
 "more fields..."}

Important

If the log entry does not contain valid structured JSON, the structured field is absent.

10.3.2. Configuring JSON log data for Elasticsearch

If your JSON logs follow more than one schema, storing them in a single index might cause type conflicts and cardinality problems. To avoid that, you must configure the ClusterLogForwarder custom resource (CR) to group each schema into a single output definition. This way, each schema is forwarded to a separate index.

Important

If you forward JSON logs to the default Elasticsearch instance managed by OpenShift Logging, it generates new indices based on your configuration. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas.

Structure types

You can use the following structure types in the ClusterLogForwarder CR to construct index names for the Elasticsearch log store:

  • structuredTypeKey is the name of a message field. The value of that field is used to construct the index name.

    • kubernetes.labels.<key> is the Kubernetes pod label whose value is used to construct the index name.
    • openshift.labels.<key> is the pipeline.label.<key> element in the ClusterLogForwarder CR whose value is used to construct the index name.
    • kubernetes.container_name uses the container name to construct the index name.
  • structuredTypeName: If the structuredTypeKey field is not set or its key is not present, the structuredTypeName value is used as the structured type. When you use both the structuredTypeKey field and the structuredTypeName field together, the structuredTypeName value provides a fallback index name if the key in the structuredTypeKey field is missing from the JSON log data.
Note

Although you can set the value of structuredTypeKey to any field shown in the "Log Record Fields" topic, the most useful fields are shown in the preceding list of structure types.

A structuredTypeKey: kubernetes.labels.<key> example

Suppose the following:

  • Your cluster is running application pods that produce JSON logs in two different formats, "apache" and "google".
  • The user labels these application pods with logFormat=apache and logFormat=google.
  • You use the following snippet in your ClusterLogForwarder CR YAML file.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
# ...
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.logFormat 1
      structuredTypeName: nologformat
  pipelines:
  - inputRefs:
    - application
    outputRefs:
    - default
    parse: json 2

1 Uses the value of the key-value pair that is formed by the Kubernetes logFormat label.
2 Enables parsing JSON logs.

In that case, the following structured log record goes to the app-apache-write index:

{
  "structured":{"name":"fred","home":"bedrock"},
  "kubernetes":{"labels":{"logFormat": "apache", ...}}
}

And the following structured log record goes to the app-google-write index:

{
  "structured":{"name":"wilma","home":"bedrock"},
  "kubernetes":{"labels":{"logFormat": "google", ...}}
}

A structuredTypeKey: openshift.labels.<key> example

Suppose that you use the following snippet in your ClusterLogForwarder CR YAML file.

outputDefaults:
 elasticsearch:
    structuredTypeKey: openshift.labels.myLabel 1
    structuredTypeName: nologformat
pipelines:
 - name: application-logs
   inputRefs:
   - application
   - audit
   outputRefs:
   - elasticsearch-secure
   - default
   parse: json
   labels:
     myLabel: myValue 2

1 Uses the value of the key-value pair that is formed by the OpenShift myLabel label.
2 The myLabel element gives its string value, myValue, to the structured log record.

In that case, the following structured log record goes to the app-myValue-write index:

{
  "structured":{"name":"fred","home":"bedrock"},
  "openshift":{"labels":{"myLabel": "myValue", ...}}
}

Additional considerations

  • The Elasticsearch index for structured records is formed by prepending "app-" to the structured type and appending "-write".
  • Unstructured records are not sent to the structured index. They are indexed as usual in the application, infrastructure, or audit indices.
  • If there is no non-empty structured type, an unstructured record is forwarded with no structured field.

It is important not to overload Elasticsearch with too many indices. Only use distinct structured types for distinct log formats, not for each application or namespace. For example, most Apache applications use the same JSON log format and structured type, such as LogApache.
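The naming rule above can be sketched as a small shell function. The function name is illustrative only; the rule itself (the app- prefix, the -write suffix, and the structuredTypeName fallback) comes from the preceding list:

```shell
# Build the Elasticsearch index name for a structured record:
# prepend "app-" to the structured type and append "-write".
# If the structuredTypeKey value is empty, fall back to structuredTypeName.
structured_index_name() {
  local key_value="$1" fallback="$2"
  local structured_type="${key_value:-$fallback}"
  echo "app-${structured_type}-write"
}

structured_index_name apache nologformat   # app-apache-write
structured_index_name "" nologformat       # app-nologformat-write
```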

10.3.3. Forwarding JSON logs to the Elasticsearch log store

For an Elasticsearch log store, if your JSON log entries follow different schemas, configure the ClusterLogForwarder custom resource (CR) to group each JSON schema into a single output definition. This way, Elasticsearch uses a separate index for each schema.

Important

Because forwarding different schemas to the same index can cause type conflicts and cardinality problems, you must perform this configuration before you forward data to the Elasticsearch store.

To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas.

Procedure

  1. Add the following snippet to your ClusterLogForwarder CR YAML file.

    outputDefaults:
     elasticsearch:
        structuredTypeKey: <log record field>
        structuredTypeName: <name>
    pipelines:
    - inputRefs:
      - application
      outputRefs: [ default ]
      parse: json
  2. Use the structuredTypeKey field to specify one of the log record fields.
  3. Use the structuredTypeName field to specify a name.

    Important

    To parse JSON logs, you must set both the structuredTypeKey and structuredTypeName fields.

  4. For inputRefs, specify which log types to forward by using that pipeline, such as application, infrastructure, or audit.
  5. Add the parse: json element to pipelines.
  6. Create the CR object:

    $ oc create -f <filename>.yaml

    The Red Hat OpenShift Logging Operator redeploys the collector pods. However, if they do not redeploy, delete the collector pods to force them to redeploy.

    $ oc delete pod --selector logging-infra=collector

You can forward structured logs from different containers within the same pod to different indices. To use this feature, you must configure the pipeline with multi-container support and annotate the pods. Logs are written to indices with a prefix of app-. It is recommended that Elasticsearch be configured with aliases to accommodate this.

Important

JSON formatting of logs varies by application. Because creating too many indices impacts performance, limit your use of this feature to creating indices for logs that have incompatible JSON formats. Use queries to separate logs from different namespaces, or applications with compatible JSON formats.

Prerequisites

  • Logging for Red Hat OpenShift: 5.5

Procedure

  1. Create or edit a YAML file that defines the

    ClusterLogForwarder
    CR object:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      outputDefaults:
        elasticsearch:
          structuredTypeKey: kubernetes.labels.logFormat 1
          structuredTypeName: nologformat
          enableStructuredContainerLogs: true 2
      pipelines:
      - inputRefs:
        - application
        name: application-logs
        outputRefs:
        - default
        parse: json

    1 Uses the value of the key-value pair that is formed by the Kubernetes logFormat label.
    2 Enables multi-container outputs.
  2. Create or edit a YAML file that defines the Pod CR object:

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        containerType.logging.openshift.io/heavy: heavy 1
        containerType.logging.openshift.io/low: low
    spec:
      containers:
      - name: heavy 2
        image: heavyimage
      - name: low
        image: lowimage

    1 Format: containerType.logging.openshift.io/<container-name>: <index>
    2 Annotation names must match container names.
Warning

This configuration might significantly increase the number of shards on the cluster.


10.4. Configuring log forwarding

In a logging deployment, container and infrastructure logs are forwarded to the internal log store defined in the ClusterLogging custom resource (CR) by default.

Audit logs are not forwarded to the internal log store by default because this does not provide secure storage. You are responsible for ensuring that the system to which you forward audit logs is compliant with your organizational and governmental regulations, and is properly secured.

If this default configuration meets your needs, you do not need to configure a ClusterLogForwarder CR. If a ClusterLogForwarder CR exists, logs are not forwarded to the internal log store unless a pipeline is defined that contains the default output.
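For example, a minimal ClusterLogForwarder CR that keeps sending container and infrastructure logs to the internal log store defines a pipeline that references the default output. This sketch assumes the legacy naming convention; the pipeline name is a placeholder:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
   - name: forward-to-internal   # placeholder pipeline name
     inputRefs:
      - application
      - infrastructure
     outputRefs:
      - default
```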

10.4.1. About forwarding logs to third-party systems

To send logs to specific endpoints inside and outside your OpenShift Container Platform cluster, you specify a combination of outputs and pipelines in a ClusterLogForwarder custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes Secret object.

pipeline

Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:

  • application. Container logs generated by user applications running in the cluster, except infrastructure container applications.
  • infrastructure. Container logs from pods that run in the openshift*, kube*, or default projects, and journal logs sourced from the node file system.
  • audit. Audit logs generated by the node audit system, auditd, the Kubernetes API server, the OpenShift API server, and the OVN network.

You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers, or label the logs by type. Labels that are added to objects are also forwarded with the log message.

input

Forwards the application logs associated with a specific project to a pipeline.

In the pipeline, you define which log types to forward by using an inputRef parameter, and where to forward the logs by using an outputRef parameter.

Secret
A key:value map that contains confidential data such as user credentials.

Note the following:

  • If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types, but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped.
  • You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols.

The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the my-project project to the internal Elasticsearch instance.

Sample log forwarding outputs and pipelines

apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name> 1
  namespace: <log_forwarder_namespace> 2
spec:
  serviceAccountName: <service_account_name> 3
  outputs:
   - name: elasticsearch-secure 4
     type: "elasticsearch"
     url: https://elasticsearch.secure.com:9200
     secret:
        name: elasticsearch
   - name: elasticsearch-insecure 5
     type: "elasticsearch"
     url: http://elasticsearch.insecure.com:9200
   - name: kafka-app 6
     type: "kafka"
     url: tls://kafka.secure.com:9093/app-topic
  inputs: 7
   - name: my-app-logs
     application:
        namespaces:
        - my-project
  pipelines:
   - name: audit-logs 8
     inputRefs:
      - audit
     outputRefs:
      - elasticsearch-secure
      - default
     labels:
       secure: "true" 9
       datacenter: "east"
   - name: infrastructure-logs 10
     inputRefs:
      - infrastructure
     outputRefs:
      - elasticsearch-insecure
     labels:
       datacenter: "west"
   - name: my-app 11
     inputRefs:
      - my-app-logs
     outputRefs:
      - default
   - inputRefs: 12
      - application
     outputRefs:
      - kafka-app
     labels:
       datacenter: "south"

1 In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
2 In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
4 Configuration for a secure Elasticsearch output using a secret with a secure URL:
  • A name to describe the output.
  • The type of output: elasticsearch.
  • The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
  • The secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project.
5 Configuration for an insecure Elasticsearch output:
  • A name to describe the output.
  • The type of output: elasticsearch.
  • The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
6 Configuration for a Kafka output using client-authenticated TLS communication over a secure URL:
  • A name to describe the output.
  • The type of output: kafka.
  • The URL and port of the Kafka broker as a valid absolute URL, including the prefix.
7 Configuration for an input to filter application logs from the my-project namespace.
8 Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance:
  • A name to describe the pipeline.
  • The inputRefs is the log type, in this example audit.
  • The outputRefs is the name of the output to use, in this example elasticsearch-secure to forward to the secure Elasticsearch instance and default to forward to the internal Elasticsearch instance.
  • Optional: Labels to add to the logs.
9 Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
10 Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
11 Configuration for a pipeline to send logs from the my-project project to the internal Elasticsearch instance:
  • A name to describe the pipeline.
  • The inputRefs is a specific input: my-app-logs.
  • The outputRefs is default.
  • Optional: String. One or more labels to add to the logs.
12 Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
  • The inputRefs is the log type, in this example application.
  • The outputRefs is the name of the output to use.
  • Optional: String. One or more labels to add to the logs.
Fluentd log handling when the external log aggregator is unavailable

If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. OpenShift Container Platform rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods.

Supported Authorization Keys

Common key types are provided here. Some output types support additional specialized keys, which are documented with the output-specific configuration field. All secret keys are optional. Enable the security features that you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. OpenShift Logging does not attempt to verify mismatched authorization combinations.

Transport Layer Security (TLS)

Using a TLS URL (https://... or ssl://...) without a secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a secret and setting the following optional fields:

  • passphrase: (string) Passphrase to decode an encoded TLS private key. Requires tls.key.
  • ca-bundle.crt: (string) File name of a customer CA for server authentication.
Username and Password
  • username: (string) Authentication user name. Requires password.
  • password: (string) Authentication password. Requires username.
Simple Authentication Security Layer (SASL)
  • sasl.enable: (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other sasl. keys are set.
  • sasl.mechanisms: (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used.
  • sasl.allow-insecure: (boolean) Allow mechanisms that send clear-text passwords. Defaults to false.

10.4.1.1. Creating a Secret

You can create a secret in the directory that contains your certificate and key files by using the following command:

$ oc create secret generic -n <namespace> <secret_name> \
  --from-file=ca-bundle.crt=<your_bundle_file> \
  --from-literal=username=<your_username> \
  --from-literal=password=<your_password>
Note

Generic or opaque secrets are recommended for best results.

10.4.2. Creating a log forwarder

To create a log forwarder, you must create a ClusterLogForwarder CR that specifies the log input types that the service account can collect. You can also specify which outputs the logs can be forwarded to. If you are using the multi log forwarder feature, you must also reference the service account in the ClusterLogForwarder CR.

If you are using the multi log forwarder feature on your cluster, you can create ClusterLogForwarder custom resources (CRs) in any namespace, using any name. If you are using a legacy implementation, the ClusterLogForwarder CR must be named instance, and must be created in the openshift-logging namespace.

Important

You need administrator permissions for the namespace where you create the ClusterLogForwarder CR.

ClusterLogForwarder resource example

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name> 
1

  namespace: <log_forwarder_namespace> 
2

spec:
  serviceAccountName: <service_account_name> 
3

  pipelines:
   - inputRefs:
     - <log_type> 
4

     outputRefs:
     - <output_name> 
5

  outputs:
  - name: <output_name> 
6

    type: <output_type> 
7

    url: <log_output_url> 
8

# ...

1
In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
2
In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
3
The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
4
The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application.
5 7
The type of output that you want to forward logs to. The value of this field can be default, loki, kafka, elasticsearch, fluentdForward, syslog, or cloudwatch.
Note

The

default
output type is not supported in multi log forwarder implementations.

6
A name for the output that you want to forward logs to.
8
The URL of the output that you want to forward logs to.

10.4.3. Tuning log payloads and delivery

In logging 5.9 and newer versions, the

tuning
spec in the
ClusterLogForwarder
custom resource (CR) provides a means of configuring your deployment to prioritize either throughput or durability of logs.

For example, if you need to reduce the possibility of log loss when the collector restarts, or you require collected log messages to survive a collector restart to support regulatory mandates, you can tune your deployment to prioritize log durability. If you use outputs that have hard limitations on the size of batches they can receive, you may want to tune your deployment to prioritize log throughput.

Important

To use this feature, your logging deployment must be configured to use the Vector collector. The

tuning
spec in the
ClusterLogForwarder
CR is not supported when using the Fluentd collector.

The following example shows the

ClusterLogForwarder
CR options that you can modify to tune log forwarder outputs:

Example ClusterLogForwarder CR tuning options

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  tuning:
    delivery: AtLeastOnce 
1

    compression: none 
2

    maxWrite: <integer> 
3

    minRetryDuration: 1s 
4

    maxRetryDuration: 1s 
5

# ...

1
Specify the delivery mode for log forwarding.
  • AtLeastOnce
    delivery means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash.
  • AtMostOnce
    delivery means that the log forwarder makes no effort to recover logs lost during a crash. This mode gives better throughput, but may result in greater log loss.
2
Specifying a compression configuration causes data to be compressed before it is sent over the network. Note that not all output types support compression, and if the specified compression type is not supported by the output, this results in an error. The possible values for this configuration are none for no compression, gzip, snappy, zlib, or zstd. lz4 compression is also available if you are using a Kafka output. See the table "Supported compression types for tuning outputs" for more information.
3
Specifies a limit for the maximum payload of a single send operation to the output.
4
Specifies a minimum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (ms), seconds (s), or minutes (m).
5
Specifies a maximum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (ms), seconds (s), or minutes (m).
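For example, a tuning configuration that prioritizes durability over raw throughput might look like the following sketch; the field values are illustrative, not recommendations:

```yaml
spec:
  tuning:
    # Re-send any logs that were read but not delivered before a crash.
    delivery: AtLeastOnce
    # Compress payloads; verify that your output type supports the
    # chosen algorithm before enabling it.
    compression: gzip
    # Back off between 1 second and 30 seconds when retrying delivery.
    minRetryDuration: 1s
    maxRetryDuration: 30s
```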
Table 10.9. Supported compression types for tuning outputs

Compression algorithm | Splunk | Amazon CloudWatch | Elasticsearch 8 | LokiStack | Apache Kafka | HTTP | Syslog | Google Cloud | Microsoft Azure Monitoring
gzip                  | X      | X                 | X               | X         |              | X    |        |              |
snappy                |        | X                 |                 | X         | X            | X    |        |              |
zlib                  |        | X                 | X               |           |              | X    |        |              |
zstd                  |        | X                 |                 |           | X            | X    |        |              |
lz4                   |        |                   |                 |           | X            |      |        |              |

10.4.4. Enabling multi-line exception detection

Enables multi-line error detection of container logs.

Warning

Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.

Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.

Example java exception

java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
    at testjava.Main.handle(Main.java:47)
    at testjava.Main.printMe(Main.java:19)
    at testjava.Main.main(Main.java:10)

  • To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the
    ClusterLogForwarder
    Custom Resource (CR) contains a
    detectMultilineErrors
    field, with a value of
    true
    .

Example ClusterLogForwarder CR

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - name: my-app-logs
      inputRefs:
        - application
      outputRefs:
        - default
      detectMultilineErrors: true

10.4.4.1. Details

When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.

Table 10.10. Supported languages per collector
Language | Fluentd | Vector

Java

JS

Ruby

Python

Golang

PHP

Dart

10.4.4.2. Troubleshooting

When enabled, the collector configuration includes a new section with type:

detect_exceptions

Example vector configuration section

[transforms.detect_exceptions_app-logs]
 type = "detect_exceptions"
 inputs = ["application"]
 languages = ["All"]
 group_by = ["kubernetes.namespace_name","kubernetes.pod_name","kubernetes.container_name"]
 expire_after_ms = 2000
 multiline_flush_interval_ms = 1000

Example fluentd config section

<label @MULTILINE_APP_LOGS>
  <match kubernetes.**>
    @type detect_exceptions
    remove_tag_prefix 'kubernetes'
    message message
    force_line_breaks true
    multiline_flush_interval .2
  </match>
</label>

10.4.5. Forwarding logs to Google Cloud

You can forward logs to Google Cloud Logging in addition to, or instead of, the internal default OpenShift Container Platform log store.

Note

Using this feature with Fluentd is not supported.

Prerequisites

  • Red Hat OpenShift Logging Operator 5.5.1 and later

Procedure

  1. Create a secret using your Google service account key.

    $ oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json=<your_service_account_key_file.json>
  2. Create a

    ClusterLogForwarder
    Custom Resource YAML using the template below:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> 
    1
    
      namespace: <log_forwarder_namespace> 
    2
    
    spec:
      serviceAccountName: <service_account_name> 
    3
    
      outputs:
        - name: gcp-1
          type: googleCloudLogging
          secret:
            name: gcp-secret
          googleCloudLogging:
            projectId : "openshift-gce-devel" 
    4
    
            logId : "app-gcp" 
    5
    
      pipelines:
        - name: test-app
          inputRefs: 
    6
    
            - application
          outputRefs:
            - gcp-1
    1
    In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2
    In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3
    The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4
    Set a projectId, folderId, organizationId, or billingAccountId field and its corresponding value, depending on where you want to store your logs in the Google Cloud resource hierarchy.
    5
    Set the value to add to the logName field of the Log Entry.
    6
    Specify which log types to forward by using the pipeline: application, infrastructure, or audit.

10.4.6. Forwarding logs to Splunk

You can forward logs to the Splunk HTTP Event Collector (HEC) in addition to, or instead of, the internal default OpenShift Container Platform log store.

Note

Using this feature with Fluentd is not supported.

Prerequisites

  • Red Hat OpenShift Logging Operator 5.6 or later
  • A
    ClusterLogging
    instance with
    vector
    specified as the collector
  • Base64 encoded Splunk HEC token

Procedure

  1. Create a secret using your Base64 encoded Splunk HEC token.

    $ oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>
  2. Create or edit the

    ClusterLogForwarder
    Custom Resource (CR) using the template below:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> 
    1
    
      namespace: <log_forwarder_namespace> 
    2
    
    spec:
      serviceAccountName: <service_account_name> 
    3
    
      outputs:
        - name: splunk-receiver 
    4
    
          secret:
            name: vector-splunk-secret 
    5
    
          type: splunk 
    6
    
          url: <http://your.splunk.hec.url:8088> 
    7
    
      pipelines: 
    8
    
        - inputRefs:
            - application
            - infrastructure
          name: 
    9
    
          outputRefs:
            - splunk-receiver 
    10
    1
    In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2
    In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3
    The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4
    Specify a name for the output.
    5
    Specify the name of the secret that contains your HEC token.
    6
    Specify the output type as splunk.
    7
    Specify the URL (including port) of your Splunk HEC.
    8
    Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    9
    Optional: Specify a name for the pipeline.
    10
    Specify the name of the output to use when forwarding logs with this pipeline.

10.4.7. Forwarding logs over HTTP

Forwarding logs over HTTP is supported for both the Fluentd and Vector log collectors. To enable HTTP forwarding, specify

http
as the output type in the
ClusterLogForwarder
custom resource (CR).

Procedure

  • Create or edit the

    ClusterLogForwarder
    CR using the template below:

    Example ClusterLogForwarder CR

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> 
    1
    
      namespace: <log_forwarder_namespace> 
    2
    
    spec:
      serviceAccountName: <service_account_name> 
    3
    
      outputs:
        - name: httpout-app
          type: http
          url: 
    4
    
          http:
            headers: 
    5
    
              h1: v1
              h2: v2
            method: POST
          secret:
            name: 
    6
    
          tls:
            insecureSkipVerify: 
    7
    
      pipelines:
        - name:
          inputRefs:
            - application
          outputRefs:
            - httpout-app 
    8

    1
    In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2
    In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3
    The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4
    Destination address for logs.
    5
    Additional headers to send with the log record.
    6
    Secret name for destination credentials.
    7
    Values are either true or false.
    8
    This value should be the same as the output name.

10.4.8. Forwarding to Azure Monitor Logs

With logging 5.9 and later, you can forward logs to Azure Monitor Logs in addition to, or instead of, the default log store. This functionality is provided by the Vector Azure Monitor Logs sink.

Prerequisites

  • You are familiar with how to administer and create a
    ClusterLogging
    custom resource (CR) instance.
  • You are familiar with how to administer and create a
    ClusterLogForwarder
    CR instance.
  • You understand the
    ClusterLogForwarder
    CR specifications.
  • You have basic familiarity with Azure services.
  • You have an Azure account configured for Azure Portal or Azure CLI access.
  • You have obtained your Azure Monitor Logs primary or the secondary security key.
  • You have determined which log types to forward.

To enable log forwarding to Azure Monitor Logs via the HTTP Data Collector API:

Create a secret with your shared key:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: openshift-logging
type: Opaque
data:
  shared_key: <your_shared_key> 
1
1
Must contain a primary or secondary key for the Log Analytics workspace making the request.

To obtain a shared key, you can use the following command in Azure PowerShell:

Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName "<resource_name>" -Name "<workspace_name>"

Create or edit your

ClusterLogForwarder
CR using the template matching your log selection.

Forward all logs

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: azure-monitor
    type: azureMonitor
    azureMonitor:
      customerId: my-customer-id 
1

      logType: my_log_type 
2

    secret:
       name: my-secret
  pipelines:
  - name: app-pipeline
    inputRefs:
    - application
    outputRefs:
    - azure-monitor

1
Unique identifier for the Log Analytics workspace. Required field.
2
Azure record type of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters.

Forward application and infrastructure logs

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: azure-monitor-app
    type: azureMonitor
    azureMonitor:
      customerId: my-customer-id
      logType: application_log 
1

    secret:
      name: my-secret
  - name: azure-monitor-infra
    type: azureMonitor
    azureMonitor:
      customerId: my-customer-id
      logType: infra_log
    secret:
      name: my-secret
  pipelines:
    - name: app-pipeline
      inputRefs:
      - application
      outputRefs:
      - azure-monitor-app
    - name: infra-pipeline
      inputRefs:
      - infrastructure
      outputRefs:
      - azure-monitor-infra

1
Azure record type of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters.

Advanced configuration options

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: azure-monitor
    type: azureMonitor
    azureMonitor:
      customerId: my-customer-id
      logType: my_log_type
      azureResourceId: "/subscriptions/111111111" 
1

      host: "ods.opinsights.azure.com" 
2

    secret:
       name: my-secret
  pipelines:
  - name: app-pipeline
    inputRefs:
    - application
    outputRefs:
    - azure-monitor

1
Resource ID of the Azure resource the data should be associated with. Optional field.
2
Alternative host for dedicated Azure regions. Optional field. Default value is ods.opinsights.azure.com. Default value for Azure Government is ods.opinsights.azure.us.

10.4.9. Forwarding application logs from specific projects

You can forward a copy of the application logs from specific projects to an external log aggregator, in addition to, or instead of, using the internal log store. You must also configure the external log aggregator to receive log data from OpenShift Container Platform.

To configure forwarding application logs from a project, you must create a

ClusterLogForwarder
custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs.

Prerequisites

  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure

  1. Create or edit a YAML file that defines the

    ClusterLogForwarder
    CR:

    Example ClusterLogForwarder CR

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance 
    1
    
      namespace: openshift-logging 
    2
    
    spec:
      outputs:
       - name: fluentd-server-secure 
    3
    
         type: fluentdForward 
    4
    
         url: 'tls://fluentdserver.security.example.com:24224' 
    5
    
         secret: 
    6
    
            name: fluentd-secret
       - name: fluentd-server-insecure
         type: fluentdForward
         url: 'tcp://fluentdserver.home.example.com:24224'
      inputs: 
    7
    
       - name: my-app-logs
         application:
            namespaces:
            - my-project 
    8
    
      pipelines:
       - name: forward-to-fluentd-insecure 
    9
    
         inputRefs: 
    10
    
         - my-app-logs
         outputRefs: 
    11
    
         - fluentd-server-insecure
         labels:
           project: "my-project" 
    12
    
       - name: forward-to-fluentd-secure 
    13
    
         inputRefs:
         - application 
    14
    
         - audit
         - infrastructure
         outputRefs:
         - fluentd-server-secure
         - default
         labels:
           clusterId: "C1234"

    1
    The name of the ClusterLogForwarder CR must be instance.
    2
    The namespace for the ClusterLogForwarder CR must be openshift-logging.
    3
    The name of the output.
    4
    The output type: elasticsearch, fluentdForward, syslog, or kafka.
    5
    The URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    6
    If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and have tls.crt, tls.key, and ca-bundle.crt keys that each point to the certificates they represent.
    7
    The configuration for an input to filter application logs from the specified projects.
    8
    If no namespace is specified, logs are collected from all namespaces.
    9
    The pipeline configuration directs logs from a named input to a named output. In this example, a pipeline named forward-to-fluentd-insecure forwards logs from an input named my-app-logs to an output named fluentd-server-insecure.
    10
    A list of inputs.
    11
    The name of the output to use.
    12
    Optional: String. One or more labels to add to the logs.
    13
    Configuration for a pipeline to send logs to other log aggregators.
    • Optional: Specify a name for the pipeline.
    • Specify which log types to forward by using the pipeline:
      application,
      infrastructure
      , or
      audit
      .
    • Specify the name of the output to use when forwarding logs with this pipeline.
    • Optional: Specify the
      default
      output to forward logs to the default log store.
    • Optional: String. One or more labels to add to the logs.
    14
    Note that application logs from all namespaces are collected when using this configuration.
  2. Apply the

    ClusterLogForwarder
    CR by running the following command:

    $ oc apply -f <filename>.yaml

10.4.10. Forwarding application logs from specific pods

As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector.

Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector.

To specify the pod labels, you use one or more

matchLabels
key-value pairs. If you specify multiple key-value pairs, the pods must match all of them to be selected.

Procedure

  1. Create or edit a YAML file that defines the

    ClusterLogForwarder
    CR object. In the file, specify the pod labels using simple equality-based selectors under
    inputs[].name.application.selector.matchLabels
    , as shown in the following example.

    Example ClusterLogForwarder CR YAML file

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> 
    1
    
      namespace: <log_forwarder_namespace> 
    2
    
    spec:
      pipelines:
        - inputRefs: [ myAppLogData ] 
    3
    
          outputRefs: [ default ] 
    4
    
      inputs: 
    5
    
        - name: myAppLogData
          application:
            selector:
              matchLabels: 
    6
    
                environment: production
                app: nginx
            namespaces: 
    7
    
            - app1
            - app2
      outputs: 
    8
    
        - <output_name>
        ...

    1
    In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2
    In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3
    Specify one or more comma-separated values from inputs[].name.
    4
    Specify one or more comma-separated values from outputs[].
    5
    Define a unique inputs[].name for each application that has a unique set of pod labels.
    6
    Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
    7
    Optional: Specify one or more namespaces.
    8
    Specify one or more outputs to forward your log data to.
  2. Optional: To restrict the gathering of log data to specific namespaces, use
    inputs[].name.application.namespaces
    , as shown in the preceding example.
  3. Optional: You can send log data from additional applications that have different pod labels to the same pipeline.

    1. For each unique combination of pod labels, create an additional
      inputs[].name
      section similar to the one shown.
    2. Update the
      selectors
      to match the pod labels of this application.
    3. Add the new

      inputs[].name
      value to
      inputRefs
      . For example:

      - inputRefs: [ myAppLogData, myOtherAppLogData ]
  4. Create the CR object:

    $ oc create -f <file-name>.yaml
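The optional steps above can be sketched as two inputs feeding one pipeline; the input names and label values are illustrative:

```yaml
spec:
  inputs:
    # One input per unique combination of pod labels.
    - name: myAppLogData
      application:
        selector:
          matchLabels:
            environment: production
            app: nginx
    - name: myOtherAppLogData
      application:
        selector:
          matchLabels:
            environment: production
            app: apache
  pipelines:
    # Both inputs feed the same pipeline.
    - inputRefs: [ myAppLogData, myOtherAppLogData ]
      outputRefs: [ default ]
```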

10.4.11. Overview of API audit filter

OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, which leads to large volumes of data. The API Audit filter uses rules to exclude non-essential events and reduce event size, producing a more manageable audit trail. Rules are checked in order; checking stops at the first match. How much data is included in an event is determined by the value of the

level
field:

  • None
    : The event is dropped.
  • Metadata
    : Audit metadata is included, request and response bodies are removed.
  • Request
    : Audit metadata and the request body are included, the response body is removed.
  • RequestResponse
    : All data is included: metadata, request body and response body. The response body can be very large. For example,
    oc get pods -A
    generates a response body containing the YAML description of every pod in the cluster.
Note

You can use this feature only if the Vector collector is set up in your logging deployment.

In logging 5.8 and later, the

ClusterLogForwarder
custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions:

Wildcards
Names of users, groups, namespaces, and resources can have a leading or trailing asterisk (*) character. For example, the namespace openshift-* matches openshift-apiserver or openshift-authentication, and the resource */status matches Pod/status or Deployment/status.
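The wildcard behavior described above can be sketched as rule fragments; the levels chosen are illustrative:

```yaml
rules:
  # Match events in any namespace whose name starts with "openshift-";
  # the trailing asterisk is a wildcard.
  - level: Metadata
    namespaces: ["openshift-*"]
  # Match the status subresource of any resource; the leading asterisk
  # is a wildcard.
  - level: None
    resources:
    - group: ""
      resources: ["*/status"]
```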
Default Rules

Events that do not match any rule in the policy are filtered as follows:

  • Read-only system events such as
    get
    ,
    list
    ,
    watch
    are dropped.
  • Service account write events that occur within the same namespace as the service account are dropped.
  • All other events are forwarded, subject to any configured rate limits.

To disable these defaults, either end your rules list with a rule that has only a

level
field or add an empty rule.

Omit Response Codes
A list of integer HTTP status codes to omit. You can drop events based on the HTTP status code in the response by setting the OmitResponseCodes field to a list of status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], no status codes are omitted.
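A sketch of a filter that overrides the default omitted status codes; the codes chosen are illustrative:

```yaml
filters:
  - name: my-policy
    type: kubeAPIAudit
    kubeAPIAudit:
      # Drop events only for responses with these status codes. Setting
      # this to an empty list, [], omits no status codes.
      omitResponseCodes: [404, 429]
```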

The

ClusterLogForwarder
CR audit policy acts in addition to the OpenShift Container Platform audit policy. The
ClusterLogForwarder
CR audit filter changes what the log collector forwards, and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store, and a less detailed stream to a remote site.

Note

The example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration.

Example audit policy

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - name: my-pipeline
      inputRefs: audit 
1

      filterRefs: my-policy 
2

      outputRefs: default
  filters:
    - name: my-policy
      type: kubeAPIAudit
      kubeAPIAudit:
        # Don't generate audit events for all requests in RequestReceived stage.
        omitStages:
          - "RequestReceived"

        rules:
          # Log pod changes at RequestResponse level
          - level: RequestResponse
            resources:
            - group: ""
              resources: ["pods"]

          # Log "pods/log", "pods/status" at Metadata level
          - level: Metadata
            resources:
            - group: ""
              resources: ["pods/log", "pods/status"]

          # Don't log requests to a configmap called "controller-leader"
          - level: None
            resources:
            - group: ""
              resources: ["configmaps"]
              resourceNames: ["controller-leader"]

          # Don't log watch requests by the "system:kube-proxy" on endpoints or services
          - level: None
            users: ["system:kube-proxy"]
            verbs: ["watch"]
            resources:
            - group: "" # core API group
              resources: ["endpoints", "services"]

          # Don't log authenticated requests to certain non-resource URL paths.
          - level: None
            userGroups: ["system:authenticated"]
            nonResourceURLs:
            - "/api*" # Wildcard matching.
            - "/version"

          # Log the request body of configmap changes in kube-system.
          - level: Request
            resources:
            - group: "" # core API group
              resources: ["configmaps"]
            # This rule only applies to resources in the "kube-system" namespace.
            # The empty string "" can be used to select non-namespaced resources.
            namespaces: ["kube-system"]

          # Log configmap and secret changes in all other namespaces at the Metadata level.
          - level: Metadata
            resources:
            - group: "" # core API group
              resources: ["secrets", "configmaps"]

          # Log all other resources in core and extensions at the Request level.
          - level: Request
            resources:
            - group: "" # core API group
            - group: "extensions" # Version of group should NOT be included.

          # A catch-all rule to log all other requests at the Metadata level.
          - level: Metadata

1
The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application.
2
The name of your audit policy.

10.4.12. Forwarding logs to an external Loki logging system

You can forward logs to an external Loki logging system in addition to, or instead of, the default log store.

To configure log forwarding to Loki, you must create a

ClusterLogForwarder
custom resource (CR) with an output to Loki, and a pipeline that uses the output. The output to Loki can use the HTTP (insecure) or HTTPS (secure HTTP) connection.

Prerequisites

  • You must have a Loki logging system running at the URL you specify with the
    url
    field in the CR.

Procedure

  1. Create or edit a YAML file that defines the

    ClusterLogForwarder
    CR object:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> 
    1
    
      namespace: <log_forwarder_namespace> 
    2
    
    spec:
      serviceAccountName: <service_account_name> 
    3
    
      outputs:
      - name: loki-insecure 
    4
    
        type: "loki" 
    5
    
        url: http://loki.insecure.com:3100 
    6
    
        loki:
          tenantKey: kubernetes.namespace_name
          labelKeys:
          - kubernetes.labels.foo
      - name: loki-secure 
    7
    
        type: "loki"
        url: https://loki.secure.com:3100
        secret:
          name: loki-secret 
    8
    
        loki:
          tenantKey: kubernetes.namespace_name 
    9
    
          labelKeys:
          - kubernetes.labels.foo 
    10
    
      pipelines:
      - name: application-logs 
    11
    
        inputRefs:  
    12
    
        - application
        - audit
        outputRefs: 
    13
    
        - loki-secure
    1. In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2. In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3. The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4. Specify a name for the output.
    5. Specify the type as "loki".
    6. Specify the URL and port of the Loki system as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. Loki's default port for HTTP(S) communication is 3100.
    7. For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret.
    8. For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificates it represents. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. In legacy implementations, the secret must exist in the openshift-logging project. For more information, see the following "Example: Setting a secret that contains a username and password."
    9. Optional: Specify a metadata key field to generate values for the TenantID field in Loki. For example, setting tenantKey: kubernetes.namespace_name uses the names of the Kubernetes namespaces as values for tenant IDs in Loki. To see which other log record fields you can specify, see the "Log Record Fields" link in the following "Additional resources" section.
    10. Optional: Specify a list of metadata field keys to replace the default Loki labels. Loki label names must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*. Illegal characters in metadata keys are replaced with _ to form the label name. For example, the kubernetes.labels.foo metadata key becomes Loki label kubernetes_labels_foo. If you do not set labelKeys, the default value is: [log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host]. Keep the set of labels small because Loki limits the size and number of labels allowed. See Configuring Loki, limits_config. You can still query based on any log record field using query filters.
    11. Optional: Specify a name for the pipeline.
    12. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    13. Specify the name of the output to use when forwarding logs with this pipeline.
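    The label sanitization rule described in callout 10 can be sketched in Python. This is a hypothetical helper for illustration only, not part of the Operator:

    ```python
    import re

    def to_loki_label(metadata_key: str) -> str:
        """Replace characters that are illegal in a Loki label name with '_'.

        Loki label names must match [a-zA-Z_:][a-zA-Z0-9_:]*, so every
        character outside that set is rewritten to an underscore.
        """
        label = re.sub(r"[^a-zA-Z0-9_:]", "_", metadata_key)
        # A leading digit is also illegal in a label name; guard it.
        if re.match(r"[0-9]", label):
            label = "_" + label
        return label

    print(to_loki_label("kubernetes.labels.foo"))  # kubernetes_labels_foo
    ```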
    Note

    Because Loki requires log streams to be correctly ordered by timestamp, labelKeys always includes the kubernetes_host label set, even if you do not specify it. This inclusion ensures that each stream originates from a single host, which prevents timestamps from becoming disordered due to clock differences on different hosts.

  2. Apply the ClusterLogForwarder CR object by running the following command:

    $ oc apply -f <filename>.yaml

10.4.13. Forwarding logs to an external Elasticsearch instance

You can forward logs to an external Elasticsearch instance in addition to, or instead of, the internal log store. You are responsible for configuring the external log aggregator to receive log data from OpenShift Container Platform.

To configure log forwarding to an external Elasticsearch instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection.

To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the default output to forward logs to the internal instance.

Note

If you only want to forward logs to an internal Elasticsearch instance, you do not need to create a ClusterLogForwarder CR.

Prerequisites

  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure

  1. Create or edit a YAML file that defines the ClusterLogForwarder CR:

    Example ClusterLogForwarder CR

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> # 1
      namespace: <log_forwarder_namespace> # 2
    spec:
      serviceAccountName: <service_account_name> # 3
      outputs:
      - name: elasticsearch-example # 4
        type: elasticsearch # 5
        elasticsearch:
          version: 8 # 6
        url: http://elasticsearch.example.com:9200 # 7
        secret:
          name: es-secret # 8
      pipelines:
      - name: application-logs # 9
        inputRefs: # 10
        - application
        - audit
        outputRefs:
        - elasticsearch-example # 11
        - default # 12
        labels:
          myLabel: "myValue" # 13
    # ...

    1. In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2. In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3. The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4. Specify a name for the output.
    5. Specify the elasticsearch type.
    6. Specify the Elasticsearch version. This can be 6, 7, or 8.
    7. Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    8. For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. In legacy implementations, the secret must exist in the openshift-logging project. For more information, see the following "Example: Setting a secret that contains a username and password."
    9. Optional: Specify a name for the pipeline.
    10. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    11. Specify the name of the output to use when forwarding logs with this pipeline.
    12. Optional: Specify the default output to send the logs to the internal Elasticsearch instance.
    13. Optional: String. One or more labels to add to the logs.
  2. Apply the ClusterLogForwarder CR:

    $ oc apply -f <filename>.yaml

Example: Setting a secret that contains a username and password

You can use a secret that contains a username and password to authenticate a secure connection to an external Elasticsearch instance.

For example, if you cannot use mutual TLS (mTLS) keys because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password.

  1. Create a Secret YAML file similar to the following example. Use base64-encoded values for the username and password fields. The secret type is opaque by default.

    apiVersion: v1
    kind: Secret
    metadata:
      name: openshift-test-secret
    data:
      username: <username>
      password: <password>
    # ...
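    The data values must be base64 encoded, as step 1 states. For example, in Python (the credentials here are placeholders, not values from the document):

    ```python
    import base64

    # Placeholder credentials; real values come from your Elasticsearch instance.
    username_b64 = base64.b64encode(b"admin").decode()
    password_b64 = base64.b64encode(b"changeme").decode()

    print(username_b64)  # YWRtaW4=
    print(password_b64)
    ```

    Paste the resulting strings into the username and password fields of the secret's data section.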
  2. Create the secret:

    $ oc create -f openshift-test-secret.yaml -n openshift-logging
  3. Specify the name of the secret in the ClusterLogForwarder CR:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      outputs:
       - name: elasticsearch
         type: "elasticsearch"
         url: https://elasticsearch.secure.com:9200
         secret:
            name: openshift-test-secret
    # ...
    Note

    In the value of the url field, the prefix can be http or https.

  4. Apply the CR object:

    $ oc apply -f <filename>.yaml

10.4.14. Forwarding logs using the Fluentd forward protocol

You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from OpenShift Container Platform.

To configure log forwarding using the forward protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the Fluentd servers, and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection.

Prerequisites

  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure

  1. Create or edit a YAML file that defines the ClusterLogForwarder CR object:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance # 1
      namespace: openshift-logging # 2
    spec:
      outputs:
      - name: fluentd-server-secure # 3
        type: fluentdForward # 4
        url: 'tls://fluentdserver.security.example.com:24224' # 5
        secret: # 6
          name: fluentd-secret
      - name: fluentd-server-insecure
        type: fluentdForward
        url: 'tcp://fluentdserver.home.example.com:24224'
      pipelines:
      - name: forward-to-fluentd-secure # 7
        inputRefs: # 8
        - application
        - audit
        outputRefs:
        - fluentd-server-secure # 9
        - default # 10
        labels:
          clusterId: "C1234" # 11
      - name: forward-to-fluentd-insecure # 12
        inputRefs:
        - infrastructure
        outputRefs:
        - fluentd-server-insecure
        labels:
          clusterId: "C1234"
    1. The name of the ClusterLogForwarder CR must be instance.
    2. The namespace for the ClusterLogForwarder CR must be openshift-logging.
    3. Specify a name for the output.
    4. Specify the fluentdForward type.
    5. Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    6. If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must contain a ca-bundle.crt key that points to the certificate it represents.
    7. Optional: Specify a name for the pipeline.
    8. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    9. Specify the name of the output to use when forwarding logs with this pipeline.
    10. Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
    11. Optional: String. One or more labels to add to the logs.
    12. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
        • A name to describe the pipeline.
        • The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.
        • The outputRefs is the name of the output to use.
        • Optional: String. One or more labels to add to the logs.
  2. Create the CR object:

    $ oc create -f <filename>.yaml

For Logstash to ingest log data from Fluentd, you must enable nanosecond precision in the Logstash configuration file.

Procedure

  • In the Logstash configuration file, set nanosecond_precision to true.

Example Logstash configuration file

input {
  tcp {
    codec => fluent {
      nanosecond_precision => true
    }
    port => 24114
  }
}
filter { }
output {
  stdout { codec => rubydebug }
}

10.4.15. Forwarding logs using the syslog protocol

You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OpenShift Container Platform.

To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection.

Prerequisites

  • You must have a logging server that is configured to receive the logging data using the specified protocol or format.

Procedure

  1. Create or edit a YAML file that defines the ClusterLogForwarder CR object:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> # 1
      namespace: <log_forwarder_namespace> # 2
    spec:
      serviceAccountName: <service_account_name> # 3
      outputs:
      - name: rsyslog-east # 4
        type: syslog # 5
        syslog: # 6
          facility: local0
          rfc: RFC3164
          payloadKey: message
          severity: informational
        url: 'tls://rsyslogserver.east.example.com:514' # 7
        secret: # 8
          name: syslog-secret
      - name: rsyslog-west
        type: syslog
        syslog:
          appName: myapp
          facility: user
          msgID: mymsg
          procID: myproc
          rfc: RFC5424
          severity: debug
        url: 'tcp://rsyslogserver.west.example.com:514'
      pipelines:
      - name: syslog-east # 9
        inputRefs: # 10
        - audit
        - application
        outputRefs: # 11
        - rsyslog-east
        - default # 12
        labels:
          secure: "true" # 13
          syslog: "east"
      - name: syslog-west # 14
        inputRefs:
        - infrastructure
        outputRefs:
        - rsyslog-west
        - default
        labels:
          syslog: "west"
    1. In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2. In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3. The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4. Specify a name for the output.
    5. Specify the syslog type.
    6. Optional: Specify the syslog parameters, listed below.
    7. Specify the URL and port of the external syslog instance. You can use the udp (insecure), tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    8. If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. In legacy implementations, the secret must exist in the openshift-logging project.
    9. Optional: Specify a name for the pipeline.
    10. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    11. Specify the name of the output to use when forwarding logs with this pipeline.
    12. Optional: Specify the default output to forward logs to the internal Elasticsearch instance.
    13. Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
    14. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
        • A name to describe the pipeline.
        • The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.
        • The outputRefs is the name of the output to use.
        • Optional: String. One or more labels to add to the logs.
  2. Create the CR object:

    $ oc create -f <filename>.yaml

10.4.15.1. Adding log source information to message output

You can add namespace_name, pod_name, and container_name elements to the message field of the record by adding the AddLogSource field to your ClusterLogForwarder custom resource (CR).

  spec:
    outputs:
    - name: syslogout
      syslog:
        addLogSource: true
        facility: user
        payloadKey: message
        rfc: RFC3164
        severity: debug
        tag: mytag
      type: syslog
      url: tls://syslog-receiver.openshift-logging.svc:24224
    pipelines:
    - inputRefs:
      - application
      name: test-app
      outputRefs:
      - syslogout
Note

This configuration is compatible with both RFC3164 and RFC5424.

Example syslog message output without AddLogSource

<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - -  {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56}

Example syslog message output with AddLogSource

<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - -  namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76}
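The transformation shown in the two example messages can be sketched as follows. This is a hypothetical illustration of how the source fields are prepended to the message field, not the collector's actual implementation:

```python
def add_log_source(record: dict) -> str:
    """Prepend namespace_name, pod_name, and container_name to the message,
    matching the 'key=value,...' layout shown in the example output."""
    k8s = record["kubernetes"]
    prefix = (
        f"namespace_name={k8s['namespace_name']},"
        f"pod_name={k8s['pod_name']},"
        f"container_name={k8s['container_name']},"
    )
    return prefix + "message=" + record["message"]

record = {
    "kubernetes": {
        "namespace_name": "clo-test-6327",
        "pod_name": "log-generator-ff9746c49-qxm7l",
        "container_name": "log-generator",
    },
    "message": '{"msgcontent":"My life is my message"}',
}
print(add_log_source(record))
```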

10.4.15.2. Syslog parameters

You can configure the following parameters for the syslog outputs. For more information, see RFC 3164 or RFC 5424.

  • facility: The syslog facility. The value can be a decimal integer or a case-insensitive keyword:

    • 0
      or
      kern
      for kernel messages
    • 1
      or
      user
      for user-level messages, the default.
    • 2
      or
      mail
      for the mail system
    • 3
      or
      daemon
      for system daemons
    • 4
      or
      auth
      for security/authentication messages
    • 5
      or
      syslog
      for messages generated internally by syslogd
    • 6
      or
      lpr
      for the line printer subsystem
    • 7
      or
      news
      for the network news subsystem
    • 8
      or
      uucp
      for the UUCP subsystem
    • 9
      or
      cron
      for the clock daemon
    • 10
      or
      authpriv
      for security authentication messages
    • 11
      or
      ftp
      for the FTP daemon
    • 12
      or
      ntp
      for the NTP subsystem
    • 13
      or
      security
      for the syslog audit log
    • 14
      or
      console
      for the syslog alert log
    • 15
      or
      solaris-cron
      for the scheduling daemon
    • 16
      23
      or
      local0
      local7
      for locally used facilities
  • Optional: payloadKey: The record field to use as payload for the syslog message.

    Note

    Configuring the payloadKey parameter prevents other parameters from being forwarded to the syslog.

  • rfc: The RFC to be used for sending logs using syslog. The default is RFC5424.
  • severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword:

    • 0
      or
      Emergency
      for messages indicating the system is unusable
    • 1
      or
      Alert
      for messages indicating action must be taken immediately
    • 2
      or
      Critical
      for messages indicating critical conditions
    • 3
      or
      Error
      for messages indicating error conditions
    • 4
      or
      Warning
      for messages indicating warning conditions
    • 5
      or
      Notice
      for messages indicating normal but significant conditions
    • 6
      or
      Informational
      for messages indicating informational messages
    • 7
      or
      Debug
      for messages indicating debug-level messages, the default
  • tag: Tag specifies a record field to use as a tag on the syslog message.
  • trimPrefix: Remove the specified prefix from the tag.
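The facility and severity values combine into the PRI value at the start of each syslog message: PRI = facility × 8 + severity, per RFC 5424. A small sketch (the lookup tables are abbreviated for illustration):

```python
# Abbreviated facility and severity tables from the lists above.
FACILITIES = {"kern": 0, "user": 1, "mail": 2, "daemon": 3, "local0": 16}
SEVERITIES = {"emergency": 0, "alert": 1, "critical": 2, "error": 3,
              "warning": 4, "notice": 5, "informational": 6, "debug": 7}

def pri(facility: str, severity: str) -> int:
    """Compute the syslog PRI value: facility * 8 + severity (RFC 5424)."""
    return FACILITIES[facility.lower()] * 8 + SEVERITIES[severity.lower()]

# The AddLogSource example messages above begin with <15>:
# facility user (1) and severity debug (7), so 1 * 8 + 7 = 15.
print(pri("user", "debug"))  # 15
print(pri("local0", "informational"))
```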

10.4.15.3. Additional RFC5424 syslog parameters

The following parameters apply to RFC5424:

  • appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for RFC5424.
  • msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for RFC5424.
  • procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for RFC5424.

10.4.16. Forwarding logs to a Kafka broker

You can forward logs to an external Kafka broker in addition to, or instead of, the default log store.

To configure log forwarding to an external Kafka instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default. The Kafka output can use a TCP (insecure) or TLS (secure TCP) connection.

Procedure

  1. Create or edit a YAML file that defines the ClusterLogForwarder CR object:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> # 1
      namespace: <log_forwarder_namespace> # 2
    spec:
      serviceAccountName: <service_account_name> # 3
      outputs:
      - name: app-logs # 4
        type: kafka # 5
        url: tls://kafka.example.devlab.com:9093/app-topic # 6
        secret:
          name: kafka-secret # 7
      - name: infra-logs
        type: kafka
        url: tcp://kafka.devlab2.example.com:9093/infra-topic # 8
      - name: audit-logs
        type: kafka
        url: tls://kafka.qelab.example.com:9093/audit-topic
        secret:
          name: kafka-secret-qe
      pipelines:
      - name: app-topic # 9
        inputRefs: # 10
        - application
        outputRefs: # 11
        - app-logs
        labels:
          logType: "application" # 12
      - name: infra-topic # 13
        inputRefs:
        - infrastructure
        outputRefs:
        - infra-logs
        labels:
          logType: "infra"
      - name: audit-topic
        inputRefs:
        - audit
        outputRefs:
        - audit-logs
        labels:
          logType: "audit"
    1. In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2. In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3. The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4. Specify a name for the output.
    5. Specify the kafka type.
    6. Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
    7. If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. In legacy implementations, the secret must exist in the openshift-logging project.
    8. Optional: To send an insecure output, use a tcp prefix in front of the URL. Also omit the secret key and its name from this output.
    9. Optional: Specify a name for the pipeline.
    10. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    11. Specify the name of the output to use when forwarding logs with this pipeline.
    12. Optional: String. One or more labels to add to the logs.
    13. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
        • A name to describe the pipeline.
        • The inputRefs is the log type to forward by using the pipeline: application, infrastructure, or audit.
        • The outputRefs is the name of the output to use.
        • Optional: String. One or more labels to add to the logs.
  2. Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in the following example:

    # ...
    spec:
      outputs:
      - name: app-logs
        type: kafka
        secret:
          name: kafka-secret-dev
        kafka: # 1
          brokers: # 2
          - tls://kafka-broker1.example.com:9093/
          - tls://kafka-broker2.example.com:9093/
          topic: app-topic # 3
    # ...
    1. Specify a kafka key that has brokers and topic keys.
    2. Use the brokers key to specify an array of one or more brokers.
    3. Use the topic key to specify the target topic that receives the logs.
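    A Kafka output URL carries the topic in its path component. How such a URL decomposes can be sketched with Python's urllib; this is purely illustrative, not how the collector parses it:

    ```python
    from urllib.parse import urlparse

    def parse_kafka_url(url: str):
        """Split a Kafka output URL into protocol, host, port, and topic."""
        parts = urlparse(url)
        topic = parts.path.lstrip("/") or None  # no path means the default topic
        return parts.scheme, parts.hostname, parts.port, topic

    print(parse_kafka_url("tls://kafka.example.devlab.com:9093/app-topic"))
    ```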
  3. Apply the ClusterLogForwarder CR by running the following command:

    $ oc apply -f <filename>.yaml

10.4.17. Forwarding logs to Amazon CloudWatch

You can forward logs to Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). You can forward logs to CloudWatch in addition to, or instead of, the default log store.

To configure log forwarding to CloudWatch, you must create a ClusterLogForwarder custom resource (CR) with an output for CloudWatch, and a pipeline that uses the output.

Procedure

  1. Create a Secret YAML file that uses the aws_access_key_id and aws_secret_access_key fields to specify your base64-encoded AWS credentials. For example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: cw-secret
      namespace: openshift-logging
    data:
      aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK
      aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=
  2. Create the secret. For example:

    $ oc apply -f cw-secret.yaml
  3. Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the name of the secret. For example:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> # 1
      namespace: <log_forwarder_namespace> # 2
    spec:
      serviceAccountName: <service_account_name> # 3
      outputs:
      - name: cw # 4
        type: cloudwatch # 5
        cloudwatch:
          groupBy: logType # 6
          groupPrefix: <group prefix> # 7
          region: us-east-2 # 8
        secret:
          name: cw-secret # 9
      pipelines:
      - name: infra-logs # 10
        inputRefs: # 11
        - infrastructure
        - audit
        - application
        outputRefs:
        - cw # 12
    1. In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2. In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3. The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4. Specify a name for the output.
    5. Specify the cloudwatch type.
    6. Optional: Specify how to group the logs:
        • logType creates log groups for each log type.
        • namespaceName creates a log group for each application namespace. It also creates separate log groups for infrastructure and audit logs.
        • namespaceUUID creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs.
    7. Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups.
    8. Specify the AWS region.
    9. Specify the name of the secret that contains your AWS credentials.
    10. Optional: Specify a name for the pipeline.
    11. Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    12. Specify the name of the output to use when forwarding logs with this pipeline.
  4. Create the CR object:

    $ oc create -f <filename>.yaml

Example: Using ClusterLogForwarder with Amazon CloudWatch

Here, you see an example ClusterLogForwarder custom resource (CR) and the log data that it outputs to Amazon CloudWatch.

Suppose that you are running an OpenShift Container Platform cluster named mycluster. The following command returns the cluster's infrastructureName, which you will use to compose aws commands later on:

$ oc get Infrastructure/cluster -ojson | jq .status.infrastructureName
"mycluster-7977k"

To generate log data for this example, you run a busybox pod in a namespace called app. The busybox pod writes a message to stdout every three seconds:

$ oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done'
$ oc logs -f busybox

Example output

My life is my message
My life is my message
My life is my message
...

You can look up the UUID of the app namespace where the busybox pod runs:

$ oc get ns/app -ojson | jq .metadata.uid
"794e1e1a-b9f5-4958-a190-e76a9b53d7bf"

In your ClusterLogForwarder custom resource (CR), you configure the infrastructure, audit, and application log types as inputs to the all-logs pipeline. You also connect this pipeline to the cw output, which forwards the logs to a CloudWatch instance in the us-east-2 region:

apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
   - name: cw
     type: cloudwatch
     cloudwatch:
       groupBy: logType
       region: us-east-2
     secret:
        name: cw-secret
  pipelines:
    - name: all-logs
      inputRefs:
        - infrastructure
        - audit
        - application
      outputRefs:
        - cw

Each region in CloudWatch contains three levels of objects:

  • log group

    • log stream

      • log event

With groupBy: logType in the ClusterLogForwarder CR, the three log types in the inputRefs produce three log groups in Amazon CloudWatch:

$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.application"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"

Each of the log groups contains log streams:

$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName
"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log"
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log"
"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log"
...
$ aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log"
"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log"
...

Each log stream contains log events. To see a log event from the

busybox
pod, you specify its log stream from the
application
log group:

$ aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log
{
    "events": [
        {
            "timestamp": 1629422704178,
            "message": "{\"docker\":{\"container_id\":\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\"},\"kubernetes\":{\"container_name\":\"busybox\",\"namespace_name\":\"app\",\"pod_name\":\"busybox\",\"container_image\":\"docker.io/library/busybox:latest\",\"container_image_id\":\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\",\"pod_id\":\"870be234-90a3-4258-b73f-4f4d6e2777c7\",\"host\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"labels\":{\"run\":\"busybox\"},\"master_url\":\"https://kubernetes.default.svc\",\"namespace_id\":\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\",\"namespace_labels\":{\"kubernetes_io/metadata_name\":\"app\"}},\"message\":\"My life is my message\",\"level\":\"unknown\",\"hostname\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"pipeline_metadata\":{\"collector\":{\"ipaddr4\":\"10.0.216.3\",\"inputname\":\"fluent-plugin-systemd\",\"name\":\"fluentd\",\"received_at\":\"2021-08-20T01:25:08.085760+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-20T01:25:04.178986+00:00\",\"viaq_index_name\":\"app-write\",\"viaq_msg_id\":\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\",\"log_type\":\"application\",\"time\":\"2021-08-20T01:25:04+00:00\"}",
            "ingestionTime": 1629422744016
        },
...
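
The message field of each log event is itself a JSON-encoded string. As a minimal sketch, you can decode it with jq to recover the structured record; the sample event below is abbreviated from the output above:

```shell
# Decode the JSON-encoded "message" string inside a CloudWatch log event.
# The sample event is abbreviated from the get-log-events output above.
event='{"timestamp":1629422704178,"message":"{\"kubernetes\":{\"namespace_name\":\"app\",\"pod_name\":\"busybox\"},\"message\":\"My life is my message\",\"log_type\":\"application\"}"}'
echo "$event" | jq -r '.message | fromjson | .message'
```

In practice, you can pipe the full aws logs get-log-events output through the same filter, for example jq -r '.events[].message | fromjson | .message'.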

Example: Customizing the prefix in log group names

In the log group names, you can replace the default

infrastructureName
prefix,
mycluster-7977k
, with an arbitrary string like
demo-group-prefix
. To make this change, you update the
groupPrefix
field in the
ClusterLogForwarder
CR:

cloudwatch:
    groupBy: logType
    groupPrefix: demo-group-prefix
    region: us-east-2

The value of

groupPrefix
replaces the default
infrastructureName
prefix:

$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"demo-group-prefix.application"
"demo-group-prefix.audit"
"demo-group-prefix.infrastructure"

Example: Naming log groups after application namespace names

For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the name of the application namespace.

If you delete an application namespace object and create a new one that has the same name, CloudWatch continues using the same log group as before.

If you consider successive application namespace objects that have the same name as equivalent to each other, use the approach described in this example. Otherwise, if you need to distinguish the resulting log groups from each other, see the following "Example: Naming log groups after application namespace UUIDs" section instead.

To create application log groups whose names are based on the names of the application namespaces, you set the value of the

groupBy
field to
namespaceName
in the
ClusterLogForwarder
CR:

cloudwatch:
    groupBy: namespaceName
    region: us-east-2

Setting

groupBy
to
namespaceName
affects the application log group only. It does not affect the
audit
and
infrastructure
log groups.

In Amazon CloudWatch, the namespace name appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new

mycluster-7977k.app
log group instead of
mycluster-7977k.application
:

$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.app"
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"

If the cluster in this example had contained multiple application namespaces, the output would show multiple log groups, one for each namespace.

Example: Naming log groups after application namespace UUIDs

For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the UUID of the application namespace.

If you delete an application namespace object and create a new one, CloudWatch creates a new log group.

If you consider successive application namespace objects with the same name as different from each other, use the approach described in this example. Otherwise, see the preceding "Example: Naming log groups after application namespace names" section instead.

To name log groups after application namespace UUIDs, you set the value of the

groupBy
field to
namespaceUUID
in the
ClusterLogForwarder
CR:

cloudwatch:
    groupBy: namespaceUUID
    region: us-east-2

In Amazon CloudWatch, the namespace UUID appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new

mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf
log group instead of
mycluster-7977k.application
:

$ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf" // uid of the "app" namespace
"mycluster-7977k.audit"
"mycluster-7977k.infrastructure"

The

groupBy
field affects the application log group only. It does not affect the
audit
and
infrastructure
log groups.
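
Combining the namespace UUID lookup from the beginning of this section with the groupBy: namespaceUUID rule, you can predict the log group name for a given namespace. This is a sketch using the example values from above:

```shell
# Predict the CloudWatch log group name for an application namespace
# when groupBy is set to namespaceUUID.
prefix="mycluster-7977k"   # default infrastructureName prefix (example value from above)
ns_uid="794e1e1a-b9f5-4958-a190-e76a9b53d7bf"   # from: oc get ns/app -ojson | jq -r .metadata.uid
echo "${prefix}.${ns_uid}"
```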

10.4.18. Creating a secret for AWS CloudWatch with an existing AWS role

If you have an existing role for AWS, you can create a secret for AWS with STS using the

oc create secret --from-literal
command.

Procedure

  • In the CLI, enter the following to generate a secret for AWS:

    $ oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions

    Example Secret

    apiVersion: v1
    kind: Secret
    metadata:
      namespace: openshift-logging
      name: cw-sts-secret
    stringData:
      role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions
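
The role_arn value must be a well-formed IAM role ARN. As a hypothetical helper for illustration (not a product command), you can sanity-check the general arn:aws:iam::<account_id>:role/<name> shape before creating the secret:

```shell
# Check that a role_arn value matches the arn:aws:iam::<account_id>:role/<name> shape.
role_arn="arn:aws:iam::123456789012:role/my-role_with-permissions"
if echo "$role_arn" | grep -Eq '^arn:aws:iam::[0-9]{12}:role/.+$'; then
  echo "role_arn format looks valid"
else
  echo "unexpected role_arn format" >&2
fi
```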

10.4.19. Forwarding logs to Amazon CloudWatch from STS enabled clusters

For clusters with AWS Security Token Service (STS) enabled, you can create an AWS service account manually or create a credentials request by using the Cloud Credential Operator (CCO) utility

ccoctl
.

Prerequisites

  • Logging for Red Hat OpenShift: 5.5 and later

Procedure

  1. Create a

    CredentialsRequest
    custom resource YAML by using the template below:

    CloudWatch credentials request template

    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: <your_role_name>-credrequest
      namespace: openshift-cloud-credential-operator
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: AWSProviderSpec
        statementEntries:
          - action:
              - logs:PutLogEvents
              - logs:CreateLogGroup
              - logs:PutRetentionPolicy
              - logs:CreateLogStream
              - logs:DescribeLogGroups
              - logs:DescribeLogStreams
            effect: Allow
            resource: arn:aws:logs:*:*:*
      secretRef:
        name: <your_role_name>
        namespace: openshift-logging
      serviceAccountNames:
        - logcollector

  2. Use the

    ccoctl
    command to create a role for AWS using your
    CredentialsRequest
    CR. With the
    CredentialsRequest
    object, this
    ccoctl
    command creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy that grants permissions to perform operations on CloudWatch resources. This command also creates a YAML configuration file in
    /<path_to_ccoctl_output_dir>/manifests/openshift-logging-<your_role_name>-credentials.yaml
    . This secret file contains the
    role_arn
    key/value used during authentication with the AWS IAM identity provider.

    $ ccoctl aws create-iam-roles \
    --name=<name> \
    --region=<aws_region> \
    --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \
    --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com 
    1
    1
    <name> is the name used to tag your cloud resources. It should match the name used during your STS cluster install.
  3. Apply the secret created:

    $ oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml
  4. Create or edit a

    ClusterLogForwarder
    custom resource:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: <log_forwarder_name> 
    1
    
      namespace: <log_forwarder_namespace> 
    2
    
    spec:
      serviceAccountName: clf-collector 
    3
    
      outputs:
       - name: cw 
    4
    
         type: cloudwatch 
    5
    
         cloudwatch:
           groupBy: logType 
    6
    
           groupPrefix: <group prefix> 
    7
    
           region: us-east-2 
    8
    
         secret:
            name: <your_secret_name> 
    9
    
      pipelines:
        - name: to-cloudwatch 
    10
    
          inputRefs: 
    11
    
            - infrastructure
            - audit
            - application
          outputRefs:
            - cw 
    12
    1
    In legacy implementations, the CR name must be instance. In multi log forwarder implementations, you can use any name.
    2
    In legacy implementations, the CR namespace must be openshift-logging. In multi log forwarder implementations, you can use any namespace.
    3
    Specify the clf-collector service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace.
    4
    Specify a name for the output.
    5
    Specify the cloudwatch type.
    6
    Optional: Specify how to group the logs:
    • logType
      creates log groups for each log type.
    • namespaceName
      creates a log group for each application namespace. Infrastructure and audit logs are unaffected, remaining grouped by
      logType
      .
    • namespaceUUID
      creates a new log group for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs.
    7
    Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups.
    8
    Specify the AWS region.
    9
    Specify the name of the secret that contains your AWS credentials.
    10
    Optional: Specify a name for the pipeline.
    11
    Specify which log types to forward by using the pipeline: application, infrastructure, or audit.
    12
    Specify the name of the output to use when forwarding logs with this pipeline.

10.5. Configuring the logging collector

Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. All supported modifications to the log collector can be performed through the

spec.collection
stanza in the
ClusterLogging
custom resource (CR).

10.5.1. Configuring the log collector

You can configure which log collector type your logging uses by modifying the

ClusterLogging
custom resource (CR).

Note

Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (
    oc
    ).
  • You have installed the Red Hat OpenShift Logging Operator.
  • You have created a
    ClusterLogging
    CR.

Procedure

  1. Modify the

    ClusterLogging
    CR
    collection
    spec:

    ClusterLogging CR example

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
    # ...
    spec:
    # ...
      collection:
        type: <log_collector_type> 
    1
    
        resources: {}
        tolerations: {}
    # ...

    1
    The log collector type you want to use for the logging. This can be vector or fluentd.
  2. Apply the

    ClusterLogging
    CR by running the following command:

    $ oc apply -f <filename>.yaml

10.5.2. Creating a LogFileMetricExporter resource

In logging version 5.8 and later, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a

LogFileMetricExporter
custom resource (CR) to generate metrics from the logs produced by running containers.

If you do not create the

LogFileMetricExporter
CR, you may see a No datapoints found message in the OpenShift Container Platform web console dashboard for Produced Logs.

Prerequisites

  • You have administrator permissions.
  • You have installed the Red Hat OpenShift Logging Operator.
  • You have installed the OpenShift CLI (
    oc
    ).

Procedure

  1. Create a

    LogFileMetricExporter
    CR as a YAML file:

    Example LogFileMetricExporter CR

    apiVersion: logging.openshift.io/v1alpha1
    kind: LogFileMetricExporter
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      nodeSelector: {} 
    1
    
      resources: 
    2
    
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 128Mi
      tolerations: [] 
    3
    
    # ...

    1
    Optional: The nodeSelector stanza defines which nodes the pods are scheduled on.
    2
    The resources stanza defines resource requirements for the LogFileMetricExporter CR.
    3
    Optional: The tolerations stanza defines the tolerations that the pods accept.
  2. Apply the

    LogFileMetricExporter
    CR by running the following command:

    $ oc apply -f <filename>.yaml

Verification

A

logfilesmetricexporter
pod runs concurrently with a
collector
pod on each node.

  • Verify that the

    logfilesmetricexporter
    pods are running in the namespace where you have created the
    LogFileMetricExporter
    CR, by running the following command and observing the output:

    $ oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging

    Example output

    NAME                           READY   STATUS    RESTARTS   AGE
    logfilesmetricexporter-9qbjj   1/1     Running   0          2m46s
    logfilesmetricexporter-cbc4v   1/1     Running   0          2m46s

10.5.3. Configure log collector CPU and memory limits

The log collector allows for adjustments to both the CPU and memory limits.

Procedure

  • Edit the

    ClusterLogging
    custom resource (CR) in the
    openshift-logging
    project:

    $ oc -n openshift-logging edit ClusterLogging instance
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      collection:
        type: fluentd
        resources:
          limits: 
    1
    
            memory: 736Mi
          requests:
            cpu: 100m
            memory: 736Mi
    # ...
    1
    Specify the CPU and memory limits and requests as needed. The values shown are the default values.

10.5.4. Configuring input receivers

The Red Hat OpenShift Logging Operator deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. The service name is generated based on the following:

  • For multi log forwarder
    ClusterLogForwarder
    CR deployments, the service name is in the format
    <ClusterLogForwarder_CR_name>-<input_name>
    . For example,
    example-http-receiver
    .
  • For legacy
    ClusterLogForwarder
    CR deployments, meaning those named
    instance
    and located in the
    openshift-logging
    namespace, the service name is in the format
    collector-<input_name>
    . For example,
    collector-http-receiver
    .
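
The two naming rules above can be sketched as a small shell helper. This is a hypothetical function for illustration only, not a product command:

```shell
# Derive the generated service name from a ClusterLogForwarder CR name,
# its namespace, and an input receiver name, following the rules above.
service_name() {
  local cr_name=$1 cr_namespace=$2 input_name=$3
  if [ "$cr_name" = "instance" ] && [ "$cr_namespace" = "openshift-logging" ]; then
    echo "collector-${input_name}"      # legacy deployment
  else
    echo "${cr_name}-${input_name}"     # multi log forwarder deployment
  fi
}
service_name example openshift-logging http-receiver   # example-http-receiver
service_name instance openshift-logging http-receiver  # collector-http-receiver
```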

You can configure your log collector to listen for HTTP connections and receive audit logs as an HTTP server by specifying

http
as a receiver input in the
ClusterLogForwarder
custom resource (CR). This enables you to use a common log store for audit logs that are collected from both inside and outside of your OpenShift Container Platform cluster.

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (
    oc
    ).
  • You have installed the Red Hat OpenShift Logging Operator.
  • You have created a
    ClusterLogForwarder
    CR.

Procedure

  1. Modify the

    ClusterLogForwarder
    CR to add configuration for the
    http
    receiver input:

    Example ClusterLogForwarder CR if you are using a multi log forwarder deployment

    apiVersion: logging.openshift.io/v1beta1
    kind: ClusterLogForwarder
    metadata:
    # ...
    spec:
      serviceAccountName: <service_account_name>
      inputs:
        - name: http-receiver 
    1
    
          receiver:
            type: http 
    2
    
            http:
              format: kubeAPIAudit 
    3
    
              port: 8443 
    4
    
      pipelines: 
    5
    
        - name: http-pipeline
          inputRefs:
            - http-receiver
    # ...

    1
    Specify a name for your input receiver.
    2
    Specify the input receiver type as http.
    3
    Currently, only the kube-apiserver webhook format is supported for http input receivers.
    4
    Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535. The default value is 8443 if this is not specified.
    5
    Configure a pipeline for your input receiver.

    Example ClusterLogForwarder CR if you are using a legacy deployment

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      inputs:
        - name: http-receiver 
    1
    
          receiver:
            type: http 
    2
    
            http:
              format: kubeAPIAudit 
    3
    
              port: 8443 
    4
    
      pipelines: 
    5
    
      - inputRefs:
        - http-receiver
        name: http-pipeline
    # ...

    1
    Specify a name for your input receiver.
    2
    Specify the input receiver type as http.
    3
    Currently, only the kube-apiserver webhook format is supported for http input receivers.
    4
    Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535. The default value is 8443 if this is not specified.
    5
    Configure a pipeline for your input receiver.
  2. Apply the changes to the

    ClusterLogForwarder
    CR by running the following command:

    $ oc apply -f <filename>.yaml
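
Once the receiver service is available, a client can POST audit records to it. The following sketch builds a kube-apiserver-style payload and validates that it parses; the collector-http-receiver service name follows the legacy naming rule described earlier, and the delivery command is shown commented out because it assumes a reachable cluster and TLS setup:

```shell
# Build a kube-apiserver webhook-format (audit.k8s.io/v1 EventList) payload.
payload='{"kind":"EventList","apiVersion":"audit.k8s.io/v1","items":[{"level":"Metadata","auditID":"00000000-demo","stage":"ResponseComplete","verb":"list","requestURI":"/api/v1/namespaces"}]}'

# Validate that the payload is well-formed JSON of the expected kind.
echo "$payload" | jq -r .kind

# Hypothetical delivery command (requires cluster connectivity):
# curl -k -X POST -H "Content-Type: application/json" \
#   -d "$payload" https://collector-http-receiver.openshift-logging.svc:8443
```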

10.5.5. Advanced configuration for the Fluentd log forwarder

Note

Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.

Logging includes multiple Fluentd parameters that you can use for tuning the performance of the Fluentd log forwarder. With these parameters, you can change the following Fluentd behaviors:

  • Chunk and chunk buffer sizes
  • Chunk flushing behavior
  • Chunk forwarding retry behavior

Fluentd collects log data in a single blob called a chunk. When Fluentd creates a chunk, the chunk is considered to be in the stage, where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the queue, where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured.

By default in OpenShift Container Platform, Fluentd uses the exponential backoff method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the periodic retry method instead, which retries flushing the chunks at a specified interval.

These parameters can help you determine the trade-offs between latency and throughput.

  • To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system.
  • To optimize for low latency, you could use the parameters to send data as soon as possible, avoid the build-up of batches, have shorter queues and buffers, and use more frequent flush and retries.
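
For example, the low-latency guidance above might translate into the following buffer settings in the ClusterLogging CR. The parameters are described in the table later in this section; the values are illustrative assumptions, not recommendations:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    fluentd:
      buffer:
        flushMode: immediate    # flush as soon as data is added to a chunk
        flushThreadCount: 4     # extra threads to drain the queue quickly
        chunkLimitSize: 1m      # smaller chunks are sent sooner
        totalLimitSize: 16m     # shorter buffer, less batching
        retryType: periodic     # retry at a fixed interval instead of backing off
        retryWait: 1s
# ...
```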

You can configure the chunking and flushing behavior using the following parameters in the

ClusterLogging
custom resource (CR). The parameters are then automatically added to the Fluentd config map for use by Fluentd.

Note

These parameters are:

  • Not relevant to most users. The default settings should give good general performance.
  • Only for advanced users with detailed knowledge of Fluentd configuration and performance.
  • Only for performance tuning. They have no effect on functional aspects of logging.
Expand
Table 10.11. Advanced Fluentd Configuration Parameters
ParameterDescriptionDefault

chunkLimitSize

The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk.

8m

totalLimitSize

The maximum size of the buffer, which is the total size of the stage and the queue. If the buffer size exceeds this value, Fluentd stops adding data to chunks and fails with an error. All data not in chunks is lost.

Approximately 15% of the node disk distributed across all outputs.

flushInterval

The interval between chunk flushes. You can use

s
(seconds),
m
(minutes),
h
(hours), or
d
(days).

1s

flushMode

The method to perform flushes:

  • lazy
    : Flush chunks based on the
    timekey
    parameter. You cannot modify the
    timekey
    parameter.
  • interval
    : Flush chunks based on the
    flushInterval
    parameter.
  • immediate
    : Flush chunks immediately after data is added to a chunk.

interval

flushThreadCount

The number of threads that perform chunk flushing. Increasing the number of threads improves the flush throughput, which hides network latency.

2

overflowAction

The chunking behavior when the queue is full:

  • throw_exception
    : Raise an exception to show in the log.
  • block
    : Stop data chunking until the full buffer issue is resolved.
  • drop_oldest_chunk
    : Drop the oldest chunk to accept new incoming chunks. Older chunks have less value than newer chunks.

block

retryMaxInterval

The maximum time in seconds for the

exponential_backoff
retry method.

300s

retryType

The retry method when flushing fails:

  • exponential_backoff
    : Increase the time between flush retries. Fluentd doubles the wait time before each retry until the
    retry_max_interval
    parameter is reached.
  • periodic
    : Retries flushes periodically, based on the
    retryWait
    parameter.

exponential_backoff

retryTimeOut

The maximum time interval to attempt retries before the record is discarded.

60m

retryWait

The time in seconds before the next chunk flush.

1s

For more information on the Fluentd chunk lifecycle, see Buffer Plugins in the Fluentd documentation.

Procedure

  1. Edit the

    ClusterLogging
    custom resource (CR) in the
    openshift-logging
    project:

    $ oc edit ClusterLogging instance
  2. Add or modify any of the following parameters:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      collection:
        fluentd:
          buffer:
            chunkLimitSize: 8m 
    1
    
            flushInterval: 5s 
    2
    
            flushMode: interval 
    3
    
            flushThreadCount: 3 
    4
    
            overflowAction: throw_exception 
    5
    
            retryMaxInterval: "300s" 
    6
    
            retryType: periodic 
    7
    
            retryWait: 1s 
    8
    
            totalLimitSize: 32m 
    9
    
    # ...
    1
    Specify the maximum size of each chunk before it is queued for flushing.
    2
    Specify the interval between chunk flushes.
    3
    Specify the method to perform chunk flushes: lazy, interval, or immediate.
    4
    Specify the number of threads to use for chunk flushes.
    5
    Specify the chunking behavior when the queue is full: throw_exception, block, or drop_oldest_chunk.
    6
    Specify the maximum interval in seconds for the exponential_backoff chunk flushing method.
    7
    Specify the retry type when chunk flushing fails: exponential_backoff or periodic.
    8
    Specify the time in seconds before the next chunk flush.
    9
    Specify the maximum size of the chunk buffer.
  3. Verify that the Fluentd pods are redeployed:

    $ oc get pods -l component=collector -n openshift-logging
  4. Check that the new values are in the

    fluentd
    config map:

    $ oc extract configmap/collector-config --confirm

    Example fluentd.conf

    <buffer>
      @type file
      path '/var/lib/fluentd/default'
      flush_mode interval
      flush_interval 5s
      flush_thread_count 3
      retry_type periodic
      retry_wait 1s
      retry_max_interval 300s
      retry_timeout 60m
      queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}"
      total_limit_size "#{ENV['TOTAL_LIMIT_SIZE_PER_BUFFER'] || '8589934592'}"
      chunk_limit_size 8m
      overflow_action throw_exception
      disable_chunk_backup true
    </buffer>

10.6. Collecting and storing Kubernetes events

The OpenShift Container Platform Event Router is a pod that watches Kubernetes events and logs them for collection by the logging. You must manually deploy the Event Router.

The Event Router collects events from all projects and writes them to

STDOUT
. The collector then forwards those events to the store defined in the
ClusterLogForwarder
custom resource (CR).

Important

The Event Router adds additional load to Fluentd and can impact the number of other log messages that can be processed.

10.6.1. Deploying and configuring the Event Router

Use the following steps to deploy the Event Router into your cluster. You should always deploy the Event Router to the

openshift-logging
project to ensure it collects events from across the cluster.

Note

The Event Router image is not a part of the Red Hat OpenShift Logging Operator and must be downloaded separately.

The following

Template
object creates the service account, cluster role, and cluster role binding required for the Event Router. The template also configures and deploys the Event Router pod. You can either use this template without making changes or edit the template to change the deployment object CPU and memory requests.

Prerequisites

  • You need proper permissions to create service accounts and update cluster role bindings. For example, you can run the following template with a user that has the cluster-admin role.
  • The Red Hat OpenShift Logging Operator must be installed.

Procedure

  1. Create a template for the Event Router:

    apiVersion: template.openshift.io/v1
    kind: Template
    metadata:
      name: eventrouter-template
      annotations:
        description: "A pod forwarding kubernetes events to OpenShift Logging stack."
        tags: "events,EFK,logging,cluster-logging"
    objects:
      - kind: ServiceAccount 
    1
    
        apiVersion: v1
        metadata:
          name: eventrouter
          namespace: ${NAMESPACE}
      - kind: ClusterRole 
    2
    
        apiVersion: rbac.authorization.k8s.io/v1
        metadata:
          name: event-reader
        rules:
        - apiGroups: [""]
          resources: ["events"]
          verbs: ["get", "watch", "list"]
      - kind: ClusterRoleBinding 
    3
    
        apiVersion: rbac.authorization.k8s.io/v1
        metadata:
          name: event-reader-binding
        subjects:
        - kind: ServiceAccount
          name: eventrouter
          namespace: ${NAMESPACE}
        roleRef:
          kind: ClusterRole
          name: event-reader
      - kind: ConfigMap 
    4
    
        apiVersion: v1
        metadata:
          name: eventrouter
          namespace: ${NAMESPACE}
        data:
          config.json: |-
            {
              "sink": "stdout"
            }
      - kind: Deployment 
    5
    
        apiVersion: apps/v1
        metadata:
          name: eventrouter
          namespace: ${NAMESPACE}
          labels:
            component: "eventrouter"
            logging-infra: "eventrouter"
            provider: "openshift"
        spec:
          selector:
            matchLabels:
              component: "eventrouter"
              logging-infra: "eventrouter"
              provider: "openshift"
          replicas: 1
          template:
            metadata:
              labels:
                component: "eventrouter"
                logging-infra: "eventrouter"
                provider: "openshift"
              name: eventrouter
            spec:
              serviceAccount: eventrouter
              containers:
                - name: kube-eventrouter
                  image: ${IMAGE}
                  imagePullPolicy: IfNotPresent
                  resources:
                    requests:
                      cpu: ${CPU}
                      memory: ${MEMORY}
                  volumeMounts:
                  - name: config-volume
                    mountPath: /etc/eventrouter
                  securityContext:
                    allowPrivilegeEscalation: false
                    capabilities:
                      drop: ["ALL"]
              securityContext:
                runAsNonRoot: true
                seccompProfile:
                  type: RuntimeDefault
              volumes:
              - name: config-volume
                configMap:
                  name: eventrouter
    parameters:
      - name: IMAGE 
    6
    
        displayName: Image
        value: "registry.redhat.io/openshift-logging/eventrouter-rhel9:v0.4"
      - name: CPU 
    7
    
        displayName: CPU
        value: "100m"
      - name: MEMORY 
    8
    
        displayName: Memory
        value: "128Mi"
      - name: NAMESPACE
        displayName: Namespace
        value: "openshift-logging" 
    9
    1 Creates a service account in the openshift-logging project for the Event Router.
    2 Creates a ClusterRole to monitor for events in the cluster.
    3 Creates a ClusterRoleBinding to bind the ClusterRole to the service account.
    4 Creates a config map in the openshift-logging project to generate the required config.json file.
    5 Creates a deployment in the openshift-logging project to generate and configure the Event Router pod.
    6 Specifies the image, identified by a tag such as v0.4.
    7 Specifies the minimum amount of CPU to allocate to the Event Router pod. Defaults to 100m.
    8 Specifies the minimum amount of memory to allocate to the Event Router pod. Defaults to 128Mi.
    9 Specifies the openshift-logging project in which to install the objects.
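
    The parameters above are substituted into the `${NAME}` placeholders in the template body when you run `oc process`. As an illustrative sketch only (not `oc`'s actual implementation), the substitution behaves roughly like this, where the `defaults` values mirror the template's `parameters` section:

    ```python
    import re

    def process_template(template: str, params: dict) -> str:
        """Replace ${NAME} placeholders with parameter values."""
        def replace(match: re.Match) -> str:
            name = match.group(1)
            if name not in params:
                raise KeyError(f"unknown template parameter: {name}")
            return params[name]
        return re.sub(r"\$\{(\w+)\}", replace, template)

    # Defaults mirror the template's parameters section above.
    defaults = {
        "IMAGE": "registry.redhat.io/openshift-logging/eventrouter-rhel9:v0.4",
        "CPU": "100m",
        "MEMORY": "128Mi",
        "NAMESPACE": "openshift-logging",
    }

    snippet = "image: ${IMAGE}\ncpu: ${CPU}\nmemory: ${MEMORY}\n"
    # Override one parameter, analogous to `oc process -p MEMORY=256Mi`.
    print(process_template(snippet, {**defaults, "MEMORY": "256Mi"}))
    ```

    With the real CLI, the equivalent override is passed with the `-p` flag, for example `oc process -f eventrouter.yaml -p MEMORY=256Mi`.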
  2. Use the following command to process and apply the template:

    $ oc process -f <templatefile> | oc apply -n openshift-logging -f -

    For example:

    $ oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -

    Example output

    serviceaccount/eventrouter created
    clusterrole.rbac.authorization.k8s.io/event-reader created
    clusterrolebinding.rbac.authorization.k8s.io/event-reader-binding created
    configmap/eventrouter created
    deployment.apps/eventrouter created

  3. Validate that the Event Router was installed in the openshift-logging project:

    1. View the new Event Router pod:

      $ oc get pods --selector component=eventrouter -o name -n openshift-logging

      Example output

      pod/cluster-logging-eventrouter-d649f97c8-qvv8r

    2. View the events collected by the Event Router:

      $ oc logs <cluster_logging_eventrouter_pod> -n openshift-logging

      For example:

      $ oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging

      Example output

      {"verb":"ADDED","event":{"metadata":{"name":"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","namespace":"openshift-service-catalog-removed","selfLink":"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","uid":"787d7b26-3d2f-4017-b0b0-420db4ae62c0","resourceVersion":"21399","creationTimestamp":"2020-09-08T15:40:26Z"},"involvedObject":{"kind":"Job","namespace":"openshift-service-catalog-removed","name":"openshift-service-catalog-controller-manager-remover","uid":"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f","apiVersion":"batch/v1","resourceVersion":"21280"},"reason":"Completed","message":"Job completed","source":{"component":"job-controller"},"firstTimestamp":"2020-09-08T15:40:26Z","lastTimestamp":"2020-09-08T15:40:26Z","count":1,"type":"Normal"}}

      You can also use Kibana to view events by creating an index pattern that uses the Elasticsearch infra index.
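
      Each line the Event Router writes is a self-contained JSON record, as in the example output above. A minimal sketch of pulling the interesting fields out of one such line, using the field names visible in that output (the `demo-event` and `demo-job` names below are hypothetical placeholders):

      ```python
      import json

      # A single line from the Event Router pod log; field names follow the
      # example output above, object names are hypothetical.
      line = (
          '{"verb":"ADDED","event":{"metadata":{"name":"demo-event"},'
          '"involvedObject":{"kind":"Job","namespace":"openshift-logging","name":"demo-job"},'
          '"reason":"Completed","message":"Job completed","type":"Normal","count":1}}'
      )

      record = json.loads(line)
      event = record["event"]
      obj = event["involvedObject"]
      summary = (f'{record["verb"]} {event["type"]}/{event["reason"]} '
                 f'on {obj["kind"]} {obj["namespace"]}/{obj["name"]}: {event["message"]}')
      print(summary)  # → ADDED Normal/Completed on Job openshift-logging/demo-job: Job completed
      ```

      The same approach works on real log lines piped from `oc logs`, one JSON object per line.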
