Logging


OpenShift Container Platform 4.10

OpenShift Logging installation, usage, and release notes

Red Hat OpenShift Documentation Team

Abstract

This document provides instructions for installing, configuring, and using OpenShift Logging, which aggregates logs for a range of OpenShift Container Platform services.

Chapter 1. Release notes for Logging

Note

The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

Note

The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X where X is the version of logging you have installed.
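For example, you can switch an existing subscription to a version-pinned channel by using the OpenShift CLI. The following command is a minimal sketch: the subscription name cluster-logging, its namespace, and the target channel stable-5.6 are assumptions that you must adjust to match your cluster.

    # Assumed subscription name, namespace, and channel; adjust for your environment.
    $ oc patch subscription cluster-logging \
        -n openshift-logging \
        --type merge \
        -p '{"spec":{"channel":"stable-5.6"}}'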

1.1. Logging 5.6.11

This release includes OpenShift Logging Bug Fix Release 5.6.11.

1.1.1. Bug fixes

  • Before this update, the LokiStack gateway cached authorized requests too broadly. As a result, it could return incorrect authorization results. With this update, the LokiStack gateway caches on a more fine-grained basis, which resolves this issue. (LOG-4435)

1.1.2. CVEs

1.2. Logging 5.6.9

This release includes OpenShift Logging Bug Fix Release 5.6.9.

1.2.1. Bug fixes

  • Before this update, when multiple roles were used to authenticate using STS with AWS CloudWatch forwarding, a recent update caused the credentials to be non-unique. With this update, multiple combinations of STS roles and static credentials can once again be used to authenticate with AWS CloudWatch. (LOG-4084)
  • Before this update, the Vector collector occasionally panicked with the following error message in its log: thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9. With this update, the error has been resolved. (LOG-4276)
  • Before this update, Loki filtered label values for active streams but did not remove duplicates, making Grafana’s Label Browser unusable. With this update, Loki filters out duplicate label values for active streams, resolving the issue. (LOG-4390)

1.2.2. CVEs

1.3. Logging 5.6.8

This release includes OpenShift Logging Bug Fix Release 5.6.8.

1.3.1. Bug fixes

  • Before this update, the Vector collector terminated unexpectedly when input match label values contained a / character within the ClusterLogForwarder. This update resolves the issue by quoting the match label, enabling the collector to start and collect logs. (LOG-4091)
  • Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the more data available option loaded more log entries only the first time it was clicked. With this update, more entries are loaded with each click. (OU-187)
  • Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the streaming option would only display the streaming logs message without showing the actual logs. With this update, both the message and the log stream are displayed correctly. (OU-189)
  • Before this update, the Loki Operator reset errors in a way that made configuration problems difficult to identify and troubleshoot. With this update, errors persist until the configuration error is resolved. (LOG-4158)
  • Before this update, clusters with more than 8,000 namespaces caused Elasticsearch to reject queries because the list of namespaces was larger than the http.max_header_size setting. With this update, the default value for header size has been increased, resolving the issue. (LOG-4278)

1.3.2. CVEs

1.4. Logging 5.6.7

This release includes OpenShift Logging Bug Fix Release 5.6.7.

1.4.1. Bug fixes

  • Before this update, the LokiStack gateway returned label values for namespaces without applying the access rights of a user. With this update, the LokiStack gateway applies permissions to label value requests, resolving the issue. (LOG-3728)
  • Before this update, the time field of log messages did not parse as structured.time by default in Fluentd when the messages included a timestamp. With this update, parsed log messages will include a structured.time field if the output destination supports it. (LOG-4090)
  • Before this update, the LokiStack route configuration caused queries running longer than 30 seconds to time out. With this update, the LokiStack global and per-tenant queryTimeout settings affect the route timeout settings, resolving the issue. (LOG-4130)
  • Before this update, LokiStack CRs with values defined for tenant limits but not global limits caused the Loki Operator to crash. With this update, the Operator is able to process LokiStack CRs with only tenant limits defined, resolving the issue. (LOG-4199)
  • Before this update, the OpenShift Container Platform web console generated errors after an upgrade due to cached files of the prior version retained by the web browser. With this update, these files are no longer cached, resolving the issue. (LOG-4099)
  • Before this update, Vector generated certificate errors when forwarding to the default Loki instance. With this update, logs can be forwarded without errors to Loki by using Vector. (LOG-4184)
  • Before this update, the Cluster Logging Operator API required a certificate to be provided by a secret when the tls.insecureSkipVerify option was set to true. With this update, the Cluster Logging Operator API no longer requires a certificate to be provided by a secret in such cases. The following configuration has been added to the Operator’s CR:

    tls.verify_certificate = false
    tls.verify_hostname = false

    (LOG-4146)

1.4.2. CVEs

1.5. Logging 5.6.6

This release includes OpenShift Logging Bug Fix Release 5.6.6.

1.5.1. Bug fixes

  • Before this update, an error caused messages to be dropped when the ClusterLogForwarder custom resource was configured to write to a Kafka output topic that matched a key in the payload. With this update, the issue is resolved by prefixing Fluentd’s buffer name with an underscore. (LOG-3458)
  • Before this update, premature closure of watches occurred in Fluentd when inodes were reused and there were multiple entries with the same inode. With this update, the issue of premature closure of watches in the Fluentd position file is resolved. (LOG-3629)
  • Before this update, the detection of JavaScript client multi-line exceptions by Fluentd failed, resulting in printing them as multiple lines. With this update, exceptions are output as a single line, resolving the issue. (LOG-3761)
  • Before this update, direct upgrades from the Red Hat OpenShift Logging Operator version 4.6 to version 5.6 were allowed, resulting in functionality issues. With this update, upgrades must be within two versions, resolving the issue. (LOG-3837)
  • Before this update, metrics were not displayed for Splunk or Google Cloud Logging outputs. With this update, the issue is resolved by sending metrics for HTTP endpoints. (LOG-3932)
  • Before this update, when the ClusterLogForwarder custom resource was deleted, collector pods remained running. With this update, collector pods do not run when log forwarding is not enabled. (LOG-4030)
  • Before this update, a time range could not be selected in the OpenShift Container Platform web console by clicking and dragging over the logs histogram. With this update, clicking and dragging can be used to successfully select a time range. (LOG-4101)
  • Before this update, Fluentd hash values for watch files were generated using the paths to log files, resulting in a non-unique hash upon log rotation. With this update, hash values for watch files are created with inode numbers, resolving the issue. (LOG-3633)
  • Before this update, clicking on the Show Resources link in the OpenShift Container Platform web console did not produce any effect. With this update, the issue is resolved by fixing the functionality of the Show Resources link to toggle the display of resources for each log entry. (LOG-4118)

1.5.2. CVEs

1.6. Logging 5.6.5

This release includes OpenShift Logging Bug Fix Release 5.6.5.

1.6.1. Bug fixes

  • Before this update, the template definitions prevented Elasticsearch from indexing some labels and namespace_labels, causing issues with data ingestion. With this update, the fix replaces dots and slashes in labels to ensure proper ingestion, effectively resolving the issue. (LOG-3419)
  • Before this update, if the Logs page of the OpenShift Web Console failed to connect to the LokiStack, a generic error message was displayed, providing no additional context or troubleshooting suggestions. With this update, the error message has been enhanced to include more specific details and recommendations for troubleshooting. (LOG-3750)
  • Before this update, time range formats were not validated, leading to errors selecting a custom date range. With this update, time formats are now validated, enabling users to select a valid range. If an invalid time range format is selected, an error message is displayed to the user. (LOG-3583)
  • Before this update, when searching logs in Loki, even if the length of an expression did not exceed 5120 characters, the query would fail in many cases. With this update, query authorization label matchers have been optimized, resolving the issue. (LOG-3480)
  • Before this update, the Loki Operator failed to produce a memberlist configuration that was sufficient for locating all the components when using a memberlist for private IPs. With this update, the fix ensures that the generated configuration includes the advertised port, allowing for successful lookup of all components. (LOG-4008)

1.6.2. CVEs

1.7. Logging 5.6.4

This release includes OpenShift Logging Bug Fix Release 5.6.4.

1.7.1. Bug fixes

  • Before this update, when LokiStack was deployed as the log store, the logs generated by Loki pods were collected and sent to LokiStack. With this update, the logs generated by Loki are excluded from collection and will not be stored. (LOG-3280)
  • Before this update, when the query editor on the Logs page of the OpenShift Web Console was empty, the drop-down menus did not populate. With this update, if an empty query is attempted, an error message is displayed and the drop-down menus now populate as expected. (LOG-3454)
  • Before this update, when the tls.insecureSkipVerify option was set to true, the Cluster Logging Operator would generate incorrect configuration. As a result, the operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Cluster Logging Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. (LOG-3475)
  • Before this update, when structured parsing was enabled and messages were forwarded to multiple destinations, they were not deep copied. This resulted in some of the received logs including the structured message, while others did not. With this update, the configuration generation has been modified to deep copy messages before JSON parsing. As a result, all received messages now have structured messages included, even when they are forwarded to multiple destinations. (LOG-3640)
  • Before this update, if the collection field contained {}, the Operator could crash. With this update, the Operator ignores this value, allowing it to continue running smoothly without interruption. (LOG-3733)
  • Before this update, the nodeSelector attribute for the Gateway component of LokiStack did not have any effect. With this update, the nodeSelector attribute functions as expected. (LOG-3783)
  • Before this update, the static LokiStack memberlist configuration relied solely on private IP networks. As a result, when the OpenShift Container Platform cluster pod network was configured with a public IP range, the LokiStack pods would crashloop. With this update, the LokiStack administrator now has the option to use the pod network for the memberlist configuration. This resolves the issue and prevents the LokiStack pods from entering a crashloop state when the OpenShift Container Platform cluster pod network is configured with a public IP range. (LOG-3814)
  • Before this update, if the tls.insecureSkipVerify field was set to true, the Cluster Logging Operator would generate an incorrect configuration. As a result, the Operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. (LOG-3838)
  • Before this update, if the Cluster Logging Operator (CLO) was installed without the Elasticsearch Operator, the CLO pod would continuously display an error message related to the deletion of Elasticsearch. With this update, the CLO now performs additional checks before displaying any error messages. As a result, error messages related to Elasticsearch deletion are no longer displayed in the absence of the Elasticsearch Operator. (LOG-3763)

1.7.2. CVEs

1.8. Logging 5.6.3

This release includes OpenShift Logging Bug Fix Release 5.6.3.

1.8.1. Bug fixes

  • Before this update, the operator stored gateway tenant secret information in a config map. With this update, the operator stores this information in a secret. (LOG-3717)
  • Before this update, the Fluentd collector did not capture OAuth login events stored in /var/log/auth-server/audit.log. With this update, Fluentd captures these OAuth login events, resolving the issue. (LOG-3729)

1.8.2. CVEs

1.9. Logging 5.6.2

This release includes OpenShift Logging Bug Fix Release 5.6.2.

1.9.1. Bug fixes

  • Before this update, the collector did not set level fields correctly based on priority for systemd logs. With this update, level fields are set correctly. (LOG-3429)
  • Before this update, the Operator incorrectly generated incompatibility warnings on OpenShift Container Platform 4.12 or later. With this update, the Operator max OpenShift Container Platform version value has been corrected, resolving the issue. (LOG-3584)
  • Before this update, creating a ClusterLogForwarder custom resource (CR) with an output value of default did not generate any errors. With this update, an error warning that this value is invalid generates appropriately. (LOG-3437)
  • Before this update, when the ClusterLogForwarder custom resource (CR) had multiple pipelines configured with one output set as default, the collector pods restarted. With this update, the logic for output validation has been corrected, resolving the issue. (LOG-3559)
  • Before this update, collector pods restarted after being created. With this update, the deployed collector does not restart on its own. (LOG-3608)
  • Before this update, patch releases removed previous versions of the Operators from the catalog. This made installing the old versions impossible. This update changes bundle configurations so that previous releases of the same minor version stay in the catalog. (LOG-3635)

1.9.2. CVEs

1.10. Logging 5.6.1

This release includes OpenShift Logging Bug Fix Release 5.6.1.

1.10.1. Bug fixes

  • Before this update, the compactor would report TLS certificate errors from communications with the querier when retention was active. With this update, the compactor and querier no longer communicate erroneously over HTTP. (LOG-3494)
  • Before this update, the Loki Operator would not retry setting the status of the LokiStack CR, which caused stale status information. With this update, the Operator retries status information updates on conflict. (LOG-3496)
  • Before this update, the Loki Operator Webhook server caused TLS errors when the kube-apiserver-operator Operator checked the webhook validity. With this update, the Loki Operator Webhook PKI is managed by the Operator Lifecycle Manager (OLM), resolving the issue. (LOG-3510)
  • Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expressions, resolving the issue. (LOG-3441), (LOG-3397)
  • Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. (LOG-3463)
  • Before this update, the Red Hat OpenShift Logging Operator was not available for OpenShift Container Platform 4.10 clusters because of an incompatibility between OpenShift Container Platform console and the logging-view-plugin. With this update, the plugin is properly integrated with the OpenShift Container Platform 4.10 admin console. (LOG-3447)
  • Before this update, the reconciliation of the ClusterLogForwarder custom resource would incorrectly report a degraded status for pipelines that reference the default log store. With this update, the pipeline validates properly. (LOG-3477)

1.10.2. CVEs

1.11. Logging 5.6

This release includes OpenShift Logging Release 5.6.

1.11.1. Deprecation notice

In Logging 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector.

1.11.2. Enhancements

  • With this update, Logging is compliant with OpenShift Container Platform cluster-wide cryptographic policies. (LOG-895)
  • With this update, you can declare per-tenant, per-stream, and global retention policies through the LokiStack custom resource, ordered by priority; see the sketch after this list. (LOG-2695)
  • With this update, Splunk is an available output option for log forwarding. (LOG-2913)
  • With this update, Vector replaces Fluentd as the default collector. (LOG-2222)
  • With this update, users with the Developer role can access the per-project workload logs that they are assigned to within the Log Console Plugin on clusters running OpenShift Container Platform 4.11 and higher. (LOG-3388)
  • With this update, logs from any source contain a field openshift.cluster_id, the unique identifier of the cluster in which the Operator is deployed. You can view the clusterID value with the command below. (LOG-2715)
$ oc get clusterversion/version -o jsonpath='{.spec.clusterID}{"\n"}'
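
The retention enhancement noted above can be expressed in the LokiStack custom resource. The following snippet is a minimal sketch only; the instance name, tenant name, retention periods, and stream selector are assumptions, and the field names should be verified against the LokiStack API for your release.

  apiVersion: loki.grafana.com/v1
  kind: LokiStack
  metadata:
    name: logging-loki              # assumed instance name
    namespace: openshift-logging
  spec:
    limits:
      global:                       # global policy: lowest priority
        retention:
          days: 20
      tenants:
        application:                # assumed tenant name; per-tenant policy overrides global
          retention:
            days: 7
            streams:                # per-stream policy: highest priority
            - days: 1
              selector: '{kubernetes_namespace_name=~"test.+"}'   # assumed selector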

1.11.3. Known Issues

  • Elasticsearch rejects logs if multiple label keys have the same prefix and some keys include the . character. The fix replaces . in the label keys with _. As a workaround for this issue, remove the labels that cause errors, or add a namespace to the label. (LOG-3463)

1.11.4. Bug fixes

  • Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. (LOG-2993)
  • Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. (LOG-3072)
  • Before this update, the Operator removed any custom outputs defined in the ClusterLogForwarder custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the ClusterLogForwarder custom resource. (LOG-3090)
  • Before this update, the CA key was used as the volume name for mounting the CA into Loki, causing error states when the CA Key included non-conforming characters, such as dots. With this update, the volume name is standardized to an internal string which resolves the issue. (LOG-3331)
  • Before this update, a default value set within the LokiStack Custom Resource Definition made it impossible to create a LokiStack instance without a ReplicationFactor of 1. With this update, the Operator sets the actual value for the size used. (LOG-3296)
  • Before this update, Vector parsed the message field when JSON parsing was enabled without also defining structuredTypeKey or structuredTypeName values. With this update, a value is required for either structuredTypeKey or structuredTypeName when writing structured logs to Elasticsearch. (LOG-3195)
  • Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled. (LOG-3161)
  • Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. (LOG-3157)
  • Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h. With this update, Kibana’s OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout, with a default value of 24h. (LOG-3129)
  • Before this update, the Operator’s general pattern for reconciling resources was to try to create a resource before attempting to get or update it, which led to constant HTTP 409 responses after creation. With this update, Operators first attempt to retrieve an object and only create or update it if it is either missing or not as specified. (LOG-2919)
  • Before this update, the .level and .structure.level fields in Fluentd could contain different values. With this update, the values are the same for each field. (LOG-2819)
  • Before this update, the Operator did not wait for the population of the trusted CA bundle and deployed the collector a second time once the bundle updated. With this update, the Operator waits briefly to see if the bundle has been populated before it continues the collector deployment. (LOG-2789)
  • Before this update, logging telemetry info appeared twice when reviewing metrics. With this update, logging telemetry info displays as expected. (LOG-2315)
  • Before this update, Fluentd pod logs contained a warning message after enabling the JSON parsing addition. With this update, that warning message does not appear. (LOG-1806)
  • Before this update, the must-gather script did not complete because oc needs a folder with write permission to build its cache. With this update, oc has write permissions to a folder, and the must-gather script completes successfully. (LOG-3446)
  • Before this update, the log collector SCC could be superseded by other SCCs on the cluster, rendering the collector unusable. This update sets the priority of the log collector SCC so that it takes precedence over the others. (LOG-3235)
  • Before this update, Vector was missing the field sequence, which was added to Fluentd as a way to deal with a lack of actual nanosecond precision. With this update, the field openshift.sequence has been added to the event logs. (LOG-3106)

1.11.5. CVEs

1.12. Logging 5.5.16

This release includes OpenShift Logging Bug Fix Release 5.5.16.

1.12.1. Bug fixes

  • Before this update, the LokiStack gateway cached authorized requests too broadly. As a result, it could return incorrect authorization results. With this update, the LokiStack gateway caches on a more fine-grained basis, which resolves this issue. (LOG-4434)

1.12.2. CVEs

1.13. Logging 5.5.14

This release includes OpenShift Logging Bug Fix Release 5.5.14.

1.13.1. Bug fixes

  • Before this update, the Vector collector occasionally panicked with the following error message in its log: thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9. With this update, the error has been resolved. (LOG-4279)

1.13.2. CVEs

1.14. Logging 5.5.13

This release includes OpenShift Logging Bug Fix Release 5.5.13.

1.14.1. Bug fixes

None.

1.14.2. CVEs

1.15. Logging 5.5.11

This release includes OpenShift Logging Bug Fix Release 5.5.11.

1.15.1. Bug fixes

  • Before this update, a time range could not be selected in the OpenShift Container Platform web console by clicking and dragging over the logs histogram. With this update, clicking and dragging can be used to successfully select a time range. (LOG-4102)
  • Before this update, clicking on the Show Resources link in the OpenShift Container Platform web console did not produce any effect. With this update, the issue is resolved by fixing the functionality of the Show Resources link to toggle the display of resources for each log entry. (LOG-4117)

1.15.2. CVEs

1.16. Logging 5.5.10

This release includes OpenShift Logging Bug Fix Release 5.5.10.

1.16.1. Bug fixes

  • Before this update, the logging view plugin of the OpenShift Web Console showed only a generic error message when the LokiStack was not reachable. After this update, the plugin shows a proper error message with details on how to fix the unreachable LokiStack. (LOG-2874)

1.16.2. CVEs

1.17. Logging 5.5.9

This release includes OpenShift Logging Bug Fix Release 5.5.9.

1.17.1. Bug fixes

  • Before this update, a problem with the Fluentd collector caused it to not capture OAuth login events stored in /var/log/auth-server/audit.log. This led to incomplete collection of login events from the OAuth service. With this update, the Fluentd collector resolves this issue by capturing all login events from the OAuth service, including those stored in /var/log/auth-server/audit.log, as expected. (LOG-3730)
  • Before this update, when structured parsing was enabled and messages were forwarded to multiple destinations, they were not deep copied. This resulted in some of the received logs including the structured message, while others did not. With this update, the configuration generation has been modified to deep copy messages before JSON parsing. As a result, all received logs now have structured messages included, even when they are forwarded to multiple destinations. (LOG-3767)

1.17.2. CVEs

1.18. Logging 5.5.8

This release includes OpenShift Logging Bug Fix Release 5.5.8.

1.18.1. Bug fixes

  • Before this update, the priority field was missing from systemd logs due to an error in how the collector set level fields. With this update, these fields are set correctly, resolving the issue. (LOG-3630)

1.18.2. CVEs

1.19. Logging 5.5.7

This release includes OpenShift Logging Bug Fix Release 5.5.7.

1.19.1. Bug fixes

  • Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expressions, resolving the issue. (LOG-3534)
  • Before this update, the ClusterLogForwarder custom resource (CR) did not pass TLS credentials for syslog output to Fluentd, resulting in errors during forwarding. With this update, credentials pass correctly to Fluentd, resolving the issue. (LOG-3533)

1.19.2. CVEs

  • CVE-2021-46848
  • CVE-2022-3821
  • CVE-2022-35737
  • CVE-2022-42010
  • CVE-2022-42011
  • CVE-2022-42012
  • CVE-2022-42898
  • CVE-2022-43680

1.20. Logging 5.5.6

This release includes OpenShift Logging Bug Fix Release 5.5.6.

1.20.1. Bug fixes

  • Before this update, the Pod Security admission controller added the label podSecurityLabelSync = true to the openshift-logging namespace. This resulted in the specified security labels being overwritten, and as a result Collector pods would not start. With this update, the label podSecurityLabelSync = false preserves the security labels, and Collector pods deploy as expected. (LOG-3340)
  • Before this update, the Operator installed the console view plugin, even when it was not enabled on the cluster. This caused the Operator to crash. With this update, if an account for a cluster does not have the console view enabled, the Operator functions normally and does not install the console view. (LOG-3407)
  • Before this update, a prior fix, intended to address a regression in which the status of the Elasticsearch deployment was not updated, caused the Operator to crash unless the Red Hat Elasticsearch Operator was deployed. With this update, that fix has been reverted so that the Operator is now stable, but this re-introduces the previous issue related to the reported status. (LOG-3428)
  • Before this update, the Loki Operator only deployed one replica of the LokiStack gateway regardless of the chosen stack size. With this update, the number of replicas is correctly configured according to the selected size. (LOG-3478)
  • Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. (LOG-3341)
  • Before this update, the logging view plugin contained an incompatible feature for certain versions of OpenShift Container Platform. With this update, the correct release stream of the plugin resolves the issue. (LOG-3467)
  • Before this update, the reconciliation of the ClusterLogForwarder custom resource would incorrectly report a degraded status of one or more pipelines causing the collector pods to restart every 8-10 seconds. With this update, reconciliation of the ClusterLogForwarder custom resource processes correctly, resolving the issue. (LOG-3469)
  • Before this update, the spec for the outputDefaults field of the ClusterLogForwarder custom resource applied the settings to every declared Elasticsearch output type. This change corrects the behavior to match the enhancement specification, where the setting applies only to the default managed Elasticsearch store. (LOG-3342)
  • Before this update, the OpenShift CLI (oc) must-gather script did not complete because oc needs a folder with write permission to build its cache. With this update, oc has write permissions to a folder, and the must-gather script completes successfully. (LOG-3472)
  • Before this update, the Loki Operator webhook server caused TLS errors. With this update, the Loki Operator webhook PKI is managed by the Operator Lifecycle Manager’s dynamic webhook management, resolving the issue. (LOG-3511)

1.20.2. CVEs

1.21. Logging 5.5.5

This release includes OpenShift Logging Bug Fix Release 5.5.5.

1.21.1. Bug fixes

  • Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h. With this update, Kibana’s OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout, with a default value of 24h. (LOG-3305)
  • Before this update, Vector parsed the message field when JSON parsing was enabled without also defining structuredTypeKey or structuredTypeName values. With this update, a value is required for either structuredTypeKey or structuredTypeName when writing structured logs to Elasticsearch. (LOG-3284)
  • Before this update, the FluentdQueueLengthIncreasing alert could fail to fire when there was a cardinality issue with the set of labels returned from this alert expression. This update reduces labels to only include those required for the alert. (LOG-3226)
  • Before this update, Loki did not have support to reach an external storage in a disconnected cluster. With this update, proxy environment variables and proxy trusted CA bundles are included in the container image to support these connections. (LOG-2860)
  • Before this update, OpenShift Container Platform web console users could not choose the ConfigMap object that includes the CA certificate for Loki, causing pods to operate without the CA. With this update, web console users can select the config map, resolving the issue. (LOG-3310)
  • Before this update, the CA key was used as the volume name for mounting the CA into Loki, causing error states when the CA Key included non-conforming characters, such as dots. With this update, the volume name is standardized to an internal string, which resolves the issue. (LOG-3332)

1.21.2. CVEs

1.22. Logging 5.5.4

This release includes RHSA-2022:7434, OpenShift Logging Bug Fix Release 5.5.4.

1.22.1. Bug fixes

  • Before this update, an error in the query parser of the logging view plugin caused parts of the logs query to disappear if the query contained curly brackets {}. This made the queries invalid, leading to errors being returned for valid queries. With this update, the parser correctly handles these queries. (LOG-3042)
  • Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. (LOG-3049)
  • Before this update, no alerts were implemented to support the collector implementation of Vector. This change adds Vector alerts and deploys separate alerts, depending upon the chosen collector implementation. (LOG-3127)
  • Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled. (LOG-3138)
  • Before this update, a prior refactoring of the logging must-gather scripts removed the expected location for the artifacts. This update reverts that change to write artifacts to the /must-gather folder. (LOG-3213)
  • Before this update, on certain clusters, the Prometheus exporter would bind on IPv4 instead of IPv6. After this update, Fluentd detects the IP version and binds to 0.0.0.0 for IPv4 or [::] for IPv6. (LOG-3162)

1.22.2. CVEs

1.23. Logging 5.5.3

This release includes OpenShift Logging Bug Fix Release 5.5.3.

1.23.1. Bug fixes

  • Before this update, log entries that had structured messages included the original message field, which made the entry larger. This update removes the message field for structured logs to reduce the increased size. (LOG-2759)
  • Before this update, the collector configuration excluded logs from collector, default-log-store, and visualization pods, but was unable to exclude logs archived in a .gz file. With this update, archived logs stored as .gz files of collector, default-log-store, and visualization pods are also excluded. (LOG-2844)
  • Before this update, when requests to an unavailable pod were sent through the gateway, no alert would warn of the disruption. With this update, individual alerts will generate if the gateway has issues completing a write or read request. (LOG-2884)
  • Before this update, pod metadata could be altered by fluent plugins because the values were passed through the pipeline by reference. This update ensures that each log message receives a copy of the pod metadata so that each message processes independently. (LOG-3046)
  • Before this update, selecting unknown severity in the OpenShift Console Logs view excluded logs with a level=unknown value. With this update, logs without level and with level=unknown values are visible when filtering by unknown severity. (LOG-3062)
  • Before this update, log records sent to Elasticsearch had an extra field named write-index that contained the name of the index to which the logs needed to be sent. This field is not a part of the data model. After this update, this field is no longer sent. (LOG-3075)
  • With the introduction of the new built-in Pod Security Admission Controller, Pods not configured in accordance with the enforced security standards defined globally or on the namespace level cannot run. With this update, the Operator and collectors allow privileged execution and run without security audit warnings or errors. (LOG-3077)
  • Before this update, the Operator removed any custom outputs defined in the ClusterLogForwarder custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the ClusterLogForwarder custom resource. (LOG-3095)

1.23.2. CVEs

1.24. Logging 5.5.2

This release includes OpenShift Logging Bug Fix Release 5.5.2.

1.24.1. Bug fixes

  • Before this update, alerting rules for the Fluentd collector did not adhere to the OpenShift Container Platform monitoring style guidelines. This update modifies those alerts to include the namespace label, resolving the issue. (LOG-1823)
  • Before this update, the index management rollover script failed to generate a new index name whenever there was more than one hyphen character in the name of the index. With this update, index names generate correctly. (LOG-2644)
  • Before this update, the Kibana route was setting a caCertificate value without a certificate present. With this update, no caCertificate value is set. (LOG-2661)
  • Before this update, a change in the collector dependencies caused it to issue a warning message for unused parameters. With this update, removing unused configuration parameters resolves the issue. (LOG-2859)
  • Before this update, pods created for deployments that Loki Operator created were mistakenly scheduled on nodes with non-Linux operating systems, if such nodes were available in the cluster the Operator was running in. With this update, the Operator attaches an additional node-selector to the pod definitions which only allows scheduling the pods on Linux-based nodes. (LOG-2895)
  • Before this update, the OpenShift Console Logs view did not filter logs by severity due to a LogQL parser issue in the LokiStack gateway. With this update, a parser fix resolves the issue and the OpenShift Console Logs view can filter by severity. (LOG-2908)
  • Before this update, a refactoring of the Fluentd collector plugins removed the timestamp field for events. This update restores the timestamp field, sourced from the event’s received time. (LOG-2923)
  • Before this update, the absence of a level field in audit logs caused an error in Vector logs. With this update, the addition of a level field in the audit log record resolves the issue. (LOG-2961)
  • Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. (LOG-3053)
  • Before this update, each rollover job created empty indices when the ClusterLogForwarder custom resource had JSON parsing defined. With this update, new indices are not empty. (LOG-3063)
  • Before this update, when the user deleted the LokiStack after an update to Loki Operator 5.5, resources originally created by Loki Operator 5.4 remained. With this update, the resources' owner references point to the 5.5 LokiStack. (LOG-2945)
  • Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. (LOG-2918)
  • Before this update, users with cluster-admin privileges were not able to properly view infrastructure and audit logs using the logging console. With this update, the authorization check has been extended to also recognize users in cluster-admin and dedicated-admin groups as admins. (LOG-2970)

1.24.2. CVEs

1.25. Logging 5.5.1

This release includes OpenShift Logging Bug Fix Release 5.5.1.

1.25.1. Enhancements

  • This enhancement adds an Aggregated Logs tab to the Pod Details page of the OpenShift Container Platform web console when the Logging Console Plugin is in use. This enhancement is only available on OpenShift Container Platform 4.10 and later. (LOG-2647)
  • This enhancement adds Google Cloud Logging as an output option for log forwarding. (LOG-1482)

1.25.2. Bug fixes

  • Before this update, the Operator did not ensure that the pod was ready, which caused the cluster to reach an inoperable state during a cluster restart. With this update, the Operator marks new pods as ready before continuing to a new pod during a restart, which resolves the issue. (LOG-2745)
  • Before this update, Fluentd would sometimes not recognize that the Kubernetes platform rotated the log file and would no longer read log messages. This update corrects that by setting the configuration parameter suggested by the upstream development team. (LOG-2995)
  • Before this update, the addition of multi-line error detection caused internal routing to change and forward records to the wrong destination. With this update, the internal routing is correct. (LOG-2801)
  • Before this update, changing the OpenShift Container Platform web console’s refresh interval created an error when the Query field was empty. With this update, changing the interval is not an available option when the Query field is empty. (LOG-2917)

1.25.3. CVEs

1.26. Logging 5.5

The following advisories are available for Logging 5.5: Release 5.5

1.26.1. Enhancements

  • With this update, you can forward structured logs from different containers within the same pod to different indices. To use this feature, you must configure the pipeline with multi-container support and annotate the pods. (LOG-1296)
Important

JSON formatting of logs varies by application. Because creating too many indices impacts performance, limit your use of this feature to creating indices for logs that have incompatible JSON formats. Use queries to separate logs from different namespaces, or applications with compatible JSON formats.

  • With this update, you can filter logs with Elasticsearch outputs by using the Kubernetes common labels, app.kubernetes.io/component, app.kubernetes.io/managed-by, app.kubernetes.io/part-of, and app.kubernetes.io/version. Non-Elasticsearch output types can use all labels included in kubernetes.labels. (LOG-2388)
  • With this update, clusters with AWS Security Token Service (STS) enabled can use STS authentication to forward logs to Amazon CloudWatch; see the sketch after this list. (LOG-1976)
  • With this update, the Loki Operator and the Vector collector move from Technology Preview to General Availability. Full feature parity with prior releases is pending, and some APIs remain Technology Previews. See the Logging with the LokiStack section for details.
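
The STS-based CloudWatch forwarding noted above is configured through a ClusterLogForwarder output. The following snippet is a minimal sketch only; the output name, region, group settings, and the secret cw-sts-secret (assumed to contain a role_arn key) are assumptions for illustration.

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogForwarder
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    outputs:
    - name: cw                      # assumed output name
      type: cloudwatch
      cloudwatch:
        groupBy: logType
        region: us-east-1           # assumed region
      secret:
        name: cw-sts-secret         # assumed secret containing a role_arn key
    pipelines:
    - name: to-cloudwatch
      inputRefs:
      - application
      - infrastructure
      outputRefs:
      - cw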

1.26.2. Bug fixes

  • Before this update, clusters configured to forward logs to Amazon CloudWatch wrote rejected log files to temporary storage, causing cluster instability over time. With this update, chunk backup for all storage options has been disabled, resolving the issue. (LOG-2746)
  • Before this update, the Operator was using versions of some APIs that are deprecated and planned for removal in future versions of OpenShift Container Platform. This update moves dependencies to the supported API versions. (LOG-2656)

  • Before this update, multiple ClusterLogForwarder pipelines configured for multiline error detection caused the collector to go into a CrashLoopBackOff error state. This update fixes the issue where multiple configuration sections had the same unique ID. (LOG-2241)
  • Before this update, the collector could not save non-UTF-8 symbols to the Elasticsearch storage logs. With this update, the collector encodes non-UTF-8 symbols, resolving the issue. (LOG-2203)
  • Before this update, non-Latin characters displayed incorrectly in Kibana. With this update, Kibana displays all valid UTF-8 symbols correctly. (LOG-2784)

1.26.3. CVEs

1.27. Logging 5.4.14

This release includes OpenShift Logging Bug Fix Release 5.4.14.

1.27.1. Bug fixes

None.

1.27.2. CVEs

1.28. Logging 5.4.13

This release includes OpenShift Logging Bug Fix Release 5.4.13.

1.28.1. Bug fixes

  • Before this update, a problem with the Fluentd collector caused it to not capture OAuth login events stored in /var/log/auth-server/audit.log. This led to incomplete collection of login events from the OAuth service. With this update, the Fluentd collector now resolves this issue by capturing all login events from the OAuth service, including those stored in /var/log/auth-server/audit.log, as expected. (LOG-3731)

1.28.2. CVEs

1.29. Logging 5.4.12

This release includes OpenShift Logging Bug Fix Release 5.4.12.

1.29.1. Bug fixes

None.

1.29.2. CVEs

1.30. Logging 5.4.11

This release includes OpenShift Logging Bug Fix Release 5.4.11.

1.30.1. Bug fixes

1.30.2. CVEs

1.31. Logging 5.4.10

This release includes OpenShift Logging Bug Fix Release 5.4.10.

1.31.1. Bug fixes

None.

1.31.2. CVEs

1.32. Logging 5.4.9

This release includes OpenShift Logging Bug Fix Release 5.4.9.

1.32.1. Bug fixes

  • Before this update, the Fluentd collector would warn of unused configuration parameters. This update removes those configuration parameters and their warning messages. (LOG-3074)
  • Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h. With this update, Kibana’s OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout, with a default value of 24h. (LOG-3306)

1.32.2. CVEs

1.33. Logging 5.4.8

This release includes RHSA-2022:7435, OpenShift Logging Bug Fix Release 5.4.8.

1.33.1. Bug fixes

None.

1.33.2. CVEs

1.34. Logging 5.4.6

This release includes OpenShift Logging Bug Fix Release 5.4.6.

1.34.1. Bug fixes

  • Before this update, Fluentd would sometimes not recognize that the Kubernetes platform rotated the log file and would no longer read log messages. This update corrects that by setting the configuration parameter suggested by the upstream development team. (LOG-2792)
  • Before this update, each rollover job created empty indices when the ClusterLogForwarder custom resource had JSON parsing defined. With this update, new indices are not empty. (LOG-2823)
  • Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. (LOG-3054)

1.34.2. CVEs

1.35. Logging 5.4.5

This release includes RHSA-2022:6183, OpenShift Logging Bug Fix Release 5.4.5.

1.35.1. Bug fixes

  • Before this update, the Operator did not ensure that the pod was ready, which caused the cluster to reach an inoperable state during a cluster restart. With this update, the Operator marks new pods as ready before continuing to a new pod during a restart, which resolves the issue. (LOG-2881)
  • Before this update, the addition of multi-line error detection caused internal routing to change and forward records to the wrong destination. With this update, the internal routing is correct. (LOG-2946)
  • Before this update, the Operator could not decode index setting JSON responses with a quoted Boolean value, which resulted in an error. With this update, the Operator can properly decode this JSON response. (LOG-3009)
  • Before this update, Elasticsearch index templates defined the fields for labels with the wrong types. This change updates those templates to match the expected types forwarded by the log collector. (LOG-2972)

1.35.2. CVEs

1.36. Logging 5.4.4

This release includes RHBA-2022:5907, OpenShift Logging Bug Fix Release 5.4.4.

1.36.1. Bug fixes

  • Before this update, non-Latin characters displayed incorrectly in Elasticsearch. With this update, Elasticsearch displays all valid UTF-8 symbols correctly. (LOG-2794)
  • Before this update, non-Latin characters displayed incorrectly in Fluentd. With this update, Fluentd displays all valid UTF-8 symbols correctly. (LOG-2657)
  • Before this update, the metrics server for the collector attempted to bind to the address using a value exposed by an environment variable. This change modifies the configuration to bind to any available interface. (LOG-2821)
  • Before this update, the cluster-logging Operator relied on the cluster to create a secret. This cluster behavior changed in OpenShift Container Platform 4.11, which caused logging deployments to fail. With this update, the cluster-logging Operator resolves the issue by creating the secret if needed. (LOG-2840)

1.36.2. CVEs

1.37. Logging 5.4.3

This release includes RHSA-2022:5556, OpenShift Logging Bug Fix Release 5.4.3.

1.37.1. Elasticsearch Operator deprecation notice

In logging subsystem 5.4.3, the Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.

1.37.2. Bug fixes

  • Before this update, the OpenShift Logging Dashboard showed the number of active primary shards instead of all active shards. With this update, the dashboard displays all active shards. (LOG-2781)
  • Before this update, a library used by elasticsearch-operator contained a denial of service vulnerability. With this update, the library has been updated to a version that does not contain this vulnerability. (LOG-2816)
  • Before this update, when configuring Vector to forward logs to Loki, it was not possible to set a custom bearer token or use the default token if Loki had TLS enabled. With this update, Vector can forward logs to Loki by using tokens with TLS enabled. (LOG-2786)
  • Before this update, the Elasticsearch Operator omitted the referencePolicy property of the ImageStream custom resource when selecting an oauth-proxy image. This omission caused the Kibana deployment to fail in specific environments. With this update, using referencePolicy resolves the issue, and the Operator can deploy Kibana successfully. (LOG-2791)
  • Before this update, alerting rules for the ClusterLogForwarder custom resource did not take multiple forward outputs into account. This update resolves the issue. (LOG-2640)
  • Before this update, clusters configured to forward logs to Amazon CloudWatch wrote rejected log files to temporary storage, causing cluster instability over time. With this update, chunk backup for CloudWatch has been disabled, resolving the issue. (LOG-2768)

1.37.3. CVEs

1.38. Logging 5.4.2

This release includes RHBA-2022:4874, OpenShift Logging Bug Fix Release 5.4.2.

1.38.1. Bug fixes

  • Before this update, editing the collector configuration by using oc edit was difficult because of inconsistent use of whitespace. This change introduces logic to normalize and format the configuration before any updates by the Operator so that it is easy to edit by using oc edit. (LOG-2319)
  • Before this update, the FluentdNodeDown alert could not provide instance labels in the message section appropriately. This update resolves the issue by fixing the alert rule to provide instance labels in cases of partial instance failures. (LOG-2607)
  • Before this update, several log levels, such as critical, that were documented as supported by the product were not supported. This update fixes the discrepancy so that the documented log levels are now supported by the product. (LOG-2033)

1.38.2. CVEs

1.39. Logging 5.4.1

This release includes RHSA-2022:2216, OpenShift Logging Bug Fix Release 5.4.1.

1.39.1. Bug fixes

  • Before this update, the log file metric exporter only reported logs created while the exporter was running, which resulted in inaccurate log growth data. This update resolves this issue by monitoring /var/log/pods. (LOG-2442)
  • Before this update, the collector would be blocked because it continually tried to use a stale connection when forwarding logs to fluentd forward receivers. With this release, the keepalive_timeout value has been set to 30 seconds (30s) so that the collector recycles the connection and re-attempts to send failed messages within a reasonable amount of time. (LOG-2534)
  • Before this update, an error in the gateway component that enforces tenancy for reading logs limited access to logs with a Kubernetes namespace, causing "audit" and some "infrastructure" logs to be unreadable. With this update, the proxy correctly detects users with admin access and allows access to logs without a namespace. (LOG-2448)
  • Before this update, the system:serviceaccount:openshift-monitoring:prometheus-k8s service account had cluster-level privileges as a cluster role and cluster role binding. This update restricts the service account to the openshift-logging namespace with a role and role binding. (LOG-2437)
  • Before this update, Linux audit log time parsing relied on an ordinal position of a key/value pair. This update changes the parsing to use a regular expression to find the time entry. (LOG-2321)

1.39.2. CVEs

1.40. Logging 5.4

The following advisories are available for logging 5.4: Logging subsystem for Red Hat OpenShift Release 5.4

1.40.1. Technology Previews

Important

Vector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

1.40.2. About Vector

Vector is a log collector offered as a Technology Preview alternative to the current default collector for the logging subsystem.

The following outputs are supported:

  • elasticsearch. An external Elasticsearch instance. The elasticsearch output can use a TLS connection.
  • kafka. A Kafka broker. The kafka output can use an unsecured or TLS connection.
  • loki. Loki, a horizontally scalable, highly available, multi-tenant log aggregation system.
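
As an illustration of one of these outputs, the following ClusterLogForwarder snippet forwards application logs to an external Elasticsearch instance over TLS after Vector is enabled as described in the next section. This is a minimal sketch only; the output name, URL, and secret name are assumptions.

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogForwarder
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    outputs:
    - name: external-es
      type: elasticsearch
      url: https://elasticsearch.example.com:9200   # assumed external endpoint
      secret:
        name: es-tls-secret                         # assumed secret holding the TLS material
    pipelines:
    - name: application-logs
      inputRefs:
      - application
      outputRefs:
      - external-es
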
1.40.2.1. Enabling Vector

Vector is not enabled by default. Use the following steps to enable Vector on your OpenShift Container Platform cluster.

Important

Vector does not support FIPS-enabled clusters.

Prerequisites

  • OpenShift Container Platform: 4.10
  • Logging subsystem for Red Hat OpenShift: 5.4
  • FIPS disabled

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc -n openshift-logging edit ClusterLogging instance
  2. Add a logging.openshift.io/preview-vector-collector: enabled annotation to the ClusterLogging custom resource (CR).
  3. Add vector as a collection type to the ClusterLogging custom resource (CR).
  apiVersion: "logging.openshift.io/v1"
  kind: "ClusterLogging"
  metadata:
    name: "instance"
    namespace: "openshift-logging"
    annotations:
      logging.openshift.io/preview-vector-collector: enabled
  spec:
    collection:
      logs:
        type: "vector"
        vector: {}

Important

Loki Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

1.40.3. About Loki

Loki is a horizontally scalable, highly available, multi-tenant log aggregation system currently offered as an alternative to Elasticsearch as a log store for the logging subsystem.

1.40.3.1. Deploying the LokiStack

You can use the OpenShift Container Platform web console to install the Loki Operator.

Prerequisites

  • OpenShift Container Platform: 4.10
  • Logging subsystem for Red Hat OpenShift: 5.4

To install the Loki Operator using the OpenShift Container Platform web console:

  1. Install the Loki Operator:

    1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
    2. Choose Loki Operator from the list of available Operators, and click Install.
    3. Under Installation Mode, select All namespaces on the cluster.
    4. Under Installed Namespace, select openshift-operators-redhat.

      You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as an OpenShift Container Platform metric, which would cause conflicts.

    5. Select Enable operator recommended cluster monitoring on this namespace.

      This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.

    6. Select an Approval Strategy.

      • The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
      • The Manual strategy requires a user with appropriate credentials to approve the Operator update.
    7. Click Install.
    8. Verify that you installed the Loki Operator. Visit the Operators → Installed Operators page and look for "Loki Operator."
    9. Ensure that the Loki Operator is listed in all projects and that its Status is Succeeded. You can also verify the installation from the command line, as shown after this procedure.
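
As a hedged alternative to the console check, you can list the ClusterServiceVersions in the installation namespace; the exact ClusterServiceVersion name varies by release:

    $ oc get csv -n openshift-operators-redhat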

1.40.4. Bug fixes

  • Before this update, the cluster-logging-operator used cluster scoped roles and bindings to establish permissions for the Prometheus service account to scrape metrics. These permissions were created when deploying the Operator using the console interface but were missing when deploying from the command line. This update fixes the issue by making the roles and bindings namespace-scoped. (LOG-2286)
  • Before this update, a prior change to fix dashboard reconciliation introduced an ownerReferences field to the resource across namespaces. As a result, both the config map and dashboard were not created in the namespace. With this update, the removal of the ownerReferences field resolves the issue, and the OpenShift Logging dashboard is available in the console. (LOG-2163)
  • Before this update, changes to the metrics dashboards did not deploy because the cluster-logging-operator did not correctly compare existing and modified config maps that contain the dashboard. With this update, the addition of a unique hash value to object labels resolves the issue. (LOG-2071)
  • Before this update, the OpenShift Logging dashboard did not correctly display the pods and namespaces in the table, which displays the top producing containers collected over the last 24 hours. With this update, the pods and namespaces are displayed correctly. (LOG-2069)
  • Before this update, when the ClusterLogForwarder was set up with Elasticsearch OutputDefault and Elasticsearch outputs did not have structured keys, the generated configuration contained the incorrect values for authentication. This update corrects the secret and certificates used. (LOG-2056)
  • Before this update, the OpenShift Logging dashboard displayed an empty CPU graph because of a reference to an invalid metric. With this update, the correct data point has been selected, resolving the issue. (LOG-2026)
  • Before this update, the Fluentd container image included builder tools that were unnecessary at run time. This update removes those tools from the image. (LOG-1927)
  • Before this update, a name change of the deployed collector in the 5.3 release caused the logging collector to generate the FluentdNodeDown alert. This update resolves the issue by fixing the job name for the Prometheus alert. (LOG-1918)
  • Before this update, the log collector was collecting its own logs due to a refactoring of the component name change. This led to a potential feedback loop in which the collector processed its own logs, which might result in memory and log message size issues. This update resolves the issue by excluding the collector logs from the collection. (LOG-1774)
  • Before this update, Elasticsearch generated the error Unable to create PersistentVolumeClaim due to forbidden: exceeded quota: infra-storage-quota. if the PVC already existed. With this update, Elasticsearch checks for existing PVCs, resolving the issue. (LOG-2131)
  • Before this update, Elasticsearch was unable to return to the ready state when the elasticsearch-signing secret was removed. With this update, Elasticsearch is able to go back to the ready state after that secret is removed. (LOG-2171)
  • Before this update, the change of the path from which the collector reads container logs caused the collector to forward some records to the wrong indices. With this update, the collector now uses the correct configuration to resolve the issue. (LOG-2160)
  • Before this update, clusters with a large number of namespaces caused Elasticsearch to stop serving requests because the list of namespaces reached the maximum header size limit. With this update, headers only include a list of namespace names, resolving the issue. (LOG-1899)
  • Before this update, the OpenShift Container Platform Logging dashboard showed the number of shards 'x' times larger than the actual value when Elasticsearch had 'x' nodes. This issue occurred because it was printing all primary shards for each Elasticsearch pod and calculating a sum on it, although the output was always for the whole Elasticsearch cluster. With this update, the number of shards is now correctly calculated. (LOG-2156)
  • Before this update, the secrets kibana and kibana-proxy were not recreated if they were deleted manually. With this update, the elasticsearch-operator will watch the resources and automatically recreate them if deleted. (LOG-2250)
  • Before this update, tuning the buffer chunk size could cause the collector to generate a warning about the chunk size exceeding the byte limit for the event stream. With this update, you can also tune the read line limit, resolving the issue. (LOG-2379)
  • Before this update, the logging console link in OpenShift web console was not removed with the ClusterLogging CR. With this update, deleting the CR or uninstalling the Cluster Logging Operator removes the link. (LOG-2373)
  • Before this update, a change to the container logs path caused the collection metric to always be zero with older releases configured with the original path. With this update, the plugin which exposes metrics about collected logs supports reading from either path to resolve the issue. (LOG-2462)

1.40.5. CVEs

1.41. Logging 5.3.14

This release includes OpenShift Logging Bug Fix Release 5.3.14.

1.41.1. Bug fixes

  • Before this update, the log file size map generated by the log-file-metrics-exporter component did not remove entries for deleted files, resulting in increased file size, and process memory. With this update, the log file size map does not contain entries for deleted files. (LOG-3293)

1.41.2. CVEs

1.42. Logging 5.3.13

This release includes RHSA-2022:68828-OpenShift Logging Bug Fix Release 5.3.13.

1.42.1. Bug fixes

None.

1.42.2. CVEs

1.43. Logging 5.3.12

This release includes OpenShift Logging Bug Fix Release 5.3.12.

1.43.1. Bug fixes

None.

1.43.2. CVEs

1.44. Logging 5.3.11

This release includes OpenShift Logging Bug Fix Release 5.3.11.

1.44.1. Bug fixes

  • Before this update, the Operator did not ensure that the pod was ready, which caused the cluster to reach an inoperable state during a cluster restart. With this update, the Operator marks new pods as ready before continuing to a new pod during a restart, which resolves the issue. (LOG-2871)

1.44.2. CVEs

1.45. Logging 5.3.10

This release includes RHSA-2022:5908-OpenShift Logging Bug Fix Release 5.3.10.

1.45.1. Bug fixes

1.45.2. CVEs

1.46. Logging 5.3.9

This release includes RHBA-2022:5557-OpenShift Logging Bug Fix Release 5.3.9.

1.46.1. Bug fixes

  • Before this update, the logging collector included a path as a label for the metrics it produced. This path changed frequently and contributed to significant storage changes for the Prometheus server. With this update, the label has been dropped to resolve the issue and reduce storage consumption. (LOG-2682)

1.46.2. CVEs

1.47. Logging 5.3.8

This release includes RHBA-2022:5010-OpenShift Logging Bug Fix Release 5.3.8

1.47.1. Bug fixes

None.

1.47.2. CVEs

1.48. OpenShift Logging 5.3.7

This release includes RHSA-2022:2217 OpenShift Logging Bug Fix Release 5.3.7

1.48.1. Bug fixes

  • Before this update, Linux audit log time parsing relied on the ordinal position of a key/value pair. This update changes the parsing to use a regular expression to find the time entry. (LOG-2322)
  • Before this update, some log forwarder outputs could re-order logs with the same time-stamp. With this update, a sequence number has been added to the log record to order entries that have matching timestamps. (LOG-2334)
  • Before this update, clusters with a large number of namespaces caused Elasticsearch to stop serving requests because the list of namespaces reached the maximum header size limit. With this update, headers only include a list of namespace names, resolving the issue. (LOG-2450)
  • Before this update, system:serviceaccount:openshift-monitoring:prometheus-k8s had cluster-level privileges as a clusterrole and clusterrolebinding. This update restricts the serviceaccount to the openshift-logging namespace with a role and rolebinding. (LOG-2481)

1.48.2. CVEs

1.49. OpenShift Logging 5.3.6

This release includes RHBA-2022:1377 OpenShift Logging Bug Fix Release 5.3.6

1.49.1. Bug fixes

  • Before this update, defining a toleration with no key and the existing Operator caused the Operator to be unable to complete an upgrade. With this update, this toleration no longer blocks the upgrade from completing. (LOG-2126)
  • Before this change, the collector could generate a warning when an emitted event exceeded the chunk byte limit. With this change, you can tune the read line limit to resolve the issue, as advised by the upstream documentation. (LOG-2380)

1.50. OpenShift Logging 5.3.5

This release includes RHSA-2022:0721 OpenShift Logging Bug Fix Release 5.3.5

1.50.1. Bug fixes

  • Before this update, if you removed OpenShift Logging from OpenShift Container Platform, the web console continued displaying a link to the Logging page. With this update, removing or uninstalling OpenShift Logging also removes that link. (LOG-2182)

1.50.2. CVEs

1.51. OpenShift Logging 5.3.4

This release includes RHBA-2022:0411 OpenShift Logging Bug Fix Release 5.3.4

1.51.1. Bug fixes

  • Before this update, changes to the metrics dashboards had not yet been deployed because the cluster-logging-operator did not correctly compare existing and desired config maps that contained the dashboard. This update fixes the logic by adding a unique hash value to the object labels. (LOG-2066)
  • Before this update, Elasticsearch pods failed to start after updating with FIPS enabled. With this update, Elasticsearch pods start successfully. (LOG-1974)
  • Before this update, Elasticsearch generated the error "Unable to create PersistentVolumeClaim due to forbidden: exceeded quota: infra-storage-quota." if the PVC already existed. With this update, Elasticsearch checks for existing PVCs, resolving the issue. (LOG-2127)

1.51.2. CVEs

1.52. OpenShift Logging 5.3.3

This release includes RHSA-2022:0227 OpenShift Logging Bug Fix Release 5.3.3

1.52.1. Bug fixes

  • Before this update, changes to the metrics dashboards had not yet been deployed because the cluster-logging-operator did not correctly compare existing and desired config maps containing the dashboard. This update fixes the logic by adding a unique hash value for the dashboard to the object labels. (LOG-2066)
  • This update changes the log4j dependency to 2.17.1 to resolve CVE-2021-44832. (LOG-2102)

1.52.2. CVEs


1.53. OpenShift Logging 5.3.2

This release includes RHSA-2022:0044 OpenShift Logging Bug Fix Release 5.3.2

1.53.1. Bug fixes

  • Before this update, Elasticsearch rejected logs from the Event Router due to a parsing error. This update changes the data model to resolve the parsing error. However, as a result, previous indices might cause warnings or errors within Kibana. The kubernetes.event.metadata.resourceVersion field causes errors until existing indices are removed or reindexed. If this field is not used in Kibana, you can ignore the error messages. If you have a retention policy that deletes old indices, the policy eventually removes the old indices and stops the error messages. Otherwise, manually reindex to stop the error messages. (LOG-2087)
  • Before this update, the OpenShift Logging Dashboard displayed the wrong pod namespace in the table that displays top producing and collected containers over the last 24 hours. With this update, the OpenShift Logging Dashboard displays the correct pod namespace. (LOG-2051)
  • Before this update, if outputDefaults.elasticsearch.structuredTypeKey in the ClusterLogForwarder custom resource (CR) instance did not have a structured key, the CR replaced the output secret with the default secret used to communicate to the default log store. With this update, the defined output secret is correctly used. (LOG-2046)

1.53.2. CVEs

1.54. OpenShift Logging 5.3.1

This release includes RHSA-2021:5129 OpenShift Logging Bug Fix Release 5.3.1

1.54.1. Bug fixes

  • Before this update, the Fluentd container image included builder tools that were unnecessary at run time. This update removes those tools from the image. (LOG-1998)
  • Before this update, the Logging dashboard displayed an empty CPU graph because of a reference to an invalid metric. With this update, the Logging dashboard displays CPU graphs correctly. (LOG-1925)
  • Before this update, the Elasticsearch Prometheus exporter plugin compiled index-level metrics using a high-cost query that impacted the Elasticsearch node performance. This update implements a lower-cost query that improves performance. (LOG-1897)

1.54.2. CVEs

1.55. OpenShift Logging 5.3.0

This release includes RHSA-2021:4627 OpenShift Logging Bug Fix Release 5.3.0

1.55.1. New features and enhancements

  • With this update, authorization options for Log Forwarding have been expanded. Outputs can now be configured with SASL, username/password, or TLS, as shown in the hedged example that follows.
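
    For example, the following is a minimal sketch of creating a username/password secret that an output can reference; the secret name my-output-secret is a hypothetical placeholder:

    $ oc create secret generic my-output-secret -n openshift-logging \
        --from-literal=username=<username> \
        --from-literal=password=<password>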

1.55.2. Bug fixes

  • Before this update, if you forwarded logs by using the syslog protocol, serializing the Ruby hash placed a '⇒' character between key/value pairs and replaced tabs with "#11". This update fixes the issue so that log messages are correctly serialized as valid JSON. (LOG-1494)
  • Before this update, application logs were not forwarded to the correct CloudWatch stream when multi-line error detection was enabled. This update resolves the issue. (LOG-1939)
  • Before this update, a name change of the deployed collector in the 5.3 release caused the logging collector to generate the FluentdNodeDown alert. This update resolves the issue by fixing the job name for the Prometheus alert. (LOG-1918)
  • Before this update, a regression introduced in a prior release configuration caused the collector to flush its buffered messages before shutdown, creating a delay in the termination and restart of collector pods. With this update, fluentd no longer flushes buffers at shutdown, resolving the issue. (LOG-1735)
  • Before this update, a regression introduced in a prior release intentionally disabled JSON message parsing. This update re-enables JSON parsing. It also sets the log entry "level" based on the "level" field in parsed JSON message or by using regex to extract a match from a message field. (LOG-1199)
  • Before this update, the ClusterLogging custom resource (CR) applied the value of the totalLimitSize field to the Fluentd total_limit_size field, even if the required buffer space was not available. With this update, the CR applies the lesser of the two totalLimitSize or 'default' values to the Fluentd total_limit_size field, resolving the issue. (LOG-1776)

1.55.3. Known issues

  • If you forward logs to an external Elasticsearch server and then change a configured value in the pipeline secret, such as the username and password, the Fluentd forwarder loads the new secret but uses the old value to connect to an external Elasticsearch server. This issue happens because the Red Hat OpenShift Logging Operator does not currently monitor secrets for content changes. (LOG-1652)

    As a workaround, if you change the secret, you can force the Fluentd pods to redeploy by entering:

    $ oc delete pod -l component=collector

1.55.4. Deprecated and removed features

Some features available in previous releases have been deprecated or removed.

Deprecated functionality is still included in OpenShift Logging and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

1.55.4.1. Forwarding logs using the legacy Fluentd and legacy syslog methods have been removed

In OpenShift Logging 5.3, the legacy methods of forwarding logs to Syslog and Fluentd are removed. Bug fixes and support are provided through the end of the OpenShift Logging 5.2 life cycle, after which no new feature enhancements are made.

Instead, use the ClusterLogForwarder custom resource to forward logs by using the Fluentd forward protocol or the syslog protocol.

1.55.4.2. Configuration mechanisms for legacy forwarding methods have been removed

In OpenShift Logging 5.3, the legacy configuration mechanism for log forwarding is removed: You cannot forward logs using the legacy Fluentd method and legacy Syslog method. Use the standard log forwarding methods instead.

1.55.5. CVEs


1.56. Logging 5.2.13

This release includes RHSA-2022:5909-OpenShift Logging Bug Fix Release 5.2.13.

1.56.1. Bug fixes

1.56.2. CVEs

1.57. Logging 5.2.12

This release includes RHBA-2022:5558-OpenShift Logging Bug Fix Release 5.2.12.

1.57.1. Bug fixes

None.

1.57.2. CVEs

1.58. Logging 5.2.11

This release includes RHBA-2022:5012-OpenShift Logging Bug Fix Release 5.2.11

1.58.1. Bug fixes

  • Before this update, clusters configured to perform CloudWatch forwarding wrote rejected log files to temporary storage, causing cluster instability over time. With this update, chunk backup for CloudWatch has been disabled, resolving the issue. (LOG-2635)

1.58.2. CVEs

1.59. OpenShift Logging 5.2.10

This release includes OpenShift Logging Bug Fix Release 5.2.10.

1.59.1. Bug fixes

  • Before this update, some log forwarder outputs could re-order logs with the same timestamp. With this update, a sequence number has been added to the log record to order entries that have matching timestamps. (LOG-2335)
  • Before this update, clusters with a large number of namespaces caused Elasticsearch to stop serving requests because the list of namespaces reached the maximum header size limit. With this update, headers only include a list of namespace names, resolving the issue. (LOG-2475)
  • Before this update, system:serviceaccount:openshift-monitoring:prometheus-k8s had cluster level privileges as a clusterrole and clusterrolebinding. This update restricts the serviceaccount to the openshift-logging namespace with a role and rolebinding. (LOG-2480)
  • Before this update, the cluster-logging-operator used cluster scoped roles and bindings to establish permissions for the Prometheus service account to scrape metrics. These permissions were only created when deploying the Operator using the console interface and were missing when the Operator was deployed from the command line. This update fixes the issue by making the role and binding namespace scoped. (LOG-1972)

1.59.2. CVEs

1.60. OpenShift Logging 5.2.9

This release includes RHBA-2022:1375 OpenShift Logging Bug Fix Release 5.2.9.

1.60.1. Bug fixes

  • Before this update, defining a toleration with no key and the existing Operator caused the Operator to be unable to complete an upgrade. With this update, this toleration no longer blocks the upgrade from completing. (LOG-2304)

1.61. OpenShift Logging 5.2.8

This release includes RHSA-2022:0728 OpenShift Logging Bug Fix Release 5.2.8

1.61.1. Bug fixes

  • Before this update, if you removed OpenShift Logging from OpenShift Container Platform, the web console continued displaying a link to the Logging page. With this update, removing or uninstalling OpenShift Logging also removes that link. (LOG-2180)

1.61.2. CVEs


1.62. OpenShift Logging 5.2.7

This release includes RHBA-2022:0478 OpenShift Logging Bug Fix Release 5.2.7

1.62.1. Bug fixes

  • Before this update, Elasticsearch pods with FIPS enabled failed to start after updating. With this update, Elasticsearch pods start successfully. (LOG-2000)
  • Before this update, if a persistent volume claim (PVC) already existed, Elasticsearch generated an error, "Unable to create PersistentVolumeClaim due to forbidden: exceeded quota: infra-storage-quota." With this update, Elasticsearch checks for existing PVCs, resolving the issue. (LOG-2118)

1.62.2. CVEs

1.63. OpenShift Logging 5.2.6

This release includes RHSA-2022:0230 OpenShift Logging Bug Fix Release 5.2.6

1.63.1. Bug fixes

  • Before this update, the release did not include a filter change which caused Fluentd to crash. With this update, the missing filter has been corrected. (LOG-2104)
  • This update changes the log4j dependency to 2.17.1 to resolve CVE-2021-44832. (LOG-2101)

1.63.2. CVEs


1.64. OpenShift Logging 5.2.5

This release includes RHSA-2022:0043 OpenShift Logging Bug Fix Release 5.2.5

1.64.1. Bug fixes

  • Before this update, Elasticsearch rejected logs from the Event Router due to a parsing error. This update changes the data model to resolve the parsing error. However, as a result, previous indices might cause warnings or errors within Kibana. The kubernetes.event.metadata.resourceVersion field causes errors until existing indices are removed or reindexed. If this field is not used in Kibana, you can ignore the error messages. If you have a retention policy that deletes old indices, the policy eventually removes the old indices and stops the error messages. Otherwise, manually reindex to stop the error messages. (LOG-2087)

1.64.2. CVEs


1.65. OpenShift Logging 5.2.4

This release includes RHSA-2021:5127 OpenShift Logging Bug Fix Release 5.2.4

1.65.1. Bug fixes

  • Before this update, records shipped via syslog serialized the Ruby hash with a '⇒' character between key/value pairs and replaced tabs with "#11". This update serializes the message correctly as proper JSON. (LOG-1775)
  • Before this update, the Elasticsearch Prometheus exporter plugin compiled index-level metrics using a high-cost query that impacted the Elasticsearch node performance. This update implements a lower-cost query that improves performance. (LOG-1970)
  • Before this update, Elasticsearch sometimes rejected messages when Log Forwarding was configured with multiple outputs. This happened because configuring one of the outputs modified message content to be a single message. With this update, Log Forwarding duplicates the messages for each output so that output-specific processing does not affect the other outputs. (LOG-1824)

1.65.2. CVEs

1.66. OpenShift Logging 5.2.3

This release includes RHSA-2021:4032 OpenShift Logging Bug Fix Release 5.2.3

1.66.1. Bug fixes

  • Before this update, some alerts did not include a namespace label. This omission does not comply with the OpenShift Monitoring Team’s guidelines for writing alerting rules in OpenShift Container Platform. With this update, all the alerts in Elasticsearch Operator include a namespace label and follow all the guidelines for writing alerting rules in OpenShift Container Platform. (LOG-1857)
  • Before this update, a regression introduced in a prior release intentionally disabled JSON message parsing. This update re-enables JSON parsing. It also sets the log entry level based on the level field in parsed JSON message or by using regex to extract a match from a message field. (LOG-1759)

1.66.2. CVEs

1.67. OpenShift Logging 5.2.2

This release includes RHBA-2021:3747 OpenShift Logging Bug Fix Release 5.2.2

1.67.1. Bug fixes

  • Before this update, the ClusterLogging custom resource (CR) applied the value of the totalLimitSize field to the Fluentd total_limit_size field, even if the required buffer space was not available. With this update, the CR applies the lesser of the two totalLimitSize or 'default' values to the Fluentd total_limit_size field, resolving the issue. (LOG-1738)
  • Before this update, a regression introduced in a prior release configuration caused the collector to flush its buffered messages before shutdown, creating a delay in the termination and restart of collector pods. With this update, Fluentd no longer flushes buffers at shutdown, resolving the issue. (LOG-1739)
  • Before this update, an issue in the bundle manifests prevented installation of the Elasticsearch Operator through OLM on OpenShift Container Platform 4.9. With this update, a correction to bundle manifests re-enables installation and upgrade in 4.9. (LOG-1780)

1.67.2. CVEs

1.68. OpenShift Logging 5.2.1

This release includes RHBA-2021:3550 OpenShift Logging Bug Fix Release 5.2.1

1.68.1. Bug fixes

  • Before this update, due to an issue in the release pipeline scripts, the value of the olm.skipRange field remained unchanged at 5.2.0 instead of reflecting the current release number. This update fixes the pipeline scripts to update the value of this field when the release numbers change. (LOG-1743)

1.68.2. CVEs

None.

1.69. OpenShift Logging 5.2.0

This release includes RHBA-2021:3393 OpenShift Logging Bug Fix Release 5.2.0

1.69.1. New features and enhancements

  • With this update, you can forward log data to Amazon CloudWatch, which provides application and infrastructure monitoring. For more information, see Forwarding logs to Amazon CloudWatch; a hedged configuration sketch also follows this list. (LOG-1173)
  • With this update, you can forward log data to Loki, a horizontally scalable, highly available, multi-tenant log aggregation system. For more information, see Forwarding logs to Loki. (LOG-684)
  • With this update, if you use the Fluentd forward protocol to forward log data over a TLS-encrypted connection, now you can use a password-encrypted private key file and specify the passphrase in the Cluster Log Forwarder configuration. For more information, see Forwarding logs using the Fluentd forward protocol. (LOG-1525)
  • This enhancement enables you to use a username and password to authenticate a log forwarding connection to an external Elasticsearch instance. For example, if you cannot use mutual TLS (mTLS) because a third-party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password. For more information, see Forwarding logs to an external Elasticsearch instance. (LOG-1022)
  • With this update, you can collect OVN network policy audit logs for forwarding to a logging server. (LOG-1526)
  • By default, the data model introduced in OpenShift Container Platform 4.5 gave logs from different namespaces a single index in common. This change made it harder to see which namespaces produced the most logs.

    The current release adds namespace metrics to the Logging dashboard in the OpenShift Container Platform console. With these metrics, you can see which namespaces produce logs and how many logs each namespace produces for a given timestamp.

    To see these metrics, open the Administrator perspective in the OpenShift Container Platform web console, and navigate to Observe → Dashboards → Logging/Elasticsearch. (LOG-1680)

  • The current release, OpenShift Logging 5.2, enables two new metrics: For a given timestamp or duration, you can see the total logs produced or logged by individual containers, and the total logs collected by the collector. These metrics are labeled by namespace, pod, and container name so that you can see how many logs each namespace and pod collects and produces. (LOG-1213)
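
To illustrate the CloudWatch output mentioned in this list, the following ClusterLogForwarder sketch assumes a secret named cw-secret that contains AWS credential keys; the output name, region, and grouping settings are assumptions to adapt for your environment, and only the forwarding-related fields are shown.

  apiVersion: "logging.openshift.io/v1"
  kind: "ClusterLogForwarder"
  metadata:
    name: "instance"
    namespace: "openshift-logging"
  spec:
    outputs:
    - name: cw                      # hypothetical output name
      type: cloudwatch
      cloudwatch:
        groupBy: logType            # group log streams by log type
        region: us-east-2           # assumed AWS region
      secret:
        name: cw-secret             # assumed secret with aws_access_key_id and aws_secret_access_key
    pipelines:
    - name: to-cloudwatch
      inputRefs:
      - application
      - infrastructure
      outputRefs:
      - cw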

1.69.2. Bug fixes

  • Before this update, when the OpenShift Elasticsearch Operator created index management cronjobs, it added the POLICY_MAPPING environment variable twice, which caused the apiserver to report the duplication. This update fixes the issue so that the POLICY_MAPPING environment variable is set only once per cronjob, and there is no duplication for the apiserver to report. (LOG-1130)
  • Before this update, suspending an Elasticsearch cluster to zero nodes did not suspend the index-management cronjobs, which put these cronjobs into maximum backoff. Then, after unsuspending the Elasticsearch cluster, these cronjobs stayed halted due to maximum backoff reached. This update resolves the issue by suspending the cronjobs and the cluster. (LOG-1268)
  • Before this update, in the Logging dashboard in the OpenShift Container Platform console, the list of top 10 log-producing containers was missing the "chart namespace" label and provided the incorrect metric name, fluentd_input_status_total_bytes_logged. With this update, the chart shows the namespace label and the correct metric name, log_logged_bytes_total. (LOG-1271)
  • Before this update, if an index management cronjob terminated with an error, it did not report the error exit code: instead, its job status was "complete." This update resolves the issue by reporting the error exit codes of index management cronjobs that terminate with errors. (LOG-1273)
  • The priorityclasses.v1beta1.scheduling.k8s.io was removed in Kubernetes 1.22 and replaced by priorityclasses.v1.scheduling.k8s.io (v1beta1 was replaced by v1). Before this update, APIRemovedInNextReleaseInUse alerts were generated for priorityclasses because v1beta1 was still present. This update resolves the issue by replacing v1beta1 with v1. The alert is no longer generated. (LOG-1385)
  • Previously, the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator did not have the annotation that was required for them to appear in the OpenShift Container Platform web console list of Operators that can run in a disconnected environment. This update adds the operators.openshift.io/infrastructure-features: '["Disconnected"]' annotation to these two Operators so that they appear in the list of Operators that run in disconnected environments. (LOG-1420)
  • Before this update, Red Hat OpenShift Logging Operator pods were scheduled on CPU cores that were reserved for customer workloads on performance-optimized single-node clusters. With this update, cluster logging Operator pods are scheduled on the correct CPU cores. (LOG-1440)
  • Before this update, some log entries had unrecognized UTF-8 bytes, which caused Elasticsearch to reject the messages and block the entire buffered payload. With this update, rejected payloads drop the invalid log entries and resubmit the remaining entries to resolve the issue. (LOG-1499)
  • Before this update, the kibana-proxy pod sometimes entered the CrashLoopBackoff state and logged the following message: Invalid configuration: cookie_secret must be 16, 24, or 32 bytes to create an AES cipher when pass_access_token == true or cookie_refresh != 0, but is 29 bytes. The exact number of bytes could vary. With this update, the generation of the Kibana session secret has been corrected, and the kibana-proxy pod no longer enters a CrashLoopBackoff state due to this error. (LOG-1446)
  • Before this update, the AWS CloudWatch Fluentd plugin logged its AWS API calls to the Fluentd log at all log levels, consuming additional OpenShift Container Platform node resources. With this update, the AWS CloudWatch Fluentd plugin logs AWS API calls only at the "debug" and "trace" log levels. This way, at the default "warn" log level, Fluentd does not consume extra node resources. (LOG-1071)
  • Before this update, the Elasticsearch OpenDistro security plugin caused user index migrations to fail. This update resolves the issue by providing a newer version of the plugin. Now, index migrations proceed without errors. (LOG-1276)
  • Before this update, in the Logging dashboard in the OpenShift Container Platform console, the list of top 10 log-producing containers lacked data points. This update resolves the issue, and the dashboard displays all data points. (LOG-1353)
  • Before this update, if you were tuning the performance of the Fluentd log forwarder by adjusting the chunkLimitSize and totalLimitSize values, the "Setting queued_chunks_limit_size for each buffer to" message reported values that were too low. The current update fixes this issue so that this message reports the correct values. (LOG-1411)
  • Before this update, the Kibana OpenDistro security plugin caused user index migrations to fail. This update resolves the issue by providing a newer version of the plugin. Now, index migrations proceed without errors. (LOG-1558)
  • Before this update, using a namespace input filter prevented logs in that namespace from appearing in other inputs. With this update, logs are sent to all inputs that can accept them. (LOG-1570)
  • Before this update, a missing license file for the viaq/logerr dependency caused license scanners to abort without success. With this update, the viaq/logerr dependency is licensed under Apache 2.0 and the license scanners run successfully. (LOG-1590)
  • Before this update, an incorrect brew tag for curator5 within the elasticsearch-operator-bundle build pipeline caused the pull of an image pinned to a dummy SHA1. With this update, the build pipeline uses the logging-curator5-rhel8 reference for curator5, enabling index management cronjobs to pull the correct image from registry.redhat.io. (LOG-1624)
  • Before this update, an issue with the ServiceAccount permissions caused errors such as no permissions for [indices:admin/aliases/get]. With this update, a permission fix resolves the issue. (LOG-1657)
  • Before this update, the Custom Resource Definition (CRD) for the Red Hat OpenShift Logging Operator was missing the Loki output type, which caused the admission controller to reject the ClusterLogForwarder custom resource object. With this update, the CRD includes Loki as an output type so that administrators can configure ClusterLogForwarder to send logs to a Loki server. (LOG-1683)
  • Before this update, OpenShift Elasticsearch Operator reconciliation of the ServiceAccounts overwrote third-party-owned fields that contained secrets. This issue caused memory and CPU spikes due to frequent recreation of secrets. This update resolves the issue. Now, the OpenShift Elasticsearch Operator does not overwrite third-party-owned fields. (LOG-1714)
  • Before this update, in the ClusterLogging custom resource (CR) definition, if you specified a flush_interval value but did not set flush_mode to interval, the Red Hat OpenShift Logging Operator generated a Fluentd configuration. However, the Fluentd collector generated an error at runtime. With this update, the Red Hat OpenShift Logging Operator validates the ClusterLogging CR definition and only generates the Fluentd configuration if both fields are specified. (LOG-1723)

1.69.3. Known issues

  • If you forward logs to an external Elasticsearch server and then change a configured value in the pipeline secret, such as the username and password, the Fluentd forwarder loads the new secret but uses the old value to connect to an external Elasticsearch server. This issue happens because the Red Hat OpenShift Logging Operator does not currently monitor secrets for content changes. (LOG-1652)

    As a workaround, if you change the secret, you can force the Fluentd pods to redeploy by entering:

    $ oc delete pod -l component=collector

1.69.4. Deprecated and removed features

Some features available in previous releases have been deprecated or removed.

Deprecated functionality is still included in OpenShift Logging and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

1.69.5. Forwarding logs using the legacy Fluentd and legacy syslog methods have been deprecated

From OpenShift Container Platform 4.6 to the present, forwarding logs by using the following legacy methods has been deprecated and will be removed in a future release:

  • Forwarding logs using the legacy Fluentd method
  • Forwarding logs using the legacy syslog method

Instead, use the ClusterLogForwarder custom resource to forward logs by using the Fluentd forward protocol or the syslog protocol.

1.69.6. CVEs

Chapter 2. Support

Only the configuration options described in this documentation are supported for the logging subsystem.

Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences.

Note

If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged OpenShift Logging environment is not supported and does not receive updates until you return OpenShift Logging to Managed.
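
A minimal sketch of making that change, assuming the ClusterLogging instance is named instance in the openshift-logging project:

    $ oc -n openshift-logging patch clusterlogging/instance --type merge \
        -p '{"spec":{"managementState":"Unmanaged"}}'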

Note

The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

The logging subsystem for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems.

The logging subsystem for Red Hat OpenShift is not:

  • A high scale log collection system
  • Security Information and Event Monitoring (SIEM) compliant
  • Historical or long term log retention or storage
  • A guaranteed log sink
  • Secure storage - audit logs are not stored by default

Chapter 3. Logging 5.6

3.1. Logging 5.6 Release Notes

Note

The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

Note

The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X where X is the version of logging you have installed.

3.1.1. Logging 5.6.11

This release includes OpenShift Logging Bug Fix Release 5.6.11.

3.1.1.1. Bug fixes
  • Before this update, the LokiStack gateway cached authorized requests very broadly. As a result, this caused wrong authorization results. With this update, LokiStack gateway caches on a more fine-grained basis which resolves this issue. (LOG-4435)
3.1.1.2. CVEs

3.1.2. Logging 5.6.8

This release includes OpenShift Logging Bug Fix Release 5.6.8.

3.1.2.1. Bug fixes
  • Before this update, the vector collector terminated unexpectedly when input match label values contained a / character within the ClusterLogForwarder. This update resolves the issue by quoting the match label, enabling the collector to start and collect logs. (LOG-4091)
  • Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the more data available option loaded more log entries only the first time it was clicked. With this update, more entries are loaded with each click. (OU-187)
  • Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the streaming option would only display the streaming logs message without showing the actual logs. With this update, both the message and the log stream are displayed correctly. (OU-189)
  • Before this update, the Loki Operator reset errors in a way that made identifying configuration problems difficult to troubleshoot. With this update, errors persist until the configuration error is resolved. (LOG-4158)
  • Before this update, clusters with more than 8,000 namespaces caused Elasticsearch to reject queries because the list of namespaces was larger than the http.max_header_size setting. With this update, the default value for header size has been increased, resolving the issue. (LOG-4278)
3.1.2.2. CVEs

3.1.3. Logging 5.6.7

This release includes OpenShift Logging Bug Fix Release 5.6.7.

3.1.3.1. Bug fixes
  • Before this update, the LokiStack gateway returned label values for namespaces without applying the access rights of a user. With this update, the LokiStack gateway applies permissions to label value requests, resolving the issue. (LOG-3728)
  • Before this update, the time field of log messages did not parse as structured.time by default in Fluentd when the messages included a timestamp. With this update, parsed log messages will include a structured.time field if the output destination supports it. (LOG-4090)
  • Before this update, the LokiStack route configuration caused queries running longer than 30 seconds to time out. With this update, the LokiStack global and per-tenant queryTimeout settings affect the route timeout settings, resolving the issue. (LOG-4130)
  • Before this update, LokiStack CRs with values defined for tenant limits but not global limits caused the Loki Operator to crash. With this update, the Operator is able to process LokiStack CRs with only tenant limits defined, resolving the issue. (LOG-4199)
  • Before this update, the OpenShift Container Platform web console generated errors after an upgrade due to cached files of the prior version retained by the web browser. With this update, these files are no longer cached, resolving the issue. (LOG-4099)
  • Before this update, Vector generated certificate errors when forwarding to the default Loki instance. With this update, logs can be forwarded without errors to Loki by using Vector. (LOG-4184)
  • Before this update, the Cluster Logging Operator API required a certificate to be provided by a secret when the tls.insecureSkipVerify option was set to true. With this update, the Cluster Logging Operator API no longer requires a certificate to be provided by a secret in such cases. The following configuration has been added to the Operator’s CR:

    tls.verify_certificate = false
    tls.verify_hostname = false

    (LOG-4146)

3.1.3.2. CVEs

3.1.4. Logging 5.6.6

This release includes OpenShift Logging Bug Fix Release 5.6.6.

3.1.4.1. Bug fixes
  • Before this update, an error caused messages to be dropped when the ClusterLogForwarder custom resource was configured to write to a Kafka output topic that matched a key in the payload. With this update, the issue is resolved by prefixing Fluentd’s buffer name with an underscore. (LOG-3458)
  • Before this update, premature closure of watches occurred in Fluentd when inodes were reused and there were multiple entries with the same inode. With this update, the issue of premature closure of watches in the Fluentd position file is resolved. (LOG-3629)
  • Before this update, the detection of JavaScript client multi-line exceptions by Fluentd failed, resulting in printing them as multiple lines. With this update, exceptions are output as a single line, resolving the issue. (LOG-3761)
  • Before this update, direct upgrades from the Red Hat OpenShift Logging Operator version 4.6 to version 5.6 were allowed, resulting in functionality issues. With this update, upgrades must be within two versions, resolving the issue. (LOG-3837)
  • Before this update, metrics were not displayed for Splunk or Google Logging outputs. With this update, the issue is resolved by sending metrics for HTTP endpoints. (LOG-3932)
  • Before this update, when the ClusterLogForwarder custom resource was deleted, collector pods remained running. With this update, collector pods do not run when log forwarding is not enabled. (LOG-4030)
  • Before this update, a time range could not be selected in the OpenShift Container Platform web console by clicking and dragging over the logs histogram. With this update, clicking and dragging can be used to successfully select a time range. (LOG-4101)
  • Before this update, Fluentd hash values for watch files were generated using the paths to log files, resulting in a non-unique hash upon log rotation. With this update, hash values for watch files are created with inode numbers, resolving the issue. (LOG-3633)
  • Before this update, clicking on the Show Resources link in the OpenShift Container Platform web console did not produce any effect. With this update, the issue is resolved by fixing the functionality of the Show Resources link to toggle the display of resources for each log entry. (LOG-4118)
3.1.4.2. CVEs

3.1.5. Logging 5.6.5

This release includes OpenShift Logging Bug Fix Release 5.6.5.

3.1.5.1. Bug fixes
  • Before this update, the template definitions prevented Elasticsearch from indexing some labels and namespace_labels, causing issues with data ingestion. With this update, the fix replaces dots and slashes in labels to ensure proper ingestion, effectively resolving the issue. (LOG-3419)
  • Before this update, if the Logs page of the OpenShift Web Console failed to connect to the LokiStack, a generic error message was displayed, providing no additional context or troubleshooting suggestions. With this update, the error message has been enhanced to include more specific details and recommendations for troubleshooting. (LOG-3750)
  • Before this update, time range formats were not validated, leading to errors selecting a custom date range. With this update, time formats are now validated, enabling users to select a valid range. If an invalid time range format is selected, an error message is displayed to the user. (LOG-3583)
  • Before this update, when searching logs in Loki, even if the length of an expression did not exceed 5120 characters, the query would fail in many cases. With this update, query authorization label matchers have been optimized, resolving the issue. (LOG-3480)
  • Before this update, the Loki Operator failed to produce a memberlist configuration that was sufficient for locating all the components when using a memberlist for private IPs. With this update, the fix ensures that the generated configuration includes the advertised port, allowing for successful lookup of all components. (LOG-4008)
3.1.5.2. CVEs

3.1.6. Logging 5.6.4

This release includes OpenShift Logging Bug Fix Release 5.6.4.

3.1.6.1. Bug fixes
  • Before this update, when LokiStack was deployed as the log store, the logs generated by Loki pods were collected and sent to LokiStack. With this update, the logs generated by Loki are excluded from collection and will not be stored. (LOG-3280)
  • Before this update, when the query editor on the Logs page of the OpenShift Web Console was empty, the drop-down menus did not populate. With this update, if an empty query is attempted, an error message is displayed and the drop-down menus now populate as expected. (LOG-3454)
  • Before this update, when the tls.insecureSkipVerify option was set to true, the Cluster Logging Operator would generate incorrect configuration. As a result, the operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Cluster Logging Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. (LOG-3475)
  • Before this update, when structured parsing was enabled and messages were forwarded to multiple destinations, they were not deep copied. This resulted in some of the received logs including the structured message, while others did not. With this update, the configuration generation has been modified to deep copy messages before JSON parsing. As a result, all received messages now have structured messages included, even when they are forwarded to multiple destinations. (LOG-3640)
  • Before this update, if the collection field contained {} it could result in the Operator crashing. With this update, the Operator will ignore this value, allowing the operator to continue running smoothly without interruption. (LOG-3733)
  • Before this update, the nodeSelector attribute for the Gateway component of LokiStack did not have any effect. With this update, the nodeSelector attribute functions as expected. (LOG-3783)
  • Before this update, the static LokiStack memberlist configuration relied solely on private IP networks. As a result, when the OpenShift Container Platform cluster pod network was configured with a public IP range, the LokiStack pods would crashloop. With this update, the LokiStack administrator now has the option to use the pod network for the memberlist configuration. This resolves the issue and prevents the LokiStack pods from entering a crashloop state when the OpenShift Container Platform cluster pod network is configured with a public IP range. (LOG-3814)
  • Before this update, if the tls.insecureSkipVerify field was set to true, the Cluster Logging Operator would generate an incorrect configuration. As a result, the Operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. (LOG-3838)
  • Before this update, if the Cluster Logging Operator (CLO) was installed without the Elasticsearch Operator, the CLO pod would continuously display an error message related to the deletion of Elasticsearch. With this update, the CLO now performs additional checks before displaying any error messages. As a result, error messages related to Elasticsearch deletion are no longer displayed in the absence of the Elasticsearch Operator. (LOG-3763)
3.1.6.2. CVEs

3.1.7. Logging 5.6.3

This release includes OpenShift Logging Bug Fix Release 5.6.3.

3.1.7.1. Bug fixes
  • Before this update, the operator stored gateway tenant secret information in a config map. With this update, the operator stores this information in a secret. (LOG-3717)
  • Before this update, the Fluentd collector did not capture OAuth login events stored in /var/log/auth-server/audit.log. With this update, Fluentd captures these OAuth login events, resolving the issue. (LOG-3729)
3.1.7.2. CVEs

3.1.8. Logging 5.6.2

This release includes OpenShift Logging Bug Fix Release 5.6.2.

3.1.8.1. Bug fixes
  • Before this update, the collector did not set level fields correctly based on priority for systemd logs. With this update, level fields are set correctly. (LOG-3429)
  • Before this update, the Operator incorrectly generated incompatibility warnings on OpenShift Container Platform 4.12 or later. With this update, the Operator max OpenShift Container Platform version value has been corrected, resolving the issue. (LOG-3584)
  • Before this update, creating a ClusterLogForwarder custom resource (CR) with an output value of default did not generate any errors. With this update, an error warning that this value is invalid is generated appropriately. (LOG-3437)
  • Before this update, when the ClusterLogForwarder custom resource (CR) had multiple pipelines configured with one output set as default, the collector pods restarted. With this update, the logic for output validation has been corrected, resolving the issue. (LOG-3559)
  • Before this update, collector pods restarted after being created. With this update, the deployed collector does not restart on its own. (LOG-3608)
  • Before this update, patch releases removed previous versions of the Operators from the catalog. This made installing the old versions impossible. This update changes bundle configurations so that previous releases of the same minor version stay in the catalog. (LOG-3635)
3.1.8.2. CVEs

3.1.9. Logging 5.6.1

This release includes OpenShift Logging Bug Fix Release 5.6.1.

3.1.9.1. Bug fixes
  • Before this update, the compactor would report TLS certificate errors from communications with the querier when retention was active. With this update, the compactor and querier no longer communicate erroneously over HTTP. (LOG-3494)
  • Before this update, the Loki Operator would not retry setting the status of the LokiStack CR, which caused stale status information. With this update, the Operator retries status information updates on conflict. (LOG-3496)
  • Before this update, the Loki Operator Webhook server caused TLS errors when the kube-apiserver-operator Operator checked the webhook validity. With this update, the Loki Operator Webhook PKI is managed by the Operator Lifecycle Manager (OLM), resolving the issue. (LOG-3510)
  • Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expression and resolves the issue. (LOG-3441), (LOG-3397)
  • Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. (LOG-3463)
  • Before this update, the Red Hat OpenShift Logging Operator was not available for OpenShift Container Platform 4.10 clusters because of an incompatibility between OpenShift Container Platform console and the logging-view-plugin. With this update, the plugin is properly integrated with the OpenShift Container Platform 4.10 admin console. (LOG-3447)
  • Before this update, the reconciliation of the ClusterLogForwarder custom resource incorrectly reported a degraded status for pipelines that reference the default log store. With this update, the pipeline validates properly. (LOG-3477)
3.1.9.2. CVEs

3.1.10. Logging 5.6.0

This release includes OpenShift Logging Release 5.6.

3.1.10.1. Deprecation notice

In logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.

3.1.10.2. Enhancements
  • With this update, Logging is compliant with OpenShift Container Platform cluster-wide cryptographic policies. (LOG-895)
  • With this update, you can declare per-tenant, per-stream, and global retention policies through the LokiStack custom resource, ordered by priority; a hedged sketch follows this list. (LOG-2695)
  • With this update, Splunk is an available output option for log forwarding. (LOG-2913)
  • With this update, Vector replaces Fluentd as the default Collector. (LOG-2222)
  • With this update, the Developer role can access the per-project workload logs they are assigned to within the Log Console Plugin on clusters running OpenShift Container Platform 4.11 and higher. (LOG-3388)
  • With this update, logs from any source contain a field openshift.cluster_id, the unique identifier of the cluster in which the Operator is deployed. You can view the clusterID value with the command below. (LOG-2715)
$ oc get clusterversion/version -o jsonpath='{.spec.clusterID}{"\n"}'
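
The retention enhancement above can be expressed in the LokiStack custom resource roughly as follows. Only the retention-related fields are shown; the instance name, tenant name, stream selector, and day counts are illustrative assumptions, and a complete LokiStack resource also requires size and storage settings.

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki              # assumed instance name
  namespace: openshift-logging
spec:
  limits:
    global:
      retention:
        days: 7                   # global retention policy
        streams:
        - days: 3                 # per-stream policy, ordered by priority
          priority: 1
          selector: '{kubernetes_namespace_name=~"test.+"}'
    tenants:
      application:                # per-tenant policy
        retention:
          days: 14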
3.1.10.3. Known Issues
  • Before this update, Elasticsearch would reject logs if multiple label keys had the same prefix and some keys included the . character. This fixes the limitation of Elasticsearch by replacing . in the label keys with _. As a workaround for this issue, remove the labels that cause errors, or add a namespace to the label. (LOG-3463)
3.1.10.4. Bug fixes
  • Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. (LOG-2993)
  • Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. (LOG-3072)
  • Before this update, the Operator removed any custom outputs defined in the ClusterLogForwarder custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the ClusterLogForwarder custom resource. (LOG-3090)
  • Before this update, the CA key was used as the volume name for mounting the CA into Loki, causing error states when the CA Key included non-conforming characters, such as dots. With this update, the volume name is standardized to an internal string which resolves the issue. (LOG-3331)
  • Before this update, a default value set within the LokiStack Custom Resource Definition caused an inability to create a LokiStack instance without a ReplicationFactor of 1. With this update, the operator sets the actual value for the size used. (LOG-3296)
  • Before this update, Vector parsed the message field when JSON parsing was enabled without also defining structuredTypeKey or structuredTypeName values. With this update, a value is required for either structuredTypeKey or structuredTypeName when writing structured logs to Elasticsearch. (LOG-3195)
  • Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled. (LOG-3161)
  • Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. (LOG-3157)
  • Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h. With this update, Kibana’s OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout, with a default value of 24h. (LOG-3129)
  • Before this update, the Operator's general pattern for reconciling resources was to try to create an object before attempting to get or update it, which led to constant HTTP 409 responses after creation. With this update, Operators first attempt to retrieve an object and only create or update it if it is either missing or not as specified. (LOG-2919)
  • Before this update, the .level and .structure.level fields in Fluentd could contain different values. With this update, the values are the same for each field. (LOG-2819)
  • Before this update, the Operator did not wait for the population of the trusted CA bundle and deployed the collector a second time once the bundle updated. With this update, the Operator waits briefly to see if the bundle has been populated before it continues the collector deployment. (LOG-2789)
  • Before this update, logging telemetry info appeared twice when reviewing metrics. With this update, logging telemetry info displays as expected. (LOG-2315)
  • Before this update, Fluentd pod logs contained a warning message after enabling the JSON parsing addition. With this update, that warning message does not appear. (LOG-1806)
  • Before this update, the must-gather script did not complete because oc needs a folder with write permission to build its cache. With this update, oc has write permissions to a folder, and the must-gather script completes successfully. (LOG-3446)
  • Before this update, the log collector SCC could be superseded by other SCCs on the cluster, rendering the collector unusable. This update sets the priority of the log collector SCC so that it takes precedence over the others. (LOG-3235)
  • Before this update, Vector was missing the field sequence, which was added to fluentd as a way to deal with a lack of actual nanoseconds precision. With this update, the field openshift.sequence has been added to the event logs. (LOG-3106)
3.1.10.5. CVEs

3.2. Getting started with logging 5.6

This overview of the logging deployment process is provided for ease of reference. It is not a substitute for full documentation. For new installations, Vector and LokiStack are recommended.

Note

As of logging version 5.5, you have the option of choosing from Fluentd or Vector collector implementations, and Elasticsearch or LokiStack as log stores. Documentation for logging is in the process of being updated to reflect these underlying component changes.

Note

The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

Prerequisites

  • LogStore preference: Elasticsearch or LokiStack
  • Collector implementation preference: Fluentd or Vector
  • Credentials for your log forwarding outputs
Note

As of logging version 5.4.3 the Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.

  1. Install the Operator for the log store that you want to use.

    • For Elasticsearch, install the OpenShift Elasticsearch Operator.
    • For LokiStack, install the Loki Operator.

      • Create a LokiStack custom resource (CR) instance.
  2. Install the Red Hat OpenShift Logging Operator.
  3. Create a ClusterLogging custom resource (CR) instance.

    1. Select your Collector Implementation.

      Note

      As of logging version 5.6 Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.

  4. Create a ClusterLogForwarder custom resource (CR) instance.
  5. Create a secret for the selected output pipeline. A combined sketch of steps 4 and 5 follows this list.
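
A minimal sketch of steps 4 and 5 together, assuming an external Elasticsearch receiver; the secret name, output name, URL, and credentials are placeholders rather than values taken from this documentation.

  apiVersion: v1
  kind: Secret
  metadata:
    name: es-forward-secret          # placeholder secret name
    namespace: openshift-logging
  stringData:
    username: <username>             # replace with the receiver credentials
    password: <password>
  ---
  apiVersion: logging.openshift.io/v1
  kind: ClusterLogForwarder
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    outputs:
    - name: external-es              # placeholder output name
      type: elasticsearch
      url: https://elasticsearch.example.com:9200
      secret:
        name: es-forward-secret
    pipelines:
    - name: forward-app-logs
      inputRefs:
      - application
      outputRefs:
      - external-es

With a forwarder like this in place, application logs go only to the listed outputs; add default to outputRefs if you also want them in the internal log store.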

3.3. Understanding logging

The logging subsystem consists of these logical components:

  • Collector - Reads container log data from each node and forwards log data to configured outputs.
  • Store - Stores log data for analysis; the default output for the forwarder.
  • Visualization - Graphical interface for searching, querying, and viewing stored logs.

These components are managed by Operators and Custom Resource (CR) YAML files.

The logging subsystem for Red Hat OpenShift collects container logs and node logs. These are categorized into types:

  • application - Container logs generated by non-infrastructure containers.
  • infrastructure - Container logs from namespaces kube-* and openshift-*, and node logs from journald.
  • audit - Logs from auditd, kube-apiserver, openshift-apiserver, and ovn if enabled.

The logging collector is a daemonset that deploys pods to each OpenShift Container Platform node. System and infrastructure logs consist of journald log messages from the operating system, the container runtime, and OpenShift Container Platform.

Container logs are generated by containers running in pods running on the cluster. Each container generates a separate log stream. The collector collects the logs from these sources and forwards them internally or externally as configured in the ClusterLogForwarder custom resource.

3.4. Administering your logging deployment

3.4.1. Deploying Red Hat OpenShift Logging Operator using the web console

You can use the OpenShift Container Platform web console to deploy the Red Hat OpenShift Logging Operator.

Prerequisites

The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

Procedure

To deploy the Red Hat OpenShift Logging Operator using the OpenShift Container Platform web console:

  1. Install the Red Hat OpenShift Logging Operator:

    1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
    2. Type Logging in the Filter by keyword field.
    3. Choose Red Hat OpenShift Logging from the list of available Operators, and click Install.
    4. Select stable or stable-5.y as the Update Channel.

      Note

      The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X where X is the version of logging you have installed.

    5. Ensure that A specific namespace on the cluster is selected under Installation Mode.
    6. Ensure that Operator recommended namespace is openshift-logging under Installed Namespace.
    7. Select Enable Operator recommended cluster monitoring on this Namespace.
    8. Select an option for Update approval.

      • The Automatic option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
      • The Manual option requires a user with appropriate credentials to approve the Operator update.
    9. Select Enable or Disable for the Console plugin.
    10. Click Install.
  2. Verify that the Red Hat OpenShift Logging Operator is installed by switching to the Operators → Installed Operators page.

    1. Ensure that Red Hat OpenShift Logging is listed in the openshift-logging project with a Status of Succeeded.
  3. Create a ClusterLogging instance.

    Note

    The form view of the web console does not include all available options. The YAML view is recommended for completing your setup; an example sketch is provided after this procedure.

    1. In the collection section, select a Collector Implementation.

      Note

      As of logging version 5.6 Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.

    2. In the logStore section, select a type.

      Note

      As of logging version 5.4.3 the Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.

    3. Click Create.
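
The following is a minimal ClusterLogging sketch that you might paste into the YAML view, assuming the Elasticsearch log store and Vector collector selected above; the node count, storage class, size, and retention values are illustrative only and should be adapted to your cluster.

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogging
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    managementState: Managed
    logStore:
      type: elasticsearch
      retentionPolicy:
        application:
          maxAge: 1d                  # illustrative retention
      elasticsearch:
        nodeCount: 3                  # illustrative node count
        storage:
          storageClassName: <storage_class_name>
          size: 200G
        redundancyPolicy: SingleRedundancy
    visualization:
      type: kibana
      kibana:
        replicas: 1
    collection:
      type: vector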

3.4.2. Deploying the Loki Operator using the web console

You can use the OpenShift Container Platform web console to install the Loki Operator.

Prerequisites

  • Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation)

Procedure

To install the Loki Operator using the OpenShift Container Platform web console:

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Type Loki in the Filter by keyword field.

    1. Choose Loki Operator from the list of available Operators, and click Install.
  3. Select stable or stable-5.y as the Update Channel.

    Note

    The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X where X is the version of logging you have installed.

  4. Ensure that All namespaces on the cluster is selected under Installation Mode.
  5. Ensure that openshift-operators-redhat is selected under Installed Namespace.
  6. Select Enable Operator recommended cluster monitoring on this Namespace.

    This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.

  7. Select an option for Update approval.

    • The Automatic option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
    • The Manual option requires a user with appropriate credentials to approve the Operator update.
  8. Click Install.
  9. Verify that the Loki Operator is installed by switching to the Operators → Installed Operators page.

    1. Ensure that Loki Operator is listed with a Status of Succeeded in all the projects.
  10. Create a Secret YAML file that uses the access_key_id and access_key_secret fields to specify your credentials and bucketnames, endpoint, and region to define the object storage location. AWS is used in the following example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: logging-loki-s3
      namespace: openshift-logging
    stringData:
      access_key_id: AKIAIOSFODNN7EXAMPLE
      access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
      bucketnames: s3-bucket-name
      endpoint: https://s3.eu-central-1.amazonaws.com
      region: eu-central-1
  11. Select Create instance under LokiStack on the Details tab. Then select YAML view. Paste in the following template, substituting values where appropriate.

      apiVersion: loki.grafana.com/v1
      kind: LokiStack
      metadata:
        name: logging-loki 1
        namespace: openshift-logging
      spec:
        size: 1x.small 2
        storage:
          schemas:
          - version: v12
            effectiveDate: '2022-06-01'
          secret:
            name: logging-loki-s3 3
            type: s3 4
        storageClassName: <storage_class_name> 5
        tenants:
          mode: openshift-logging
    1 Name should be logging-loki.
    2 Select your Loki deployment size.
    3 Define the secret used for your log storage.
    4 Define the corresponding storage type.
    5 Enter the name of an existing storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command.
    1. Apply the configuration:

      $ oc apply -f logging-loki.yaml
  12. Create or edit a ClusterLogging CR:

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogging
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        managementState: Managed
        logStore:
          type: lokistack
          lokistack:
            name: logging-loki
        collection:
          type: vector
    1. Apply the configuration:

      $ oc apply -f cr-lokistack.yaml

3.4.3. Installing from OperatorHub using the CLI

Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • Install the oc command to your local system.

Procedure

  1. View the list of Operators available to the cluster from OperatorHub:

    $ oc get packagemanifests -n openshift-marketplace

    Example output

    NAME                               CATALOG               AGE
    3scale-operator                    Red Hat Operators     91m
    advanced-cluster-management        Red Hat Operators     91m
    amq7-cert-manager                  Red Hat Operators     91m
    ...
    couchbase-enterprise-certified     Certified Operators   91m
    crunchy-postgres-operator          Certified Operators   91m
    mongodb-enterprise                 Certified Operators   91m
    ...
    etcd                               Community Operators   91m
    jaeger                             Community Operators   91m
    kubefed                            Community Operators   91m
    ...

    Note the catalog for your desired Operator.

  2. Inspect your desired Operator to verify its supported install modes and available channels:

    $ oc describe packagemanifests <operator_name> -n openshift-marketplace
  3. An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.

    The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place.

    However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one.

    Note

    The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.

    1. Create an OperatorGroup object YAML file, for example operatorgroup.yaml:

      Example OperatorGroup object

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: <operatorgroup_name>
        namespace: <namespace>
      spec:
        targetNamespaces:
        - <namespace>

    2. Create the OperatorGroup object:

      $ oc apply -f operatorgroup.yaml
  4. Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:

    Example Subscription object

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: <subscription_name>
      namespace: openshift-operators 1
    spec:
      channel: <channel_name> 2
      name: <operator_name> 3
      source: redhat-operators 4
      sourceNamespace: openshift-marketplace 5
      config:
        env: 6
        - name: ARGS
          value: "-v=10"
        envFrom: 7
        - secretRef:
            name: license-secret
        volumes: 8
        - name: <volume_name>
          configMap:
            name: <configmap_name>
        volumeMounts: 9
        - mountPath: <directory_name>
          name: <volume_name>
        tolerations: 10
        - operator: "Exists"
        resources: 11
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        nodeSelector: 12
          foo: bar

    1 For AllNamespaces install mode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
    2 Name of the channel to subscribe to.
    3 Name of the Operator to subscribe to.
    4 Name of the catalog source that provides the Operator.
    5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
    6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM.
    7 The envFrom parameter defines a list of sources to populate Environment Variables in the container.
    8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM.
    9 The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
    10 The tolerations parameter defines a list of Tolerations for the pod created by OLM.
    11 The resources parameter defines resource constraints for all the containers in the pod created by OLM.
    12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM.
  5. Create the Subscription object:

    $ oc apply -f sub.yaml

    At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
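
As a concrete illustration of this procedure for the logging subsystem, the following sketch creates an Operator group and subscription for the Red Hat OpenShift Logging Operator. It assumes that the openshift-logging namespace already exists, and that the package name is cluster-logging in the redhat-operators catalog on the stable channel; confirm both against the oc get packagemanifests output for your cluster.

  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: cluster-logging
    namespace: openshift-logging
  spec:
    targetNamespaces:
    - openshift-logging
  ---
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: cluster-logging
    namespace: openshift-logging
  spec:
    channel: stable                  # or stable-5.y
    name: cluster-logging            # assumed package name; verify on your cluster
    source: redhat-operators
    sourceNamespace: openshift-marketplace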

3.4.4. Deleting Operators from a cluster using the web console

Cluster administrators can delete installed Operators from a selected namespace by using the web console.

Prerequisites

  • Access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.

Procedure

  1. Navigate to the Operators → Installed Operators page.
  2. Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
  3. On the right side of the Operator Details page, select Uninstall Operator from the Actions list.

    An Uninstall Operator? dialog box is displayed.

  4. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.

    Note

    This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console, and off-cluster resources that continue to run, might need manual cleanup. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.

3.4.5. Deleting Operators from a cluster using the CLI

Cluster administrators can delete installed Operators from a selected namespace by using the CLI.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • oc command installed on workstation.

Procedure

  1. Check the current version of the subscribed Operator (for example, jaeger) in the currentCSV field:

    $ oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV

    Example output

      currentCSV: jaeger-operator.v1.8.2

  2. Delete the subscription (for example, jaeger):

    $ oc delete subscription jaeger -n openshift-operators

    Example output

    subscription.operators.coreos.com "jaeger" deleted

  3. Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step:

    $ oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators

    Example output

    clusterserviceversion.operators.coreos.com "jaeger-operator.v1.8.2" deleted

3.5. Logging References

3.5.1. Collector features

Output | Protocol | Tested with | Fluentd | Vector

Cloudwatch

REST over HTTP(S)

 

Elasticsearch v6

 

v6.8.1

Elasticsearch v7

 

v7.12.2, 7.17.7

Elasticsearch v8

 

v8.4.3

 

Fluent Forward

Fluentd forward v1

Fluentd 1.14.6, Logstash 7.10.1

 

Google Cloud Logging

   

HTTP

HTTP 1.1

Fluentd 1.14.6, Vector 0.21

  

Kafka

Kafka 0.11

Kafka 2.4.1, 2.7.0, 3.3.1

Loki

REST over HTTP(S)

Loki 2.3.0, 2.7

Splunk

HEC

v8.2.9, 9.0.0

 

Syslog

RFC3164, RFC5424

Rsyslog 8.37.0-9.el7

 
Table 3.1. Log Sources
Feature | Fluentd | Vector

App container logs

App-specific routing

App-specific routing by namespace

Infra container logs

Infra journal logs

Kube API audit logs

OpenShift API audit logs

Open Virtual Network (OVN) audit logs

Table 3.2. Authorization and Authentication
Feature | Fluentd | Vector

Elasticsearch certificates

Elasticsearch username / password

Cloudwatch keys

Cloudwatch STS

Kafka certificates

Kafka username / password

Kafka SASL

Loki bearer token

Table 3.3. Normalizations and Transformations
Feature | Fluentd | Vector

Viaq data model - app

Viaq data model - infra

Viaq data model - infra(journal)

Viaq data model - Linux audit

Viaq data model - kube-apiserver audit

Viaq data model - OpenShift API audit

Viaq data model - OVN

Loglevel Normalization

JSON parsing

Structured Index

Multiline error detection

 

Multicontainer / split indices

Flatten labels

CLF static labels

Table 3.4. Tuning
Feature | Fluentd | Vector

Fluentd readlinelimit

 

Fluentd buffer

 

- chunklimitsize

 

- totallimitsize

 

- overflowaction

 

- flushthreadcount

 

- flushmode

 

- flushinterval

 

- retrywait

 

- retrytype

 

- retrymaxinterval

 

- retrytimeout

 
Table 3.5. Visibility
Feature | Fluentd | Vector

Metrics

Dashboard

Alerts

 
Table 3.6. Miscellaneous
Feature | Fluentd | Vector

Global proxy support

x86 support

ARM support

IBM Power support

IBM Z support

IPv6 support

Log event buffering

 

Disconnected Cluster

Additional resources

3.5.2. Logging 5.6 API reference

3.5.2.1. ClusterLogForwarder

ClusterLogForwarder is an API for configuring the forwarding of logs.

You configure forwarding by specifying a list of pipelines, which forward from a set of named inputs to a set of named outputs.

There are built-in input names for common log categories, and you can define custom inputs to do additional filtering.

There is a built-in output name for the default openshift log store, but you can define your own outputs with a URL and other connection information to forward logs to other stores or processors, inside or outside the cluster.

For more details see the documentation on the API fields.
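
For orientation, here is a minimal sketch that forwards the built-in application and infrastructure inputs to the built-in default output; the pipeline name is arbitrary.

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogForwarder
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    pipelines:
    - name: all-to-default           # arbitrary pipeline name
      inputRefs:
      - application
      - infrastructure
      outputRefs:
      - default                      # built-in output for the default log store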

Property | Type | Description

spec

object

Specification of the desired behavior of ClusterLogForwarder

status

object

Status of the ClusterLogForwarder

3.5.2.1.1. .spec
3.5.2.1.1.1. Description

ClusterLogForwarderSpec defines how logs should be forwarded to remote targets.

3.5.2.1.1.1.1. Type
  • object
Property | Type | Description

inputs

array

(optional) Inputs are named filters for log messages to be forwarded.

outputDefaults

object

(optional) DEPRECATED OutputDefaults specify forwarder config explicitly for the default store.

outputs

array

(optional) Outputs are named destinations for log messages.

pipelines

array

Pipelines forward the messages selected by a set of inputs to a set of outputs.

3.5.2.1.2. .spec.inputs[]
3.5.2.1.2.1. Description

InputSpec defines a selector of log messages.

3.5.2.1.2.1.1. Type
  • array
Property | Type | Description

application

object

(optional) Application, if present, enables named set of application logs that

name

string

Name used to refer to the input of a pipeline.

3.5.2.1.3. .spec.inputs[].application
3.5.2.1.3.1. Description

Application log selector. All conditions in the selector must be satisfied (logical AND) to select logs.

3.5.2.1.3.1.1. Type
  • object
Property | Type | Description

namespaces

array

(optional) Namespaces from which to collect application logs.

selector

object

(optional) Selector for logs from pods with matching labels.
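
A sketch of a custom input that combines both selectors within a ClusterLogForwarder spec; the input name, namespace, and label are placeholders.

  spec:
    inputs:
    - name: my-app-logs              # placeholder input name
      application:
        namespaces:
        - my-project                 # placeholder namespace
        selector:
          matchLabels:
            app: my-app              # placeholder label
    pipelines:
    - name: my-app-pipeline
      inputRefs:
      - my-app-logs
      outputRefs:
      - default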

3.5.2.1.4. .spec.inputs[].application.namespaces[]
3.5.2.1.4.1. Description
3.5.2.1.4.1.1. Type
  • array
3.5.2.1.5. .spec.inputs[].application.selector
3.5.2.1.5.1. Description

A label selector is a label query over a set of resources.

3.5.2.1.5.1.1. Type
  • object
Property | Type | Description

matchLabels

object

(optional) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels

3.5.2.1.6. .spec.inputs[].application.selector.matchLabels
3.5.2.1.6.1. Description
3.5.2.1.6.1.1. Type
  • object
3.5.2.1.7. .spec.outputDefaults
3.5.2.1.7.1. Description
3.5.2.1.7.1.1. Type
  • object
Property | Type | Description

elasticsearch

object

(optional) Elasticsearch OutputSpec default values

3.5.2.1.8. .spec.outputDefaults.elasticsearch
3.5.2.1.8.1. Description

ElasticsearchStructuredSpec is spec related to structured log changes to determine the elasticsearch index

3.5.2.1.8.1.1. Type
  • object
Property | Type | Description

enableStructuredContainerLogs

bool

(optional) EnableStructuredContainerLogs enables multi-container structured logs to allow

structuredTypeKey

string

(optional) StructuredTypeKey specifies the metadata key to be used as name of elasticsearch index

structuredTypeName

string

(optional) StructuredTypeName specifies the name of elasticsearch schema
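
A sketch of outputDefaults for structured logs written to the default Elasticsearch store; the metadata key and fallback name are placeholders.

  spec:
    outputDefaults:
      elasticsearch:
        structuredTypeKey: kubernetes.labels.logFormat   # placeholder metadata key
        structuredTypeName: nologformat                  # placeholder fallback name
        enableStructuredContainerLogs: true
    pipelines:
    - name: structured-to-default
      inputRefs:
      - application
      outputRefs:
      - default
      parse: json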

3.5.2.1.9. .spec.outputs[]
3.5.2.1.9.1. Description

Output defines a destination for log messages.

3.5.2.1.9.1.1. Type
  • array
Property | Type | Description

syslog

object

(optional)

fluentdForward

object

(optional)

elasticsearch

object

(optional)

kafka

object

(optional)

cloudwatch

object

(optional)

loki

object

(optional)

googleCloudLogging

object

(optional)

splunk

object

(optional)

name

string

Name used to refer to the output from a pipeline.

secret

object

(optional) Secret for authentication.

tls

object

TLS contains settings for controlling options on TLS client connections.

type

string

Type of output plugin.

url

string

(optional) URL to send log records to.
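
A sketch of a single entry under spec.outputs, assuming a Loki receiver reached over TLS without certificate validation; the output name, URL, and secret name are placeholders.

  spec:
    outputs:
    - name: dev-loki                 # placeholder output name
      type: loki
      url: https://loki.dev.example.com:3100
      tls:
        insecureSkipVerify: true     # ignore certificate errors; suitable only for test environments
      secret:
        name: dev-loki-secret        # placeholder secret name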

3.5.2.1.10. .spec.outputs[].secret
3.5.2.1.10.1. Description

OutputSecretSpec is a secret reference containing name only, no namespace.

3.5.2.1.10.1.1. Type
  • object
Property | Type | Description

name

string

Name of a secret in the namespace configured for log forwarder secrets.

3.5.2.1.11. .spec.outputs[].tls
3.5.2.1.11.1. Description

OutputTLSSpec contains options for TLS connections that are agnostic to the output type.

3.5.2.1.11.1.1. Type
  • object
Property | Type | Description

insecureSkipVerify

bool

If InsecureSkipVerify is true, then the TLS client will be configured to ignore errors with certificates.

3.5.2.1.12. .spec.pipelines[]
3.5.2.1.12.1. Description

PipelinesSpec links a set of inputs to a set of outputs.

3.5.2.1.12.1.1. Type
  • array
Property | Type | Description

detectMultilineErrors

bool

(optional) DetectMultilineErrors enables multiline error detection of container logs

inputRefs

array

InputRefs lists the names (input.name) of inputs to this pipeline.

labels

object

(optional) Labels applied to log records passing through this pipeline.

name

string

(optional) Name is optional, but must be unique in the pipelines list if provided.

outputRefs

array

OutputRefs lists the names (output.name) of outputs from this pipeline.

parse

string

(optional) Parse enables parsing of log entries into structured logs
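
A sketch of a pipeline entry that uses the optional fields from this table; the label key and value are placeholders.

  spec:
    pipelines:
    - name: app-structured           # optional, but must be unique if set
      inputRefs:
      - application
      outputRefs:
      - default
      parse: json                    # parse log entries into structured logs
      detectMultilineErrors: true    # enable multiline error detection
      labels:
        team: backend                # placeholder label applied to records in this pipeline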

3.5.2.1.13. .spec.pipelines[].inputRefs[]
3.5.2.1.13.1. Description
3.5.2.1.13.1.1. Type
  • array
3.5.2.1.14. .spec.pipelines[].labels
3.5.2.1.14.1. Description
3.5.2.1.14.1.1. Type
  • object
3.5.2.1.15. .spec.pipelines[].outputRefs[]
3.5.2.1.15.1. Description
3.5.2.1.15.1.1. Type
  • array
3.5.2.1.16. .status
3.5.2.1.16.1. Description

ClusterLogForwarderStatus defines the observed state of ClusterLogForwarder

3.5.2.1.16.1.1. Type
  • object
Property | Type | Description

conditions

object

Conditions of the log forwarder.

inputs

Conditions

Inputs maps input name to condition of the input.

outputs

Conditions

Outputs maps output name to condition of the output.

pipelines

Conditions

Pipelines maps pipeline name to condition of the pipeline.

3.5.2.1.17. .status.conditions
3.5.2.1.17.1. Description
3.5.2.1.17.1.1. Type
  • object
3.5.2.1.18. .status.inputs
3.5.2.1.18.1. Description
3.5.2.1.18.1.1. Type
  • Conditions
3.5.2.1.19. .status.outputs
3.5.2.1.19.1. Description
3.5.2.1.19.1.1. Type
  • Conditions
3.5.2.1.20. .status.pipelines
3.5.2.1.20.1. Description
3.5.2.1.20.1.1. Type
  • Conditions

ClusterLogging

A Red Hat OpenShift Logging instance. ClusterLogging is the Schema for the clusterloggings API.
Property | Type | Description

spec

object

Specification of the desired behavior of ClusterLogging

status

object

Status defines the observed state of ClusterLogging

3.5.2.1.21. .spec
3.5.2.1.21.1. Description

ClusterLoggingSpec defines the desired state of ClusterLogging

3.5.2.1.21.1.1. Type
  • object
Property | Type | Description

collection

object

Specification of the Collection component for the cluster

curation

object

(DEPRECATED) (optional) Deprecated. Specification of the Curation component for the cluster

forwarder

object

(DEPRECATED) (optional) Deprecated. Specification for Forwarder component for the cluster

logStore

object

(optional) Specification of the Log Storage component for the cluster

managementState

string

(optional) Indicator if the resource is 'Managed' or 'Unmanaged' by the operator

visualization

object

(optional) Specification of the Visualization component for the cluster

3.5.2.1.22. .spec.collection
3.5.2.1.22.1. Description

This is the struct that will contain information pertinent to Log and event collection

3.5.2.1.22.1.1. Type
  • object
Property | Type | Description

resources

object

(optional) The resource requirements for the collector

nodeSelector

object

(optional) Define which Nodes the Pods are scheduled on.

tolerations

array

(optional) Define the tolerations the Pods will accept

fluentd

object

(optional) Fluentd represents the configuration for forwarders of type fluentd.

logs

object

(DEPRECATED) (optional) Deprecated. Specification of Log Collection for the cluster

type

string

(optional) The type of Log Collection to configure

3.5.2.1.23. .spec.collection.fluentd
3.5.2.1.23.1. Description

FluentdForwarderSpec represents the configuration for forwarders of type fluentd.

3.5.2.1.23.1.1. Type
  • object
Property | Type | Description

buffer

object

 

inFile

object

 
3.5.2.1.24. .spec.collection.fluentd.buffer
3.5.2.1.24.1. Description

FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing.

For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters

For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters

For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters

3.5.2.1.24.1.1. Type
  • object
Property | Type | Description

chunkLimitSize

string

(optional) ChunkLimitSize represents the maximum size of each chunk. Events will be

flushInterval

string

(optional) FlushInterval represents the time duration to wait between two consecutive flush

flushMode

string

(optional) FlushMode represents the mode of the flushing thread to write chunks. The mode

flushThreadCount

int

(optional) FlushThreadCount represents the number of threads used by the fluentd buffer

overflowAction

string

(optional) OverflowAction represents the action for the fluentd buffer plugin to

retryMaxInterval

string

(optional) RetryMaxInterval represents the maximum time interval for exponential backoff

retryTimeout

string

(optional) RetryTimeout represents the maximum time interval to attempt retries before giving up

retryType

string

(optional) RetryType represents the type of retrying flush operations. Flush operations can

retryWait

string

(optional) RetryWait represents the time duration between two consecutive retries to flush

totalLimitSize

string

(optional) TotalLimitSize represents the threshold of node space allowed per fluentd

3.5.2.1.25. .spec.collection.fluentd.inFile
3.5.2.1.25.1. Description

FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs.

For general parameters refer to: https://docs.fluentd.org/input/tail#parameters

3.5.2.1.25.1.1. Type
  • object
Property | Type | Description

readLinesLimit

int

(optional) ReadLinesLimit represents the number of lines to read with each I/O operation
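
A sketch showing where these tuning fields sit in a ClusterLogging resource, assuming the Fluentd collector; every value is illustrative rather than a recommendation.

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogging
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    collection:
      type: fluentd
      fluentd:
        buffer:
          chunkLimitSize: 8m         # illustrative values throughout
          totalLimitSize: 8G
          flushInterval: 5s
          flushMode: interval
          flushThreadCount: 2
          overflowAction: throw_exception
          retryWait: 1s
          retryType: exponential_backoff
          retryMaxInterval: 300s
        inFile:
          readLinesLimit: 500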

3.5.2.1.26. .spec.collection.logs
3.5.2.1.26.1. Description
3.5.2.1.26.1.1. Type
  • object
Property | Type | Description

fluentd

object

Specification of the Fluentd Log Collection component

type

string

The type of Log Collection to configure

3.5.2.1.27. .spec.collection.logs.fluentd
3.5.2.1.27.1. Description

CollectorSpec is spec to define scheduling and resources for a collector

3.5.2.1.27.1.1. Type
  • object
Property | Type | Description

nodeSelector

object

(optional) Define which Nodes the Pods are scheduled on.

resources

object

(optional) The resource requirements for the collector

tolerations

array

(optional) Define the tolerations the Pods will accept

3.5.2.1.28. .spec.collection.logs.fluentd.nodeSelector
3.5.2.1.28.1. Description
3.5.2.1.28.1.1. Type
  • object
3.5.2.1.29. .spec.collection.logs.fluentd.resources
3.5.2.1.29.1. Description
3.5.2.1.29.1.1. Type
  • object
Property | Type | Description

limits

object

(optional) Limits describes the maximum amount of compute resources allowed.

requests

object

(optional) Requests describes the minimum amount of compute resources required.

3.5.2.1.30. .spec.collection.logs.fluentd.resources.limits
3.5.2.1.30.1. Description
3.5.2.1.30.1.1. Type
  • object
3.5.2.1.31. .spec.collection.logs.fluentd.resources.requests
3.5.2.1.31.1. Description
3.5.2.1.31.1.1. Type
  • object
3.5.2.1.32. .spec.collection.logs.fluentd.tolerations[]
3.5.2.1.32.1. Description
3.5.2.1.32.1.1. Type
  • array
Property | Type | Description

effect

string

(optional) Effect indicates the taint effect to match. Empty means match all taint effects.

key

string

(optional) Key is the taint key that the toleration applies to. Empty means match all taint keys.

operator

string

(optional) Operator represents a key's relationship to the value.

tolerationSeconds

int

(optional) TolerationSeconds represents the period of time the toleration (which must be

value

string

(optional) Value is the taint value the toleration matches to.

3.5.2.1.33. .spec.collection.logs.fluentd.tolerations[].tolerationSeconds
3.5.2.1.33.1. Description
3.5.2.1.33.1.1. Type
  • int
3.5.2.1.34. .spec.curation
3.5.2.1.34.1. Description

This is the struct that will contain information pertinent to Log curation (Curator)

3.5.2.1.34.1.1. Type
  • object
Property | Type | Description

curator

object

The specification of curation to configure

type

string

The kind of curation to configure

3.5.2.1.35. .spec.curation.curator
3.5.2.1.35.1. Description
3.5.2.1.35.1.1. Type
  • object
Property | Type | Description

nodeSelector

object

Define which Nodes the Pods are scheduled on.

resources

object

(optional) The resource requirements for Curator

schedule

string

The cron schedule that the Curator job is run. Defaults to "30 3 * * *"

tolerations

array

 
3.5.2.1.36. .spec.curation.curator.nodeSelector
3.5.2.1.36.1. Description
3.5.2.1.36.1.1. Type
  • object
3.5.2.1.37. .spec.curation.curator.resources
3.5.2.1.37.1. Description
3.5.2.1.37.1.1. Type
  • object
Property | Type | Description

limits

object

(optional) Limits describes the maximum amount of compute resources allowed.

requests

object

(optional) Requests describes the minimum amount of compute resources required.

3.5.2.1.38. .spec.curation.curator.resources.limits
3.5.2.1.38.1. Description
3.5.2.1.38.1.1. Type
  • object
3.5.2.1.39. .spec.curation.curator.resources.requests
3.5.2.1.39.1. Description
3.5.2.1.39.1.1. Type
  • object
3.5.2.1.40. .spec.curation.curator.tolerations[]
3.5.2.1.40.1. Description
3.5.2.1.40.1.1. Type
  • array
Property | Type | Description

effect

string

(optional) Effect indicates the taint effect to match. Empty means match all taint effects.

key

string

(optional) Key is the taint key that the toleration applies to. Empty means match all taint keys.

operator

string

(optional) Operator represents a key's relationship to the value.

tolerationSeconds

int

(optional) TolerationSeconds represents the period of time the toleration (which must be

value

string

(optional) Value is the taint value the toleration matches to.

3.5.2.1.41. .spec.curation.curator.tolerations[].tolerationSeconds
3.5.2.1.41.1. Description
3.5.2.1.41.1.1. Type
  • int
3.5.2.1.42. .spec.forwarder
3.5.2.1.42.1. Description

ForwarderSpec contains global tuning parameters for specific forwarder implementations. This field is not required for general use, it allows performance tuning by users familiar with the underlying forwarder technology. Currently supported: fluentd.

3.5.2.1.42.1.1. Type
  • object
Property | Type | Description

fluentd

object

 
3.5.2.1.43. .spec.forwarder.fluentd
3.5.2.1.43.1. Description

FluentdForwarderSpec represents the configuration for forwarders of type fluentd.

3.5.2.1.43.1.1. Type
  • object
Property | Type | Description

buffer

object

 

inFile

object

 
3.5.2.1.44. .spec.forwarder.fluentd.buffer
3.5.2.1.44.1. Description

FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing.

For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters

For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters

For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters

3.5.2.1.44.1.1. Type
  • object
Property | Type | Description

chunkLimitSize

string

(optional) ChunkLimitSize represents the maximum size of each chunk. Events will be

flushInterval

string

(optional) FlushInterval represents the time duration to wait between two consecutive flush

flushMode

string

(optional) FlushMode represents the mode of the flushing thread to write chunks. The mode

flushThreadCount

int

(optional) FlushThreadCount represents the number of threads used by the fluentd buffer

overflowAction

string

(optional) OverflowAction represents the action for the fluentd buffer plugin to

retryMaxInterval

string

(optional) RetryMaxInterval represents the maximum time interval for exponential backoff

retryTimeout

string

(optional) RetryTimeout represents the maximum time interval to attempt retries before giving up

retryType

string

(optional) RetryType represents the type of retrying flush operations. Flush operations can

retryWait

string

(optional) RetryWait represents the time duration between two consecutive retries to flush

totalLimitSize

string

(optional) TotalLimitSize represents the threshold of node space allowed per fluentd

3.5.2.1.45. .spec.forwarder.fluentd.inFile
3.5.2.1.45.1. Description

FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs.

For general parameters refer to: https://docs.fluentd.org/input/tail#parameters

3.5.2.1.45.1.1. Type
  • object
Property | Type | Description

readLinesLimit

int

(optional) ReadLinesLimit represents the number of lines to read with each I/O operation

3.5.2.1.46. .spec.logStore
3.5.2.1.46.1. Description

The LogStoreSpec contains information about how logs are stored.

3.5.2.1.46.1.1. Type
  • object
Property | Type | Description

elasticsearch

object

Specification of the Elasticsearch Log Store component

lokistack

object

LokiStack contains information about which LokiStack to use for log storage if Type is set to LogStoreTypeLokiStack.

retentionPolicy

object

(optional) Retention policy defines the maximum age for an index after which it should be deleted

type

string

The Type of Log Storage to configure. The operator currently supports either using ElasticSearch

3.5.2.1.47. .spec.logStore.elasticsearch
3.5.2.1.47.1. Description
3.5.2.1.47.1.1. Type
  • object
Property | Type | Description

nodeCount

int

Number of nodes to deploy for Elasticsearch

nodeSelector

object

Define which Nodes the Pods are scheduled on.

proxy

object

Specification of the Elasticsearch Proxy component

redundancyPolicy

string

(optional)

resources

object

(optional) The resource requirements for Elasticsearch

storage

object

(optional) The storage specification for Elasticsearch data nodes

tolerations

array

 
3.5.2.1.48. .spec.logStore.elasticsearch.nodeSelector
3.5.2.1.48.1. Description
3.5.2.1.48.1.1. Type
  • object
3.5.2.1.49. .spec.logStore.elasticsearch.proxy
3.5.2.1.49.1. Description
3.5.2.1.49.1.1. Type
  • object
Property | Type | Description

resources

object

 
3.5.2.1.50. .spec.logStore.elasticsearch.proxy.resources
3.5.2.1.50.1. Description
3.5.2.1.50.1.1. Type
  • object
Property | Type | Description

limits

object

(optional) Limits describes the maximum amount of compute resources allowed.

requests

object

(optional) Requests describes the minimum amount of compute resources required.

3.5.2.1.51. .spec.logStore.elasticsearch.proxy.resources.limits
3.5.2.1.51.1. Description
3.5.2.1.51.1.1. Type
  • object
3.5.2.1.52. .spec.logStore.elasticsearch.proxy.resources.requests
3.5.2.1.52.1. Description
3.5.2.1.52.1.1. Type
  • object
3.5.2.1.53. .spec.logStore.elasticsearch.resources
3.5.2.1.53.1. Description
3.5.2.1.53.1.1. Type
  • object
Property | Type | Description

limits

object

(optional) Limits describes the maximum amount of compute resources allowed.

requests

object

(optional) Requests describes the minimum amount of compute resources required.

3.5.2.1.54. .spec.logStore.elasticsearch.resources.limits
3.5.2.1.54.1. Description
3.5.2.1.54.1.1. Type
  • object
3.5.2.1.55. .spec.logStore.elasticsearch.resources.requests
3.5.2.1.55.1. Description
3.5.2.1.55.1.1. Type
  • object
3.5.2.1.56. .spec.logStore.elasticsearch.storage
3.5.2.1.56.1. Description
3.5.2.1.56.1.1. Type
  • object
Property | Type | Description

size

object

The max storage capacity for the node to provision.

storageClassName

string

(optional) The name of the storage class to use with creating the node's PVC.

3.5.2.1.57. .spec.logStore.elasticsearch.storage.size
3.5.2.1.57.1. Description
3.5.2.1.57.1.1. Type
  • object
Property | Type | Description

Format

string

Change Format at will. See the comment for Canonicalize for

d

object

d is the quantity in inf.Dec form if d.Dec != nil

i

int

i is the quantity in int64 scaled form, if d.Dec == nil

s

string

s is the generated value of this quantity to avoid recalculation

3.5.2.1.58. .spec.logStore.elasticsearch.storage.size.d
3.5.2.1.58.1. Description
3.5.2.1.58.1.1. Type
  • object
Property | Type | Description

Dec

object

 
3.5.2.1.59. .spec.logStore.elasticsearch.storage.size.d.Dec
3.5.2.1.59.1. Description
3.5.2.1.59.1.1. Type
  • object
Property | Type | Description

scale

int

 

unscaled

object

 
3.5.2.1.60. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled
3.5.2.1.60.1. Description
3.5.2.1.60.1.1. Type
  • object
Property | Type | Description

abs

Word

sign

neg

bool

 
3.5.2.1.61. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled.abs
3.5.2.1.61.1. Description
3.5.2.1.61.1.1. Type
  • Word
3.5.2.1.62. .spec.logStore.elasticsearch.storage.size.i
3.5.2.1.62.1. Description
3.5.2.1.62.1.1. Type
  • int
Property | Type | Description

scale

int

 

value

int

 
3.5.2.1.63. .spec.logStore.elasticsearch.tolerations[]
3.5.2.1.63.1. Description
3.5.2.1.63.1.1. Type
  • array
Property | Type | Description

effect

string

(optional) Effect indicates the taint effect to match. Empty means match all taint effects.

key

string

(optional) Key is the taint key that the toleration applies to. Empty means match all taint keys.

operator

string

(optional) Operator represents a key's relationship to the value.

tolerationSeconds

int

(optional) TolerationSeconds represents the period of time the toleration (which must be

value

string

(optional) Value is the taint value the toleration matches to.

3.5.2.1.64. .spec.logStore.elasticsearch.tolerations[].tolerationSeconds
3.5.2.1.64.1. Description
3.5.2.1.64.1.1. Type
  • int
3.5.2.1.65. .spec.logStore.lokistack
3.5.2.1.65.1. Description

LokiStackStoreSpec is used to set up cluster-logging to use a LokiStack as logging storage. It points to an existing LokiStack in the same namespace.

3.5.2.1.65.1.1. Type
  • object
Property | Type | Description

name

string

Name of the LokiStack resource.

3.5.2.1.66. .spec.logStore.retentionPolicy
3.5.2.1.66.1. Description
3.5.2.1.66.1.1. Type
  • object
Property | Type | Description

application

object

 

audit

object

 

infra

object

 
3.5.2.1.67. .spec.logStore.retentionPolicy.application
3.5.2.1.67.1. Description
3.5.2.1.67.1.1. Type
  • object
Property | Type | Description

diskThresholdPercent

int

(optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75)

maxAge

string

(optional)

namespaceSpec

array

(optional) The per namespace specification to delete documents older than a given minimum age

pruneNamespacesInterval

string

(optional) How often to run a new prune-namespaces job

3.5.2.1.68. .spec.logStore.retentionPolicy.application.namespaceSpec[]
3.5.2.1.68.1. Description
3.5.2.1.68.1.1. Type
  • array
Property | Type | Description

minAge

string

(optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d)

namespace

string

Target Namespace to delete logs older than MinAge (defaults to 7d)
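
A sketch of how these retention fields combine under spec.logStore in a ClusterLogging resource; the ages, interval, and namespace are illustrative placeholders.

  spec:
    logStore:
      type: elasticsearch
      retentionPolicy:
        application:
          maxAge: 7d
          pruneNamespacesInterval: 15m
          namespaceSpec:
          - namespace: noisy-project # placeholder namespace
            minAge: 1d
        infra:
          maxAge: 7d
        audit:
          maxAge: 7d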

3.5.2.1.69. .spec.logStore.retentionPolicy.audit
3.5.2.1.69.1. Description
3.5.2.1.69.1.1. Type
  • object
Property | Type | Description

diskThresholdPercent

int

(optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75)

maxAge

string

(optional)

namespaceSpec

array

(optional) The per namespace specification to delete documents older than a given minimum age

pruneNamespacesInterval

string

(optional) How often to run a new prune-namespaces job

3.5.2.1.70. .spec.logStore.retentionPolicy.audit.namespaceSpec[]
3.5.2.1.70.1. Description
3.5.2.1.70.1.1. Type
  • array
Property | Type | Description

minAge

string

(optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d)

namespace

string

Target Namespace to delete logs older than MinAge (defaults to 7d)

3.5.2.1.71. .spec.logStore.retentionPolicy.infra
3.5.2.1.71.1. Description
3.5.2.1.71.1.1. Type
  • object
Property | Type | Description

diskThresholdPercent

int

(optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75)

maxAge

string

(optional)

namespaceSpec

array

(optional) The per namespace specification to delete documents older than a given minimum age

pruneNamespacesInterval

string

(optional) How often to run a new prune-namespaces job

3.5.2.1.72. .spec.logStore.retentionPolicy.infra.namespaceSpec[]
3.5.2.1.72.1. Description
3.5.2.1.72.1.1. Type
  • array
Property | Type | Description

minAge

string

(optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d)

namespace

string

Target Namespace to delete logs older than MinAge (defaults to 7d)

3.5.2.1.73. .spec.visualization
3.5.2.1.73.1. Description

This is the struct that will contain information pertinent to Log visualization (Kibana)

3.5.2.1.73.1.1. Type
  • object
Property | Type | Description

kibana

object

Specification of the Kibana Visualization component

type

string

The type of Visualization to configure
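
A sketch of the visualization block in a ClusterLogging resource, assuming the Kibana component; the replica count and resource values are illustrative.

  spec:
    visualization:
      type: kibana
      kibana:
        replicas: 1
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 1Gi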

3.5.2.1.74. .spec.visualization.kibana
3.5.2.1.74.1. Description
3.5.2.1.74.1.1. Type
  • object
Property | Type | Description

nodeSelector

object

Define which Nodes the Pods are scheduled on.

proxy

object

Specification of the Kibana Proxy component

replicas

int

Number of instances to deploy for a Kibana deployment

resources

object

(optional) The resource requirements for Kibana

tolerations

array

 
3.5.2.1.75. .spec.visualization.kibana.nodeSelector
3.5.2.1.75.1. Description
3.5.2.1.75.1.1. Type
  • object
3.5.2.1.76. .spec.visualization.kibana.proxy
3.5.2.1.76.1. Description
3.5.2.1.76.1.1. Type
  • object
Property | Type | Description

resources

object

 
3.5.2.1.77. .spec.visualization.kibana.proxy.resources
3.5.2.1.77.1. Description
3.5.2.1.77.1.1. Type
  • object
Property | Type | Description

limits

object

(optional) Limits describes the maximum amount of compute resources allowed.

requests

object

(optional) Requests describes the minimum amount of compute resources required.

3.5.2.1.78. .spec.visualization.kibana.proxy.resources.limits
3.5.2.1.78.1. Description
3.5.2.1.78.1.1. Type
  • object
3.5.2.1.79. .spec.visualization.kibana.proxy.resources.requests
3.5.2.1.79.1. Description
3.5.2.1.79.1.1. Type
  • object
3.5.2.1.80. .spec.visualization.kibana.replicas
3.5.2.1.80.1. Description
3.5.2.1.80.1.1. Type
  • int
3.5.2.1.81. .spec.visualization.kibana.resources
3.5.2.1.81.1. Description
3.5.2.1.81.1.1. Type
  • object
Property | Type | Description

limits

object

(optional) Limits describes the maximum amount of compute resources allowed.

requests

object

(optional) Requests describes the minimum amount of compute resources required.

3.5.2.1.82. .spec.visualization.kibana.resources.limits
3.5.2.1.82.1. Description
3.5.2.1.82.1.1. Type
  • object
3.5.2.1.83. .spec.visualization.kibana.resources.requests
3.5.2.1.83.1. Description
3.5.2.1.83.1.1. Type
  • object
3.5.2.1.84. .spec.visualization.kibana.tolerations[]
3.5.2.1.84.1. Description
3.5.2.1.84.1.1. Type
  • array
Property | Type | Description

effect

string

(optional) Effect indicates the taint effect to match. Empty means match all taint effects.

key

string

(optional) Key is the taint key that the toleration applies to. Empty means match all taint keys.

operator

string

(optional) Operator represents a key's relationship to the value.

tolerationSeconds

int

(optional) TolerationSeconds represents the period of time the toleration (which must be

value

string

(optional) Value is the taint value the toleration matches to.

3.5.2.1.85. .spec.visualization.kibana.tolerations[].tolerationSeconds
3.5.2.1.85.1. Description
3.5.2.1.85.1.1. Type
  • int
3.5.2.1.86. .status
3.5.2.1.86.1. Description

ClusterLoggingStatus defines the observed state of ClusterLogging

3.5.2.1.86.1.1. Type
  • object
Property | Type | Description

collection

object

(optional)

conditions

object

(optional)

curation

object

(optional)

logStore

object

(optional)

visualization

object

(optional)

3.5.2.1.87. .status.collection
3.5.2.1.87.1. Description
3.5.2.1.87.1.1. Type
  • object
Property | Type | Description

logs

object

(optional)

3.5.2.1.88. .status.collection.logs
3.5.2.1.88.1. Description
3.5.2.1.88.1.1. Type
  • object
Property | Type | Description

fluentdStatus

object

(optional)

3.5.2.1.89. .status.collection.logs.fluentdStatus
3.5.2.1.89.1. Description
3.5.2.1.89.1.1. Type
  • object
Property | Type | Description

clusterCondition

object

(optional)

daemonSet

string

(optional)

nodes

object

(optional)

pods

string

(optional)

3.5.2.1.90. .status.collection.logs.fluentdStatus.clusterCondition
3.5.2.1.90.1. Description

operator-sdk generate crds does not allow map-of-slice, must use a named type.

3.5.2.1.90.1.1. Type
  • object
3.5.2.1.91. .status.collection.logs.fluentdStatus.nodes
3.5.2.1.91.1. Description
3.5.2.1.91.1.1. Type
  • object
3.5.2.1.92. .status.conditions
3.5.2.1.92.1. Description
3.5.2.1.92.1.1. Type
  • object
3.5.2.1.93. .status.curation
3.5.2.1.93.1. Description
3.5.2.1.93.1.1. Type
  • object
Property | Type | Description

curatorStatus

array

(optional)

3.5.2.1.94. .status.curation.curatorStatus[]
3.5.2.1.94.1. Description
3.5.2.1.94.1.1. Type
  • array
Property | Type | Description

clusterCondition

object

(optional)

cronJobs

string

(optional)

schedules

string

(optional)

suspended

bool

(optional)

3.5.2.1.95. .status.curation.curatorStatus[].clusterCondition
3.5.2.1.95.1. Description

operator-sdk generate crds does not allow map-of-slice, must use a named type.

3.5.2.1.95.1.1. Type
  • object
3.5.2.1.96. .status.logStore
3.5.2.1.96.1. Description
3.5.2.1.96.1.1. Type
  • object
Property | Type | Description

elasticsearchStatus

array

(optional)

3.5.2.1.97. .status.logStore.elasticsearchStatus[]
3.5.2.1.97.1. Description
3.5.2.1.97.1.1. Type
  • array
Property | Type | Description

cluster

object

(optional)

clusterConditions

object

(optional)

clusterHealth

string

(optional)

clusterName

string

(optional)

deployments

array

(optional)

nodeConditions

object

(optional)

nodeCount

int

(optional)

pods

object

(optional)

replicaSets

array

(optional)

shardAllocationEnabled

string

(optional)

statefulSets

array

(optional)

3.5.2.1.98. .status.logStore.elasticsearchStatus[].cluster