Logging

OpenShift Container Platform 4.13

Configuring and using logging in OpenShift Container Platform

Red Hat OpenShift Documentation Team

Abstract

Use logging to collect, visualize, forward, and store log data to troubleshoot issues, identify performance bottlenecks, and detect security threats in OpenShift Container Platform.

Chapter 1. Release notes

1.1. Logging 5.9

Note

Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

Note

The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y, where x.y represents the major and minor version of logging you have installed. For example, stable-5.7.
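
For example, on a cluster where the logging Operators are installed through OLM, you can pin the channel by editing the Subscription object. The following is a minimal sketch; the subscription name and namespace are assumptions (verify them with oc get subscriptions -n openshift-logging), and the channel value must match your installed major and minor version:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: cluster-logging            # assumed subscription name
      namespace: openshift-logging     # assumed namespace
    spec:
      channel: stable-5.7              # pin to the installed major.minor version
      name: cluster-logging
      source: redhat-operators
      sourceNamespace: openshift-marketplace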

1.1.1. Logging 5.9.6

This release includes OpenShift Logging Bug Fix Release 5.9.6.

1.1.1.1. Bug fixes
  • Before this update, the collector deployment ignored secret changes, causing receivers to reject logs. With this update, the system rolls out a new pod when there is a change in the secret value, ensuring that the collector reloads the updated secrets. (LOG-5525)
  • Before this update, the Vector could not correctly parse field values that included a single dollar sign ($). With this update, field values with a single dollar sign are automatically changed to two dollar signs ($$), ensuring proper parsing by the Vector. (LOG-5602)
  • Before this update, the drop filter could not handle non-string values (e.g., .responseStatus.code: 403). With this update, the drop filter now works properly with these values. (LOG-5815)
  • Before this update, the collector used the default settings to collect audit logs, without handling the backload from output receivers. With this update, the process for collecting audit logs has been improved to better manage file handling and log reading efficiency. (LOG-5866)
  • Before this update, the must-gather tool failed on clusters with non-AMD64 architectures such as ARM or PowerPC. With this update, the tool now detects the cluster architecture at runtime and uses architecture-independent paths and dependencies. The detection allows must-gather to run smoothly on platforms like ARM and PowerPC. (LOG-5997)
  • Before this update, the log level was set using a mix of structured and unstructured keywords that were unclear. With this update, the log level follows a clear, documented order, starting with structured keywords. (LOG-6016)
  • Before this update, multiple unnamed pipelines writing to the default output in the ClusterLogForwarder caused a validation error due to duplicate auto-generated names. With this update, the pipeline names are now generated without duplicates. (LOG-6033)
  • Before this update, the collector pods did not have the PreferredScheduling annotation. With this update, the PreferredScheduling annotation is added to the collector daemonset. (LOG-6023)
1.1.1.2. CVEs

1.1.2. Logging 5.9.5

This release includes OpenShift Logging Bug Fix Release 5.9.5.

1.1.2.1. Bug Fixes
  • Before this update, duplicate conditions in the LokiStack resource status led to invalid metrics from the Loki Operator. With this update, the Operator removes duplicate conditions from the status. (LOG-5855)
  • Before this update, the Loki Operator did not trigger alerts when it dropped log events due to validation failures. With this update, the Loki Operator includes a new alert definition that triggers an alert if Loki drops log events due to validation failures. (LOG-5895)
  • Before this update, the Loki Operator overwrote user annotations on the LokiStack Route resource, causing customizations to drop. With this update, the Loki Operator no longer overwrites Route annotations, fixing the issue. (LOG-5945)
1.1.2.2. CVEs

None.

1.1.3. Logging 5.9.4

This release includes OpenShift Logging Bug Fix Release 5.9.4.

1.1.3.1. Bug Fixes
  • Before this update, an incorrectly formatted timeout configuration caused the OCP plugin to crash. With this update, a validation prevents the crash and informs the user about the incorrect configuration. (LOG-5373)
  • Before this update, workloads with labels containing - caused an error in the collector when normalizing log entries. With this update, the configuration change ensures the collector uses the correct syntax. (LOG-5524)
  • Before this update, an issue prevented selecting pods that no longer existed, even if they had generated logs. With this update, this issue has been fixed, allowing selection of such pods. (LOG-5697)
  • Before this update, the Loki Operator would crash if the CredentialRequest specification was registered in an environment without the cloud-credentials-operator. With this update, the CredentialRequest specification only registers in environments that are cloud-credentials-operator enabled. (LOG-5701)
  • Before this update, the Logging Operator watched and processed all config maps across the cluster. With this update, the dashboard controller only watches the config map for the logging dashboard. (LOG-5702)
  • Before this update, the ClusterLogForwarder introduced an extra space in the message payload which did not follow the RFC3164 specification. With this update, the extra space has been removed, fixing the issue. (LOG-5707)
  • Before this update, removing the seeding for grafana-dashboard-cluster-logging as a part of (LOG-5308) broke new greenfield deployments without dashboards. With this update, the Logging Operator seeds the dashboard at the beginning and continues to update it for changes. (LOG-5747)
  • Before this update, LokiStack was missing a route for the Volume API causing the following error: 404 not found. With this update, LokiStack exposes the Volume API, resolving the issue. (LOG-5749)
1.1.3.2. CVEs

CVE-2024-24790

1.1.4. Logging 5.9.3

This release includes OpenShift Logging Bug Fix Release 5.9.3.

1.1.4.1. Bug Fixes
  • Before this update, there was a delay in restarting Ingesters when configuring LokiStack, because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. (LOG-5614)
  • Before this update, monitoring the Vector collector output buffer state was not possible. With this update, you can monitor and alert on the Vector collector output buffer size, which improves observability and helps keep the system running optimally. (LOG-5586)
1.1.4.2. CVEs

1.1.5. Logging 5.9.2

This release includes OpenShift Logging Bug Fix Release 5.9.2.

1.1.5.1. Bug Fixes
  • Before this update, changes to the Logging Operator caused an error due to an incorrect configuration in the ClusterLogForwarder CR. As a result, upgrades to logging deleted the daemonset collector. With this update, the Logging Operator re-creates collector daemonsets except when a Not authorized to collect error occurs. (LOG-4910)
  • Before this update, the rotated infrastructure log files were sent to the application index in some scenarios due to an incorrect configuration in the Vector log collector. With this update, the Vector log collector configuration avoids collecting any rotated infrastructure log files. (LOG-5156)
  • Before this update, the Logging Operator did not monitor changes to the grafana-dashboard-cluster-logging config map. With this update, the Logging Operator monitors changes in the ConfigMap objects, ensuring the system stays synchronized and responds effectively to config map modifications. (LOG-5308)
  • Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. (LOG-5426)
  • Before this change, the Fluentd out_http plugin ignored the no_proxy environment variable. With this update, Fluentd patches the Ruby HTTP#start method to honor the no_proxy environment variable. (LOG-5466)
1.1.5.2. CVEs

1.1.6. Logging 5.9.1

This release includes OpenShift Logging Bug Fix Release 5.9.1.

1.1.6.1. Enhancements
  • Before this update, the Loki Operator configured Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. (LOG-5401)
  • Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. (LOG-5395)
1.1.6.2. Bug Fixes
  • Before this update, a bug in LogQL parsing left out some line filters from the query. With this update, the parsing now includes all the line filters while keeping the original query unchanged. (LOG-5268)
  • Before this update, a prune filter without a defined pruneFilterSpec would cause a segfault. With this update, there is a validation error if a prune filter does not have a defined pruneFilterSpec. (LOG-5322)
  • Before this update, a drop filter without a defined dropTestsSpec would cause a segfault. With this update, there is a validation error if a drop filter does not have a defined dropTestsSpec. (LOG-5323)
  • Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack. (LOG-5397)
  • Before this update, poorly formatted timestamp fields in audit log records led to WARN messages in Red Hat OpenShift Logging Operator logs. With this update, a remap transformation ensures that the timestamp field is properly formatted. (LOG-4672)
  • Before this update, the error message thrown while validating a ClusterLogForwarder resource name and namespace did not correspond to the correct error. With this update, the system checks if a ClusterLogForwarder resource with the same name exists in the same namespace. If not, it corresponds to the correct error. (LOG-5062)
  • Before this update, the validation feature for output config required a TLS URL, even for services such as Amazon CloudWatch or Google Cloud Logging where a URL is not needed by design. With this update, the validation logic for services without URLs is improved, and the error messages are more informative. (LOG-5307)
  • Before this update, defining an infrastructure input type did not exclude logging workloads from the collection. With this update, the collection excludes logging services to avoid feedback loops. (LOG-5309)
1.1.6.3. CVEs

No CVEs.

1.1.7. Logging 5.9.0

This release includes OpenShift Logging Bug Fix Release 5.9.0.

1.1.7.1. Removal notice

The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. Instances of the OpenShift Elasticsearch Operator from prior logging releases remain supported until the EOL of the logging release. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators.

1.1.7.2. Deprecation notice
  • In Logging 5.9, Fluentd and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of OpenShift Container Platform. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Red Hat OpenShift Logging Operator and LokiStack provided by the Loki Operator are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward.
  • In Logging 5.9, the Fields option for the Splunk output type was never implemented and is now deprecated. It will be removed in a future release.
1.1.7.3. Enhancements
1.1.7.3.1. Log Collection
  • This enhancement adds the ability to refine the process of log collection by using a workload’s metadata to drop or prune logs based on their content. Additionally, it allows the collection of infrastructure logs, such as journal or container logs, and audit logs, such as kube api or ovn logs, to only collect individual sources. (LOG-2155)
  • This enhancement introduces a new type of remote log receiver, the syslog receiver. You can configure it to expose a port over a network, allowing external systems to send syslog logs using compatible tools such as rsyslog. (LOG-3527)
  • With this update, the ClusterLogForwarder API now supports log forwarding to Azure Monitor Logs, giving users better monitoring abilities. This feature helps users to maintain optimal system performance and streamline the log analysis processes in Azure Monitor, which speeds up issue resolution and improves operational efficiency. (LOG-4605)
  • This enhancement improves collector resource utilization by deploying collectors as a deployment with two replicas. This occurs when the only input source defined in the ClusterLogForwarder custom resource (CR) is a receiver input instead of using a daemon set on all nodes. Additionally, collectors deployed in this manner do not mount the host file system. To use this enhancement, you need to annotate the ClusterLogForwarder CR with the logging.openshift.io/dev-preview-enable-collector-as-deployment annotation, as shown in the sketch after this list. (LOG-4779)
  • This enhancement introduces the capability for custom tenant configuration across all supported outputs, facilitating the organization of log records in a logical manner. However, it does not permit custom tenant configuration for logging managed storage. (LOG-4843)
  • With this update, the ClusterLogForwarder CR that specifies an application input with one or more infrastructure namespaces like default, openshift*, or kube*, now requires a service account with the collect-infrastructure-logs role. (LOG-4943)
  • This enhancement introduces the capability for tuning some output settings, such as compression, retry duration, and maximum payloads, to match the characteristics of the receiver. Additionally, this feature includes a delivery mode to allow administrators to choose between throughput and log durability. For example, the AtLeastOnce option configures minimal disk buffering of collected logs so that the collector can deliver those logs after a restart. (LOG-5026)
  • This enhancement adds three new Prometheus alerts, warning users about the deprecation of Elasticsearch, Fluentd, and Kibana. (LOG-5055)
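
The following sketch illustrates the receiver-only deployment described in the list above (LOG-4779). It is a minimal example, not the definitive configuration: the resource name, namespace, annotation value, and the HTTP receiver input syntax are assumptions that you should adjust for your environment.

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance                   # assumed name and namespace
      namespace: openshift-logging
      annotations:
        logging.openshift.io/dev-preview-enable-collector-as-deployment: "true"  # assumed value; the note names only the annotation key
    spec:
      inputs:
        - name: http-receiver
          receiver:                    # a receiver is the only input source, so the collector runs as a two-replica deployment
            type: http
            http:
              format: kubeAPIAudit
              port: 8443
      pipelines:
        - name: http-to-default
          inputRefs:
            - http-receiver
          outputRefs:
            - default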
1.1.7.3.2. Log Storage
  • This enhancement in LokiStack improves support for OTEL by using the new V13 object storage format and enabling automatic stream sharding by default. This also prepares the collector for future enhancements and configurations. (LOG-4538)
  • This enhancement introduces support for short-lived token workload identity federation with Azure and AWS log stores for STS enabled OpenShift Container Platform 4.14 and later clusters. Local storage requires the addition of a CredentialMode: static annotation under spec.storage.secret in the LokiStack CR, as shown in the sketch after this list. (LOG-4540)
  • With this update, the validation of the Azure storage secret is now extended to give early warning for certain error conditions. (LOG-4571)
  • With this update, Loki now adds upstream and downstream support for GCP workload identity federation mechanism. This allows authenticated and authorized access to the corresponding object storage services. (LOG-4754)
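
As a sketch of the CredentialMode setting described in the list above (LOG-4540), the following LokiStack excerpt keeps long-lived static credentials on an STS-enabled cluster. The resource name, namespace, secret name, and storage class are assumptions.

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki               # assumed name and namespace
      namespace: openshift-logging
    spec:
      size: 1x.small
      storage:
        secret:
          name: logging-loki-s3        # assumed object storage secret
          type: s3
          credentialMode: static       # per the note above, keeps static credentials instead of workload identity federation
      storageClassName: gp3-csi        # assumed storage class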
1.1.7.4. Bug Fixes
  • Before this update, the logging must-gather could not collect any logs on a FIPS-enabled cluster. With this update, a new oc client is available in cluster-logging-rhel9-operator, and must-gather works properly on FIPS clusters. (LOG-4403)
  • Before this update, the LokiStack ruler pods could not format the IPv6 pod IP in HTTP URLs used for cross-pod communication. This issue caused querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the problem. Now, querying rules and alerts through the Prometheus-compatible API works just like in IPv4 environments. (LOG-4709)
  • Before this fix, the YAML content from the logging must-gather was exported in a single line, making it unreadable. With this update, the YAML white spaces are preserved, ensuring that the file is properly formatted. (LOG-4792)
  • Before this update, when the ClusterLogForwarder CR was enabled, the Red Hat OpenShift Logging Operator could run into a nil pointer exception when ClusterLogging.Spec.Collection was nil. With this update, the issue is now resolved in the Red Hat OpenShift Logging Operator. (LOG-5006)
  • Before this update, in specific corner cases, replacing the ClusterLogForwarder CR status field caused the resourceVersion to constantly update due to changing timestamps in Status conditions. This condition led to an infinite reconciliation loop. With this update, all status conditions synchronize, so that timestamps remain unchanged if conditions stay the same. (LOG-5007)
  • Before this update, an internal buffering behavior of drop_newest, introduced to address high memory consumption by the collector, resulted in significant log loss. With this update, the behavior reverts to using the collector defaults. (LOG-5123)
  • Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. (LOG-5165)
  • Before this update, the configuration of the Loki Operator ServiceMonitor could match many Kubernetes services, resulting in the Loki Operator metrics being collected multiple times. With this update, the configuration of ServiceMonitor now only matches the dedicated metrics service. (LOG-5212)
1.1.7.5. Known Issues

None.

1.1.7.6. CVEs

1.2. Logging 5.8

Note

Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

Note

The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y, where x.y represents the major and minor version of logging you have installed. For example, stable-5.7.

1.2.1. Logging 5.8.12

This release includes OpenShift Logging Bug Fix Release 5.8.12.

1.2.1.1. Bug fixes
  • Before this update, the collector used internal buffering with the drop_newest setting to reduce high memory usage, which caused significant log loss. With this update, the collector goes back to its default behavior, where sink<>.buffer is not customized. (LOG-6026)
1.2.1.2. CVEs

1.2.2. Logging 5.8.11

This release includes OpenShift Logging Bug Fix Release 5.8.11.

1.2.2.1. Bug fixes
  • Before this update, the TLS section was added without verifying the broker URL schema, leading to SSL connection errors if the URLs did not start with tls. With this update, the TLS section is added only if broker URLs start with tls, preventing SSL connection errors. (LOG-5139)
  • Before this update, the Loki Operator did not trigger alerts when it dropped log events due to validation failures. With this update, the Loki Operator includes a new alert definition that triggers an alert if Loki drops log events due to validation failures. (LOG-5896)
  • Before this update, the 4.16 GA catalog did not include Elasticsearch Operator 5.8, preventing the installation of products like Service Mesh, Kiali, and Tracing. With this update, Elasticsearch Operator 5.8 is now available on 4.16, resolving the issue and providing support for Elasticsearch storage for these products only. (LOG-5911)
  • Before this update, duplicate conditions in the LokiStack resource status led to invalid metrics from the Loki Operator. With this update, the Operator removes duplicate conditions from the status. (LOG-5857)
  • Before this update, the Loki Operator overwrote user annotations on the LokiStack Route resource, causing customizations to drop. With this update, the Loki Operator no longer overwrites Route annotations, fixing the issue. (LOG-5946)
1.2.2.2. CVEs

1.2.3. Logging 5.8.10

This release includes OpenShift Logging Bug Fix Release 5.8.10.

1.2.3.1. Known issues
  • Before this update, when enabling retention, the Loki Operator produced an invalid configuration. As a result, Loki did not start properly. With this update, Loki pods can set retention. (LOG-5821)
1.2.3.2. Bug fixes
  • Before this update, the ClusterLogForwarder introduced an extra space in the message payload that did not follow the RFC3164 specification. With this update, the extra space has been removed, fixing the issue. (LOG-5647)
1.2.3.3. CVEs

1.2.4. Logging 5.8.9

This release includes OpenShift Logging Bug Fix Release 5.8.9.

1.2.4.1. Bug fixes
  • Before this update, an issue prevented selecting pods that no longer existed, even if they had generated logs. With this update, this issue has been fixed, allowing selection of such pods. (LOG-5698)
  • Before this update, LokiStack was missing a route for the Volume API, which caused the following error: 404 not found. With this update, LokiStack exposes the Volume API, resolving the issue. (LOG-5750)
  • Before this update, the Elasticsearch operator overwrote all service account annotations without considering ownership. As a result, the kube-controller-manager recreated service account secrets because it lost the link to the owning service account. With this update, the Elasticsearch operator merges annotations, resolving the issue. (LOG-5776)
1.2.4.2. CVEs

1.2.5. Logging 5.8.8

This release includes OpenShift Logging Bug Fix Release 5.8.8.

1.2.5.1. Bug fixes
  • Before this update, there was a delay in restarting Ingesters when configuring LokiStack, because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. (LOG-5615)
1.2.5.2. CVEs

1.2.6. Logging 5.8.7

This release includes OpenShift Logging Bug Fix Release 5.8.7 Security Update and OpenShift Logging Bug Fix Release 5.8.7.

1.2.6.1. Bug fixes
  • Before this update, the elasticsearch-im-<type>-* pods failed if no <type> logs (audit, infrastructure, or application) were collected. With this update, the pods no longer fail when <type> logs are not collected. (LOG-4949)
  • Before this update, the validation feature for output config required an SSL/TLS URL, even for services such as Amazon CloudWatch or Google Cloud Logging where a URL is not needed by design. With this update, the validation logic for services without URLs is improved, and the error message is more informative. (LOG-5467)
  • Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. (LOG-5471)
  • Before this update, changes to the Logging Operator caused an error due to an incorrect configuration in the ClusterLogForwarder CR. As a result, upgrades to logging deleted the daemonset collector. With this update, the Logging Operator re-creates collector daemonsets except when a Not authorized to collect error occurs. (LOG-5514)
1.2.6.2. CVEs

1.2.7. Logging 5.8.6

This release includes OpenShift Logging Bug Fix Release 5.8.6 Security Update and OpenShift Logging Bug Fix Release 5.8.6.

1.2.7.1. Enhancements
  • Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. (LOG-5392)
  • Before this update, the Loki Operator configured Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. (LOG-5402)
1.2.7.2. Bug fixes
  • Before this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and certificate authority (CA) files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring specification on the ServiceMonitor configuration. With this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring specifications in the Prometheus Operator to handle the Elasticsearch Operator ServiceMonitor successfully. This enables Prometheus to scrape the Elasticsearch Operator metrics. (LOG-5164)
  • Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack. (LOG-5398)
1.2.7.3. CVEs

1.2.8. Logging 5.8.5

This release includes OpenShift Logging Bug Fix Release 5.8.5.

1.2.8.1. Bug fixes
  • Before this update, the configuration of the Loki Operator’s ServiceMonitor could match many Kubernetes services, resulting in the Loki Operator’s metrics being collected multiple times. With this update, the configuration of ServiceMonitor now only matches the dedicated metrics service. (LOG-5250)
  • Before this update, the Red Hat build pipeline did not use the existing build details in Loki builds and omitted information such as revision, branch, and version. With this update, the Red Hat build pipeline now adds these details to the Loki builds, fixing the issue. (LOG-5201)
  • Before this update, the Loki Operator checked if the pods were running to decide if the LokiStack was ready. With this update, it also checks if the pods are ready, so that the readiness of the LokiStack reflects the state of its components. (LOG-5171)
  • Before this update, running a query for log metrics caused an error in the histogram. With this update, the histogram toggle function and the chart are disabled and hidden because the histogram doesn’t work with log metrics. (LOG-5044)
  • Before this update, the Loki and Elasticsearch bundle had the wrong maxOpenShiftVersion, resulting in IncompatibleOperatorsInstalled alerts. With this update, including 4.16 as the maxOpenShiftVersion property in the bundle fixes the issue. (LOG-5272)
  • Before this update, the build pipeline did not include linker flags for the build date, causing Loki builds to show empty strings for buildDate and goVersion. With this update, adding the missing linker flags in the build pipeline fixes the issue. (LOG-5274)
  • Before this update, a bug in LogQL parsing left out some line filters from the query. With this update, the parsing now includes all the line filters while keeping the original query unchanged. (LOG-5270)
  • Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. (LOG-5240)
1.2.8.2. CVEs

1.2.9. Logging 5.8.4

This release includes OpenShift Logging Bug Fix Release 5.8.4.

1.2.9.1. Bug fixes
  • Before this update, the developer console’s logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, all supported OCP versions ensure correct namespace inclusion. (LOG-4905)
  • Before this update, the Cluster Logging Operator deployed ClusterRoles supporting LokiStack deployments only when the default log output was LokiStack. With this update, the roles are split into two groups: read and write. The write roles are deployed based on the setting of the default log storage, just like all the roles used to be before. The read roles are deployed based on whether the logging console plugin is active. (LOG-4987)
  • Before this update, multiple ClusterLogForwarders defining the same input receiver name had their service endlessly reconciled because of changing ownerReferences on one service. With this update, each receiver input will have its own service named with the convention of <CLF.Name>-<input.Name>. (LOG-5009)
  • Before this update, the ClusterLogForwarder did not report errors when forwarding logs to cloudwatch without a secret. With this update, the following error message appears when forwarding logs to cloudwatch without a secret: secret must be provided for cloudwatch output. (LOG-5021)
  • Before this update, the log_forwarder_input_info included application, infrastructure, and audit input metric points. With this update, http is also added as a metric point. (LOG-5043)
1.2.9.2. CVEs

1.2.10. Logging 5.8.3

This release includes Logging Bug Fix 5.8.3 and Logging Security Fix 5.8.3.

1.2.10.1. Bug fixes
  • Before this update, when configured to read a custom S3 Certificate Authority, the Loki Operator would not automatically update the configuration when the name of the ConfigMap or the contents changed. With this update, the Loki Operator now watches for changes to the ConfigMap and automatically updates the generated configuration. (LOG-4969)
  • Before this update, Loki outputs configured without a valid URL caused the collector pods to crash. With this update, outputs are subject to URL validation, resolving the issue. (LOG-4822)
  • Before this update, the Cluster Logging Operator would generate collector configuration fields for outputs that did not specify a secret to use the service account bearer token. With this update, an output does not require authentication, resolving the issue. (LOG-4962)
  • Before this update, the tls.insecureSkipVerify field of an output could not be set to a value of true without a secret defined. With this update, a secret is no longer required to set this value. (LOG-4963)
  • Before this update, output configurations allowed the combination of an insecure (HTTP) URL with TLS authentication. With this update, outputs configured for TLS authentication require a secure (HTTPS) URL. (LOG-4893)
1.2.10.2. CVEs

1.2.11. Logging 5.8.2

This release includes OpenShift Logging Bug Fix Release 5.8.2.

1.2.11.1. Bug fixes
  • Before this update, the LokiStack ruler pods would not format the IPv6 pod IP in HTTP URLs used for cross pod communication, causing querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the issue. (LOG-4890)
  • Before this update, the developer console logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, namespace inclusion has been corrected, resolving the issue. (LOG-4947)
  • Before this update, the logging view plugin of the OpenShift Container Platform web console did not allow for custom node placement and tolerations. With this update, defining custom node placements and tolerations has been added to the logging view plugin of the OpenShift Container Platform web console. (LOG-4912)
1.2.11.2. CVEs

1.2.12. Logging 5.8.1

This release includes OpenShift Logging Bug Fix Release 5.8.1 and OpenShift Logging Bug Fix Release 5.8.1 Kibana.

1.2.12.1. Enhancements
1.2.12.1.1. Log Collection
  • With this update, while configuring Vector as a collector, you can add logic to the Red Hat OpenShift Logging Operator to use a token specified in the secret in place of the token associated with the service account. (LOG-4780)
  • With this update, the BoltDB Shipper Loki dashboards are now renamed to Index dashboards. (LOG-4828)
1.2.12.2. Bug fixes
  • Before this update, the ClusterLogForwarder created empty indices after enabling the parsing of JSON logs, even when the rollover conditions were not met. With this update, the ClusterLogForwarder skips the rollover when the write-index is empty. (LOG-4452)
  • Before this update, the Vector set the default log level incorrectly. With this update, the correct log level is set by an improved regular expression (regexp) for log level detection. (LOG-4480)
  • Before this update, during the process of creating index patterns, the default alias was missing from the initial index in each log output. As a result, Kibana users were unable to create index patterns by using OpenShift Elasticsearch Operator. This update adds the missing aliases to OpenShift Elasticsearch Operator, resolving the issue. Kibana users can now create index patterns that include the {app,infra,audit}-000001 indexes. (LOG-4683)
  • Before this update, Fluentd collector pods were in a CrashLoopBackOff state due to binding of the Prometheus server on IPv6 clusters. With this update, the collectors work properly on IPv6 clusters. (LOG-4706)
  • Before this update, the Red Hat OpenShift Logging Operator would undergo numerous reconciliations whenever there was a change in the ClusterLogForwarder. With this update, the Red Hat OpenShift Logging Operator disregards the status changes in the collector daemonsets that triggered the reconciliations. (LOG-4741)
  • Before this update, the Vector log collector pods were stuck in the CrashLoopBackOff state on IBM Power machines. With this update, the Vector log collector pods start successfully on IBM Power architecture machines. (LOG-4768)
  • Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Fluentd collector pods. With this update, the log collector service account is used by default for authentication, using the associated token and ca.crt. (LOG-4791)
  • Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Vector collector pods. With this update, the log collector service account is used by default for authentication, using the associated token and ca.crt. (LOG-4852)
  • Before this fix, IPv6 addresses would not be parsed correctly after evaluating a host or multiple hosts for placeholders. With this update, IPv6 addresses are correctly parsed. (LOG-4811)
  • Before this update, it was necessary to create a ClusterRoleBinding to collect audit permissions for HTTP receiver inputs. With this update, it is not necessary to create the ClusterRoleBinding because the endpoint already depends upon the cluster certificate authority. (LOG-4815)
  • Before this update, the Loki Operator did not mount a custom CA bundle to the ruler pods. As a result, during the process to evaluate alerting or recording rules, object storage access failed. With this update, the Loki Operator mounts the custom CA bundle to all ruler pods. The ruler pods can download logs from object storage to evaluate alerting or recording rules. (LOG-4836)
  • Before this update, while removing the inputs.receiver section in the ClusterLogForwarder, the HTTP input services and its associated secrets were not deleted. With this update, the HTTP input resources are deleted when not needed. (LOG-4612)
  • Before this update, the ClusterLogForwarder indicated validation errors in the status, but the outputs and the pipeline status did not accurately reflect the specific issues. With this update, the pipeline status displays the validation failure reasons correctly in case of misconfigured outputs, inputs, or filters. (LOG-4821)
  • Before this update, changing a LogQL query that used controls such as time range or severity changed the label matcher operator defining it like a regular expression. With this update, regular expression operators remain unchanged when updating the query. (LOG-4841)
1.2.12.3. CVEs

1.2.13. Logging 5.8.0

This release includes OpenShift Logging Bug Fix Release 5.8.0 and OpenShift Logging Bug Fix Release 5.8.0 Kibana.

1.2.13.1. Deprecation notice

In Logging 5.8, Elasticsearch, Fluentd, and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of OpenShift Container Platform. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Red Hat OpenShift Logging Operator and LokiStack provided by the Loki Operator are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward.

1.2.13.2. Enhancements
1.2.13.2.1. Log Collection
  • With this update, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers; a minimal example CR is shown after this list. If you do not create the LogFileMetricExporter CR, you may see a No datapoints found message in the OpenShift Container Platform web console dashboard for Produced Logs. (LOG-3819)
  • With this update, you can deploy multiple, isolated, and RBAC-protected ClusterLogForwarder custom resource (CR) instances in any namespace. This allows independent groups to forward desired logs to any destination while isolating their configuration from other collector deployments. (LOG-1343)

    Important

    In order to support multi-cluster log forwarding in additional namespaces other than the openshift-logging namespace, you must update the Red Hat OpenShift Logging Operator to watch all namespaces. This functionality is supported by default in new Red Hat OpenShift Logging Operator version 5.8 installations.

  • With this update, you can use the flow control or rate limiting mechanism to limit the volume of log data that can be collected or forwarded by dropping excess log records. The input limits prevent poorly performing containers from overloading the logging system, and the output limits put a ceiling on the rate of logs shipped to a given data store. (LOG-884)
  • With this update, you can configure the log collector to look for HTTP connections and receive logs as an HTTP server, also known as a webhook. (LOG-4562)
  • With this update, you can configure audit policies to control which Kubernetes and OpenShift API server events are forwarded by the log collector. (LOG-3982)
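
For the LogFileMetricExporter change noted in the list above (LOG-3819), a minimal CR sketch follows. The apiVersion, resource name, and resource requests are assumptions based on a typical deployment in the openshift-logging namespace.

    apiVersion: logging.openshift.io/v1alpha1
    kind: LogFileMetricExporter
    metadata:
      name: instance                   # assumed name; created in the operator namespace
      namespace: openshift-logging
    spec:
      resources:                       # illustrative resource requests and limits
        requests:
          cpu: 200m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 256Mi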
1.2.13.2.2. Log Storage
  • With this update, LokiStack administrators can have more fine-grained control over who can access which logs by granting access to logs on a namespace basis. (LOG-3841)
  • With this update, the Loki Operator introduces PodDisruptionBudget configuration on LokiStack deployments to ensure normal operations during OpenShift Container Platform cluster restarts by keeping ingestion and the query path available. (LOG-3839)
  • With this update, the reliability of existing LokiStack installations is seamlessly improved by applying a set of default Affinity and Anti-Affinity policies. (LOG-3840)
  • With this update, you can manage zone-aware data replication as an administrator in LokiStack, in order to enhance reliability in the event of a zone failure. (LOG-3266)
  • With this update, a new supported small-scale LokiStack size of 1x.extra-small is introduced for OpenShift Container Platform clusters hosting a few workloads and smaller ingestion volumes (up to 100GB/day); see the example after this list. (LOG-4329)
  • With this update, the LokiStack administrator has access to an official Loki dashboard to inspect the storage performance and the health of each component. (LOG-4327)
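
A minimal sketch of the new 1x.extra-small size mentioned in the list above (LOG-4329); the resource name, namespace, secret name, and storage class are assumptions.

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki               # assumed name and namespace
      namespace: openshift-logging
    spec:
      size: 1x.extra-small             # small-scale size for ingestion volumes up to about 100GB/day
      storage:
        secret:
          name: logging-loki-s3        # assumed object storage secret
          type: s3
      storageClassName: gp3-csi        # assumed storage class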
1.2.13.2.3. Log Console
  • With this update, you can enable the Logging Console Plugin when Elasticsearch is the default Log Store. (LOG-3856)
  • With this update, OpenShift Container Platform application owners can receive notifications for application log-based alerts on the OpenShift Container Platform web console Developer perspective for OpenShift Container Platform version 4.14 and later. (LOG-3548)
1.2.13.3. Known Issues
  • Currently, Splunk log forwarding might not work after upgrading to version 5.8 of the Red Hat OpenShift Logging Operator. This issue is caused by transitioning from OpenSSL version 1.1.1 to version 3.0.7. In the newer OpenSSL version, there is a default behavior change, where connections to TLS 1.2 endpoints are rejected if they do not expose the RFC 5746 extension.

    As a workaround, enable TLS 1.3 support on the TLS terminating load balancer in front of the Splunk HEC (HTTP Event Collector) endpoint. Splunk is a third-party system and this should be configured from the Splunk end.

  • Currently, there is a flaw in handling multiplexed streams in the HTTP/2 protocol, where you can repeatedly make a request for a new multiplex stream and immediately send an RST_STREAM frame to cancel it. This creates extra work for the server as it sets up and tears down the streams, resulting in a denial of service due to server resource consumption. There is currently no workaround for this issue. (LOG-4609)
  • Currently, when using FluentD as the collector, the collector pod cannot start on the OpenShift Container Platform IPv6-enabled cluster. The pod logs produce the following error: fluentd pod [error]: unexpected error error_class=SocketError error="getaddrinfo: Name or service not known". There is currently no workaround for this issue. (LOG-4706)
  • Currently, the log alert is not available on an IPv6-enabled cluster. There is currently no workaround for this issue. (LOG-4709)
  • Currently, must-gather cannot gather any logs on a FIPS-enabled cluster, because the required OpenSSL library is not available in the cluster-logging-rhel9-operator. There is currently no workaround for this issue. (LOG-4403)
  • Currently, when deploying the logging version 5.8 on a FIPS-enabled cluster, the collector pods cannot start and are stuck in CrashLoopBackOff status, while using FluentD as a collector. There is currently no workaround for this issue. (LOG-3933)
1.2.13.4. CVEs

1.3. Logging 5.7

Note

Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

Note

The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y, where x.y represents the major and minor version of logging you have installed. For example, stable-5.7.

1.3.1. Logging 5.7.15

This release includes OpenShift Logging Bug Fix 5.7.15.

1.3.1.1. Bug fixes
  • Before this update, there was a delay in restarting Ingesters when configuring LokiStack, because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. (LOG-5616)
1.3.1.2. CVEs

1.3.2. Logging 5.7.14

This release includes OpenShift Logging Bug Fix 5.7.14.

1.3.2.1. Bug fixes
  • Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. (LOG-5472)
1.3.2.2. CVEs

1.3.3. Logging 5.7.13

This release includes OpenShift Logging Bug Fix 5.7.13.

1.3.3.1. Enhancements
  • Before this update, the Loki Operator configured Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. (LOG-5403)
  • Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. (LOG-5393)
1.3.3.2. Bug fixes
  • Before this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and certificate authority (CA) files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring specification on the ServiceMonitor configuration. With this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring specifications in the Prometheus Operator to handle the Elasticsearch Operator ServiceMonitor successfully. This enables Prometheus to scrape the Elasticsearch Operator metrics. (LOG-5243)
  • Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack. (LOG-5399)
1.3.3.3. CVEs

1.3.4. Logging 5.7.12

This release includes OpenShift Logging Bug Fix 5.7.12.

1.3.4.1. Bug fixes
  • Before this update, the Loki Operator checked if the pods were running to decide if the LokiStack was ready. With this update, it also checks if the pods are ready, so that the readiness of the LokiStack reflects the state of its components. (LOG-5172)
  • Before this update, the Red Hat build pipeline didn’t use the existing build details in Loki builds and omitted information such as revision, branch, and version. With this update, the Red Hat build pipeline now adds these details to the Loki builds, fixing the issue. (LOG-5202)
  • Before this update, the configuration of the Loki Operator’s ServiceMonitor could match many Kubernetes services, resulting in the Loki Operator’s metrics being collected multiple times. With this update, the configuration of ServiceMonitor now only matches the dedicated metrics service. (LOG-5251)
  • Before this update, the build pipeline did not include linker flags for the build date, causing Loki builds to show empty strings for buildDate and goVersion. With this update, adding the missing linker flags in the build pipeline fixes the issue. (LOG-5275)
  • Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. (LOG-5241)
1.3.4.2. CVEs

1.3.5. Logging 5.7.11

This release includes Logging Bug Fix 5.7.11.

1.3.5.1. Bug fixes
  • Before this update, when configured to read a custom S3 Certificate Authority, the Loki Operator would not automatically update the configuration when the name of the ConfigMap object or the contents changed. With this update, the Loki Operator now watches for changes to the ConfigMap object and automatically updates the generated configuration. (LOG-4968)
1.3.5.2. CVEs

1.3.6. Logging 5.7.10

This release includes OpenShift Logging Bug Fix Release 5.7.10.

1.3.6.1. Bug fix

Before this update, the LokiStack ruler pods would not format the IPv6 pod IP in HTTP URLs used for cross pod communication, causing querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the issue. (LOG-4891)

1.3.6.2. CVEs

1.3.7. Logging 5.7.9

This release includes OpenShift Logging Bug Fix Release 5.7.9.

1.3.7.1. Bug fixes
  • Before this fix, IPv6 addresses would not be parsed correctly after evaluating a host or multiple hosts for placeholders. With this update, IPv6 addresses are correctly parsed. (LOG-4281)
  • Before this update, the Vector failed to start on IPv4-only nodes. As a result, it failed to create a listener for its metrics endpoint with the following error: Failed to start Prometheus exporter: TCP bind failed: Address family not supported by protocol (os error 97). With this update, the Vector operates normally on IPv4-only nodes. (LOG-4589)
  • Before this update, during the process of creating index patterns, the default alias was missing from the initial index in each log output. As a result, Kibana users were unable to create index patterns by using OpenShift Elasticsearch Operator. This update adds the missing aliases to OpenShift Elasticsearch Operator, resolving the issue. Kibana users can now create index patterns that include the {app,infra,audit}-000001 indexes. (LOG-4806)
  • Before this update, the Loki Operator did not mount a custom CA bundle to the ruler pods. As a result, during the process to evaluate alerting or recording rules, object storage access failed. With this update, the Loki Operator mounts the custom CA bundle to all ruler pods. The ruler pods can download logs from object storage to evaluate alerting or recording rules. (LOG-4837)
  • Before this update, changing a LogQL query using controls such as time range or severity changed the label matcher operator as though it was defined like a regular expression. With this update, regular expression operators remain unchanged when updating the query. (LOG-4842)
  • Before this update, the Vector collector deployments relied upon the default retry and buffering behavior. As a result, the delivery pipeline backed up trying to deliver every message when the availability of an output was unstable. With this update, the Vector collector deployments limit the number of message retries and drop messages after the threshold has been exceeded. (LOG-4536)
1.3.7.2. CVEs

1.3.8. Logging 5.7.8

This release includes OpenShift Logging Bug Fix Release 5.7.8.

1.3.8.1. Bug fixes
  • Before this update, there was a potential conflict when the same name was used for the outputRefs and inputRefs parameters in the ClusterLogForwarder custom resource (CR). As a result, the collector pods entered in a CrashLoopBackOff status. With this update, the output labels contain the OUTPUT_ prefix to ensure a distinction between output labels and pipeline names. (LOG-4383)
  • Before this update, while configuring the JSON log parser, if you did not set the structuredTypeKey or structuredTypeName parameters for the Cluster Logging Operator, no alert would display about an invalid configuration. With this update, the Cluster Logging Operator informs you about the configuration issue. (LOG-4441)
  • Before this update, if the hecToken key was missing or incorrect in the secret specified for a Splunk output, the validation failed because the Vector forwarded logs to Splunk without a token. With this update, if the hecToken key is missing or incorrect, the validation fails with the A non-empty hecToken entry is required error message. (LOG-4580)
  • Before this update, selecting a date from the Custom time range for logs caused an error in the web console. With this update, you can select a date from the time range model in the web console successfully. (LOG-4684)
1.3.8.2. CVEs

1.3.9. Logging 5.7.7

This release includes OpenShift Logging Bug Fix Release 5.7.7.

1.3.9.1. Bug fixes
  • Before this update, FluentD normalized the logs emitted by the EventRouter differently from Vector. With this update, the Vector produces log records in a consistent format. (LOG-4178)
  • Before this update, there was an error in the query used for the FluentD Buffer Availability graph in the metrics dashboard created by the Cluster Logging Operator as it showed the minimum buffer usage. With this update, the graph shows the maximum buffer usage and is now renamed to FluentD Buffer Usage. (LOG-4555)
  • Before this update, deploying a LokiStack on IPv6-only or dual-stack OpenShift Container Platform clusters caused the LokiStack memberlist registration to fail. As a result, the distributor pods went into a crash loop. With this update, an administrator can enable IPv6 by setting the lokistack.spec.hashRing.memberlist.enableIPv6 value to true, which resolves the issue; a minimal example follows this list. (LOG-4569)
  • Before this update, the log collector relied on the default configuration settings for reading the container log lines. As a result, the log collector did not read the rotated files efficiently. With this update, there is an increase in the number of bytes read, which allows the log collector to efficiently process rotated files. (LOG-4575)
  • Before this update, the unused metrics in the Event Router caused the container to fail due to excessive memory usage. With this update, there is a reduction in the memory usage of the Event Router because the unused metrics have been removed. (LOG-4686)
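
A minimal sketch of the IPv6 setting from LOG-4569 in the list above; only the hashRing stanza is the relevant part, and the resource name, namespace, secret name, and storage class are assumptions.

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki               # assumed name and namespace
      namespace: openshift-logging
    spec:
      size: 1x.small
      hashRing:
        type: memberlist
        memberlist:
          enableIPv6: true             # enables the memberlist hash ring on IPv6-only or dual-stack clusters
      storage:
        secret:
          name: logging-loki-s3        # assumed object storage secret
          type: s3
      storageClassName: gp3-csi        # assumed storage class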
1.3.9.2. CVEs

1.3.10. Logging 5.7.6

This release includes OpenShift Logging Bug Fix Release 5.7.6.

1.3.10.1. Bug fixes
  • Before this update, the collector relied on the default configuration settings for reading the container log lines. As a result, the collector did not read the rotated files efficiently. With this update, there is an increase in the number of bytes read, which allows the collector to efficiently process rotated files. (LOG-4501)
  • Before this update, when users pasted a URL with predefined filters, some filters were not applied. With this update, the UI reflects all the filters in the URL. (LOG-4459)
  • Before this update, forwarding to Loki using custom labels generated an error when switching from Fluentd to Vector. With this update, the Vector configuration sanitizes labels in the same way as Fluentd to ensure the collector starts and correctly processes labels. (LOG-4460)
  • Before this update, the Observability Logs console search field did not accept special characters that it should escape. With this update, special characters in the query are escaped properly. (LOG-4456)
  • Before this update, the following warning message appeared while sending logs to Splunk: Timestamp was not found. With this update, the change overrides the name of the log field used to retrieve the Timestamp and sends it to Splunk without warning. (LOG-4413)
  • Before this update, the CPU and memory usage of Vector was increasing over time. With this update, the Vector configuration now contains the expire_metrics_secs=60 setting to limit the lifetime of the metrics and cap the associated CPU usage and memory footprint. (LOG-4171)
  • Before this update, the LokiStack gateway cached authorized requests very broadly. As a result, this caused wrong authorization results. With this update, LokiStack gateway caches on a more fine-grained basis which resolves this issue. (LOG-4393)
  • Before this update, the Fluentd runtime image included builder tools which were unnecessary at runtime. With this update, the builder tools are removed, resolving the issue. (LOG-4467)
1.3.10.2. CVEs

1.3.11. Logging 5.7.4

This release includes OpenShift Logging Bug Fix Release 5.7.4.

1.3.11.1. Bug fixes
  • Before this update, when forwarding logs to CloudWatch, a namespaceUUID value was not appended to the logGroupName field. With this update, the namespaceUUID value is included, so a logGroupName in CloudWatch appears as logGroupName: vectorcw.b443fb9e-bd4c-4b6a-b9d3-c0097f9ed286. (LOG-2701)
  • Before this update, when forwarding logs over HTTP to an off-cluster destination, the Vector collector was unable to authenticate to the cluster-wide HTTP proxy even though correct credentials were provided in the proxy URL. With this update, the Vector log collector can now authenticate to the cluster-wide HTTP proxy. (LOG-3381)
  • Before this update, the Operator would fail if the Fluentd collector was configured with Splunk as an output, due to this configuration being unsupported. With this update, configuration validation rejects unsupported outputs, resolving the issue. (LOG-4237)
  • Before this update, when the Vector collector was updated, an enabled = true value in the TLS configuration for AWS CloudWatch logs and the GCP Stackdriver caused a configuration error. With this update, the enabled = true value is removed for these outputs, resolving the issue. (LOG-4242)
  • Before this update, the Vector collector occasionally panicked with the following error message in its log: thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9. With this update, the error has been resolved. (LOG-4275)
  • Before this update, an issue in the Loki Operator caused the alert-manager configuration for the application tenant to disappear if the Operator was configured with additional options for that tenant. With this update, the generated Loki configuration now contains both the custom and the auto-generated configuration. (LOG-4361)
  • Before this update, when multiple roles were used to authenticate using STS with AWS Cloudwatch forwarding, a recent update caused the credentials to be non-unique. With this update, multiple combinations of STS roles and static credentials can once again be used to authenticate with AWS Cloudwatch. (LOG-4368)
  • Before this update, Loki filtered label values for active streams but did not remove duplicates, making Grafana’s Label Browser unusable. With this update, Loki filters out duplicate label values for active streams, resolving the issue. (LOG-4389)
  • Pipelines with no name field specified in the ClusterLogForwarder custom resource (CR) stopped working after upgrading to OpenShift Logging 5.7. With this update, the error has been resolved. (LOG-4120)
1.3.11.2. CVEs

1.3.12. Logging 5.7.3

This release includes OpenShift Logging Bug Fix Release 5.7.3.

1.3.12.1. Bug fixes
  • Before this update, when viewing logs within the OpenShift Container Platform web console, cached files caused the data to not refresh. With this update, the bootstrap files are not cached, resolving the issue. (LOG-4100)
  • Before this update, the Loki Operator reset errors in a way that made identifying configuration problems difficult to troubleshoot. With this update, errors persist until the configuration error is resolved. (LOG-4156)
  • Before this update, the LokiStack ruler did not restart after changes were made to the RulerConfig custom resource (CR). With this update, the Loki Operator restarts the ruler pods after the RulerConfig CR is updated. (LOG-4161)
  • Before this update, the vector collector terminated unexpectedly when input match label values contained a / character within the ClusterLogForwarder. This update resolves the issue by quoting the match label, enabling the collector to start and collect logs. (LOG-4176)
  • Before this update, the Loki Operator terminated unexpectedly when a LokiStack CR defined tenant limits, but not global limits. With this update, the Loki Operator can process LokiStack CRs without global limits, resolving the issue. (LOG-4198)
  • Before this update, Fluentd did not send logs to an Elasticsearch cluster when the private key provided was passphrase-protected. With this update, Fluentd properly handles passphrase-protected private keys when establishing a connection with Elasticsearch. (LOG-4258)
  • Before this update, clusters with more than 8,000 namespaces caused Elasticsearch to reject queries because the list of namespaces was larger than the http.max_header_size setting. With this update, the default value for header size has been increased, resolving the issue. (LOG-4277)
  • Before this update, label values containing a / character within the ClusterLogForwarder CR would cause the collector to terminate unexpectedly. With this update, slashes are replaced with underscores, resolving the issue. (LOG-4095)
  • Before this update, the Cluster Logging Operator terminated unexpectedly when set to an unmanaged state. With this update, a check ensures that the ClusterLogging resource is in the correct Management state before the reconciliation of the ClusterLogForwarder CR starts, resolving the issue. (LOG-4177)
  • Before this update, when viewing logs within the OpenShift Container Platform web console, selecting a time range by dragging over the histogram didn’t work on the aggregated logs view inside the pod detail. With this update, the time range can be selected by dragging on the histogram in this view. (LOG-4108)
  • Before this update, when viewing logs within the OpenShift Container Platform web console, queries longer than 30 seconds timed out. With this update, the timeout value can be configured in the configmap/logging-view-plugin. (LOG-3498)
  • Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the more data available option loaded more log entries only the first time it was clicked. With this update, more entries are loaded with each click. (OU-188)
  • Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the streaming option would only display the streaming logs message without showing the actual logs. With this update, both the message and the log stream are displayed correctly. (OU-166)
1.3.12.2. CVEs

1.3.13. Logging 5.7.2

This release includes OpenShift Logging Bug Fix Release 5.7.2.

1.3.13.1. Bug fixes
  • Before this update, it was not possible to delete the openshift-logging namespace directly due to the presence of a pending finalizer. With this update, the finalizer is no longer utilized, enabling direct deletion of the namespace. (LOG-3316)
  • Before this update, the run.sh script would display an incorrect chunk_limit_size value if it was changed according to the OpenShift Container Platform documentation. However, when setting the chunk_limit_size via the environment variable $BUFFER_SIZE_LIMIT, the script would show the correct value. With this update, the run.sh script now consistently displays the correct chunk_limit_size value in both scenarios. (LOG-3330)
  • Before this update, the OpenShift Container Platform web console’s logging view plugin did not allow for custom node placement or tolerations. This update adds the ability to define node placement and tolerations for the logging view plugin. (LOG-3749)
  • Before this update, the Cluster Logging Operator encountered an Unsupported Media Type exception when trying to send logs to DataDog via the Fluentd HTTP Plugin. With this update, users can seamlessly assign the content type for log forwarding by configuring the HTTP header Content-Type. The value provided is automatically assigned to the content_type parameter within the plugin, ensuring successful log transmission. (LOG-3784)
  • Before this update, when the detectMultilineErrors field was set to true in the ClusterLogForwarder custom resource (CR), PHP multi-line errors were recorded as separate log entries, causing the stack trace to be split across multiple messages. With this update, multi-line error detection for PHP is enabled, ensuring that the entire stack trace is included in a single log message. (LOG-3878)
  • Before this update, ClusterLogForwarder pipelines containing a space in their name caused the Vector collector pods to continuously crash. With this update, all spaces, dashes (-), and dots (.) in pipeline names are replaced with underscores (_). (LOG-3945)
  • Before this update, the log_forwarder_output metric did not include the http parameter. This update adds the missing parameter to the metric. (LOG-3997)
  • Before this update, Fluentd did not identify some multi-line JavaScript client exceptions when they ended with a colon. With this update, the Fluentd buffer name is prefixed with an underscore, resolving the issue. (LOG-4019)
  • Before this update, when configuring log forwarding to write to a Kafka output topic which matched a key in the payload, logs were dropped due to an error. With this update, Fluentd’s buffer name has been prefixed with an underscore, resolving the issue. (LOG-4027)
  • Before this update, the LokiStack gateway returned label values for namespaces without applying the access rights of a user. With this update, the LokiStack gateway applies permissions to label value requests, resolving the issue. (LOG-4049)
  • Before this update, the Cluster Logging Operator API required a certificate to be provided by a secret when the tls.insecureSkipVerify option was set to true. With this update, the Cluster Logging Operator API no longer requires a certificate to be provided by a secret in such cases. The following configuration has been added to the Operator’s CR:

    tls.verify_certificate = false
    tls.verify_hostname = false

    (LOG-3445)

  • Before this update, the LokiStack route configuration caused queries running longer than 30 seconds to timeout. With this update, the LokiStack global and per-tenant queryTimeout settings affect the route timeout settings, resolving the issue. (LOG-4052)
  • Before this update, a prior fix to remove defaulting of the collection.type resulted in the Operator no longer honoring the deprecated specs for resource, node selections, and tolerations. This update modifies the Operator behavior to always prefer the collection.logs spec over those of collection. This varies from previous behavior that allowed using both the preferred fields and deprecated fields but would ignore the deprecated fields when collection.type was populated. (LOG-4185)
  • Before this update, the Vector log collector did not generate TLS configuration for forwarding logs to multiple Kafka brokers if the broker URLs were not specified in the output. With this update, TLS configuration is generated appropriately for multiple brokers. (LOG-4163)
  • Before this update, the option to enable passphrase for log forwarding to Kafka was unavailable. This limitation presented a security risk as it could potentially expose sensitive information. With this update, users now have a seamless option to enable passphrase for log forwarding to Kafka. (LOG-3314)
  • Before this update, Vector log collector did not honor the tlsSecurityProfile settings for outgoing TLS connections. After this update, Vector handles TLS connection settings appropriately. (LOG-4011)
  • Before this update, not all available output types were included in the log_forwarder_output_info metrics. With this update, metrics contain Splunk and Google Cloud Logging data which was missing previously. (LOG-4098)
  • Before this update, when follow_inodes was set to true, the Fluentd collector could crash on file rotation. With this update, the follow_inodes setting does not crash the collector. (LOG-4151)
  • Before this update, the Fluentd collector could incorrectly close files that should be watched because of how those files were tracked. With this update, the tracking parameters have been corrected. (LOG-4149)
  • Before this update, forwarding logs with the Vector collector and naming a pipeline in the ClusterLogForwarder instance audit, application or infrastructure resulted in collector pods staying in the CrashLoopBackOff state with the following error in the collector log:

    ERROR vector::cli: Configuration error. error=redefinition of table transforms.audit for key transforms.audit

    After this update, pipeline names no longer clash with reserved input names, and pipelines can be named audit, application or infrastructure. (LOG-4218)

  • Before this update, when forwarding logs to a syslog destination with the Vector collector and setting the addLogSource flag to true, the following extra empty fields were added to the forwarded messages: namespace_name=, container_name=, and pod_name=. With this update, these fields are no longer added to journal logs. (LOG-4219)
  • Before this update, when a structuredTypeKey was not found, and a structuredTypeName was not specified, log messages were still parsed into a structured object. With this update, parsing of logs is as expected. (LOG-4220)
1.3.13.2. CVEs

1.3.14. Logging 5.7.1

This release includes OpenShift Logging Bug Fix Release 5.7.1.

1.3.14.1. Bug fixes
  • Before this update, the presence of numerous noisy messages within the Cluster Logging Operator pod logs caused reduced log readability, and increased difficulty in identifying important system events. With this update, the issue is resolved by significantly reducing the noisy messages within Cluster Logging Operator pod logs. (LOG-3482)
  • Before this update, the API server would reset the value for the CollectorSpec.Type field to vector, even when the custom resource used a different value. This update removes the default for the CollectorSpec.Type field to restore the previous behavior. (LOG-4086)
  • Before this update, a time range could not be selected in the OpenShift Container Platform web console by clicking and dragging over the logs histogram. With this update, clicking and dragging can be used to successfully select a time range. (LOG-4501)
  • Before this update, clicking on the Show Resources link in the OpenShift Container Platform web console did not produce any effect. With this update, the issue is resolved by fixing the functionality of the "Show Resources" link to toggle the display of resources for each log entry. (LOG-3218)
1.3.14.2. CVEs

1.3.15. Logging 5.7.0

This release includes OpenShift Logging Bug Fix Release 5.7.0.

1.3.15.1. Enhancements

With this update, you can enable logging to detect multi-line exceptions and reassemble them into a single log entry.

To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field, with a value of true.
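
The following is a minimal sketch of a ClusterLogForwarder CR with this field set. The pipeline name, inputs, and outputs are illustrative assumptions; only the detectMultilineErrors field comes from the enhancement described above.

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      pipelines:
        - name: my-app-logs              # assumed pipeline name
          inputRefs:
            - application
          outputRefs:
            - default
          detectMultilineErrors: true    # reassemble multi-line exceptions into a single entry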

1.3.15.2. Known Issues

None.

1.3.15.3. Bug fixes
  • Before this update, the nodeSelector attribute for the Gateway component of the LokiStack did not impact node scheduling. With this update, the nodeSelector attribute works as expected. (LOG-3713)
1.3.15.4. CVEs

1.4. Logging 5.6

Note

Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

Note

The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y, where x.y represents the major and minor version of logging you have installed. For example, stable-5.7.

1.4.1. Logging 5.6.23

This release includes OpenShift Logging Bug Fix Release 5.6.23.

1.4.1.1. Bug fixes

None.

1.4.1.2. CVEs

1.4.2. Logging 5.6.22

This release includes OpenShift Logging Bug Fix 5.6.22

1.4.2.1. Bug fixes
  • Before this update, the Loki Operator overwrote user annotations on the LokiStack Route resource, causing customizations to drop. With this update, the Loki Operator no longer overwrites Route annotations, fixing the issue. (LOG-5947)
1.4.2.2. CVEs

1.4.3. Logging 5.6.21

This release includes OpenShift Logging Bug Fix 5.6.21

1.4.3.1. Bug fixes
  • Before this update, LokiStack was missing a route for the Volume API, which caused the following error: 404 not found. With this update, LokiStack exposes the Volume API, resolving the issue. (LOG-5751)
1.4.3.2. CVEs

1.4.4. Logging 5.6.20

This release includes OpenShift Logging Bug Fix 5.6.20

1.4.4.1. Bug fixes
  • Before this update, there was a delay in restarting Ingesters when configuring LokiStack, because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. (LOG-5617)
1.4.4.2. CVEs

1.4.5. Logging 5.6.19

This release includes OpenShift Logging Bug Fix 5.6.19

1.4.5.1. Bug fixes
  • Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. (LOG-5529)
1.4.5.2. CVEs

1.4.6. Logging 5.6.18

This release includes OpenShift Logging Bug Fix 5.6.18

1.4.6.1. Enhancements
  • Before this update, the Loki Operator set up Loki to use path-style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-hosted style access without users needing to change their configuration. (LOG-5404)
  • Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. (LOG-5396)
1.4.6.2. Bug fixes
  • Before this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and certificate authority (CA) files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring specification on the ServiceMonitor configuration. With this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring specifications in the Prometheus Operator to handle the Elasticsearch Operator ServiceMonitor successfully, enabling Prometheus to scrape the Elasticsearch Operator metrics. (LOG-5244)
  • Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack. (LOG-5400)
1.4.6.3. CVEs

1.4.7. Logging 5.6.17

This release includes OpenShift Logging Bug Fix 5.6.17

1.4.7.1. Bug fixes
  • Before this update, the Red Hat build pipeline did not use the existing build details in Loki builds and omitted information such as revision, branch, and version. With this update, the Red Hat build pipeline now adds these details to the Loki builds, fixing the issue. (LOG-5203)
  • Before this update, the configuration of the ServiceMonitor by Loki Operator could match many Kubernetes services, which led to Loki Operator’s metrics being collected multiple times. With this update, the ServiceMonitor setup now only matches the dedicated metrics service. (LOG-5252)
  • Before this update, the build pipeline did not include linker flags for the build date, causing Loki builds to show empty strings for buildDate and goVersion. With this update, adding the missing linker flags in the build pipeline fixes the issue. (LOG-5276)
  • Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. (LOG-5242)
1.4.7.2. CVEs

1.4.8. Logging 5.6.16

This release includes Logging Bug Fix 5.6.16

1.4.8.1. Bug fixes
  • Before this update, when configured to read a custom S3 Certificate Authority, the Loki Operator would not automatically update the configuration when the name of the ConfigMap or its contents changed. With this update, the Loki Operator watches for changes to the ConfigMap and automatically updates the generated configuration. (LOG-4967)
1.4.8.2. CVEs

1.4.9. Logging 5.6.15

This release includes OpenShift Logging Bug Fix Release 5.6.15.

1.4.9.1. Bug fixes

  • Before this update, the LokiStack ruler pods would not format the IPv6 pod IP in HTTP URLs used for cross-pod communication, causing querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the issue. (LOG-4892)

1.4.9.2. CVEs

1.4.10. Logging 5.6.14

This release includes OpenShift Logging Bug Fix Release 5.6.14.

1.4.10.1. Bug fixes
  • Before this update, during the process of creating index patterns, the default alias was missing from the initial index in each log output. As a result, Kibana users were unable to create index patterns by using OpenShift Elasticsearch Operator. This update adds the missing aliases to OpenShift Elasticsearch Operator, resolving the issue. Kibana users can now create index patterns that include the {app,infra,audit}-000001 indexes. (LOG-4807)
  • Before this update, the Loki Operator did not mount a custom CA bundle to the ruler pods. As a result, during the process to evaluate alerting or recording rules, object storage access failed. With this update, the Loki Operator mounts the custom CA bundle to all ruler pods. The ruler pods can download logs from object storage to evaluate alerting or recording rules. (LOG-4838)
1.4.10.2. CVEs

1.4.11. Logging 5.6.13

This release includes OpenShift Logging Bug Fix Release 5.6.13.

1.4.11.1. Bug fixes

None.

1.4.11.2. CVEs

1.4.12. Logging 5.6.12

This release includes OpenShift Logging Bug Fix Release 5.6.12.

1.4.12.1. Bug fixes
  • Before this update, deploying a LokiStack on IPv6-only or dual-stack OpenShift Container Platform clusters caused the LokiStack memberlist registration to fail. As a result, the distributor pods went into a crash loop. With this update, an administrator can enable IPv6 by setting the lokistack.spec.hashRing.memberlist.enableIPv6 value to true, which resolves the issue. Currently, the log alert is not available on an IPv6-enabled cluster. (LOG-4570)
  • Before this update, the query used for the FluentD Buffer Availability graph in the metrics dashboard created by the Cluster Logging Operator was incorrect and showed the minimum buffer usage. With this update, the graph shows the maximum buffer usage and is renamed to FluentD Buffer Usage. (LOG-4579)
  • Before this update, the unused metrics in the Event Router caused the container to fail due to excessive memory usage. With this update, the memory usage of the Event Router is reduced by removing the unused metrics. (LOG-4687)
1.4.12.2. CVEs

1.4.13. Logging 5.6.11

This release includes OpenShift Logging Bug Fix Release 5.6.11.

1.4.13.1. Bug fixes
  • Before this update, the LokiStack gateway cached authorized requests very broadly, which caused incorrect authorization results. With this update, the LokiStack gateway caches authorization results on a more fine-grained basis, resolving the issue. (LOG-4435)
1.4.13.2. CVEs

1.4.14. Logging 5.6.9

This release includes OpenShift Logging Bug Fix Release 5.6.9.

1.4.14.1. Bug fixes
  • Before this update, when multiple roles were used to authenticate using STS with AWS Cloudwatch forwarding, a recent update caused the credentials to be non-unique. With this update, multiple combinations of STS roles and static credentials can once again be used to authenticate with AWS Cloudwatch. (LOG-4084)
  • Before this update, the Vector collector occasionally panicked with the following error message in its log: thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9. With this update, the error has been resolved. (LOG-4276)
  • Before this update, Loki filtered label values for active streams but did not remove duplicates, making Grafana’s Label Browser unusable. With this update, Loki filters out duplicate label values for active streams, resolving the issue. (LOG-4390)
1.4.14.2. CVEs

1.4.15. Logging 5.6.8

This release includes OpenShift Logging Bug Fix Release 5.6.8.

1.4.15.1. Bug fixes
  • Before this update, the vector collector terminated unexpectedly when input match label values contained a / character within the ClusterLogForwarder. This update resolves the issue by quoting the match label, enabling the collector to start and collect logs. (LOG-4091)
  • Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the more data available option loaded more log entries only the first time it was clicked. With this update, more entries are loaded with each click. (OU-187)
  • Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the streaming option would only display the streaming logs message without showing the actual logs. With this update, both the message and the log stream are displayed correctly. (OU-189)
  • Before this update, the Loki Operator reset errors in a way that made identifying configuration problems difficult to troubleshoot. With this update, errors persist until the configuration error is resolved. (LOG-4158)
  • Before this update, clusters with more than 8,000 namespaces caused Elasticsearch to reject queries because the list of namespaces was larger than the http.max_header_size setting. With this update, the default value for header size has been increased, resolving the issue. (LOG-4278)
1.4.15.2. CVEs

1.4.16. Logging 5.6.5

This release includes OpenShift Logging Bug Fix Release 5.6.5.

1.4.16.1. Bug fixes
  • Before this update, the template definitions prevented Elasticsearch from indexing some labels and namespace_labels, causing issues with data ingestion. With this update, the fix replaces dots and slashes in labels to ensure proper ingestion, effectively resolving the issue. (LOG-3419)
  • Before this update, if the Logs page of the OpenShift Web Console failed to connect to the LokiStack, a generic error message was displayed, providing no additional context or troubleshooting suggestions. With this update, the error message has been enhanced to include more specific details and recommendations for troubleshooting. (LOG-3750)
  • Before this update, time range formats were not validated, leading to errors selecting a custom date range. With this update, time formats are now validated, enabling users to select a valid range. If an invalid time range format is selected, an error message is displayed to the user. (LOG-3583)
  • Before this update, when searching logs in Loki, even if the length of an expression did not exceed 5120 characters, the query would fail in many cases. With this update, query authorization label matchers have been optimized, resolving the issue. (LOG-3480)
  • Before this update, the Loki Operator failed to produce a memberlist configuration that was sufficient for locating all the components when using a memberlist for private IPs. With this update, the fix ensures that the generated configuration includes the advertised port, allowing for successful lookup of all components. (LOG-4008)
1.4.16.2. CVEs

1.4.17. Logging 5.6.4

This release includes OpenShift Logging Bug Fix Release 5.6.4.

1.4.17.1. Bug fixes
  • Before this update, when LokiStack was deployed as the log store, the logs generated by Loki pods were collected and sent to LokiStack. With this update, the logs generated by Loki are excluded from collection and will not be stored. (LOG-3280)
  • Before this update, when the query editor on the Logs page of the OpenShift Web Console was empty, the drop-down menus did not populate. With this update, if an empty query is attempted, an error message is displayed and the drop-down menus now populate as expected. (LOG-3454)
  • Before this update, when the tls.insecureSkipVerify option was set to true, the Cluster Logging Operator would generate incorrect configuration. As a result, the operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Cluster Logging Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. (LOG-3475)
  • Before this update, when structured parsing was enabled and messages were forwarded to multiple destinations, they were not deep copied. This resulted in some of the received logs including the structured message, while others did not. With this update, the configuration generation has been modified to deep copy messages before JSON parsing. As a result, all received messages now have structured messages included, even when they are forwarded to multiple destinations. (LOG-3640)
  • Before this update, if the collection field contained {}, it could result in the Operator crashing. With this update, the Operator ignores this value, allowing it to continue running smoothly without interruption. (LOG-3733)
  • Before this update, the nodeSelector attribute for the Gateway component of LokiStack did not have any effect. With this update, the nodeSelector attribute functions as expected. (LOG-3783)
  • Before this update, the static LokiStack memberlist configuration relied solely on private IP networks. As a result, when the OpenShift Container Platform cluster pod network was configured with a public IP range, the LokiStack pods would crashloop. With this update, the LokiStack administrator now has the option to use the pod network for the memberlist configuration. This resolves the issue and prevents the LokiStack pods from entering a crashloop state when the OpenShift Container Platform cluster pod network is configured with a public IP range. (LOG-3814)
  • Before this update, if the tls.insecureSkipVerify field was set to true, the Cluster Logging Operator would generate an incorrect configuration. As a result, the Operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. (LOG-3838)
  • Before this update, if the Cluster Logging Operator (CLO) was installed without the Elasticsearch Operator, the CLO pod would continuously display an error message related to the deletion of Elasticsearch. With this update, the CLO now performs additional checks before displaying any error messages. As a result, error messages related to Elasticsearch deletion are no longer displayed in the absence of the Elasticsearch Operator. (LOG-3763)
1.4.17.2. CVEs

1.4.18. Logging 5.6.3

This release includes OpenShift Logging Bug Fix Release 5.6.3.

1.4.18.1. Bug fixes
  • Before this update, the operator stored gateway tenant secret information in a config map. With this update, the operator stores this information in a secret. (LOG-3717)
  • Before this update, the Fluentd collector did not capture OAuth login events stored in /var/log/auth-server/audit.log. With this update, Fluentd captures these OAuth login events, resolving the issue. (LOG-3729)
1.4.18.2. CVEs

1.4.19. Logging 5.6.2

This release includes OpenShift Logging Bug Fix Release 5.6.2.

1.4.19.1. Bug fixes
  • Before this update, the collector did not set level fields correctly based on priority for systemd logs. With this update, level fields are set correctly. (LOG-3429)
  • Before this update, the Operator incorrectly generated incompatibility warnings on OpenShift Container Platform 4.12 or later. With this update, the Operator max OpenShift Container Platform version value has been corrected, resolving the issue. (LOG-3584)
  • Before this update, creating a ClusterLogForwarder custom resource (CR) with an output value of default did not generate any errors. With this update, an error warning that this value is invalid is generated appropriately. (LOG-3437)
  • Before this update, when the ClusterLogForwarder custom resource (CR) had multiple pipelines configured with one output set as default, the collector pods restarted. With this update, the logic for output validation has been corrected, resolving the issue. (LOG-3559)
  • Before this update, collector pods restarted after being created. With this update, the deployed collector does not restart on its own. (LOG-3608)
  • Before this update, patch releases removed previous versions of the Operators from the catalog. This made installing the old versions impossible. This update changes bundle configurations so that previous releases of the same minor version stay in the catalog. (LOG-3635)
1.4.19.2. CVEs

1.4.20. Logging 5.6.1

This release includes OpenShift Logging Bug Fix Release 5.6.1.

1.4.20.1. Bug fixes
  • Before this update, the compactor would report TLS certificate errors from communications with the querier when retention was active. With this update, the compactor and querier no longer communicate erroneously over HTTP. (LOG-3494)
  • Before this update, the Loki Operator would not retry setting the status of the LokiStack CR, which caused stale status information. With this update, the Operator retries status information updates on conflict. (LOG-3496)
  • Before this update, the Loki Operator Webhook server caused TLS errors when the kube-apiserver-operator Operator checked the webhook validity. With this update, the Loki Operator Webhook PKI is managed by the Operator Lifecycle Manager (OLM), resolving the issue. (LOG-3510)
  • Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expressions, resolving the issue. (LOG-3441), (LOG-3397)
  • Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. (LOG-3463)
  • Before this update, the Red Hat OpenShift Logging Operator was not available for OpenShift Container Platform 4.10 clusters because of an incompatibility between OpenShift Container Platform console and the logging-view-plugin. With this update, the plugin is properly integrated with the OpenShift Container Platform 4.10 admin console. (LOG-3447)
  • Before this update, the reconciliation of the ClusterLogForwarder custom resource would incorrectly report a degraded status for pipelines that reference the default log store. With this update, the pipeline validates properly. (LOG-3477)
1.4.20.2. CVEs

1.4.21. Logging 5.6.0

This release includes OpenShift Logging Release 5.6.

1.4.21.1. Deprecation notice

In logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.

1.4.21.2. Enhancements
  • With this update, Logging is compliant with OpenShift Container Platform cluster-wide cryptographic policies. (LOG-895)
  • With this update, you can declare per-tenant, per-stream, and global retention policies through the LokiStack custom resource, ordered by priority; see the sketch after this list. (LOG-2695)
  • With this update, Splunk is an available output option for log forwarding. (LOG-2913)
  • With this update, Vector replaces Fluentd as the default Collector. (LOG-2222)
  • With this update, the Developer role can access the per-project workload logs they are assigned to within the Log Console Plugin on clusters running OpenShift Container Platform 4.11 and higher. (LOG-3388)
  • With this update, logs from any source contain a field openshift.cluster_id, the unique identifier of the cluster in which the Operator is deployed. You can view the clusterID value by using the following command:

    $ oc get clusterversion/version -o jsonpath='{.spec.clusterID}{"\n"}'

    (LOG-2715)
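
The following is a minimal sketch of LokiStack retention settings of the kind described in LOG-2695. The resource name, day values, and stream selector are illustrative assumptions, not values taken from this document.

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki                  # assumed example name
      namespace: openshift-logging
    spec:
      limits:
        global:
          retention:
            days: 20                      # assumed global retention period
            streams:
              - days: 4                   # assumed per-stream retention
                priority: 1
                selector: '{kubernetes_namespace_name=~"test.+"}'
        tenants:
          application:
            retention:
              days: 7                     # assumed per-tenant retention for the application tenant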

1.4.21.3. Known Issues
  • Elasticsearch rejects logs if multiple label keys have the same prefix and some keys include the . character. A fix replaces . in the label keys with _ to address this Elasticsearch limitation. As a workaround for this issue, remove the labels that cause errors, or add a namespace to the label. (LOG-3463)
1.4.21.4. Bug fixes
  • Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. (LOG-2993)
  • Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. (LOG-3072)
  • Before this update, the Operator removed any custom outputs defined in the ClusterLogForwarder custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the ClusterLogForwarder custom resource. (LOG-3090)
  • Before this update, the CA key was used as the volume name for mounting the CA into Loki, causing error states when the CA Key included non-conforming characters, such as dots. With this update, the volume name is standardized to an internal string which resolves the issue. (LOG-3331)
  • Before this update, a default value set within the LokiStack Custom Resource Definition caused an inability to create a LokiStack instance without a ReplicationFactor of 1. With this update, the Operator sets the actual value for the size used. (LOG-3296)
  • Before this update, Vector parsed the message field when JSON parsing was enabled without also defining structuredTypeKey or structuredTypeName values. With this update, a value is required for either structuredTypeKey or structuredTypeName when writing structured logs to Elasticsearch. (LOG-3195)
  • Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled. (LOG-3161)
  • Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. (LOG-3157)
  • Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h. With this update, Kibana’s OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout, with a default value of 24h. (LOG-3129)
  • Before this update, the Operator’s general pattern for reconciling resources was to try to create before attempting to get or update, which would lead to constant HTTP 409 responses after creation. With this update, Operators first attempt to retrieve an object and only create or update it if it is either missing or not as specified. (LOG-2919)
  • Before this update, the .level and .structure.level fields in Fluentd could contain different values. With this update, the values are the same for each field. (LOG-2819)
  • Before this update, the Operator did not wait for the population of the trusted CA bundle and deployed the collector a second time once the bundle updated. With this update, the Operator waits briefly to see if the bundle has been populated before it continues the collector deployment. (LOG-2789)
  • Before this update, logging telemetry info appeared twice when reviewing metrics. With this update, logging telemetry info displays as expected. (LOG-2315)
  • Before this update, Fluentd pod logs contained a warning message after enabling the JSON parsing addition. With this update, that warning message does not appear. (LOG-1806)
  • Before this update, the must-gather script did not complete because oc needs a folder with write permission to build its cache. With this update, oc has write permissions to a folder, and the must-gather script completes successfully. (LOG-3446)
  • Before this update, the log collector SCC could be superseded by other SCCs on the cluster, rendering the collector unusable. This update sets the priority of the log collector SCC so that it takes precedence over the others. (LOG-3235)
  • Before this update, Vector was missing the field sequence, which was added to Fluentd as a way to deal with a lack of actual nanoseconds precision. With this update, the field openshift.sequence has been added to the event logs. (LOG-3106)
1.4.21.5. CVEs

Chapter 2. Support

Only the configuration options described in this documentation are supported for logging.

Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences.

Note

If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed.

Note

Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems.

Logging is not:

  • A high scale log collection system
  • Security Information and Event Monitoring (SIEM) compliant
  • Historical or long term log retention or storage
  • A guaranteed log sink
  • Secure storage - audit logs are not stored by default

2.1. Supported API custom resource definitions

LokiStack development is ongoing. Not all APIs are currently supported.

Table 2.1. Loki API support states
CustomResourceDefinition (CRD)     ApiVersion                           Support state

LokiStack                          lokistack.loki.grafana.com/v1        Supported in 5.5
RulerConfig                        rulerconfig.loki.grafana.com/v1      Supported in 5.7
AlertingRule                       alertingrule.loki.grafana.com/v1     Supported in 5.7
RecordingRule                      recordingrule.loki.grafana.com/v1    Supported in 5.7

2.2. Unsupported configurations

You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components:

  • The Elasticsearch custom resource (CR)
  • The Kibana deployment
  • The fluent.conf file
  • The Fluentd daemon set

You must set the OpenShift Elasticsearch Operator to the Unmanaged state to modify the Elasticsearch deployment files.
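
As a minimal sketch, assuming the standard instance name and namespace for the ClusterLogging custom resource, setting the Red Hat OpenShift Logging Operator to the Unmanaged state means changing the managementState field:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      managementState: Unmanaged   # the Operator stops reconciling its managed resources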

Explicitly unsupported cases include:

  • Configuring default log rotation. You cannot modify the default log rotation configuration.
  • Configuring the collected log location. You cannot change the location of the log collector output file, which by default is /var/log/fluentd/fluentd.log.
  • Throttling log collection. You cannot throttle down the rate at which the logs are read in by the log collector.
  • Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector.
  • Configuring how the log collector normalizes logs. You cannot modify default log normalization.

2.3. Support policy for unmanaged Operators

The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates.

While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades.

An Operator can be set to an unmanaged state using the following methods:

  • Individual Operator configuration

    Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource.

    Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery.

    Warning

    Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed.

  • Cluster Version Operator (CVO) overrides

    The spec.overrides parameter can be added to the CVO’s configuration to allow administrators to provide a list of overrides to the CVO’s behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set:

    Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.
    Warning

    Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed.

2.4. Collecting logging data for Red Hat Support

When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.

You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components.

For prompt support, supply diagnostic information for both OpenShift Container Platform and logging.

Note

Do not use the hack/logging-dump.sh script. The script is no longer supported and does not collect data.

2.4.1. About the must-gather tool

The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues.

For your logging, must-gather collects the following information:

  • Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level
  • Cluster-level resources, including nodes, roles, and role bindings at the cluster level
  • OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer

When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory.

2.4.2. Collecting logging data

You can use the oc adm must-gather CLI command to collect information about logging.

Procedure

To collect logging information with must-gather:

  1. Navigate to the directory where you want to store the must-gather information.
  2. Run the oc adm must-gather command against the logging image:

    $ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')

    The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408.

  3. Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408
  4. Attach the compressed file to your support case on the Red Hat Customer Portal.

Chapter 3. Troubleshooting logging

3.1. Viewing Logging status

You can view the status of the Red Hat OpenShift Logging Operator and other logging components.

3.1.1. Viewing the status of the Red Hat OpenShift Logging Operator

You can view the status of the Red Hat OpenShift Logging Operator.

Prerequisites

  • The Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator are installed.

Procedure

  1. Change to the openshift-logging project by running the following command:

    $ oc project openshift-logging
  2. Get the ClusterLogging instance status by running the following command:

    $ oc get clusterlogging instance -o yaml

    Example output

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    # ...
    status:  1
      collection:
        logs:
          fluentdStatus:
            daemonSet: fluentd  2
            nodes:
              collector-2rhqp: ip-10-0-169-13.ec2.internal
              collector-6fgjh: ip-10-0-165-244.ec2.internal
              collector-6l2ff: ip-10-0-128-218.ec2.internal
              collector-54nx5: ip-10-0-139-30.ec2.internal
              collector-flpnn: ip-10-0-147-228.ec2.internal
              collector-n2frh: ip-10-0-157-45.ec2.internal
            pods:
              failed: []
              notReady: []
              ready:
              - collector-2rhqp
              - collector-54nx5
              - collector-6fgjh
              - collector-6l2ff
              - collector-flpnn
              - collector-n2frh
      logstore: 3
        elasticsearchStatus:
        - ShardAllocationEnabled:  all
          cluster:
            activePrimaryShards:    5
            activeShards:           5
            initializingShards:     0
            numDataNodes:           1
            numNodes:               1
            pendingTasks:           0
            relocatingShards:       0
            status:                 green
            unassignedShards:       0
          clusterName:             elasticsearch
          nodeConditions:
            elasticsearch-cdm-mkkdys93-1:
          nodeCount:  1
          pods:
            client:
              failed:
              notReady:
              ready:
              - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
            data:
              failed:
              notReady:
              ready:
              - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
            master:
              failed:
              notReady:
              ready:
              - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
    visualization:  4
        kibanaStatus:
        - deployment: kibana
          pods:
            failed: []
            notReady: []
            ready:
            - kibana-7fb4fd4cc9-f2nls
          replicaSets:
          - kibana-7fb4fd4cc9
          replicas: 1

    1
    In the output, the cluster status fields appear in the status stanza.
    2
    Information on the Fluentd pods.
    3
    Information on the Elasticsearch pods, including Elasticsearch cluster health, green, yellow, or red.
    4
    Information on the Kibana pods.
3.1.1.1. Example condition messages

The following are examples of some condition messages from the Status.Nodes section of the ClusterLogging instance.

A status message similar to the following indicates a node has exceeded the configured low watermark and no shard will be allocated to this node:

Example output

  nodes:
  - conditions:
    - lastTransitionTime: 2019-03-15T15:57:22Z
      message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not
        be allocated on this node.
      reason: Disk Watermark Low
      status: "True"
      type: NodeStorage
    deploymentName: example-elasticsearch-clientdatamaster-0-1
    upgradeStatus: {}

A status message similar to the following indicates a node has exceeded the configured high watermark and shards will be relocated to other nodes:

Example output

  nodes:
  - conditions:
    - lastTransitionTime: 2019-03-15T16:04:45Z
      message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated
        from this node.
      reason: Disk Watermark High
      status: "True"
      type: NodeStorage
    deploymentName: cluster-logging-operator
    upgradeStatus: {}

A status message similar to the following indicates the Elasticsearch node selector in the CR does not match any nodes in the cluster:

Example output

    Elasticsearch Status:
      Shard Allocation Enabled:  shard allocation unknown
      Cluster:
        Active Primary Shards:  0
        Active Shards:          0
        Initializing Shards:    0
        Num Data Nodes:         0
        Num Nodes:              0
        Pending Tasks:          0
        Relocating Shards:      0
        Status:                 cluster health unknown
        Unassigned Shards:      0
      Cluster Name:             elasticsearch
      Node Conditions:
        elasticsearch-cdm-mkkdys93-1:
          Last Transition Time:  2019-06-26T03:37:32Z
          Message:               0/5 nodes are available: 5 node(s) didn't match node selector.
          Reason:                Unschedulable
          Status:                True
          Type:                  Unschedulable
        elasticsearch-cdm-mkkdys93-2:
      Node Count:  2
      Pods:
        Client:
          Failed:
          Not Ready:
            elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
            elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
          Ready:
        Data:
          Failed:
          Not Ready:
            elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
            elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
          Ready:
        Master:
          Failed:
          Not Ready:
            elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
            elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
          Ready:

A status message similar to the following indicates that the requested PVC could not bind to PV:

Example output

      Node Conditions:
        elasticsearch-cdm-mkkdys93-1:
          Last Transition Time:  2019-06-26T03:37:32Z
          Message:               pod has unbound immediate PersistentVolumeClaims (repeated 5 times)
          Reason:                Unschedulable
          Status:                True
          Type:                  Unschedulable

A status message similar to the following indicates that the Fluentd pods cannot be scheduled because the node selector did not match any nodes:

Example output

Status:
  Collection:
    Logs:
      Fluentd Status:
        Daemon Set:  fluentd
        Nodes:
        Pods:
          Failed:
          Not Ready:
          Ready:

3.1.2. Viewing the status of logging components

You can view the status for a number of logging components.

Prerequisites

  • The Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator are installed.

Procedure

  1. Change to the openshift-logging project.

    $ oc project openshift-logging
  2. View the status of logging environment:

    $ oc describe deployment cluster-logging-operator

    Example output

    Name:                   cluster-logging-operator
    
    ....
    
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
      Progressing    True    NewReplicaSetAvailable
    
    ....
    
    Events:
      Type    Reason             Age   From                   Message
      ----    ------             ----  ----                   -------
      Normal  ScalingReplicaSet  62m   deployment-controller  Scaled up replica set cluster-logging-operator-574b8987df to 1

  3. View the status of the logging replica set:

    1. Get the name of a replica set:

      $ oc get replicaset

      Example output

      NAME                                      DESIRED   CURRENT   READY   AGE
      cluster-logging-operator-574b8987df       1         1         1       159m
      elasticsearch-cdm-uhr537yu-1-6869694fb    1         1         1       157m
      elasticsearch-cdm-uhr537yu-2-857b6d676f   1         1         1       156m
      elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd   1         1         1       155m
      kibana-5bd5544f87                         1         1         1       157m

    2. Get the status of the replica set:

      $ oc describe replicaset cluster-logging-operator-574b8987df

      Example output

      Name:           cluster-logging-operator-574b8987df
      
      ....
      
      Replicas:       1 current / 1 desired
      Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
      
      ....
      
      Events:
        Type    Reason            Age   From                   Message
        ----    ------            ----  ----                   -------
        Normal  SuccessfulCreate  66m   replicaset-controller  Created pod: cluster-logging-operator-574b8987df-qjhqv
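
You can also list the daemon sets in the project to check the collector; the daemon set name depends on the collector implementation that is deployed:

$ oc get daemonset -n openshift-logging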

3.2. Troubleshooting log forwarding

3.2.1. Redeploying Fluentd pods

When you create a ClusterLogForwarder custom resource (CR), if the Red Hat OpenShift Logging Operator does not redeploy the Fluentd pods automatically, you can delete the Fluentd pods to force them to redeploy.

Prerequisites

  • You have created a ClusterLogForwarder custom resource (CR) object.

Procedure

  • Delete the Fluentd pods to force them to redeploy by running the following command:

    $ oc delete pod --selector logging-infra=collector
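
Optionally, you can confirm that new collector pods have been scheduled by listing pods with the same selector:

$ oc -n openshift-logging get pods --selector logging-infra=collector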

3.2.2. Troubleshooting Loki rate limit errors

If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors.

These errors can occur during normal operation. For example, when you add logging to a cluster that already has some logs, rate limit errors might occur while logging tries to ingest all of the existing log entries. In this case, if the rate at which new logs are added is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention.

In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR).

Important

The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers.

Conditions

  • The Log Forwarder API is configured to forward logs to Loki.
  • Your system sends a block of messages that is larger than 2 MB to Loki. For example:

    "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\
    .......
    ......
    ......
    ......
    \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]}
  • After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages:

    429 Too Many Requests Ingestion rate limit exceeded

    Example Vector error message

    2023-08-25T16:08:49.301780Z  WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true

    Example Fluentd error message

    2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n"

    The error is also visible on the receiving end. For example, in the LokiStack ingester pod:

    Example Loki ingester error message

    level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream

Procedure

  • Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR:

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki
      namespace: openshift-logging
    spec:
      limits:
        global:
          ingestion:
            ingestionBurstSize: 16 1
            ingestionRate: 8 2
    # ...
    1
    The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum log size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted.
    2
    The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention.
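
As an alternative to editing the LokiStack CR directly, you can apply the same example values with a merge patch. The following sketch uses the logging-loki instance from the example above; adjust the values to suit your environment:

$ oc -n openshift-logging patch lokistack logging-loki --type merge \
  -p '{"spec":{"limits":{"global":{"ingestion":{"ingestionBurstSize":16,"ingestionRate":8}}}}}'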

3.3. Troubleshooting logging alerts

You can use the following procedures to troubleshoot logging alerts on your cluster.

3.3.1. Elasticsearch cluster health status is red

At least one primary shard and its replicas are not allocated to a node. Use the following procedure to troubleshoot this alert.

Tip

Some commands in this documentation reference an Elasticsearch pod by using a $ES_POD_NAME shell variable. If you want to copy and paste the commands directly from this documentation, you must set this variable to a value that is valid for your Elasticsearch cluster.

You can list the available Elasticsearch pods by running the following command:

$ oc -n openshift-logging get pods -l component=elasticsearch

Choose one of the pods listed and set the $ES_POD_NAME variable, by running the following command:

$ export ES_POD_NAME=<elasticsearch_pod_name>

You can now use the $ES_POD_NAME variable in commands.

Procedure

  1. Check the Elasticsearch cluster health and verify that the cluster status is red by running the following command:

    $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME -- health
  2. List the nodes that have joined the cluster by running the following command:

    $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
      -- es_util --query=_cat/nodes?v
  3. List the Elasticsearch pods and compare them with the nodes in the command output from the previous step, by running the following command:

    $ oc -n openshift-logging get pods -l component=elasticsearch
  4. If some of the Elasticsearch nodes have not joined the cluster, perform the following steps.

    1. Confirm that Elasticsearch has an elected master node by running the following command and observing the output:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
        -- es_util --query=_cat/master?v
    2. Review the pod logs of the elected master node for issues by running the following command and observing the output:

      $ oc logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging
    3. Review the logs of nodes that have not joined the cluster for issues by running the following command and observing the output:

      $ oc logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging
  5. If all the nodes have joined the cluster, check if the cluster is in the process of recovering by running the following command and observing the output:

    $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
      -- es_util --query=_cat/recovery?active_only=true

    If there is no command output, the recovery process might be delayed or stalled by pending tasks.

  6. Check if there are pending tasks by running the following command and observing the output:

    $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
      -- health | grep number_of_pending_tasks
  7. If there are pending tasks, monitor their status. If their status changes and indicates that the cluster is recovering, continue waiting. The recovery time varies according to the size of the cluster and other factors. Otherwise, if the status of the pending tasks does not change, this indicates that the recovery has stalled.
  8. If it seems like the recovery has stalled, check if the cluster.routing.allocation.enable value is set to none, by running the following command and observing the output:

    $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
      -- es_util --query=_cluster/settings?pretty
  9. If the cluster.routing.allocation.enable value is set to none, set it to all, by running the following command:

    $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
      -- es_util --query=_cluster/settings?pretty \
      -X PUT -d '{"persistent": {"cluster.routing.allocation.enable":"all"}}'
  10. Check if any indices are still red by running the following command and observing the output:

    $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
      -- es_util --query=_cat/indices?v
  11. If any indices are still red, try to clear them by performing the following steps.

    1. Clear the cache by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
        -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty
    2. Increase the max allocation retries by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
        -- es_util --query=<elasticsearch_index_name>/_settings?pretty \
        -X PUT -d '{"index.allocation.max_retries":10}'
    3. Delete all the scroll items by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
        -- es_util --query=_search/scroll/_all -X DELETE
    4. Increase the timeout by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
        -- es_util --query=<elasticsearch_index_name>/_settings?pretty \
        -X PUT -d '{"index.unassigned.node_left.delayed_timeout":"10m"}'
  12. If the preceding steps do not clear the red indices, delete the indices individually.

    1. Identify the red index name by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
        -- es_util --query=_cat/indices?v
    2. Delete the red index by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
        -- es_util --query=<elasticsearch_red_index_name> -X DELETE
  13. If there are no red indices and the cluster status is red, check for a continuous heavy processing load on a data node.

    1. Check if the Elasticsearch JVM Heap usage is high by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
        -- es_util --query=_nodes/stats?pretty

      In the command output, review the node_name.jvm.mem.heap_used_percent field to determine the JVM Heap usage.

    2. Check for high CPU utilization. For more information about CPU utilization, see the OpenShift Container Platform "Reviewing monitoring dashboards" documentation.
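
If the cluster status remains red, the Elasticsearch allocation explain API can provide more detail about why a shard is unassigned. The following command is a sketch that uses the $ES_POD_NAME variable set earlier; it reports on the first unassigned shard that Elasticsearch finds and returns an error if no shards are unassigned:

$ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
  -- es_util --query=_cluster/allocation/explain?pretty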

3.3.2. Elasticsearch cluster health status is yellow

Replica shards for at least one primary shard are not allocated to nodes. Increase the node count by adjusting the nodeCount value in the ClusterLogging custom resource (CR).
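
For example, assuming the default ClusterLogging CR is named instance in the openshift-logging namespace, a merge patch similar to the following raises the node count; the value 5 is only an illustration:

$ oc -n openshift-logging patch clusterlogging instance --type merge \
  -p '{"spec":{"logStore":{"elasticsearch":{"nodeCount":5}}}}'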

3.3.3. Elasticsearch node disk low watermark reached

Elasticsearch does not allocate shards to nodes that reach the low watermark.

Tip

Some commands in this documentation reference an Elasticsearch pod by using a $ES_POD_NAME shell variable. If you want to copy and paste the commands directly from this documentation, you must set this variable to a value that is valid for your Elasticsearch cluster.

You can list the available Elasticsearch pods by running the following command:

$ oc -n openshift-logging get pods -l component=elasticsearch

Choose one of the pods listed and set the $ES_POD_NAME variable, by running the following command:

$ export ES_POD_NAME=<elasticsearch_pod_name>

You can now use the $ES_POD_NAME variable in commands.

Procedure

  1. Identify the node on which Elasticsearch is deployed by running the following command:

    $ oc -n openshift-logging get po -o wide
  2. Check if there are unassigned shards by running the following command:

    $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
      -- es_util --query=_cluster/health?pretty | grep unassigned_shards
  3. If there are unassigned shards, check the disk space on each node, by running the following command:

    $ for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; \
      do echo $pod; oc -n openshift-logging exec -c elasticsearch $pod \
      -- df -h /elasticsearch/persistent; done
  4. In the command output, check the Use column to determine the used disk percentage on that node.

    Example output

    elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/nvme1n1     19G  522M   19G   3% /elasticsearch/persistent
    elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/nvme2n1     19G  522M   19G   3% /elasticsearch/persistent
    elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/nvme3n1     19G  528M   19G   3% /elasticsearch/persistent

    If the used disk percentage is above 85%, the node has exceeded the low watermark, and shards can no longer be allocated to this node.

  5. To check the current redundancyPolicy, run the following command:

    $ oc -n openshift-logging get es elasticsearch \
      -o jsonpath='{.spec.redundancyPolicy}'

    If you are using a ClusterLogging resource on your cluster, run the following command:

    $ oc -n openshift-logging get cl \
      -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'

    If the cluster redundancyPolicy value is higher than the SingleRedundancy value, set it to the SingleRedundancy value and save this change.

  6. If the preceding steps do not fix the issue, delete the old indices.

    1. Check the status of all indices on Elasticsearch by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME -- indices
    2. Identify an old index that can be deleted.
    3. Delete the index by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
        -- es_util --query=<elasticsearch_index_name> -X DELETE
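
You can also review per-node disk usage and shard counts as Elasticsearch reports them by querying the _cat/allocation API. The following command is a sketch that uses the $ES_POD_NAME variable set earlier:

$ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
  -- es_util --query=_cat/allocation?v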

3.3.4. Elasticsearch node disk high watermark reached

Elasticsearch attempts to relocate shards away from a node that has reached the high watermark to a node with low disk usage that has not crossed any watermark threshold limits.

To allocate shards to a particular node, you must free up some space on that node. If increasing the disk space is not possible, try adding a new data node to the cluster, or decrease the total cluster redundancy policy.

Tip

Some commands in this documentation reference an Elasticsearch pod by using a $ES_POD_NAME shell variable. If you want to copy and paste the commands directly from this documentation, you must set this variable to a value that is valid for your Elasticsearch cluster.

You can list the available Elasticsearch pods by running the following command:

$ oc -n openshift-logging get pods -l component=elasticsearch

Choose one of the pods listed and set the $ES_POD_NAME variable, by running the following command:

$ export ES_POD_NAME=<elasticsearch_pod_name>

You can now use the $ES_POD_NAME variable in commands.

Procedure

  1. Identify the node on which Elasticsearch is deployed by running the following command:

    $ oc -n openshift-logging get po -o wide
  2. Check the disk space on each node:

    $ for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; \
      do echo $pod; oc -n openshift-logging exec -c elasticsearch $pod \
      -- df -h /elasticsearch/persistent; done
  3. Check if the cluster is rebalancing:

    $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
      -- es_util --query=_cluster/health?pretty | grep relocating_shards

    If the command output shows relocating shards, the high watermark has been exceeded. The default value of the high watermark is 90%.

  4. Increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster, or decrease the total cluster redundancy policy.
  5. To check the current redundancyPolicy, run the following command:

    $ oc -n openshift-logging get es elasticsearch \
      -o jsonpath='{.spec.redundancyPolicy}'

    If you are using a ClusterLogging resource on your cluster, run the following command:

    $ oc -n openshift-logging get cl \
      -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'

    If the cluster redundancyPolicy value is higher than the SingleRedundancy value, set it to the SingleRedundancy value and save this change.

  6. If the preceding steps do not fix the issue, delete the old indices.

    1. Check the status of all indices on Elasticsearch by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME -- indices
    2. Identify an old index that can be deleted.
    3. Delete the index by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
        -- es_util --query=<elasticsearch_index_name> -X DELETE
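
To see which shards are currently relocating, you can query the _cat/shards API and filter on the shard state. The following command is a sketch that uses the $ES_POD_NAME variable set earlier:

$ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
  -- es_util --query=_cat/shards?v | grep RELOCATING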

3.3.5. Elasticsearch node disk flood watermark reached

Elasticsearch enforces a read-only index block on every index that has both of these conditions:

  • One or more shards are allocated to the node.
  • One or more disks exceed the flood stage.

Use the following procedure to troubleshoot this alert.

Tip

Some commands in this documentation reference an Elasticsearch pod by using a $ES_POD_NAME shell variable. If you want to copy and paste the commands directly from this documentation, you must set this variable to a value that is valid for your Elasticsearch cluster.

You can list the available Elasticsearch pods by running the following command:

$ oc -n openshift-logging get pods -l component=elasticsearch

Choose one of the pods listed and set the $ES_POD_NAME variable, by running the following command:

$ export ES_POD_NAME=<elasticsearch_pod_name>

You can now use the $ES_POD_NAME variable in commands.

Procedure

  1. Get the disk space of the Elasticsearch node:

    $ for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; \
      do echo $pod; oc -n openshift-logging exec -c elasticsearch $pod \
      -- df -h /elasticsearch/persistent; done
  2. In the command output, check the Avail column to determine the free disk space on that node.

    Example output

    elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/nvme1n1     19G  522M   19G   3% /elasticsearch/persistent
    elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/nvme2n1     19G  522M   19G   3% /elasticsearch/persistent
    elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/nvme3n1     19G  528M   19G   3% /elasticsearch/persistent

  3. Increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster, or decrease the total cluster redundancy policy.
  4. To check the current redundancyPolicy, run the following command:

    $ oc -n openshift-logging get es elasticsearch \
      -o jsonpath='{.spec.redundancyPolicy}'

    If you are using a ClusterLogging resource on your cluster, run the following command:

    $ oc -n openshift-logging get cl \
      -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'

    If the cluster redundancyPolicy value is higher than the SingleRedundancy value, set it to the SingleRedundancy value and save this change.

  5. If the preceding steps do not fix the issue, delete the old indices.

    1. Check the status of all indices on Elasticsearch by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME -- indices
    2. Identify an old index that can be deleted.
    3. Delete the index by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
        -- es_util --query=<elasticsearch_index_name> -X DELETE
  6. Continue freeing up and monitoring the disk space. After the used disk space drops below 90%, unblock writing to this node by running the following command:

    $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
      -- es_util --query=_all/_settings?pretty \
      -X PUT -d '{"index.blocks.read_only_allow_delete": null}'
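
To confirm that the read-only block has been removed, you can inspect the index settings. In the following sketch, no output from grep indicates that no index still carries the block:

$ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
  -- es_util --query=_all/_settings?pretty | grep read_only_allow_delete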

3.3.6. Elasticsearch JVM heap usage is high

The Elasticsearch node Java virtual machine (JVM) heap memory used is above 75%. Consider increasing the heap size.
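
To check the current heap usage, you can query the JVM node statistics. This is a sketch that assumes you have set the $ES_POD_NAME variable as described in the preceding sections:

$ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
  -- es_util --query=_nodes/stats/jvm?pretty | grep heap_used_percent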

3.3.7. Aggregated logging system CPU is high

System CPU usage on the node is high. Check the CPU of the cluster node. Consider allocating more CPU resources to the node.
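
For example, assuming the cluster metrics API is available, you can review node CPU usage from the CLI by running the following command:

$ oc adm top nodes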

3.3.8. Elasticsearch process CPU is high

Elasticsearch process CPU usage on the node is high. Check the CPU of the cluster node. Consider allocating more CPU resources to the node.
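
For example, assuming the cluster metrics API is available, you can review the CPU usage of the Elasticsearch pods by running a command similar to the following:

$ oc -n openshift-logging adm top pods -l component=elasticsearch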

3.3.9. Elasticsearch disk space is running low

Elasticsearch is predicted to run out of disk space within the next 6 hours based on current disk usage. Use the following procedure to troubleshoot this alert.

Procedure

  1. Get the disk space of the Elasticsearch node:

    $ for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; \
      do echo $pod; oc -n openshift-logging exec -c elasticsearch $pod \
      -- df -h /elasticsearch/persistent; done
  2. In the command output, check the Avail column to determine the free disk space on that node.

    Example output

    elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/nvme1n1     19G  522M   19G   3% /elasticsearch/persistent
    elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/nvme2n1     19G  522M   19G   3% /elasticsearch/persistent
    elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/nvme3n1     19G  528M   19G   3% /elasticsearch/persistent

  3. Increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster, or decrease the total cluster redundancy policy.
  4. To check the current redundancyPolicy, run the following command:

    $ oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'

    If you are using a ClusterLogging resource on your cluster, run the following command:

    $ oc -n openshift-logging get cl \
      -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'

    If the cluster redundancyPolicy value is higher than the SingleRedundancy value, set it to the SingleRedundancy value and save this change.

  5. If the preceding steps do not fix the issue, delete the old indices.

    1. Check the status of all indices on Elasticsearch by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME -- indices
    2. Identify an old index that can be deleted.
    3. Delete the index by running the following command:

      $ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
        -- es_util --query=<elasticsearch_index_name> -X DELETE

3.3.10. Elasticsearch FileDescriptor usage is high

Based on current usage trends, the predicted number of file descriptors on the node is insufficient. Check the value of max_file_descriptors for each node as described in the Elasticsearch File Descriptors documentation.
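
The following sketch queries the process statistics for each Elasticsearch node and filters for the file descriptor counts. It assumes you have set the $ES_POD_NAME variable as described in the preceding sections:

$ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
  -- es_util --query=_nodes/stats/process?pretty | grep file_descriptors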

3.4. Viewing the status of the Elasticsearch log store

You can view the status of the OpenShift Elasticsearch Operator and of a number of Elasticsearch components.

3.4.1. Viewing the status of the Elasticsearch log store

You can view the status of the Elasticsearch log store.

Prerequisites

  • The Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator are installed.

Procedure

  1. Change to the openshift-logging project by running the following command:

    $ oc project openshift-logging
  2. To view the status:

    1. Get the name of the Elasticsearch log store instance by running the following command:

      $ oc get Elasticsearch

      Example output

      NAME            AGE
      elasticsearch   5h9m

    2. Get the Elasticsearch log store status by running the following command:

      $ oc get Elasticsearch <Elasticsearch-instance> -o yaml

      For example:

      $ oc get Elasticsearch elasticsearch -n openshift-logging -o yaml

      The output includes information similar to the following:

      Example output

      status: 1
        cluster: 2
          activePrimaryShards: 30
          activeShards: 60
          initializingShards: 0
          numDataNodes: 3
          numNodes: 3
          pendingTasks: 0
          relocatingShards: 0
          status: green
          unassignedShards: 0
        clusterHealth: ""
        conditions: [] 3
        nodes: 4
        - deploymentName: elasticsearch-cdm-zjf34ved-1
          upgradeStatus: {}
        - deploymentName: elasticsearch-cdm-zjf34ved-2
          upgradeStatus: {}
        - deploymentName: elasticsearch-cdm-zjf34ved-3
          upgradeStatus: {}
        pods: 5
          client:
            failed: []
            notReady: []
            ready:
            - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422
            - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz
            - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt
          data:
            failed: []
            notReady: []
            ready:
            - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422
            - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz
            - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt
          master:
            failed: []
            notReady: []
            ready:
            - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422
            - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz
            - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt
        shardAllocationEnabled: all

      1
      In the output, the cluster status fields appear in the status stanza.
      2
      The status of the Elasticsearch log store:
      • The number of active primary shards.
      • The number of active shards.
      • The number of shards that are initializing.
      • The number of Elasticsearch log store data nodes.
      • The total number of Elasticsearch log store nodes.
      • The number of pending tasks.
      • The Elasticsearch log store status: green, red, yellow.
      • The number of unassigned shards.
      3
      Any status conditions, if present. The Elasticsearch log store status indicates the reasons from the scheduler if a pod could not be placed. Any events related to the following conditions are shown:
      • Container Waiting for both the Elasticsearch log store and proxy containers.
      • Container Terminated for both the Elasticsearch log store and proxy containers.
      • Pod unschedulable. Also, a condition is shown for a number of issues; see Example condition messages.
      4
      The Elasticsearch log store nodes in the cluster, with upgradeStatus.
      5
      The Elasticsearch log store client, data, and master pods in the cluster, listed under failed, notReady, or ready state.
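
If you only need the overall cluster status from this output, a jsonpath query similar to the following extracts it. This is a sketch based on the status structure shown in the example output:

$ oc -n openshift-logging get Elasticsearch elasticsearch -o jsonpath='{.status.cluster.status}'
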
3.4.1.1. Example condition messages

The following are examples of some condition messages from the Status section of the Elasticsearch instance.

The following status message indicates that a node has exceeded the configured low watermark, and no shard will be allocated to this node.

status:
  nodes:
  - conditions:
    - lastTransitionTime: 2019-03-15T15:57:22Z
      message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not
        be allocated on this node.
      reason: Disk Watermark Low
      status: "True"
      type: NodeStorage
    deploymentName: example-elasticsearch-cdm-0-1
    upgradeStatus: {}

The following status message indicates that a node has exceeded the configured high watermark, and shards will be relocated to other nodes.

status:
  nodes:
  - conditions:
    - lastTransitionTime: 2019-03-15T16:04:45Z
      message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated
        from this node.
      reason: Disk Watermark High
      status: "True"
      type: NodeStorage
    deploymentName: example-elasticsearch-cdm-0-1
    upgradeStatus: {}

The following status message indicates that the Elasticsearch log store node selector in the custom resource (CR) does not match any nodes in the cluster:

status:
    nodes:
    - conditions:
      - lastTransitionTime: 2019-04-10T02:26:24Z
        message: '0/8 nodes are available: 8 node(s) didn''t match node selector.'
        reason: Unschedulable
        status: "True"
        type: Unschedulable

The following status message indicates that the Elasticsearch log store CR uses a non-existent persistent volume claim (PVC).

status:
   nodes:
   - conditions:
     - last Transition Time:  2019-04-10T05:55:51Z
       message:               pod has unbound immediate PersistentVolumeClaims (repeated 5 times)
       reason:                Unschedulable
       status:                True
       type:                  Unschedulable

The following status message indicates that your Elasticsearch log store cluster does not have enough nodes to support the redundancy policy.

status:
  clusterHealth: ""
  conditions:
  - lastTransitionTime: 2019-04-17T20:01:31Z
    message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or
      add more nodes with data roles
    reason: Invalid Settings
    status: "True"
    type: InvalidRedundancy

This status message indicates your cluster has too many control plane nodes:

status:
  clusterHealth: green
  conditions:
    - lastTransitionTime: '2019-04-17T20:12:34Z'
      message: >-
        Invalid master nodes count. Please ensure there are no more than 3 total
        nodes with master roles
      reason: Invalid Settings
      status: 'True'
      type: InvalidMasters

The following status message indicates that Elasticsearch storage does not support the change you tried to make.

For example:

status:
  clusterHealth: green
  conditions:
    - lastTransitionTime: "2021-05-07T01:05:13Z"
      message: Changing the storage structure for a custom resource is not supported
      reason: StorageStructureChangeIgnored
      status: 'True'
      type: StorageStructureChangeIgnored

The reason and type fields specify the type of unsupported change:

StorageClassNameChangeIgnored
Unsupported change to the storage class name.
StorageSizeChangeIgnored
Unsupported change to the storage size.
StorageStructureChangeIgnored
Unsupported change between ephemeral and persistent storage structures.

Important

If you try to configure the ClusterLogging CR to switch from ephemeral to persistent storage, the OpenShift Elasticsearch Operator creates a persistent volume claim (PVC) but does not create a persistent volume (PV). To clear the StorageStructureChangeIgnored status, you must revert the change to the ClusterLogging CR and delete the PVC.

3.4.2. Viewing the status of the log store components

You can view the status for a number of the log store components.

Elasticsearch indices

You can view the status of the Elasticsearch indices.

  1. Get the name of an Elasticsearch pod:

    $ oc get pods --selector component=elasticsearch -o name

    Example output

    pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw
    pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n
    pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7

  2. Get the status of the indices:

    $ oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices

    Example output

    Defaulting container name to elasticsearch.
    Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod.
    
    green  open   infra-000002                                                     S4QANnf1QP6NgCegfnrnbQ   3   1     119926            0        157             78
    green  open   audit-000001                                                     8_EQx77iQCSTzFOXtxRqFw   3   1          0            0          0              0
    green  open   .security                                                        iDjscH7aSUGhIdq0LheLBQ   1   1          5            0          0              0
    green  open   .kibana_-377444158_kubeadmin                                     yBywZ9GfSrKebz5gWBZbjw   3   1          1            0          0              0
    green  open   infra-000001                                                     z6Dpe__ORgiopEpW6Yl44A   3   1     871000            0        874            436
    green  open   app-000001                                                       hIrazQCeSISewG3c2VIvsQ   3   1       2453            0          3              1
    green  open   .kibana_1                                                        JCitcBMSQxKOvIq6iQW6wg   1   1          0            0          0              0
    green  open   .kibana_-1595131456_user1                                        gIYFIEGRRe-ka0W3okS-mQ   3   1          1            0          0              0

Log store pods

You can view the status of the pods that host the log store.

  1. Get the name of a pod:

    $ oc get pods --selector component=elasticsearch -o name

    Example output

    pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw
    pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n
    pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7

  2. Get the status of a pod:

    $ oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw

    The output includes the following status information:

    Example output

    ....
    Status:             Running
    
    ....
    
    Containers:
      elasticsearch:
        Container ID:   cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2
        State:          Running
          Started:      Mon, 08 Jun 2020 10:17:56 -0400
        Ready:          True
        Restart Count:  0
        Readiness:  exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3
    
    ....
    
      proxy:
        Container ID:  cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1
        State:          Running
          Started:      Mon, 08 Jun 2020 10:18:38 -0400
        Ready:          True
        Restart Count:  0
    
    ....
    
    Conditions:
      Type              Status
      Initialized       True
      Ready             True
      ContainersReady   True
      PodScheduled      True
    
    ....
    
    Events:          <none>

Log storage pod deployment configuration

You can view the status of the log store deployment configuration.

  1. Get the name of a deployment configuration:

    $ oc get deployment --selector component=elasticsearch -o name

    Example output

    deployment.extensions/elasticsearch-cdm-1gon-1
    deployment.extensions/elasticsearch-cdm-1gon-2
    deployment.extensions/elasticsearch-cdm-1gon-3

  2. Get the deployment configuration status:

    $ oc describe deployment elasticsearch-cdm-1gon-1

    The output includes the following status information:

    Example output

    ....
      Containers:
       elasticsearch:
        Image:      registry.redhat.io/openshift-logging/elasticsearch6-rhel8
        Readiness:  exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3
    
    ....
    
    Conditions:
      Type           Status   Reason
      ----           ------   ------
      Progressing    Unknown  DeploymentPaused
      Available      True     MinimumReplicasAvailable
    
    ....
    
    Events:          <none>

Log store replica set

You can view the status of the log store replica set.

  1. Get the name of a replica set:

    $ oc get replicaSet --selector component=elasticsearch -o name
    
    replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495
    replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf
    replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d
  2. Get the status of the replica set:

    $ oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495

    The output includes the following status information:

    Example output

    ....
      Containers:
       elasticsearch:
        Image:      registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25
        Readiness:  exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3
    
    ....
    
    Events:          <none>

3.4.3. Elasticsearch cluster status

A dashboard in the Observe section of the OpenShift Container Platform web console displays the status of the Elasticsearch cluster.

To get the status of the OpenShift Elasticsearch cluster, visit the dashboard in the Observe section of the OpenShift Container Platform web console at <cluster_url>/monitoring/dashboards/grafana-dashboard-cluster-logging.

Elasticsearch status fields

eo_elasticsearch_cr_cluster_management_state

Shows whether the Elasticsearch cluster is in a managed or unmanaged state. For example:

eo_elasticsearch_cr_cluster_management_state{state="managed"} 1
eo_elasticsearch_cr_cluster_management_state{state="unmanaged"} 0
eo_elasticsearch_cr_restart_total

Shows the number of times the Elasticsearch nodes have restarted for certificate restarts, rolling restarts, or scheduled restarts. For example:

eo_elasticsearch_cr_restart_total{reason="cert_restart"} 1
eo_elasticsearch_cr_restart_total{reason="rolling_restart"} 1
eo_elasticsearch_cr_restart_total{reason="scheduled_restart"} 3
es_index_namespaces_total

Shows the total number of Elasticsearch index namespaces. For example:

Total number of Namespaces.
es_index_namespaces_total 5
es_index_document_count

Shows the number of records for each namespace. For example:

es_index_document_count{namespace="namespace_1"} 25
es_index_document_count{namespace="namespace_2"} 10
es_index_document_count{namespace="namespace_3"} 5
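
For example, to chart Elasticsearch restarts grouped by reason on the Observe → Metrics page of the web console, you could use a query similar to the following; the grouping is only an illustration:

sum by (reason) (eo_elasticsearch_cr_restart_total)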

The "Secret Elasticsearch fields are either missing or empty" message

If Elasticsearch is missing the admin-cert, admin-key, logging-es.crt, or logging-es.key files, the dashboard shows a status message similar to the following example:

message": "Secret \"elasticsearch\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]",
"reason": "Missing Required Secrets",

Chapter 4. About Logging

As a cluster administrator, you can deploy logging on an OpenShift Container Platform cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. You can forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. You can also visualize your log data in the OpenShift Container Platform web console, or the Kibana web console, depending on your deployed log storage solution.

Note

The Kibana web console is now deprecated and is planned to be removed in a future logging release.

OpenShift Container Platform cluster administrators can deploy logging by using Operators. For information, see Installing logging.

The Operators are responsible for deploying, upgrading, and maintaining logging. After the Operators are installed, you can create a ClusterLogging custom resource (CR) to schedule logging pods and other resources necessary to support logging. You can also create a ClusterLogForwarder CR to specify which logs are collected, how they are transformed, and where they are forwarded to.

Note

Because the internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in Forward audit logs to the log store.

4.1. Logging architecture

The major components of the logging are:

Collector

The collector is a daemonset that deploys a pod to each OpenShift Container Platform node. It collects log data from each node, transforms the data, and forwards it to configured outputs. You can use the Vector collector or the legacy Fluentd collector.

Note

Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.

Log store

The log store stores log data for analysis and is the default output for the log forwarder. You can use the default LokiStack log store, the legacy Elasticsearch log store, or forward logs to additional external log stores.

Note

The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators.

Visualization

You can use a UI component to view a visual representation of your log data. The UI provides a graphical interface to search, query, and view stored logs. The OpenShift Container Platform web console UI is provided by enabling the OpenShift Container Platform console plugin.

Note

The Kibana web console is now deprecated and is planned to be removed in a future logging release.

Logging collects container logs and node logs. These are categorized into types:

Application logs
Container logs generated by user applications running in the cluster, except infrastructure container applications.
Infrastructure logs
Container logs generated by infrastructure namespaces: openshift*, kube*, or default, as well as journald messages from nodes.
Audit logs
Logs generated by auditd, the node audit system, which are stored in the /var/log/audit/audit.log file, and logs from the auditd, kube-apiserver, openshift-apiserver services, as well as the ovn project if enabled.

4.2. About deploying logging

Administrators can deploy logging by using the OpenShift Container Platform web console or the OpenShift CLI (oc) to install the logging Operators. The Operators are responsible for deploying, upgrading, and maintaining logging.

Administrators and application developers can view the logs of the projects for which they have view access.

4.2.1. Logging custom resources

You can configure your logging deployment with custom resource (CR) YAML files implemented by each Operator.

Red Hat OpenShift Logging Operator:

  • ClusterLogging (CL) - After the Operators are installed, you create a ClusterLogging custom resource (CR) to schedule logging pods and other resources necessary to support the logging. The ClusterLogging CR deploys the collector and forwarder, which currently are both implemented by a daemonset running on each node. The Red Hat OpenShift Logging Operator watches the ClusterLogging CR and adjusts the logging deployment accordingly.
  • ClusterLogForwarder (CLF) - Generates collector configuration to forward logs per user configuration.
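
For illustration, a minimal ClusterLogForwarder sketch that forwards application logs to the default log store might look like the following example; the pipeline name is arbitrary:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: application-logs
    inputRefs:
    - application
    outputRefs:
    - default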

Loki Operator:

  • LokiStack - Controls the Loki cluster as log store and the web proxy with OpenShift Container Platform authentication integration to enforce multi-tenancy.

OpenShift Elasticsearch Operator:

Note

These CRs are generated and managed by the OpenShift Elasticsearch Operator. Manual changes cannot be made without being overwritten by the Operator.

  • ElasticSearch - Configure and deploy an Elasticsearch instance as the default log store.
  • Kibana - Configure and deploy a Kibana instance to search, query, and view logs.

4.2.2. About JSON OpenShift Container Platform Logging

You can use JSON logging to configure the Log Forwarding API to parse JSON strings into a structured object. You can perform the following tasks:

  • Parse JSON logs
  • Configure JSON log data for Elasticsearch
  • Forward JSON logs to the Elasticsearch log store

4.2.3. About collecting and storing Kubernetes events

The OpenShift Container Platform Event Router is a pod that watches Kubernetes events and logs them for collection by OpenShift Container Platform Logging. You must manually deploy the Event Router.

For information, see About collecting and storing Kubernetes events.

4.2.4. About troubleshooting OpenShift Container Platform Logging

You can troubleshoot the logging issues by performing the following tasks:

  • Viewing logging status
  • Viewing the status of the log store
  • Understanding logging alerts
  • Collecting logging data for Red Hat Support
  • Troubleshooting for critical alerts

4.2.5. About exporting fields

The logging system exports fields. Exported fields are present in the log records and are available for searching from Elasticsearch and Kibana.

For information, see About exporting fields.

4.2.6. About event routing

The Event Router is a pod that watches OpenShift Container Platform events so they can be collected by logging. The Event Router collects events from all projects and writes them to STDOUT. Fluentd collects those events and forwards them into the OpenShift Container Platform Elasticsearch instance. Elasticsearch indexes the events to the infra index.

You must manually deploy the Event Router.

For information, see Collecting and storing Kubernetes events.

Chapter 5. Installing Logging

You can deploy logging by installing the Red Hat OpenShift Logging Operator. The Red Hat OpenShift Logging Operator creates and manages the components of the logging stack.

Note

Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

Important

For new installations, use Vector and LokiStack. Elasticsearch and Fluentd are deprecated and are planned to be removed in a future release.

5.1. Installing the Red Hat OpenShift Logging Operator by using the web console

You can install the Red Hat OpenShift Logging Operator by using the OpenShift Container Platform web console.

Prerequisites

  • You have administrator permissions.
  • You have access to the OpenShift Container Platform web console.

Procedure

  1. In the OpenShift Container Platform web console, click OperatorsOperatorHub.
  2. Type OpenShift Logging in the Filter by keyword box.
  3. Choose Red Hat OpenShift Logging from the list of available Operators, and click Install.
  4. Ensure that A specific namespace on the cluster is selected under Installation mode.
  5. Ensure that Operator recommended namespace is openshift-logging under Installed Namespace.
  6. Select Enable operator recommended cluster monitoring on this namespace.

    This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace.

  7. Select stable-5.y as the Update channel.

    Note

    The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y, where x.y represents the major and minor version of logging you have installed. For example, stable-5.7.

  8. Select an Update approval.

    • The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
    • The Manual strategy requires a user with appropriate credentials to approve the Operator update.
  9. Select Enable or Disable for the Console plugin.
  10. Click Install.

Verification

  1. Verify that the Red Hat OpenShift Logging Operator is installed by switching to the OperatorsInstalled Operators page.
  2. In the Status column, verify that you see green checkmarks with InstallSucceeded and the text Up to date.
Important

An Operator might display a Failed status before the installation finishes. If the Operator install completes with an InstallSucceeded message, refresh the page.

If the Operator does not show as installed, choose one of the following troubleshooting options:

  • Go to the OperatorsInstalled Operators page, and inspect the Status column for any errors or failures.
  • Go to the WorkloadsPods page, and check the logs in any pods in the openshift-logging project that are reporting issues.

5.2. Creating a ClusterLogging object by using the web console

After you have installed the logging Operators, you must create a ClusterLogging custom resource to configure log storage, visualization, and the log collector for your cluster.

Prerequisites

  • You have installed the Red Hat OpenShift Logging Operator.
  • You have access to the OpenShift Container Platform web console Administrator perspective.

Procedure

  1. Navigate to the Custom Resource Definitions page.
  2. On the Custom Resource Definitions page, click ClusterLogging.
  3. On the Custom Resource Definition details page, select View Instances from the Actions menu.
  4. On the ClusterLoggings page, click Create ClusterLogging.
  5. In the collection section, select a Collector Implementation.

    Note

    Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.

  6. In the logStore section, select a type.

    Note

    The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators.

  7. Click Create.

5.3. Installing the Red Hat OpenShift Logging Operator by using the CLI

You can use the OpenShift CLI (oc) to install the Red Hat OpenShift Logging Operator.

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a Namespace object as a YAML file:

    Example Namespace object

    apiVersion: v1
    kind: Namespace
    metadata:
      name: <name> 1
      annotations:
        openshift.io/node-selector: ""
      labels:
        openshift.io/cluster-monitoring: "true"

    1
    You must specify openshift-logging as the name of the namespace for logging versions 5.7 and earlier. For logging 5.8 and later versions, you can use any name.
  2. Apply the Namespace object by running the following command:

    $ oc apply -f <filename>.yaml
  3. Create an OperatorGroup object as a YAML file:

    Example OperatorGroup object

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: cluster-logging
      namespace: openshift-logging 1
    spec:
      targetNamespaces: [ ] 2

    1
    Ensure that the namespace field is set to openshift-logging.
    2
    For logging versions 5.7 and earlier, set targetNamespaces to openshift-logging. For logging versions 5.8 and later, the default installation mode is all namespaces on the cluster; alternatively, you can specify a comma-separated list of namespaces.
  4. Apply the OperatorGroup object by running the following command:

    $ oc apply -f <filename>.yaml
  5. Create a Subscription object to subscribe the namespace to the Red Hat OpenShift Logging Operator:

    Example Subscription object

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: cluster-logging
      namespace: openshift-logging 1
    spec:
      channel: stable 2
      name: cluster-logging
      source: redhat-operators 3
      sourceNamespace: openshift-marketplace

    1
    You must specify the openshift-logging namespace for logging versions 5.7 and older. For logging 5.8 and later versions, you can use any namespace.
    2
    Specify stable or stable-x.y as the channel.
    3
    Specify redhat-operators. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
  6. Apply the subscription by running the following command:

    $ oc apply -f <filename>.yaml

    The Red Hat OpenShift Logging Operator is installed to the openshift-logging namespace.

Verification

  1. Run the following command:

    $ oc get csv -n <namespace>
  2. Observe the output and confirm that the Red Hat OpenShift Logging Operator exists in the namespace:

    Example output

    NAMESPACE                                               NAME                                         DISPLAY                  VERSION               REPLACES   PHASE
    ...
    openshift-logging                                       clusterlogging.5.8.0-202007012112.p0         OpenShift Logging          5.8.0-202007012112.p0              Succeeded
    ...

5.4. Creating a ClusterLogging object by using the CLI

This default logging configuration supports a wide array of environments. Review the topics on tuning and configuring components for information about modifications you can make.

Prerequisites

  • You have installed the Red Hat OpenShift Logging Operator.
  • You have installed the OpenShift Elasticsearch Operator for your log store.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a ClusterLogging object as a YAML file:

    Example ClusterLogging object

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance 1
      namespace: openshift-logging
    spec:
      managementState: Managed 2
      logStore:
        type: elasticsearch 3
        retentionPolicy: 4
          application:
            maxAge: 1d
          infra:
            maxAge: 7d
          audit:
            maxAge: 7d
        elasticsearch:
          nodeCount: 3 5
          storage:
            storageClassName: <storage_class_name> 6
            size: 200G
          resources: 7
              limits:
                memory: 16Gi
              requests:
                memory: 16Gi
          proxy: 8
            resources:
              limits:
                memory: 256Mi
              requests:
                memory: 256Mi
          redundancyPolicy: SingleRedundancy
      visualization:
        type: kibana 9
        kibana:
          replicas: 1
      collection:
        type: fluentd 10
        fluentd: {}

    1
    The name must be instance.
    2
    The OpenShift Logging management state. In some cases, if you change the OpenShift Logging defaults, you must set this to Unmanaged. However, an unmanaged deployment does not receive updates until OpenShift Logging is placed back into a managed state.
    3
    Settings for configuring Elasticsearch. Using the CR, you can configure shard replication policy and persistent storage.
    4
    Specify the length of time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), days(d), hours(h/H), minutes(m), and seconds(s). For example, 7d for seven days. Logs older than the maxAge are deleted. You must specify a retention policy for each log source or the Elasticsearch indices will not be created for that source.
    5
    Specify the number of Elasticsearch nodes. See the note that follows this list.
    6
    Enter the name of an existing storage class for Elasticsearch storage. For best performance, specify a storage class that allocates block storage. If you do not specify a storage class, OpenShift Logging uses ephemeral storage.
    7
    Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request.
    8
    Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request.
    9
    Settings for configuring Kibana. Using the CR, you can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes. For more information, see Configuring the log visualizer.
    10
    Settings for configuring Fluentd. Using the CR, you can configure Fluentd CPU and memory limits. For more information, see "Configuring Fluentd".
    Note

    The maximum number of Elasticsearch control plane nodes is three. If you specify a nodeCount greater than 3, OpenShift Container Platform creates three Elasticsearch nodes that are master-eligible, with the master, client, and data roles. The additional Elasticsearch nodes are created as data-only nodes, using the client and data roles. Control plane nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more data nodes if the current nodes are overloaded.

    For example, if nodeCount=4, the following nodes are created:

    $ oc get deployment

    Example output

    NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
    cluster-logging-operator       1/1     1            1           18h
    elasticsearch-cd-x6kdekli-1    1/1     1            1           6m54s
    elasticsearch-cdm-x6kdekli-1   1/1     1            1           18h
    elasticsearch-cdm-x6kdekli-2   1/1     1            1           6m49s
    elasticsearch-cdm-x6kdekli-3   1/1     1            1           6m44s

    The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.
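
  2. Apply the ClusterLogging object by running the following command:

    $ oc apply -f <filename>.yaml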

Verification

You can verify the installation by listing the pods in the openshift-logging project.

  • List the pods by running the following command:

    $ oc get pods -n openshift-logging

    Observe the pods for the logging components, similar to the following list:

    Example output

    NAME                                            READY   STATUS    RESTARTS   AGE
    cluster-logging-operator-66f77ffccb-ppzbg       1/1     Running   0          7m
    elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp    2/2     Running   0          2m40s
    elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc   2/2     Running   0          2m36s
    elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2   2/2     Running   0          2m4s
    collector-587vb                                 1/1     Running   0          2m26s
    collector-7mpb9                                 1/1     Running   0          2m30s
    collector-flm6j                                 1/1     Running   0          2m33s
    collector-gn4rn                                 1/1     Running   0          2m26s
    collector-nlgb6                                 1/1     Running   0          2m30s
    collector-snpkt                                 1/1     Running   0          2m28s
    kibana-d6d5668c5-rppqm                          2/2     Running   0          2m39s
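
  • Optional: Inspect the status of the ClusterLogging custom resource by running the following command. This is a minimal check that assumes the CR is named instance in the openshift-logging namespace, as in the preceding example:

    $ oc get clusterlogging instance -n openshift-logging -o yaml

    The status section of the output reports the state of the logging components.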

5.5. Postinstallation tasks

After you have installed the Red Hat OpenShift Logging Operator, you can configure your deployment by creating and modifying a ClusterLogging custom resource (CR).

Tip

If you are not using the Elasticsearch log store, you can remove the internal Elasticsearch logStore and Kibana visualization components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources. See Removing unused components if you do not use the Elasticsearch log store.
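
For example, if you forward logs to an external system and do not use the internal Elasticsearch log store or the Kibana console, the ClusterLogging CR can define only the collection component. The following example is a minimal sketch, not a complete configuration; adjust it to your environment.

Example ClusterLogging CR without the logStore and visualization components

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  collection:
    type: vector
# ...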

5.5.1. About the ClusterLogging custom resource

To make changes to your logging environment, create and modify the ClusterLogging custom resource (CR).

Sample ClusterLogging custom resource (CR)

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance 1
  namespace: openshift-logging 2
spec:
  managementState: Managed 3
# ...

1
The CR name must be instance.
2
The CR must be installed to the openshift-logging namespace.
3
The Red Hat OpenShift Logging Operator management state. When the state is set to Unmanaged, the Operator is in an unsupported state and does not receive updates.
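
One way to modify an existing CR is to edit it in place by using the OpenShift CLI (oc). The following command is a sketch that assumes the CR shown in the preceding sample already exists:

$ oc edit clusterlogging instance -n openshift-logging

Alternatively, you can modify a saved YAML file and reapply it by running oc apply -f <filename>.yaml.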

5.5.2. Configuring log storage

You can configure which log storage type your logging uses by modifying the ClusterLogging custom resource (CR).

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch.
  • You have created a ClusterLogging CR.
Note

The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators.

Procedure

  1. Modify the ClusterLogging CR logStore spec:

    ClusterLogging CR example

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
    # ...
    spec:
    # ...
      logStore:
        type: <log_store_type> 1
        elasticsearch: 2
          nodeCount: <integer>
          resources: {}
          storage: {}
          redundancyPolicy: <redundancy_type> 3
        lokistack: 4
          name: {}
    # ...

    1
    Specify the log store type. This can be either lokistack or elasticsearch.
    2
    Optional configuration options for the Elasticsearch log store. An example that specifies Elasticsearch follows this procedure.
    3
    Specify the redundancy type. This value can be ZeroRedundancy, SingleRedundancy, MultipleRedundancy, or FullRedundancy.
    4
    Optional configuration options for LokiStack.

    Example ClusterLogging CR to specify LokiStack as the log store

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      managementState: Managed
      logStore:
        type: lokistack
        lokistack:
          name: logging-loki
    # ...

  2. Apply the ClusterLogging CR by running the following command:

    $ oc apply -f <filename>.yaml
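
The following example is a sketch of a ClusterLogging CR that specifies Elasticsearch as the log store, with values carried over from the earlier full ClusterLogging example; replace <storage_class_name> with the name of an existing storage class.

Example ClusterLogging CR to specify Elasticsearch as the log store

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      storage:
        storageClassName: <storage_class_name>
        size: 200G
# ...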

5.5.3. Configuring the log collector

You can configure which log collector type your logging uses by modifying the ClusterLogging custom resource (CR).

Note

Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift Logging Operator.
  • You have created a ClusterLogging CR.

Procedure

  1. Modify the ClusterLogging CR collection spec:

    ClusterLogging CR example

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
    # ...
    spec:
    # ...
      collection:
        type: <log_collector_type> 1
        resources: {}
        tolerations: {}
    # ...

    1
    The log collector type you want to use for the logging. This can be vector or fluentd. An example that specifies Vector follows this procedure.
  2. Apply the ClusterLogging CR by running the following command:

    $ oc apply -f <filename>.yaml
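
The following example is a sketch of a ClusterLogging CR that specifies Vector as the collector. The resources and tolerations fields are omitted so that the Operator applies its default values.

Example ClusterLogging CR to specify Vector as the collector

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  collection:
    type: vector
# ...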

5.5.4. Configuring the log visualizer

You can configure which log visualizer type your logging uses by modifying the ClusterLogging custom resource (CR).

Prerequisites

  • You have administrator permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed the Red Hat OpenShift Logging Operator.
  • You have created a ClusterLogging CR.
Important

If you want to use the OpenShift Container Platform web console for visualization, you must enable the logging Console Plugin. See the documentation about "Log visualization with the web console".

Procedure

  1. Modify the ClusterLogging CR visualization spec:

    ClusterLogging CR example

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
    # ...
    spec:
    # ...
      visualization:
        type: <visualizer_type> 1
        kibana: 2
          resources: {}
          nodeSelector: {}
          proxy: {}
          replicas: {}
          tolerations: {}
        ocpConsole: 3
          logsLimit: {}
          timeout: {}
    # ...

    1
    The type of visualizer you want to use for your logging. This can be either kibana or ocp-console. The Kibana console is only compatible with deployments that use Elasticsearch log storage, while the OpenShift Container Platform console is only compatible with LokiStack deployments.
    2
    Optional configurations for the Kibana console.