Logging
Configuring and using logging in OpenShift Container Platform
Chapter 1. Release notes
1.1. Logging 5.9
Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y, where x.y
represents the major and minor version of logging you have installed. For example, stable-5.7.
1.1.1. Logging 5.9.10
This release includes RHSA-2024:10990.
1.1.1.1. Bug Fixes
- Before this update, any namespace containing openshift or kube was treated as an infrastructure namespace. With this update, only the following namespaces are treated as infrastructure namespaces: default, kube, openshift, and namespaces that begin with openshift- or kube-. (LOG-6044)
- Before this update, Loki attempted to detect the level of log messages, which caused confusion when the collector also detected log levels and produced different results. With this update, automatic log level detection in Loki is disabled. (LOG-6321)
- Before this update, when the ClusterLogForwarder custom resource defined tls.insecureSkipVerify: true in combination with type: http and an HTTP URL, certificate validation was not skipped. This misconfiguration caused the collector to fail because it attempted to validate certificates despite the setting. With this update, when tls.insecureSkipVerify: true is set, the URL is checked for the HTTPS scheme, and an HTTP URL causes a misconfiguration error. (LOG-6376)
- Before this update, when any infrastructure namespaces were specified in the application inputs in the ClusterLogForwarder custom resource, logs were generated with the incorrect log_type: application tags. With this update, when any infrastructure namespaces are specified in the application inputs, logs are generated with the correct log_type: infrastructure tags. (LOG-6377)
Important: When updating to Logging for Red Hat OpenShift 5.9.10, if you previously added any infrastructure namespaces in the application inputs in the ClusterLogForwarder custom resource, you must add the permissions for collecting logs from infrastructure namespaces, as in the sketch after this note. For more details, see "Setting up log collection".
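For reference, granting a collector service account the collect-infrastructure-logs cluster role can be done with a standard ClusterRoleBinding. The following is a minimal sketch; the binding name and the logs-collector service account are placeholders, and the product documentation remains the authoritative procedure:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: collect-infrastructure-logs-binding  # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collect-infrastructure-logs  # cluster role referenced in these release notes
subjects:
- kind: ServiceAccount
  name: logs-collector  # placeholder: your collector service account
  namespace: openshift-logging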
1.1.1.2. CVEs
1.1.2. Logging 5.9.9
This release includes RHBA-2024:10049.
1.1.2.1. Bug fixes
- Before this update, upgrades to version 6.0 failed with errors if a Log File Metric Exporter instance was present. This update fixes the issue, enabling upgrades to proceed smoothly without errors. (LOG-6201)
- Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. (LOG-6293)
1.1.2.2. CVEs
1.1.3. Logging 5.9.8
This release includes OpenShift Logging Bug Fix Release 5.9.8.
1.1.3.1. Bug fixes
- Before this update, the Loki Operator failed to add the default namespace label to all AlertingRule resources, which caused the User-Workload-Monitoring Alertmanager to skip routing these alerts. This update adds the rule namespace as a label to all alerting and recording rules, resolving the issue and restoring proper alert routing in Alertmanager. (LOG-6181)
- Before this update, the LokiStack ruler component view did not initialize properly, causing an invalid field error when the ruler component was disabled. This update ensures that the component view initializes with an empty value, resolving the issue. (LOG-6183)
- Before this update, an LF character in the vector.toml file under the ES authentication configuration caused the collector pods to crash. This update removes the newline characters from the username and password fields, resolving the issue. (LOG-6206)
- Before this update, it was possible to set the .containerLimit.maxRecordsPerSecond parameter in the ClusterLogForwarder custom resource to 0, which could lead to an exception during Vector's startup. With this update, the configuration is validated before being applied, and any invalid values (less than or equal to zero) are rejected; see the input sketch after this list. (LOG-6214)
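As an illustration of the validated parameter, a rate-limited application input might look like the following sketch; the input name is hypothetical and the surrounding ClusterLogForwarder structure is assumed from the flow-control API:

spec:
  inputs:
  - name: rate-limited-app  # hypothetical input name
    application:
      containerLimit:
        maxRecordsPerSecond: 100  # must now be greater than zero; 0 is rejected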
1.1.3.2. CVEs
1.1.4. Logging 5.9.7
This release includes OpenShift Logging Bug Fix Release 5.9.7.
1.1.4.1. Bug fixes
- Before this update, the clusterlogforwarder.spec.outputs.http.timeout parameter was not applied to the Fluentd configuration when Fluentd was used as the collector type, causing HTTP timeouts to be misconfigured. With this update, the parameter is correctly applied, ensuring Fluentd honors the specified timeout and handles HTTP connections according to the user's configuration; a sketch follows this list. (LOG-6125)
- Before this update, the TLS section was added without verifying the broker URL schema, resulting in SSL connection errors if the URLs did not start with tls. With this update, the TLS section is now added only if the broker URLs start with tls, preventing SSL connection errors. (LOG-6041)
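As a sketch of the clusterlogforwarder.spec.outputs.http.timeout path named above, an HTTP output might look like this; the output name and URL are placeholders:

spec:
  outputs:
  - name: http-receiver  # placeholder output name
    type: http
    url: https://log-store.example.com:8443  # placeholder URL
    http:
      timeout: 30  # now honored by the Fluentd collector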
1.1.4.2. CVEs
For detailed information on Red Hat security ratings, review Severity ratings.
1.1.5. Logging 5.9.6
This release includes OpenShift Logging Bug Fix Release 5.9.6.
1.1.5.1. Bug fixes
- Before this update, the collector deployment ignored secret changes, causing receivers to reject logs. With this update, the system rolls out a new pod when there is a change in the secret value, ensuring that the collector reloads the updated secrets. (LOG-5525)
- Before this update, Vector could not correctly parse field values that included a single dollar sign ($). With this update, field values with a single dollar sign are automatically changed to two dollar signs ($$), ensuring proper parsing by Vector. (LOG-5602)
- Before this update, the drop filter could not handle non-string values (for example, .responseStatus.code: 403). With this update, the drop filter works properly with these values; see the sketch after this list. (LOG-5815)
- Before this update, the collector used the default settings to collect audit logs, without accounting for back pressure from output receivers. With this update, the process for collecting audit logs has been improved to better manage file handling and log reading efficiency. (LOG-5866)
- Before this update, the must-gather tool failed on clusters with non-AMD64 architectures such as ARM or PowerPC. With this update, the tool detects the cluster architecture at runtime and uses architecture-independent paths and dependencies, allowing must-gather to run smoothly on platforms like ARM and PowerPC. (LOG-5997)
- Before this update, the log level was set using a mix of structured and unstructured keywords that were unclear. With this update, the log level follows a clear, documented order, starting with structured keywords. (LOG-6016)
- Before this update, multiple unnamed pipelines writing to the default output in the ClusterLogForwarder caused a validation error due to duplicate auto-generated names. With this update, the pipeline names are generated without duplicates. (LOG-6033)
- Before this update, the collector pods did not have the PreferredScheduling annotation. With this update, the PreferredScheduling annotation is added to the collector daemonset. (LOG-6023)
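The drop filter fix (LOG-5815) concerns tests against non-string values such as .responseStatus.code: 403. A minimal sketch of such a filter follows; the filter name is a placeholder and the exact spec layout is an assumption based on the ClusterLogForwarder filter API:

spec:
  filters:
  - name: drop-forbidden  # placeholder filter name
    type: drop
    drop:
    - test:
      - field: .responseStatus.code  # the non-string value from the release note
        matches: "403"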
1.1.5.2. CVEs
1.1.6. Logging 5.9.5
This release includes OpenShift Logging Bug Fix Release 5.9.5
1.1.6.1. Bug Fixes
- Before this update, duplicate conditions in the LokiStack resource status led to invalid metrics from the Loki Operator. With this update, the Operator removes duplicate conditions from the status. (LOG-5855)
- Before this update, the Loki Operator did not trigger alerts when it dropped log events due to validation failures. With this update, the Loki Operator includes a new alert definition that triggers an alert if Loki drops log events due to validation failures. (LOG-5895)
- Before this update, the Loki Operator overwrote user annotations on the LokiStack Route resource, causing customizations to drop. With this update, the Loki Operator no longer overwrites Route annotations, fixing the issue. (LOG-5945)
1.1.6.2. CVEs
None.
1.1.7. Logging 5.9.4
This release includes OpenShift Logging Bug Fix Release 5.9.4
1.1.7.1. Bug Fixes
- Before this update, an incorrectly formatted timeout configuration caused the OCP plugin to crash. With this update, a validation prevents the crash and informs the user about the incorrect configuration. (LOG-5373)
- Before this update, workloads with labels containing a dash (-) caused an error in the collector when normalizing log entries. With this update, the configuration change ensures the collector uses the correct syntax. (LOG-5524)
- Before this update, an issue prevented selecting pods that no longer existed, even if they had generated logs. With this update, this issue has been fixed, allowing selection of such pods. (LOG-5697)
- Before this update, the Loki Operator would crash if the CredentialRequest specification was registered in an environment without the cloud-credentials-operator. With this update, the CredentialRequest specification only registers in environments that are cloud-credentials-operator enabled. (LOG-5701)
- Before this update, the Logging Operator watched and processed all config maps across the cluster. With this update, the dashboard controller only watches the config map for the logging dashboard. (LOG-5702)
- Before this update, the ClusterLogForwarder introduced an extra space in the message payload, which did not follow the RFC3164 specification. With this update, the extra space has been removed, fixing the issue. (LOG-5707)
- Before this update, removing the seeding for grafana-dashboard-cluster-logging as a part of (LOG-5308) broke new greenfield deployments without dashboards. With this update, the Logging Operator seeds the dashboard at the beginning and continues to update it for changes. (LOG-5747)
- Before this update, LokiStack was missing a route for the Volume API, causing the following error: 404 not found. With this update, LokiStack exposes the Volume API, resolving the issue. (LOG-5749)
1.1.7.2. CVEs
1.1.8. Logging 5.9.3
This release includes OpenShift Logging Bug Fix Release 5.9.3
1.1.8.1. Bug Fixes
- Before this update, there was a delay in restarting Ingesters when configuring LokiStack, because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. (LOG-5614)
- Before this update, monitoring the Vector collector output buffer state was not possible. With this update, the Vector collector output buffer size can be monitored and alerted on, which improves observability and helps keep the system running optimally. (LOG-5586)
1.1.8.2. CVEs
1.1.9. Logging 5.9.2
This release includes OpenShift Logging Bug Fix Release 5.9.2
1.1.9.1. Bug Fixes
- Before this update, changes to the Logging Operator caused an error due to an incorrect configuration in the ClusterLogForwarder CR. As a result, upgrades to logging deleted the daemonset collector. With this update, the Logging Operator re-creates collector daemonsets except when a Not authorized to collect error occurs. (LOG-4910)
- Before this update, the rotated infrastructure log files were sent to the application index in some scenarios due to an incorrect configuration in the Vector log collector. With this update, the Vector log collector configuration avoids collecting any rotated infrastructure log files. (LOG-5156)
- Before this update, the Logging Operator did not monitor changes to the grafana-dashboard-cluster-logging config map. With this update, the Logging Operator monitors changes in the ConfigMap objects, ensuring the system stays synchronized and responds effectively to config map modifications. (LOG-5308)
- Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. (LOG-5426)
- Before this change, the Fluentd out_http plugin ignored the no_proxy environment variable. With this update, Fluentd patches the Ruby HTTP#start method to honor the no_proxy environment variable. (LOG-5466)
1.1.9.2. CVEs
1.1.10. Logging 5.9.1
This release includes OpenShift Logging Bug Fix Release 5.9.1
1.1.10.1. Enhancements
- Before this update, the Loki Operator configured Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. (LOG-5401)
- Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. (LOG-5395)
1.1.10.2. Bug Fixes
- Before this update, a bug in LogQL parsing left out some line filters from the query. With this update, the parsing now includes all the line filters while keeping the original query unchanged. (LOG-5268)
- Before this update, a prune filter without a defined pruneFilterSpec would cause a segfault. With this update, there is a validation error if a prune filter is without a defined pruneFilterSpec; see the sketch after this list. (LOG-5322)
- Before this update, a drop filter without a defined dropTestsSpec would cause a segfault. With this update, there is a validation error if a drop filter is without a defined dropTestsSpec. (LOG-5323)
- Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack. (LOG-5397)
- Before this update, poorly formatted timestamp fields in audit log records led to WARN messages in Red Hat OpenShift Logging Operator logs. With this update, a remap transformation ensures that the timestamp field is properly formatted. (LOG-4672)
- Before this update, the error message thrown while validating a ClusterLogForwarder resource name and namespace did not correspond to the correct error. With this update, the system checks whether a ClusterLogForwarder resource with the same name exists in the same namespace and reports the corresponding error. (LOG-5062)
- Before this update, the validation feature for output config required a TLS URL, even for services such as Amazon CloudWatch or Google Cloud Logging where a URL is not needed by design. With this update, the validation logic for services without URLs is improved, and the error messages are more informative. (LOG-5307)
- Before this update, defining an infrastructure input type did not exclude logging workloads from the collection. With this update, the collection excludes logging services to avoid feedback loops. (LOG-5309)
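For comparison, a prune filter with its spec defined (avoiding the LOG-5322 validation error) might look like the following sketch; the filter name and field paths are placeholders:

spec:
  filters:
  - name: prune-labels  # placeholder filter name
    type: prune
    prune:
      in:
      - .kubernetes.labels  # placeholder: fields to remove from records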
1.1.10.3. CVEs
No CVEs.
1.1.11. Logging 5.9.0
This release includes OpenShift Logging Bug Fix Release 5.9.0
1.1.11.1. Removal notice
The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. Instances of the OpenShift Elasticsearch Operator from prior logging releases remain supported until the EOL of the logging release. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators.
1.1.11.2. Deprecation notice
- In Logging 5.9, Fluentd and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of OpenShift Container Platform. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Red Hat OpenShift Logging Operator and LokiStack provided by the Loki Operator are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward.
- In Logging 5.9, the Fields option for the Splunk output type was never implemented and is now deprecated. It will be removed in a future release.
1.1.11.3. Enhancements
1.1.11.3.1. Log Collection
- This enhancement adds the ability to refine the process of log collection by using a workload's metadata to drop or prune logs based on their content. Additionally, it allows the collection of infrastructure logs, such as journal or container logs, and audit logs, such as kube api or ovn logs, to only collect individual sources. (LOG-2155)
- This enhancement introduces a new type of remote log receiver, the syslog receiver. You can configure it to expose a port over a network, allowing external systems to send syslog logs using compatible tools such as rsyslog. (LOG-3527)
- With this update, the ClusterLogForwarder API now supports log forwarding to Azure Monitor Logs, giving users better monitoring abilities. This feature helps users to maintain optimal system performance and streamline the log analysis processes in Azure Monitor, which speeds up issue resolution and improves operational efficiency. (LOG-4605)
- This enhancement improves collector resource utilization by deploying collectors as a deployment with two replicas, instead of a daemon set on all nodes, when the only input source defined in the ClusterLogForwarder custom resource (CR) is a receiver input. Additionally, collectors deployed in this manner do not mount the host file system. To use this enhancement, you need to annotate the ClusterLogForwarder CR with the logging.openshift.io/dev-preview-enable-collector-as-deployment annotation. (LOG-4779)
- This enhancement introduces the capability for custom tenant configuration across all supported outputs, facilitating the organization of log records in a logical manner. However, it does not permit custom tenant configuration for logging managed storage. (LOG-4843)
- With this update, a ClusterLogForwarder CR that specifies an application input with one or more infrastructure namespaces like default, openshift*, or kube* now requires a service account with the collect-infrastructure-logs role. (LOG-4943)
- This enhancement introduces the capability for tuning some output settings, such as compression, retry duration, and maximum payloads, to match the characteristics of the receiver. Additionally, this feature includes a delivery mode to allow administrators to choose between throughput and log durability. For example, the AtLeastOnce option configures minimal disk buffering of collected logs so that the collector can deliver those logs after a restart. See the sketch after this list. (LOG-5026)
- This enhancement adds three new Prometheus alerts, warning users about the deprecation of Elasticsearch, Fluentd, and Kibana. (LOG-5055)
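A rough sketch of the output tuning from LOG-5026 follows. The output name and URL are placeholders, and the field names are assumptions based on the tuning settings described above, so verify them against the ClusterLogForwarder API reference:

spec:
  outputs:
  - name: my-loki  # placeholder output name
    type: loki
    url: https://loki.example.com  # placeholder URL
    tuning:
      delivery: AtLeastOnce  # favors durability over throughput
      compression: gzip  # assumed example value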
1.1.11.3.2. Log Storage
- This enhancement in LokiStack improves support for OTEL by using the new V13 object storage format and enabling automatic stream sharding by default. This also prepares the collector for future enhancements and configurations. (LOG-4538)
- This enhancement introduces support for short-lived token workload identity federation with Azure and AWS log stores for STS-enabled OpenShift Container Platform 4.14 and later clusters. Local storage requires the addition of a CredentialMode: static annotation under spec.storage.secret in the LokiStack CR; see the sketch after this list. (LOG-4540)
- With this update, the validation of the Azure storage secret is now extended to give early warning for certain error conditions. (LOG-4571)
- With this update, Loki now adds upstream and downstream support for GCP workload identity federation mechanism. This allows authenticated and authorized access to the corresponding object storage services. (LOG-4754)
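The release note for LOG-4540 describes a CredentialMode: static annotation under spec.storage.secret; a hypothetical LokiStack fragment might look like the following, with the secret name as a placeholder and the exact field placement to be verified against the installed CRD:

spec:
  storage:
    secret:
      name: logging-loki-s3  # placeholder object storage secret
      type: s3
      credentialMode: static  # per LOG-4540; verify against the CRD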
1.1.11.4. Bug Fixes
- Before this update, the logging must-gather could not collect any logs on a FIPS-enabled cluster. With this update, a new oc client is available in cluster-logging-rhel9-operator, and must-gather works properly on FIPS clusters. (LOG-4403)
- Before this update, the LokiStack ruler pods could not format the IPv6 pod IP in HTTP URLs used for cross-pod communication. This issue caused querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the problem. Now, querying rules and alerts through the Prometheus-compatible API works just like in IPv4 environments. (LOG-4709)
- Before this fix, the YAML content from the logging must-gather was exported in a single line, making it unreadable. With this update, the YAML white spaces are preserved, ensuring that the file is properly formatted. (LOG-4792)
- Before this update, when the ClusterLogForwarder CR was enabled, the Red Hat OpenShift Logging Operator could run into a nil pointer exception when ClusterLogging.Spec.Collection was nil. With this update, the issue is now resolved in the Red Hat OpenShift Logging Operator. (LOG-5006)
- Before this update, in specific corner cases, replacing the ClusterLogForwarder CR status field caused the resourceVersion to constantly update due to changing timestamps in Status conditions. This condition led to an infinite reconciliation loop. With this update, all status conditions synchronize, so that timestamps remain unchanged if conditions stay the same. (LOG-5007)
- Before this update, the collector's internal buffering behavior was set to drop_newest to address high memory consumption, which resulted in significant log loss. With this update, the behavior reverts to using the collector defaults. (LOG-5123)
- Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. (LOG-5165)
- Before this update, the configuration of the Loki Operator ServiceMonitor could match many Kubernetes services, resulting in the Loki Operator metrics being collected multiple times. With this update, the configuration of ServiceMonitor now only matches the dedicated metrics service. (LOG-5212)
1.1.11.5. Known Issues
None.
1.1.11.6. CVEs
1.2. Logging 5.8
Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y, where x.y
represents the major and minor version of logging you have installed. For example, stable-5.7.
1.2.1. Logging 5.8.16
This release includes RHBA-2024:10989 and RHBA-2024:143685.
1.2.1.1. Bug fixes
- Before this update, Loki automatically tried to guess the log level of log messages, which caused confusion because the collector already does this, and Loki and the collector would sometimes come to different results. With this update, the automatic log level discovery in Loki is disabled. (LOG-6322)
1.2.1.2. CVEs
- CVE-2019-12900
- CVE-2021-3903
- CVE-2023-38709
- CVE-2024-2236
- CVE-2024-2511
- CVE-2024-3596
- CVE-2024-4603
- CVE-2024-4741
- CVE-2024-5535
- CVE-2024-6232
- CVE-2024-9287
- CVE-2024-10041
- CVE-2024-10963
- CVE-2024-11168
- CVE-2024-24795
- CVE-2024-36387
- CVE-2024-41009
- CVE-2024-42244
- CVE-2024-47175
- CVE-2024-47875
- CVE-2024-50226
- CVE-2024-50602
1.2.2. Logging 5.8.15
This release includes RHBA-2024:10052 and RHBA-2024:10053.
1.2.2.1. Bug fixes
- Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. (LOG-6294)
- Before this update, upgrades to version 6.0 failed with errors if a Log File Metric Exporter instance was present. This update fixes the issue, enabling upgrades to proceed smoothly without errors. (LOG-6328)
1.2.2.2. CVEs
- CVE-2021-47385
- CVE-2023-28746
- CVE-2023-48161
- CVE-2023-52658
- CVE-2024-6119
- CVE-2024-6232
- CVE-2024-21208
- CVE-2024-21210
- CVE-2024-21217
- CVE-2024-21235
- CVE-2024-27403
- CVE-2024-35989
- CVE-2024-36889
- CVE-2024-36978
- CVE-2024-38556
- CVE-2024-39483
- CVE-2024-39502
- CVE-2024-40959
- CVE-2024-42079
- CVE-2024-42272
- CVE-2024-42284
- CVE-2024-3596
- CVE-2024-5535
1.2.3. Logging 5.8.14
This release includes OpenShift Logging Bug Fix Release 5.8.14 and OpenShift Logging Bug Fix Release 5.8.14.
1.2.3.1. Bug fixes
- Before this update, it was possible to set the .containerLimit.maxRecordsPerSecond parameter in the ClusterLogForwarder custom resource to 0, which could lead to an exception during Vector's startup. With this update, the configuration is validated before being applied, and any invalid values (less than or equal to zero) are rejected. (LOG-4671)
- Before this update, the Loki Operator did not automatically add the default namespace label to all its alerting rules, which caused the Alertmanager instance for user-defined projects to skip routing such alerts. With this update, all alerting and recording rules have the namespace label, and Alertmanager now routes these alerts correctly. (LOG-6182)
- Before this update, the LokiStack ruler component view was not properly initialized, which caused the invalid field error when the ruler component was disabled. With this update, the issue is resolved by the component view being initialized with an empty value. (LOG-6184)
1.2.3.2. CVEs
For detailed information on Red Hat security ratings, review Severity ratings.
1.2.4. Logging 5.8.13
This release includes OpenShift Logging Bug Fix Release 5.8.13 and OpenShift Logging Bug Fix Release 5.8.13.
1.2.4.1. Bug fixes
- Before this update, the clusterlogforwarder.spec.outputs.http.timeout parameter was not applied to the Fluentd configuration when Fluentd was used as the collector type, causing HTTP timeouts to be misconfigured. With this update, the parameter is now correctly applied, ensuring that Fluentd honors the specified timeout and handles HTTP connections according to the user's configuration. (LOG-5210)
- Before this update, the Elasticsearch Operator did not issue an alert to inform users about the upcoming removal, leaving existing installations unsupported without notice. With this update, the Elasticsearch Operator will trigger a continuous alert on OpenShift Container Platform version 4.16 and later, notifying users of its removal from the catalog in November 2025. (LOG-5966)
- Before this update, the Red Hat OpenShift Logging Operator was unavailable on OpenShift Container Platform version 4.16 and later, preventing Telco customers from completing their certifications for the upcoming Logging 6.0 release. With this update, the Red Hat OpenShift Logging Operator is now available on OpenShift Container Platform versions 4.16 and 4.17, resolving the issue. (LOG-6103)
- Before this update, the Elasticsearch Operator was not available in the OpenShift Container Platform versions 4.17 and 4.18, preventing the installation of ServiceMesh, Kiali, and Distributed Tracing. With this update, the Elasticsearch Operator properties have been expanded for OpenShift Container Platform versions 4.17 and 4.18, resolving the issue and allowing ServiceMesh, Kiali, and Distributed Tracing operators to install their stacks. (LOG-6134)
1.2.4.2. CVEs
- CVE-2023-52463
- CVE-2023-52801
- CVE-2024-6104
- CVE-2024-6119
- CVE-2024-26629
- CVE-2024-26630
- CVE-2024-26720
- CVE-2024-26886
- CVE-2024-26946
- CVE-2024-34397
- CVE-2024-35791
- CVE-2024-35797
- CVE-2024-35875
- CVE-2024-36000
- CVE-2024-36019
- CVE-2024-36883
- CVE-2024-36979
- CVE-2024-38559
- CVE-2024-38619
- CVE-2024-39331
- CVE-2024-40927
- CVE-2024-40936
- CVE-2024-41040
- CVE-2024-41044
- CVE-2024-41055
- CVE-2024-41073
- CVE-2024-41096
- CVE-2024-42082
- CVE-2024-42096
- CVE-2024-42102
- CVE-2024-42131
- CVE-2024-45490
- CVE-2024-45491
- CVE-2024-45492
- CVE-2024-2398
- CVE-2024-4032
- CVE-2024-6232
- CVE-2024-6345
- CVE-2024-6923
- CVE-2024-30203
- CVE-2024-30205
For detailed information on Red Hat security ratings, review Severity ratings.
1.2.5. Logging 5.8.12
This release includes OpenShift Logging Bug Fix Release 5.8.12 and OpenShift Logging Bug Fix Release 5.8.12.
1.2.5.1. Bug fixes
- Before this update, the collector used internal buffering with the drop_newest setting to reduce high memory usage, which caused significant log loss. With this update, the collector goes back to its default behavior, where sink<>.buffer is not customized. (LOG-6026)
1.2.5.2. CVEs
- CVE-2023-52771
- CVE-2023-52880
- CVE-2024-2398
- CVE-2024-6345
- CVE-2024-6923
- CVE-2024-26581
- CVE-2024-26668
- CVE-2024-26810
- CVE-2024-26855
- CVE-2024-26908
- CVE-2024-26925
- CVE-2024-27016
- CVE-2024-27019
- CVE-2024-27020
- CVE-2024-27415
- CVE-2024-35839
- CVE-2024-35896
- CVE-2024-35897
- CVE-2024-35898
- CVE-2024-35962
- CVE-2024-36003
- CVE-2024-36025
- CVE-2024-37370
- CVE-2024-37371
- CVE-2024-37891
- CVE-2024-38428
- CVE-2024-38476
- CVE-2024-38538
- CVE-2024-38540
- CVE-2024-38544
- CVE-2024-38579
- CVE-2024-38608
- CVE-2024-39476
- CVE-2024-40905
- CVE-2024-40911
- CVE-2024-40912
- CVE-2024-40914
- CVE-2024-40929
- CVE-2024-40939
- CVE-2024-40941
- CVE-2024-40957
- CVE-2024-40978
- CVE-2024-40983
- CVE-2024-41041
- CVE-2024-41076
- CVE-2024-41090
- CVE-2024-41091
- CVE-2024-42110
- CVE-2024-42152
1.2.6. Logging 5.8.11
This release includes OpenShift Logging Bug Fix Release 5.8.11 and OpenShift Logging Bug Fix Release 5.8.11.
1.2.6.1. Bug fixes
- Before this update, the TLS section was added without verifying the broker URL schema, leading to SSL connection errors if the URLs did not start with tls. With this update, the TLS section is added only if broker URLs start with tls, preventing SSL connection errors. (LOG-5139)
- Before this update, the Loki Operator did not trigger alerts when it dropped log events due to validation failures. With this update, the Loki Operator includes a new alert definition that triggers an alert if Loki drops log events due to validation failures. (LOG-5896)
- Before this update, the 4.16 GA catalog did not include Elasticsearch Operator 5.8, preventing the installation of products like Service Mesh, Kiali, and Tracing. With this update, Elasticsearch Operator 5.8 is now available on 4.16, resolving the issue and providing support for Elasticsearch storage for these products only. (LOG-5911)
- Before this update, duplicate conditions in the LokiStack resource status led to invalid metrics from the Loki Operator. With this update, the Operator removes duplicate conditions from the status. (LOG-5857)
- Before this update, the Loki Operator overwrote user annotations on the LokiStack Route resource, causing customizations to drop. With this update, the Loki Operator no longer overwrites Route annotations, fixing the issue. (LOG-5946)
1.2.6.2. CVEs
- CVE-2021-47548
- CVE-2021-47596
- CVE-2022-48627
- CVE-2023-52638
- CVE-2024-4032
- CVE-2024-6409
- CVE-2024-21131
- CVE-2024-21138
- CVE-2024-21140
- CVE-2024-21144
- CVE-2024-21145
- CVE-2024-21147
- CVE-2024-24806
- CVE-2024-26783
- CVE-2024-26858
- CVE-2024-27397
- CVE-2024-27435
- CVE-2024-35235
- CVE-2024-35958
- CVE-2024-36270
- CVE-2024-36886
- CVE-2024-36904
- CVE-2024-36957
- CVE-2024-38473
- CVE-2024-38474
- CVE-2024-38475
- CVE-2024-38477
- CVE-2024-38543
- CVE-2024-38586
- CVE-2024-38593
- CVE-2024-38663
- CVE-2024-39573
1.2.7. Logging 5.8.10
This release includes OpenShift Logging Bug Fix Release 5.8.10 and OpenShift Logging Bug Fix Release 5.8.10.
1.2.7.1. Known issues
- Before this update, when enabling retention, the Loki Operator produced an invalid configuration. As a result, Loki did not start properly. With this update, Loki pods can set retention. (LOG-5821)
1.2.7.2. Bug fixes
- Before this update, the ClusterLogForwarder introduced an extra space in the message payload that did not follow the RFC3164 specification. With this update, the extra space has been removed, fixing the issue. (LOG-5647)
1.2.7.3. CVEs
1.2.8. Logging 5.8.9
This release includes OpenShift Logging Bug Fix Release 5.8.9 and OpenShift Logging Bug Fix Release 5.8.9.
1.2.8.1. Bug fixes
- Before this update, an issue prevented selecting pods that no longer existed, even if they had generated logs. With this update, this issue has been fixed, allowing selection of such pods. (LOG-5698)
- Before this update, LokiStack was missing a route for the Volume API, which caused the following error: 404 not found. With this update, LokiStack exposes the Volume API, resolving the issue. (LOG-5750)
- Before this update, the Elasticsearch Operator overwrote all service account annotations without considering ownership. As a result, the kube-controller-manager recreated service account secrets because it lost the link to the owning service account. With this update, the Elasticsearch Operator merges annotations, resolving the issue. (LOG-5776)
1.2.8.2. CVEs
1.2.9. Logging 5.8.8
This release includes OpenShift Logging Bug Fix Release 5.8.8 and OpenShift Logging Bug Fix Release 5.8.8.
1.2.9.1. Bug fixes
- Before this update, there was a delay in restarting Ingesters when configuring LokiStack, because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. (LOG-5615)
1.2.9.2. CVEs
- CVE-2020-15778
- CVE-2021-43618
- CVE-2023-6004
- CVE-2023-6597
- CVE-2023-6918
- CVE-2023-7008
- CVE-2024-0450
- CVE-2024-2961
- CVE-2024-22365
- CVE-2024-25062
- CVE-2024-26458
- CVE-2024-26461
- CVE-2024-26642
- CVE-2024-26643
- CVE-2024-26673
- CVE-2024-26804
- CVE-2024-28182
- CVE-2024-32487
- CVE-2024-33599
- CVE-2024-33600
- CVE-2024-33601
- CVE-2024-33602
1.2.10. Logging 5.8.7
This release includes OpenShift Logging Bug Fix Release 5.8.7 Security Update and OpenShift Logging Bug Fix Release 5.8.7.
1.2.10.1. Bug fixes
- Before this update, the elasticsearch-im-<type>-* pods failed if no <type> logs (audit, infrastructure, or application) were collected. With this update, the pods no longer fail when <type> logs are not collected. (LOG-4949)
- Before this update, the validation feature for output config required an SSL/TLS URL, even for services such as Amazon CloudWatch or Google Cloud Logging where a URL is not needed by design. With this update, the validation logic for services without URLs is improved, and the error message is more informative. (LOG-5467)
- Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. (LOG-5471)
- Before this update, changes to the Logging Operator caused an error due to an incorrect configuration in the ClusterLogForwarder CR. As a result, upgrades to logging deleted the daemonset collector. With this update, the Logging Operator re-creates collector daemonsets except when a Not authorized to collect error occurs. (LOG-5514)
1.2.10.2. CVEs
- CVE-2020-26555
- CVE-2021-29390
- CVE-2022-0480
- CVE-2022-38096
- CVE-2022-40090
- CVE-2022-45934
- CVE-2022-48554
- CVE-2022-48624
- CVE-2023-2975
- CVE-2023-3446
- CVE-2023-3567
- CVE-2023-3618
- CVE-2023-3817
- CVE-2023-4133
- CVE-2023-5678
- CVE-2023-6040
- CVE-2023-6121
- CVE-2023-6129
- CVE-2023-6176
- CVE-2023-6228
- CVE-2023-6237
- CVE-2023-6531
- CVE-2023-6546
- CVE-2023-6622
- CVE-2023-6915
- CVE-2023-6931
- CVE-2023-6932
- CVE-2023-7008
- CVE-2023-24023
- CVE-2023-25193
- CVE-2023-25775
- CVE-2023-28464
- CVE-2023-28866
- CVE-2023-31083
- CVE-2023-31122
- CVE-2023-37453
- CVE-2023-38469
- CVE-2023-38470
- CVE-2023-38471
- CVE-2023-38472
- CVE-2023-38473
- CVE-2023-39189
- CVE-2023-39193
- CVE-2023-39194
- CVE-2023-39198
- CVE-2023-40745
- CVE-2023-41175
- CVE-2023-42754
- CVE-2023-42756
- CVE-2023-43785
- CVE-2023-43786
- CVE-2023-43787
- CVE-2023-43788
- CVE-2023-43789
- CVE-2023-45288
- CVE-2023-45863
- CVE-2023-46862
- CVE-2023-47038
- CVE-2023-51043
- CVE-2023-51779
- CVE-2023-51780
- CVE-2023-52434
- CVE-2023-52448
- CVE-2023-52476
- CVE-2023-52489
- CVE-2023-52522
- CVE-2023-52529
- CVE-2023-52574
- CVE-2023-52578
- CVE-2023-52580
- CVE-2023-52581
- CVE-2023-52597
- CVE-2023-52610
- CVE-2023-52620
- CVE-2024-0565
- CVE-2024-0727
- CVE-2024-0841
- CVE-2024-1085
- CVE-2024-1086
- CVE-2024-21011
- CVE-2024-21012
- CVE-2024-21068
- CVE-2024-21085
- CVE-2024-21094
- CVE-2024-22365
- CVE-2024-25062
- CVE-2024-26582
- CVE-2024-26583
- CVE-2024-26584
- CVE-2024-26585
- CVE-2024-26586
- CVE-2024-26593
- CVE-2024-26602
- CVE-2024-26609
- CVE-2024-26633
- CVE-2024-27316
- CVE-2024-28834
- CVE-2024-28835
1.2.11. Logging 5.8.6
This release includes OpenShift Logging Bug Fix Release 5.8.6 Security Update and OpenShift Logging Bug Fix Release 5.8.6.
1.2.11.1. Enhancements
- Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. (LOG-5392)
- Before this update, the Loki Operator configured Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. (LOG-5402)
1.2.11.2. Bug fixes
- Before this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and certificate authority (CA) files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring specification on the ServiceMonitor configuration. With this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring specifications in the Prometheus Operator to handle the Elasticsearch Operator ServiceMonitor successfully. This enables Prometheus to scrape the Elasticsearch Operator metrics. (LOG-5164)
- Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack. (LOG-5398)
1.2.11.3. CVEs
1.2.12. Logging 5.8.5
This release includes OpenShift Logging Bug Fix Release 5.8.5.
1.2.12.1. Bug fixes
- Before this update, the configuration of the Loki Operator's ServiceMonitor could match many Kubernetes services, resulting in the Loki Operator's metrics being collected multiple times. With this update, the configuration of ServiceMonitor now only matches the dedicated metrics service. (LOG-5250)
- Before this update, the Red Hat build pipeline did not use the existing build details in Loki builds and omitted information such as revision, branch, and version. With this update, the Red Hat build pipeline now adds these details to the Loki builds, fixing the issue. (LOG-5201)
- Before this update, the Loki Operator checked if the pods were running to decide if the LokiStack was ready. With this update, it also checks if the pods are ready, so that the readiness of the LokiStack reflects the state of its components. (LOG-5171)
- Before this update, running a query for log metrics caused an error in the histogram. With this update, the histogram toggle function and the chart are disabled and hidden because the histogram does not work with log metrics. (LOG-5044)
- Before this update, the Loki and Elasticsearch bundle had the wrong maxOpenShiftVersion, resulting in IncompatibleOperatorsInstalled alerts. With this update, including 4.16 as the maxOpenShiftVersion property in the bundle fixes the issue. (LOG-5272)
- Before this update, the build pipeline did not include linker flags for the build date, causing Loki builds to show empty strings for buildDate and goVersion. With this update, adding the missing linker flags in the build pipeline fixes the issue. (LOG-5274)
- Before this update, a bug in LogQL parsing left out some line filters from the query. With this update, the parsing now includes all the line filters while keeping the original query unchanged. (LOG-5270)
- Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. (LOG-5240)
1.2.12.2. CVEs
1.2.13. Logging 5.8.4
This release includes OpenShift Logging Bug Fix Release 5.8.4.
1.2.13.1. Bug fixes
- Before this update, the developer console’s logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, all supported OCP versions ensure correct namespace inclusion. (LOG-4905)
- Before this update, the Cluster Logging Operator deployed ClusterRoles supporting LokiStack deployments only when the default log output was LokiStack. With this update, the roles are split into two groups: read and write. The write roles are deployed based on the setting of the default log storage, just as all the roles were before. The read roles are deployed based on whether the logging console plugin is active. (LOG-4987)
- Before this update, multiple ClusterLogForwarders defining the same input receiver name had their service endlessly reconciled because of changing ownerReferences on one service. With this update, each receiver input has its own service named with the convention of <CLF.Name>-<input.Name>. (LOG-5009)
- Before this update, the ClusterLogForwarder did not report errors when forwarding logs to CloudWatch without a secret. With this update, the following error message appears when forwarding logs to CloudWatch without a secret: secret must be provided for cloudwatch output. (LOG-5021)
- Before this update, the log_forwarder_input_info metric included application, infrastructure, and audit input metric points. With this update, http is also added as a metric point. (LOG-5043)
1.2.13.2. CVEs
- CVE-2021-35937
- CVE-2021-35938
- CVE-2021-35939
- CVE-2022-3545
- CVE-2022-24963
- CVE-2022-36402
- CVE-2022-41858
- CVE-2023-2166
- CVE-2023-2176
- CVE-2023-3777
- CVE-2023-3812
- CVE-2023-4015
- CVE-2023-4622
- CVE-2023-4623
- CVE-2023-5178
- CVE-2023-5363
- CVE-2023-5388
- CVE-2023-5633
- CVE-2023-6679
- CVE-2023-7104
- CVE-2023-27043
- CVE-2023-38409
- CVE-2023-40283
- CVE-2023-42753
- CVE-2023-43804
- CVE-2023-45803
- CVE-2023-46813
- CVE-2024-20918
- CVE-2024-20919
- CVE-2024-20921
- CVE-2024-20926
- CVE-2024-20945
- CVE-2024-20952
1.2.14. Logging 5.8.3
This release includes Logging Bug Fix 5.8.3 and Logging Security Fix 5.8.3
1.2.14.1. Bug fixes
- Before this update, when configured to read a custom S3 Certificate Authority, the Loki Operator would not automatically update the configuration when the name of the ConfigMap or its contents changed. With this update, the Loki Operator watches for changes to the ConfigMap and automatically updates the generated configuration. (LOG-4969)
- Before this update, Loki outputs configured without a valid URL caused the collector pods to crash. With this update, outputs are subject to URL validation, resolving the issue. (LOG-4822)
- Before this update, the Cluster Logging Operator would generate collector configuration fields for outputs that did not specify a secret to use the service account bearer token. With this update, an output does not require authentication, resolving the issue. (LOG-4962)
- Before this update, the tls.insecureSkipVerify field of an output could not be set to true without a secret defined. With this update, a secret is no longer required to set this value; see the sketch after this list. (LOG-4963)
- Before this update, output configurations allowed the combination of an insecure (HTTP) URL with TLS authentication. With this update, outputs configured for TLS authentication require a secure (HTTPS) URL. (LOG-4893)
1.2.14.2. CVEs
1.2.15. Logging 5.8.2
This release includes OpenShift Logging Bug Fix Release 5.8.2.
1.2.15.1. Bug fixes
- Before this update, the LokiStack ruler pods would not format the IPv6 pod IP in HTTP URLs used for cross pod communication, causing querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the issue. (LOG-4890)
- Before this update, the developer console logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, namespace inclusion has been corrected, resolving the issue. (LOG-4947)
- Before this update, the logging view plugin of the OpenShift Container Platform web console did not allow for custom node placement and tolerations. With this update, defining custom node placements and tolerations has been added to the logging view plugin of the OpenShift Container Platform web console. (LOG-4912)
1.2.15.2. CVEs
1.2.16. Logging 5.8.1
This release includes OpenShift Logging Bug Fix Release 5.8.1 and OpenShift Logging Bug Fix Release 5.8.1 Kibana.
1.2.16.1. Enhancements
1.2.16.1.1. Log Collection
- With this update, while configuring Vector as a collector, you can add logic to the Red Hat OpenShift Logging Operator to use a token specified in the secret in place of the token associated with the service account. (LOG-4780)
- With this update, the BoltDB Shipper Loki dashboards are now renamed to Index dashboards. (LOG-4828)
1.2.16.2. Bug fixes
- Before this update, the ClusterLogForwarder created empty indices after enabling the parsing of JSON logs, even when the rollover conditions were not met. With this update, the ClusterLogForwarder skips the rollover when the write-index is empty. (LOG-4452)
- Before this update, Vector set the default log level incorrectly. With this update, the correct log level is set by an improved regular expression (regexp) for log level detection. (LOG-4480)
- Before this update, during the process of creating index patterns, the default alias was missing from the initial index in each log output. As a result, Kibana users were unable to create index patterns by using OpenShift Elasticsearch Operator. This update adds the missing aliases to OpenShift Elasticsearch Operator, resolving the issue. Kibana users can now create index patterns that include the {app,infra,audit}-000001 indexes. (LOG-4683)
- Before this update, Fluentd collector pods were in a CrashLoopBackOff state due to binding of the Prometheus server on IPv6 clusters. With this update, the collectors work properly on IPv6 clusters. (LOG-4706)
- Before this update, the Red Hat OpenShift Logging Operator would undergo numerous reconciliations whenever there was a change in the ClusterLogForwarder. With this update, the Red Hat OpenShift Logging Operator disregards the status changes in the collector daemonsets that triggered the reconciliations. (LOG-4741)
- Before this update, the Vector log collector pods were stuck in the CrashLoopBackOff state on IBM Power machines. With this update, the Vector log collector pods start successfully on IBM Power architecture machines. (LOG-4768)
- Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Fluentd collector pods. With this update, the log collector service account is used by default for authentication, using the associated token and ca.crt. (LOG-4791)
- Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Vector collector pods. With this update, the log collector service account is used by default for authentication, using the associated token and ca.crt. (LOG-4852)
- Before this fix, IPv6 addresses would not be parsed correctly after evaluating a host or multiple hosts for placeholders. With this update, IPv6 addresses are correctly parsed. (LOG-4811)
- Before this update, it was necessary to create a ClusterRoleBinding to collect audit permissions for HTTP receiver inputs. With this update, it is not necessary to create the ClusterRoleBinding because the endpoint already depends upon the cluster certificate authority. (LOG-4815)
- Before this update, the Loki Operator did not mount a custom CA bundle to the ruler pods. As a result, during the process to evaluate alerting or recording rules, object storage access failed. With this update, the Loki Operator mounts the custom CA bundle to all ruler pods. The ruler pods can download logs from object storage to evaluate alerting or recording rules. (LOG-4836)
- Before this update, while removing the inputs.receiver section in the ClusterLogForwarder, the HTTP input services and their associated secrets were not deleted. With this update, the HTTP input resources are deleted when not needed. (LOG-4612)
- Before this update, the ClusterLogForwarder indicated validation errors in the status, but the outputs and the pipeline status did not accurately reflect the specific issues. With this update, the pipeline status displays the validation failure reasons correctly in case of misconfigured outputs, inputs, or filters. (LOG-4821)
- Before this update, changing a LogQL query that used controls such as time range or severity changed the label matcher operator, defining it like a regular expression. With this update, regular expression operators remain unchanged when updating the query. (LOG-4841)
1.2.16.3. CVEs
- CVE-2007-4559
- CVE-2021-3468
- CVE-2021-3502
- CVE-2021-3826
- CVE-2021-43618
- CVE-2022-3523
- CVE-2022-3565
- CVE-2022-3594
- CVE-2022-4285
- CVE-2022-38457
- CVE-2022-40133
- CVE-2022-40982
- CVE-2022-41862
- CVE-2022-42895
- CVE-2023-0597
- CVE-2023-1073
- CVE-2023-1074
- CVE-2023-1075
- CVE-2023-1076
- CVE-2023-1079
- CVE-2023-1206
- CVE-2023-1249
- CVE-2023-1252
- CVE-2023-1652
- CVE-2023-1855
- CVE-2023-1981
- CVE-2023-1989
- CVE-2023-2731
- CVE-2023-3138
- CVE-2023-3141
- CVE-2023-3161
- CVE-2023-3212
- CVE-2023-3268
- CVE-2023-3316
- CVE-2023-3358
- CVE-2023-3576
- CVE-2023-3609
- CVE-2023-3772
- CVE-2023-3773
- CVE-2023-4016
- CVE-2023-4128
- CVE-2023-4155
- CVE-2023-4194
- CVE-2023-4206
- CVE-2023-4207
- CVE-2023-4208
- CVE-2023-4273
- CVE-2023-4641
- CVE-2023-22745
- CVE-2023-26545
- CVE-2023-26965
- CVE-2023-26966
- CVE-2023-27522
- CVE-2023-29491
- CVE-2023-29499
- CVE-2023-30456
- CVE-2023-31486
- CVE-2023-32324
- CVE-2023-32573
- CVE-2023-32611
- CVE-2023-32665
- CVE-2023-33203
- CVE-2023-33285
- CVE-2023-33951
- CVE-2023-33952
- CVE-2023-34241
- CVE-2023-34410
- CVE-2023-35825
- CVE-2023-36054
- CVE-2023-37369
- CVE-2023-38197
- CVE-2023-38545
- CVE-2023-38546
- CVE-2023-39191
- CVE-2023-39975
- CVE-2023-44487
1.2.17. Logging 5.8.0
This release includes OpenShift Logging Bug Fix Release 5.8.0 and OpenShift Logging Bug Fix Release 5.8.0 Kibana.
1.2.17.1. Deprecation notice
In Logging 5.8, Elasticsearch, Fluentd, and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of OpenShift Container Platform. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Red Hat OpenShift Logging Operator and LokiStack provided by the Loki Operator are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward.
1.2.17.2. Enhancements
1.2.17.2.1. Log Collection
- With this update, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers; a minimal CR sketch follows this list. If you do not create the LogFileMetricExporter CR, you may see a No datapoints found message in the OpenShift Container Platform web console dashboard for Produced Logs. (LOG-3819)
- With this update, you can deploy multiple, isolated, and RBAC-protected ClusterLogForwarder custom resource (CR) instances in any namespace. This allows independent groups to forward desired logs to any destination while isolating their configuration from other collector deployments. (LOG-1343)
Important: To support multi-cluster log forwarding in namespaces other than the openshift-logging namespace, you must update the Red Hat OpenShift Logging Operator to watch all namespaces. This functionality is supported by default in new Red Hat OpenShift Logging Operator version 5.8 installations.
- With this update, you can use the flow control or rate limiting mechanism to limit the volume of log data that can be collected or forwarded by dropping excess log records. The input limits prevent poorly-performing containers from overloading the Logging and the output limits put a ceiling on the rate of logs shipped to a given data store. (LOG-884)
- With this update, you can configure the log collector to look for HTTP connections and receive logs as an HTTP server, also known as a webhook. (LOG-4562)
- With this update, you can configure audit policies to control which Kubernetes and OpenShift API server events are forwarded by the log collector. (LOG-3982)
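A minimal LogFileMetricExporter CR, as referenced in LOG-3819 above, might look like the following sketch; the apiVersion is an assumption, so check the CRD installed with your Operator version:

apiVersion: logging.openshift.io/v1alpha1  # assumed API group/version
kind: LogFileMetricExporter
metadata:
  name: instance
  namespace: openshift-logging
spec: {}  # defaults; resources and tolerations can be set here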
1.2.17.2.2. Log Storage
- With this update, LokiStack administrators can have more fine-grained control over who can access which logs by granting access to logs on a namespace basis. (LOG-3841)
- With this update, the Loki Operator introduces PodDisruptionBudget configuration on LokiStack deployments to ensure normal operations during OpenShift Container Platform cluster restarts by keeping ingestion and the query path available. (LOG-3839)
- With this update, the reliability of existing LokiStack installations is seamlessly improved by applying a set of default Affinity and Anti-Affinity policies. (LOG-3840)
- With this update, you can manage zone-aware data replication as an administrator in LokiStack, in order to enhance reliability in the event of a zone failure. (LOG-3266)
- With this update, a new supported small-scale LokiStack size of 1x.extra-small is introduced for OpenShift Container Platform clusters hosting a few workloads and smaller ingestion volumes (up to 100GB/day); see the sketch after this list. (LOG-4329)
- With this update, the LokiStack administrator has access to an official Loki dashboard to inspect the storage performance and the health of each component. (LOG-4327)
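Opting into the 1x.extra-small size from LOG-4329 is a one-line choice in the LokiStack CR; in this sketch, the names, storage secret, and storage class are placeholders:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki  # placeholder name
  namespace: openshift-logging
spec:
  size: 1x.extra-small  # new small-scale size (up to 100GB/day)
  storage:
    secret:
      name: logging-loki-s3  # placeholder object storage secret
      type: s3
  storageClassName: gp3-csi  # placeholder storage class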
1.2.17.2.3. Log Console
- With this update, you can enable the Logging Console Plugin when Elasticsearch is the default Log Store. (LOG-3856)
- With this update, OpenShift Container Platform application owners can receive notifications for application log-based alerts on the OpenShift Container Platform web console Developer perspective for OpenShift Container Platform version 4.14 and later. (LOG-3548)
1.2.17.3. Known Issues
Currently, Splunk log forwarding might not work after upgrading to version 5.8 of the Red Hat OpenShift Logging Operator. This issue is caused by transitioning from OpenSSL version 1.1.1 to version 3.0.7. In the newer OpenSSL version, there is a default behavior change, where connections to TLS 1.2 endpoints are rejected if they do not expose the RFC 5746 extension.
As a workaround, enable TLS 1.3 support on the TLS terminating load balancer in front of the Splunk HEC (HTTP Event Collector) endpoint. Splunk is a third-party system and this should be configured from the Splunk end.
- Currently, there is a flaw in handling multiplexed streams in the HTTP/2 protocol, where you can repeatedly make a request for a new multiplexed stream and immediately send an RST_STREAM frame to cancel it. This creates extra work for the server to set up and tear down the streams, resulting in a denial of service due to server resource consumption. There is currently no workaround for this issue. (LOG-4609)
- Currently, when using FluentD as the collector, the collector pod cannot start on an IPv6-enabled OpenShift Container Platform cluster. The pod logs produce the fluentd pod [error]: unexpected error error_class=SocketError error="getaddrinfo: Name or service not known" error. There is currently no workaround for this issue. (LOG-4706)
- Currently, the log alert is not available on an IPv6-enabled cluster. There is currently no workaround for this issue. (LOG-4709)
- Currently, must-gather cannot gather any logs on a FIPS-enabled cluster, because the required OpenSSL library is not available in the cluster-logging-rhel9-operator. There is currently no workaround for this issue. (LOG-4403)
- Currently, when deploying logging version 5.8 on a FIPS-enabled cluster, the collector pods cannot start and are stuck in CrashLoopBackOff status while using FluentD as a collector. There is currently no workaround for this issue. (LOG-3933)
1.2.17.4. CVEs
Chapter 2. Logging 6.0
2.1. Release notes
2.1.1. Logging 6.0.3
This release includes RHBA-2024:10991.
2.1.1.1. New features and enhancements
- With this update, the Loki Operator supports configuring workload identity federation on Google Cloud Platform (GCP) by using the Cloud Credential Operator (CCO) in OpenShift Container Platform 4.17 or later. (LOG-6421)
2.1.1.2. Bug fixes
- Before this update, the collector used the default settings to collect audit logs, which did not account for back pressure from output receivers. With this update, the audit log collection is optimized for file handling and log reading. (LOG-6034)
- Before this update, any namespace containing openshift or kube was treated as an infrastructure namespace. With this update, only the following namespaces are treated as infrastructure namespaces: default, kube, openshift, and namespaces that begin with openshift- or kube-. (LOG-6204)
Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the
ClusterLogForwarder
custom resource. (LOG-6343) - Before this update, pipeline validation might enter an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. (LOG-6352)
- Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. (LOG-6406)
- Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with a configured LokiStack output caused errors due to a nil pointer dereference. With this update, the Operator performs nil checks, preventing such errors. (LOG-6441)
- Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. (LOG-6486)
- Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume is correctly mounted, and the logs are collected as expected. (LOG-6543)
2.1.1.3. CVEs
2.1.2. Logging 6.0.2
This release includes RHBA-2024:10051.
2.1.2.1. Bug fixes
- Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. (LOG-5325)
- Before this update, the collector would discard audit log messages that exceeded the configured threshold. This update modifies the audit configuration thresholds for the maximum line size and the number of bytes read during a read cycle. (LOG-5998)
- Before this update, the Cluster Logging Operator did not watch and reconcile resources associated with an instance of a ClusterLogForwarder like it did in prior releases. This update modifies the operator to watch and reconcile all resources it owns and creates. (LOG-6264)
- Before this update, log events with an unknown severity level sent to Google Cloud Logging would trigger a warning in the vector collector, which would then default the severity to 'DEFAULT'. With this update, log severity levels are now standardized to match Google Cloud Logging specifications, and audit logs are assigned a severity of 'INFO'. (LOG-6296)
- Before this update, when infrastructure namespaces were included in application inputs, the log_type was set as application. With this update, the log_type of infrastructure namespaces included in application inputs is set to infrastructure. (LOG-6354)
- Before this update, specifying a value for the syslog.enrichment field of the ClusterLogForwarder added namespace_name, container_name, and pod_name to the messages of non-container logs. With this update, only container logs include namespace_name, container_name, and pod_name in their messages when syslog.enrichment is set. (LOG-6402)
2.1.2.2. CVEs
2.1.3. Logging 6.0.1
This release includes OpenShift Logging Bug Fix Release 6.0.1.
2.1.3.1. Bug fixes
- With this update, the default memory limit for the collector has been increased from 1024 Mi to 2024 Mi. However, users should always adjust their resource limits according to their cluster specifications and needs. (LOG-6180)
- Before this update, the Loki Operator failed to add the default namespace label to all AlertingRule resources, which caused the User-Workload-Monitoring Alertmanager to skip routing these alerts. This update adds the rule namespace as a label to all alerting and recording rules, resolving the issue and restoring proper alert routing in Alertmanager. (LOG-6151)
- Before this update, the LokiStack ruler component view did not initialize properly, causing an invalid field error when the ruler component was disabled. This update ensures that the component view initializes with an empty value, resolving the issue. (LOG-6129)
- Before this update, it was possible to set log_source in the prune filter, which could lead to inconsistent log data. With this update, the configuration is validated before being applied, and any configuration that includes log_source in the prune filter is rejected. (LOG-6202)
2.1.3.2. CVEs
2.1.4. Logging 6.0.0
This release includes Logging for Red Hat OpenShift Bug Fix Release 6.0.0.
Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
logging Version | Component Version | | | | | |
---|---|---|---|---|---|---|
Operator | eventrouter | logfilesmetricexporter | loki | lokistack-gateway | opa-openshift | vector |
6.0 | 0.4 | 1.1 | 3.1.0 | 0.1 | 0.1 | 0.37.1 |
2.1.5. Removal notice
- With this release, logging no longer supports the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io custom resources. Refer to the product documentation for details on the replacement features. (LOG-5803)
- With this release, logging no longer manages or deploys log storage (such as Elasticsearch), visualization (such as Kibana), or Fluentd-based log collectors. (LOG-5368)
To continue using Elasticsearch and Kibana managed by the elasticsearch-operator, the administrator must modify those objects’ ownerRefs before deleting the ClusterLogging resource.
2.1.6. New features and enhancements
- This feature introduces a new architecture for logging for Red Hat OpenShift by shifting component responsibilities to their relevant Operators, such as for storage, visualization, and collection. It introduces the ClusterLogForwarder.observability.openshift.io API for log collection and forwarding. Support for the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io APIs, along with the Red Hat managed Elastic stack (Elasticsearch and Kibana), is removed. Users are encouraged to migrate to the Red Hat LokiStack for log storage. Existing managed Elasticsearch deployments can be used for a limited time. Automated migration for log collection is not provided, so administrators need to create a new ClusterLogForwarder.observability.openshift.io specification to replace their previous custom resources. Refer to the official product documentation for more details. (LOG-3493)
- With this release, the responsibility for deploying the logging view plugin shifts from the Red Hat OpenShift Logging Operator to the Cluster Observability Operator (COO). For new log storage installations that need visualization, the Cluster Observability Operator and the associated UIPlugin resource must be deployed. Refer to the Cluster Observability Operator Overview product documentation for more details. (LOG-5461)
- This enhancement sets default requests and limits for Vector collector deployments' memory and CPU usage based on Vector documentation recommendations. (LOG-4745)
- This enhancement updates Vector to align with the upstream version v0.37.1. (LOG-5296)
- This enhancement introduces an alert that triggers when log collectors buffer logs to a node’s file system and use over 15% of the available space, indicating potential back pressure issues. (LOG-5381)
- This enhancement updates the selectors for all components to use common Kubernetes labels. (LOG-5906)
- This enhancement changes the collector configuration to deploy as a ConfigMap instead of a secret, allowing users to view and edit the configuration when the ClusterLogForwarder is set to Unmanaged. (LOG-5599)
- This enhancement adds the ability to configure the Vector collector log level using an annotation on the ClusterLogForwarder, with options including trace, debug, info, warn, error, or off. (LOG-5372)
- This enhancement adds validation to reject configurations where Amazon CloudWatch outputs use multiple AWS roles, preventing incorrect log routing. (LOG-5640)
- This enhancement removes the Log Bytes Collected and Log Bytes Sent graphs from the metrics dashboard. (LOG-5964)
- This enhancement updates the must-gather functionality to only capture information for inspecting Logging 6.0 components, including Vector deployments from ClusterLogForwarder.observability.openshift.io resources and the Red Hat managed LokiStack. (LOG-5949)
- This enhancement improves Azure storage secret validation by providing early warnings for specific error conditions. (LOG-4571)
2.1.7. Technology Preview features
- This release introduces a Technology Preview feature for log forwarding using OpenTelemetry. A new output type, OTLP, allows sending JSON-encoded log records using the OpenTelemetry data model and resource semantic conventions. (LOG-4225)
2.1.8. Bug fixes
- Before this update, the CollectorHighErrorRate and CollectorVeryHighErrorRate alerts were still present. With this update, both alerts are removed in the logging 6.0 release but might return in a future release. (LOG-3432)
2.1.9. CVEs
2.2. Logging 6.0
The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding.
2.2.1. Inputs and Outputs
Inputs specify the sources of logs to be forwarded. Logging provides built-in input types: application, infrastructure, and audit, which select logs from different parts of your cluster. You can also define custom inputs based on namespaces or pod labels to fine-tune log selection.
Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings.
2.2.2. Receiver Input Type
The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog.
The ReceiverSpec defines the configuration for a receiver input.
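For illustration, a minimal sketch of an http receiver input, using the 6.0 receiver format that also appears in the upgrade section of this chapter (the input name and port are placeholders):
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  inputs:
    - name: my-http-receiver      # placeholder name
      type: receiver
      receiver:
        type: http                # http or syslog
        port: 8443                # port the collector listens on
        http:
          format: kubeAPIAudit    # expected payload format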
2.2.3. Pipelines and Filters
Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. Filters can be used to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages.
2.2.4. Operator Behavior
The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field:
- When set to Managed (default), the operator actively manages the logging resources to match the configuration defined in the spec.
- When set to Unmanaged, the operator does not take any action, allowing you to manually manage the logging components.
2.2.5. Validation
Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios.
2.2.5.1. Quick Start
Prerequisites
- Cluster administrator permissions
Procedure
- Install the OpenShift Logging and Loki Operators from OperatorHub.
- Create a secret to access an existing object storage bucket:
Example command for AWS
$ oc create secret generic logging-loki-s3 \
  --from-literal=bucketnames="<bucket_name>" \
  --from-literal=endpoint="<aws_bucket_endpoint>" \
  --from-literal=access_key_id="<aws_access_key_id>" \
  --from-literal=access_key_secret="<aws_access_key_secret>" \
  --from-literal=region="<aws_region_of_your_bucket>" \
  -n openshift-logging
- Create a LokiStack custom resource (CR) in the openshift-logging namespace:
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  managementState: Managed
  size: 1x.extra-small
  storage:
    schemas:
      - effectiveDate: '2022-06-01'
        version: v13
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
Create a service account for the collector:
$ oc create sa collector -n openshift-logging
- Bind the ClusterRole to the service account:
$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging
- Install the Cluster Observability Operator.
- Create a UIPlugin to enable the Log section in the Observe tab:
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki
- Add additional roles to the collector service account:
$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging
- Create a ClusterLogForwarder CR to configure log forwarding:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  outputs:
    - name: default-lokistack
      type: lokiStack
      lokiStack:
        target:
          name: logging-loki
          namespace: openshift-logging
        authentication:
          token:
            from: serviceAccount
      tls:
        ca:
          key: service-ca.crt
          configMapName: openshift-service-ca.crt
  pipelines:
    - name: default-logstore
      inputRefs:
        - application
        - infrastructure
      outputRefs:
        - default-lokistack
- Verify that logs are visible in the Log section of the Observe tab in the OpenShift web console.
2.3. Upgrading to Logging 6.0
Logging v6.0 is a significant upgrade from previous releases, achieving several longstanding goals of Cluster Logging:
- Introduction of distinct operators to manage logging components (e.g., collectors, storage, visualization).
- Removal of support for managed log storage and visualization based on Elastic products (i.e., Elasticsearch, Kibana).
- Deprecation of the Fluentd log collector implementation.
- Removal of support for ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io resources.
The cluster-logging-operator does not provide an automated upgrade process.
Given the various configurations for log collection, forwarding, and storage, no automated upgrade is provided by the cluster-logging-operator. This documentation assists administrators in converting existing ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io specifications to the new API. Examples of migrated ClusterLogForwarder.observability.openshift.io resources for common use cases are included.
2.3.1. Using the oc explain command
The oc explain command is an essential tool in the OpenShift CLI (oc) that provides detailed descriptions of the fields within custom resources (CRs). This command is invaluable for administrators and developers who are configuring or troubleshooting resources in an OpenShift cluster.
2.3.1.1. Resource Descriptions
oc explain offers in-depth explanations of all fields associated with a specific object. This includes standard resources like pods and services, as well as more complex entities like statefulsets and custom resources defined by Operators.
To view the documentation for the outputs field of the ClusterLogForwarder custom resource, you can use:
$ oc explain clusterlogforwarders.observability.openshift.io.spec.outputs
In place of clusterlogforwarder, the short form obsclf can be used.
This will display detailed information about these fields, including their types, default values, and any associated sub-fields.
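For example, the same field documentation can be retrieved with the short form:
$ oc explain obsclf.spec.outputs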
2.3.1.2. Hierarchical Structure
The command displays the structure of resource fields in a hierarchical format, clarifying the relationships between different configuration options.
For instance, here’s how you can drill down into the storage configuration for a LokiStack custom resource:
$ oc explain lokistacks.loki.grafana.com
$ oc explain lokistacks.loki.grafana.com.spec
$ oc explain lokistacks.loki.grafana.com.spec.storage
$ oc explain lokistacks.loki.grafana.com.spec.storage.schemas
Each command reveals a deeper level of the resource specification, making the structure clear.
2.3.1.3. Type Information
oc explain also indicates the type of each field (such as string, integer, or boolean), allowing you to verify that resource definitions use the correct data types.
For example:
$ oc explain lokistacks.loki.grafana.com.spec.size
This will show that size should be defined using an integer value.
2.3.1.4. Default Values
When applicable, the command shows the default values for fields, providing insights into what values will be used if none are explicitly specified.
Again using lokistacks.loki.grafana.com as an example:
$ oc explain lokistacks.spec.template.distributor.replicas
Example output
GROUP:      loki.grafana.com
KIND:       LokiStack
VERSION:    v1

FIELD: replicas <integer>

DESCRIPTION:
    Replicas defines the number of replica pods of the component.
2.3.2. Log Storage
The only managed log storage solution available in this release is LokiStack, managed by the loki-operator. This solution, previously available as the preferred alternative to the managed Elasticsearch offering, remains unchanged in its deployment process.
To continue using an existing Red Hat managed Elasticsearch or Kibana deployment provided by the elasticsearch-operator, remove the owner references from the Elasticsearch resource named elasticsearch and the Kibana resource named kibana in the openshift-logging namespace before removing the ClusterLogging resource named instance in the same namespace.
- Temporarily set ClusterLogging to the Unmanaged state:
$ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge
- Remove ClusterLogging ownerReferences from the Elasticsearch resource.
The following command ensures that ClusterLogging no longer owns the Elasticsearch resource. Updates to the ClusterLogging resource’s logStore field will no longer affect the Elasticsearch resource.
$ oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge
- Remove ClusterLogging ownerReferences from the Kibana resource.
The following command ensures that ClusterLogging no longer owns the Kibana resource. Updates to the ClusterLogging resource’s visualization field will no longer affect the Kibana resource.
$ oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge
- Set ClusterLogging back to the Managed state:
$ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Managed"}}' --type=merge
2.3.3. Log Visualization
The OpenShift console UI plugin for log visualization has been moved to the cluster-observability-operator from the cluster-logging-operator.
2.3.4. Log Collection and Forwarding
Log collection and forwarding configurations are now specified under the new API, part of the observability.openshift.io API group. The following sections highlight the differences from the old API resources.
Vector is the only supported collector implementation.
2.3.5. Management, Resource Allocation, and Workload Scheduling
Configuration for management state (e.g., Managed, Unmanaged), resource requests and limits, tolerations, and node selection is now part of the new ClusterLogForwarder API.
Previous Configuration
apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" spec: managementState: "Managed" collection: resources: limits: {} requests: {} nodeSelector: {} tolerations: {}
Current Configuration
apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder spec: managementState: Managed collector: resources: limits: {} requests: {} nodeSelector: {} tolerations: {}
2.3.6. Input Specifications
The input specification is an optional part of the ClusterLogForwarder specification. Administrators can continue to use the predefined values of application, infrastructure, and audit to collect these sources.
2.3.6.1. Application Inputs
Namespace and container inclusions and exclusions have been consolidated into a single field.
5.9 Application Input with Namespace and Container Includes and Excludes
apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: namespaces: - foo - bar includes: - namespace: my-important container: main excludes: - container: too-verbose
6.0 Application Input with Namespace and Container Includes and Excludes
apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: includes: - namespace: foo - namespace: bar - namespace: my-important container: main excludes: - container: too-verbose
application, infrastructure, and audit are reserved words and cannot be used as names when defining an input.
2.3.6.2. Input Receivers
Changes to input receivers include:
- Explicit configuration of the type at the receiver level.
- Port settings moved to the receiver level.
5.9 Input Receivers
apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: an-http receiver: http: port: 8443 format: kubeAPIAudit - name: a-syslog receiver: type: syslog syslog: port: 9442
6.0 Input Receivers
apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: an-http type: receiver receiver: type: http port: 8443 http: format: kubeAPIAudit - name: a-syslog type: receiver receiver: type: syslog port: 9442
2.3.7. Output Specifications
High-level changes to output specifications include:
- URL settings moved to each output type specification.
- Tuning parameters moved to each output type specification.
- Separation of TLS configuration from authentication.
- Explicit configuration of keys and secret/configmap for TLS and authentication.
2.3.8. Secrets and TLS Configuration
Secrets and TLS configurations are now separated into authentication and TLS configuration for each output. They must be explicitly defined in the specification rather than relying on administrators to define secrets with recognized keys. Upgrading TLS and authorization configurations requires administrators to understand previously recognized keys to continue using existing secrets. Examples in the following sections provide details on how to configure ClusterLogForwarder secrets to forward to existing Red Hat managed log storage solutions.
2.3.9. Red Hat Managed Elasticsearch
v5.9 Forwarding to Red Hat Managed Elasticsearch
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
v6.0 Forwarding to Red Hat Managed Elasticsearch
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: default-elasticsearch
      type: elasticsearch
      elasticsearch:
        url: https://elasticsearch:9200
        version: 6
        index: <log_type>-write-{+yyyy.MM.dd}
      tls:
        ca:
          key: ca-bundle.crt
          secretName: collector
        certificate:
          key: tls.crt
          secretName: collector
        key:
          key: tls.key
          secretName: collector
  pipelines:
    - outputRefs:
        - default-elasticsearch
      inputRefs:
        - application
        - infrastructure
In this example, application logs are written to the application-write alias/index instead of app-write.
2.3.10. Red Hat Managed LokiStack
v5.9 Forwarding to Red Hat Managed LokiStack
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: lokistack
    lokistack:
      name: lokistack-dev
v6.0 Forwarding to Red Hat Managed LokiStack
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: default-lokistack
      type: lokiStack
      lokiStack:
        target:
          name: lokistack-dev
          namespace: openshift-logging
        authentication:
          token:
            from: serviceAccount
      tls:
        ca:
          key: service-ca.crt
          configMapName: openshift-service-ca.crt
  pipelines:
    - outputRefs:
        - default-lokistack
      inputRefs:
        - application
        - infrastructure
2.3.11. Filters and Pipeline Configuration
Pipeline configurations now define only the routing of input sources to their output destinations, with any required transformations configured separately as filters. All attributes of pipelines from previous releases have been converted to filters in this release. Individual filters are defined in the filters specification and referenced by a pipeline.
5.9 Filters
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
spec:
  pipelines:
    - name: application-logs
      parse: json
      labels:
        foo: bar
      detectMultilineErrors: true
6.0 Filter Configuration
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
spec:
  filters:
    - name: detectexception
      type: detectMultilineException
    - name: parse-json
      type: parse
    - name: labels
      type: openshiftLabels
      openshiftLabels:
        foo: bar
  pipelines:
    - name: application-logs
      filterRefs:
        - detectexception
        - labels
        - parse-json
2.3.12. Validation and Status
Most validations are enforced when a resource is created or updated, providing immediate feedback. This is a departure from previous releases, where validation occurred post-creation and required inspecting the resource status. Some validation still occurs post-creation for cases where it is not possible to validate at creation or update time.
Instances of the ClusterLogForwarder.observability.openshift.io must satisfy the following conditions before the operator will deploy the log collector: Authorized, Valid, Ready. An example of these conditions is:
6.0 Status Conditions
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
status:
  conditions:
    - lastTransitionTime: "2024-09-13T03:28:44Z"
      message: 'permitted to collect log types: [application]'
      reason: ClusterRolesExist
      status: "True"
      type: observability.openshift.io/Authorized
    - lastTransitionTime: "2024-09-13T12:16:45Z"
      message: ""
      reason: ValidationSuccess
      status: "True"
      type: observability.openshift.io/Valid
    - lastTransitionTime: "2024-09-13T12:16:45Z"
      message: ""
      reason: ReconciliationComplete
      status: "True"
      type: Ready
  filterConditions:
    - lastTransitionTime: "2024-09-13T13:02:59Z"
      message: filter "detectexception" is valid
      reason: ValidationSuccess
      status: "True"
      type: observability.openshift.io/ValidFilter-detectexception
    - lastTransitionTime: "2024-09-13T13:02:59Z"
      message: filter "parse-json" is valid
      reason: ValidationSuccess
      status: "True"
      type: observability.openshift.io/ValidFilter-parse-json
  inputConditions:
    - lastTransitionTime: "2024-09-13T12:23:03Z"
      message: input "application1" is valid
      reason: ValidationSuccess
      status: "True"
      type: observability.openshift.io/ValidInput-application1
  outputConditions:
    - lastTransitionTime: "2024-09-13T13:02:59Z"
      message: output "default-lokistack-application1" is valid
      reason: ValidationSuccess
      status: "True"
      type: observability.openshift.io/ValidOutput-default-lokistack-application1
  pipelineConditions:
    - lastTransitionTime: "2024-09-13T03:28:44Z"
      message: pipeline "default-before" is valid
      reason: ValidationSuccess
      status: "True"
      type: observability.openshift.io/ValidPipeline-default-before
Conditions that are satisfied and applicable have a "status" value of "True". Conditions with a status other than "True" provide a reason and a message explaining the issue.
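To inspect these conditions from the CLI, you can read the resource back; for example, assuming the forwarder named collector from the Quick Start:
$ oc -n openshift-logging get clusterlogforwarder collector -o yaml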
2.4. Configuring log forwarding
The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs.
Key Functions of the ClusterLogForwarder
- Selects log messages using inputs
- Forwards logs to external destinations using outputs
- Filters, transforms, and drops log messages using filters
- Defines log forwarding pipelines connecting inputs, filters and outputs
2.4.1. Setting up log collection
This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource.
The Red Hat OpenShift Logging Operator provides the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.
Set up log collection by binding the required cluster roles to your service account.
2.4.1.1. Legacy service accounts
To use the existing legacy service account logcollector, create the following ClusterRoleBinding:
$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector
Additionally, create the following ClusterRoleBinding if collecting audit logs:
$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector
2.4.1.2. Creating service accounts
Prerequisites
- The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace.
- You have administrator permissions.
Procedure
- Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. An example command is shown after this procedure.
Bind the appropriate cluster roles to the service account:
Example binding command
$ oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>
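The service account referenced in the first step can be created with a command such as the following (a sketch; both names are placeholders):
$ oc create sa <service_account_name> -n <namespace_name>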
2.4.1.2.1. Cluster Role Binding for your Service Account
The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: manager-rolebinding
roleRef:                                  1
  apiGroup: rbac.authorization.k8s.io     2
  kind: ClusterRole                       3
  name: cluster-logging-operator          4
subjects:                                 5
  - kind: ServiceAccount                  6
    name: cluster-logging-operator        7
    namespace: openshift-logging          8
1 roleRef: References the ClusterRole to which the binding applies.
2 apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system.
3 kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide.
4 name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator.
5 subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole.
6 kind: Specifies that the subject is a ServiceAccount.
7 name: The name of the ServiceAccount being granted the permissions.
8 namespace: Indicates the namespace where the ServiceAccount is located.
2.4.1.2.2. Writing application logs
The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-application-logs
rules:                       1
  - apiGroups:               2
      - loki.grafana.com     3
    resources:               4
      - application          5
    resourceNames:           6
      - logs                 7
    verbs:                   8
      - create               9

1 rules: Specifies the permissions granted by this ClusterRole.
2 apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system.
3 loki.grafana.com: The API group for managing Loki-related resources.
4 resources: The resource type that the ClusterRole grants permission to interact with.
5 application: Refers to the application resources within the Loki logging system.
6 resourceNames: Specifies the names of resources that this role can manage.
7 logs: Refers to the log resources that can be created.
8 verbs: The actions allowed on the resources.
9 create: Grants permission to create new logs in the Loki system.
2.4.1.2.3. Writing audit logs
The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-audit-logs
rules:                       1
  - apiGroups:               2
      - loki.grafana.com     3
    resources:               4
      - audit                5
    resourceNames:           6
      - logs                 7
    verbs:                   8
      - create               9
1 rules: Defines the permissions granted by this ClusterRole.
2 apiGroups: Specifies the API group loki.grafana.com.
3 loki.grafana.com: The API group responsible for Loki logging resources.
4 resources: Refers to the resource type this role manages, in this case, audit.
5 audit: Specifies that the role manages audit logs within Loki.
6 resourceNames: Defines the specific resources that the role can access.
7 logs: Refers to the logs that can be managed under this role.
8 verbs: The actions allowed on the resources.
9 create: Grants permission to create new audit logs.
2.4.1.2.4. Writing infrastructure logs
The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system.
Sample YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-infrastructure-logs
rules:                       1
  - apiGroups:               2
      - loki.grafana.com     3
    resources:               4
      - infrastructure       5
    resourceNames:           6
      - logs                 7
    verbs:                   8
      - create               9
1 rules: Specifies the permissions this ClusterRole grants.
2 apiGroups: Specifies the API group for Loki-related resources.
3 loki.grafana.com: The API group managing the Loki logging system.
4 resources: Defines the resource type that this role can interact with.
5 infrastructure: Refers to infrastructure-related resources that this role manages.
6 resourceNames: Specifies the names of resources this role can manage.
7 logs: Refers to the log resources related to infrastructure.
8 verbs: The actions permitted by this role.
9 create: Grants permission to create infrastructure logs in the Loki system.
2.4.1.2.5. ClusterLogForwarder editor role
The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterlogforwarder-editor-role
rules:                              1
  - apiGroups:                      2
      - observability.openshift.io  3
    resources:                      4
      - clusterlogforwarders        5
    verbs:                          6
      - create                      7
      - delete                      8
      - get                         9
      - list                        10
      - patch                       11
      - update                      12
      - watch                       13
1 rules: Specifies the permissions this ClusterRole grants.
2 apiGroups: Refers to the OpenShift-specific API group.
3 observability.openshift.io: The API group for managing observability resources, like logging.
4 resources: Specifies the resources this role can manage.
5 clusterlogforwarders: Refers to the log forwarding resources in OpenShift.
6 verbs: Specifies the actions allowed on the ClusterLogForwarders.
7 create: Grants permission to create new ClusterLogForwarders.
8 delete: Grants permission to delete existing ClusterLogForwarders.
9 get: Grants permission to retrieve information about specific ClusterLogForwarders.
10 list: Allows listing all ClusterLogForwarders.
11 patch: Grants permission to partially modify ClusterLogForwarders.
12 update: Grants permission to update existing ClusterLogForwarders.
13 watch: Grants permission to monitor changes to ClusterLogForwarders.
2.4.2. Modifying log level in collector
To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, or off.
Example log level annotation
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  annotations:
    observability.openshift.io/log-level: debug
# ...
2.4.3. Managing the Operator
The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged:
- Managed: (default) The operator will drive the logging resources to match the desired state in the CLF spec.
- Unmanaged: The operator will not take any action related to the logging components.
This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged.
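For example, a cluster administrator can pause forwarding with a patch such as the following, a sketch that assumes the forwarder named collector in the openshift-logging namespace from the Quick Start:
$ oc -n openshift-logging patch clusterlogforwarder/collector -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge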
2.4.4. Structure of the ClusterLogForwarder
The CLF has a spec section that contains the following key components:
- Inputs: Select log messages to be forwarded. Built-in input types application, infrastructure, and audit forward logs from different parts of the cluster. You can also define custom inputs.
- Outputs: Define destinations to forward logs to. Each output has a unique name and type-specific configuration.
- Pipelines: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output, and filter names.
- Filters: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline.
2.4.4.1. Inputs
Inputs are configured in an array under spec.inputs. There are three built-in input types:
- application: Selects logs from all application containers, excluding those in infrastructure namespaces such as default, openshift, or any namespace with the kube- or openshift- prefix.
- infrastructure: Selects logs from infrastructure components running in the default and openshift namespaces, and node logs.
- audit: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd.
Users can define custom inputs of type application that select logs from specific namespaces or by using pod labels.
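For example, a custom input that selects logs from a single namespace might look like the following sketch (the input and namespace names are placeholders; the full include and exclude syntax is described later in this section):
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  inputs:
    - name: my-app-input              # placeholder name
      type: application
      application:
        includes:
          - namespace: my-namespace   # placeholder namespace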
2.4.4.2. Outputs
Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are:
- azureMonitor: Forwards logs to Azure Monitor.
- cloudwatch: Forwards logs to AWS CloudWatch.
- elasticsearch: Forwards logs to an external Elasticsearch instance.
- googleCloudLogging: Forwards logs to Google Cloud Logging.
- http: Forwards logs to a generic HTTP endpoint.
- kafka: Forwards logs to a Kafka broker.
- loki: Forwards logs to a Loki logging backend.
- lokiStack: Forwards logs to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack’s proxy uses OpenShift Container Platform authentication to enforce multi-tenancy.
- otlp: Forwards logs using the OpenTelemetry Protocol.
- splunk: Forwards logs to Splunk.
- syslog: Forwards logs to an external syslog server.
Each output type has its own configuration fields.
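For example, an http output needs little more than a name and a URL (a sketch; the name and URL are placeholders, and most deployments also configure TLS and authentication):
spec:
  outputs:
    - name: my-http-output              # placeholder name
      type: http
      http:
        url: https://example.com/logs   # placeholder URL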
2.4.4.3. Pipelines
Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of:
- inputRefs: Names of inputs whose logs should be forwarded to this pipeline.
- outputRefs: Names of outputs to send logs to.
- filterRefs: (optional) Names of filters to apply.
The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters.
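For example, a pipeline that applies a drop filter before a parse filter might look like the following sketch (all names are placeholders that refer to inputs, outputs, and filters defined elsewhere in the spec):
spec:
  pipelines:
    - name: my-pipeline
      inputRefs:
        - application       # built-in input type
      filterRefs:
        - drop-noise        # applied first
        - parse-json        # applied second
      outputRefs:
        - my-http-output    # placeholder output name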
2.4.4.4. Filters
Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them.
Administrators can configure the following types of filters:
2.4.4.5. Enabling multi-line exception detection
Enables multi-line error detection of container logs.
Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.
Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.
Example java exception
java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
    at testjava.Main.handle(Main.java:47)
    at testjava.Main.printMe(Main.java:19)
    at testjava.Main.main(Main.java:10)
- To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder custom resource (CR) contains a detectMultilineErrors field under .spec.filters.
Example ClusterLogForwarder CR
apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>
2.4.4.5.1. Details
When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.
The collector supports the following languages:
- Java
- JS
- Ruby
- Python
- Golang
- PHP
- Dart
2.4.4.6. Configuring content filters to drop unwanted log records
When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration.
Procedure
- Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions:
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
    - name: <filter_name>
      type: drop                                        1
      drop:                                             2
        - test:                                         3
            - field: .kubernetes.labels."foo-bar/baz"   4
              matches: .+                               5
            - field: .kubernetes.pod_name
              notMatches: "my-pod"                      6
  pipelines:
    - name: <pipeline_name>                             7
      filterRefs: ["<filter_name>"]
# ...
1 Specifies the type of filter. The drop filter drops log records that match the filter configuration.
2 Specifies configuration options for applying the drop filter.
3 Specifies the configuration for tests that are used to evaluate whether a log record is dropped.
  - If all the conditions specified for a test are true, the test passes and the log record is dropped.
  - When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped.
  - If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false.
4 Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied.
5 Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both.
6 Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both.
7 Specifies the pipeline that the drop filter is applied to.
- Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
Additional examples
The following additional example shows how you can configure the drop filter to only keep higher priority log records:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
    - name: important
      type: drop
      drop:
        - test:
            - field: .message
              notMatches: "(?i)critical|error"
            - field: .level
              matches: "info|warning"
# ...
In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
    - name: important
      type: drop
      drop:
        - test:
            - field: .kubernetes.namespace_name
              matches: "^open"
        - test:
            - field: .log_type
              matches: "application"
            - field: .kubernetes.pod_name
              notMatches: "my-pod"
# ...
2.4.4.7. Overview of API audit filter
OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field:
- None: The event is dropped.
- Metadata: Audit metadata is included, request and response bodies are removed.
- Request: Audit metadata and the request body are included, the response body is removed.
- RequestResponse: All data is included: metadata, request body, and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster.
The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions:
- Wildcards: Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-* matches openshift-apiserver or openshift-authentication. Resource */status matches Pod/status or Deployment/status.
- Default Rules: Events that do not match any rule in the policy are filtered as follows:
  - Read-only system events such as get, list, and watch are dropped.
  - Service account write events that occur within the same namespace as the service account are dropped.
  - All other events are forwarded, subject to any configured rate limits.
To disable these defaults, either end your rules list with a rule that has only a level field, or add an empty rule.
- Omit Response Codes: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], then no status codes are omitted.
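For example, to keep events for every response code, the list can be set to empty. This sketch assumes the lowercase omitResponseCodes spelling in the filter specification, alongside the omitStages field shown in the audit policy example below:
filters:
  - name: my-policy
    type: kubeAPIAudit
    kubeAPIAudit:
      omitResponseCodes: []   # assumed field spelling; an empty list means no status codes are omitted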
The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site.
You must have the collect-audit-logs cluster role to collect the audit logs. The following example is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration.
Example audit policy
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccount:
    name: <service_account_name>
  pipelines:
    - name: my-pipeline
      inputRefs: audit        1
      filterRefs: my-policy   2
  filters:
    - name: my-policy
      type: kubeAPIAudit
      kubeAPIAudit:
        # Don't generate audit events for all requests in RequestReceived stage.
        omitStages:
          - "RequestReceived"
        rules:
          # Log pod changes at RequestResponse level
          - level: RequestResponse
            resources:
              - group: ""
                resources: ["pods"]
          # Log "pods/log", "pods/status" at Metadata level
          - level: Metadata
            resources:
              - group: ""
                resources: ["pods/log", "pods/status"]
          # Don't log requests to a configmap called "controller-leader"
          - level: None
            resources:
              - group: ""
                resources: ["configmaps"]
                resourceNames: ["controller-leader"]
          # Don't log watch requests by the "system:kube-proxy" on endpoints or services
          - level: None
            users: ["system:kube-proxy"]
            verbs: ["watch"]
            resources:
              - group: "" # core API group
                resources: ["endpoints", "services"]
          # Don't log authenticated requests to certain non-resource URL paths.
          - level: None
            userGroups: ["system:authenticated"]
            nonResourceURLs:
              - "/api*" # Wildcard matching.
              - "/version"
          # Log the request body of configmap changes in kube-system.
          - level: Request
            resources:
              - group: "" # core API group
                resources: ["configmaps"]
            # This rule only applies to resources in the "kube-system" namespace.
            # The empty string "" can be used to select non-namespaced resources.
            namespaces: ["kube-system"]
          # Log configmap and secret changes in all other namespaces at the Metadata level.
          - level: Metadata
            resources:
              - group: "" # core API group
                resources: ["secrets", "configmaps"]
          # Log all other resources in core and extensions at the Request level.
          - level: Request
            resources:
              - group: "" # core API group
              - group: "extensions" # Version of group should NOT be included.
          # A catch-all rule to log all other requests at the Metadata level.
          - level: Metadata
2.4.4.8. Filtering application logs at input by including the label expressions or a matching label key and values
You can include the application logs based on the label expressions or a matching label key and its values by using the input selector.
Procedure
- Add a configuration for a filter to the input spec in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values:
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
    - name: mylogs
      application:
        selector:
          matchExpressions:
            - key: env                     1
              operator: In                 2
              values: ["prod", "qa"]       3
            - key: zone
              operator: NotIn
              values: ["east", "west"]
          matchLabels:                     4
            app: one
            name: app1
      type: application
# ...
- Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
2.4.4.9. Configuring content filters to prune log records
When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations.
Procedure
- Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths:
Important
If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array.
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
    - name: <filter_name>
      type: prune                                                    1
      prune:                                                         2
        in: [.kubernetes.annotations, .kubernetes.namespace_id]      3
        notIn: [.kubernetes,.log_type,.message,."@timestamp"]        4
  pipelines:
    - name: <pipeline_name>                                          5
      filterRefs: ["<filter_name>"]
# ...
1 Specify the type of filter. The prune filter prunes log records by configured fields.
2 Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz".
3 Optional: Any fields that are specified in this array are removed from the log record.
4 Optional: Any fields that are not specified in this array are removed from the log record.
5 Specify the pipeline that the prune filter is applied to.
Note
The filter exempts the .log_type, .log_source, and .message fields.
- Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
2.4.5. Filtering the audit and infrastructure log inputs by source
You can define the list of audit and infrastructure sources to collect the logs by using the input selector.
Procedure
- Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources:
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
    - name: mylogs1
      type: infrastructure
      infrastructure:
        sources:        1
          - node
    - name: mylogs2
      type: audit
      audit:
        sources:        2
          - kubeAPI
          - openshiftAPI
          - ovn
# ...
1 Specifies the list of infrastructure sources to collect. The valid sources include:
  - node: Journal log from the node
  - container: Logs from the workloads deployed in the namespaces
2 Specifies the list of audit sources to collect. The valid sources include:
  - kubeAPI: Logs from the Kubernetes API servers
  - openshiftAPI: Logs from the OpenShift API servers
  - auditd: Logs from a node auditd service
  - ovn: Logs from an open virtual network service
- Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
2.4.6. Filtering application logs at input by including or excluding the namespace or container name
You can include or exclude application logs based on the namespace and container name by using the input selector.
Procedure
Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR.

The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names:

Example ClusterLogForwarder CR

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs
    application:
      includes:
      - namespace: "my-project" 1
        container: "my-container" 2
      excludes:
      - container: "other-container*" 3
        namespace: "other-namespace" 4
    type: application
# ...
Note

The excludes field takes precedence over the includes field.

Apply the ClusterLogForwarder CR by running the following command:

$ oc apply -f <filename>.yaml
2.5. Storing logs with LokiStack
You can configure a LokiStack CR to store application, audit, and infrastructure-related logs.

Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries.

For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short-term storage, for a maximum of 30 days.
2.5.1. Prerequisites
- You have installed the Loki Operator by using the CLI or web console.
- You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder.
- The serviceAccount is assigned the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles.
2.5.2. Core Setup and Configuration
Role-based access controls, basic monitoring, and pod placement to deploy Loki.
2.5.3. Loki deployment sizing
Sizing for Loki follows the format of 1x.<size>, where the value 1x is the number of instances and <size> specifies performance capabilities.
The 1x.pico
configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction.
Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs.
It is not possible to change the number 1x
for the deployment size.
| | 1x.demo | 1x.pico [6.1+ only] | 1x.extra-small | 1x.small | 1x.medium |
|---|---|---|---|---|---|
| Data transfer | Demo use only | 50GB/day | 100GB/day | 500GB/day | 2TB/day |
| Queries per second (QPS) | Demo use only | 1-25 QPS at 200ms | 1-25 QPS at 200ms | 25-50 QPS at 200ms | 25-75 QPS at 200ms |
| Replication factor | None | 2 | 2 | 2 | 2 |
| Total CPU requests | None | 7 vCPUs | 14 vCPUs | 34 vCPUs | 54 vCPUs |
| Total CPU requests if using the ruler | None | 8 vCPUs | 16 vCPUs | 42 vCPUs | 70 vCPUs |
| Total memory requests | None | 17Gi | 31Gi | 67Gi | 139Gi |
| Total memory requests if using the ruler | None | 18Gi | 35Gi | 83Gi | 171Gi |
| Total disk requests | 40Gi | 590Gi | 430Gi | 430Gi | 590Gi |
| Total disk requests if using the ruler | 80Gi | 910Gi | 750Gi | 750Gi | 910Gi |
2.5.4. Authorizing LokiStack rules RBAC permissions
Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain the necessary role-based access control (RBAC) permissions for users.
The following cluster roles for alerting and recording rules are available for LokiStack:
| Rule name | Description |
|---|---|
| alertingrules.loki.grafana.com-v1-admin | Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch AlertingRule resources. |
| alertingrules.loki.grafana.com-v1-crdview | Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to AlertingRule resources, but do not have permissions to modify or manage these resources. |
| alertingrules.loki.grafana.com-v1-edit | Users with this role have permission to create, update, and delete AlertingRule resources. |
| alertingrules.loki.grafana.com-v1-view | Users with this role can read AlertingRule resources. They can inspect configurations, labels, and annotations for existing alerting rules but cannot modify them. |
| recordingrules.loki.grafana.com-v1-admin | Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch RecordingRule resources. |
| recordingrules.loki.grafana.com-v1-crdview | Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to RecordingRule resources, but do not have permissions to modify or manage these resources. |
| recordingrules.loki.grafana.com-v1-edit | Users with this role have permission to create, update, and delete RecordingRule resources. |
| recordingrules.loki.grafana.com-v1-view | Users with this role can read RecordingRule resources. They can inspect configurations, labels, and annotations for existing recording rules but cannot modify them. |
2.5.4.1. Examples
To apply cluster roles for a user, you must bind an existing cluster role to a specific username.
Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster.
The following example command gives the specified user create, read, update, and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster:
Example cluster role binding command for alerting rule CRUD permissions in a specific namespace
$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>
The following command gives the specified user administrator permissions for alerting rules in all namespaces:
Example cluster role binding command for administrator permissions
$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>
2.5.5. Creating a log-based alerting rule with Loki
The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions:
- If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule.
- If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule.
- If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule.
- If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule.
- If none of the above applies, an alerting rule is considered valid.
| Tenant type | Valid namespaces for AlertingRule CRs |
|---|---|
| application | <your_application_namespace> |
| audit | openshift-logging |
| infrastructure | openshift-*, kube-*, default |
Procedure
Create an AlertingRule custom resource (CR):

Example infrastructure AlertingRule CR

apiVersion: loki.grafana.com/v1
kind: AlertingRule
metadata:
  name: loki-operator-alerts
  namespace: openshift-operators-redhat 1
  labels: 2
    openshift.io/<label_name>: "true"
spec:
  tenantID: "infrastructure" 3
  groups:
  - name: LokiOperatorHighReconciliationError
    rules:
    - alert: HighPercentageError
      expr: | 4
        sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job)
          /
        sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job)
          > 0.01
      for: 10s
      labels:
        severity: critical 5
      annotations:
        summary: High Loki Operator Reconciliation Errors 6
        description: High Loki Operator Reconciliation Errors 7
- 1
- The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition.
- 2
- The labels block must match the LokiStack spec.rules.selector definition.
- 3
- AlertingRule CRs for infrastructure tenants are only supported in the openshift-*, kube-*, or default namespaces.
- 4
- The value for kubernetes_namespace_name: must match the value for metadata.namespace.
- 5
- The value of this mandatory field must be critical, warning, or info.
- 6
- This field is mandatory.
- 7
- This field is mandatory.
Example application AlertingRule CR

apiVersion: loki.grafana.com/v1
kind: AlertingRule
metadata:
  name: app-user-workload
  namespace: app-ns 1
  labels: 2
    openshift.io/<label_name>: "true"
spec:
  tenantID: "application"
  groups:
  - name: AppUserWorkloadHighError
    rules:
    - alert:
      expr: | 3
        sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job)
      for: 10s
      labels:
        severity: critical 4
      annotations:
        summary: 5
        description: 6
- 1
- The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition.
- 2
- The labels block must match the LokiStack spec.rules.selector definition.
- 3
- The value for kubernetes_namespace_name: must match the value for metadata.namespace.
- 4
- The value of this mandatory field must be critical, warning, or info.
- 5
- The value of this mandatory field is a summary of the rule.
- 6
- The value of this mandatory field is a detailed description of the rule.
Apply the AlertingRule CR:

$ oc apply -f <filename>.yaml
2.5.6. Configuring Loki to tolerate memberlist creation failure
In an OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks.
As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command:
$ oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}'
Example LokiStack to include podIP
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
# ...
  hashRing:
    type: memberlist
    memberlist:
      instanceAddrType: podIP
# ...
2.5.7. Enabling stream-based retention with Loki
You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules.
If there is no retention period defined on the S3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the S3 bucket forever, which might fill up the S3 storage.
Schema v13 is recommended.
Procedure
Create a LokiStack CR:

Enable stream-based retention globally as shown in the following example:
Example global stream-based retention for AWS
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global: 1
      retention: 2
        days: 20
        streams:
        - days: 4
          priority: 1
          selector: '{kubernetes_namespace_name=~"test.+"}' 3
        - days: 1
          priority: 1
          selector: '{log_type="infrastructure"}'
  managementState: Managed
  replicationFactor: 1
  size: 1x.small
  storage:
    schemas:
    - effectiveDate: "2020-10-11"
      version: v13
    secret:
      name: logging-loki-s3
      type: aws
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
- 1
- Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage.
- 2
- Retention is enabled in the cluster when this block is added to the CR.
- 3
- Contains the LogQL query used to define the log stream.
Enable stream-based retention on a per-tenant basis as shown in the following example:
Example per-tenant stream-based retention for AWS
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global:
      retention:
        days: 20
    tenants: 1
      application:
        retention:
          days: 1
          streams:
          - days: 4
            selector: '{kubernetes_namespace_name=~"test.+"}' 2
      infrastructure:
        retention:
          days: 5
          streams:
          - days: 1
            selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}'
  managementState: Managed
  replicationFactor: 1
  size: 1x.small
  storage:
    schemas:
    - effectiveDate: "2020-10-11"
      version: v13
    secret:
      name: logging-loki-s3
      type: aws
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
- 1
- Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure.
- 2
- Contains the LogQL query used to define the log stream.
Apply the LokiStack CR:

$ oc apply -f <filename>.yaml
2.5.8. Loki pod placement
You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods.
You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node.
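For example (a sketch; the node name is a placeholder, and this assumes you want to reserve the node for infrastructure workloads), you could taint a node with the same key:value pair that the tolerations in the following examples allow:

$ oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoSchedule

$ oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoExecute

Only pods that tolerate this taint, such as the log store pods configured below, are then scheduled on that node.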
Example LokiStack with node selectors
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
# ...
  template:
    compactor: 1
      nodeSelector:
        node-role.kubernetes.io/infra: "" 2
    distributor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    gateway:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    indexGateway:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    ingester:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    querier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    queryFrontend:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    ruler:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
# ...
Example LokiStack CR with node selectors and tolerations
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
# ...
  template:
    compactor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    distributor:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    indexGateway:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    ingester:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    querier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    queryFrontend:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    ruler:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    gateway:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
# ...
To configure the nodeSelector and tolerations fields of the LokiStack custom resource (CR), you can use the oc explain command to view the description and fields for a particular resource:
$ oc explain lokistack.spec.template
Example output
KIND:     LokiStack
VERSION:  loki.grafana.com/v1

RESOURCE: template <Object>

DESCRIPTION:
     Template defines the resource/limits/tolerations/nodeselectors per
     component

FIELDS:
   compactor    <Object>
     Compactor defines the compaction component spec.

   distributor  <Object>
     Distributor defines the distributor component spec.
...
For more detailed information, you can add a specific field:
$ oc explain lokistack.spec.template.compactor
Example output
KIND:     LokiStack
VERSION:  loki.grafana.com/v1

RESOURCE: compactor <Object>

DESCRIPTION:
     Compactor defines the compaction component spec.

FIELDS:
   nodeSelector <map[string]string>
     NodeSelector defines the labels required by a node to schedule the
     component onto it.
...
2.5.8.1. Enhanced Reliability and Performance
Configurations to ensure Loki’s reliability and efficiency in production.
2.5.8.2. Enabling authentication to cloud-based log stores using short-lived tokens
Workload identity federation enables authentication to cloud-based log stores using short-lived tokens.
Procedure
Use one of the following options to enable authentication:
- If you use the OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret.
- If you use the OpenShift CLI (oc) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated.

Example Azure subscription
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat
spec:
  channel: "stable-6.0"
  installPlanApproval: Manual
  name: loki-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
    - name: CLIENTID
      value: <your_client_id>
    - name: TENANTID
      value: <your_tenant_id>
    - name: SUBSCRIPTIONID
      value: <your_subscription_id>
    - name: REGION
      value: <your_region>
Example AWS subscription
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat
spec:
  channel: "stable-6.0"
  installPlanApproval: Manual
  name: loki-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
    - name: ROLEARN
      value: <role_ARN>
2.5.8.3. Configuring Loki to tolerate node failure
The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster.
Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node.
In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods.
The Operator sets default, preferred podAntiAffinity rules for all Loki components, which include the compactor, distributor, gateway, indexGateway, ingester, querier, queryFrontend, and ruler components.
You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field:
Example user settings for the ingester component
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
# ...
  template:
    ingester:
      podAntiAffinity:
      # ...
        requiredDuringSchedulingIgnoredDuringExecution: 1
        - labelSelector:
            matchLabels: 2
              app.kubernetes.io/component: ingester
          topologyKey: kubernetes.io/hostname
# ...

- 1
- The stanza that defines a required rule.
- 2
- The key-value pair (label) that must be matched to apply the rule.
2.5.8.4. LokiStack behavior during cluster restarts
When an OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the CPU and memory resources available to the node. This means that there is no downtime for the LokiStack during OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions.
2.5.8.5. Advanced Deployment and Scalability
Specialized configurations for high availability, scalability, and error handling.
2.5.8.6. Zone aware data replication
The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small, 1x.small, or 1x.medium, the replication.factor field is automatically set to 2.
To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation.
Example LokiStack CR with zone replication enabled
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  replicationFactor: 2 1
  replication:
    factor: 2 2
    zones:
    - maxSkew: 1 3
      topologyKey: topology.kubernetes.io/zone 4
- 1
- Deprecated field; values entered are overwritten by replication.factor.
- 2
- This value is automatically set when a deployment size is selected at setup.
- 3
- The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0.
- 4
- Defines zones in the form of a topology key that corresponds to a node label.
2.5.8.7. Recovering Loki pods from failed zones
In OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider’s data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss.
Loki pods are part of a StatefulSet, and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone.
The following procedure deletes the PVCs in the failed zone and all data contained therein. To avoid complete data loss, the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating.
Prerequisites
- Verify that your LokiStack CR has a replication factor greater than 1.
- The control plane has detected the zone failure, and the nodes in the failed zone have been marked by the cloud provider integration.
The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone.
Procedure
List the pods in Pending status by running the following command:

$ oc get pods --field-selector status.phase==Pending -n openshift-logging
Example oc get pods output

NAME                           READY   STATUS    RESTARTS   AGE 1
logging-loki-index-gateway-1   0/1     Pending   0          17m
logging-loki-ingester-1        0/1     Pending   0          16m
logging-loki-ruler-1           0/1     Pending   0          16m
- 1
- These pods are in Pending status because their corresponding PVCs are in the failed zone.
List the PVCs in Pending status by running the following command:

$ oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r
Example oc get pvc output

storage-logging-loki-index-gateway-1
storage-logging-loki-ingester-1
wal-logging-loki-ingester-1
storage-logging-loki-ruler-1
wal-logging-loki-ruler-1
Delete the PVC(s) for a pod by running the following command:
$ oc delete pvc <pvc_name> -n openshift-logging
Delete the pod(s) by running the following command:
$ oc delete pod <pod_name> -n openshift-logging
Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone.
2.5.8.7.1. Troubleshooting PVC in a terminating state
The PVCs might hang in the Terminating state without being deleted if the PVC metadata finalizers are set to kubernetes.io/pv-protection. Removing the finalizers allows the PVCs to delete successfully.
Remove the finalizer for each PVC by running the command below, then retry deletion.
$ oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-logging
2.5.8.8. Troubleshooting Loki rate limit errors
If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit (429) errors.
These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention.
In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR).

The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers.
Conditions
- The Log Forwarder API is configured to forward logs to Loki.
Your system sends a block of messages that is larger than 2 MB to Loki. For example:
"values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]}
After you enter oc logs -n openshift-logging -l component=collector, the collector logs in your cluster show a line containing one of the following error messages:

429 Too Many Requests Ingestion rate limit exceeded
Example Vector error message
2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true
Example Fluentd error message
2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n"
The error is also visible on the receiving end. For example, in the LokiStack ingester pod:
Example Loki ingester error message
level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream
Procedure
Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global:
      ingestion:
        ingestionBurstSize: 16 1
        ingestionRate: 8 2
# ...
- 1
- The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted.
- 2
- The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention.
2.6. Visualization for logging
Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator, which requires Operator installation.
Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA.
Chapter 3. Logging 6.1
3.1. Logging 6.1
3.1.1. Logging 6.1.1 Release Notes
This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.1.
3.1.1.1. New Features and Enhancements
- With this update, the Loki Operator supports configuring the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in OpenShift Container Platform 4.17 or later. (LOG-6420)
3.1.1.2. Bug Fixes
-
Before this update, the collector was discarding longer audit log messages with the following error message: Internal log [Found line that exceeds max_line_bytes; discarding.]. With this update, the discarding of longer audit messages is avoided by increasing the audit configuration thresholds: The maximum line size,
max_line_bytes
, is3145728
bytes. The maximum number of bytes read during a read cycle,max_read_bytes
, is262144
bytes. (LOG-6379) -
Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the
ClusterLogForwarder
custom resource. (LOG-6383) - Before this update, pipeline validation might have entered an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. (LOG-6405)
- Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. (LOG-6407)
-
Before this update, setting up the custom audit inputs in the
ClusterLogForwarder
custom resource with configuredLokiStack
output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. (LOG-6449) -
Before this update, the
ValidLokistackOTLPOutputs
condition appeared in the status of theClusterLogForwarder
custom resource even when the output type is notLokiStack
. With this update, theValidLokistackOTLPOutputs
condition is removed, and the validation messages for the existing output conditions are corrected. (LOG-6469) -
Before this update, the collector did not correctly mount the
/var/log/oauth-server/
path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. (LOG-6484) -
Before this update, the
must-gather
script of the Red Hat OpenShift Logging Operator might have failed to gather the LokiStack data. With this update, themust-gather
script is fixed, and the LokiStack data is gathered reliably. (LOG-6498) -
Before this update, the collector did not correctly mount the
oauth-apiserver
audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. (LOG-6533)
3.1.1.3. CVEs
3.1.2. Logging 6.1.0 Release Notes
This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.0.
3.1.2.1. New Features and Enhancements
3.1.2.1.1. Log Collection
-
This enhancement adds the source
iostream
to the attributes sent from collected container logs. The value is set to eitherstdout
orstderr
based on how the collector received it. (LOG-5292) - With this update, the default memory limit for the collector increases from 1024 Mi to 2048 Mi. Users should adjust resource limits based on their cluster’s specific needs and specifications. (LOG-6072)
- With this update, users can now set the syslog output delivery mode of the ClusterLogForwarder CR to either AtLeastOnce or AtMostOnce. (LOG-6355)
3.1.2.1.2. Log Storage
-
With this update, the new
1x.pico
LokiStack size supports clusters with fewer workloads and lower log volumes (up to 50GB/day). (LOG-5939)
3.1.2.2. Technology Preview
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
-
With this update, OpenTelemetry logs can now be forwarded using the
OTel
(OpenTelemetry) data model to a Red Hat Managed LokiStack instance. To enable this feature, add theobservability.openshift.io/tech-preview-otlp-output: "enabled"
annotation to yourClusterLogForwarder
configuration. For additional configuration information, see OTLP Forwarding. -
With this update, a
dataModel
field has been added to thelokiStack
output specification. Set thedataModel
toOtel
to configure log forwarding using the OpenTelemetry data format. The default is set toViaq
. For information about data mapping see OTLP Specification.
3.1.2.3. Bug Fixes
None.
3.1.2.4. CVEs
3.2. Logging 6.1
The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding.
3.2.1. Inputs and outputs
Inputs specify the sources of logs to be forwarded. Logging provides built-in input types: application, receiver, infrastructure, and audit, which select logs from different parts of your cluster. You can also define custom inputs based on namespaces or pod labels to fine-tune log selection.
Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings.
3.2.2. Receiver input type
The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog.

The ReceiverSpec defines the configuration for a receiver input.
3.2.3. Pipelines and filters
Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. Filters can be used to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages.
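As a sketch, a minimal pipeline (the names here are illustrative) connects one input to one output through an optional filter; the filter listed in filterRefs runs before logs reach the output:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  pipelines:
  - name: my-pipeline
    inputRefs:
    - application
    filterRefs:
    - my-filter
    outputRefs:
    - my-output
# ...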
3.2.4. Operator behavior
The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource:
- When set to Managed (default), the operator actively manages the logging resources to match the configuration defined in the spec.
- When set to Unmanaged, the operator does not take any action, allowing you to manually manage the logging components.
3.2.5. Validation
Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios.
3.2.6. Quick start
OpenShift Logging supports two data models:
- ViaQ (General Availability)
- OpenTelemetry (Technology Preview)
You can select either of these data models based on your requirement by configuring the lokiStack.dataModel field in the ClusterLogForwarder. ViaQ is the default data model when forwarding logs to LokiStack.
In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry.
3.2.6.1. Quick start with ViaQ
To use the default ViaQ data model, follow these steps:
Prerequisites
- Cluster administrator permissions
Procedure
- Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub.
Create a LokiStack custom resource (CR) in the openshift-logging namespace:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  managementState: Managed
  size: 1x.extra-small
  storage:
    schemas:
    - effectiveDate: '2024-10-01'
      version: v13
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
Note

Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see Secrets and TLS Configuration.

Create a service account for the collector:

$ oc create sa collector -n openshift-logging
Allow the collector’s service account to write data to the LokiStack CR:

$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector
Note

The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.

Allow the collector’s service account to collect logs:
$ oc project openshift-logging
$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector
Note

The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment.

Create a UIPlugin CR to enable the Log section in the Observe tab:

apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki
Create a ClusterLogForwarder CR to configure log forwarding:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      authentication:
        token:
          from: serviceAccount
      target:
        name: logging-loki
        namespace: openshift-logging
      tls:
        ca:
          key: service-ca.crt
          configMapName: openshift-service-ca.crt
  pipelines:
  - name: default-logstore
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default-lokistack
Note

The dataModel field is optional and left unset (dataModel: "") by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes.
Verification
- Verify that logs are visible in the Log section of the Observe tab in the OpenShift web console.
3.2.6.2. Quick start with OpenTelemetry
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps:
Prerequisites
- Cluster administrator permissions
Procedure
- Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub.
Create a LokiStack custom resource (CR) in the openshift-logging namespace:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  managementState: Managed
  size: 1x.extra-small
  storage:
    schemas:
    - effectiveDate: '2024-10-01'
      version: v13
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
Note

Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration".

Create a service account for the collector:

$ oc create sa collector -n openshift-logging
Allow the collector’s service account to write data to the LokiStack CR:

$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector
Note

The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.

Allow the collector’s service account to collect logs:
$ oc project openshift-logging
$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector
Note

The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment.

Create a UIPlugin CR to enable the Log section in the Observe tab:

apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki
Create a ClusterLogForwarder CR to configure log forwarding:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
  annotations:
    observability.openshift.io/tech-preview-otlp-output: "enabled" 1
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: loki-otlp
    type: lokiStack 2
    lokiStack:
      target:
        name: logging-loki
        namespace: openshift-logging
      dataModel: Otel 3
      authentication:
        token:
          from: serviceAccount
      tls:
        ca:
          key: service-ca.crt
          configMapName: openshift-service-ca.crt
  pipelines:
  - name: my-pipeline
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - loki-otlp

- 1
- Use this annotation to enable the OTLP output, which is a Technology Preview feature.
- 2
- Define the output type as lokiStack.
- 3
- Set the data model to Otel to forward logs in the OpenTelemetry format.
Note

You cannot use lokiStack.labelKeys when dataModel is Otel. To achieve similar functionality when dataModel is Otel, refer to "Configuring LokiStack for OTLP data ingestion".
Verification
- Verify that OTLP is functioning correctly by going to Observe → OpenShift Logging → LokiStack → Writes in the OpenShift web console, and checking Distributor - Structured Metadata.
3.3. Configuring log forwarding
The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs.
Key Functions of the ClusterLogForwarder
- Selects log messages using inputs
- Forwards logs to external destinations using outputs
- Filters, transforms, and drops log messages using filters
- Defines log forwarding pipelines connecting inputs, filters, and outputs
3.3.1. Setting up log collection
This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder. This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource.
The Red Hat OpenShift Logging Operator provides the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.
Set up log collection by binding the required cluster roles to your service account.
3.3.1.1. Legacy service accounts
To use the existing legacy service account logcollector, create the following ClusterRoleBindings:

$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector

$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector
Additionally, create the following ClusterRoleBinding if collecting audit logs:
$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector
3.3.1.2. Creating service accounts
Prerequisites
- The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace.
- You have administrator permissions.
Procedure
- Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account.
Bind the appropriate cluster roles to the service account:
Example binding command
$ oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>
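For example, assuming the service account is named collector and lives in the openshift-logging namespace, the binding for application logs would look like this:

$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:collector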
3.3.1.2.1. Cluster Role Binding for your Service Account
The role_binding.yaml file binds the ClusterLogging operator’s ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: manager-rolebinding
roleRef: 1
  apiGroup: rbac.authorization.k8s.io 2
  kind: ClusterRole 3
  name: cluster-logging-operator 4
subjects: 5
- kind: ServiceAccount 6
  name: cluster-logging-operator 7
  namespace: openshift-logging 8
- 1
- roleRef: References the ClusterRole to which the binding applies.
- 2
- apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system.
- 3
- kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide.
- 4
- name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator.
- 5
- subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole.
- 6
- kind: Specifies that the subject is a ServiceAccount.
- 7
- name: The name of the ServiceAccount being granted the permissions.
- 8
- namespace: Indicates the namespace where the ServiceAccount is located.
3.3.1.2.2. Writing application logs
The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-application-logs
rules: 1
- apiGroups: 2
  - loki.grafana.com 3
  resources: 4
  - application 5
  resourceNames: 6
  - logs 7
  verbs: 8
  - create 9

- 1
- rules: Specifies the permissions granted by this ClusterRole.
- 2
- apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system.
- 3
- loki.grafana.com: The API group for managing Loki-related resources.
- 4
- resources: The resource type that the ClusterRole grants permission to interact with.
- 5
- application: Refers to the application resources within the Loki logging system.
- 6
- resourceNames: Specifies the names of resources that this role can manage.
- 7
- logs: Refers to the log resources that can be created.
- 8
- verbs: The actions allowed on the resources.
- 9
- create: Grants permission to create new logs in the Loki system.
3.3.1.2.3. Writing audit logs
The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-audit-logs
rules: 1
- apiGroups: 2
  - loki.grafana.com 3
  resources: 4
  - audit 5
  resourceNames: 6
  - logs 7
  verbs: 8
  - create 9
- 1
- rules: Defines the permissions granted by this ClusterRole.
- 2
- apiGroups: Specifies the API group loki.grafana.com.
- 3
- loki.grafana.com: The API group responsible for Loki logging resources.
- 4
- resources: Refers to the resource type this role manages, in this case, audit.
- 5
- audit: Specifies that the role manages audit logs within Loki.
- 6
- resourceNames: Defines the specific resources that the role can access.
- 7
- logs: Refers to the logs that can be managed under this role.
- 8
- verbs: The actions allowed on the resources.
- 9
- create: Grants permission to create new audit logs.
3.3.1.2.4. Writing infrastructure logs
The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system.
Sample YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-logging-write-infrastructure-logs
rules: 1
- apiGroups: 2
  - loki.grafana.com 3
  resources: 4
  - infrastructure 5
  resourceNames: 6
  - logs 7
  verbs: 8
  - create 9
- 1
- rules: Specifies the permissions this ClusterRole grants.
- 2
- apiGroups: Specifies the API group for Loki-related resources.
- 3
- loki.grafana.com: The API group managing the Loki logging system.
- 4
- resources: Defines the resource type that this role can interact with.
- 5
- infrastructure: Refers to infrastructure-related resources that this role manages.
- 6
- resourceNames: Specifies the names of resources this role can manage.
- 7
- logs: Refers to the log resources related to infrastructure.
- 8
- verbs: The actions permitted by this role.
- 9
- create: Grants permission to create infrastructure logs in the Loki system.
3.3.1.2.5. ClusterLogForwarder editor role
The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterlogforwarder-editor-role
rules: 1
- apiGroups: 2
  - observability.openshift.io 3
  resources: 4
  - clusterlogforwarders 5
  verbs: 6
  - create 7
  - delete 8
  - get 9
  - list 10
  - patch 11
  - update 12
  - watch 13
- 1
- rules: Specifies the permissions this ClusterRole grants.
- 2
- apiGroups: Refers to the OpenShift-specific API group.
- 3
- observability.openshift.io: The API group for managing observability resources, like logging.
- 4
- resources: Specifies the resources this role can manage.
- 5
- clusterlogforwarders: Refers to the log forwarding resources in OpenShift.
- 6
- verbs: Specifies the actions allowed on the ClusterLogForwarders.
- 7
- create: Grants permission to create new ClusterLogForwarders.
- 8
- delete: Grants permission to delete existing ClusterLogForwarders.
- 9
- get: Grants permission to retrieve information about specific ClusterLogForwarders.
- 10
- list: Allows listing all ClusterLogForwarders.
- 11
- patch: Grants permission to partially modify ClusterLogForwarders.
- 12
- update: Grants permission to update existing ClusterLogForwarders.
- 13
- watch: Grants permission to monitor changes to ClusterLogForwarders.
3.3.2. Modifying log level in collector
To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace, debug, info, warn, error, or off.
Example log level annotation
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  annotations:
    observability.openshift.io/log-level: debug
# ...
3.3.3. Managing the Operator
The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged:
- Managed
- (default) The operator will drive the logging resources to match the desired state in the CLF spec.
- Unmanaged
- The operator will not take any action related to the logging components.
This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged.
3.3.4. Structure of the ClusterLogForwarder
The CLF has a spec section that contains the following key components:
- Inputs
- Select log messages to be forwarded. Built-in input types application, infrastructure, and audit forward logs from different parts of the cluster. You can also define custom inputs.
forward logs from different parts of the cluster. You can also define custom inputs. - Outputs
- Define destinations to forward logs to. Each output has a unique name and type-specific configuration.
- Pipelines
- Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names.
- Filters
- Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline.
3.3.4.1. Inputs
Inputs are configured in an array under spec.inputs. There are three built-in input types:
- application
- Selects logs from all application containers, excluding those in infrastructure namespaces such as default, openshift, or any namespace with the kube- or openshift- prefix.
- infrastructure
- Selects logs from infrastructure components running in the default and openshift namespaces, and from node logs.
- audit
- Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd.

Users can define custom inputs of type application that select logs from specific namespaces or by using pod labels, as shown in the following sketch.
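A minimal sketch of such a custom input, with placeholder names, reusing the includes syntax shown earlier in this document:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  inputs:
  - name: my-app-input
    type: application
    application:
      includes:
      - namespace: <my_namespace>
# ...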
3.3.4.2. Outputs
Outputs are configured in an array under spec.outputs. Each output must have a unique name and a type. Supported types are:
- azureMonitor
- Forwards logs to Azure Monitor.
- cloudwatch
- Forwards logs to AWS CloudWatch.
- elasticsearch
- Forwards logs to an external Elasticsearch instance.
- googleCloudLogging
- Forwards logs to Google Cloud Logging.
- http
- Forwards logs to a generic HTTP endpoint.
- kafka
- Forwards logs to a Kafka broker.
- loki
- Forwards logs to a Loki logging backend.
- lokistack
- Forwards logs to the logging-supported combination of Loki and a web proxy with OpenShift Container Platform authentication integration. LokiStack’s proxy uses OpenShift Container Platform authentication to enforce multi-tenancy.
- otlp
- Forwards logs using the OpenTelemetry Protocol.
- splunk
- Forwards logs to Splunk.
- syslog
- Forwards logs to an external syslog server.
Each output type has its own configuration fields.
3.3.5. Configuring OTLP output
Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding.
The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Procedure
Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation:

Example ClusterLogForwarder CR

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  annotations:
    observability.openshift.io/tech-preview-otlp-output: "enabled" 1
  name: clf-otlp
spec:
  serviceAccount:
    name: <service_account_name>
  outputs:
  - name: otlp
    type: otlp
    otlp:
      tuning:
        compression: gzip
        deliveryMode: AtLeastOnce
        maxRetryDuration: 20
        maxWrite: 10M
        minRetryDuration: 5
      url: <otlp_url> 2
  pipelines:
  - inputRefs:
    - application
    - infrastructure
    - audit
    name: otlp-logs
    outputRefs:
    - otlp

- 1
- Use this annotation to enable the OTLP output, which is a Technology Preview feature.
- 2
- The URL of the OTLP receiver endpoint.
The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to OTLP by using the OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework.
3.3.5.1. Pipelines
Pipelines are configured in an array under spec.pipelines. Each pipeline must have a unique name and consists of:
- inputRefs
- Names of inputs whose logs should be forwarded to this pipeline.
- outputRefs
- Names of outputs to send logs to.
- filterRefs
- (optional) Names of filters to apply.
The order of filterRefs matters, because the filters are applied sequentially. Earlier filters can drop messages before they are processed by later filters, as shown in the following sketch.
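The following minimal sketch ties the three reference lists together. The application input is built in, while the filter and output names (drop-debug-logs, my-loki) are hypothetical and must match entries defined elsewhere in the same CR.
spec:
  pipelines:
  - name: app-to-loki           # unique pipeline name
    inputRefs:
    - application               # built-in input type
    filterRefs:
    - drop-debug-logs           # hypothetical filter name; filters run in listed order
    outputRefs:
    - my-loki                   # hypothetical output name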
3.3.5.2. Filters
Filters are configured in an array under spec.filters. They can match incoming log messages based on the value of structured fields and modify or drop them.
Administrators can configure the following types of filters, which are described in the sections that follow: detectMultilineException, drop, prune, and kubeAPIAudit.
3.3.5.3. Enabling multi-line exception detection
This filter enables multi-line error detection of container logs.
Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.
Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.
Example java exception
java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)
- To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder custom resource (CR) contains a filter of type detectMultilineException under .spec.filters, and that a pipeline references the filter.
Example ClusterLogForwarder CR
apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>
3.3.5.3.1. Details
When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.
The collector supports the following languages:
- Java
- JS
- Ruby
- Python
- Golang
- PHP
- Dart
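As an illustration of the behavior described above, the Java exception shown earlier would be collected as one record whose message field holds the concatenated lines. This is a hypothetical sketch of the relevant field only, not actual collector output.
message: |
  java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
   at testjava.Main.handle(Main.java:47)
   at testjava.Main.printMe(Main.java:19)
   at testjava.Main.main(Main.java:10)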
3.3.5.4. Configuring content filters to drop unwanted log records
When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration.
Procedure
Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions:
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: <filter_name>
    type: drop 1
    drop: 2
    - test: 3
      - field: .kubernetes.labels."foo-bar/baz" 4
        matches: .+ 5
      - field: .kubernetes.pod_name
        notMatches: "my-pod" 6
  pipelines:
  - name: <pipeline_name> 7
    filterRefs: ["<filter_name>"]
# ...
- 1
- Specifies the type of filter. The drop filter drops log records that match the filter configuration.
- 2
- Specifies configuration options for applying the drop filter.
- 3
- Specifies the configuration for tests that are used to evaluate whether a log record is dropped.
  - If all the conditions specified for a test are true, the test passes and the log record is dropped.
  - When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped.
  - If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false.
- 4
- Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alphanumeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz". You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied.
- 5
- Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both.
- 6
- Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both.
- 7
- Specifies the pipeline that the drop filter is applied to.
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
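To confirm that the forwarder accepted the filter configuration, you can inspect the status conditions of the CR. This verification step is a general sketch rather than part of the official procedure; the name and namespace are placeholders.
$ oc get clusterlogforwarder <name> -n <namespace> -o yaml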
Additional examples
The following additional example shows how you can configure the drop filter to only keep higher priority log records:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: important
    type: drop
    drop:
    - test:
      - field: .message
        notMatches: "(?i)critical|error"
      - field: .level
        matches: "info|warning"
# ...
In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for the test to evaluate to true:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: important
    type: drop
    drop:
    - test:
      - field: .kubernetes.namespace_name
        matches: "^open"
    - test:
      - field: .log_type
        matches: "application"
      - field: .kubernetes.pod_name
        notMatches: "my-pod"
# ...
3.3.5.5. Overview of API audit filter
OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field:
- None: The event is dropped.
- Metadata: Audit metadata is included; request and response bodies are removed.
- Request: Audit metadata and the request body are included; the response body is removed.
- RequestResponse: All data is included: metadata, request body, and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster.
The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy, while providing the following additional functions:
- Wildcards
- Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-* matches openshift-apiserver or openshift-authentication. The resource */status matches Pod/status or Deployment/status.
- Default Rules
. - Default Rules
- Events that do not match any rule in the policy are filtered as follows:
  - Read-only system events such as get, list, and watch are dropped.
  - Service account write events that occur within the same namespace as the service account are dropped.
  - All other events are forwarded, subject to any configured rate limits.
To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule.
- Omit Response Codes
- A list of integer HTTP status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists the status codes for which no events are created. The default value is [404, 409, 422, 429]. If the value is an empty list, [], no status codes are omitted.
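For instance, to stop omitting events by status code entirely, you could set the list to empty. This sketch assumes the field is spelled omitResponseCodes in the CR, matching the camel-case style of the omitStages field in the example policy later in this section; the filter name is a placeholder.
filters:
- name: my-policy               # hypothetical filter name
  type: kubeAPIAudit
  kubeAPIAudit:
    omitResponseCodes: []       # assumed field name; an empty list omits no status codes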
The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site.
You must have the collect-audit-logs cluster role to collect the audit logs. The following example is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration.
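If the role has not been bound yet, a binding along the following lines grants it to the collector's service account; the service account and namespace names are placeholders.
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z <service_account_name> -n <namespace>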
Example audit policy
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccount:
    name: <service_account_name>
  pipelines:
  - name: my-pipeline
    inputRefs: audit 1
    filterRefs: my-policy 2
  filters:
  - name: my-policy
    type: kubeAPIAudit
    kubeAPIAudit:
      # Don't generate audit events for all requests in RequestReceived stage.
      omitStages:
      - "RequestReceived"
      rules:
      # Log pod changes at RequestResponse level
      - level: RequestResponse
        resources:
        - group: ""
          resources: ["pods"]
      # Log "pods/log", "pods/status" at Metadata level
      - level: Metadata
        resources:
        - group: ""
          resources: ["pods/log", "pods/status"]
      # Don't log requests to a configmap called "controller-leader"
      - level: None
        resources:
        - group: ""
          resources: ["configmaps"]
          resourceNames: ["controller-leader"]
      # Don't log watch requests by the "system:kube-proxy" on endpoints or services
      - level: None
        users: ["system:kube-proxy"]
        verbs: ["watch"]
        resources:
        - group: "" # core API group
          resources: ["endpoints", "services"]
      # Don't log authenticated requests to certain non-resource URL paths.
      - level: None
        userGroups: ["system:authenticated"]
        nonResourceURLs:
        - "/api*" # Wildcard matching.
        - "/version"
      # Log the request body of configmap changes in kube-system.
      - level: Request
        resources:
        - group: "" # core API group
          resources: ["configmaps"]
        # This rule only applies to resources in the "kube-system" namespace.
        # The empty string "" can be used to select non-namespaced resources.
        namespaces: ["kube-system"]
      # Log configmap and secret changes in all other namespaces at the Metadata level.
      - level: Metadata
        resources:
        - group: "" # core API group
          resources: ["secrets", "configmaps"]
      # Log all other resources in core and extensions at the Request level.
      - level: Request
        resources:
        - group: "" # core API group
        - group: "extensions" # Version of group should NOT be included.
      # A catch-all rule to log all other requests at the Metadata level.
      - level: Metadata
- 1
- The audit log type is specified as the input to the pipeline.
- 2
- The name of the audit filter that the pipeline applies.
3.3.5.6. Filtering application logs at input by including label expressions or a matching label key and values
You can include application logs based on label expressions or a matching label key and its values by using the input selector.
Procedure
Add a configuration for a filter to the input spec in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values:
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs
    application:
      selector:
        matchExpressions:
        - key: env 1
          operator: In 2
          values: ["prod", "qa"] 3
        - key: zone
          operator: NotIn
          values: ["east", "west"]
        matchLabels: 4
          app: one
          name: app1
    type: application
# ...
- 1
- Specifies the label key to match.
- 2
- Specifies the operator. Valid operators include In, NotIn, Exists, and DoesNotExist.
- 3
- Specifies an array of string values. If the operator value is either Exists or DoesNotExist, the value array must be empty.
- 4
- Specifies an exact key or value mapping.
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
3.3.5.7. Configuring content filters to prune log records
When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations.
Procedure
Add a configuration for a prune filter to the filters spec in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths:
Important
If both in and notIn are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array.
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: <filter_name>
    type: prune 1
    prune: 2
      in: [.kubernetes.annotations, .kubernetes.namespace_id] 3
      notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4
  pipelines:
  - name: <pipeline_name> 5
    filterRefs: ["<filter_name>"]
# ...
- 1
- Specify the type of filter. The prune filter prunes log records by configured fields.
- 2
- Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alphanumeric characters and underscores (a-zA-Z0-9_), for example, .kubernetes.namespace_name. If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz".
- 3
- Optional: Any fields that are specified in this array are removed from the log record.
- 4
- Optional: Any fields that are not specified in this array are removed from the log record.
- 5
- Specify the pipeline that the prune filter is applied to.
Note
The prune filter exempts the .log_type, .log_source, and .message fields.
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
3.3.6. Filtering the audit and infrastructure log inputs by source
You can define the list of audit and infrastructure sources from which to collect logs by using the input selector.
Procedure
Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources:
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs1
    type: infrastructure
    infrastructure:
      sources: 1
      - node
  - name: mylogs2
    type: audit
    audit:
      sources: 2
      - kubeAPI
      - openshiftAPI
      - ovn
# ...
- 1
- Specifies the list of infrastructure sources to collect. The valid sources include:
  - node: Journal log from the node
  - container: Logs from the workloads deployed in the namespaces
- 2
- Specifies the list of audit sources to collect. The valid sources include:
  - kubeAPI: Logs from the Kubernetes API servers
  - openshiftAPI: Logs from the OpenShift API servers
  - auditd: Logs from a node auditd service
  - ovn: Logs from an open virtual network service
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
3.3.7. Filtering application logs at input by including or excluding the namespace or container name
You can include or exclude the application logs based on the namespace and container name by using the input selector.
Procedure
Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR.
The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names:
Example ClusterLogForwarder CR
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs
    application:
      includes:
      - namespace: "my-project" 1
        container: "my-container" 2
      excludes:
      - container: "other-container*" 3
        namespace: "other-namespace" 4
    type: application
# ...
- 1
- Specifies the namespace from which to collect the logs.
- 2
- Specifies the container from which to collect the logs.
- 3
- Specifies the container to exclude. You can use the wildcard character (*) in the name, as shown in this example.
- 4
- Specifies the namespace to exclude.
Note
The excludes field takes precedence over the includes field.
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
3.4. Storing logs with LokiStack
You can configure a LokiStack CR to store application, audit, and infrastructure-related logs.
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries.
For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days.
3.4.1. Loki deployment sizing
Sizing for Loki follows the format of 1x.<size>, where the value 1x is the number of instances and <size> specifies performance capabilities.
The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction.
Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs.
It is not possible to change the number 1x for the deployment size.
| | 1x.demo | 1x.pico [6.1+ only] | 1x.extra-small | 1x.small | 1x.medium |
|---|---|---|---|---|---|
| Data transfer | Demo use only | 50GB/day | 100GB/day | 500GB/day | 2TB/day |
| Queries per second (QPS) | Demo use only | 1-25 QPS at 200ms | 1-25 QPS at 200ms | 25-50 QPS at 200ms | 25-75 QPS at 200ms |
| Replication factor | None | 2 | 2 | 2 | 2 |
| Total CPU requests | None | 7 vCPUs | 14 vCPUs | 34 vCPUs | 54 vCPUs |
| Total CPU requests if using the ruler | None | 8 vCPUs | 16 vCPUs | 42 vCPUs | 70 vCPUs |
| Total memory requests | None | 17Gi | 31Gi | 67Gi | 139Gi |
| Total memory requests if using the ruler | None | 18Gi | 35Gi | 83Gi | 171Gi |
| Total disk requests | 40Gi | 590Gi | 430Gi | 430Gi | 590Gi |
| Total disk requests if using the ruler | 80Gi | 910Gi | 750Gi | 750Gi | 910Gi |
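The deployment size is selected through the size field of the LokiStack CR. The following minimal sketch assumes an S3 object storage secret named logging-loki-s3 and the conventional instance name logging-loki in the openshift-logging namespace; adjust these values for your environment.
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki                # assumed instance name
  namespace: openshift-logging
spec:
  size: 1x.small                    # one of the sizes from the table above
  storage:
    secret:
      name: logging-loki-s3         # assumed secret with object storage credentials
      type: s3
  storageClassName: <storage_class_name>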
3.4.2. Prerequisites
- You have installed the Loki Operator by using the CLI or web console.
- You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder.
- The serviceAccount is assigned the collect-audit-logs, collect-application-logs, and collect-infrastructure-logs cluster roles.
3.4.3. Core Setup and Configuration
Role-based access controls, basic monitoring, and pod placement to deploy Loki.
3.4.4. Authorizing LokiStack rules RBAC permissions
Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain the necessary role-based access control (RBAC) permissions for users.
The following cluster roles for alerting and recording rules are available for LokiStack:
| Rule name | Description |
|---|---|
| alertingrules.loki.grafana.com-v1-admin | Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch AlertingRule resources. |
| alertingrules.loki.grafana.com-v1-crdview | Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to AlertingRule resources, but cannot modify or manage these resources. |
| alertingrules.loki.grafana.com-v1-edit | Users with this role have permission to create, update, and delete AlertingRule resources. |
| alertingrules.loki.grafana.com-v1-view | Users with this role can read AlertingRule resources. |
| recordingrules.loki.grafana.com-v1-admin | Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch RecordingRule resources. |
| recordingrules.loki.grafana.com-v1-crdview | Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to RecordingRule resources, but cannot modify or manage these resources. |
| recordingrules.loki.grafana.com-v1-edit | Users with this role have permission to create, update, and delete RecordingRule resources. |
| recordingrules.loki.grafana.com-v1-view | Users with this role can read RecordingRule resources. |
3.4.4.1. Examples
To apply cluster roles for a user, you must bind an existing cluster role to a specific username.
Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster.
The following example command gives the specified user create, read, update, and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster:
Example cluster role binding command for alerting rule CRUD permissions in a specific namespace
$ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>
The following command gives the specified user administrator permissions for alerting rules in all namespaces:
Example cluster role binding command for administrator permissions
$ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>
3.4.5. Creating a log-based alerting rule with Loki
The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions:
- If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule.
- If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule.
- If an AlertingRule CR includes an invalid LogQL expr, it is an invalid alerting rule.
- If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule.
- If none of the above applies, an alerting rule is considered valid.
| Tenant type | Valid namespaces for AlertingRule CRs |
|---|---|
| application | <your_application_namespace> |
| audit | openshift-logging |
| infrastructure | openshift-*, kube-*, default |
Procedure
Create an AlertingRule custom resource (CR):
Example infrastructure AlertingRule CR
apiVersion: loki.grafana.com/v1
kind: AlertingRule
metadata:
  name: loki-operator-alerts
  namespace: openshift-operators-redhat