Chapter 4. Logging 5.5
4.1. Logging 5.5 Release Notes
The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
4.1.1. Logging 5.5.16
This release includes OpenShift Logging Bug Fix Release 5.5.16.
4.1.1.1. Bug fixes
- Before this update, the LokiStack gateway cached authorized requests too broadly, which could produce incorrect authorization results. With this update, the LokiStack gateway caches on a more fine-grained basis, which resolves the issue. (LOG-4434)
4.1.1.2. CVEs
4.1.2. Logging 5.5.14
This release includes OpenShift Logging Bug Fix Release 5.5.14.
4.1.2.1. Bug fixes
- Before this update, the Vector collector occasionally panicked with the following error message in its log: `thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9`. With this update, the error has been resolved. (LOG-4279)
4.1.2.2. CVEs
4.1.3. Logging 5.5.13
This release includes OpenShift Logging Bug Fix Release 5.5.13.
4.1.3.1. Bug fixes
None.
4.1.3.2. CVEs
4.1.4. Logging 5.5.11
This release includes OpenShift Logging Bug Fix Release 5.5.11.
4.1.4.1. Bug fixes
- Before this update, a time range could not be selected in the OpenShift Container Platform web console by clicking and dragging over the logs histogram. With this update, clicking and dragging can be used to successfully select a time range. (LOG-4102)
- Before this update, clicking the Show Resources link in the OpenShift Container Platform web console had no effect. With this update, the Show Resources link correctly toggles the display of resources for each log entry. (LOG-4117)
4.1.4.2. CVEs
- CVE-2021-26341
- CVE-2021-33655
- CVE-2021-33656
- CVE-2022-1462
- CVE-2022-1679
- CVE-2022-1789
- CVE-2022-2196
- CVE-2022-2663
- CVE-2022-2795
- CVE-2022-3028
- CVE-2022-3239
- CVE-2022-3522
- CVE-2022-3524
- CVE-2022-3564
- CVE-2022-3566
- CVE-2022-3567
- CVE-2022-3619
- CVE-2022-3623
- CVE-2022-3625
- CVE-2022-3627
- CVE-2022-3628
- CVE-2022-3707
- CVE-2022-3970
- CVE-2022-4129
- CVE-2022-20141
- CVE-2022-24765
- CVE-2022-25265
- CVE-2022-29187
- CVE-2022-30594
- CVE-2022-36227
- CVE-2022-39188
- CVE-2022-39189
- CVE-2022-39253
- CVE-2022-39260
- CVE-2022-41218
- CVE-2022-41674
- CVE-2022-42703
- CVE-2022-42720
- CVE-2022-42721
- CVE-2022-42722
- CVE-2022-43750
- CVE-2022-47929
- CVE-2023-0394
- CVE-2023-0461
- CVE-2023-1195
- CVE-2023-1582
- CVE-2023-2491
- CVE-2023-23454
- CVE-2023-27535
4.1.5. Logging 5.5.10
This release includes OpenShift Logging Bug Fix Release 5.5.10.
4.1.5.1. Bug fixes
- Before this update, the logging view plugin of the OpenShift web console showed only generic error text when the LokiStack was not reachable. With this update, the plugin shows an error message with details on how to fix the unreachable LokiStack. (LOG-2874)
4.1.5.2. CVEs
4.1.6. Logging 5.5.9
This release includes OpenShift Logging Bug Fix Release 5.5.9.
4.1.6.1. Bug fixes
- Before this update, a problem with the Fluentd collector caused it to not capture OAuth login events stored in `/var/log/auth-server/audit.log`. This led to incomplete collection of login events from the OAuth service. With this update, the Fluentd collector resolves this issue by capturing all login events from the OAuth service, including those stored in `/var/log/auth-server/audit.log`, as expected. (LOG-3730)
- Before this update, when structured parsing was enabled and messages were forwarded to multiple destinations, they were not deep copied. This resulted in some of the received logs including the structured message, while others did not. With this update, the configuration generation has been modified to deep copy messages before JSON parsing. As a result, all received logs now have structured messages included, even when they are forwarded to multiple destinations. (LOG-3767)
4.1.6.2. CVEs
4.1.7. Logging 5.5.8
This release includes OpenShift Logging Bug Fix Release 5.5.8.
4.1.7.1. Bug fixes
- Before this update, the `priority` field was missing from `systemd` logs due to an error in how the collector set `level` fields. With this update, these fields are set correctly, resolving the issue. (LOG-3630)
4.1.7.2. CVEs
4.1.8. Logging 5.5.7
This release includes OpenShift Logging Bug Fix Release 5.5.7.
4.1.8.1. Bug fixes
- Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expressions, resolving the issue. (LOG-3534)
- Before this update, the `ClusterLogForwarder` custom resource (CR) did not pass TLS credentials for syslog output to Fluentd, resulting in errors during forwarding. With this update, credentials pass correctly to Fluentd, resolving the issue. (LOG-3533)
4.1.8.2. CVEs
- CVE-2021-46848
- CVE-2022-3821
- CVE-2022-35737
- CVE-2022-42010
- CVE-2022-42011
- CVE-2022-42012
- CVE-2022-42898
- CVE-2022-43680
4.1.9. Logging 5.5.6
This release includes OpenShift Logging Bug Fix Release 5.5.6.
4.1.9.1. Bug fixes
- Before this update, the Pod Security admission controller added the label `podSecurityLabelSync = true` to the `openshift-logging` namespace. This resulted in the specified security labels being overwritten, and as a result collector pods would not start. With this update, the label `podSecurityLabelSync = false` preserves the security labels, and collector pods deploy as expected. (LOG-3340)
- Before this update, the Operator installed the console view plugin even when it was not enabled on the cluster. This caused the Operator to crash. With this update, if an account for a cluster does not have the console view enabled, the Operator functions normally and does not install the console view. (LOG-3407)
- Before this update, a prior fix for a regression where the status of the Elasticsearch deployment was not being updated caused the Operator to crash unless the Red Hat Elasticsearch Operator was deployed. With this update, that fix has been reverted, so the Operator is now stable but re-introduces the previous issue related to the reported status. (LOG-3428)
- Before this update, the Loki Operator only deployed one replica of the LokiStack gateway regardless of the chosen stack size. With this update, the number of replicas is correctly configured according to the selected size. (LOG-3478)
- Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. (LOG-3341)
- Before this update, the logging view plugin contained a feature incompatible with certain versions of OpenShift Container Platform. With this update, the correct release stream of the plugin resolves the issue. (LOG-3467)
- Before this update, reconciliation of the `ClusterLogForwarder` custom resource would incorrectly report a degraded status for one or more pipelines, causing the collector pods to restart every 8 to 10 seconds. With this update, reconciliation of the `ClusterLogForwarder` custom resource processes correctly, resolving the issue. (LOG-3469)
- Before this update, the `outputDefaults` field of the `ClusterLogForwarder` custom resource applied its settings to every declared Elasticsearch output type. With this update, the behavior matches the enhancement specification: the settings apply only to the default managed Elasticsearch store. (LOG-3342)
- Before this update, the OpenShift CLI (`oc`) `must-gather` script did not complete because the OpenShift CLI (`oc`) needs a folder with write permission to build its cache. With this update, the OpenShift CLI (`oc`) has write permissions to a folder, and the `must-gather` script completes successfully. (LOG-3472)
- Before this update, the Loki Operator webhook server caused TLS errors. With this update, the Loki Operator webhook PKI is managed by the Operator Lifecycle Manager's dynamic webhook management, resolving the issue. (LOG-3511)
4.1.9.2. CVEs
4.1.10. Logging 5.5.5
This release includes OpenShift Logging Bug Fix Release 5.5.5.
4.1.10.1. Bug fixes
- Before this update, Kibana had a fixed `24h` OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the `accessTokenInactivityTimeout` field was set to a value lower than `24h`. With this update, Kibana's OAuth cookie expiration time synchronizes to the `accessTokenInactivityTimeout`, with a default value of `24h`. (LOG-3305)
- Before this update, Vector parsed the message field when JSON parsing was enabled without also defining `structuredTypeKey` or `structuredTypeName` values. With this update, a value is required for either `structuredTypeKey` or `structuredTypeName` when writing structured logs to Elasticsearch; a configuration sketch follows this list. (LOG-3284)
- Before this update, the `FluentdQueueLengthIncreasing` alert could fail to fire when there was a cardinality issue with the set of labels returned from this alert expression. This update reduces the labels to only those required for the alert. (LOG-3226)
- Before this update, Loki did not have support for reaching external storage in a disconnected cluster. With this update, proxy environment variables and proxy trusted CA bundles are included in the container image to support these connections. (LOG-2860)
- Before this update, OpenShift Container Platform web console users could not choose the `ConfigMap` object that includes the CA certificate for Loki, causing pods to operate without the CA. With this update, web console users can select the config map, resolving the issue. (LOG-3310)
- Before this update, the CA key was used as the volume name for mounting the CA into Loki, causing error states when the CA key included non-conforming characters (such as dots). With this update, the volume name is standardized to an internal string, which resolves the issue. (LOG-3332)
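As context for LOG-3284, the following is a minimal sketch of a `ClusterLogForwarder` configuration that defines `structuredTypeKey`, with `structuredTypeName` as a fallback, when writing structured logs to the default Elasticsearch store; the `logFormat` label is an illustrative choice:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.logFormat # index name taken from this pod label
      structuredTypeName: nologformat                # fallback when the label is absent
  pipelines:
  - inputRefs: [ application ]
    outputRefs: [ default ]
    parse: json                                      # enable JSON parsing for this pipeline
```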
4.1.10.2. CVEs
- CVE-2016-3709
- CVE-2020-35525
- CVE-2020-35527
- CVE-2020-36516
- CVE-2020-36558
- CVE-2021-3640
- CVE-2021-30002
- CVE-2022-0168
- CVE-2022-0561
- CVE-2022-0562
- CVE-2022-0617
- CVE-2022-0854
- CVE-2022-0865
- CVE-2022-0891
- CVE-2022-0908
- CVE-2022-0909
- CVE-2022-0924
- CVE-2022-1016
- CVE-2022-1048
- CVE-2022-1055
- CVE-2022-1184
- CVE-2022-1292
- CVE-2022-1304
- CVE-2022-1355
- CVE-2022-1586
- CVE-2022-1785
- CVE-2022-1852
- CVE-2022-1897
- CVE-2022-1927
- CVE-2022-2068
- CVE-2022-2078
- CVE-2022-2097
- CVE-2022-2509
- CVE-2022-2586
- CVE-2022-2639
- CVE-2022-2938
- CVE-2022-3515
- CVE-2022-20368
- CVE-2022-21499
- CVE-2022-21618
- CVE-2022-21619
- CVE-2022-21624
- CVE-2022-21626
- CVE-2022-21628
- CVE-2022-22624
- CVE-2022-22628
- CVE-2022-22629
- CVE-2022-22662
- CVE-2022-22844
- CVE-2022-23960
- CVE-2022-24448
- CVE-2022-25255
- CVE-2022-26373
- CVE-2022-26700
- CVE-2022-26709
- CVE-2022-26710
- CVE-2022-26716
- CVE-2022-26717
- CVE-2022-26719
- CVE-2022-27404
- CVE-2022-27405
- CVE-2022-27406
- CVE-2022-27950
- CVE-2022-28390
- CVE-2022-28893
- CVE-2022-29581
- CVE-2022-30293
- CVE-2022-34903
- CVE-2022-36946
- CVE-2022-37434
- CVE-2022-39399
4.1.11. Logging 5.5.4
This release includes OpenShift Logging Bug Fix Release 5.5.4.
4.1.11.1. Bug fixes
- Before this update, an error in the query parser of the logging view plugin caused parts of the logs query to disappear if the query contained curly brackets `{}`. This made the queries invalid, leading to errors being returned for valid queries. With this update, the parser correctly handles these queries. (LOG-3042)
- Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. (LOG-3049)
- Before this update, no alerts were implemented to support the collector implementation of Vector. This change adds Vector alerts and deploys separate alerts, depending upon the chosen collector implementation. (LOG-3127)
- Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled. (LOG-3138)
- Before this update, a prior refactoring of the logging `must-gather` scripts removed the expected location for the artifacts. This update reverts that change to write artifacts to the `/must-gather` folder. (LOG-3213)
- Before this update, on certain clusters, the Prometheus exporter would bind on IPv4 instead of IPv6. After this update, Fluentd detects the IP version and binds to `0.0.0.0` for IPv4 or `[::]` for IPv6. (LOG-3162)
4.1.11.2. CVEs
4.1.12. Logging 5.5.3
This release includes OpenShift Logging Bug Fix Release 5.5.3.
4.1.12.1. Bug fixes
- Before this update, log entries that had structured messages included the original message field, which made the entry larger. This update removes the message field for structured logs to reduce the increased size. (LOG-2759)
- Before this update, the collector configuration excluded logs from `collector`, `default-log-store`, and `visualization` pods, but was unable to exclude logs archived in a `.gz` file. With this update, archived logs stored as `.gz` files of `collector`, `default-log-store`, and `visualization` pods are also excluded. (LOG-2844)
- Before this update, when requests to an unavailable pod were sent through the gateway, no alert would warn of the disruption. With this update, individual alerts will generate if the gateway has issues completing a write or read request. (LOG-2884)
- Before this update, pod metadata could be altered by Fluentd plugins because the values were passed through the pipeline by reference. This update ensures each log message receives a copy of the pod metadata so that each message processes independently. (LOG-3046)
- Before this update, selecting unknown severity in the OpenShift Console Logs view excluded logs with a `level=unknown` value. With this update, logs without a level and with `level=unknown` values are visible when filtering by unknown severity. (LOG-3062)
- Before this update, log records sent to Elasticsearch had an extra field named `write-index` that contained the name of the index to which the logs needed to be sent. This field is not a part of the data model. After this update, this field is no longer sent. (LOG-3075)
- With the introduction of the new built-in Pod Security Admission Controller, pods not configured in accordance with the enforced security standards defined globally or on the namespace level cannot run. With this update, the Operator and collectors allow privileged execution and run without security audit warnings or errors. (LOG-3077)
- Before this update, the Operator removed any custom outputs defined in the `ClusterLogForwarder` custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the `ClusterLogForwarder` custom resource. (LOG-3095)
4.1.12.2. CVEs
4.1.13. Logging 5.5.2
This release includes OpenShift Logging Bug Fix Release 5.5.2.
4.1.13.1. Bug fixes
- Before this update, alerting rules for the Fluentd collector did not adhere to the OpenShift Container Platform monitoring style guidelines. This update modifies those alerts to include the namespace label, resolving the issue. (LOG-1823)
- Before this update, the index management rollover script failed to generate a new index name whenever there was more than one hyphen character in the name of the index. With this update, index names generate correctly. (LOG-2644)
- Before this update, the Kibana route was setting a `caCertificate` value without a certificate present. With this update, no `caCertificate` value is set. (LOG-2661)
- Before this update, a change in the collector dependencies caused it to issue a warning message for unused parameters. With this update, removing unused configuration parameters resolves the issue. (LOG-2859)
- Before this update, pods created for deployments that Loki Operator created were mistakenly scheduled on nodes with non-Linux operating systems, if such nodes were available in the cluster the Operator was running in. With this update, the Operator attaches an additional node-selector to the pod definitions which only allows scheduling the pods on Linux-based nodes. (LOG-2895)
- Before this update, the OpenShift Console Logs view did not filter logs by severity due to a LogQL parser issue in the LokiStack gateway. With this update, a parser fix resolves the issue and the OpenShift Console Logs view can filter by severity. (LOG-2908)
- Before this update, a refactoring of the Fluentd collector plugins removed the timestamp field for events. This update restores the timestamp field, sourced from the event’s received time. (LOG-2923)
- Before this update, the absence of a `level` field in audit logs caused an error in Vector logs. With this update, the addition of a `level` field in the audit log record resolves the issue. (LOG-2961)
- Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. (LOG-3053)
- Before this update, each rollover job created empty indices when the `ClusterLogForwarder` custom resource had JSON parsing defined. With this update, new indices are not empty. (LOG-3063)
- Before this update, when the user deleted the LokiStack after an update to Loki Operator 5.5, resources originally created by Loki Operator 5.4 remained. With this update, the owner references of those resources point to the 5.5 LokiStack. (LOG-2945)
- Before this update, a user was not able to view the application logs of namespaces they had access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding that allow users to read application logs. (LOG-2918)
- Before this update, users with cluster-admin privileges were not able to properly view infrastructure and audit logs using the logging console. With this update, the authorization check has been extended to also recognize users in cluster-admin and dedicated-admin groups as admins. (LOG-2970)
4.1.13.2. CVEs
4.1.14. Logging 5.5.1
This release includes OpenShift Logging Bug Fix Release 5.5.1.
4.1.14.1. Enhancements
- This enhancement adds an Aggregated Logs tab to the Pod Details page of the OpenShift Container Platform web console when the Logging Console Plug-in is in use. This enhancement is only available on OpenShift Container Platform 4.10 and later. (LOG-2647)
- This enhancement adds Google Cloud Logging as an output option for log forwarding. (LOG-1482)
4.1.14.2. Bug fixes
- Before this update, the Operator did not ensure that the pod was ready, which caused the cluster to reach an inoperable state during a cluster restart. With this update, the Operator marks new pods as ready before continuing to a new pod during a restart, which resolves the issue. (LOG-2745)
- Before this update, Fluentd would sometimes not recognize that the Kubernetes platform rotated the log file and would no longer read log messages. This update corrects that by setting the configuration parameter suggested by the upstream development team. (LOG-2995)
- Before this update, the addition of multi-line error detection caused internal routing to change and forward records to the wrong destination. With this update, the internal routing is correct. (LOG-2801)
- Before this update, changing the OpenShift Container Platform web console’s refresh interval created an error when the Query field was empty. With this update, changing the interval is not an available option when the Query field is empty. (LOG-2917)
4.1.14.3. CVEs
4.1.15. Logging 5.5.0
This release includes OpenShift Logging Bug Fix Release 5.5.0.
4.1.15.1. Enhancements
- With this update, you can forward structured logs from different containers within the same pod to different indices. To use this feature, you must configure the pipeline with multi-container support and annotate the pods. (LOG-1296)
JSON formatting of logs varies by application. Because creating too many indices impacts performance, limit your use of this feature to creating indices for logs that have incompatible JSON formats. Use queries to separate logs from different namespaces, or applications with compatible JSON formats.
- With this update, you can filter logs with Elasticsearch outputs by using the Kubernetes common labels `app.kubernetes.io/component`, `app.kubernetes.io/managed-by`, `app.kubernetes.io/part-of`, and `app.kubernetes.io/version`. Non-Elasticsearch output types can use all labels included in `kubernetes.labels`. (LOG-2388)
- With this update, clusters with AWS Security Token Service (STS) enabled can use STS authentication to forward logs to Amazon CloudWatch; see the sketch after this list. (LOG-1976)
- With this update, the Loki Operator and the Vector collector move from Technical Preview to General Availability. Full feature parity with prior releases is pending, and some APIs remain Technical Previews. See the Logging with the LokiStack section for details.
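For LOG-1976, a minimal sketch of forwarding to CloudWatch with STS might look like the following; the secret name, role ARN, region, and output name are illustrative placeholders:

```yaml
# Secret referencing the IAM role that STS assumes (placeholder ARN).
apiVersion: v1
kind: Secret
metadata:
  name: cw-sts-secret
  namespace: openshift-logging
stringData:
  role_arn: arn:aws:iam::123456789012:role/<your_cloudwatch_role>
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: cw                  # illustrative output name
    type: cloudwatch
    cloudwatch:
      groupBy: logType        # group log streams by log type
      region: us-east-2       # placeholder region
    secret:
      name: cw-sts-secret
  pipelines:
  - name: to-cloudwatch       # illustrative pipeline name
    inputRefs: [ infrastructure ]
    outputRefs: [ cw ]
```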
4.1.15.2. Bug fixes
- Before this update, clusters configured to forward logs to Amazon CloudWatch wrote rejected log files to temporary storage, causing cluster instability over time. With this update, chunk backup for all storage options has been disabled, resolving the issue. (LOG-2746)
- Before this update, the Operator was using versions of some APIs that are deprecated and planned for removal in future versions of OpenShift Container Platform. This update moves dependencies to the supported API versions. (LOG-2656)
- Before this update, multiple `ClusterLogForwarder` pipelines configured for multiline error detection caused the collector to go into a `crashloopbackoff` error state. This update fixes the issue where multiple configuration sections had the same unique ID. (LOG-2241)
- Before this update, the collector could not save non-UTF-8 symbols to the Elasticsearch storage logs. With this update, the collector encodes non-UTF-8 symbols, resolving the issue. (LOG-2203)
- Before this update, non-Latin characters displayed incorrectly in Kibana. With this update, Kibana displays all valid UTF-8 symbols correctly. (LOG-2784)
4.1.15.3. CVEs
4.2. Getting started with logging 5.5
This overview of the logging deployment process is provided for ease of reference. It is not a substitute for full documentation. For new installations, Vector and LokiStack are recommended.
As of logging version 5.5, you can choose either the Fluentd or Vector collector implementation, and either Elasticsearch or LokiStack as the log store. Documentation for logging is in the process of being updated to reflect these underlying component changes.
The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
Prerequisites
- LogStore preference: Elasticsearch or LokiStack
- Collector implementation preference: Fluentd or Vector
- Credentials for your log forwarding outputs
As of logging version 5.4.3 the Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
Install the Operator for the log store you want to use.
- For Elasticsearch, install the OpenShift Elasticsearch Operator.
- For LokiStack, install the Loki Operator, and then create a `LokiStack` custom resource (CR) instance.
- Install the Red Hat OpenShift Logging Operator.
- Create a `ClusterLogging` custom resource (CR) instance, and select your collector implementation.

  Note: As of logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.

- Create a `ClusterLogForwarder` custom resource (CR) instance, as shown in the sketch after this list.
- Create a secret for the selected output pipeline.
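The following is a minimal sketch of those last two steps, assuming an external Elasticsearch output; the output name `external-es`, the URL, and the secret name `es-secret` are illustrative placeholders:

```yaml
# Secret holding credentials for the output pipeline (values are placeholders).
apiVersion: v1
kind: Secret
metadata:
  name: es-secret
  namespace: openshift-logging
stringData:
  username: <username>
  password: <password>
---
# ClusterLogForwarder routing application logs to the external store.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: external-es                            # illustrative output name
    type: elasticsearch
    url: https://elasticsearch.example.com:9200  # placeholder URL
    secret:
      name: es-secret
  pipelines:
  - name: application-logs
    inputRefs: [ application ]
    outputRefs: [ external-es ]
```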
4.3. Understanding logging architecture
The logging subsystem consists of these logical components:
- Collector - Reads container log data from each node and forwards log data to configured outputs.
- Store - Stores log data for analysis; the default output for the forwarder.
- Visualization - Graphical interface for searching, querying, and viewing stored logs.
These components are managed by Operators and Custom Resource (CR) YAML files.
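For illustration, a single `ClusterLogging` custom resource declares all three components. The following is a minimal sketch, not a production configuration; the node count and replica values are illustrative:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:                  # Store component
    type: elasticsearch
    elasticsearch:
      nodeCount: 3           # illustrative size
      redundancyPolicy: SingleRedundancy
  visualization:             # Visualization component
    type: kibana
    kibana:
      replicas: 1
  collection:                # Collector component
    type: vector
```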
The logging subsystem for Red Hat OpenShift collects container logs and node logs. These are categorized into types:
- `application` - Container logs generated by non-infrastructure containers.
- `infrastructure` - Container logs from the `kube-*` and `openshift-*` namespaces, and node logs from `journald`.
- `audit` - Logs from `auditd`, `kube-apiserver`, `openshift-apiserver`, and `ovn` if enabled.
The logging collector is a daemonset that deploys pods to each OpenShift Container Platform node. System and infrastructure logs are generated by journald log messages from the operating system, the container runtime, and OpenShift Container Platform.
Container logs are generated by containers running in pods on the cluster. Each container generates a separate log stream. The collector collects the logs from these sources and forwards them internally or externally as configured in the `ClusterLogForwarder` custom resource.
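To make that routing concrete, the following is a minimal `ClusterLogForwarder` sketch that sends all three log types to the default internal store; the pipeline name is an illustrative placeholder:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: all-to-default                              # illustrative name
    inputRefs: [ application, infrastructure, audit ] # the three log types
    outputRefs: [ default ]                           # the internal default log store
```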
4.4. Administering your logging deployment
4.4.1. Deploying Red Hat OpenShift Logging Operator using the web console
You can use the OpenShift Container Platform web console to deploy the Red Hat OpenShift Logging Operator.
The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
Procedure
To deploy the Red Hat OpenShift Logging Operator using the OpenShift Container Platform web console:
Install the Red Hat OpenShift Logging Operator:
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Type Logging in the Filter by keyword field.
- Choose Red Hat OpenShift Logging from the list of available Operators, and click Install.
- Select stable or stable-5.y as the Update Channel.

  Note: The `stable` channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to `stable-X`, where `X` is the version of logging you have installed.

- Ensure that A specific namespace on the cluster is selected under Installation Mode.
- Ensure that Operator recommended namespace is openshift-logging under Installed Namespace.
- Select Enable Operator recommended cluster monitoring on this Namespace.
- Select an option for Update approval.
- The Automatic option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
- The Manual option requires a user with appropriate credentials to approve the Operator update.
- Select Enable or Disable for the Console plugin.
- Click Install.
- Verify that the Red Hat OpenShift Logging Operator is installed by switching to the Operators → Installed Operators page.
- Ensure that Red Hat OpenShift Logging is listed in the openshift-logging project with a Status of Succeeded.
- Create a ClusterLogging instance.

  Note: The form view of the web console does not include all available options. The YAML view is recommended for completing your setup.

- In the collection section, select a Collector Implementation.

  Note: As of logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.

- In the logStore section, select a type.

  Note: As of logging version 5.4.3, the Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
- Click Create.
4.4.2. Deploying the Loki Operator using the web console
You can use the OpenShift Container Platform web console to install the Loki Operator.
Prerequisites
- Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation)
Procedure
To install the Loki Operator using the OpenShift Container Platform web console:
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Type Loki in the Filter by keyword field.
- Choose Loki Operator from the list of available Operators, and click Install.
- Select stable or stable-5.y as the Update Channel.

  Note: The `stable` channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to `stable-X`, where `X` is the version of logging you have installed.

- Ensure that All namespaces on the cluster is selected under Installation Mode.
- Ensure that openshift-operators-redhat is selected under Installed Namespace.
- Select Enable Operator recommended cluster monitoring on this Namespace.

  This option sets the `openshift.io/cluster-monitoring: "true"` label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the `openshift-operators-redhat` namespace.

- Select an option for Update approval.
- The Automatic option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
- The Manual option requires a user with appropriate credentials to approve the Operator update.
- Click Install.
- Verify that the Loki Operator is installed by switching to the Operators → Installed Operators page.
- Ensure that Loki Operator is listed with Status as Succeeded in all the projects.
- Create a `Secret` YAML file that uses the `access_key_id` and `access_key_secret` fields to specify your credentials, and the `bucketnames`, `endpoint`, and `region` fields to define the object storage location. AWS is used in the following example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: logging-loki-s3
  namespace: openshift-logging
stringData:
  access_key_id: AKIAIOSFODNN7EXAMPLE
  access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  bucketnames: s3-bucket-name
  endpoint: https://s3.eu-central-1.amazonaws.com
  region: eu-central-1
```
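Then create the secret on the cluster, for example with the CLI; the file name is an illustrative placeholder:

```console
$ oc apply -f logging-loki-s3.yaml
```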
- Select Create instance under LokiStack on the Details tab. Then select YAML view. Paste in the following template, substituting values where appropriate:
```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki 1
  namespace: openshift-logging
spec:
  size: 1x.small 2
  storage:
    schemas:
    - version: v12
      effectiveDate: '2022-06-01'
    secret:
      name: logging-loki-s3 3
      type: s3 4
  storageClassName: <storage_class_name> 5
  tenants:
    mode: openshift-logging
```
1. Name should be `logging-loki`.
2. Select your Loki deployment size.
3. Define the secret used for your log storage.
4. Define the corresponding storage type.
5. Enter the name of an existing storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using `oc get storageclasses`.

- Apply the configuration:
$ oc apply -f logging-loki.yaml
- Create or edit a `ClusterLogging` CR:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack
    lokistack:
      name: logging-loki
  collection:
    type: vector
```
Apply the configuration:
$ oc apply -f cr-lokistack.yaml
4.4.3. Installing from OperatorHub using the CLI
Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc
command to create or update a Subscription
object.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with `cluster-admin` permissions.
- The `oc` command installed on your local system.
Procedure
- View the list of Operators available to the cluster from OperatorHub:
$ oc get packagemanifests -n openshift-marketplace
Example output
```text
NAME                                CATALOG               AGE
3scale-operator                     Red Hat Operators     91m
advanced-cluster-management         Red Hat Operators     91m
amq7-cert-manager                   Red Hat Operators     91m
...
couchbase-enterprise-certified      Certified Operators   91m
crunchy-postgres-operator           Certified Operators   91m
mongodb-enterprise                  Certified Operators   91m
...
etcd                                Community Operators   91m
jaeger                              Community Operators   91m
kubefed                             Community Operators   91m
...
```
Note the catalog for your desired Operator.
- Inspect your desired Operator to verify its supported install modes and available channels:
$ oc describe packagemanifests <operator_name> -n openshift-marketplace
- An Operator group, defined by an `OperatorGroup` object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.

  The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the `AllNamespaces` or `SingleNamespace` mode. If the Operator you intend to install uses the `AllNamespaces` mode, then the `openshift-operators` namespace already has an appropriate Operator group in place.

  However, if the Operator uses the `SingleNamespace` mode and you do not already have an appropriate Operator group in place, you must create one.

  Note: The web console version of this procedure handles the creation of the `OperatorGroup` and `Subscription` objects automatically behind the scenes for you when choosing `SingleNamespace` mode.

- Create an `OperatorGroup` object YAML file, for example `operatorgroup.yaml`:

  Example `OperatorGroup` object

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <namespace>
spec:
  targetNamespaces:
  - <namespace>
```
- Create the `OperatorGroup` object:
object:$ oc apply -f operatorgroup.yaml
- Create a `Subscription` object YAML file to subscribe a namespace to an Operator, for example `sub.yaml`:

  Example `Subscription` object

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: openshift-operators 1
spec:
  channel: <channel_name> 2
  name: <operator_name> 3
  source: redhat-operators 4
  sourceNamespace: openshift-marketplace 5
  config:
    env: 6
    - name: ARGS
      value: "-v=10"
    envFrom: 7
    - secretRef:
        name: license-secret
    volumes: 8
    - name: <volume_name>
      configMap:
        name: <configmap_name>
    volumeMounts: 9
    - mountPath: <directory_name>
      name: <volume_name>
    tolerations: 10
    - operator: "Exists"
    resources: 11
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    nodeSelector: 12
      foo: bar
```
1. For `AllNamespaces` install mode usage, specify the `openshift-operators` namespace. Otherwise, specify the relevant single namespace for `SingleNamespace` install mode usage.
2. Name of the channel to subscribe to.
3. Name of the Operator to subscribe to.
4. Name of the catalog source that provides the Operator.
5. Namespace of the catalog source. Use `openshift-marketplace` for the default OperatorHub catalog sources.
6. The `env` parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM.
7. The `envFrom` parameter defines a list of sources to populate Environment Variables in the container.
8. The `volumes` parameter defines a list of Volumes that must exist on the pod created by OLM.
9. The `volumeMounts` parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a `volumeMount` references a `volume` that does not exist, OLM fails to deploy the Operator.
10. The `tolerations` parameter defines a list of Tolerations for the pod created by OLM.
11. The `resources` parameter defines resource constraints for all the containers in the pod created by OLM.
12. The `nodeSelector` parameter defines a `NodeSelector` for the pod created by OLM.
- Create the `Subscription` object:
object:$ oc apply -f sub.yaml
At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
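As a concrete illustration, the objects for installing the Red Hat OpenShift Logging Operator from the CLI might look like the following sketch; the package name `cluster-logging` and the channel `stable` should be verified against the `oc get packagemanifests` output for your cluster:

```yaml
# Operator group for SingleNamespace install mode in openshift-logging.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
  - openshift-logging
---
# Subscription to the Red Hat OpenShift Logging Operator.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable
  name: cluster-logging              # package name; verify with oc get packagemanifests
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```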
4.4.4. Deleting Operators from a cluster using the web console
Cluster administrators can delete installed Operators from a selected namespace by using the web console.
Prerequisites
- Access to an OpenShift Container Platform cluster web console using an account with `cluster-admin` permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
- On the right side of the Operator Details page, select Uninstall Operator from the Actions list.

  An Uninstall Operator? dialog box is displayed.

- Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.
Note: This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console, and off-cluster resources that continue to run, might need manual cleanup. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.
4.4.5. Deleting Operators from a cluster using the CLI
Cluster administrators can delete installed Operators from a selected namespace by using the CLI.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with `cluster-admin` permissions.
- The `oc` command installed on your workstation.
Procedure
- Check the current version of the subscribed Operator (for example, `jaeger`) in the `currentCSV` field:
field:$ oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV
Example output
currentCSV: jaeger-operator.v1.8.2
- Delete the subscription (for example, `jaeger`):
):$ oc delete subscription jaeger -n openshift-operators
Example output
subscription.operators.coreos.com "jaeger" deleted
- Delete the CSV for the Operator in the target namespace using the `currentCSV` value from the previous step:
value from the previous step:$ oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators
Example output
clusterserviceversion.operators.coreos.com "jaeger-operator.v1.8.2" deleted
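As an optional check that is not part of the official procedure, you can confirm that the CSV is gone by listing the remaining cluster service versions:

```console
$ oc get clusterserviceversions -n openshift-operators
```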