Chapter 1. Upgrading to logging 6
1.1. An overview of changes in Logging 6
Logging 6 is a significant upgrade from earlier releases, achieving several longstanding goals of Cluster Logging. Following are some of the notable changes:
- Introduction of distinct Operators to manage logging components
- Red Hat OpenShift Logging Operator manages both collection and forwarding.
- Loki Operator manages storage.
- Cluster Observability Operator (COO) manages visualization.
- Removal of support for managed log storage and visualization based on Elastic products
- Elasticsearch is replaced with Loki.
- Kibana is replaced with the UIPlugin provided by COO.
- Removal of the Fluentd log collector implementation
- Vector is now the supported collection service.
- API change for log collection and forwarding
- The API for log collection has changed from logging.openshift.io to observability.openshift.io.
- The ClusterLogForwarder and ClusterLogging resources have been combined under the ClusterLogForwarder resource in the new API.
1.2. An overview of steps for upgrading Logging 5 to 6
The broad steps to upgrade from Logging 5 to Logging 6 are as follows:
- Ensure that you are not using any deprecated resources. For more information, see "Preparation for upgrading to Logging 6".
- Migrate log visualization from Kibana to Cluster Observability Operator (COO). For more information, see "Migrating logging visualization".
- Upgrade log storage. For more information, see "Upgrading log storage".
- Upgrade log collection and forwarding. For more information, see "Upgrading log collection and forwarding".
- Finally, delete resources that you no longer need. For more information, see "Deleting old resources".
1.3. Preparation for upgrading to Logging 6
Before you can upgrade from Logging 5 to Logging 6, you must ensure that you are not using any deprecated resources. If you have not already done so, complete the following migrations:
- Migrate collection service from Fluentd to Vector. For more information, see how to migrate Fluentd to Vector in Red Hat OpenShift Logging 5.5+ versions.
- Migrate storage from Elasticsearch to LokiStack. For more information, see "Migrating storage from Elasticsearch to LokiStack".
1.3.1. Migrating storage from Elasticsearch to LokiStack
You can migrate your existing Red Hat managed Elasticsearch to LokiStack.
Prerequisites
- You have installed Loki Operator.
Procedure
- Temporarily set the state of the ClusterLogging resource to Unmanaged:

$ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge

- Remove the ClusterLogging ownerReferences from the Elasticsearch resource.

The following command ensures that the ClusterLogging resource no longer owns the Elasticsearch resource. Updates to the ClusterLogging resource's logStore field will no longer affect the Elasticsearch resource.

$ oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge
- Remove the ClusterLogging ownerReferences from the Kibana resource.

The following command ensures that the ClusterLogging resource no longer owns the Kibana resource. Updates to the ClusterLogging resource's visualization field will no longer affect the Kibana resource.

$ oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge

- Update the ClusterLogging resource to use LokiStack as the log store.
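A minimal sketch of the updated ClusterLogging resource, assuming a LokiStack instance named logging-loki in the openshift-logging namespace:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack          # switches the managed log store from Elasticsearch to LokiStack
    lokistack:
      name: logging-loki     # assumption: the name of your LokiStack instance
  collection:
    type: vector
```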
1.4. Migrating logging visualization
The OpenShift console UI plugin for log visualization has moved from the Cluster Logging Operator to the Cluster Observability Operator.
1.4.1. Deleting the logging view plugin
When updating from Logging 5 to Logging 6, delete the logging view plugin before installing the UIPlugin.
Prerequisites
- You have administrator permissions.
- You installed the OpenShift CLI (oc).
Procedure
Delete the logging view plugin by running the following command:

$ oc get consoleplugins logging-view-plugin && oc delete consoleplugins logging-view-plugin
1.4.2. Installing the logging UI plugin by using the web console
Install the logging UI plugin by using the web console so that you can visualize logs.
Prerequisites
- You have administrator permissions.
- You have access to the OpenShift Container Platform web console.
- You installed and configured Loki Operator.
Procedure
- Install the Cluster Observability Operator. For more information, see Installing the Cluster Observability Operator.
- Navigate to the Installed Operators page. Under Provided APIs, select ClusterObservabilityOperator. Find the UIPlugin resource and click Create Instance. Select the YAML view, and then use the following template to create a UIPlugin custom resource (CR).
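A minimal sketch of such a UIPlugin CR, assuming a LokiStack instance named logging-loki installed in the openshift-logging namespace:

```yaml
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging            # must be named "logging"
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki   # assumption: must match the name of your LokiStack instance
    schema: viaq           # one of otel, viaq, or select
```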
- Set name to logging.
- Set type to Logging.
- The lokiStack name value must match the name of your LokiStack instance. If you did not install LokiStack in the openshift-logging namespace, set the LokiStack namespace under the lokiStack configuration.
- The schema value is one of otel, viaq, or select. The default is viaq if no value is specified. When you choose select, you can select the mode in the UI when you run a query.
Note: These are the known issues for the logging UI plugin. For more information, see OU-587.
- The schema feature is only supported in OpenShift Container Platform 4.15 and later. In earlier versions of OpenShift Container Platform, the logging UI plugin will only use the viaq attribute, ignoring any other values that might be set.
- Non-administrator users cannot query logs by using the otel attribute with logging for Red Hat OpenShift versions 5.8 to 6.2. This issue will be fixed in a future logging release. (LOG-6589)
- In logging for Red Hat OpenShift version 5.9, the severity_text Otel attribute is not set.
- Click Create.
Verification
- Refresh the page when a pop-up message instructs you to do so.
- Navigate to Observe → Logs, where you can run LogQL queries. You can also query logs for individual pods from the Aggregated Logs tab of a specific pod.
1.5. Upgrading log storage
The only managed log storage solution available in this release is a LokiStack, managed by the Loki Operator. This solution, previously available as the preferred alternative to the managed Elasticsearch offering, remains unchanged in its deployment process.
To continue using an existing Red Hat-managed Elasticsearch or Kibana deployment provided by the elasticsearch-operator, remove the owner references from the Elasticsearch resource named elasticsearch and the Kibana resource named kibana in the openshift-logging namespace before removing the ClusterLogging resource named instance in the same namespace.
To upgrade Loki storage, follow these steps:
- Update the Loki Operator. For more information, see "Updating the Loki Operator".
- Upgrade the LokiStack storage schema. For more information, see "Upgrading the LokiStack storage schema".
1.5.1. Updating the Loki Operator
To update the Loki Operator to a new major release version, you must modify the update channel for the Operator subscription.
Prerequisites
- You have installed the Loki Operator.
- You have administrator permissions.
- You have access to the OpenShift Container Platform web console and are viewing the Administrator perspective.
Procedure
- Navigate to Operators → Installed Operators.
- Select the openshift-operators-redhat project.
- Click the Loki Operator.
- Click Subscription. In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.y, depending on your current update channel.
- In the Change Subscription Update Channel window, select the stable-6.y update channel, and click Save. Note the loki-operator.v6.y.z version.

Important: Only update to an N+2 version, where N is your current version. For example, if you are upgrading from Logging 5.8, select stable-6.0 as the update channel. Updating to a version that is more than two versions newer is not supported.

- Wait for a few seconds, then click Operators → Installed Operators. Verify that the Loki Operator version matches the latest loki-operator.v6.y.z version.
- On the Operators → Installed Operators page, wait for the Status field to report Succeeded.
- Check whether the LokiStack custom resource contains the v13 schema version, and add it if it is missing. For correctly adding the v13 schema version, see "Upgrading the LokiStack storage schema".
1.5.2. Upgrading the LokiStack storage schema
If you are using the Red Hat OpenShift Logging Operator with the Loki Operator, the Red Hat OpenShift Logging Operator supports the v13 schema version in the LokiStack custom resource. Adding the v13 schema version is recommended because it is the schema version to be supported going forward. The schema is upgraded to v13 when the date matches the value defined in the effectiveDate attribute.
Procedure
Add the v13 schema version in the LokiStack custom resource.

Tip: To edit the LokiStack custom resource, you can run the oc edit command:

$ oc edit lokistack <name> -n openshift-logging
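A sketch of the storage.schemas section after the change; the dates below are illustrative, and the effectiveDate of the new v13 entry must be a future date so that data written under the earlier schema remains readable:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki        # assumption: the name of your LokiStack instance
  namespace: openshift-logging
spec:
  storage:
    schemas:
    - version: v12
      effectiveDate: "2022-06-01"   # existing entry, unchanged
    - version: v13
      effectiveDate: "2024-10-25"   # illustrative future date on which v13 takes effect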
Verification
- On or after the date specified in the effectiveDate attribute, check that there is no LokistackSchemaUpgradesRequired alert in the web console under Administrator → Observe → Alerting.
1.6. Upgrading log collection and forwarding
Log collection and forwarding configurations are now specified under the new API, which is part of the observability.openshift.io API group.
Vector is the only supported collector implementation.
To upgrade the Red Hat OpenShift Logging Operator, follow these steps:
- Update the log collection and forwarding configurations by going through the changes listed in "Changes to cluster logging and forwarding in Logging 6".
- Update the Red Hat OpenShift Logging Operator.
1.6.1. Changes to cluster logging and forwarding in Logging 6
Log collection and forwarding configurations are now specified under the new API, which is part of the observability.openshift.io API group. The following sections highlight the differences from the old API resources.
Vector is the only supported collector implementation.
1.6.1.1. Management, resource allocation, and workload scheduling
Configuration for management state, resource requests and limits, tolerations, and node selection is now part of the new ClusterLogForwarder API.
In Logging 5.x, these settings were configured in the ClusterLogging custom resource. In Logging 6, they are configured directly in the ClusterLogForwarder custom resource.
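A sketch of the Logging 6 shape; the resource name, service account name, and resource values are illustrative:

```yaml
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector                  # illustrative name
  namespace: openshift-logging
spec:
  managementState: Managed         # management state moves into the ClusterLogForwarder spec
  serviceAccount:
    name: logging-collector        # assumption: a service account you created for the collector
  collector:                       # resource allocation and scheduling for the collector pods
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        memory: 736Mi
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      operator: Exists
      effect: NoSchedule
```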
1.6.1.2. Input specifications
The input specification is an optional part of the ClusterLogForwarder specification. Administrators can continue to use the predefined values application, infrastructure, and audit to collect these sources.
Namespace and container inclusions and exclusions have been consolidated into a single field.
5.x application input with namespace and container includes and excludes
6.x application input with namespace and container includes and excludes
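A sketch of the consolidated 6.x form, in which namespace and container includes and excludes are listed under a single application field; the names below are illustrative:

```yaml
spec:
  inputs:
  - name: app-logs               # illustrative input name
    type: application
    application:
      includes:
      - namespace: my-namespace  # illustrative namespace
        container: my-container  # illustrative container
      excludes:
      - namespace: noisy-namespace
        container: sidecar
```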
"application", "infrastructure", and "audit" are reserved words and cannot be used as names when defining an input.
Changes to input receivers include:
- Explicit configuration of the type at the receiver level.
- Port settings moved to the receiver level.
5.x input receivers
6.x input receivers
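A sketch of 6.x input receivers, showing the receiver type configured explicitly and the port moved to the receiver level; the names and port values are illustrative:

```yaml
spec:
  inputs:
  - name: an-http-receiver       # illustrative name
    type: receiver
    receiver:
      type: http                 # receiver type is set explicitly at the receiver level
      port: 8443                 # port setting moved to the receiver level
      http:
        format: kubeAPIAudit
  - name: a-syslog-receiver
    type: receiver
    receiver:
      type: syslog
      port: 10514
```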
1.6.1.3. Output specifications
High-level changes to output specifications include:
- URL settings moved to each output type specification.
- Tuning parameters moved to each output type specification.
- Separation of TLS configuration from authentication.
- Explicit configuration of keys and secret/config map for TLS and authentication.
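As a sketch of the first two points, a 6.x output nests the URL and tuning parameters under its type-specific section; the endpoint and tuning values are illustrative:

```yaml
spec:
  outputs:
  - name: external-loki               # illustrative name
    type: loki
    loki:
      url: https://loki.example.com:3100   # URL moves under the type-specific section
      tuning:                              # tuning also moves under the type-specific section
        compression: gzip
```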
1.6.1.4. Secrets and TLS configuration
Secrets and TLS configurations are now separated into authentication and TLS configuration for each output. They must be explicitly defined in the specification rather than relying on administrators to define secrets with recognized keys. Upgrading TLS and authorization configurations requires administrators to understand previously recognized keys to continue using existing secrets. The examples in this section illustrate how to configure ClusterLogForwarder
secrets to forward to existing Red Hat managed log storage solutions.
Logging 6.x output configuration using service account token and config map
Logging 6.x output authentication and TLS configuration using secrets
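A sketch of a 6.x lokiStack output that authenticates with the collector's service account token and trusts the cluster service CA; the LokiStack name and service account name are assumptions:

```yaml
spec:
  serviceAccount:
    name: logging-collector        # assumption: the collector service account
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      target:
        name: logging-loki         # assumption: the name of your LokiStack instance
        namespace: openshift-logging
      authentication:
        token:
          from: serviceAccount     # authentication is configured explicitly per output
    tls:                           # TLS configuration is now separate from authentication
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
```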
1.6.1.5. Filters and pipeline configuration
All attributes of pipelines in previous releases have been converted to filters in this release. Individual filters are defined in the filters spec and referenced by a pipeline.
5.x filters
6.x filters and pipelines spec
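A sketch of the 6.x shape, with a filter defined once in the filters spec and referenced from a pipeline by name; the names are illustrative:

```yaml
spec:
  filters:
  - name: detect-exceptions        # illustrative filter name
    type: detectMultilineException
  pipelines:
  - name: application-logs         # illustrative pipeline name
    inputRefs:
    - application
    outputRefs:
    - default-lokistack            # assumption: an output defined elsewhere in the spec
    filterRefs:
    - detect-exceptions            # pipeline references the filter by name
```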
The Drop, Prune, and KubeAPIAudit filters remain unchanged.
1.6.1.6. Validation and status
Most validations are now enforced when a resource is created or updated, which provides immediate feedback. This is a departure from previous releases, where all validation occurred after creation and required inspecting the resource status. Some validation still occurs after resource creation for cases where validation is not possible at creation or update time.
Instances of the ClusterLogForwarder.observability.openshift.io resource must satisfy the following conditions before the Operator deploys the log collector:
- Resource status conditions: Authorized, Valid, Ready
- Spec validations: Filters, Inputs, Outputs, Pipelines
All must evaluate to the status value of True.
1.6.2. Updating the Red Hat OpenShift Logging Operator
The Red Hat OpenShift Logging Operator does not provide an automated upgrade from Logging 5.x to Logging 6.x because of the different combinations in which Logging can be configured. You must install all the different operators for managing logging separately.
You can update the Red Hat OpenShift Logging Operator either by changing the subscription channel in the OpenShift Container Platform web console, or by uninstalling and reinstalling it. The following procedure demonstrates updating the Red Hat OpenShift Logging Operator by changing the subscription channel in the OpenShift Container Platform web console.
When you migrate, all the logs that have not been compressed will be reprocessed by Vector. The reprocessing might lead to the following issues:
- Duplicated logs during migration.
- Too many requests to the log storage receiving the logs, or requests reaching the rate limit.
- Impact on the disk and performance because of reading and processing of all old logs in the collector.
- Impact on the Kube API.
- A peak in memory and CPU use by Vector until all the old logs are processed. The logs can be several GB per node.
Prerequisites
- You have updated the log collection and forwarding configurations to the observability.openshift.io API.
- You have administrator permissions.
- You have access to the OpenShift Container Platform web console and are viewing the Administrator perspective.
Procedure
- Create a service account by running the following command:

Note: If your previous log forwarder is deployed in the openshift-logging namespace and named instance, earlier versions of the Operator created a logcollector service account. This service account is removed when you delete cluster logging, so you must create a new service account. Any other service account is preserved and can be used in Logging 6.x.

$ oc create sa logging-collector -n openshift-logging

- Provide the required RBAC permissions to the service account.
Bind the logging-collector-logs-writer cluster role to the service account so that it can write logs to the Red Hat managed LokiStack:

$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging

Assign permission to collect and forward application logs by running the following command:
$ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging

Assign permission to collect and forward audit logs by running the following command:
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z logging-collector -n openshift-logging

Assign permission to collect and forward infrastructure logs by running the following command:
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging
- Move the Vector checkpoints to the new path.

The Vector checkpoints in Logging v5 are located at the path /var/lib/vector/input*/checkpoints.json. Move these checkpoints to the path /var/lib/vector/<namespace>/<clusterlogforwarder cr name>/*. The following example uses openshift-logging as the namespace and collector as the ClusterLogForwarder custom resource name.

$ ns="openshift-logging"
$ cr="collector"
$ for node in $(oc get nodes -o name); do oc debug $node -- chroot /host /bin/bash -c "mkdir -p /var/lib/vector/$ns/$cr" ; done
$ for node in $(oc get nodes -o name); do oc debug $node -- chroot /host /bin/bash -c "chmod -R 755 /var/lib/vector/$ns" ; done
$ for node in $(oc get nodes -o name); do echo "### $node ###"; oc debug $node -- chroot /host /bin/bash -c "cp -Ra /var/lib/vector/input* /var/lib/vector/$ns/$cr/"; done

- Update the Red Hat OpenShift Logging Operator by using the OpenShift Container Platform web console.
- Navigate to Operators → Installed Operators.
- Select the openshift-logging project.
- Click the Red Hat OpenShift Logging Operator.
- Click Subscription. In the Subscription details section, click the Update channel link.
- In the Change Subscription Update Channel window, select the stable-6.x update channel, and click Save. Note the cluster-logging.v6.y.z version.

Important: Only update to an N+2 version, where N is your current version. For example, if you are upgrading from Logging 5.8, select stable-6.0 as the update channel. Updating to a version that is more than two versions newer is not supported.

- Wait for a few seconds, and then go to Operators → Installed Operators to verify that the Red Hat OpenShift Logging Operator version matches the latest cluster-logging.v6.y.z version.
- On the Operators → Installed Operators page, wait for the Status field to report Succeeded.

Your existing Logging v5 resources continue to run, but they are no longer managed by the Operator. These unmanaged resources can be removed after your new resources are ready.
1.7. Deleting old resources
1.7.1. Deleting the ClusterLogging instance
Delete the ClusterLogging instance because it is no longer needed in Logging 6.x.
Prerequisites
- You have administrator permissions.
- You installed the OpenShift CLI (oc).
Procedure
Delete the ClusterLogging instance by running the following command:

$ oc delete clusterlogging <CR name> -n <namespace>
Verification
- Verify that no collector pods are running by running the following command:

$ oc get pods -l component=collector -n <namespace>

- Verify that no clusterLogForwarder.logging.openshift.io custom resource (CR) exists by running the following command:

$ oc get clusterlogforwarders.logging.openshift.io -A
If any clusterLogForwarder.logging.openshift.io CR is listed, it belongs to the old Logging 5.x stack and must be removed. Create a backup of the CRs and delete them before deploying any clusterLogForwarder.observability.openshift.io CR with the new API version.
1.7.2. Deleting Red Hat OpenShift Logging 5 CRDs
Delete the Red Hat OpenShift Logging 5 custom resource definitions (CRDs) when upgrading to Logging 6.
Prerequisites
- You have administrator permissions.
- You installed the OpenShift CLI (oc).
Procedure
Delete the clusterlogforwarders.logging.openshift.io and clusterloggings.logging.openshift.io CRDs by running the following command:

$ oc delete crd clusterloggings.logging.openshift.io clusterlogforwarders.logging.openshift.io
1.7.3. Uninstalling Elasticsearch
You can uninstall Elasticsearch by using the OpenShift Container Platform web console. Uninstall Elasticsearch only if it is not used by another component, such as Jaeger, Service Mesh, or Kiali.
Prerequisites
- You have administrator permissions.
- If you have not already removed the Red Hat OpenShift Logging Operator and related resources, you must remove references to Elasticsearch from the ClusterLogging custom resource.
Procedure
- Go to the Administration → Custom Resource Definitions page, and click Elasticsearch.
- On the Custom Resource Definition Details page, click Instances.
- Click the Options menu next to the instance, and then click Delete Elasticsearch.
- Go to the Administration → Custom Resource Definitions page.
- Click the Options menu next to Elasticsearch, and select Delete Custom Resource Definition.
- Go to the Operators → Installed Operators page.
- Click the Options menu next to the OpenShift Elasticsearch Operator, and then click Uninstall Operator.
- Optional: Delete the openshift-operators-redhat project.

Important: Do not delete the openshift-operators-redhat project if other global Operators are installed in this namespace.
Go to the Home
Projects page. -
Click the Options menu
next to the openshift-operators-redhat project, and then click Delete Project.
-
Confirm the deletion by typing
openshift-operators-redhat
in the dialog box, and then click Delete.