Chapter 1. Upgrading to Logging 6


1.1. An overview of changes in Logging 6

Logging 6 is a significant upgrade from earlier releases, achieving several longstanding goals of Cluster Logging. Following are some of the notable changes:

Introduction of distinct Operators to manage logging components
  • Red Hat OpenShift Logging Operator manages both collection and forwarding.
  • Loki Operator manages storage.
  • Cluster Observability Operator (COO) manages visualization.
Removal of support for managed log storage and visualization based on Elastic products
  • Elasticsearch is replaced with Loki.
  • Kibana is replaced with the UIPlugin provided by COO.
Removal of the Fluentd log collector implementation
  • Vector is now the only supported collector implementation.
API change for log collection and forwarding
  • The API for log collection is changed from logging.openshift.io to observability.openshift.io.
  • The ClusterLogForwarder and ClusterLogging resources have been combined into a single ClusterLogForwarder resource in the new API.

1.2. An overview of steps for upgrading Logging 5 to 6

The broad steps to upgrade Logging 5 to Logging 6 are as follows:

  1. Ensure that you are not using any deprecated resources. For more information, see "Preparation for upgrading to Logging 6".
  2. Migrate log visualization from Kibana to Cluster Observability Operator (COO). For more information, see "Migrating logging visualization".
  3. Upgrade log storage. For more information, see "Upgrading log storage".
  4. Upgrade log collection and forwarding. For more information, see "Upgrading log collection and forwarding".
  5. Finally, delete resources that you no longer need. For more information, see "Deleting old resources".

1.3. Preparation for upgrading to Logging 6

Before you upgrade from Logging 5 to Logging 6, ensure that you are not using any deprecated resources. If you have not already done so, complete the following migrations:

1.3.1. Migrating storage from Elasticsearch to LokiStack

You can migrate your existing Red Hat managed Elasticsearch to LokiStack.

Prerequisites

  • You have installed Loki Operator.

Procedure

  1. Temporarily set the state of the ClusterLogging resource to Unmanaged by running the following command:

    $ oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge
  2. Remove ClusterLogging ownerReferences from the Elasticsearch resource.

    The following command ensures that the ClusterLogging resource no longer owns the Elasticsearch resource. Updates to the ClusterLogging resource’s logStore field will no longer affect the Elasticsearch resource.

    $ oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge
  3. Remove ClusterLogging ownerReferences from the Kibana resource.

    The following command ensures that ClusterLogging no longer owns the Kibana resource. Updates to the ClusterLogging resource’s visualization field will no longer affect the Kibana resource.

    $ oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge
  4. Update the ClusterLogging custom resource to use LokiStack as the log store:

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
      namespace: openshift-logging
    spec:
      managementState: "Managed"
      logStore:
        type: "lokistack"
        lokistack:
          name: logging-loki
      collection:
        type: "vector"

1.4. Migrating logging visualization

The OpenShift console UI plugin for log visualization has moved from the Red Hat OpenShift Logging Operator to the Cluster Observability Operator.

1.4.1. Deleting the logging view plugin

When updating from Logging 5 to Logging 6, delete the logging view plugin before installing the UIPlugin.

Prerequisites

  • You have administrator permissions.
  • You installed the OpenShift CLI (oc).

Procedure

  • Delete the logging view plugin by running the following command:

    $ oc get consoleplugins logging-view-plugin && oc delete consoleplugins logging-view-plugin

1.4.2. Installing the logging UI plugin by using the web console

Install the logging UI plugin by using the web console so that you can visualize logs.

Prerequisites

  • You have administrator permissions.
  • You have access to the OpenShift Container Platform web console.
  • You installed and configured Loki Operator.

Procedure

  1. Install the Cluster Observability Operator. For more information, see Installing the Cluster Observability Operator.
  2. Navigate to the Installed Operators page and select the Cluster Observability Operator. Under Provided APIs, find the UIPlugin resource and click Create Instance.
  3. Select the YAML view, and then use the following template to create a UIPlugin custom resource (CR):

    Example UIPlugin CR

    apiVersion: observability.openshift.io/v1alpha1
    kind: UIPlugin
    metadata:
      name: logging 1
    spec:
      type: Logging 2
      logging:
        lokiStack:
          name: logging-loki 3
        logsLimit: 50
        timeout: 30s
        schema: otel 4

    1 Set name to logging.
    2 Set type to Logging.
    3 The name value must match the name of your LokiStack instance. If you did not install LokiStack in the openshift-logging namespace, set the LokiStack namespace under the lokiStack configuration, as shown in the sketch after this procedure.
    4 schema is one of otel, viaq, or select. The default is viaq if no value is specified. When you choose select, you can select the mode in the UI when you run a query.
    Note

    The following are known issues for the logging UI plugin. For more information, see OU-587.

    • The schema feature is only supported in OpenShift Container Platform 4.15 and later. In earlier versions of OpenShift Container Platform, the logging UI plugin will only use the viaq attribute, ignoring any other values that might be set.
    • Non-administrator users cannot query logs using the otel attribute with logging for Red Hat OpenShift versions 5.8 to 6.2. This issue will be fixed in a future logging release. (LOG-6589)
    • In logging for Red Hat OpenShift version 5.9, the severity_text OTel attribute is not set.
  4. Click Create.
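
If your LokiStack instance is not installed in the openshift-logging namespace, set the namespace alongside the name under the lokiStack configuration. The following is a minimal sketch for illustration only; my-logging-namespace is a placeholder for the namespace where your LokiStack instance runs:

apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki
      namespace: my-logging-namespace # placeholder: namespace where the LokiStack instance is installed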

Verification

  1. Refresh the page when a pop-up message instructs you to do so.
  2. Navigate to the Observe → Logs panel, where you can run LogQL queries. You can also query logs for individual pods from the Aggregated Logs tab of a specific pod.

1.5. Upgrading log storage

The only managed log storage solution available in this release is a LokiStack, managed by the Loki Operator. This solution, previously available as the preferred alternative to the managed Elasticsearch offering, remains unchanged in its deployment process.

Important

To continue using an existing Red Hat-managed Elasticsearch or Kibana deployment provided by the elasticsearch-operator, remove the owner references from the Elasticsearch resource named elasticsearch, and the Kibana resource named kibana in the openshift-logging namespace before removing the ClusterLogging resource named instance in the same namespace.

To upgrade Loki storage, follow these steps:

  1. Update the Loki Operator. For more information, see "Updating the Loki Operator".
  2. Upgrade the LokiStack storage schema. For more information, see "Upgrading the LokiStack storage schema".

1.5.1. Updating the Loki Operator

To update the Loki Operator to a new major release version, you must modify the update channel for the Operator subscription.

Prerequisites

  • You have installed the Loki Operator.
  • You have administrator permissions.
  • You have access to the OpenShift Container Platform web console and are viewing the Administrator perspective.

Procedure

  1. Navigate to Operators → Installed Operators.
  2. Select the openshift-operators-redhat project.
  3. Click the Loki Operator.
  4. Click Subscription. In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.y, depending on your current update channel.
  5. In the Change Subscription Update Channel window, select the update channel, stable-6.y, and click Save. Note the loki-operator.v6.y.z version.

    Important

    Only update to an N+2 version, where N is your current version. For example, if you are upgrading from Logging 5.8, select stable-6.0 as the update channel. Updating to a version that is more than two versions newer is not supported.

  6. Wait for a few seconds, then click Operators → Installed Operators. Verify that the Loki Operator version matches the latest loki-operator.v6.y.z version.
  7. On the Operators → Installed Operators page, wait for the Status field to report Succeeded.
  8. Check whether the LokiStack custom resource contains the v13 schema version, and add it if it is missing. For information about correctly adding the v13 schema version, see "Upgrading the LokiStack storage schema".

1.5.2. Upgrading the LokiStack storage schema

If you are using the Red Hat OpenShift Logging Operator with the Loki Operator, the Red Hat OpenShift Logging Operator supports the v13 schema version in the LokiStack custom resource. Adding the v13 schema version is recommended because it is the schema version to be supported going forward. The schema will be upgraded to v13 when the date matches the value defined in the effectiveDate attribute.

Procedure

  • Add the v13 schema version in the LokiStack custom resource as follows:

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    # ...
    spec:
    # ...
      storage:
        schemas:
        # ...
          version: v12 1
        - effectiveDate: "<yyyy>-<mm>-<future_dd>" 2
          version: v13
    # ...
    1 Do not delete this line so that the data persists in its original schema version. Deleting the previous schema versions might lead to data loss.
    2 Set a future date that has not yet started in the Coordinated Universal Time (UTC) time zone format.
    Tip

    To edit the LokiStack custom resource, you can run the oc edit command:

    $ oc edit lokistack <name> -n openshift-logging

Verification

  • On or after the date specified in the effectiveDate attribute, check that the LokistackSchemaUpgradesRequired alert is not firing in the web console under Administrator → Observe → Alerting.
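
    You can also check which schema versions the Loki Operator has applied by inspecting the LokiStack status from the CLI. The following is a minimal sketch; it assumes a LokiStack named logging-loki in the openshift-logging namespace and that the applied schemas are reported under .status.storage.schemas:

    # Assumption: applied schemas appear under .status.storage.schemas of the LokiStack resource.
    $ oc get lokistack logging-loki -n openshift-logging -o jsonpath='{.status.storage.schemas}{"\n"}'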

1.6. Upgrading log collection and forwarding

Log collection and forwarding configurations are now specified under the new API, part of the observability.openshift.io API group.

Note

Vector is the only supported collector implementation.

To upgrade the Red Hat OpenShift Logging Operator, follow these steps:

  1. Update the log collection and forwarding configurations by going through the changes listed in "Changes to cluster logging and forwarding in Logging 6".
  2. Update the Red Hat OpenShift Logging Operator.

1.6.1. Changes to cluster logging and forwarding in Logging 6

Log collection and forwarding configurations are now specified under the new API, part of the observability.openshift.io API group. The following sections highlight the differences from the old API resources.

Note

Vector is the only supported collector implementation.

1.6.1.1. Management, resource allocation, and workload scheduling

Configuration for management state, resource requests and limits, tolerations, and node selection is now part of the new ClusterLogForwarder API.

Logging 5.x configuration

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
spec:
  managementState: "Managed"
  collection:
    resources:
      limits: {}
      requests: {}
    nodeSelector: {}
    tolerations: {}

Logging 6 configuration

apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  managementState: Managed
  collector:
    resources:
      limits: {}
      requests: {}
    nodeSelector: {}
    tolerations: {}

1.6.1.2. Input specifications

The input specification is an optional part of the ClusterLogForwarder specification. Administrators can continue to use the predefined values application, infrastructure, and audit to collect these sources.

Namespace and container inclusions and exclusions have been consolidated into a single field.

5.x application input with namespace and container includes and excludes

apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  inputs:
   - name: application-logs
     type: application
     application:
       namespaces:
       - foo
       - bar
       includes:
       - namespace: my-important
         container: main
       excludes:
       - container: too-verbose

6.x application input with namespace and container includes and excludes

apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  inputs:
   - name: application-logs
     type: application
     application:
       includes:
       - namespace: foo
       - namespace: bar
       - namespace: my-important
         container: main
       excludes:
       - container: too-verbose

Note

"application", "infrastructure", and "audit" are reserved words and cannot be used as names when defining an input.

Changes to input receivers include:

  • Explicit configuration of the type at the receiver level.
  • Port settings moved to the receiver level.

5.x input receivers

apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  inputs:
  - name: an-http
    receiver:
      http:
        port: 8443
        format: kubeAPIAudit
  - name: a-syslog
    receiver:
      type: syslog
      syslog:
        port: 9442

6.x input receivers

apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  inputs:
  - name: an-http
    type: receiver
    receiver:
      type: http
      port: 8443
      http:
        format: kubeAPIAudit
  - name: a-syslog
    type: receiver
    receiver:
      type: syslog
      port: 9442

1.6.1.3. Output specifications

High-level changes to output specifications include:

  • URL settings moved to each output type specification.
  • Tuning parameters moved to each output type specification.
  • Separation of TLS configuration from authentication.
  • Explicit configuration of keys and secret/config map for TLS and authentication.
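
For example, for an http output in the new API, the url setting now sits under the http field of the output rather than at the output level. The following is a minimal sketch for illustration only; the output name and URL are placeholders:

...
spec:
  outputs:
  - name: my-http-output # placeholder name
    type: http
    http:
      url: https://example.com:8080 # the URL now lives under the output type specification
...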

1.6.1.4. Secrets and TLS configuration

Secrets and TLS configurations are now separated into authentication and TLS configuration for each output. They must be explicitly defined in the specification rather than relying on administrators to define secrets with recognized keys. To continue using existing secrets when upgrading TLS and authorization configurations, administrators must know which keys those secrets used. The examples in this section illustrate how to configure ClusterLogForwarder secrets to forward to existing Red Hat managed log storage solutions.

Logging 6.x output configuration using service account token and config map

...
spec:
  outputs:
  - lokiStack:
      authentication:
        token:
          from: serviceAccount
      target:
        name: logging-loki
        namespace: openshift-logging
    name: my-lokistack
    tls:
      ca:
        configMapName: openshift-service-ca.crt
        key: service-ca.crt
    type: lokiStack
...

Logging 6.x output authentication and TLS configuration using secrets

...
spec:
  outputs:
  - name: my-output
    type: http
    http:
      url: https://my-secure-output:8080
    authentication:
      password:
        key: pass
        secretName: my-secret
      username:
        key: user
        secretName: my-secret
    tls:
      ca:
        key: ca-bundle.crt
        secretName: collector
      certificate:
        key: tls.crt
        secretName: collector
      key:
        key: tls.key
        secretName: collector
...

1.6.1.5. Filters and pipeline configuration

All attributes of pipelines in previous releases have been converted to filters in this release. Individual filters are defined in the filters spec and referenced by a pipeline.

5.x filters

...
spec:
  pipelines:
  - name: app-logs
    detectMultilineErrors: true
    parse: json
    labels:
      <key>: <value>
...

6.x filters and pipelines spec

...
spec:
  filters:
  - name: my-multiline
    type: detectMultilineException
  - name: my-parse
    type: parse
  - name: my-labels
    type: openshiftLabels
    openshiftLabels:
      <key>: <label>
  pipelines:
  - name: app-logs
    filterRefs:
    - my-multiline
    - my-parse
    - my-labels
...

Note

Drop, Prune, and KubeAPIAudit filters remain unchanged.

1.6.1.6. Validation and status

Most validations are now enforced when a resource is created or updated, which provides immediate feedback. This is a departure from previous releases, where all validation occurred after creation and required inspecting the resource status. Some validation still occurs after resource creation for cases where it is not possible to validate at creation or update time.

Instances of the ClusterLogForwarder.observability.openshift.io resource must satisfy the following conditions before the operator deploys the log collector:

  • Resource status conditions: Authorized, Valid, Ready
  • Spec validations: Filters, Inputs, Outputs, Pipelines

All must evaluate to the status value of True.
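
You can review these conditions in the resource status. The following is a minimal sketch that assumes a ClusterLogForwarder named collector deployed in the openshift-logging namespace; adjust the name and namespace for your deployment:

# Assumption: the forwarder is named "collector" and runs in the openshift-logging namespace.
$ oc get clusterlogforwarders.observability.openshift.io collector -n openshift-logging -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\n"}{end}'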

1.6.2. Updating the Red Hat OpenShift Logging Operator

The Red Hat OpenShift Logging Operator does not provide an automated upgrade from Logging 5.x to Logging 6.x because of the different combinations in which Logging can be configured. You must install all the different operators for managing logging separately.

You can update Red Hat OpenShift Logging Operator by either changing the subscription channel in the OpenShift Container Platform web console, or by uninstalling it. The following procedure demonstrates updating Red Hat OpenShift Logging Operator by changing the subscription channel in the OpenShift Container Platform web console.
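
If you prefer the CLI, you can also change the subscription channel with the oc patch command. The following is a minimal sketch; it assumes that the Operator subscription is named cluster-logging in the openshift-logging namespace and that stable-6.0 is your target channel, so verify both values for your cluster before running it:

# Assumption: subscription name and target channel; check them first with "oc get subscriptions -n openshift-logging".
$ oc -n openshift-logging patch subscription cluster-logging --type merge -p '{"spec":{"channel":"stable-6.0"}}'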

Important

When you migrate, all the logs that have not been compressed will be reprocessed by Vector. The reprocessing might lead to the following issues:

  • Duplicated logs during migration.
  • Too many requests to the log storage that receives the logs, or requests that reach the rate limit.
  • Disk and performance impact on the collector because it reads and processes all the old logs.
  • Additional load on the Kube API.
  • A peak in memory and CPU use by Vector until all the old logs are processed. The logs can be several GB per node.

Prerequisites

  • You have updated the log collection and forwarding configurations to the observability.openshift.io API.
  • You have administrator permissions.
  • You have access to the OpenShift Container Platform web console and are viewing the Administrator perspective.

Procedure

  1. Create a service account by running the following command:

    Note

    If your previous log forwarder is deployed in the openshift-logging namespace and named instance, earlier versions of the operator created a logcollector service account. This service account is removed when you delete cluster logging, so you need to create a new service account. Any other service account is preserved and can be used in Logging 6.x.

    $ oc create sa logging-collector -n openshift-logging
  2. Provide the required RBAC permissions to the service account.

    1. Bind the logging-collector-logs-writer cluster role to the service account so that it can write logs to the Red Hat managed LokiStack by running the following command:

      $ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging
    2. Assign permission to collect and forward application logs by running the following command:

      $ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging
    3. Assign permission to collect and forward audit logs by running the following command:

      $ oc adm policy add-cluster-role-to-user collect-audit-logs -z logging-collector -n openshift-logging
    4. Assign permission to collect and forward infrastructure logs by running the following command:

      $ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging
  3. Move Vector checkpoints to the new path.

    The Vector checkpoints in Logging v5 are located at the path /var/lib/vector/input*/checkpoints.json. Move these checkpoints to the path /var/lib/vector/<namespace>/<clusterlogforwarder cr name>/*. The following example uses openshift-logging as the namespace and collector as the ClusterLogForwarder custom resource name.

    $ ns="openshift-logging"
    $ cr="collector"
    $ for node in $(oc get nodes -o name); do oc debug $node -- chroot /host /bin/bash -c "mkdir -p /var/lib/vector/$ns/$cr" ; done
    $ for node in $(oc get nodes -o name); do oc debug $node -- chroot /host /bin/bash -c "chmod -R 755 /var/lib/vector/$ns" ; done
    $ for node in $(oc get nodes -o name); do echo "### $node ###"; oc debug $node -- chroot /host /bin/bash -c "cp -Ra /var/lib/vector/input* /var/lib/vector/$ns/$cr/"; done
  4. Update the Red Hat OpenShift Logging Operator by using the OpenShift Container Platform web console.

    1. Navigate to Operators → Installed Operators.
    2. Select the openshift-logging project.
    3. Click the Red Hat OpenShift Logging Operator.
    4. Click Subscription. In the Subscription details section, click the Update channel link.
    5. In the Change Subscription Update Channel window, select the update channel, stable-6.x, and click Save. Note the cluster-logging.v6.y.z version.

      Important

      Only update to an N+2 version, where N is your current version. For example, if you are upgrading from Logging 5.8, select stable-6.0 as the update channel. Updating to a version that is more than two versions newer is not supported.

    6. Wait for a few seconds, and then go to Operators → Installed Operators to verify that the Red Hat OpenShift Logging Operator version matches the latest cluster-logging.v6.y.z version.
    7. On the Operators → Installed Operators page, wait for the Status field to report Succeeded.

      Your existing Logging v5 resources will continue to run, but are no longer managed by your operator. These unmanaged resources can be removed after your new resources are created and ready.

1.7. Deleting old resources

1.7.1. Deleting the ClusterLogging instance

Delete the ClusterLogging instance because it is no longer needed in Logging 6.x.

Prerequisites

  • You have administrator permissions.
  • You installed the OpenShift CLI (oc).

Procedure

  • Delete the ClusterLogging instance by running the following command:

    $ oc delete clusterlogging <CR name> -n <namespace>

Verification

  1. Verify that no collector pods are running by running the following command:

    $ oc get pods -l component=collector -n <namespace>
  2. Verify that no ClusterLogForwarder.logging.openshift.io custom resource (CR) exists by running the following command:

    $ oc get clusterlogforwarders.logging.openshift.io -A
Important

If any ClusterLogForwarder.logging.openshift.io CR is listed, it belongs to the old Logging 5.x stack and must be removed. Create a backup of the CRs and delete them before deploying any ClusterLogForwarder.observability.openshift.io CR with the new API version, as shown in the following sketch.
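
The following is a minimal sketch of one way to back up and then remove the old CRs; the backup file name is a placeholder:

# Export all old 5.x forwarder CRs to a local backup file (placeholder name), then delete them.
$ oc get clusterlogforwarders.logging.openshift.io -A -o yaml > clusterlogforwarders-5x-backup.yaml
$ oc delete clusterlogforwarders.logging.openshift.io <name> -n <namespace>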

1.7.2. Deleting Red Hat OpenShift Logging 5 CRDs

Delete the Red Hat OpenShift Logging 5 custom resource definitions (CRDs) when upgrading to Logging 6.

Prerequisites

  • You have administrator permissions.
  • You installed the OpenShift CLI (oc).

Procedure

  • Delete the clusterlogforwarders.logging.openshift.io and clusterloggings.logging.openshift.io CRDs by running the following command:

    $ oc delete crd clusterloggings.logging.openshift.io clusterlogforwarders.logging.openshift.io

1.7.3. Uninstalling Elasticsearch

You can uninstall Elasticsearch by using the OpenShift Container Platform web console. Uninstall Elasticsearch only if it is not used by other components, such as Jaeger, Service Mesh, or Kiali.

Prerequisites

  • You have administrator permissions.
  • If you have not already removed the Red Hat OpenShift Logging Operator and related resources, you must remove references to Elasticsearch from the ClusterLogging custom resource (see the sketch after this list).
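
    The following is a minimal sketch of one way to remove the logStore reference; it assumes a ClusterLogging resource named instance in the openshift-logging namespace, so adapt the name and namespace to your environment:

    # Assumption: the ClusterLogging resource is named "instance" and lives in the openshift-logging namespace.
    $ oc -n openshift-logging patch clusterlogging/instance --type=json -p '[{"op":"remove","path":"/spec/logStore"}]'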

Procedure

  1. Go to the Administration → Custom Resource Definitions page, and click Elasticsearch.
  2. On the Custom Resource Definition Details page, click Instances.
  3. Click the Options menu next to the instance, and then click Delete Elasticsearch.
  4. Go to the Administration → Custom Resource Definitions page.
  5. Click the Options menu next to Elasticsearch, and select Delete Custom Resource Definition.
  6. Go to the Operators → Installed Operators page.
  7. Click the Options menu next to the OpenShift Elasticsearch Operator, and then click Uninstall Operator.
  8. Optional: Delete the openshift-operators-redhat project.

    Important

    Do not delete the openshift-operators-redhat project if other global Operators are installed in this namespace.

    1. Go to the Home → Projects page.
    2. Click the Options menu next to the openshift-operators-redhat project, and then click Delete Project.
    3. Confirm the deletion by typing openshift-operators-redhat in the dialog box, and then click Delete.