This documentation is for a release that is no longer maintained. See the documentation for the latest supported version 3 or the latest supported version 4.
Chapter 8. Updating cluster logging
After updating the OpenShift Container Platform cluster from 4.4 to 4.5, you can then update the OpenShift Elasticsearch Operator and Cluster Logging Operator from 4.4 to 4.5.
Cluster logging 4.5 introduces a new Elasticsearch version, Elasticsearch 6.8.1, and an enhanced security plug-in, Open Distro for Elasticsearch. The new Elasticsearch version introduces a new Elasticsearch data model, where the Elasticsearch data is indexed only by type: infrastructure, application, and audit. Previously, data was indexed by type (infrastructure and application) and project.
Because of the new data model, the update does not migrate existing custom Kibana index patterns and visualizations into the new version. You must re-create your Kibana index patterns and visualizations to match the new indices after updating.
Due to the nature of these changes, you are not required to update your cluster logging to 4.5. However, when you update to OpenShift Container Platform 4.6, you must update cluster logging to 4.6 at that time.
8.1. Updating cluster logging
After updating the OpenShift Container Platform cluster, you can update cluster logging from 4.5 to 4.6 by changing the subscription for the OpenShift Elasticsearch Operator and the Cluster Logging Operator.
When you update:

- You must update the OpenShift Elasticsearch Operator before you update the Cluster Logging Operator.
- You must update both the OpenShift Elasticsearch Operator and the Cluster Logging Operator. Kibana is unusable when the OpenShift Elasticsearch Operator has been updated but the Cluster Logging Operator has not been updated.
- If you update the Cluster Logging Operator before the OpenShift Elasticsearch Operator, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, delete the Cluster Logging Operator pod. When the Cluster Logging Operator pod redeploys, the Kibana CR is created.
- If your cluster logging version is earlier than 4.5, you must upgrade cluster logging to 4.5 before updating to 4.6.
Prerequisites

- Update the OpenShift Container Platform cluster from 4.5 to 4.6.
- Make sure the cluster logging status is healthy:
  - All pods are ready.
  - The Elasticsearch cluster is healthy.
- Back up your Elasticsearch and Kibana data.
Procedure

1. Update the OpenShift Elasticsearch Operator:

   a. From the web console, click Operators → Installed Operators.
   b. Select the openshift-operators-redhat project.
   c. Click the OpenShift Elasticsearch Operator.
   d. Click Subscription → Channel.
   e. In the Change Subscription Update Channel window, select 4.6 and click Save.
   f. Wait for a few seconds, then click Operators → Installed Operators. The OpenShift Elasticsearch Operator is shown as 4.6. For example:

      OpenShift Elasticsearch Operator 4.6.0-202007012112.p0 provided by Red Hat, Inc

   g. Wait for the Status field to report Succeeded.
2. Update the Cluster Logging Operator:

   a. From the web console, click Operators → Installed Operators.
   b. Select the openshift-logging project.
   c. Click the Cluster Logging Operator.
   d. Click Subscription → Channel.
   e. In the Change Subscription Update Channel window, select 4.6 and click Save.
   f. Wait for a few seconds, then click Operators → Installed Operators. The Cluster Logging Operator is shown as 4.6. For example:

      Cluster Logging 4.6.0-202007012112.p0 provided by Red Hat, Inc

   g. Wait for the Status field to report Succeeded.
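The web-console channel change described above for both operators can also be made from the CLI by editing each operator's OLM Subscription object. The sketch below is an illustration and not taken from this document; the subscription name is an assumption, so list your actual subscriptions first with `oc get subscription -n openshift-operators-redhat`:

```yaml
# Illustrative only: metadata.name is an assumed subscription name.
# Verify yours with: oc get subscription -n openshift-operators-redhat
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elasticsearch-operator          # assumed name
  namespace: openshift-operators-redhat
spec:
  channel: "4.6"                        # the value the console step sets
  name: elasticsearch-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

The same `spec.channel` change applies to the Cluster Logging Operator subscription in the openshift-logging project; applying it with `oc apply -f`, or patching it in place with `oc patch subscription <name> --type merge -p '{"spec":{"channel":"4.6"}}'`, triggers the same update that clicking Save in the console does.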
3. Check the logging components:

   a. Ensure that all Elasticsearch pods are in the Ready status:

      $ oc get pod -n openshift-logging --selector component=elasticsearch

      Example output:

      NAME                                            READY   STATUS    RESTARTS   AGE
      elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk   2/2     Running   0          31m
      elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk   2/2     Running   0          30m
      elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc     2/2     Running   0          29m
   b. Ensure that the Elasticsearch cluster is healthy:

      $ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- es_cluster_health

      Example output:

      {
        "cluster_name" : "elasticsearch",
        "status" : "green",
      }
      ...
   c. Ensure that the Elasticsearch cron jobs are created:

      $ oc project openshift-logging

      $ oc get cronjob

      NAME                     SCHEDULE             SUSPEND   ACTIVE   LAST SCHEDULE   AGE
      curator                  30 3,9,15,21 * * *   False     0        <none>          20s
      elasticsearch-im-app     */15 * * * *         False     0        <none>          56s
      elasticsearch-im-audit   */15 * * * *         False     0        <none>          56s
      elasticsearch-im-infra   */15 * * * *         False     0        <none>          56s
   d. Verify that the log store is updated to 4.6 and the indices are green:

      $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices

      Verify that the output includes the app-00000x, infra-00000x, audit-00000x, and .security indices.

      Example 8.1. Sample output with indices in a green status
   e. Verify that the log collector is updated to 4.6:

      $ oc get ds fluentd -o json | grep fluentd-init

      Verify that the output includes a fluentd-init container:

      "containerName": "fluentd-init"
   f. Verify that the log visualizer is updated to 4.6 using the Kibana CRD:

      $ oc get kibana kibana -o json

      Verify that the output includes a Kibana pod with the ready status.

      Example 8.2. Sample output with a ready Kibana pod
   g. Verify that the Curator is updated to 4.6:

      $ oc get cronjob -o name

      cronjob.batch/curator
      cronjob.batch/elasticsearch-im-app
      cronjob.batch/elasticsearch-im-audit
      cronjob.batch/elasticsearch-im-infra

      Verify that the output includes the elasticsearch-im-* cron jobs.
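When you run the pod-readiness check in step 3a repeatedly, for example while waiting for a rollout to settle, it can help to script it. A minimal sketch, assuming you feed it the JSON printed by `oc get pod -n openshift-logging --selector component=elasticsearch -o json` (the field paths follow the standard Kubernetes Pod schema; the helper name is an illustration, not from this document):

```python
import json

def all_pods_ready(pods_json: str) -> bool:
    """Return True if every container in every listed pod reports ready.

    Expects the JSON document printed by:
      oc get pod -n openshift-logging --selector component=elasticsearch -o json
    """
    items = json.loads(pods_json).get("items", [])
    if not items:
        return False  # no Elasticsearch pods at all is not a healthy state
    for pod in items:
        statuses = pod.get("status", {}).get("containerStatuses", [])
        # Every container in the pod must be ready (the 2/2 READY column above).
        if not statuses or not all(c.get("ready") for c in statuses):
            return False
    return True
```

The helper only inspects `status.containerStatuses`, mirroring the 2/2 READY column in the sample output; wiring it to `oc` is left to the caller.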
Post-update tasks

If you use the Log Forwarding API to forward logs, after the OpenShift Elasticsearch Operator and Cluster Logging Operator are fully updated to 4.6, you must replace your LogForwarding custom resource (CR) with a ClusterLogForwarder CR.
8.2. Updating log forwarding custom resources
The OpenShift Container Platform Log Forwarding API has been promoted from Technology Preview to Generally Available in OpenShift Container Platform 4.6. The GA release contains improvements and enhancements that require you to change your ClusterLogging custom resource (CR) and to replace your LogForwarding custom resource (CR) with a ClusterLogForwarder CR.

Sample ClusterLogForwarder instance in OpenShift Container Platform 4.6

Sample LogForwarding CR in OpenShift Container Platform 4.5

The following procedure shows each parameter you must change.
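The sample CRs referenced above did not survive in this copy of the page. As a stand-in, the following is a reconstruction of what a 4.5 Tech Preview LogForwarding CR could look like, built only from the parameter names the procedure below lists; the output name, pipeline name, and endpoint address are illustrative assumptions:

```yaml
# Illustrative 4.5 LogForwarding CR; names and addresses are assumptions
apiVersion: logging.openshift.io/v1alpha1
kind: LogForwarding
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true
  outputs:
    - name: fluentd-remote                  # assumed output name
      type: forward
      endpoint: fluentd.example.com:24224   # no URL prefix in 4.5
  pipelines:
    - name: app-logs                        # assumed pipeline name
      inputSource: logs.app
      outputRefs:
        - fluentd-remote
```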
Procedure
To update the LogForwarding CR in 4.5 to the ClusterLogForwarder CR for 4.6, make the following modifications:
1. Edit the ClusterLogging custom resource (CR) to remove the logforwardingtechpreview annotation:

   Sample ClusterLogging CR

   1 - Remove the logforwardingtechpreview annotation.
2. Export the LogForwarding CR to create a YAML file for the ClusterLogForwarder instance:

   $ oc get LogForwarding instance -n openshift-logging -o yaml | tee ClusterLogForwarder.yaml

3. Edit the YAML file to make the following modifications:
   Sample ClusterLogForwarder instance in OpenShift Container Platform 4.6

   1 - Change the apiVersion from "logging.openshift.io/v1alpha1" to "logging.openshift.io/v1".
   2 - Change the object kind from kind: "LogForwarding" to kind: "ClusterLogForwarder".
   3 - Remove the disableDefaultForwarding: true parameter.
   4 - Change the output parameter from spec.outputs.endpoint to spec.outputs.url. Add a prefix to the URL, such as https:// or tcp://, if a prefix is not present.
   5 - For Fluentd outputs, change the type from forward to fluentdForward.
   6 - Change the pipelines:
       - Change spec.pipelines.inputSource to spec.pipelines.inputRefs.
       - Change logs.infra to infrastructure.
       - Change logs.app to application.
       - Change logs.audit to audit.
   7 - Optional: Add a default pipeline to send logs to the internal Elasticsearch instance. You are not required to configure a default output.

   Note: If you want to forward logs only to the internal OpenShift Container Platform Elasticsearch instance, do not configure the Log Forwarding API.
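   The original sample YAML is missing from this copy of the page. The following sketch shows what a converted CR could look like after modifications 1 through 7 are applied; the output and pipeline names and the address are illustrative assumptions:

   ```yaml
   # Illustrative 4.6 ClusterLogForwarder CR; names and addresses are assumptions
   apiVersion: logging.openshift.io/v1    # 1: was logging.openshift.io/v1alpha1
   kind: ClusterLogForwarder              # 2: was kind: LogForwarding
   metadata:
     name: instance
     namespace: openshift-logging
   spec:                                  # 3: disableDefaultForwarding removed
     outputs:
       - name: fluentd-remote
         type: fluentdForward             # 5: was type: forward
         url: 'tcp://fluentd.example.com:24224'  # 4: was endpoint, prefix added
     pipelines:
       - name: app-logs
         inputRefs:                       # 6: was inputSource: logs.app
           - application
         outputRefs:
           - fluentd-remote
       - name: default-logs               # 7: optional pipeline to the internal store
         inputRefs:
           - infrastructure
         outputRefs:
           - default
   ```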
4. Create the CR object:

   $ oc create -f ClusterLogForwarder.yaml
For information on the new capabilities of the Log Forwarding API, see Forwarding logs to third party systems.