This documentation is for a release that is no longer maintained
See documentation for the latest supported version 3 or the latest supported version 4.
Chapter 8. Updating cluster logging
After updating the OpenShift Container Platform cluster from 4.4 to 4.5, you can then update the Elasticsearch Operator and Cluster Logging Operator from 4.4 to 4.5.
Cluster logging 4.5 introduces a new Elasticsearch version, Elasticsearch 6.8.1, and an enhanced security plug-in, Open Distro for Elasticsearch. The new Elasticsearch version introduces a new Elasticsearch data model, where the Elasticsearch data is indexed only by type: infrastructure, application, and audit. Previously, data was indexed by type (infrastructure and application) and project.
Because of the new data model, the update does not migrate existing custom Kibana index patterns and visualizations into the new version. You must re-create your Kibana index patterns and visualizations to match the new indices after updating.
Due to the nature of these changes, you are not required to update your cluster logging to 4.5. However, when you update to OpenShift Container Platform 4.6, you must update cluster logging to 4.6 at that time.
8.1. Updating cluster logging
After updating the OpenShift Container Platform cluster, you can update cluster logging from 4.4 to 4.5 by changing the subscription for the Elasticsearch Operator and the Cluster Logging Operator.
When you update:

- You must update the Elasticsearch Operator before updating the Cluster Logging Operator.
- You must update both the Elasticsearch Operator and the Cluster Logging Operator. Kibana is unusable when the Elasticsearch Operator has been updated but the Cluster Logging Operator has not.
- If you update the Cluster Logging Operator before the Elasticsearch Operator, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, delete the Cluster Logging Operator pod. When the Cluster Logging Operator pod redeploys, it creates the Kibana CR.
- If your cluster logging version is earlier than 4.4, you must upgrade cluster logging to 4.4 before updating to 4.5.
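The workaround above can be sketched as a small script. The namespace and the `name=cluster-logging-operator` pod label are assumptions to verify on your cluster; the helper itself only parses `oc get kibana -o name` output piped into it, so it can be tried without a live cluster:

```shell
# Hypothetical recovery sketch: deleting the Cluster Logging Operator pod
# forces a redeploy, which creates the missing Kibana CR. Namespace and pod
# label below are assumptions -- verify them on your cluster:
#   oc delete pod -n openshift-logging -l name=cluster-logging-operator

# has_kibana_cr: reads `oc get kibana -o name` output on stdin and reports
# whether a Kibana custom resource is listed.
has_kibana_cr() {
  if grep -q '^kibana'; then
    echo "Kibana CR present"
  else
    echo "Kibana CR missing"
  fi
}

# On a live cluster, after the operator pod redeploys:
#   oc get kibana -n openshift-logging -o name | has_kibana_cr
```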
Prerequisites
- Update the OpenShift Container Platform cluster from 4.4 to 4.5.
- Make sure the cluster logging status is healthy:
  - All pods are ready.
  - The Elasticsearch cluster is healthy.
- Back up your Elasticsearch and Kibana data.
- If your internal Elasticsearch instance uses persistent volume claims (PVCs), the PVCs must contain a logging-cluster: elasticsearch label. Without the label, the garbage collection process removes those PVCs during the upgrade and the Elasticsearch Operator creates new PVCs.

  If you are updating from an OpenShift Container Platform version earlier than 4.4.30, you must manually add the label to the Elasticsearch PVCs. For example, you can use the following command to add the label to all the Elasticsearch PVCs:

  $ oc label pvc --all -n openshift-logging logging-cluster=elasticsearch

  In OpenShift Container Platform 4.4.30 and later, the Elasticsearch Operator adds the label to the PVCs automatically.
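As a sketch, the label check can be scripted before you upgrade. The helper below only parses `oc get pvc` table output piped into it; the column layout it assumes (the `-L logging-cluster` label value as the last column) is an assumption to verify against your oc version:

```shell
# unlabeled_pvcs: reads `oc get pvc -n openshift-logging -L logging-cluster`
# table output on stdin and prints the name of every PVC whose last column
# is not the expected "elasticsearch" label value.
unlabeled_pvcs() {
  awk 'NR > 1 && $NF != "elasticsearch" { print $1 }'
}

# On a live cluster:
#   oc get pvc -n openshift-logging -L logging-cluster | unlabeled_pvcs
```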
Procedure
Update the Elasticsearch Operator:

1. From the web console, click Operators → Installed Operators.
2. Select the openshift-operators-redhat project.
3. Click the Elasticsearch Operator.
4. Click Subscription → Channel.
5. In the Change Subscription Update Channel window, select 4.5 and click Save.
6. Wait for a few seconds, then click Operators → Installed Operators.

   The Elasticsearch Operator is shown as 4.5. For example:

   Elasticsearch Operator 4.5.0-202007012112.p0 provided by Red Hat, Inc

   Wait for the Status field to report Succeeded.
Update the Cluster Logging Operator:

1. From the web console, click Operators → Installed Operators.
2. Select the openshift-logging project.
3. Click the Cluster Logging Operator.
4. Click Subscription → Channel.
5. In the Change Subscription Update Channel window, select 4.5 and click Save.
6. Wait for a few seconds, then click Operators → Installed Operators.

   The Cluster Logging Operator is shown as 4.5. For example:

   Cluster Logging 4.5.0-202007012112.p0 provided by Red Hat, Inc

   Wait for the Status field to report Succeeded.
Check the logging components:

- Ensure that all Elasticsearch pods are in the Ready status:

  $ oc get pod -n openshift-logging --selector component=elasticsearch

  Example output

  NAME                                            READY   STATUS    RESTARTS   AGE
  elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk   2/2     Running   0          31m
  elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk   2/2     Running   0          30m
  elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc     2/2     Running   0          29m
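The readiness check above can be automated with a small helper. This is a sketch, not part of the product tooling; it only parses the `oc get pod` table format piped into it:

```shell
# not_ready_pods: reads `oc get pod` table output on stdin and prints the
# name of every pod whose READY column (for example 1/2) shows fewer ready
# containers than total containers.
not_ready_pods() {
  awk 'NR > 1 { split($2, r, "/"); if (r[1] != r[2]) print $1 }'
}

# On a live cluster:
#   oc get pod -n openshift-logging --selector component=elasticsearch | not_ready_pods
```

Empty output means every Elasticsearch pod is fully ready.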
- Ensure that the Elasticsearch cluster is healthy:

  $ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- es_cluster_health

  Example output

  {
    "cluster_name" : "elasticsearch",
    "status" : "green",
  }
  ...
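The green-status check can also be scripted. This sketch greps the JSON that es_cluster_health prints, matching the field name and quoting shown in the sample output above:

```shell
# es_green: reads cluster-health JSON on stdin and succeeds only when the
# reported status is "green".
es_green() {
  if grep -q '"status" *: *"green"'; then
    echo "cluster status: green"
  else
    echo "cluster status: NOT green" >&2
    return 1
  fi
}

# On a live cluster (pod name taken from the example above):
#   oc exec -n openshift-logging -c elasticsearch \
#     elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- es_cluster_health | es_green
```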
- Ensure that the Elasticsearch cron jobs are created:

  $ oc project openshift-logging

  $ oc get cronjob

  Example output

  NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
  elasticsearch-im-app     */15 * * * *   False     0        <none>          56s
  elasticsearch-im-audit   */15 * * * *   False     0        <none>          56s
  elasticsearch-im-infra   */15 * * * *   False     0        <none>          56s

- Verify that the log store is updated to 4.5 and the indices are green:

  $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices

  Verify that the output includes the app-00000x, infra-00000x, audit-00000x, and .security indices.

  Example 8.1. Sample output with indices in a green status
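A sketch of the index check: the helper scans the `indices` output for the expected index name prefixes (the prefix list mirrors the indices named above):

```shell
# required_indices: reads `indices` output on stdin and reports each expected
# index name prefix as found or missing.
required_indices() {
  input=$(cat)
  for prefix in app- infra- audit- .security; do
    case "$input" in
      *"$prefix"*) echo "found: $prefix" ;;
      *)           echo "missing: $prefix" ;;
    esac
  done
}

# On a live cluster:
#   oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices | required_indices
```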
- Verify that the log collector is updated to 4.5:

  $ oc get ds fluentd -o json | grep fluentd-init

  Verify that the output includes a fluentd-init container:

  "containerName": "fluentd-init"
- Verify that the log visualizer is updated to 4.5 using the Kibana CRD:

  $ oc get kibana kibana -o json

  Verify that the output includes a Kibana pod with the ready status.

  Example 8.2. Sample output with a ready Kibana pod
- Verify that the Curator is updated to 4.5:

  $ oc get cronjob -o name

  Verify that the output includes the elasticsearch-delete-* and elasticsearch-rollover-* cron jobs.
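The Curator check can be scripted the same way; this sketch scans the `oc get cronjob -o name` output for the expected job name prefixes:

```shell
# curator_jobs: reads `oc get cronjob -o name` output on stdin and reports
# whether the delete and rollover cron jobs exist.
curator_jobs() {
  input=$(cat)
  for want in elasticsearch-delete- elasticsearch-rollover-; do
    case "$input" in
      *"$want"*) echo "found: $want" ;;
      *)         echo "missing: $want" ;;
    esac
  done
}

# On a live cluster:
#   oc get cronjob -n openshift-logging -o name | curator_jobs
```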
8.1.1. Post-update tasks
If you use Kibana, after the Elasticsearch Operator and Cluster Logging Operator are fully updated to 4.5, you must recreate your Kibana index patterns and visualizations. Because of changes in the security plug-in, the cluster logging upgrade does not automatically create index patterns.
8.2. Defining Kibana index patterns
An index pattern defines the Elasticsearch indices that you want to visualize. To explore and visualize data in Kibana, you must create an index pattern.
Prerequisites
- A user must have the cluster-admin role, the cluster-reader role, or both roles to view the infra and audit indices in Kibana. The default kubeadmin user has the proper permissions to view these indices.

  If you can view the pods and logs in the default, kube-, and openshift- projects, you should be able to access these indices. You can use the following command to check whether the current user has the appropriate permissions:

  $ oc auth can-i get pods/log -n <project>

  Example output

  yes

  Note: The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs.

- Elasticsearch documents must be indexed before you can create index patterns. This is done automatically, but it might take a few minutes in a new or updated cluster.
Procedure
To define index patterns and create visualizations in Kibana:
1. In the OpenShift Container Platform console, click the Application Launcher and select Logging.
2. Create your Kibana index patterns by clicking Management → Index Patterns → Create index pattern:
   - Each user must manually create index patterns when logging into Kibana the first time to see logs for their projects. Users must create an index pattern named app and use the @timestamp time field to view their container logs.
   - Each admin user must create index patterns when logged into Kibana the first time for the app, infra, and audit indices, using the @timestamp time field.
3. Create Kibana Visualizations from the new index patterns.