Chapter 4. Upgrading cluster logging
After upgrading the OpenShift Container Platform cluster from 4.1 to 4.2, you must then upgrade cluster logging from 4.1 to 4.2.
Because of a change in the default global catalog Namespace and Catalog Source, if you manually created CatalogSourceConfig and Subscription objects from YAML files, as described in the Elasticsearch installation, you must update these objects to point to the new catalog Namespace and Source before upgrading, as shown below.
4.1. Updating cluster logging
After upgrading the OpenShift Container Platform cluster, you can upgrade cluster logging from 4.1 to 4.2 by updating the Elasticsearch Operator and the Cluster Logging Operator.
Prerequisites
- Upgrade the cluster from 4.1 to 4.2.
- Make sure the cluster logging status is healthy (a quick CLI check follows this list):
  - All Pods are ready.
  - The Elasticsearch cluster is healthy.
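One way to perform this check from the CLI is sketched below; it assumes the default deployment, with cluster logging in the openshift-logging project and a ClusterLogging object named instance:

$ oc get pods -n openshift-logging                            # every Pod should be Running and fully ready
$ oc get clusterlogging instance -n openshift-logging -o yaml # inspect .status for component health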
Procedure
Edit the CatalogSourceConfig (CSC) and Subscription objects to point to the new catalog Namespace and Source:
From the CLI, get the name of the Elasticsearch CSC:

$ oc get csc --all-namespaces

NAMESPACE               NAME                                 STATUS      MESSAGE                                       AGE
openshift-marketplace   certified-operators                  Succeeded   The object has been successfully reconciled   42m
openshift-marketplace   community-operators                  Succeeded   The object has been successfully reconciled   42m
openshift-marketplace   elasticsearch                        Succeeded   The object has been successfully reconciled   27m
openshift-marketplace   installed-redhat-default             Succeeded   The object has been successfully reconciled   26m
openshift-marketplace   installed-redhat-openshift-logging   Succeeded   The object has been successfully reconciled   18m
openshift-marketplace   redhat-operators                     Succeeded   The object has been successfully reconciled   42m
Edit the file as follows:
$ oc edit csc elasticsearch -n openshift-marketplace

apiVersion: operators.coreos.com/v1
kind: CatalogSourceConfig
metadata:
  creationTimestamp: "2020-02-18T15:09:00Z"
  finalizers:
  - finalizer.catalogsourceconfigs.operators.coreos.com
  generation: 3
  name: elasticsearch
  namespace: openshift-marketplace
  resourceVersion: "17694"
  selfLink: /apis/operators.coreos.com/v1/namespaces/openshift-marketplace/catalogsourceconfigs/elasticsearch
  uid: 97c0cd55-5260-11ea-873c-02939b2f528f
spec:
  csDisplayName: Custom
  csPublisher: Custom
  packages: elasticsearch-operator
  targetNamespace: openshift-operators-redhat
  source: redhat-operators 1

1  Change the current value to redhat-operators.
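If you prefer a non-interactive change, a merge patch such as the following sketch should produce the same edit; verify that the CSC name matches the one returned by oc get csc:

$ oc patch csc elasticsearch -n openshift-marketplace \
    --type merge -p '{"spec":{"source":"redhat-operators"}}'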
Get the name of the Elasticsearch Subscription object:
$ oc get sub

NAME                  PACKAGE                  SOURCE          CHANNEL
elasticsearch-pj7pf   elasticsearch-operator   elasticsearch   preview
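Because the Subscription name is generated (note the generateName field in the object below), you can capture it in a shell variable rather than copying it by hand. This is a sketch; it assumes the Subscription lives in the openshift-operators-redhat project:

$ ES_SUB=$(oc get sub -n openshift-operators-redhat \
    -o jsonpath='{.items[?(@.spec.name=="elasticsearch-operator")].metadata.name}')
$ echo $ES_SUB
elasticsearch-pj7pf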
Edit the file as follows:
$ oc edit sub elasticsearch-pj7pf

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  creationTimestamp: "2020-02-17T17:51:18Z"
  generateName: elasticsearch-
  generation: 2
  name: elasticsearch-p5k7n
  namespace: openshift-operators-redhat
  resourceVersion: "38098"
  selfLink: /apis/operators.coreos.com/v1alpha1/namespaces/openshift-operators-redhat/subscriptions/elasticsearch-p5k7n
  uid: 19f6df33-51ae-11ea-82b9-027dfdb65ec2
spec:
  channel: "4.2"
  installPlanApproval: Automatic
  name: elasticsearch-operator
  source: redhat-operators 1
  sourceNamespace: openshift-marketplace 2
....

1  Change the current value to redhat-operators.
2  Change the current value to openshift-marketplace.
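The same change can also be made non-interactively; a sketch, using the Subscription name retrieved earlier:

$ oc patch sub elasticsearch-pj7pf -n openshift-operators-redhat --type merge \
    -p '{"spec":{"source":"redhat-operators","sourceNamespace":"openshift-marketplace"}}'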
Upgrade the Elasticsearch Operator:
- From the web console, click Operator Management.
- Change the project to all projects.
- Click the Elasticsearch Operator, which has the same name as the Elasticsearch subscription.
- Click Subscription → Channel.
- In the Change Subscription Update Channel window, select 4.2 and click Save.
- Wait for a few seconds, then click Operators → Installed Operators. The Elasticsearch Operator is shown as 4.2. For example:
Elasticsearch Operator 4.2.0-201909201915 provided by Red Hat, Inc
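If you prefer the CLI to the web console, switching the channel amounts to updating spec.channel on the same Subscription object; a sketch:

$ oc patch sub elasticsearch-pj7pf -n openshift-operators-redhat \
    --type merge -p '{"spec":{"channel":"4.2"}}'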
Upgrade the Cluster Logging Operator:
- From the web console, click Operator Management.
- Change the project to all projects.
- Click the Cluster Logging Operator.
- Click Subscription → Channel.
- In the Change Subscription Update Channel window, select 4.2 and click Save.
- Wait for a few seconds, then click Operators → Installed Operators. The Cluster Logging Operator is shown as 4.2. For example:
Cluster Logging 4.2.0-201909201915 provided by Red Hat, Inc
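You can also confirm both Operator versions from the CLI by listing the installed ClusterServiceVersions; a sketch, assuming the namespaces used throughout this procedure:

$ oc get csv -n openshift-operators-redhat   # Elasticsearch Operator
$ oc get csv -n openshift-logging            # Cluster Logging Operator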
Check the logging components:
Ensure that the Elasticsearch Pods are using a 4.2 image:
$ oc get pod -o yaml -n openshift-logging --selector component=elasticsearch |grep 'image:'
image: registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-oauth-proxy:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-oauth-proxy:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-oauth-proxy:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-oauth-proxy:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-oauth-proxy:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-oauth-proxy:v4.2.0-201909201915
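Because each Pod reports the same pair of images, piping the output through sort and uniq makes a stray old image easier to spot; a sketch, with output along these lines:

$ oc get pod -o yaml -n openshift-logging --selector component=elasticsearch \
    | grep 'image:' | sort | uniq -c
   6 image: registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.2.0-201909201915
   6 image: registry.redhat.io/openshift4/ose-oauth-proxy:v4.2.0-201909201915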
Ensure that all Elasticsearch Pods are in the Ready status:
$ oc get pod -n openshift-logging --selector component=elasticsearch

NAME                                            READY   STATUS    RESTARTS   AGE
elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk   2/2     Running   0          31m
elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk   2/2     Running   0          30m
elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc     2/2     Running   0          29m
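Instead of polling by hand, oc wait can block until the Pods report Ready; a sketch:

$ oc wait pod --selector component=elasticsearch \
    --for=condition=Ready -n openshift-logging --timeout=300s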
Ensure that the Elasticsearch cluster is healthy:
$ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- es_cluster_health

{
  "cluster_name" : "elasticsearch",
  "status" : "green",
....
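To check every Elasticsearch Pod rather than a single one, you can loop over the Pods by selector; a sketch using the same es_cluster_health helper:

$ for pod in $(oc get pods -n openshift-logging --selector component=elasticsearch \
      -o jsonpath='{.items[*].metadata.name}'); do
    echo $pod
    oc exec -n openshift-logging -c elasticsearch $pod -- es_cluster_health | grep '"status"'
  done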
Ensure that the logging collector Pods are using a 4.2 image:
$ oc get pod -n openshift-logging --selector logging-infra=fluentd -o yaml |grep 'image:'
image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2.0-201909201915
Ensure that the Kibana Pods are using a 4.2 image:
$ oc get pod -n openshift-logging --selector logging-infra=kibana -o yaml |grep 'image:'
image: registry.redhat.io/openshift4/ose-logging-kibana5:v4.2.0-201909210748
image: registry.redhat.io/openshift4/ose-oauth-proxy:v4.2.0-201909201915
image: registry.redhat.io/openshift4/ose-logging-kibana5:v4.2.0-201909210748
image: registry.redhat.io/openshift4/ose-oauth-proxy:v4.2.0-201909201915
Ensure that the Curator CronJob is using a 4.2 image:
$ oc get CronJob curator -n openshift-logging -o yaml |grep 'image:'
image: registry.redhat.io/openshift4/ose-logging-curator5:v4.2.0-201909201915
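As a final sweep, a single jsonpath query can list every container image in the openshift-logging project, so any component still running a 4.1 image stands out; a sketch:

$ oc get pods -n openshift-logging \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'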