Chapter 6. Updating Logging
There are two types of logging updates: minor release updates (5.y.z) and major release updates (5.y).
6.1. Minor release updates
If you installed the logging Operators using the Automatic update approval option, your Operators receive minor version updates automatically. You do not need to complete any manual update steps.
If you installed the logging Operators using the Manual update approval option, you must manually approve minor version updates. For more information, see Manually approving a pending Operator update.
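If you prefer to approve a pending update from the command line instead of the web console, you can patch the InstallPlan that OLM creates for the update. The following is a minimal sketch, assuming the logging Operators are installed in the openshift-logging namespace; substitute the InstallPlan name reported by the first command:
$ oc -n openshift-logging get installplan
$ oc -n openshift-logging patch installplan <install_plan_name> --type merge --patch '{"spec":{"approved":true}}'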
6.2. Major release updates
For major release updates, you must complete some manual steps.
For major release version compatibility and support information, see OpenShift Operator Life Cycles.
6.3. Upgrading the Red Hat OpenShift Logging Operator to watch all namespaces
In logging 5.7 and older versions, the Red Hat OpenShift Logging Operator only watches the openshift-logging namespace. If you want the Red Hat OpenShift Logging Operator to watch all namespaces on your cluster, you must redeploy the Operator. You can complete the following procedure to redeploy the Operator without deleting your logging components.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have administrator permissions.
Procedure
- Delete the subscription by running the following command:
$ oc -n openshift-logging delete subscription <subscription>
- Delete the Operator group by running the following command:
$ oc -n openshift-logging delete operatorgroup <operator_group_name>
- Delete the cluster service version (CSV) by running the following command:
$ oc delete clusterserviceversion cluster-logging.<version>
- Redeploy the Red Hat OpenShift Logging Operator by following the "Installing Logging" documentation.
Verification
- Check that the targetNamespaces field in the OperatorGroup resource is not present or is set to an empty string. To do this, run the following command and inspect the output:
$ oc get operatorgroup <operator_group_name> -o yaml
Example output
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-logging-f52cn
  namespace: openshift-logging
spec:
  upgradeStrategy: Default
status:
  namespaces:
  - ""
# ...
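Alternatively, you can print only the targetNamespaces field with a JSONPath query. This is a sketch that assumes the Operator group is in the openshift-logging namespace; the command is expected to print nothing or an empty list:
$ oc -n openshift-logging get operatorgroup <operator_group_name> -o jsonpath='{.spec.targetNamespaces}'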
6.4. Updating the Red Hat OpenShift Logging Operator
To update the Red Hat OpenShift Logging Operator to a new major release version, you must modify the update channel for the Operator subscription.
Prerequisites
- You have installed the Red Hat OpenShift Logging Operator.
- You have administrator permissions.
- You have access to the Red Hat OpenShift Service on AWS web console and are viewing the Administrator perspective.
Procedure
- Navigate to Operators → Installed Operators.
- Select the openshift-logging project.
- Click the Red Hat OpenShift Logging Operator.
- Click Subscription. In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.9, depending on your current update channel.
- In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.9, and click Save. Note the cluster-logging.v5.9.<z> version.
- Wait for a few seconds, and then go to Operators → Installed Operators to verify that the Red Hat OpenShift Logging Operator version matches the latest cluster-logging.v5.9.<z> version.
- On the Operators → Installed Operators page, wait for the Status field to report Succeeded.
- Check if the LokiStack custom resource contains the v13 schema version and add it if it is missing. For correctly adding the v13 schema version, see "Upgrading the LokiStack storage schema".
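If you manage subscriptions from the command line, you can make the same channel change by patching the Subscription resource. This is a sketch only; it assumes the subscription is named cluster-logging, which you can confirm with the first command:
$ oc -n openshift-logging get subscriptions.operators.coreos.com
$ oc -n openshift-logging patch subscriptions.operators.coreos.com cluster-logging --type merge --patch '{"spec":{"channel":"stable-5.9"}}'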
6.5. Updating the Loki Operator
To update the Loki Operator to a new major release version, you must modify the update channel for the Operator subscription.
Prerequisites
- You have installed the Loki Operator.
- You have administrator permissions.
- You have access to the Red Hat OpenShift Service on AWS web console and are viewing the Administrator perspective.
Procedure
- Navigate to Operators → Installed Operators.
- Select the openshift-operators-redhat project.
- Click the Loki Operator.
- Click Subscription. In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.y, depending on your current update channel.
- In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.y, and click Save. Note the loki-operator.v5.y.z version.
- Wait for a few seconds, then click Operators → Installed Operators. Verify that the Loki Operator version matches the latest loki-operator.v5.y.z version.
- On the Operators → Installed Operators page, wait for the Status field to report Succeeded.
- Check if the LokiStack custom resource contains the v13 schema version and add it if it is missing. For correctly adding the v13 schema version, see "Upgrading the LokiStack storage schema".
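You can also check the Operator status from the command line rather than the console. A minimal sketch, assuming the Loki Operator is installed in the openshift-operators-redhat namespace; the loki-operator.v5.y.z entry should report the Succeeded phase:
$ oc -n openshift-operators-redhat get csv -o custom-columns=NAME:.metadata.name,VERSION:.spec.version,PHASE:.status.phase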
6.6. Upgrading the LokiStack storage schema
If you are using the Red Hat OpenShift Logging Operator with the Loki Operator, the Red Hat OpenShift Logging Operator 5.9 or later supports the v13 schema version in the LokiStack custom resource. Upgrading to the v13 schema version is recommended because it is the schema version to be supported going forward.
Procedure
- Add the v13 schema version in the LokiStack custom resource as follows:
apiVersion: loki.grafana.com/v1
kind: LokiStack
# ...
spec:
# ...
  storage:
    schemas:
    # ...
      version: v12 1
    - effectiveDate: "<yyyy>-<mm>-<future_dd>" 2
      version: v13
# ...
1 The existing schema entry, shown here as v12.
2 Add a new schema entry for v13 with an effectiveDate set to a future date, so that the new schema takes effect on that date.
Tip: To edit the LokiStack custom resource, you can run the oc edit command:
$ oc edit lokistack <name> -n openshift-logging
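Before and after the edit, you can print the configured schema entries to confirm that the v13 entry is present. A sketch, assuming the LokiStack custom resource is in the openshift-logging namespace:
$ oc -n openshift-logging get lokistack <name> -o jsonpath='{.spec.storage.schemas}'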
Verification
- On or after the specified effectiveDate, check that there is no LokistackSchemaUpgradesRequired alert in the web console under Observe → Alerting in the Administrator perspective.
6.7. Updating the OpenShift Elasticsearch Operator
To update the OpenShift Elasticsearch Operator to the current version, you must modify the subscription.
The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators.
Prerequisites
- If you are using Elasticsearch as the default log store, and Kibana as the UI, update the OpenShift Elasticsearch Operator before you update the Red Hat OpenShift Logging Operator.
Important: If you update the Operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To fix this issue, delete the Red Hat OpenShift Logging Operator pod. When the Red Hat OpenShift Logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again. An example command for this fix is shown after the prerequisites.
- The Logging status is healthy:
  - All pods have a ready status.
  - The Elasticsearch cluster is healthy.
- Your Elasticsearch and Kibana data is backed up.
- You have administrator permissions.
- You have installed the OpenShift CLI (oc) for the verification steps.
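The following is a sketch of the Kibana fix described in the Important note above. It assumes the Red Hat OpenShift Logging Operator pod carries the name=cluster-logging-operator label in the openshift-logging namespace; deleting the pod causes its Deployment to recreate it, and the new pod creates the Kibana CR:
$ oc -n openshift-logging delete pod -l name=cluster-logging-operator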
Procedure
- In the Red Hat Hybrid Cloud Console, click Operators → Installed Operators.
- Select the openshift-operators-redhat project.
- Click OpenShift Elasticsearch Operator.
- Click Subscription → Channel.
- In the Change Subscription Update Channel window, select stable-5.y and click Save. Note the elasticsearch-operator.v5.y.z version.
- Wait for a few seconds, then click Operators → Installed Operators. Verify that the OpenShift Elasticsearch Operator version matches the latest elasticsearch-operator.v5.y.z version.
- On the Operators → Installed Operators page, wait for the Status field to report Succeeded.
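As with the other Operators, you can confirm the update from the command line. A sketch, assuming the Operator Deployment is named elasticsearch-operator and runs in the openshift-operators-redhat namespace:
$ oc -n openshift-operators-redhat rollout status deployment/elasticsearch-operator
$ oc -n openshift-operators-redhat get csv | grep elasticsearch-operator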
Verification
Verify that all Elasticsearch pods have a Ready status by entering the following command and observing the output:
$ oc get pod -n openshift-logging --selector component=elasticsearch
Example output
NAME                                            READY   STATUS    RESTARTS   AGE
elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk   2/2     Running   0          31m
elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk   2/2     Running   0          30m
elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc     2/2     Running   0          29m
Verify that the Elasticsearch cluster status is green by entering the following command and observing the output:
$ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health
Example output
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
}
Verify that the Elasticsearch cron jobs are created by entering the following commands and observing the output:
$ oc project openshift-logging
$ oc get cronjob
Example output
NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
elasticsearch-im-app     */15 * * * *   False     0        <none>          56s
elasticsearch-im-audit   */15 * * * *   False     0        <none>          56s
elasticsearch-im-infra   */15 * * * *   False     0        <none>          56s
Verify that the log store is updated to the correct version and the indices are green by entering the following command and observing the output:
$ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices
Verify that the output includes the app-00000x, infra-00000x, audit-00000x, and .security indices:
Example 6.1. Sample output with indices in a green status
Tue Jun 30 14:30:54 UTC 2020
health status index                       uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   infra-000008                bnBvUFEXTWi92z3zWAzieQ 3   1   222195     0            289        144
green  open   infra-000004                rtDSzoqsSl6saisSK7Au1Q 3   1   226717     0            297        148
green  open   infra-000012                RSf_kUwDSR2xEuKRZMPqZQ 3   1   227623     0            295        147
green  open   .kibana_7                   1SJdCqlZTPWlIAaOUd78yg 1   1   4          0            0          0
green  open   infra-000010                iXwL3bnqTuGEABbUDa6OVw 3   1   248368     0            317        158
green  open   infra-000009                YN9EsULWSNaxWeeNvOs0RA 3   1   258799     0            337        168
green  open   infra-000014                YP0U6R7FQ_GVQVQZ6Yh9Ig 3   1   223788     0            292        146
green  open   infra-000015                JRBbAbEmSMqK5X40df9HbQ 3   1   224371     0            291        145
green  open   .orphaned.2020.06.30        n_xQC2dWQzConkvQqei3YA 3   1   9          0            0          0
green  open   infra-000007                llkkAVSzSOmosWTSAJM_hg 3   1   228584     0            296        148
green  open   infra-000005                d9BoGQdiQASsS3BBFm2iRA 3   1   227987     0            297        148
green  open   infra-000003                1-goREK1QUKlQPAIVkWVaQ 3   1   226719     0            295        147
green  open   .security                   zeT65uOuRTKZMjg_bbUc1g 1   1   5          0            0          0
green  open   .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3   1   1          0            0          0
green  open   infra-000006                5H-KBSXGQKiO7hdapDE23g 3   1   226676     0            295        147
green  open   infra-000001                eH53BQ-bSxSWR5xYZB6lVg 3   1   341800     0            443        220
green  open   .kibana-6                   RVp7TemSSemGJcsSUmuf3A 1   1   4          0            0          0
green  open   infra-000011                J7XWBauWSTe0jnzX02fU6A 3   1   226100     0            293        146
green  open   app-000001                  axSAFfONQDmKwatkjPXdtw 3   1   103186     0            126        57
green  open   infra-000016                m9c1iRLtStWSF1GopaRyCg 3   1   13685      0            19         9
green  open   infra-000002                Hz6WvINtTvKcQzw-ewmbYg 3   1   228994     0            296        148
green  open   infra-000013                KR9mMFUpQl-jraYtanyIGw 3   1   228166     0            298        148
green  open   audit-000001                eERqLdLmQOiQDFES1LBATQ 3   1   0          0            0          0
Verify that the log visualizer is updated to the correct version by entering the following command and observing the output:
$ oc get kibana kibana -o json
Verify that the output includes a Kibana pod with the ready status:
Example 6.2. Sample output with a ready Kibana pod
[
  {
    "clusterCondition": {
      "kibana-5fdd766ffd-nb2jj": [
        {
          "lastTransitionTime": "2020-06-30T14:11:07Z",
          "reason": "ContainerCreating",
          "status": "True",
          "type": ""
        },
        {
          "lastTransitionTime": "2020-06-30T14:11:07Z",
          "reason": "ContainerCreating",
          "status": "True",
          "type": ""
        }
      ]
    },
    "deployment": "kibana",
    "pods": {
      "failed": [],
      "notReady": [],
      "ready": []
    },
    "replicaSets": [
      "kibana-5fdd766ffd"
    ],
    "replicas": 1
  }
]
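As a quicker check, you can also list the Kibana pods directly. This sketch assumes the Kibana pods carry the component=kibana label in the openshift-logging namespace; the pods should report a Running status with all containers ready:
$ oc -n openshift-logging get pods -l component=kibana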