2.6. Upgrading optional components
If you installed an EFK logging stack or cluster metrics, you must upgrade those components separately.
2.6.1. Upgrading the EFK Logging Stack
To upgrade an existing EFK logging stack deployment, you review your parameters and run the openshift-logging/config.yml playbook.
The EFK upgrade also upgrades Elasticsearch from version 2 to version 5. For important information on changes in Elasticsearch 5, you should review the Elasticsearch breaking changes.
It is important to note that Elasticsearch 5 has some significant changes to the index structures. Previously, Elasticsearch permitted a dot character, ., in field names. In version 5, Elasticsearch interprets any dot in a field name as a nested structure. If you have a field with a dot, the string after the dot is interpreted as the type of the field, leading to mapping conflicts during the upgrade.
To help identify potential conflicts, OpenShift Container Platform provides a script that examines your Elasticsearch fields to determine if any fields contain a dot in the name.
For example, the following fields were allowed in Elasticsearch 2:
{ "field": 123 // "field" is of type number } // Any dot in field name is treated as any other valid character in the field name. // It is just part of the field name. { "field.name": "Bob" // "field.name" is of type String
In Elasticsearch 5 and higher, the field string would become the field and the name string would become a type for the field:
{ "field": 123 // "field" is of type number } // Any dot in field name is always interpreted as nested structure. { "field" : { // "field" is of type Object "name": "Bob" // "name" is of type String } }
Upgrading in this case would result in the same field, field, having two different types, which is not permitted.
If you need to keep these conflicting indices, you need to reindex the data and change the documents to remove the conflicting data structure. For more information, see Upgrading fields with dots to 5.x.
2.6.1.1. Determining if fields have dots in field names
You can run the following script to determine if your indices contain any fields with a dot in the name.
The following command uses the jq JSON processor to get directly at the necessary data. Depending on the version, Red Hat Enterprise Linux (RHEL) might not provide a package for jq. You might need to install it from external or unsupported sources.
oc exec -c elasticsearch -n $LOGGING_NS $pod -- es_util --query='_mapping?pretty&filter_path=**.mappings.*.properties' \
  | jq '.[].mappings[].properties | keys' \
  | jq .[] \
  | egrep -e "\."
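The command expects LOGGING_NS to name the logging project and pod to name an Elasticsearch pod. The following is a minimal sketch of setting both, assuming the default openshift-logging project and the component=es label used by the logging stack; adjust either value to match your deployment.

# Assumptions: default logging project and component=es label on Elasticsearch pods.
LOGGING_NS=openshift-logging
pod=$(oc get pods -n $LOGGING_NS -l component=es -o jsonpath='{.items[0].metadata.name}')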
The upgrade path depends on whether the indices have fields with dots or do not have fields with dots.
2.6.1.2. Upgrading if fields have dots in field names
If the script above indicates your indices contain fields with a dot in the name, use the following steps to correct this issue and upgrade.
To upgrade your EFK stack:
Review how to specify logging Ansible variables and update your Ansible inventory file to at least set the following required variable in the [OSEv3:vars] section:

[OSEv3:vars]
openshift_logging_install_logging=true 1

1 Enables the ability to upgrade the logging stack.
Update any other openshift_logging_* variables that you want to override the default values for, as described in Specifying Logging Ansible Variables. You can set the openshift_logging_elasticsearch_replace_configmap parameter to true to replace your logging-elasticsearch ConfigMap with the current default values. In some cases, using an older ConfigMap can cause the upgrade to fail. The default is set to false. For more information, see the parameter in Specifying Logging Ansible Variables.

Deschedule your Fluentd pods to stop data ingestion and ensure the cluster state does not change.
For example, you can change the node selector in Fluentd pods to one that does not match any nodes.
oc patch daemonset logging-fluentd -p '{"spec": {"template": {"spec": {"nodeSelector": {"non-existing": "true"}}}}}'
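After patching the DaemonSet, you can confirm that no Fluentd pods remain scheduled. A hedged sketch, assuming the default openshift-logging project and the component=fluentd label on Fluentd pods:

# Assumption: Fluentd pods carry the component=fluentd label; adjust to your deployment.
oc get pods -n openshift-logging -l component=fluentd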
- Perform an Elasticsearch index flush on all relevant indices. The flush process persists all logs from memory to disk, which prevents log loss when Elasticsearch is shut down during the upgrade.
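This guide does not show the flush command itself. The following is a minimal sketch of flushing all indices through the es_util helper used above, assuming that helper passes additional options (such as -XPOST) through to curl:

# Assumption: es_util forwards extra arguments to curl; verify this in your environment.
oc exec -c elasticsearch -n $LOGGING_NS $pod -- es_util --query='_flush?pretty' -XPOST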
Perform an online or offline backup:
- Perform an online backup of specific Elasticsearch indices or the entire cluster.
Perform an offline backup:
Scale down all Elasticsearch DeploymentConfigs to 0:

$ oc scale dc <name> -n openshift-logging --replicas=0
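To find every Elasticsearch DeploymentConfig to scale, you can list them by label. A hedged sketch, assuming the component=es label used by the logging stack:

# Assumption: Elasticsearch DeploymentConfigs carry the component=es label; adjust as needed.
for dc in $(oc get dc -n openshift-logging -l component=es -o name); do
  oc scale $dc -n openshift-logging --replicas=0
done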
- Back up external persistent volumes using the appropriate method for your organization.
For any field name with a dot character, you need to take one of the following actions before upgrading:
- Delete the indices. This is the better approach to avoid mapping conflicts during the upgrade.
- Reindex the data and change the documents to remove the conflicting data structure. This method retains the data. For information on potential mapping conflicts, see Mapping changes in the Elasticsearch documentation.
- Repeat the online or offline backup.
- Run the openshift-logging/config.yml playbook according to the deploying the EFK stack instructions to complete the logging upgrade. You run the installation playbook for the new OpenShift Container Platform version to upgrade the logging deployment.
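A minimal sketch of the playbook invocation, assuming the openshift-ansible playbooks are installed under /usr/share/ansible/openshift-ansible and substituting the path to your own inventory file:

$ ansible-playbook -i </path/to/inventory> \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml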
2.6.1.3. Upgrading if fields do not have dots
If the script above indicates your indices do not contain fields with a dot in the name, use the following steps to upgrade.
Optionally, deschedule your Fluentd pods and scale down your Elasticsearch pods to stop data ingestion and ensure the cluster state does not change.
For example, you can change the node selector in Fluentd pods to one that does not match any nodes.
oc patch daemonset logging-fluentd -p '{"spec": {"template": {"spec": {"nodeSelector": {"non-existing": "true"}}}}}'
Optionally, perform an online or offline backup:
- Perform an online backup of specific Elasticsearch indices or the entire cluster.
Perform an offline backup:
Scale down all Elasticsearch DeploymentConfigs to 0:

$ oc scale dc <name> -n openshift-logging --replicas=0
- Back up external persistent volumes using the appropriate method for your organization.
- Run the openshift-logging/config.yml playbook according to the deploying the EFK stack instructions to complete the logging upgrade. You run the installation playbook for the new OpenShift Container Platform version to upgrade the logging deployment.
- Optionally, use the Elasticsearch restore module to restore your Elasticsearch indices from the snapshot.
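If you took an online snapshot, the restore is issued against the Elasticsearch snapshot API. A hedged sketch, using a hypothetical repository named logging_backup and snapshot named snapshot_1, and again assuming es_util forwards extra curl options:

# Hypothetical repository and snapshot names; replace them with the names used during your backup.
oc exec -c elasticsearch -n $LOGGING_NS $pod -- es_util --query='_snapshot/logging_backup/snapshot_1/_restore' -XPOST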
2.6.2. Upgrading cluster metrics
To upgrade an existing cluster metrics deployment, you review your parameters and run the openshift-metrics/config.yml playbook.
Review how to specify metrics Ansible variables and update your Ansible inventory file to at least set the following required variables in the [OSEv3:vars] section:

[OSEv3:vars]
openshift_metrics_install_metrics=true 1
openshift_metrics_hawkular_hostname=<fqdn> 2
openshift_metrics_cassandra_storage_type=(emptydir|pv|dynamic) 3
- Update any other openshift_metrics_* variables that you want to override the default values for, as described in Specifying Metrics Ansible Variables.
- Run the openshift-metrics/config.yml playbook according to the deploying the metrics deployment instructions to complete the metrics upgrade. You run the installation playbook for the new OpenShift Container Platform version to upgrade the metrics deployment.
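A minimal sketch of the playbook invocation, again assuming the openshift-ansible playbooks are installed under /usr/share/ansible/openshift-ansible and substituting the path to your own inventory file:

$ ansible-playbook -i </path/to/inventory> \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml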