Updating OpenShift Data Foundation
Instructions for cluster and storage administrators regarding upgrading
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Do let us know how we can make it better.
To give feedback, create a Bugzilla ticket:
- Go to the Bugzilla website.
- In the Component section, choose documentation.
- Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation.
- Click Submit Bug.
Chapter 1. Overview of the OpenShift Data Foundation update process
This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments.
You can upgrade OpenShift Data Foundation and its components either between minor releases (such as 4.12 and 4.13) or between z-stream updates (such as 4.13.0 and 4.13.1), either by enabling automatic updates (if not done during operator installation) or by performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic.
You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments:
- Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform.
Update Red Hat OpenShift Data Foundation.
- To prepare a disconnected environment for updates, see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use.
- For updating between minor releases, see Updating Red Hat OpenShift Data Foundation 4.12 to 4.13.
- For updating between z-stream releases, see Updating Red Hat OpenShift Data Foundation 4.13.x to 4.13.y.
- For updating external mode deployments, you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret.
- If you use local storage, then update the Local Storage operator. See Checking for Local Storage Operator deployments if you are unsure.
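If you prefer the command line, a quick way to see whether the Local Storage Operator is in use is to look for its namespace and ClusterServiceVersion. This is a sketch; `openshift-local-storage` is the default namespace and may differ in your environment:

```shell
# Check whether the Local Storage Operator is installed.
# openshift-local-storage is the default namespace; adjust if yours differs.
oc get ns openshift-local-storage 2>/dev/null && \
  oc get csv -n openshift-local-storage | grep local-storage-operator
```

If the namespace does not exist or no CSV is listed, the Local Storage Operator is not deployed and does not need updating.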
If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure that you update all the clusters in the environment at the same time rather than updating a single cluster. This avoids potential issues and maintains the best compatibility. It is also important to maintain consistency across all OpenShift Data Foundation DR instances. After the upgrade, you must run step 1 of the workaround for BZ#2215462 as documented in the DR upgrade Known issues section of the Release notes.
Update considerations
Review the following important considerations before you begin.
Ensure that the Red Hat OpenShift Container Platform version is the same as the Red Hat OpenShift Data Foundation version.
See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation.
- To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode.
- The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version.
- The flexible scaling feature is available only in new deployments of OpenShift Data Foundation. For more information, see Scaling storage guide.
The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, applicative data residing on the Multicloud Object Gateway can be lost entirely. Because of this, Red Hat recommends taking regular backups of the NooBaa DB PVC. If the NooBaa DB fails and cannot be recovered, you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article.
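As a starting point for a backup routine, you can locate the NooBaa DB PVC from the command line. This is a sketch; the PVC name shown in the comment is the common default and may differ in your deployment, so verify it before scripting backups and follow the knowledgebase article for the actual backup procedure:

```shell
# Locate the NooBaa DB PVC in the openshift-storage namespace.
# The PVC is typically named db-noobaa-db-pg-0, but verify in your cluster.
oc get pvc -n openshift-storage | grep noobaa-db
```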
Chapter 2. OpenShift Data Foundation upgrade channels and releases
In OpenShift Container Platform 4.1, Red Hat introduced the concept of channels for recommending the appropriate release versions for cluster upgrades. By controlling the pace of upgrades, these upgrade channels allow you to choose an upgrade strategy. As OpenShift Data Foundation gets deployed as an operator in OpenShift Container Platform, it follows the same strategy to control the pace of upgrades by shipping the fixes in multiple channels. Upgrade channels are tied to a minor version of OpenShift Data Foundation.
For example, OpenShift Data Foundation 4.13 upgrade channels recommend upgrades within 4.13. Upgrades to future releases are not recommended. This strategy ensures that administrators can explicitly decide to upgrade to the next minor version of OpenShift Data Foundation.
Upgrade channels control only release selection and do not impact the version of the cluster that you install; the odf-operator decides the version of OpenShift Data Foundation to be installed. By default, it always installs the latest OpenShift Data Foundation release maintaining the compatibility with OpenShift Container Platform. So, on OpenShift Container Platform 4.13, OpenShift Data Foundation 4.13 will be the latest version which can be installed.
OpenShift Data Foundation upgrades are tied to the OpenShift Container Platform upgrade to ensure that compatibility and interoperability are maintained with the OpenShift Container Platform. For OpenShift Data Foundation 4.13, OpenShift Container Platform 4.13 and 4.14 (when generally available) are supported. OpenShift Container Platform 4.13 is supported to maintain forward compatibility of OpenShift Data Foundation with OpenShift Container Platform. Keep the OpenShift Data Foundation version the same as OpenShift Container Platform in order to get the benefit of all the features and enhancements in that release.
Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform 4.11 to 4.12 and then to 4.13; you cannot update from OpenShift Container Platform 4.11 to 4.13 directly. For more information, see Preparing to perform an EUS-to-EUS update of the Updating clusters guide in OpenShift Container Platform documentation.
OpenShift Data Foundation 4.13 offers the following upgrade channels:
- stable-4.13
- stable-4.12
stable-4.13 channel
Once a new version is generally available, the stable channel corresponding to the minor version is updated with the new image, which can be used to upgrade. You can use the stable-4.13 channel to upgrade from OpenShift Data Foundation 4.12 and to apply upgrades within 4.13.
stable-4.12 channel
You can use the stable-4.12 channel to upgrade from OpenShift Data Foundation 4.11 and to apply upgrades within 4.12.
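The channel a cluster currently follows can also be inspected or switched from the command line. A sketch, assuming the default subscription name `odf-operator` in the `openshift-storage` namespace:

```shell
# Show which upgrade channel the odf-operator subscription follows.
oc get subscription odf-operator -n openshift-storage \
  -o jsonpath='{.spec.channel}{"\n"}'

# Switch the subscription to the stable-4.13 channel.
oc patch subscription odf-operator -n openshift-storage \
  --type merge -p '{"spec":{"channel":"stable-4.13"}}'
```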
Chapter 3. Updating Red Hat OpenShift Data Foundation 4.12 to 4.13
This chapter helps you to upgrade between the minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not.
- For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Ceph Storage cluster.
For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately.
We recommend upgrading RHCS along with OpenShift Data Foundation to get new feature support, security fixes, and other bug fixes. Because there is no strong dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first and then upgrade RHCS, or vice versa. See the solution to learn more about Red Hat Ceph Storage releases.
Upgrading to 4.13 directly from any version older than 4.12 is unsupported.
OpenShift Data Foundation 4.13 clusters deployed with the Multus technology preview feature enabled fail to update CSI images during upgrade. Refer to the knowledgebase article for information on how to update the CSI images manually.
Prerequisites
- Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.13.X, see Updating Clusters.
Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient.
- Navigate to Storage → Data Foundation → Storage Systems tab and then click on the storage system name.
- Check for the green tick on the status card of both the Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are all healthy.
Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace.
To view the state of the pods, on the OpenShift Web Console, click Workloads → Pods. Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
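The same pod health check can be done from the command line. A sketch; every pod should report Running or Completed before you start the update:

```shell
# List any pods in openshift-storage that are NOT Running or Completed.
# No output here means all pods are healthy (grep -v then exits nonzero).
oc get pods -n openshift-storage --no-headers | grep -vE 'Running|Completed'
```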
- Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster.
Procedure
- On the OpenShift Web Console, navigate to Operators → Installed Operators.
- Select the openshift-storage project.
- Click the OpenShift Data Foundation operator name.
- Click the Subscription tab and click the link under Update Channel.
- Select the stable-4.13 update channel and Save it.
- If the Upgrade status shows requires approval, click on requires approval.
- On the Install Plan Details page, click Preview Install Plan.
- Review the install plan and click Approve.
- Wait for the Status to change from Unknown to Created.
- Navigate to Operators → Installed Operators.
- Select the openshift-storage project.
- Wait for the OpenShift Data Foundation Operator Status to change to Up to date.
- After the operator is successfully upgraded, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
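The approval step above can also be done from the CLI if you prefer. A sketch; the install plan name is cluster-specific and the `install-xxxxx` placeholder below is hypothetical:

```shell
# Find install plans that are waiting for approval.
oc get installplan -n openshift-storage

# Approve a specific pending install plan.
# Replace install-xxxxx with the actual name from the output above.
oc patch installplan install-xxxxx -n openshift-storage \
  --type merge -p '{"spec":{"approved":true}}'
```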
Verification steps
Check the Version below the OpenShift Data Foundation name and check the operator status.
- Navigate to Operators → Installed Operators and select the openshift-storage project.
- When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and the status changes to Succeeded with a green tick.
Verify that the OpenShift Data Foundation cluster is healthy and data is resilient.
- Navigate to Storage → Data Foundation → Storage Systems tab and then click the storage system name.
- Check for the green tick on the status card of both the Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are all healthy.
- If verification steps fail, contact Red Hat Support.
After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret.
Additional Resources
If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide.
Chapter 4. Updating Red Hat OpenShift Data Foundation 4.13.x to 4.13.y
This chapter helps you to upgrade between z-stream releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not.
- For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Ceph Storage cluster.
For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately.
Hence, we recommend upgrading RHCS along with OpenShift Data Foundation to get new feature support, security fixes, and other bug fixes. Because there is no strong dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first and then upgrade RHCS, or vice versa. See the solution to learn more about Red Hat Ceph Storage releases.
When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. If the update strategy is set to Manual then use the following procedure.
Prerequisites
- Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.13.X, see Updating Clusters.
Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient.
- Navigate to Storage → Data Foundation → Storage Systems tab and then click on the storage system name.
- Check for the green tick on the status card of the Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy.
Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace.
To view the state of the pods, on the OpenShift Web Console, click Workloads → Pods. Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
- Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster.
Procedure
- On the OpenShift Web Console, navigate to Operators → Installed Operators.
- Select the openshift-storage project.
  Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
- Click the OpenShift Data Foundation operator name.
- Click the Subscription tab.
- If the Upgrade Status shows requires approval, click the requires approval link.
- On the InstallPlan Details page, click Preview Install Plan.
- Review the install plan and click Approve.
- Wait for the Status to change from Unknown to Created.
- After the operator is successfully upgraded, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
Verification steps
Check the Version below the OpenShift Data Foundation name and check the operator status.
- Navigate to Operators → Installed Operators and select the openshift-storage project.
- When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and the status changes to Succeeded with a green tick.
Verify that the OpenShift Data Foundation cluster is healthy and data is resilient.
- Navigate to Storage → Data Foundation → Storage Systems tab and then click the storage system name.
- Check for the green tick on the status card of the Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy.
- If verification steps fail, contact Red Hat Support.
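The operator version and phase can also be verified from the command line. A sketch; after a successful update the CSV phase should read Succeeded:

```shell
# Show the OpenShift Data Foundation CSV, its version, and its phase.
oc get csv -n openshift-storage | grep odf-operator
```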
Chapter 5. Changing the update approval strategy
To ensure that the storage system updates automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic. Changing the update approval strategy to Manual requires manual approval for each upgrade.
Procedure
- Navigate to Operators → Installed Operators.
- Select openshift-storage from the Project drop-down list.
  Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
- Click the OpenShift Data Foundation operator name.
- Go to the Subscription tab.
- Click the pencil icon to change the Update approval.
- Select the update approval strategy and click Save.
Verification steps
- Verify that the Update approval shows the newly selected approval strategy below it.
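The strategy can also be checked from the command line, assuming the default subscription name odf-operator:

```shell
# Print the current approval strategy: Automatic or Manual.
oc get subscription odf-operator -n openshift-storage \
  -o jsonpath='{.spec.installPlanApproval}{"\n"}'
```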
Chapter 6. Updating the OpenShift Data Foundation external secret
Update the OpenShift Data Foundation external secret after updating to the latest version of OpenShift Data Foundation.
Updating the external secret is not required for batch updates. For example, when updating from OpenShift Data Foundation 4.13.x to 4.13.y.
Prerequisites
- Update the OpenShift Container Platform cluster to the latest stable release of 4.13.z, see Updating Clusters.
Ensure that the OpenShift Data Foundation cluster is healthy and the data is resilient. Navigate to Storage → Data Foundation → Storage Systems tab and then click on the storage system name.
- On the Overview - Block and File tab, check the Status card and confirm that the Storage cluster has a green tick indicating it is healthy.
- Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating that they are healthy. The RADOS Object Gateway is only listed if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode.
- Red Hat Ceph Storage must have a Ceph dashboard installed and configured.
Procedure
Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script.

```
# oc get csv $(oc get csv -n openshift-storage | grep ocs-operator | awk '{print $1}') -n openshift-storage -o jsonpath='{.metadata.annotations.external\.features\.ocs\.openshift\.io/export-script}' | base64 --decode > ceph-external-cluster-details-exporter.py
```

Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this.

```
# python3 ceph-external-cluster-details-exporter.py --upgrade
```

This updates the permission caps for the user that was configured during deployment.
Run the previously downloaded python script on the external Red Hat Ceph Storage cluster and save the JSON output that it generates.

```
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> --monitoring-endpoint <ceph mgr prometheus exporter endpoint> --monitoring-endpoint-port <ceph mgr prometheus exporter port> --rgw-endpoint <rgw endpoint> --run-as-user <ocs_client_name> [optional arguments]
```

--rbd-data-pool-name
  A mandatory parameter used for providing block storage in OpenShift Data Foundation.
--rgw-endpoint
  Optional. Provide this parameter if object storage is to be provisioned through Ceph Rados Gateway for OpenShift Data Foundation. Provide the endpoint in the format <ip_address>:<port>.
--monitoring-endpoint
  Optional. Accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.
--monitoring-endpoint-port
  Optional. The port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.
--run-as-user
  The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set.

Note: Ensure that all the parameters, including the optional arguments, except for monitoring-endpoint and monitoring-endpoint-port, are the same as what was used during the deployment of OpenShift Data Foundation in external mode.

Save the JSON output generated after running the script in the previous step.

Example output:

```
[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]
```
Upload the generated JSON file.
- Log in to the OpenShift Web Console.
- Click Workloads → Secrets.
- Set the project to openshift-storage.
- Click rook-ceph-external-cluster-details.
- Click Actions (⋮) → Edit Secret.
- Click Browse and upload the JSON file.
- Click Save.
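The same secret update can be done from the command line instead of the web console. This is a hedged sketch: the key name external_cluster_details and the file name output.json are assumptions based on how the Rook external-cluster secret is commonly structured, so confirm the actual key name in your cluster before patching:

```shell
# Confirm the secret's data key name first; it is assumed to be
# external_cluster_details here, but verify with:
#   oc get secret rook-ceph-external-cluster-details -n openshift-storage -o yaml
#
# Then replace the secret data with the newly generated JSON
# (output.json is a placeholder for the file you saved earlier).
oc set data secret/rook-ceph-external-cluster-details \
  -n openshift-storage \
  --from-file=external_cluster_details=output.json
```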
Verification steps
To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage → Data Foundation → Storage Systems tab and then click on the storage system name.
- On the Overview → Block and File tab, check the Details card to verify that the RHCS dashboard link is available and also check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy.
- Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating that they are healthy. The RADOS Object Gateway is only listed if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode.
- If verification steps fail, contact Red Hat Support.