Chapter 4. Uninstalling OpenShift Data Foundation
4.1. Uninstalling OpenShift Data Foundation in Internal-attached devices mode
Use the steps in this section to uninstall OpenShift Data Foundation.
Uninstall Annotations
Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster:
- uninstall.ocs.openshift.io/cleanup-policy: delete
- uninstall.ocs.openshift.io/mode: graceful
The following table provides information on the different values that can be used with these annotations:
| Annotation | Value | Default | Behavior |
|---|---|---|---|
| cleanup-policy | delete | Yes | Rook cleans up the physical drives and the DataDirHostPath |
| cleanup-policy | retain | No | Rook does not clean up the physical drives and the DataDirHostPath |
| mode | graceful | Yes | Rook and NooBaa pause the uninstall process until the administrator/user removes the Persistent Volume Claims (PVCs) and Object Bucket Claims (OBCs) |
| mode | forced | No | Rook and NooBaa proceed with the uninstall even if PVCs/OBCs provisioned using Rook and NooBaa respectively exist |
Edit the value of the annotation to change the cleanup policy or the uninstall mode.
$ oc -n openshift-storage annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/cleanup-policy="retain" --overwrite
$ oc -n openshift-storage annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/mode="forced" --overwrite
Expected output for both commands:
storagecluster.ocs.openshift.io/ocs-storagecluster annotated
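You can optionally confirm which values are currently set on the storage cluster before continuing. For example, the following read-only check prints the annotations:

$ oc -n openshift-storage get storagecluster ocs-storagecluster -o jsonpath='{.metadata.annotations}'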
Prerequisites
- Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation.
- Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Data Foundation.
- If any custom resources (such as custom storage classes, cephblockpools) were created by the admin, they must be deleted by the admin after removing the resources that consumed them.
Procedure
Delete the volume snapshots that are using OpenShift Data Foundation.
List the volume snapshots from all the namespaces.
$ oc get volumesnapshot --all-namespaces

From the output of the previous command, identify and delete the volume snapshots that are using OpenShift Data Foundation.

$ oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>

<VOLUME-SNAPSHOT-NAME> - Is the name of the volume snapshot
<NAMESPACE> - Is the project namespace
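To narrow the list to snapshots backed by OpenShift Data Foundation, you can filter on the volume snapshot class. The following is a minimal sketch that assumes the default snapshot class names created by OpenShift Data Foundation; adjust the names if custom snapshot classes were created:

$ oc get volumesnapshot --all-namespaces -o=custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLASS:.spec.volumeSnapshotClassName' | grep -e ocs-storagecluster-rbdplugin-snapclass -e ocs-storagecluster-cephfsplugin-snapclass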
Delete PVCs and OBCs that are using OpenShift Data Foundation.
In the default uninstall mode (graceful), the uninstaller waits until all the PVCs and OBCs that use OpenShift Data Foundation are deleted.

If you want to delete the Storage Cluster without deleting the PVCs, you can set the uninstall mode annotation to forced and skip this step. Doing so results in orphan PVCs and OBCs in the system.

Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Data Foundation.
See Removing monitoring stack from OpenShift Data Foundation.
Delete OpenShift Container Platform Registry PVCs using OpenShift Data Foundation.
See Removing OpenShift Container Platform registry from OpenShift Data Foundation.
Delete OpenShift Container Platform logging PVCs using OpenShift Data Foundation.
See Removing the cluster logging operator from OpenShift Data Foundation.
Delete the other PVCs and OBCs provisioned using OpenShift Data Foundation.
The following sample script identifies the PVCs and OBCs provisioned using OpenShift Data Foundation. The script ignores the PVCs that are used internally by OpenShift Data Foundation.
#!/bin/bash
RBD_PROVISIONER="openshift-storage.rbd.csi.ceph.com"
CEPHFS_PROVISIONER="openshift-storage.cephfs.csi.ceph.com"
NOOBAA_PROVISIONER="openshift-storage.noobaa.io/obc"
RGW_PROVISIONER="openshift-storage.ceph.rook.io/bucket"
NOOBAA_DB_PVC="noobaa-db"
NOOBAA_BACKINGSTORE_PVC="noobaa-default-backing-store-noobaa-pvc"

# Find all the OCS StorageClasses
OCS_STORAGECLASSES=$(oc get storageclasses | grep -e "$RBD_PROVISIONER" -e "$CEPHFS_PROVISIONER" -e "$NOOBAA_PROVISIONER" -e "$RGW_PROVISIONER" | awk '{print $1}')

# List PVCs in each of the StorageClasses
for SC in $OCS_STORAGECLASSES
do
        echo "======================================================================"
        echo "$SC StorageClass PVCs and OBCs"
        echo "======================================================================"
        oc get pvc --all-namespaces --no-headers 2>/dev/null | grep $SC | grep -v -e "$NOOBAA_DB_PVC" -e "$NOOBAA_BACKINGSTORE_PVC"
        oc get obc --all-namespaces --no-headers 2>/dev/null | grep $SC
        echo
done

Note: Omit RGW_PROVISIONER for cloud platforms.

Delete the OBCs.
$ oc delete obc <obc-name> -n <project-name>

<obc-name> - Is the name of the OBC
<project-name> - Is the name of the project
Delete the PVCs.
$ oc delete pvc <pvc-name> -n <project-name>

<pvc-name> - Is the name of the PVC
<project-name> - Is the name of the project
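If many PVCs in a project share the same OpenShift Data Foundation storage class, they can also be removed in one pass. This is an optional sketch, not part of the official procedure; it assumes the default ocs-storagecluster-ceph-rbd storage class name, so adjust the class and namespace to your environment and review the list before deleting:

$ oc get pvc -n <project-name> -o=custom-columns='NAME:.metadata.name,STORAGECLASS:.spec.storageClassName' --no-headers | awk '$2=="ocs-storagecluster-ceph-rbd" {print $1}' | xargs -r oc delete pvc -n <project-name>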
Note: Ensure that you have removed any custom backing stores, bucket classes, and so on, that were created in the cluster.
Delete the Storage System object and wait for the removal of the associated resources.
$ oc delete -n openshift-storage storagesystem --all --wait=true

Check the cleanup pods if the uninstall.ocs.openshift.io/cleanup-policy was set to delete (default) and ensure that their status is Completed.

$ oc get pods -n openshift-storage | grep -i cleanup

Example output:

NAME                       READY   STATUS      RESTARTS   AGE
cluster-cleanup-job-<xx>   0/1     Completed   0          8m35s
cluster-cleanup-job-<yy>   0/1     Completed   0          8m35s
cluster-cleanup-job-<zz>   0/1     Completed   0          8m35s

Confirm that the directory /var/lib/rook is now empty. This directory is empty only if the uninstall.ocs.openshift.io/cleanup-policy annotation was set to delete (default).

$ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host ls -l /var/lib/rook; done

If encryption was enabled at the time of install, remove the dm-crypt managed device-mapper mapping from the OSD devices on all the OpenShift Data Foundation nodes.

Create a debug pod and chroot to the host on the storage node.

$ oc debug node/<node-name>
$ chroot /host

<node-name> - Is the name of the node
Get the device names and make a note of the OpenShift Data Foundation devices.

$ dmsetup ls

Example output:

ocs-deviceset-0-data-0-57snx-block-dmcrypt (253:1)

Remove the mapped device.

$ cryptsetup luksClose --debug --verbose ocs-deviceset-0-data-0-57snx-block-dmcrypt

Important: If the above command gets stuck due to insufficient privileges, run the following commands:
- Press CTRL+Z to exit the above command.
- Find the PID of the process which was stuck.

  $ ps -ef | grep crypt

- Terminate the process using the kill command.

  $ kill -9 <PID>

  <PID> - Is the process ID
Verify that the device name is removed.
$ dmsetup ls
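If more than one OpenShift Data Foundation OSD device exists on the node, the remaining mappings can be closed in a loop from the same debug shell. This is a minimal sketch; the ocs-deviceset filter assumes the default device set naming:

sh-4.4# for m in $(dmsetup ls --target crypt | awk '{print $1}' | grep ocs-deviceset); do cryptsetup luksClose --debug --verbose "$m"; done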
Delete the namespace and wait until the deletion is complete. You need to switch to another project if openshift-storage is the active project.

For example:

$ oc project default
$ oc delete project openshift-storage --wait=true --timeout=5m

The project is deleted if the following command returns a NotFound error.

$ oc get project openshift-storage

Note: While uninstalling OpenShift Data Foundation, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated.

- Delete local storage operator configurations if you have deployed OpenShift Data Foundation using local storage devices. See Removing local storage operator configurations.
Unlabel the storage nodes.
$ oc label nodes --all cluster.ocs.openshift.io/openshift-storage-
$ oc label nodes --all topology.rook.io/rack-

Remove the OpenShift Data Foundation taint if the nodes were tainted.

$ oc adm taint nodes --all node.ocs.openshift.io/storage-

Confirm that all the Persistent Volumes (PVs) provisioned using OpenShift Data Foundation are deleted. If there is any PV left in the Released state, delete it.

$ oc get pv
$ oc delete pv <pv-name>

<pv-name> - Is the name of the PV
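To narrow the output to PVs that were provisioned through the OpenShift Data Foundation CSI drivers, an optional filter such as the following can help (non-CSI PVs show <none> in the DRIVER column):

$ oc get pv -o=custom-columns='NAME:.metadata.name,STATUS:.status.phase,DRIVER:.spec.csi.driver' | grep openshift-storage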
Remove the CustomResourceDefinitions.

$ oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m

To ensure that OpenShift Data Foundation is uninstalled completely, on the OpenShift Container Platform Web Console:
- Click Storage.
- Verify that OpenShift Data Foundation no longer appears under Storage.
4.1.1. Removing local storage operator configurations
Use the instructions in this section only if you have deployed OpenShift Data Foundation using local storage devices.
For OpenShift Data Foundation deployments only using localvolume resources, go directly to step 8.
Procedure
Identify the LocalVolumeSet and the corresponding StorageClassName being used by OpenShift Data Foundation.

$ oc get localvolumesets.local.storage.openshift.io -n openshift-local-storage

Set the variable SC to the StorageClass providing the LocalVolumeSet.

$ export SC="<StorageClassName>"

List and note the devices to be cleaned up later. In order to list the device IDs of the disks, follow the procedure in Find the available storage devices.
Example output:
/dev/disk/by-id/scsi-360050763808104bc28000000000000eb
/dev/disk/by-id/scsi-360050763808104bc28000000000000ef
/dev/disk/by-id/scsi-360050763808104bc28000000000000f3
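If the original device list is not available, the symlinks created by the LocalVolumeSet under /mnt/local-storage can also reveal the device IDs. This is an optional sketch that assumes the storage nodes still carry the openshift-storage label and that $SC was exported in the previous step:

$ [[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host ls -l /mnt/local-storage/${SC}/; done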
Delete the LocalVolumeSet.

$ oc delete localvolumesets.local.storage.openshift.io <name-of-volumeset> -n openshift-local-storage

Delete the local storage PVs for the given StorageClassName.

$ oc get pv | grep $SC | awk '{print $1}' | xargs oc delete pv

Delete the StorageClassName.

$ oc delete sc $SC

Delete the symlinks created by the LocalVolumeSet.

[[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done

Delete LocalVolumeDiscovery.

$ oc delete localvolumediscovery.local.storage.openshift.io/auto-discover-devices -n openshift-local-storage

Remove the LocalVolume resources (if any).

Use the following steps to remove the LocalVolume resources that were used to provision PVs in the current or previous OpenShift Data Foundation version. Also, ensure that these resources are not being used by other tenants on the cluster.

For each of the local volumes, do the following:
Identify the LocalVolume and the corresponding StorageClassName being used by OpenShift Data Foundation.

$ oc get localvolume.local.storage.openshift.io -n openshift-local-storage

Set the variable LV to the name of the LocalVolume and the variable SC to the name of the StorageClass.

For example:

$ LV=local-block
$ SC=localblock

List and note the devices to be cleaned up later.

$ oc get localvolume -n openshift-local-storage $LV -o jsonpath='{ .spec.storageClassDevices[].devicePaths[] }{"\n"}'

Example output:

/dev/sdb /dev/sdc /dev/sdd /dev/sde

Delete the local volume resource.

$ oc delete localvolume -n openshift-local-storage --wait=true $LV

Delete the remaining PVs and StorageClasses if they exist.

$ oc delete pv -l storage.openshift.com/local-volume-owner-name=${LV} --wait --timeout=5m
$ oc delete storageclass $SC --wait --timeout=5m

Clean up the artifacts from the storage nodes for that resource.

$ [[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done

Example output:

Starting pod/node-xxx-debug ...
To use host binaries, run `chroot /host`
removed '/mnt/local-storage/localblock/nvme2n1'
removed directory '/mnt/local-storage/localblock'
Removing debug pod ...
Starting pod/node-yyy-debug ...
To use host binaries, run `chroot /host`
removed '/mnt/local-storage/localblock/nvme2n1'
removed directory '/mnt/local-storage/localblock'
Removing debug pod ...
Starting pod/node-zzz-debug ...
To use host binaries, run `chroot /host`
removed '/mnt/local-storage/localblock/nvme2n1'
removed directory '/mnt/local-storage/localblock'
Removing debug pod ...
Wipe the disks for each of the local volumesets or local volumes listed in steps 1 and 8 respectively so that they can be reused.
List the storage nodes.
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=

Example output:

NAME       STATUS   ROLES    AGE     VERSION
node-xxx   Ready    worker   4h45m   v1.18.3+6c42de8
node-yyy   Ready    worker   4h46m   v1.18.3+6c42de8
node-zzz   Ready    worker   4h45m   v1.18.3+6c42de8

Obtain the node console and execute the chroot /host command when the prompt appears.

$ oc debug node/node-xxx
Starting pod/node-xxx-debug …
To use host binaries, run `chroot /host`
Pod IP: w.x.y.z
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host

Store the disk paths in the DISKS variable within quotes. For the list of disk paths, see step 3 and step 8.c for the local volumeset and local volume respectively.

Example output:

sh-4.4# DISKS="/dev/disk/by-id/scsi-360050763808104bc28000000000000eb /dev/disk/by-id/scsi-360050763808104bc28000000000000ef /dev/disk/by-id/scsi-360050763808104bc28000000000000f3"
or
sh-4.2# DISKS="/dev/sdb /dev/sdc /dev/sdd /dev/sde"

Run sgdisk --zap-all on all the disks.

sh-4.4# for disk in $DISKS; do sgdisk --zap-all $disk; done

Example output:

Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.

Exit the shell and repeat for the other nodes.

sh-4.4# exit
exit
sh-4.2# exit
exit
Removing debug pod ...
Delete the openshift-local-storage namespace and wait until the deletion is complete. You need to switch to another project if the openshift-local-storage namespace is the active project.

For example:

$ oc project default
$ oc delete project openshift-local-storage --wait=true --timeout=5m

The project is deleted if the following command returns a NotFound error.
$ oc get project openshift-local-storage
4.2. Removing monitoring stack from OpenShift Data Foundation
Use this section to clean up the monitoring stack from OpenShift Data Foundation.
The Persistent Volume Claims (PVCs) that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.
Prerequisites
- The OpenShift Container Platform monitoring stack is configured to use OpenShift Data Foundation PVCs.
For more information, see configuring monitoring stack.
Procedure
List the pods and PVCs that are currently running in the openshift-monitoring namespace.

$ oc get pod,pvc -n openshift-monitoring

Example output:

NAME                                               READY   STATUS    RESTARTS   AGE
pod/alertmanager-main-0                            3/3     Running   0          8d
pod/alertmanager-main-1                            3/3     Running   0          8d
pod/alertmanager-main-2                            3/3     Running   0          8d
pod/cluster-monitoring-operator-84457656d-pkrxm    1/1     Running   0          8d
pod/grafana-79ccf6689f-2ll28                       2/2     Running   0          8d
pod/kube-state-metrics-7d86fb966-rvd9w             3/3     Running   0          8d
pod/node-exporter-25894                            2/2     Running   0          8d
pod/node-exporter-4dsd7                            2/2     Running   0          8d
pod/node-exporter-6p4zc                            2/2     Running   0          8d
pod/node-exporter-jbjvg                            2/2     Running   0          8d
pod/node-exporter-jj4t5                            2/2     Running   0          6d18h
pod/node-exporter-k856s                            2/2     Running   0          6d18h
pod/node-exporter-rf8gn                            2/2     Running   0          8d
pod/node-exporter-rmb5m                            2/2     Running   0          6d18h
pod/node-exporter-zj7kx                            2/2     Running   0          8d
pod/openshift-state-metrics-59dbd4f654-4clng       3/3     Running   0          8d
pod/prometheus-adapter-5df5865596-k8dzn            1/1     Running   0          7d23h
pod/prometheus-adapter-5df5865596-n2gj9            1/1     Running   0          7d23h
pod/prometheus-k8s-0                               6/6     Running   1          8d
pod/prometheus-k8s-1                               6/6     Running   1          8d
pod/prometheus-operator-55cfb858c9-c4zd9           1/1     Running   0          6d21h
pod/telemeter-client-78fc8fc97d-2rgfp              3/3     Running   0          8d

NAME                                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0   Bound    pvc-0d519c4f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1   Bound    pvc-0d5a9825-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2   Bound    pvc-0d6413dc-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0        Bound    pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1        Bound    pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
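To identify which of these PVCs are backed by OpenShift Data Foundation, an optional filter on the storage class can help (the ocs-storagecluster prefix assumes the default storage class names):

$ oc get pvc -n openshift-monitoring -o=custom-columns='NAME:.metadata.name,STORAGECLASS:.spec.storageClassName' | grep ocs-storagecluster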
Edit the monitoring configmap.

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Remove any config sections that reference the OpenShift Data Foundation storage classes as shown in the following example and save it.

Before editing

. . .
apiVersion: v1
data:
  config.yaml: |
    alertmanagerMain:
      volumeClaimTemplate:
        metadata:
          name: my-alertmanager-claim
        spec:
          resources:
            requests:
              storage: 40Gi
          storageClassName: ocs-storagecluster-ceph-rbd
    prometheusK8s:
      volumeClaimTemplate:
        metadata:
          name: my-prometheus-claim
        spec:
          resources:
            requests:
              storage: 40Gi
          storageClassName: ocs-storagecluster-ceph-rbd
kind: ConfigMap
metadata:
  creationTimestamp: "2019-12-02T07:47:29Z"
  name: cluster-monitoring-config
  namespace: openshift-monitoring
  resourceVersion: "22110"
  selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
  uid: fd6d988b-14d7-11ea-84ff-066035b9efa8
. . .

After editing

. . .
apiVersion: v1
data:
  config.yaml: |
kind: ConfigMap
metadata:
  creationTimestamp: "2019-11-21T13:07:05Z"
  name: cluster-monitoring-config
  namespace: openshift-monitoring
  resourceVersion: "404352"
  selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
  uid: d12c796a-0c5f-11ea-9832-063cd735b81c
. . .

In this example, alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Data Foundation PVCs.

Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes.
$ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m

<pvc-name> - Is the name of the PVC
4.3. Removing OpenShift Container Platform registry from OpenShift Data Foundation
Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure an alternative storage, see Image registry.
The Persistent Volume Claims (PVCs) that are created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace.
Prerequisites
- The image registry must have been configured to use an OpenShift Data Foundation PVC.
Procedure
Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section.

$ oc edit configs.imageregistry.operator.openshift.io

Before editing

. . .
storage:
  pvc:
    claim: registry-cephfs-rwx-pvc
. . .

After editing

. . .
storage:
  emptyDir: {}
. . .

In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.

Delete the PVC.

$ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m

<pvc-name> - Is the name of the PVC
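As an alternative to editing the registry configuration object interactively in the earlier step, the same storage change can be applied with a merge patch. This is a minimal sketch, assuming the default cluster registry configuration object; setting pvc to null removes the claim reference while switching the registry to ephemeral emptyDir storage:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":null,"emptyDir":{}}}}'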
4.4. Removing the cluster logging operator from OpenShift Data Foundation
Use this section to clean up the cluster logging operator from OpenShift Data Foundation.
The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace.
Prerequisites
- The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs.
Procedure
Remove the ClusterLogging instance in the namespace.

$ oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m

The PVCs in the openshift-logging namespace are now safe to delete.

Delete the PVCs.

$ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m

<pvc-name> - Is the name of the PVC
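Optionally, confirm that no OpenShift Data Foundation backed PVCs remain in the namespace. The ocs-storagecluster filter below assumes the default OpenShift Data Foundation storage class names:

$ oc get pvc -n openshift-logging -o=custom-columns='NAME:.metadata.name,STORAGECLASS:.spec.storageClassName' | grep ocs-storagecluster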