OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Chapter 3. Uninstalling OpenShift Container Storage
3.1. Uninstalling OpenShift Container Storage on Internal mode
Use the steps in this section to uninstall OpenShift Container Storage instead of the Uninstall option from the user interface.
Prerequisites
- Make sure that the OpenShift Container Storage cluster is in a healthy state. The deletion might fail if some of the pods are not terminated successfully due to insufficient resources or nodes. If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Container Storage. A quick way to spot-check cluster health is sketched after this list.
- Make sure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Container Storage. PVCs and OBCs will be deleted during the uninstall process.
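The following is a minimal health spot-check, assuming an internal-mode deployment in the openshift-storage namespace; the exact status fields depend on your OpenShift Container Storage and Rook versions, so treat this as an informal sketch rather than an official health check.

$ oc get storagecluster -n openshift-storage

$ oc get cephcluster -n openshift-storage -o jsonpath='{.items[0].status.ceph.health}{"\n"}'

The StorageCluster should report a Ready phase and the Ceph health should be HEALTH_OK; anything else is a reason to involve Red Hat Customer Support first.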
Procedure
Query for PVCs and OBCs that use the OpenShift Container Storage based storage class provisioners.
For example:
$ oc get pvc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-ceph-rbd")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{" Labels: "}{@.metadata.labels}{"\n"}{end}' --all-namespaces|awk '! ( /Namespace: openshift-storage/ && /app:noobaa/ )' | grep -v noobaa-default-backing-store-noobaa-pvc

$ oc get pvc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-cephfs")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces

$ oc get obc -o=jsonpath='{range .items[?(@.spec.storageClassName=="openshift-storage.noobaa.io")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces

Follow these instructions to ensure that the PVCs and OBCs listed in the previous step are deleted.
If you have created PVCs as a part of configuring the monitoring stack, cluster logging operator, or image registry, then you must perform the clean up steps provided in the following sections as required:
- Section 3.2, “Removing monitoring stack from OpenShift Container Storage”
- Section 3.3, “Removing OpenShift Container Platform registry from OpenShift Container Storage”
- Section 3.4, “Removing the cluster logging operator from OpenShift Container Storage”

For each of the remaining PVCs or OBCs, follow these steps:
- Determine the pod that is consuming the PVC or OBC.
Identify the controlling API object such as a Deployment, StatefulSet, DaemonSet, Job, or a custom controller.

Each API object has a metadata field known as OwnerReference. This is a list of associated objects. The OwnerReference with the controller field set to true will point to controlling objects such as ReplicaSet, StatefulSet, DaemonSet, and so on.

Ensure that the API object is not consuming a PVC or OBC provided by OpenShift Container Storage. Either the object should be deleted or the storage should be replaced. Ask the owner of the project to make sure that it is safe to delete or modify the object.
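As a rough sketch (the pod and project names are placeholders), you can locate the pod that mounts a PVC and then inspect that pod's ownerReferences to find its controller; the Used By field is shown by recent oc client versions.

$ oc describe pvc <pvc name> -n <project name> | grep -i "used by"

$ oc get pod <pod name> -n <project name> -o jsonpath='{.metadata.ownerReferences}{"\n"}'

The entry with controller set to true identifies the immediate controlling object, for example a ReplicaSet, StatefulSet, DaemonSet, or Job; for a ReplicaSet, repeat the query on that object to reach the owning Deployment.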
Note: You can ignore the noobaa pods.

Delete the OBCs.
$ oc delete obc <obc name> -n <project name>

Delete any custom Bucket Class you have created.
$ oc get bucketclass -A | grep -v noobaa-default-bucket-class

$ oc delete bucketclass <bucketclass name> -n <project-name>

If you have created any custom Multi Cloud Gateway backingstores, delete them.
List and note the backingstores.
$ for bs in $(oc get backingstore -o name -n openshift-storage | grep -v noobaa-default-backing-store); do echo "Found backingstore $bs"; echo "It has the following pods running:"; echo "$(oc get pods -o name -n openshift-storage | grep $(echo ${bs} | cut -f2 -d/))"; done

Delete each of the backingstores listed above and confirm that the dependent resources also get deleted.
$ for bs in $(oc get backingstore -o name -n openshift-storage | grep -v noobaa-default-backing-store); do echo "Deleting Backingstore $bs"; oc delete -n openshift-storage $bs; done

If any of the backingstores listed above were based on the pv-pool, ensure that the corresponding pod and PVC are also deleted.
$ oc get pods -n openshift-storage | grep noobaa-pod | grep -v noobaa-default-backing-store-noobaa-pod

$ oc get pvc -n openshift-storage --no-headers | grep -v noobaa-db | grep noobaa-pvc | grep -v noobaa-default-backing-store-noobaa-pvc
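If the previous two queries return any leftover pv-pool resources, a minimal cleanup sketch (the names are placeholders taken from that output) is to delete the remaining pod and PVC explicitly:

$ oc delete pod <backingstore pod name> -n openshift-storage

$ oc delete pvc <backingstore pvc name> -n openshift-storage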
Delete the remaining PVCs listed in Step 1.
$ oc delete pvc <pvc name> -n <project-name>
Delete the StorageCluster object and wait for the removal of the associated resources.
$ oc delete -n openshift-storage storagecluster --all --wait=true
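To confirm that the associated resources are actually being removed, you can watch the openshift-storage namespace drain; this is only an informal check:

$ oc get storagecluster -n openshift-storage

$ oc get pods -n openshift-storage

Once the StorageCluster is gone, the first command reports that no resources are found, and the Ceph and NooBaa pods should be terminating or already removed.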
Delete the namespace and wait until the deletion is complete. You will need to switch to another project if openshift-storage is the active project.

Switch to another namespace if openshift-storage is the active namespace. For example:

$ oc project default

Delete the openshift-storage namespace.
$ oc delete project openshift-storage --wait=true --timeout=5m

Wait for approximately five minutes and confirm that the project is deleted successfully.
$ oc get project openshift-storage

Output:

Error from server (NotFound): namespaces "openshift-storage" not found

Note: While uninstalling OpenShift Container Storage, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in the article Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated.
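If the namespace does stay in the Terminating state, one common way to list every remaining namespaced object (an informal sketch, not a substitute for the referenced article) is:

$ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n openshift-storage

Any objects still listed, typically ones holding finalizers, are what is blocking the namespace deletion.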
Clean up the storage operator artifacts on each node.
$ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /var/lib/rook; done

Ensure you can see the removed directory /var/lib/rook in the output.

Confirm that the directory no longer exists.
$ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host ls -l /var/lib/rook; done
Delete the openshift-storage.noobaa.io storage class.

$ oc delete storageclass openshift-storage.noobaa.io --wait=true --timeout=5m

Unlabel the storage nodes.
$ oc label nodes --all cluster.ocs.openshift.io/openshift-storage-

$ oc label nodes --all topology.rook.io/rack-

Note: You can ignore the warnings displayed for the unlabeled nodes, such as label <label> not found.
Confirm all PVs are deleted. If there is any PV left in the Released state, delete it.
# oc get pv | egrep 'ocs-storagecluster-ceph-rbd|ocs-storagecluster-cephfs'
# oc delete pv <pv name>
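If you want to list only the PVs that are stuck in the Released state rather than scanning the full output, a jsonpath filter along these lines can help (a sketch only; the storage class column lets you confirm the PV belongs to OpenShift Container Storage before deleting it):

# oc get pv -o jsonpath='{range .items[?(@.status.phase=="Released")]}{.metadata.name}{" "}{.spec.storageClassName}{"\n"}{end}'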
Remove the CustomResourceDefinitions.

$ oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusterinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io --wait=true --timeout=5m

Note: Uninstalling OpenShift Container Storage clusters on Microsoft Azure deletes all the OpenShift Container Storage data stored on the target buckets. However, neither the target buckets created by the user nor the ones created automatically during the OpenShift Container Storage installation are deleted, and any data that does not belong to OpenShift Container Storage remains on these target buckets.
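As an optional sanity check, you can confirm that no OpenShift Container Storage related CRDs remain; the grep pattern below is an assumption based on the API groups listed in the previous command:

$ oc get crd | grep -E 'noobaa.io|ceph.rook.io|ocs.openshift.io'

The command should return no output once the CRDs have been removed.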
To ensure that OpenShift Container Storage is uninstalled completely, on the OpenShift Container Platform Web Console:

- Click Home → Overview to access the dashboard.
- Verify that the Persistent Storage and Object Service tabs no longer appear next to the Cluster tab.
3.2. Removing monitoring stack from OpenShift Container Storage
Use this section to clean up the monitoring stack from OpenShift Container Storage.

The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.
Prerequisites
- The OpenShift Container Platform monitoring stack is configured to use PVCs. For information, see configuring monitoring stack.
Procedure
List the pods and PVCs that are currently running in the openshift-monitoring namespace.

$ oc get pod,pvc -n openshift-monitoring

NAME                                               READY   STATUS    RESTARTS   AGE
pod/alertmanager-main-0                            3/3     Running   0          8d
pod/alertmanager-main-1                            3/3     Running   0          8d
pod/alertmanager-main-2                            3/3     Running   0          8d
pod/cluster-monitoring-operator-84457656d-pkrxm    1/1     Running   0          8d
pod/grafana-79ccf6689f-2ll28                       2/2     Running   0          8d
pod/kube-state-metrics-7d86fb966-rvd9w             3/3     Running   0          8d
pod/node-exporter-25894                            2/2     Running   0          8d
pod/node-exporter-4dsd7                            2/2     Running   0          8d
pod/node-exporter-6p4zc                            2/2     Running   0          8d
pod/node-exporter-jbjvg                            2/2     Running   0          8d
pod/node-exporter-jj4t5                            2/2     Running   0          6d18h
pod/node-exporter-k856s                            2/2     Running   0          6d18h
pod/node-exporter-rf8gn                            2/2     Running   0          8d
pod/node-exporter-rmb5m                            2/2     Running   0          6d18h
pod/node-exporter-zj7kx                            2/2     Running   0          8d
pod/openshift-state-metrics-59dbd4f654-4clng       3/3     Running   0          8d
pod/prometheus-adapter-5df5865596-k8dzn            1/1     Running   0          7d23h
pod/prometheus-adapter-5df5865596-n2gj9            1/1     Running   0          7d23h
pod/prometheus-k8s-0                               6/6     Running   1          8d
pod/prometheus-k8s-1                               6/6     Running   1          8d
pod/prometheus-operator-55cfb858c9-c4zd9           1/1     Running   0          6d21h
pod/telemeter-client-78fc8fc97d-2rgfp              3/3     Running   0          8d

NAME                                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0   Bound    pvc-0d519c4f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1   Bound    pvc-0d5a9825-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2   Bound    pvc-0d6413dc-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0        Bound    pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1        Bound    pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
Edit the monitoring configmap.
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Remove any config sections that reference the OpenShift Container Storage storage classes, as shown in the following example, and save it.

Before editing

. . .
apiVersion: v1
data:
  config.yaml: |
    alertmanagerMain:
      volumeClaimTemplate:
        metadata:
          name: my-alertmanager-claim
        spec:
          resources:
            requests:
              storage: 40Gi
          storageClassName: ocs-storagecluster-ceph-rbd
    prometheusK8s:
      volumeClaimTemplate:
        metadata:
          name: my-prometheus-claim
        spec:
          resources:
            requests:
              storage: 40Gi
          storageClassName: ocs-storagecluster-ceph-rbd
kind: ConfigMap
metadata:
  creationTimestamp: "2019-12-02T07:47:29Z"
  name: cluster-monitoring-config
  namespace: openshift-monitoring
  resourceVersion: "22110"
  selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
  uid: fd6d988b-14d7-11ea-84ff-066035b9efa8
. . .

After editing
. . .
apiVersion: v1
data:
  config.yaml: |
kind: ConfigMap
metadata:
  creationTimestamp: "2019-11-21T13:07:05Z"
  name: cluster-monitoring-config
  namespace: openshift-monitoring
  resourceVersion: "404352"
  selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
  uid: d12c796a-0c5f-11ea-9832-063cd735b81c
. . .

In this example, the alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Container Storage PVCs.

Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes.
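To double-check which PVCs in the openshift-monitoring namespace are backed by OpenShift Container Storage before deleting them, a query similar to the ones used in Section 3.1 can be reused (shown here for the ocs-storagecluster-ceph-rbd storage class; repeat with ocs-storagecluster-cephfs if you used it):

$ oc get pvc -n openshift-monitoring -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-ceph-rbd")]}{.metadata.name}{"\n"}{end}'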
$ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m
3.3. Removing OpenShift Container Platform registry from OpenShift Container Storage
Use this section to clean up the OpenShift Container Platform registry from OpenShift Container Storage. If you want to configure alternative storage, see image registry.

The PVCs that are created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace.
Prerequisites
- The image registry should have been configured to use an OpenShift Container Storage PVC.
Procedure
Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section.
$ oc edit configs.imageregistry.operator.openshift.io
Before editing

. . .
storage:
  pvc:
    claim: registry-cephfs-rwx-pvc
. . .

After editing
. . .
storage:
. . .

In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.

Delete the PVC.
$ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m
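To verify that the registry no longer references the deleted PVC, you can print the current storage section of the registry configuration; this is an informal check:

$ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.storage}{"\n"}'

The output should no longer contain a pvc entry.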
3.4. Removing the cluster logging operator from OpenShift Container Storage
Use this section to clean up the cluster logging operator from OpenShift Container Storage.
The PVCs that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace.
Prerequisites
- The cluster logging instance should have been configured to use OpenShift Container Storage PVCs.
Procedure
Remove the ClusterLogging instance in the namespace.
$ oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m
The PVCs in the openshift-logging namespace are now safe to delete.

Delete the PVCs.
$ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m
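As a final check, you can confirm that no PVCs remain in the openshift-logging namespace:

$ oc get pvc -n openshift-logging

The command should report that no resources are found once the cleanup is complete.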