OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Chapter 4. Uninstalling OpenShift Container Storage
4.1. Uninstalling OpenShift Container Storage on Internal mode
Use the steps in this section to uninstall OpenShift Container Storage instead of the Uninstall option from the user interface.
Prerequisites
- Make sure that the OpenShift Container Storage cluster is in a healthy state. The deletion might fail if some of the pods do not terminate successfully due to insufficient resources or nodes. If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Container Storage.
- Make sure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Container Storage. PVCs and OBCs will be deleted during the uninstall process.
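One way to check cluster health before proceeding is to query the CephCluster resource. This is a minimal sketch: the resource query and jsonpath are assumptions to verify against your deployment, and the captured value below is illustrative.

```shell
# Sketch of a pre-uninstall health check. The CephCluster query is an
# assumption -- verify the resource and field names on your cluster:
#   oc get cephcluster -n openshift-storage \
#     -o jsonpath='{.items[0].status.ceph.health}'
# A healthy cluster reports HEALTH_OK. The decision logic, shown on a
# captured value:
health="HEALTH_OK"   # substitute the jsonpath output above
if [ "$health" = "HEALTH_OK" ]; then
  echo "cluster healthy: safe to proceed"
else
  echo "cluster unhealthy: contact Red Hat Customer Support first"
fi
```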
Procedure
Query for PVCs and OBCs that use the OpenShift Container Storage based storage class provisioners.
For example:
$ oc get pvc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-ceph-rbd")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{" Labels: "}{@.metadata.labels}{"\n"}{end}' --all-namespaces|awk '! ( /Namespace: openshift-storage/ && /app:noobaa/ )' | grep -v noobaa-default-backing-store-noobaa-pvc
$ oc get pvc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-cephfs")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
$ oc get obc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-ceph-rgw")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
$ oc get obc -o=jsonpath='{range .items[?(@.spec.storageClassName=="openshift-storage.noobaa.io")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
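The four queries above can be sketched as one loop over the OpenShift Container Storage storage classes. The class names match the commands above; the oc call is left commented because it needs a live cluster.

```shell
# Sketch: iterate the OCS storage classes instead of running four separate
# queries. The storage-class names are taken from the commands above.
for sc in ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs ocs-storagecluster-ceph-rgw openshift-storage.noobaa.io; do
  echo "Checking claims on storage class: $sc"
  # On a live cluster (use pvc or obc as appropriate for the class):
  # oc get pvc --all-namespaces -o jsonpath="{range .items[?(@.spec.storageClassName==\"$sc\")]}{.metadata.namespace}/{.metadata.name}{\"\n\"}{end}"
done
```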
Follow these instructions to ensure that the PVCs and OBCs listed in the previous step are deleted.
If you have created PVCs as a part of configuring the monitoring stack, cluster logging operator, or image registry, then you must perform the clean up steps provided in the following sections as required:
- Section 4.2, “Removing monitoring stack from OpenShift Container Storage”
- Section 4.3, “Removing OpenShift Container Platform registry from OpenShift Container Storage”
- Section 4.4, “Removing the cluster logging operator from OpenShift Container Storage”
For each of the remaining PVCs or OBCs, follow the steps below:
- Determine the pod that is consuming the PVC or OBC.
Identify the controlling API object, such as a Deployment, StatefulSet, DaemonSet, Job, or a custom controller.
Each API object has a metadata field known as OwnerReference. This is a list of associated objects. The OwnerReference with the controller field set to true points to controlling objects such as ReplicaSet, StatefulSet, DaemonSet, and so on.
Ensure that the API object is not consuming a PVC or OBC provided by OpenShift Container Storage. Either the object should be deleted or the storage should be replaced. Ask the owner of the project to make sure that it is safe to delete or modify the object.
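Looking up the controlling owner can be scripted. The sketch below works from a captured ownerReferences value; the pod, namespace, and owner names are illustrative placeholders.

```shell
# Sketch: find the controlling owner of a pod that mounts a PVC. On a live
# cluster you would run (names are placeholders):
#   oc get pod <pod-name> -n <project> -o \
#     jsonpath='{range .metadata.ownerReferences[?(@.controller==true)]}{.kind}/{.name}{end}'
# which prints, for example:
owner="ReplicaSet/my-app-6d4cf56db6"   # illustrative captured value
# The kind before the slash tells you which controller object to inspect,
# delete, or repoint at different storage:
kind="${owner%%/*}"
echo "controlling object kind: $kind"
```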
Note: You can ignore the noobaa pods.
Delete the OBCs.
$ oc delete obc <obc name> -n <project name>
Delete any custom Bucket Class you have created.
$ oc get bucketclass -A | grep -v noobaa-default-bucket-class
$ oc delete bucketclass <bucketclass name> -n <project-name>
If you have created any custom Multi Cloud Gateway backingstores, delete them.
List and note the backingstores.
$ for bs in $(oc get backingstore -o name -n openshift-storage | grep -v noobaa-default-backing-store); do echo "Found backingstore $bs"; echo "It has the following pods running:"; echo "$(oc get pods -o name -n openshift-storage | grep $(echo ${bs} | cut -f2 -d/))"; done
Delete each of the backingstores listed above and confirm that the dependent resources also get deleted.
$ for bs in $(oc get backingstore -o name -n openshift-storage | grep -v noobaa-default-backing-store); do echo "Deleting Backingstore $bs"; oc delete -n openshift-storage $bs; done
If any of the backingstores listed above were based on the pv-pool, ensure that the corresponding pod and PVC are also deleted.
$ oc get pods -n openshift-storage | grep noobaa-pod | grep -v noobaa-default-backing-store-noobaa-pod
$ oc get pvc -n openshift-storage --no-headers | grep -v noobaa-db | grep noobaa-pvc | grep -v noobaa-default-backing-store-noobaa-pvc
Delete the remaining PVCs listed in Step 1.
$ oc delete pvc <pvc name> -n <project-name>
List and note the backing local volume objects. If there are no results, skip steps 7 and 8.
$ for sc in $(oc get storageclass|grep 'kubernetes.io/no-provisioner' |grep -E $(oc get storagecluster -n openshift-storage -o jsonpath='{ .items[*].spec.storageDeviceSets[*].dataPVCTemplate.spec.storageClassName}' | sed 's/ /|/g')| awk '{ print $1 }'); do echo -n "StorageClass: $sc "; oc get storageclass $sc -o jsonpath=" { 'LocalVolume: ' }{ .metadata.labels['local\.storage\.openshift\.io/owner-name'] } { '\n' }"; done
Example output:
StorageClass: localblock LocalVolume: local-block
Delete the StorageCluster object and wait for the removal of the associated resources.
$ oc delete -n openshift-storage storagecluster --all --wait=true
Delete the namespace and wait until the deletion is complete. Switch to another project if openshift-storage is the active project.
For example:
$ oc project default
Delete the openshift-storage namespace.
$ oc delete project openshift-storage --wait=true --timeout=5m
Wait for approximately five minutes and confirm that the project is deleted successfully.
$ oc get project openshift-storage
Output:
Error from server (NotFound): namespaces "openshift-storage" not found
Note: While uninstalling OpenShift Container Storage, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in the article Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated.
Clean up the storage operator artifacts on each node.
$ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /var/lib/rook; done
Ensure that you can see the removed directory /var/lib/rook in the output.
Confirm that the directory no longer exists.
$ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host ls -l /var/lib/rook; done
Delete the local volume created during the deployment and repeat for each of the local volumes listed in step 3.
For each of the local volumes, do the following:
Set the variable LV to the name of the LocalVolume and the variable SC to the name of the StorageClass listed in step 3.
For example:
$ LV=local-block
$ SC=localblock
List and note the devices to be cleaned up later.
$ oc get localvolume -n local-storage $LV -o jsonpath='{ .spec.storageClassDevices[*].devicePaths[*] }'
Example output:
/dev/disk/by-id/nvme-xxxxxx /dev/disk/by-id/nvme-yyyyyy /dev/disk/by-id/nvme-zzzzzz
Delete the local volume resource.
$ oc delete localvolume -n local-storage --wait=true $LV
Delete the remaining PVs and StorageClasses if they exist.
$ oc delete pv -l storage.openshift.com/local-volume-owner-name=${LV} --wait --timeout=5m
$ oc delete storageclass $SC --wait --timeout=5m
Clean up the artifacts from the storage nodes for that resource.
$ [[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done
Example output:
Starting pod/node-xxx-debug ...
To use host binaries, run `chroot /host`
removed '/mnt/local-storage/localblock/nvme2n1'
removed directory '/mnt/local-storage/localblock'
Removing debug pod ...
Starting pod/node-yyy-debug ...
To use host binaries, run `chroot /host`
removed '/mnt/local-storage/localblock/nvme2n1'
removed directory '/mnt/local-storage/localblock'
Removing debug pod ...
Starting pod/node-zzz-debug ...
To use host binaries, run `chroot /host`
removed '/mnt/local-storage/localblock/nvme2n1'
removed directory '/mnt/local-storage/localblock'
Removing debug pod ...
Wipe the disks for each of the local volumes listed in step 3 so that they can be reused.
List the storage nodes.
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
Example output:
NAME       STATUS   ROLES    AGE     VERSION
node-xxx   Ready    worker   4h45m   v1.18.3+6c42de8
node-yyy   Ready    worker   4h46m   v1.18.3+6c42de8
node-zzz   Ready    worker   4h45m   v1.18.3+6c42de8
Obtain the node console and run the chroot /host command when the prompt appears.
$ oc debug node/node-xxx
Starting pod/node-xxx-debug ...
To use host binaries, run `chroot /host`
Pod IP: w.x.y.z
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
Store the disk paths gathered in step 7(ii) in the DISKS variable within quotes.
sh-4.2# DISKS="/dev/disk/by-id/nvme-xxxxxx /dev/disk/by-id/nvme-yyyyyy /dev/disk/by-id/nvme-zzzzzz"
Run sgdisk --zap-all on all the disks.
sh-4.4# for disk in $DISKS; do sgdisk --zap-all $disk; done
Example output:
Problem opening /dev/disk/by-id/nvme-xxxxxx for reading! Error is 2.
The specified file does not exist!
Problem opening '' for writing! Program will now terminate.
Warning! MBR not overwritten! Error is 2!
Problem opening /dev/disk/by-id/nvme-yyyyy for reading! Error is 2.
The specified file does not exist!
Problem opening '' for writing! Program will now terminate.
Warning! MBR not overwritten! Error is 2!
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
Note: Ignore the file-not-found warnings as they refer to disks that are on other machines.
Exit the shell and repeat for the other nodes.
Exit the shell and repeat for the other nodes.
sh-4.4# exit
exit
sh-4.2# exit
exit
Removing debug pod ...
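As an alternative to opening a console on each node, the interactive steps above can be sketched as a single loop over the storage nodes. This is destructive: the oc debug loop is left commented, and the disk list must be verified against step 7(ii) before running anything like it.

```shell
# Destructive sketch -- same sgdisk invocation as the interactive steps,
# looped over all storage nodes via oc debug (commented out; needs a live
# cluster and a verified disk list):
DISKS="/dev/disk/by-id/nvme-xxxxxx /dev/disk/by-id/nvme-yyyyyy /dev/disk/by-id/nvme-zzzzzz"
# for node in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= \
#     -o jsonpath='{ .items[*].metadata.name }'); do
#   oc debug node/${node} -- chroot /host sh -c \
#     "for d in $DISKS; do sgdisk --zap-all \$d; done"
# done
# The word splitting the inner loop relies on:
for d in $DISKS; do echo "would wipe: $d"; done
```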
Delete the openshift-storage.noobaa.io storage class.
$ oc delete storageclass openshift-storage.noobaa.io --wait=true --timeout=5m
Unlabel the storage nodes.
$ oc label nodes --all cluster.ocs.openshift.io/openshift-storage-
$ oc label nodes --all topology.rook.io/rack-
Note: You can ignore the warnings displayed for the unlabeled nodes, such as label <label> not found.
Confirm that all PVs are deleted. If any PV is left in the Released state, delete it.
# oc get pv | egrep 'ocs-storagecluster-ceph-rbd|ocs-storagecluster-cephfs'
# oc delete pv <pv name>
Remove the CustomResourceDefinitions.
$ oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusterinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io --wait=true --timeout=5m
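Optionally, you can verify that no OpenShift Container Storage CRDs remain. The filter below is a sketch matching the API groups of the CRDs deleted above, demonstrated on sample names since the live listing needs a cluster.

```shell
# Verification sketch: on a live cluster, list any leftover CRDs with
#   oc get crd -o name | grep -E 'noobaa\.io|ceph\.rook\.io|ocs\.openshift\.io'
# (an empty result means the deletion succeeded). The filter itself,
# shown on sample CRD names -- only the OCS-related one matches:
result=$(printf '%s\n' "backingstores.noobaa.io" "machines.machine.openshift.io" \
  | grep -E 'noobaa\.io|ceph\.rook\.io|ocs\.openshift\.io')
echo "$result"
```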
To ensure that OpenShift Container Storage is uninstalled completely, on the OpenShift Container Platform Web Console:
- Click Home → Overview to access the dashboard.
- Verify that the Persistent Storage and Object Service tabs no longer appear next to the Cluster tab.
4.2. Removing monitoring stack from OpenShift Container Storage
Use this section to clean up the monitoring stack from OpenShift Container Storage.
The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.
Prerequisites
- The OpenShift Container Platform monitoring stack is configured to use OpenShift Container Storage PVCs. For more information, see configuring the monitoring stack.
Procedure
List the pods and PVCs that are currently running in the openshift-monitoring namespace.
$ oc get pod,pvc -n openshift-monitoring
NAME                                              READY   STATUS    RESTARTS   AGE
pod/alertmanager-main-0                           3/3     Running   0          8d
pod/alertmanager-main-1                           3/3     Running   0          8d
pod/alertmanager-main-2                           3/3     Running   0          8d
pod/cluster-monitoring-operator-84457656d-pkrxm   1/1     Running   0          8d
pod/grafana-79ccf6689f-2ll28                      2/2     Running   0          8d
pod/kube-state-metrics-7d86fb966-rvd9w            3/3     Running   0          8d
pod/node-exporter-25894                           2/2     Running   0          8d
pod/node-exporter-4dsd7                           2/2     Running   0          8d
pod/node-exporter-6p4zc                           2/2     Running   0          8d
pod/node-exporter-jbjvg                           2/2     Running   0          8d
pod/node-exporter-jj4t5                           2/2     Running   0          6d18h
pod/node-exporter-k856s                           2/2     Running   0          6d18h
pod/node-exporter-rf8gn                           2/2     Running   0          8d
pod/node-exporter-rmb5m                           2/2     Running   0          6d18h
pod/node-exporter-zj7kx                           2/2     Running   0          8d
pod/openshift-state-metrics-59dbd4f654-4clng      3/3     Running   0          8d
pod/prometheus-adapter-5df5865596-k8dzn           1/1     Running   0          7d23h
pod/prometheus-adapter-5df5865596-n2gj9           1/1     Running   0          7d23h
pod/prometheus-k8s-0                              6/6     Running   1          8d
pod/prometheus-k8s-1                              6/6     Running   1          8d
pod/prometheus-operator-55cfb858c9-c4zd9          1/1     Running   0          6d21h
pod/telemeter-client-78fc8fc97d-2rgfp             3/3     Running   0          8d

NAME                                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0   Bound    pvc-0d519c4f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1   Bound    pvc-0d5a9825-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2   Bound    pvc-0d6413dc-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0        Bound    pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1        Bound    pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
Edit the monitoring configmap.
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Remove any config sections that reference the OpenShift Container Storage storage classes as shown in the following example and save it.
Before editing
. . .
apiVersion: v1
data:
  config.yaml: |
    alertmanagerMain:
      volumeClaimTemplate:
        metadata:
          name: my-alertmanager-claim
        spec:
          resources:
            requests:
              storage: 40Gi
          storageClassName: ocs-storagecluster-ceph-rbd
    prometheusK8s:
      volumeClaimTemplate:
        metadata:
          name: my-prometheus-claim
        spec:
          resources:
            requests:
              storage: 40Gi
          storageClassName: ocs-storagecluster-ceph-rbd
kind: ConfigMap
metadata:
  creationTimestamp: "2019-12-02T07:47:29Z"
  name: cluster-monitoring-config
  namespace: openshift-monitoring
  resourceVersion: "22110"
  selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
  uid: fd6d988b-14d7-11ea-84ff-066035b9efa8
. . .
After editing
. . .
apiVersion: v1
data:
  config.yaml: |
kind: ConfigMap
metadata:
  creationTimestamp: "2019-11-21T13:07:05Z"
  name: cluster-monitoring-config
  namespace: openshift-monitoring
  resourceVersion: "404352"
  selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
  uid: d12c796a-0c5f-11ea-9832-063cd735b81c
. . .
In this example, the alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Container Storage PVCs.
Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes.
$ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m
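Deleting the PVCs one at a time can be sketched as a single filtered pass. The awk filter is demonstrated on captured sample output; the column position assumes the default `oc get pvc` layout, so verify it against your own output before wiring in the delete.

```shell
# Sketch: select only the PVCs bound to the OCS storage class. Sample lines
# captured in the shape of `oc get pvc -n openshift-monitoring --no-headers`
# (column 6 is STORAGECLASS in the default layout -- an assumption to verify):
sample='my-alertmanager-claim-alertmanager-main-0 Bound pvc-0d51 40Gi RWO ocs-storagecluster-ceph-rbd 8d
some-other-claim Bound pvc-ffff 10Gi RWO gp2 8d'
matches=$(echo "$sample" | awk '$6 == "ocs-storagecluster-ceph-rbd" {print $1}')
echo "$matches"
# On a live cluster, pipe the real listing into the delete command instead:
#   oc get pvc -n openshift-monitoring --no-headers \
#     | awk '$6 == "ocs-storagecluster-ceph-rbd" {print $1}' \
#     | while read pvc; do oc delete pvc "$pvc" -n openshift-monitoring --wait=true --timeout=5m; done
```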
4.3. Removing OpenShift Container Platform registry from OpenShift Container Storage
Use this section to clean up the OpenShift Container Platform registry from OpenShift Container Storage. If you want to configure alternative storage, see image registry.
The PVCs that are created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace.
Prerequisites
- The image registry should have been configured to use an OpenShift Container Storage PVC.
Procedure
Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section.
$ oc edit configs.imageregistry.operator.openshift.io
Before editing
. . .
storage:
  pvc:
    claim: registry-cephfs-rwx-pvc
. . .
After editing
. . .
storage:
  emptyDir: {}
. . .
In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.
Delete the PVC.
$ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m
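Optionally verify that the registry no longer references a PVC. The jsonpath query is a sketch to check against your cluster, and the captured value below is illustrative.

```shell
# Verification sketch: on a live cluster,
#   oc get configs.imageregistry.operator.openshift.io cluster \
#     -o jsonpath='{.spec.storage}'
# should show emptyDir rather than a pvc claim after the edit above.
# Decision logic on a captured value:
storage='{"emptyDir":{}}'   # illustrative captured value
case "$storage" in
  *emptyDir*) result="registry detached from OpenShift Container Storage" ;;
  *)          result="registry still references a PVC" ;;
esac
echo "$result"
```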
4.4. Removing the cluster logging operator from OpenShift Container Storage
Use this section to clean up the cluster logging operator from OpenShift Container Storage.
The PVCs that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace.
Prerequisites
- The cluster logging instance should have been configured to use OpenShift Container Storage PVCs.
Procedure
Remove the ClusterLogging instance in the namespace.
$ oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m
The PVCs in the openshift-logging namespace are now safe to delete.
Delete the PVCs.
$ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m