Chapter 7. Uninstalling OpenShift Data Foundation
7.1. Uninstalling OpenShift Data Foundation in Internal-attached devices mode
Use the steps in this section to uninstall OpenShift Data Foundation.
Uninstall Annotations
Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster:
- uninstall.ocs.openshift.io/cleanup-policy: delete
- uninstall.ocs.openshift.io/mode: graceful
The following table provides information on the different values that can be used with these annotations:
| Annotation | Value | Default | Behavior |
|---|---|---|---|
| cleanup-policy | delete | Yes | Rook cleans up the physical drives and the DataDirHostPath |
| cleanup-policy | retain | No | Rook does not clean up the physical drives and the DataDirHostPath |
| mode | graceful | Yes | Rook and NooBaa pause the uninstall process until the administrator/user removes the Persistent Volume Claims (PVCs) and Object Bucket Claims (OBCs) |
| mode | forced | No | Rook and NooBaa proceed with the uninstall even if PVCs/OBCs provisioned using Rook and NooBaa exist respectively |
Edit the value of the annotation to change the cleanup policy or the uninstall mode.
$ oc -n openshift-storage annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/cleanup-policy="retain" --overwrite

$ oc -n openshift-storage annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/mode="forced" --overwrite

Expected output for both commands:

storagecluster.ocs.openshift.io/ocs-storagecluster annotated
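Before changing either value, it can be useful to check what is currently in effect. The helper below is an illustrative sketch, not part of the product: it parses the annotations map printed by `oc get ... -o jsonpath='{.metadata.annotations}'` with a deliberately naive pattern match, and falls back to the documented defaults when an annotation is unset.

```shell
# Sketch: report the effective uninstall annotation value, falling back to
# the documented defaults (cleanup-policy=delete, mode=graceful) when unset.
# $1: annotations JSON map, $2: annotation key suffix, $3: default value
effective_uninstall_setting() {
  local value
  value=$(printf '%s' "$1" |
    sed -n 's|.*"uninstall\.ocs\.openshift\.io/'"$2"'":"\([^"]*\)".*|\1|p')
  echo "${value:-$3}"
}

# On a live cluster (assuming the storage cluster is named ocs-storagecluster):
#   ann=$(oc get storagecluster ocs-storagecluster -n openshift-storage \
#         -o jsonpath='{.metadata.annotations}')
#   effective_uninstall_setting "$ann" cleanup-policy delete
#   effective_uninstall_setting "$ann" mode graceful
```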
Prerequisites
- Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation.
- Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Data Foundation.
- If any custom resources (such as custom storage classes, cephblockpools) were created by the admin, they must be deleted by the admin after removing the resources which consumed them.
Procedure
Delete the volume snapshots that are using OpenShift Data Foundation.
List the volume snapshots from all the namespaces.
$ oc get volumesnapshot --all-namespaces

From the output of the previous command, identify and delete the volume snapshots that are using OpenShift Data Foundation.

$ oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>

<VOLUME-SNAPSHOT-NAME> is the name of the volume snapshot.
<NAMESPACE> is the project namespace.
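If many namespaces are involved, the identification step can be scripted. The following helper is a sketch that assumes the default OpenShift Data Foundation VolumeSnapshotClass names contain the ocs-storagecluster prefix; verify the class names in your cluster (`oc get volumesnapshotclass`) before relying on the filter.

```shell
# Sketch: given `oc get volumesnapshot --all-namespaces` output on stdin,
# print the NAMESPACE and NAME of snapshots whose VolumeSnapshotClass looks
# like an OpenShift Data Foundation class (assumed to contain
# "ocs-storagecluster"). Skips the header row.
odf_snapshots() {
  awk 'NR > 1 && /ocs-storagecluster/ { print $1, $2 }'
}

# On a live cluster:
#   oc get volumesnapshot --all-namespaces | odf_snapshots
```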
Delete PVCs and OBCs that are using OpenShift Data Foundation.
In the default uninstall mode (graceful), the uninstaller waits until all the PVCs and OBCs that use OpenShift Data Foundation are deleted.

If you want to delete the Storage Cluster without deleting the PVCs, you can set the uninstall mode annotation to forced and skip this step. Doing so results in orphan PVCs and OBCs in the system.

Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Data Foundation. See Removing monitoring stack from OpenShift Data Foundation.

Delete OpenShift Container Platform Registry PVCs using OpenShift Data Foundation. See Removing OpenShift Container Platform registry from OpenShift Data Foundation.

Delete OpenShift Container Platform logging PVCs using OpenShift Data Foundation. See Removing the cluster logging operator from OpenShift Data Foundation.
Delete the other PVCs and OBCs provisioned using OpenShift Data Foundation.
The following is a sample script to identify the PVCs and OBCs provisioned using OpenShift Data Foundation. The script ignores the PVCs that are used internally by OpenShift Data Foundation.
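A sketch of such a script is shown below. The provisioner strings and the internal NooBaa PVC names are assumptions based on a default internal-mode deployment; verify them against your cluster before use.

```shell
#!/usr/bin/env bash
# Sketch of a PVC/OBC identification script. The provisioner and internal
# NooBaa PVC names below are assumptions; adjust for your deployment.
RBD_PROVISIONER="openshift-storage.rbd.csi.ceph.com"
CEPHFS_PROVISIONER="openshift-storage.cephfs.csi.ceph.com"
NOOBAA_PROVISIONER="openshift-storage.noobaa.io/obc"
RGW_PROVISIONER="openshift-storage.ceph.rook.io/bucket"   # omit on cloud platforms

NOOBAA_DB_PVC="noobaa-db"
NOOBAA_BACKINGSTORE_PVC="noobaa-default-backing-store-noobaa-pvc"

# Filter helper: given `oc get storageclasses --no-headers` output on stdin,
# print the names of storage classes backed by one of the ODF provisioners.
odf_storageclasses() {
  grep -e "$RBD_PROVISIONER" -e "$CEPHFS_PROVISIONER" \
       -e "$NOOBAA_PROVISIONER" -e "$RGW_PROVISIONER" | awk '{print $1}'
}

# List PVCs and OBCs in each ODF storage class, skipping internal PVCs.
list_odf_pvcs_and_obcs() {
  for SC in $(oc get storageclasses --no-headers | odf_storageclasses); do
    echo "== $SC StorageClass PVCs and OBCs =="
    oc get pvc --all-namespaces --no-headers 2>/dev/null | grep "$SC" |
      grep -v -e "$NOOBAA_DB_PVC" -e "$NOOBAA_BACKINGSTORE_PVC"
    oc get obc --all-namespaces --no-headers 2>/dev/null | grep "$SC"
    echo
  done
}
```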
Note: Omit RGW_PROVISIONER for cloud platforms.

Delete the OBCs.
$ oc delete obc <obc-name> -n <project-name>

<obc-name> is the name of the OBC.
<project-name> is the name of the project.
Delete the PVCs.
$ oc delete pvc <pvc-name> -n <project-name>

<pvc-name> is the name of the PVC.
<project-name> is the name of the project.

Note: Ensure that you have removed any custom backing stores, bucket classes, and so on, that were created in the cluster.
Delete the Storage System object and wait for the removal of the associated resources.
$ oc delete -n openshift-storage storagesystem --all --wait=true

Check the cleanup pods if the uninstall.ocs.openshift.io/cleanup-policy was set to delete (default) and ensure that their status is Completed.

$ oc get pods -n openshift-storage | grep -i cleanup

Example output:

NAME                       READY   STATUS      RESTARTS   AGE
cluster-cleanup-job-<xx>   0/1     Completed   0          8m35s
cluster-cleanup-job-<yy>   0/1     Completed   0          8m35s
cluster-cleanup-job-<zz>   0/1     Completed   0          8m35s

Confirm that the directory /var/lib/rook is now empty. This directory is empty only if the uninstall.ocs.openshift.io/cleanup-policy annotation was set to delete (default).

$ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host ls -l /var/lib/rook; done

If encryption was enabled at the time of install, remove the dm-crypt managed device-mapper mapping from the OSDs on all the OpenShift Data Foundation nodes.

Create a debug pod and chroot to the host on the storage node.

$ oc debug node/<node-name>
$ chroot /host

<node-name> is the name of the node.
Get the device names and make a note of the OpenShift Data Foundation devices.

$ dmsetup ls

Example output:

ocs-deviceset-0-data-0-57snx-block-dmcrypt (253:1)

Remove the mapped device.

$ cryptsetup luksClose --debug --verbose ocs-deviceset-0-data-0-57snx-block-dmcrypt

Important: If the above command gets stuck due to insufficient privileges, run the following commands:
- Press CTRL+Z to exit the above command.
- Find the PID of the process that was stuck.

$ ps -ef | grep crypt

- Terminate the process using the kill command.

$ kill -9 <PID>

<PID> is the process ID.

- Verify that the device name is removed.

$ dmsetup ls
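To confirm that no OpenShift Data Foundation mappings remain on a node, the dmsetup ls check can be wrapped in a small filter. This is a sketch that assumes the ocs-deviceset-...-block-dmcrypt naming pattern shown in the example output above; an empty result means no OpenShift Data Foundation mappings remain.

```shell
# Sketch: given `dmsetup ls` output on stdin, print any remaining dm-crypt
# mappings created for OpenShift Data Foundation OSDs (assumed to follow the
# ocs-deviceset-...-block-dmcrypt naming pattern).
remaining_odf_dmcrypt() {
  awk '$1 ~ /^ocs-deviceset-.*-block-dmcrypt$/ { print $1 }'
}

# On the node (inside the chroot):
#   dmsetup ls | remaining_odf_dmcrypt
```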
Delete the namespace and wait until the deletion is complete. You need to switch to another project if openshift-storage is the active project.

For example:

$ oc project default
$ oc delete project openshift-storage --wait=true --timeout=5m

The project is deleted if the following command returns a NotFound error.

$ oc get project openshift-storage

Note: While uninstalling OpenShift Data Foundation, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated.

Delete local storage operator configurations if you have deployed OpenShift Data Foundation using local storage devices. See Removing local storage operator configurations.
Unlabel the storage nodes.
$ oc label nodes --all cluster.ocs.openshift.io/openshift-storage-
$ oc label nodes --all topology.rook.io/rack-

Remove the OpenShift Data Foundation taint if the nodes were tainted.

$ oc adm taint nodes --all node.ocs.openshift.io/storage-

Confirm that all the Persistent Volumes (PVs) provisioned using OpenShift Data Foundation are deleted. If there is any PV left in the Released state, delete it.

$ oc get pv
$ oc delete pv <pv-name>

<pv-name> is the name of the PV.
Remove the CustomResourceDefinitions.

$ oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m

To ensure that OpenShift Data Foundation is uninstalled completely, on the OpenShift Container Platform Web Console:
- Click Storage.
- Verify that OpenShift Data Foundation no longer appears under Storage.
7.1.1. Removing local storage operator configurations
Use the instructions in this section only if you have deployed OpenShift Data Foundation using local storage devices.
For OpenShift Data Foundation deployments using only LocalVolume resources, go directly to step 8.
Procedure
Identify the LocalVolumeSet and the corresponding StorageClassName being used by OpenShift Data Foundation.

$ oc get localvolumesets.local.storage.openshift.io -n openshift-local-storage

Set the variable SC to the StorageClass providing the LocalVolumeSet.

$ export SC="<StorageClassName>"

List and note the devices to be cleaned up later. To list the device IDs of the disks, see Find the available storage devices.
Example output:

/dev/disk/by-id/scsi-360050763808104bc28000000000000eb
/dev/disk/by-id/scsi-360050763808104bc28000000000000ef
/dev/disk/by-id/scsi-360050763808104bc28000000000000f3

Delete the LocalVolumeSet.

$ oc delete localvolumesets.local.storage.openshift.io <name-of-volumeset> -n openshift-local-storage

Delete the local storage PVs for the given StorageClassName.

$ oc get pv | grep $SC | awk '{print $1}' | xargs oc delete pv

Delete the StorageClassName.

$ oc delete sc $SC

Delete the symlinks created by the LocalVolumeSet.

$ [[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done

Delete LocalVolumeDiscovery.

$ oc delete localvolumediscovery.local.storage.openshift.io/auto-discover-devices -n openshift-local-storage

Remove the LocalVolume resources (if any).

Use the following steps to remove the LocalVolume resources that were used to provision PVs in the current or previous OpenShift Data Foundation version. Also, ensure that these resources are not being used by other tenants on the cluster.

For each of the local volumes, do the following:
Identify the LocalVolume and the corresponding StorageClassName being used by OpenShift Data Foundation.

$ oc get localvolume.local.storage.openshift.io -n openshift-local-storage

Set the variable LV to the name of the LocalVolume and the variable SC to the name of the StorageClass.

For example:

$ LV=local-block
$ SC=localblock

List and note the devices to be cleaned up later.
$ oc get localvolume -n openshift-local-storage $LV -o jsonpath='{ .spec.storageClassDevices[].devicePaths[] }{"\n"}'

Example output:

/dev/sdb /dev/sdc /dev/sdd /dev/sde

Delete the local volume resource.

$ oc delete localvolume -n openshift-local-storage --wait=true $LV

Delete the remaining PVs and StorageClasses if they exist.
$ oc delete pv -l storage.openshift.com/local-volume-owner-name=${LV} --wait --timeout=5m
$ oc delete storageclass $SC --wait --timeout=5m

Clean up the artifacts from the storage nodes for that resource.

$ [[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done
Wipe the disks for each of the local volume sets or local volumes listed in steps 1 and 8 respectively so that they can be reused.
List the storage nodes.
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=

Example output:

NAME       STATUS   ROLES    AGE     VERSION
node-xxx   Ready    worker   4h45m   v1.18.3+6c42de8
node-yyy   Ready    worker   4h46m   v1.18.3+6c42de8
node-zzz   Ready    worker   4h45m   v1.18.3+6c42de8

Obtain the node console and run the chroot /host command when the prompt appears.

Store the disk paths in the DISKS variable within quotes. For the list of disk paths, see step 3 and step 8.c for the local volume set and local volume respectively.

For example:

sh-4.4# DISKS="/dev/disk/by-id/scsi-360050763808104bc28000000000000eb /dev/disk/by-id/scsi-360050763808104bc28000000000000ef /dev/disk/by-id/scsi-360050763808104bc28000000000000f3"

or

sh-4.2# DISKS="/dev/sdb /dev/sdc /dev/sdd /dev/sde"

Run sgdisk --zap-all on all the disks.

sh-4.4# for disk in $DISKS; do sgdisk --zap-all $disk; done

Exit the shell and repeat for the other nodes.

sh-4.4# exit
exit
Removing debug pod ...
Delete the openshift-local-storage namespace and wait until the deletion is complete. You need to switch to another project if the openshift-local-storage namespace is the active project.

For example:

$ oc project default
$ oc delete project openshift-local-storage --wait=true --timeout=5m

The project is deleted if the following command returns a NotFound error.

$ oc get project openshift-local-storage
7.2. Removing monitoring stack from OpenShift Data Foundation
Use this section to clean up the monitoring stack from OpenShift Data Foundation.
The Persistent Volume Claims (PVCs) that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.
Prerequisites
- PVCs are configured to use the OpenShift Container Platform monitoring stack. For more information, see configuring monitoring stack.
Procedure
List the pods and PVCs that are currently running in the openshift-monitoring namespace.

$ oc get pod,pvc -n openshift-monitoring

Edit the monitoring configmap.

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Remove any config sections that reference the OpenShift Data Foundation storage classes, as shown in the following example, and save it.
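As an illustration, assuming hypothetical claim names (my-alertmanager-claim, my-prometheus-claim), a hypothetical 40Gi size, and ocs-storagecluster-ceph-rbd standing in for whichever OpenShift Data Foundation storage class your monitoring stack uses, the config sections to remove look like this.

Before editing:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      volumeClaimTemplate:
        metadata:
          name: my-alertmanager-claim          # hypothetical claim name
        spec:
          resources:
            requests:
              storage: 40Gi                    # hypothetical size
          storageClassName: ocs-storagecluster-ceph-rbd
    prometheusK8s:
      volumeClaimTemplate:
        metadata:
          name: my-prometheus-claim            # hypothetical claim name
        spec:
          resources:
            requests:
              storage: 40Gi
          storageClassName: ocs-storagecluster-ceph-rbd
```

After editing, with the volumeClaimTemplate sections that reference the OpenShift Data Foundation storage class removed:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain: {}
    prometheusK8s: {}
```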
In this example, the alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Data Foundation PVCs.

Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes.
$ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m

<pvc-name> is the name of the PVC.
7.3. Removing OpenShift Container Platform registry from OpenShift Data Foundation
Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure an alternative storage, see Image registry.
The Persistent Volume Claims (PVCs) that are created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace.
Prerequisites
- The image registry must have been configured to use an OpenShift Data Foundation PVC.
Procedure
Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section.

$ oc edit configs.imageregistry.operator.openshift.io
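As an illustration (using the registry-cephfs-rwx-pvc claim name referenced later in this section), the storage section changes roughly as follows.

Before editing:

```yaml
storage:
  pvc:
    claim: registry-cephfs-rwx-pvc
```

After editing, the section is left empty so that the operator no longer references the PVC:

```yaml
storage: {}
```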
In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.

Delete the PVC.
$ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m

<pvc-name> is the name of the PVC.
7.4. Removing the cluster logging operator from OpenShift Data Foundation
Use this section to clean up the cluster logging operator from OpenShift Data Foundation.
The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace.
Prerequisites
- The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs.
Procedure
Remove the ClusterLogging instance in the namespace.

$ oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m

The PVCs in the openshift-logging namespace are now safe to delete.

Delete the PVCs.

$ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m

<pvc-name> is the name of the PVC.