
Chapter 4. Uninstalling OpenShift Container Storage


4.1. Uninstalling OpenShift Container Storage in Internal mode

Use the steps in this section to uninstall OpenShift Container Storage. Do not use the Uninstall option in the user interface.

Prerequisites

  • Make sure that the OpenShift Container Storage cluster is in a healthy state. The deletion might fail if some of the pods are not terminated successfully because of insufficient resources or nodes. If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Container Storage; an example health check is shown after this list.
  • Make sure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Container Storage. PVCs and OBCs will be deleted during the uninstall process.
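
    For example, you can check the overall cluster state from the command line before you begin (a quick sketch only; the status fields shown here are typical for recent OpenShift Container Storage releases and might differ in your version):

    $ oc get storagecluster -n openshift-storage -o jsonpath='{.items[*].status.phase}{"\n"}'
    $ oc get cephcluster -n openshift-storage -o jsonpath='{.items[*].status.ceph.health}{"\n"}'

    A healthy cluster typically reports Ready and HEALTH_OK, respectively.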

Procedure

  1. Query for PVCs and OBCs that use the OpenShift Container Storage-based storage class provisioners.

    For example:

    $ oc get pvc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-ceph-rbd")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{" Labels: "}{@.metadata.labels}{"\n"}{end}' --all-namespaces|awk '! ( /Namespace: openshift-storage/ && /app:noobaa/ )' | grep -v noobaa-default-backing-store-noobaa-pvc
    $ oc get pvc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-cephfs")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
    $ oc get obc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-ceph-rgw")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
    $ oc get obc -o=jsonpath='{range .items[?(@.spec.storageClassName=="openshift-storage.noobaa.io")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
  2. Follow these instructions to ensure that the PVCs and OBCs listed in the previous step are deleted.

    If you have created PVCs as a part of configuring the monitoring stack, cluster logging operator, or image registry, then you must perform the cleanup steps provided in the following sections as required:

    • Section 4.2, “Removing monitoring stack from OpenShift Container Storage”
    • Section 4.3, “Removing OpenShift Container Platform registry from OpenShift Container Storage”
    • Section 4.4, “Removing the cluster logging operator from OpenShift Container Storage”

      For each of the remaining PVCs or OBCs, follow the steps below:

      1. Determine the pod that is consuming the PVC or OBC.
      2. Identify the controlling API object, such as a Deployment, StatefulSet, DaemonSet, Job, or a custom controller.

        Each API object has a metadata field called ownerReferences, which is a list of associated objects. The entry with the controller field set to true points to a controlling object such as a ReplicaSet, StatefulSet, or DaemonSet.
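
        For example, you can find the pod that mounts a PVC and then inspect that pod's owner references (a sketch only; replace the placeholder names with your own PVC, pod, and project names):

        $ oc describe pvc <pvc name> -n <project name> | grep "Mounted By:"
        $ oc get pod <pod name> -n <project name> -o jsonpath='{.metadata.ownerReferences}'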

      3. Ensure that the API object is not consuming a PVC or OBC provided by OpenShift Container Storage. Either the object should be deleted or the storage should be replaced. Ask the owner of the project to make sure that it is safe to delete or modify the object.

        Note

        You can ignore the noobaa pods.

      4. Delete the OBCs.

        $ oc delete obc <obc name> -n <project name>
      5. Delete any custom Bucket Class you have created.

        $ oc get bucketclass -A  | grep -v noobaa-default-bucket-class
        $ oc delete bucketclass <bucketclass name> -n <project-name>
      6. If you have created any custom Multi Cloud Gateway backingstores, delete them.

        • List and note the backingstores.

          for bs in $(oc get backingstore -o name -n openshift-storage | grep -v noobaa-default-backing-store); do echo "Found backingstore $bs"; echo "It has the following pods running:"; echo "$(oc get pods -o name -n openshift-storage | grep $(echo ${bs} | cut -f2 -d/))"; done
        • Delete each of the backingstores listed above and confirm that the dependent resources also get deleted.

          for bs in $(oc get backingstore -o name -n openshift-storage | grep -v noobaa-default-backing-store); do echo "Deleting Backingstore $bs"; oc delete -n openshift-storage $bs; done
        • If any of the backingstores listed above were based on the pv-pool, ensure that the corresponding pod and PVC are also deleted.

          $ oc get pods -n openshift-storage | grep noobaa-pod | grep -v noobaa-default-backing-store-noobaa-pod
          $ oc get pvc -n openshift-storage --no-headers | grep -v noobaa-db | grep noobaa-pvc | grep -v noobaa-default-backing-store-noobaa-pvc
      7. Delete the remaining PVCs listed in Step 1.

        $ oc delete pvc <pvc name> -n <project-name>
  3. List and note the backing local volume objects. If there are no results, skip steps 7 and 8.

    $ for sc in $(oc get storageclass|grep 'kubernetes.io/no-provisioner' |grep -E $(oc get storagecluster -n openshift-storage -o jsonpath='{ .items[*].spec.storageDeviceSets[*].dataPVCTemplate.spec.storageClassName}' | sed 's/ /|/g')| awk '{ print $1 }');
    do
        echo -n "StorageClass: $sc ";
        oc get storageclass $sc -o jsonpath=" { 'LocalVolume: ' }{ .metadata.labels['local\.storage\.openshift\.io/owner-name'] } { '\n' }";
    done

    Example output:

    StorageClass: localblock  LocalVolume: local-block
  4. Delete the StorageCluster object and wait for the removal of the associated resources.

    $ oc delete -n openshift-storage storagecluster --all --wait=true
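
    Optionally, confirm that no StorageCluster or CephCluster resources remain before proceeding (a quick check; both commands should report that no resources are found):

    $ oc get storagecluster -n openshift-storage
    $ oc get cephcluster -n openshift-storage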
  5. Delete the namespace and wait until the deletion is complete. You will need to switch to another project if openshift-storage is the active project.

    1. Switch to another namespace if openshift-storage is the active namespace.

      For example:

      $ oc project default
    2. Delete the openshift-storage namespace.

      $ oc delete project openshift-storage --wait=true --timeout=5m
    3. Wait for approximately five minutes and confirm that the project is deleted successfully.

      $ oc get project  openshift-storage

      Output:

      Error from server (NotFound): namespaces "openshift-storage" not found
      Note

      While uninstalling OpenShift Container Storage, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in the article Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated.
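
      As a starting point, the following command lists any namespaced resources that still exist in openshift-storage and might be blocking termination (a sketch only; it can take a while to run and may print errors for resource types that cannot be listed):

      $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n openshift-storage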

  6. Clean up the storage operator artifacts on each node.

    $ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /var/lib/rook; done

    Ensure that the removed directory /var/lib/rook appears in the output.

    Confirm that the directory no longer exists.

    $ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host  ls -l /var/lib/rook; done
  7. Delete the local volumes created during the deployment.

    For each of the local volumes listed in step 3, do the following:

    1. Set the variable LV to the name of the LocalVolume and variable SC to the name of the StorageClass listed in Step 3.

      For example:

      $ LV=local-block
      $ SC=localblock
    2. List and note the devices to be cleaned up later.

      $ oc get localvolume -n local-storage $LV -o jsonpath='{ .spec.storageClassDevices[*].devicePaths[*] }'

      Example output:

      /dev/disk/by-id/nvme-xxxxxx
      /dev/disk/by-id/nvme-yyyyyy
      /dev/disk/by-id/nvme-zzzzzz
    3. Delete the local volume resource.

      $ oc delete localvolume -n local-storage --wait=true $LV
    4. Delete the remaining PVs and StorageClasses if they exist.

      $ oc delete pv -l storage.openshift.com/local-volume-owner-name=${LV} --wait --timeout=5m
      $ oc delete storageclass $SC --wait --timeout=5m
    5. Clean up the artifacts from the storage nodes for that resource.

      $ [[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done

      Example output:

      Starting pod/node-xxx-debug ...
      To use host binaries, run `chroot /host`
      removed '/mnt/local-storage/localblock/nvme2n1'
      removed directory '/mnt/local-storage/localblock'
      
      Removing debug pod ...
      Starting pod/node-yyy-debug ...
      To use host binaries, run `chroot /host`
      removed '/mnt/local-storage/localblock/nvme2n1'
      removed directory '/mnt/local-storage/localblock'
      
      Removing debug pod ...
      Starting pod/node-zzz-debug ...
      To use host binaries, run `chroot /host`
      removed '/mnt/local-storage/localblock/nvme2n1'
      removed directory '/mnt/local-storage/localblock'
      
      Removing debug pod ...
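
    Optionally, confirm that the PVs and the storage class for this local volume are gone before moving on (a quick check; both commands should report that no matching resources are found):

    $ oc get pv -l storage.openshift.com/local-volume-owner-name=${LV}
    $ oc get storageclass $SC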
  8. Wipe the disks for each of the local volumes listed in step 3 so that they can be reused.

    1. List the storage nodes.

      $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=

      Example output:

      NAME      STATUS   ROLES    AGE     VERSION
      node-xxx  Ready    worker   4h45m  v1.18.3+6c42de8
      node-yyy  Ready    worker   4h46m  v1.18.3+6c42de8
      node-zzz  Ready    worker   4h45m  v1.18.3+6c42de8
    2. Obtain the node console and run the chroot /host command when the prompt appears.

      $ oc debug node/node-xxx
      Starting pod/node-xxx-debug ...
      To use host binaries, run `chroot /host`
      Pod IP: w.x.y.z
      If you don't see a command prompt, try pressing enter.
      sh-4.2# chroot /host
    3. Store the disk paths gathered in step 7(ii) in the DISKS variable within quotes.

      sh-4.2# DISKS="/dev/disk/by-id/nvme-xxxxxx
      /dev/disk/by-id/nvme-yyyyyy /dev/disk/by-id/nvme-zzzzzz"
    4. Run sgdisk --zap-all on all the disks.

      sh-4.4# for disk in $DISKS; do sgdisk --zap-all $disk;done

      Example output:

      Problem opening /dev/disk/by-id/nvme-xxxxxx for reading! Error is 2.
      The specified file does not exist!
      Problem opening '' for writing! Program will now terminate.
      Warning! MBR not overwritten! Error is 2!
      Problem opening /dev/disk/by-id/nvme-yyyyy for reading! Error is 2.
      The specified file does not exist!
      Problem opening '' for writing! Program will now terminate.
      Warning! MBR not overwritten! Error is 2!
      Creating new GPT entries.
      GPT data structures destroyed! You may now partition the disk using fdisk or
      other utilities.
      Note

      Ignore file-not-found warnings as they refer to disks that are on other machines.
    5. Exit the shell and repeat for the other nodes.

      sh-4.4# exit
      exit
      sh-4.2# exit
      exit
      
      Removing debug pod ...
  9. Delete the openshift-storage.noobaa.io storage class.

    $ oc delete storageclass  openshift-storage.noobaa.io --wait=true --timeout=5m
  10. Unlabel the storage nodes.

    $ oc label nodes  --all cluster.ocs.openshift.io/openshift-storage-
    $ oc label nodes  --all topology.rook.io/rack-
    Note

    You can ignore the warnings displayed for unlabeled nodes, such as label <label> not found.

  11. Confirm that all PVs are deleted. If any PV is left in the Released state, delete it.

    # oc get pv | egrep 'ocs-storagecluster-ceph-rbd|ocs-storagecluster-cephfs'
    # oc delete pv <pv name>
  12. Remove CustomResourceDefinitions.

    $ oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io  storageclusterinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io --wait=true --timeout=5m
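
    Optionally, verify that none of the OpenShift Container Storage custom resource definitions remain (a quick check; the command should produce no output):

    $ oc get crd | grep -E 'noobaa.io|rook.io|ocs.openshift.io'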
  13. To ensure that OpenShift Container Storage is uninstalled completely, on the OpenShift Container Platform Web Console:

    1. Click Home → Overview to access the dashboard.
    2. Verify that the Persistent Storage and Object Service tabs no longer appear next to the Cluster tab.

4.2. Removing monitoring stack from OpenShift Container Storage

Use this section to clean up the monitoring stack from OpenShift Container Storage.

The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.

Prerequisites

Procedure

  1. List the pods and PVCs that are currently running in the openshift-monitoring namespace.

    $ oc get pod,pvc -n openshift-monitoring
    NAME                                               READY   STATUS    RESTARTS   AGE
    pod/alertmanager-main-0                            3/3     Running   0          8d
    pod/alertmanager-main-1                            3/3     Running   0          8d
    pod/alertmanager-main-2                            3/3     Running   0          8d
    pod/cluster-monitoring-operator-84457656d-pkrxm    1/1     Running   0          8d
    pod/grafana-79ccf6689f-2ll28                       2/2     Running   0          8d
    pod/kube-state-metrics-7d86fb966-rvd9w             3/3     Running   0          8d
    pod/node-exporter-25894                            2/2     Running   0          8d
    pod/node-exporter-4dsd7                            2/2     Running   0          8d
    pod/node-exporter-6p4zc                            2/2     Running   0          8d
    pod/node-exporter-jbjvg                            2/2     Running   0          8d
    pod/node-exporter-jj4t5                            2/2     Running   0          6d18h
    pod/node-exporter-k856s                            2/2     Running   0          6d18h
    pod/node-exporter-rf8gn                            2/2     Running   0          8d
    pod/node-exporter-rmb5m                            2/2     Running   0          6d18h
    pod/node-exporter-zj7kx                            2/2     Running   0          8d
    pod/openshift-state-metrics-59dbd4f654-4clng       3/3     Running   0          8d
    pod/prometheus-adapter-5df5865596-k8dzn            1/1     Running   0          7d23h
    pod/prometheus-adapter-5df5865596-n2gj9            1/1     Running   0          7d23h
    pod/prometheus-k8s-0                               6/6     Running   1          8d
    pod/prometheus-k8s-1                               6/6     Running   1          8d
    pod/prometheus-operator-55cfb858c9-c4zd9           1/1     Running   0          6d21h
    pod/telemeter-client-78fc8fc97d-2rgfp              3/3     Running   0          8d
    
    NAME                                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0   Bound    pvc-0d519c4f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1   Bound    pvc-0d5a9825-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2   Bound    pvc-0d6413dc-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0        Bound    pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1        Bound    pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-storagecluster-ceph-rbd   8d
  2. Edit the monitoring configmap.

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  3. Remove any config sections that reference the OpenShift Container Storage storage classes, as shown in the following example, and save the file.

    Before editing

    .
    .
    .
    apiVersion: v1
    data:
      config.yaml: |
        alertmanagerMain:
          volumeClaimTemplate:
            metadata:
              name: my-alertmanager-claim
            spec:
              resources:
                requests:
                  storage: 40Gi
              storageClassName: ocs-storagecluster-ceph-rbd
        prometheusK8s:
          volumeClaimTemplate:
            metadata:
              name: my-prometheus-claim
            spec:
              resources:
                requests:
                  storage: 40Gi
              storageClassName: ocs-storagecluster-ceph-rbd
    kind: ConfigMap
    metadata:
      creationTimestamp: "2019-12-02T07:47:29Z"
      name: cluster-monitoring-config
      namespace: openshift-monitoring
      resourceVersion: "22110"
      selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
      uid: fd6d988b-14d7-11ea-84ff-066035b9efa8
    .
    .
    .

    After editing

    .
    .
    .
    apiVersion: v1
    data:
      config.yaml: |
    kind: ConfigMap
    metadata:
      creationTimestamp: "2019-11-21T13:07:05Z"
      name: cluster-monitoring-config
      namespace: openshift-monitoring
      resourceVersion: "404352"
      selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
      uid: d12c796a-0c5f-11ea-9832-063cd735b81c
    .
    .
    .

    In this example, the alertmanagerMain and prometheusK8s monitoring components use the OpenShift Container Storage PVCs.

  4. Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes.

    $ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m
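
    Optionally, confirm that no PVCs backed by OpenShift Container Storage remain in the namespace (a quick check; the command should return no output):

    $ oc get pvc -n openshift-monitoring | grep ocs-storagecluster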

4.3. Removing OpenShift Container Platform registry from OpenShift Container Storage

Use this section to clean up the OpenShift Container Platform registry from OpenShift Container Storage. If you want to configure alternative storage, see the image registry documentation.

The PVCs that are created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace.

Prerequisites

  • The image registry should have been configured to use an OpenShift Container Storage PVC.

Procedure

  1. Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section.

    $ oc edit configs.imageregistry.operator.openshift.io

    Before editing

    .
    .
    .
    storage:
      pvc:
        claim: registry-cephfs-rwx-pvc
    .
    .
    .

    After editing

    .
    .
    .
    storage:
      emptyDir: {}
    .
    .
    .

    In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.

  2. Delete the PVC.

    $ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m
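
    Optionally, confirm that the deleted PVC no longer appears in the namespace (a quick check):

    $ oc get pvc -n openshift-image-registry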

4.4. Removing the cluster logging operator from OpenShift Container Storage

Use this section to clean up the cluster logging operator from OpenShift Container Storage.

The PVCs that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace.

Prerequisites

  • The cluster logging instance should have been configured to use OpenShift Container Storage PVCs.

Procedure

  1. Remove the ClusterLogging instance in the namespace.

    $ oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m

    The PVCs in the openshift-logging namespace are now safe to delete.

  2. Delete PVCs.

    $ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m
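
    Optionally, confirm that the deleted PVCs no longer appear in the namespace (a quick check):

    $ oc get pvc -n openshift-logging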