
Chapter 6. Uninstalling OpenShift Container Storage


6.1. Uninstalling OpenShift Container Storage in external mode

Use the steps in this section to uninstall OpenShift Container Storage instead of the Uninstall option from the user interface. Uninstalling OpenShift Container Storage will neither remove the RBD pool from the external cluster nor uninstall the external Red Hat Ceph Storage cluster.

Prerequisites

  • Make sure that the OpenShift Container Storage cluster is in a healthy state. The uninstall process can fail if some of the pods are not terminated successfully due to insufficient resources or nodes. If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Container Storage.
  • Make sure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Container Storage. PVCs and OBCs will be deleted during the uninstall process.

Procedure

  1. Query for PVCs and OBCs that use the OpenShift Container Storage based storage class provisioners.

    For example:

    $ oc get pvc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-external-storagecluster-ceph-rbd")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{" Labels: "}{@.metadata.labels}{"\n"}{end}' --all-namespaces|awk '! ( /Namespace: openshift-storage/ && /app:noobaa/ )' | grep -v noobaa-default-backing-store-noobaa-pvc
    Note

    If the external Red Hat Ceph Storage cluster is not configured for CephFS, you can ignore the following query command for ocs-external-storagecluster-cephfs.

    $ oc get pvc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-external-storagecluster-cephfs")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
    Note

    If the external Red Hat Ceph Storage cluster is not configured for Object Storage, you can ignore the following query command for ocs-external-storagecluster-ceph-rgw.

    $ oc get obc -o=jsonpath='{range .items[?(@.spec.storageClassName=="ocs-external-storagecluster-ceph-rgw")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
    $ oc get obc -o=jsonpath='{range .items[?(@.spec.storageClassName=="openshift-storage.noobaa.io")]}{"Name: "}{@.metadata.name}{" Namespace: "}{@.metadata.namespace}{"\n"}{end}' --all-namespaces
  2. Follow these instructions to ensure the PVCs and OBCs listed in the previous step are deleted.

    If you have created PVCs as a part of configuring the monitoring stack, cluster logging operator, or image registry, then you must perform the clean up steps provided in the following sections as required:

    • Section 6.2, “Removing monitoring stack from OpenShift Container Storage”
    • Section 6.3, “Removing OpenShift Container Platform registry from OpenShift Container Storage”
    • Section 6.4, “Removing the cluster logging operator from OpenShift Container Storage”

      For each of the remaining PVCs or OBCs, follow the steps mentioned below:

      1. Determine the pod that is consuming the PVC or OBC.
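
        For example, describing the claim shows the pods that mount it (the Mounted By field in the output). This is a minimal check; replace the placeholders with your values:

        $ oc describe pvc <pvc name> -n <project name>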
      2. Identify the controlling API object, such as a Deployment, StatefulSet, DaemonSet, Job, or a custom controller.

        Each API object has a metadata field known as OwnerReference. This is a list of associated objects. The OwnerReference with the controller field set to true points to controlling objects such as ReplicaSet, StatefulSet, DaemonSet, and so on.
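
        For example, you can read the owner of a pod directly from its metadata (a sketch; replace the placeholders with your values):

        $ oc get pod <pod name> -n <project name> -o jsonpath='{.metadata.ownerReferences}'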

      3. Ensure that the API object is not consuming PVC or OBC provided by OpenShift Container Storage. Either the object should be deleted or the storage should be replaced. Ask the owner of the project to make sure that it is safe to delete or modify the object.

        Note

        You can ignore the noobaa pods.
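
        If the owning object is, for example, a Deployment and the project owner decides to stop it rather than delete it, one option (a sketch; adapt it to the controller type in use) is to scale it down before replacing or removing its storage:

        $ oc scale deployment <deployment name> -n <project name> --replicas=0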

      4. Delete the OBCs.

        $ oc delete obc <obc name> -n <project name>
      5. Delete any custom Bucket Class you have created.

        $ oc get bucketclass -A | grep -v noobaa-default-bucket-class
        $ oc delete bucketclass <bucketclass name> -n <project-name>
      6. If you have created any custom Multi Cloud Gateway backingstores, delete each of them.

        1. List and note the backingstores.

          for bs in $(oc get backingstore -o name -n openshift-storage | grep -v noobaa-default-backing-store); do echo "Found backingstore $bs"; echo "It has the following pods running:"; echo "$(oc get pods -o name -n openshift-storage | grep $(echo ${bs} | cut -f2 -d/))"; done
        2. Delete each of the backingstores listed above and confirm that the corresponding pods and PVCs are deleted.

          for bs in $(oc get backingstore -o name -n openshift-storage | grep -v noobaa-default-backing-store); do echo "Deleting Backingstore $bs"; oc delete -n openshift-storage $bs; done
        3. If any of the backingstores listed above were based on the pv-pool, ensure that the corresponding pod and PVC are also deleted.

          $ oc get pods  -n openshift-storage | grep noobaa-pod | grep -v noobaa-default-backing-store-noobaa-pod
          $ oc get pvc -n openshift-storage --no-headers | grep -v noobaa-db | grep -v noobaa-default-backing-store-noobaa-pvc
      7. Delete the remaining PVCs listed in Step 1.

        $ oc delete pvc <pvc name> -n <project-name>
  3. Delete the StorageCluster object and wait for the removal of the associated resources.

    $ oc delete -n openshift-storage storagecluster --all --wait=true
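
    You can optionally watch the namespace while the associated resources are removed (a minimal check):

    $ oc get pods -n openshift-storage --watch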
  4. Delete the namespace and wait till the deletion is complete. You will need to switch to another project if openshift-storage is the active project.

    1. Switch to another namespace if openshift-storage is the active namespace.

      For example:

      $ oc project default
    2. Delete the openshift-storage namespace.

      $ oc delete project openshift-storage --wait=true --timeout=5m
    3. Wait for approximately five minutes and confirm that the project is deleted successfully.

      $ oc get project  openshift-storage

      Output:

      Error from server (NotFound): namespaces "openshift-storage" not found
      Note

      While uninstalling OpenShift Container Storage, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in the article Troubleshooting and deleting remaining resources during Uninstall to identify the objects that are blocking the namespace from being terminated.
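
      As a starting point (a sketch, not a substitute for the linked article), you can list the namespaced resources that still exist in the namespace:

      $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n openshift-storage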

  5. Delete the openshift-storage.noobaa.io storage class.

    $ oc delete storageclass  openshift-storage.noobaa.io --wait=true --timeout=5m
  6. Confirm all PVs are deleted. If there is any PV left in the Released state, delete it.

    $ oc get pv | egrep 'ocs-external-storagecluster-ceph-rbd|ocs-external-storagecluster-cephfs'
    $ oc delete pv <pv name>
  7. Remove CustomResourceDefinitions.

    $ oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io  storageclusterinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io --wait=true --timeout=5m
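
    Optionally, confirm that the CustomResourceDefinitions are gone (a quick check; the command should return no output):

    $ oc get crd | grep -E 'noobaa.io|ceph.rook.io|ocs.openshift.io'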
  8. To ensure that OpenShift Container Storage is uninstalled completely, verify the following in the OpenShift Container Platform Web Console (a CLI check is shown after these steps):

    1. Click Home → Overview to access the dashboard.
    2. Verify that the Persistent Storage and Object Service tabs no longer appear next to the Cluster tab.
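
    Alternatively, a quick check from the CLI (a minimal sketch; adjust the pattern to the storage class names used in your deployment) is to confirm that no OpenShift Container Storage storage classes remain. The command should return no output:

    $ oc get storageclass | grep -E 'ocs-external-storagecluster|openshift-storage.noobaa.io'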

6.2. Removing monitoring stack from OpenShift Container Storage

Use this section to clean up the monitoring stack from OpenShift Container Storage.

The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.

Prerequisites

  • The monitoring stack should have been configured to use OpenShift Container Storage PVCs.

Procedure

  1. List the pods and PVCs that are currently running in the openshift-monitoring namespace.

    $ oc get pod,pvc -n openshift-monitoring
    NAME                           READY   STATUS    RESTARTS   AGE
    pod/alertmanager-main-0         3/3     Running   0          8d
    pod/alertmanager-main-1         3/3     Running   0          8d
    pod/alertmanager-main-2         3/3     Running   0          8d
    pod/cluster-monitoring-operator-84457656d-pkrxm   1/1     Running   0          8d
    pod/grafana-79ccf6689f-2ll28    2/2     Running   0          8d
    pod/kube-state-metrics-7d86fb966-rvd9w             3/3     Running   0          8d
    pod/node-exporter-25894         2/2     Running   0          8d
    pod/node-exporter-4dsd7         2/2     Running   0          8d
    pod/node-exporter-6p4zc         2/2     Running   0          8d
    pod/node-exporter-jbjvg         2/2     Running   0          8d
    pod/node-exporter-jj4t5         2/2     Running   0          6d18h
    pod/node-exporter-k856s         2/2     Running   0          6d18h
    pod/node-exporter-rf8gn         2/2     Running   0          8d
    pod/node-exporter-rmb5m         2/2     Running   0          6d18h
    pod/node-exporter-zj7kx         2/2     Running   0          8d
    pod/openshift-state-metrics-59dbd4f654-4clng       3/3     Running   0          8d
    pod/prometheus-adapter-5df5865596-k8dzn            1/1     Running   0          7d23h
    pod/prometheus-adapter-5df5865596-n2gj9            1/1     Running   0          7d23h
    pod/prometheus-k8s-0            6/6     Running   1          8d
    pod/prometheus-k8s-1            6/6     Running   1          8d
    pod/prometheus-operator-55cfb858c9-c4zd9           1/1     Running   0          6d21h
    pod/telemeter-client-78fc8fc97d-2rgfp              3/3     Running   0          8d
    
    NAME                                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0   Bound    pvc-0d519c4f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1   Bound    pvc-0d5a9825-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2   Bound    pvc-0d6413dc-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0        Bound    pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   8d
    persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1        Bound    pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   8d
  2. Edit the monitoring configmap.

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  3. Remove any config sections that reference the OpenShift Container Storage storage classes as shown in the following example and save it.


    Before editing

    .
    .
    .
    apiVersion: v1
    data:
      config.yaml: |
        alertmanagerMain:
          volumeClaimTemplate:
            metadata:
              name: my-alertmanager-claim
            spec:
              resources:
                requests:
                  storage: 40Gi
              storageClassName: ocs-external-storagecluster-ceph-rbd
        prometheusK8s:
          volumeClaimTemplate:
            metadata:
              name: my-prometheus-claim
            spec:
              resources:
                requests:
                  storage: 40Gi
              storageClassName: ocs-external-storagecluster-ceph-rbd
    kind: ConfigMap
    metadata:
      creationTimestamp: "2019-12-02T07:47:29Z"
      name: cluster-monitoring-config
      namespace: openshift-monitoring
      resourceVersion: "22110"
      selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
      uid: fd6d988b-14d7-11ea-84ff-066035b9efa8
    
    
    .
    .
    .

    After editing

    .
    .
    .
    apiVersion: v1
    data:
      config.yaml: |
    kind: ConfigMap
    metadata:
      creationTimestamp: "2019-11-21T13:07:05Z"
      name: cluster-monitoring-config
      namespace: openshift-monitoring
      resourceVersion: "404352"
      selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
      uid: d12c796a-0c5f-11ea-9832-063cd735b81c
    .
    .
    .

    In this example, alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Container Storage PVCs.
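
    You can optionally confirm that the ConfigMap no longer references the OpenShift Container Storage storage class (a minimal check; the command should return no output):

    $ oc -n openshift-monitoring get configmap cluster-monitoring-config -o yaml | grep ocs-external-storagecluster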

  4. List the pods consuming the PVCs.

    In this example, the alertmanager-main and prometheus-k8s pods that were consuming the PVCs are in the Terminating state. You can delete the PVCs once these pods are no longer using the OpenShift Container Storage PVCs.

    $ oc get pod,pvc -n openshift-monitoring
    NAME                                               READY   STATUS      RESTARTS AGE
    pod/alertmanager-main-0                            3/3     Terminating   0      10h
    pod/alertmanager-main-1                            3/3     Terminating   0      10h
    pod/alertmanager-main-2                            3/3     Terminating   0      10h
    pod/cluster-monitoring-operator-84cd9df668-zhjfn   1/1     Running       0      18h
    pod/grafana-5db6fd97f8-pmtbf                       2/2     Running       0      10h
    pod/kube-state-metrics-895899678-z2r9q             3/3     Running       0      10h
    pod/node-exporter-4njxv                            2/2     Running       0      18h
    pod/node-exporter-b8ckz                            2/2     Running       0      11h
    pod/node-exporter-c2vp5                            2/2     Running       0      18h
    pod/node-exporter-cq65n                            2/2     Running       0      18h
    pod/node-exporter-f5sm7                            2/2     Running       0      11h
    pod/node-exporter-f852c                            2/2     Running       0      18h
    pod/node-exporter-l9zn7                            2/2     Running       0      11h
    pod/node-exporter-ngbs8                            2/2     Running       0      18h
    pod/node-exporter-rv4v9                            2/2     Running       0      18h
    pod/openshift-state-metrics-77d5f699d8-69q5x       3/3     Running       0      10h
    pod/prometheus-adapter-765465b56-4tbxx             1/1     Running       0      10h
    pod/prometheus-adapter-765465b56-s2qg2             1/1     Running       0      10h
    pod/prometheus-k8s-0                               6/6     Terminating   1      9m47s
    pod/prometheus-k8s-1                               6/6     Terminating   1      9m47s
    pod/prometheus-operator-cbfd89f9-ldnwc             1/1     Running       0      43m
    pod/telemeter-client-7b5ddb4489-2xfpz              3/3     Running       0      10h
    
    NAME                                                      STATUS  VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-0   Bound    pvc-2eb79797-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   19h
    persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-1   Bound    pvc-2ebeee54-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   19h
    persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-2   Bound    pvc-2ec6a9cf-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   19h
    persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-0        Bound    pvc-3162a80c-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   19h
    persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-1        Bound    pvc-316e99e2-1fed-11ea-93e1-0a88476a6a64   40Gi       RWO            ocs-external-storagecluster-ceph-rbd   19h
  5. Delete relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes.

    $ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m

6.3. Removing OpenShift Container Platform registry from OpenShift Container Storage

Use this section to clean up the OpenShift Container Platform registry from OpenShift Container Storage. If you want to configure alternative storage, see image registry.

The PVCs that are created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace.

Prerequisites

  • The image registry should have been configured to use an OpenShift Container Storage PVC.

Procedure

  1. Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section.

    $ oc edit configs.imageregistry.operator.openshift.io

    Before editing

    .
    .
    .
    storage:
      pvc:
        claim: registry-cephfs-rwx-pvc
    .
    .
    .

    After editing

    .
    .
    .
    storage:
      emptyDir: {}
    .
    .
    .

    In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.
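
    If you prefer a non-interactive change, a JSON merge patch such as the following achieves the same edit (a sketch, assuming the default configuration resource name cluster; verify it against your environment before applying):

    $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":null,"emptyDir":{}}}}'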

  2. Delete the PVC.

    $ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m

6.4. Removing the cluster logging operator from OpenShift Container Storage

Use this section to clean up the cluster logging operator from OpenShift Container Storage.

The PVCs that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace.

Prerequisites

  • The cluster logging instance should have been configured to use OpenShift Container Storage PVCs.

Procedure

  1. Remove the ClusterLogging instance in the namespace.

    $ oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m

    The PVCs in the openshift-logging namespace are now safe to delete.
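
    You can list them first to confirm what will be removed (a minimal check):

    $ oc get pvc -n openshift-logging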

  2. Delete PVCs.

    $ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m