OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Deploying OpenShift Container Storage using Amazon Web Services
How to install and set up OpenShift Container Storage on OpenShift Container Platform AWS Clusters
Abstract
Preface
Red Hat OpenShift Container Storage 4.6 supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) AWS clusters in connected or disconnected environments along with out-of-the-box support for proxy environments.
Only internal OpenShift Container Storage clusters are supported on AWS. See Planning your deployment for more information about deployment requirements.
To deploy OpenShift Container Storage in internal mode, follow the appropriate deployment process for your environment:
- Deploy using dynamic storage devices
- Deploy using local storage devices [Technology Preview]
Chapter 1. Deploy using dynamic storage devices
Deploying OpenShift Container Storage on OpenShift Container Platform using dynamic storage devices provided by AWS EBS (type: gp2) provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications.
Only internal OpenShift Container Storage clusters are supported on AWS. See Planning your deployment for more information about deployment requirements.
For Red Hat Enterprise Linux based hosts for worker nodes in a user provisioned infrastructure (UPI), enable container access to the underlying file system. Follow the instructions in Enabling file system access for containers on Red Hat Enterprise Linux based nodes.
Note: Skip this step for Red Hat Enterprise Linux CoreOS (RHCOS).
- Install the Red Hat OpenShift Container Storage Operator.
- Create the OpenShift Container Storage Cluster Service.
1.1. Enabling file system access for containers on Red Hat Enterprise Linux based nodes
Deploying OpenShift Container Storage on an OpenShift Container Platform with worker nodes on a Red Hat Enterprise Linux base in a user provisioned infrastructure (UPI) does not automatically provide container access to the underlying Ceph file system.
This process is not necessary for hosts based on Red Hat Enterprise Linux CoreOS.
Procedure
Perform the following steps on each node in your cluster.
- Log in to the Red Hat Enterprise Linux based node and open a terminal.
Verify that the node has access to the rhel-7-server-extras-rpms repository.
subscription-manager repos --list-enabled | grep rhel-7-server
If you do not see both rhel-7-server-rpms and rhel-7-server-extras-rpms in the output, or if there is no output, run the following commands to enable each repository.
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-extras-rpms
Install the required packages.
yum install -y policycoreutils container-selinux
Persistently enable container use of the Ceph file system in SELinux.
setsebool -P container_use_cephfs on
1.2. Installing Red Hat OpenShift Container Storage Operator
You can install Red Hat OpenShift Container Storage Operator using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment.
Prerequisites
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
- You must have at least three worker nodes in the RHOCP cluster.
When you need to override the cluster-wide default node selector for OpenShift Container Storage, you can use the following command to specify a blank node selector for the openshift-storage namespace:
oc annotate namespace openshift-storage openshift.io/node-selector=
Taint a node as infra to ensure only Red Hat OpenShift Container Storage resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Container Storage chapter in the Managing and Allocating Storage Resources guide.
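The blank node selector from the prerequisite above can also be set declaratively if you create the namespace yourself before installing the operator. A minimal sketch of such a manifest; the cluster-monitoring label is an assumption that mirrors the monitoring checkbox selected in the procedure:

```yaml
# Hypothetical Namespace manifest, equivalent to running
# `oc annotate namespace openshift-storage openshift.io/node-selector=`
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  annotations:
    openshift.io/node-selector: ""          # blank node selector for OCS pods
  labels:
    openshift.io/cluster-monitoring: "true" # assumption: enables recommended cluster monitoring
```

Apply it with `oc apply -f <file>` before starting the operator installation.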
Procedure
- Click Operators → OperatorHub in the left pane of the OpenShift Web Console.
- Use Filter by keyword text box or the filter list to search for OpenShift Container Storage from the list of operators.
- Click OpenShift Container Storage.
- On the OpenShift Container Storage operator page, click Install.
On the Install Operator page, ensure the following options are selected by default:
- Update Channel as stable-4.6
- Installation Mode as A specific namespace on the cluster
- Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
- Select the Enable operator recommended cluster monitoring on this namespace checkbox as this is required for cluster monitoring.
Select Approval Strategy as Automatic or Manual. Approval Strategy is set to Automatic by default.
If you set the Approval Strategy to Automatic:
Note: When you select the Approval Strategy as Automatic, approval is not required either during fresh installation or when updating to the latest version of OpenShift Container Storage.
- Click Install
- Wait for the install to initiate. This may take up to 20 minutes.
- Click Operators → Installed Operators
- Ensure the Project is openshift-storage. By default, the Project is openshift-storage.
- Wait for the Status of OpenShift Container Storage to change to Succeeded.
If you set the Approval Strategy to Manual:
Note: When you select the Approval Strategy as Manual, approval is required during fresh installation or when updating to the latest version of OpenShift Container Storage.
- Click Install
On the Manual approval required page, you can either click Approve or View Installed Operators in namespace openshift-storage to install the operator.
Important: Before you click either of the options, wait for a few minutes on the Manual approval required page until the install plan is loaded in the window.
Important: If you choose to click Approve, you must review the install plan before you proceed.
If you click Approve:
- Wait for a few minutes while the OpenShift Container Storage Operator is getting installed.
- On the Installed operator - ready for use page, click View Operator.
- Ensure the Project is openshift-storage. By default, the Project is openshift-storage.
- Click Operators → Installed Operators
- Wait for the Status of OpenShift Container Storage to change to Succeeded.
If you click View Installed Operators in namespace openshift-storage:
- On the Installed Operators page, click ocs-operator.
- On the Subscription Details page, click the Install Plan link.
- On the InstallPlan Details page, click Preview Install Plan.
- Review the install plan and click Approve.
- Wait for the Status of the Components to change from Unknown to either Created or Present.
- Click Operators → Installed Operators
- Ensure the Project is openshift-storage. By default, the Project is openshift-storage.
- Wait for the Status of OpenShift Container Storage to change to Succeeded.
Verification steps
- Verify that OpenShift Container Storage Operator shows a green tick indicating successful installation.
- Click the View Installed Operators in namespace openshift-storage link to verify that OpenShift Container Storage Operator shows the Status as Succeeded on the Installed Operators dashboard.
1.3. Creating an OpenShift Container Storage Cluster Service in internal mode
Use this procedure to create an OpenShift Container Storage Cluster Service after you install the OpenShift Container Storage operator.
Prerequisites
- The OpenShift Container Storage operator must be installed from the Operator Hub. For more information, see Installing OpenShift Container Storage Operator using the Operator Hub.
Procedure
Click Operators → Installed Operators to view all the installed operators.
Ensure that the Project selected is openshift-storage.
Figure 1.1. OpenShift Container Storage Operator page
Click OpenShift Container Storage.
Figure 1.2. Details tab of OpenShift Container Storage
Click Create Instance link of Storage Cluster.
Figure 1.3. Create Storage Cluster page
On the Create Storage Cluster page, ensure that the following options are selected:
- In the Select Mode section, Internal mode is selected by default.
- Storage Class is set by default to gp2 for AWS.
- Select the OpenShift Container Storage Service Capacity from the drop-down list.
Note: Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (raw storage is three times the usable capacity). For example, a usable capacity of 2 TiB consumes 6 TiB of raw storage across the three replicas.
- (Optional) In the Encryption section, set the toggle to Enabled to enable data encryption on the cluster.
- In the Nodes section, select at least three worker nodes from the available list for the use of the OpenShift Container Storage service.
For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones.
Note: To find specific worker nodes in the cluster, you can filter nodes on the basis of Name or Label.
- Name allows you to search by name of the node
- Label allows you to search by selecting the predefined label
If the nodes selected do not match the OpenShift Container Storage cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster will be deployed. For minimum starting node requirements, see Resource requirements section in Planning guide.
Click Create.
The Create button is enabled only after you select three nodes. A new storage cluster with three storage devices will be created, one per selected node. The default configuration uses a replication factor of 3.
Verification steps
Verify that the final Status of the installed storage cluster shows as Phase: Ready with a green tick mark.
- Click the Operators → Installed Operators → Storage Cluster link to view the storage cluster installation status.
- Alternatively, when you are on the Operator Details tab, you can click on the Storage Cluster tab to view the status.
- To verify that all components for OpenShift Container Storage are successfully installed, see Verifying your OpenShift Container Storage installation.
Chapter 2. Deploying using local storage devices
Deploying OpenShift Container Storage on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications.
Use this section to deploy OpenShift Container Storage on Amazon EC2 storage optimized I3 where OpenShift Container Platform is already installed.
Installing OpenShift Container Storage on Amazon EC2 storage optimized I3 instances using the Local Storage Operator is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. Red Hat OpenShift Container Storage deployment assumes a new cluster, without any application or other workload running on the 3 worker nodes. Applications should run on additional worker nodes.
2.1. Overview of deploying with internal local storage
To deploy Red Hat OpenShift Container Storage using local storage, follow these steps:
- Understand the requirements for installing OpenShift Container Storage using local storage devices.
For Red Hat Enterprise Linux based hosts for worker nodes, enable file system access for containers on Red Hat Enterprise Linux based nodes.
Note: Skip this step for Red Hat Enterprise Linux CoreOS (RHCOS).
- Install the Red Hat OpenShift Container Storage Operator.
- Install Local Storage Operator.
- Find the available storage devices.
- Create OpenShift Container Storage cluster service on Amazon EC2 storage optimized - i3en.2xlarge instance type.
2.2. Requirements for installing OpenShift Container Storage using local storage devices
- You must upgrade to the latest version of OpenShift Container Platform 4.6 before deploying OpenShift Container Storage 4.6. For information, see the Updating OpenShift Container Platform clusters guide.
- The Local Storage Operator version must match the Red Hat OpenShift Container Platform version in order to have the Local Storage Operator fully supported with Red Hat OpenShift Container Storage. The Local Storage Operator does not get upgraded when Red Hat OpenShift Container Platform is upgraded.
You must have at least three OpenShift Container Platform worker nodes in the cluster with locally attached storage devices on each of them.
- Each of the three selected nodes must have at least one raw block device available to be used by OpenShift Container Storage.
- The devices you use must be empty; the disks must not include physical volumes (PVs), volume groups (VGs), or logical volumes (LVs) remaining on the disk.
- For minimum starting node requirements, see Resource requirements section in Planning guide.
You must have a minimum of three labeled nodes.
- Ensure that the Nodes are spread across different Locations/Availability Zones for a multiple availability zones platform.
Each node that has local storage devices to be used by OpenShift Container Storage must have a specific label to deploy OpenShift Container Storage pods. To label the nodes, use the following command:
oc label nodes <NodeNames> cluster.ocs.openshift.io/openshift-storage=''
2.3. Enabling file system access for containers on Red Hat Enterprise Linux based nodes
Deploying OpenShift Container Storage on an OpenShift Container Platform with worker nodes on a Red Hat Enterprise Linux base in a user provisioned infrastructure (UPI) does not automatically provide container access to the underlying Ceph file system.
This process is not necessary for hosts based on Red Hat Enterprise Linux CoreOS.
Procedure
Perform the following steps on each node in your cluster.
- Log in to the Red Hat Enterprise Linux based node and open a terminal.
Verify that the node has access to the rhel-7-server-extras-rpms repository.
subscription-manager repos --list-enabled | grep rhel-7-server
If you do not see both rhel-7-server-rpms and rhel-7-server-extras-rpms in the output, or if there is no output, run the following commands to enable each repository.
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-extras-rpms
Install the required packages.
yum install -y policycoreutils container-selinux
Persistently enable container use of the Ceph file system in SELinux.
setsebool -P container_use_cephfs on
2.4. Installing Red Hat OpenShift Container Storage Operator
You can install Red Hat OpenShift Container Storage Operator using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment.
Prerequisites
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
- You must have at least three worker nodes in the RHOCP cluster.
When you need to override the cluster-wide default node selector for OpenShift Container Storage, you can use the following command to specify a blank node selector for the openshift-storage namespace:
oc annotate namespace openshift-storage openshift.io/node-selector=
Taint a node as infra to ensure only Red Hat OpenShift Container Storage resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Container Storage chapter in the Managing and Allocating Storage Resources guide.
Procedure
- Click Operators → OperatorHub in the left pane of the OpenShift Web Console.
- Use Filter by keyword text box or the filter list to search for OpenShift Container Storage from the list of operators.
- Click OpenShift Container Storage.
- On the OpenShift Container Storage operator page, click Install.
On the Install Operator page, ensure the following options are selected by default:
- Update Channel as stable-4.6
- Installation Mode as A specific namespace on the cluster
- Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
- Select the Enable operator recommended cluster monitoring on this namespace checkbox as this is required for cluster monitoring.
Select Approval Strategy as Automatic or Manual. Approval Strategy is set to Automatic by default.
If you set the Approval Strategy to Automatic:
Note: When you select the Approval Strategy as Automatic, approval is not required either during fresh installation or when updating to the latest version of OpenShift Container Storage.
- Click Install
- Wait for the install to initiate. This may take up to 20 minutes.
- Click Operators → Installed Operators
- Ensure the Project is openshift-storage. By default, the Project is openshift-storage.
- Wait for the Status of OpenShift Container Storage to change to Succeeded.
If you set the Approval Strategy to Manual:
Note: When you select the Approval Strategy as Manual, approval is required during fresh installation or when updating to the latest version of OpenShift Container Storage.
- Click Install
On the Manual approval required page, you can either click Approve or View Installed Operators in namespace openshift-storage to install the operator.
Important: Before you click either of the options, wait for a few minutes on the Manual approval required page until the install plan is loaded in the window.
Important: If you choose to click Approve, you must review the install plan before you proceed.
If you click Approve:
- Wait for a few minutes while the OpenShift Container Storage Operator is getting installed.
- On the Installed operator - ready for use page, click View Operator.
- Ensure the Project is openshift-storage. By default, the Project is openshift-storage.
- Click Operators → Installed Operators
- Wait for the Status of OpenShift Container Storage to change to Succeeded.
If you click View Installed Operators in namespace openshift-storage:
- On the Installed Operators page, click ocs-operator.
- On the Subscription Details page, click the Install Plan link.
- On the InstallPlan Details page, click Preview Install Plan.
- Review the install plan and click Approve.
- Wait for the Status of the Components to change from Unknown to either Created or Present.
- Click Operators → Installed Operators
- Ensure the Project is openshift-storage. By default, the Project is openshift-storage.
- Wait for the Status of OpenShift Container Storage to change to Succeeded.
Verification steps
- Verify that OpenShift Container Storage Operator shows a green tick indicating successful installation.
- Click the View Installed Operators in namespace openshift-storage link to verify that OpenShift Container Storage Operator shows the Status as Succeeded on the Installed Operators dashboard.
2.5. Installing Local Storage Operator
Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Container Storage clusters on local storage devices.
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Type local storage in the Filter by keyword… box to search for the Local Storage operator from the list of operators and click on it.
- Click Install.
Figure 2.1. Install Operator page
Set the following options on the Install Operator page:
- Update Channel as 4.6
- Installation Mode as A specific namespace on the cluster
- Installed Namespace as Operator recommended namespace openshift-local-storage.
- Approval Strategy as Automatic
- Click Install.
- Verify that the Local Storage Operator shows the Status as Succeeded.
2.6. Finding available storage devices
Use this procedure to identify the device names for each of the three or more nodes that you have labeled with the OpenShift Container Storage label cluster.ocs.openshift.io/openshift-storage='' before creating PVs.
Procedure
List and verify the name of the nodes with the OpenShift Container Storage label.
oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
Example output:
NAME                                         STATUS   ROLES    AGE     VERSION
ip-10-0-135-71.us-east-2.compute.internal    Ready    worker   6h45m   v1.16.2
ip-10-0-145-125.us-east-2.compute.internal   Ready    worker   6h45m   v1.16.2
ip-10-0-160-91.us-east-2.compute.internal    Ready    worker   6h45m   v1.16.2
Log in to each node that is used for OpenShift Container Storage resources and find the unique by-id device name for each available raw block device.
oc debug node/<node name>
In this example, for the selected node, the local devices available are nvme0n1 and nvme1n1.
Identify the unique ID for each of the devices selected in Step 2.
ls -l /dev/disk/by-id/ | grep Storage
lrwxrwxrwx. 1 root root 13 Mar 17 16:24 nvme-Amazon_EC2_NVMe_Instance_Storage_AWS10382E5D7441494EC -> ../../nvme0n1
lrwxrwxrwx. 1 root root 13 Mar 17 16:24 nvme-Amazon_EC2_NVMe_Instance_Storage_AWS60382E5D7441494EC -> ../../nvme1n1
In the example above, the IDs for the two local devices are
- nvme0n1: nvme-Amazon_EC2_NVMe_Instance_Storage_AWS10382E5D7441494EC
- nvme1n1: nvme-Amazon_EC2_NVMe_Instance_Storage_AWS60382E5D7441494EC
- Repeat the above step to identify the device ID for all the other nodes that have the storage devices to be used by OpenShift Container Storage. See this Knowledge Base article for more details.
2.7. Creating OpenShift Container Storage cluster on Amazon EC2 storage optimized - i3en.2xlarge instance type
Use this procedure to create OpenShift Container Storage cluster on Amazon EC2 (storage optimized - i3en.2xlarge instance type) infrastructure, which will:
- Create PVs by using the LocalVolume CR
- Create a new StorageClass
The Amazon EC2 storage optimized - i3en.2xlarge instance type includes two non-volatile memory express (NVMe) disks. The example in this procedure illustrates the use of both the disks that the instance type comes with.
When you are using the ephemeral storage of Amazon EC2 I3:
- Use three availability zones to decrease the risk of losing all the data.
- Limit the number of users with ec2:StopInstances permissions to avoid instance shutdown by mistake.
It is not recommended to use ephemeral storage of Amazon EC2 I3 for OpenShift Container Storage persistent data, because stopping all the three nodes can cause data loss.
It is recommended to use the ephemeral storage of Amazon EC2 I3 only in the following scenarios:
- Cloud burst, where data is copied from another location for a specific, time-limited data crunching task
- Development or testing environment
Installing OpenShift Container Storage on Amazon EC2 storage optimized - i3en.2xlarge instance using local storage operator is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
Prerequisites
- Ensure that all the requirements in the Requirements for installing OpenShift Container Storage using local storage devices section are met.
Verify your OpenShift Container Platform worker nodes are labeled for OpenShift Container Storage, which is used as the nodeSelector.
oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}'
Example output:
ip-10-0-135-71.us-east-2.compute.internal
ip-10-0-145-125.us-east-2.compute.internal
ip-10-0-160-91.us-east-2.compute.internal
Procedure
Create local persistent volumes (PVs) on the storage nodes using the LocalVolume custom resource (CR).
Example of a LocalVolume CR local-storage-block.yaml using the OpenShift Container Storage label as node selector and by-id device identifiers:
Each Amazon EC2 I3 instance has two disks and this example uses both disks on each node.
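As a hedged sketch, a local-storage-block.yaml of this shape can look like the following; the CR name local-block and the storage class name localblock match the outputs later in this procedure, while the two devicePaths reuse the by-id identifiers found in the previous section (replace them with the IDs from your own nodes):

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: In
            values:
              - ""
  storageClassDevices:
    - storageClassName: localblock
      volumeMode: Block
      devicePaths:
        # by-id paths from the "Finding available storage devices" section;
        # list every device on every labeled node
        - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS10382E5D7441494EC
        - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS60382E5D7441494EC
```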
Create the LocalVolume CR.
oc create -f local-storage-block.yaml
Example output:
localvolume.local.storage.openshift.io/local-block created
Check if the pods are created.
oc -n openshift-local-storage get pods
Check if the PVs are created.
You must see a new PV for each of the local storage devices on the three worker nodes. Refer to the example in the Finding available storage devices section that shows two available storage devices per worker node with a size 2.3 TiB for each node.
oc get pv
Check for the new StorageClass that is now present when the LocalVolume CR is created. This StorageClass is used to provide the StorageCluster PVCs in the following steps.
oc get sc | grep localblock
Example output:
NAME         PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  15m
Create the StorageCluster CR that uses the localblock StorageClass to consume the PVs created by the Local Storage Operator.
Example of a StorageCluster CR ocs-cluster-service.yaml using monDataDirHostPath and the localblock StorageClass:
Important: To ensure that the OSDs have a guaranteed size across the nodes, the storage size for storageDeviceSets must be specified as less than or equal to the size of the PVs created on the nodes.
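A minimal sketch of what such an ocs-cluster-service.yaml can look like; the CR name ocs-cluster-service and the localblock storage class follow this procedure, while the device-set name, the count, the /var/lib/rook path, and the 2328Gi capacity are illustrative assumptions:

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-cluster-service
  namespace: openshift-storage
spec:
  manageNodes: false
  monDataDirHostPath: /var/lib/rook   # assumption: mon data stored on the node's local disk
  storageDeviceSets:
    - name: ocs-deviceset             # illustrative name
      count: 2                        # one OSD per local disk; two disks per node in this example
      replica: 3
      portable: false                 # local PVs cannot move between nodes
      dataPVCTemplate:
        spec:
          storageClassName: localblock
          accessModes:
            - ReadWriteOnce
          volumeMode: Block
          resources:
            requests:
              storage: 2328Gi         # illustrative; must not exceed the size of the local PVs
```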
Create the StorageCluster CR.
oc create -f ocs-cluster-service.yaml
Example output:
storagecluster.ocs.openshift.io/ocs-cluster-service created
Verification steps
See Verifying your OpenShift Container Storage installation.
Chapter 3. Verifying OpenShift Container Storage deployment for internal mode
Use this section to verify that OpenShift Container Storage is deployed correctly.
3.1. Verifying the state of the pods
To determine if OpenShift Container Storage is deployed successfully, you can verify that the pods are in the Running state.
Procedure
- Click Workloads → Pods from the left pane of the OpenShift Web Console.
Select openshift-storage from the Project drop down list.
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 3.1, “Pods corresponding to OpenShift Container storage cluster”.
Verify that the following pods are in running and completed state by clicking on the Running and the Completed tabs:
Table 3.1. Pods corresponding to OpenShift Container Storage cluster

OpenShift Container Storage Operator
- ocs-operator-* (1 pod on any worker node)
- ocs-metrics-exporter-*

Rook-ceph Operator
- rook-ceph-operator-* (1 pod on any worker node)

Multicloud Object Gateway
- noobaa-operator-* (1 pod on any worker node)
- noobaa-core-* (1 pod on any storage node)
- noobaa-db-* (1 pod on any storage node)
- noobaa-endpoint-* (1 pod on any storage node)

MON
- rook-ceph-mon-* (3 pods distributed across storage nodes)

MGR
- rook-ceph-mgr-* (1 pod on any storage node)

MDS
- rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes)

CSI
- cephfs
  - csi-cephfsplugin-* (1 pod on each worker node)
  - csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes)
- rbd
  - csi-rbdplugin-* (1 pod on each worker node)
  - csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes)

rook-ceph-crashcollector
- rook-ceph-crashcollector-* (1 pod on each storage node)

OSD
- rook-ceph-osd-* (1 pod for each device)
- rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device)
3.2. Verifying the OpenShift Container Storage cluster is healthy
- Click Home → Overview from the left pane of the OpenShift Web Console and click the Persistent Storage tab.
In the Status card, verify that OCS Cluster and Data Resiliency have a green tick mark as shown in the following image:
Figure 3.1. Health status card in Persistent Storage Overview Dashboard
In the Details card, verify that the cluster information is displayed as follows:
- Service Name
- OpenShift Container Storage
- Cluster Name
- ocs-storagecluster
- Provider
- AWS
- Mode
- Internal
- Version
- ocs-operator-4.6.0
For more information on the health of the OpenShift Container Storage cluster using the persistent storage dashboard, see Monitoring OpenShift Container Storage.
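The cluster health can also be inspected from the command line. This is a sketch, assuming the default StorageCluster name ocs-storagecluster and the status fields exposed by the OCS and Rook operators:

```shell
# Exit quietly when no oc CLI is available.
if ! command -v oc >/dev/null 2>&1; then
  echo "oc CLI not found; run this from a cluster-connected shell" >&2
  exit 0
fi

# The StorageCluster phase should report Ready once deployment completes.
oc get storagecluster ocs-storagecluster -n openshift-storage \
  -o jsonpath='{.status.phase}{"\n"}'

# The underlying Ceph cluster health, reported by Rook
# (HEALTH_OK when fully healthy).
oc get cephcluster -n openshift-storage \
  -o jsonpath='{.items[0].status.ceph.health}{"\n"}'
```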
3.3. Verifying the Multicloud Object Gateway is healthy
- Click Home → Overview from the left pane of the OpenShift Web Console and click the Object Service tab.
In the Status card, verify that both Object Service and Data Resiliency are in the Ready state (green tick).
Figure 3.2. Health status card in Object Service Overview Dashboard
In the Details card, verify that the MCG information is displayed as follows:
- Service Name
- OpenShift Container Storage
- System Name
- Multicloud Object Gateway
- Provider
- AWS
- Version
- ocs-operator-4.6.0
For more information on the health of the OpenShift Container Storage cluster using the object service dashboard, see Monitoring OpenShift Container Storage.
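A command-line alternative is to read the phase from the NooBaa custom resource (the noobaas.noobaa.io CRD installed with OpenShift Container Storage). A sketch:

```shell
# Exit quietly when no oc CLI is available.
if ! command -v oc >/dev/null 2>&1; then
  echo "oc CLI not found; run this from a cluster-connected shell" >&2
  exit 0
fi

# The NooBaa resource reports the Multicloud Object Gateway phase;
# Ready indicates a healthy object service.
oc get noobaa -n openshift-storage \
  -o jsonpath='{.items[0].status.phase}{"\n"}'
```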
3.4. Verifying that the OpenShift Container Storage specific storage classes exist
To verify that the storage classes exist in the cluster:
- Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
Verify that the following storage classes are created when the OpenShift Container Storage cluster is created:
- ocs-storagecluster-ceph-rbd
- ocs-storagecluster-cephfs
- openshift-storage.noobaa.io
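The presence of these storage classes can be confirmed with a one-liner; a sketch, assuming the default class names listed above:

```shell
# Exit quietly when no oc CLI is available.
if ! command -v oc >/dev/null 2>&1; then
  echo "oc CLI not found; run this from a cluster-connected shell" >&2
  exit 0
fi

# All three OpenShift Container Storage classes should appear in the output.
oc get storageclass | grep -e ocs-storagecluster-ceph-rbd \
                           -e ocs-storagecluster-cephfs \
                           -e openshift-storage.noobaa.io
```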
Chapter 4. Uninstalling OpenShift Container Storage
4.1. Uninstalling OpenShift Container Storage in Internal mode
Use the steps in this section to uninstall OpenShift Container Storage.
Uninstall Annotations
Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster:
- uninstall.ocs.openshift.io/cleanup-policy: delete
- uninstall.ocs.openshift.io/mode: graceful
The following table provides information on the different values that can be used with these annotations:
| Annotation | Value | Default | Behavior |
|---|---|---|---|
| cleanup-policy | delete | Yes | Rook cleans up the physical drives and the DataDirHostPath |
| cleanup-policy | retain | No | Rook does not clean up the physical drives and the DataDirHostPath |
| mode | graceful | Yes | Rook and NooBaa pause the uninstall process until the PVCs and the OBCs are removed by the administrator/user |
| mode | forced | No | Rook and NooBaa proceed with the uninstall even if PVCs/OBCs provisioned using Rook and NooBaa exist |
You can change the cleanup policy or the uninstall mode by editing the value of the annotation by using the following commands:
$ oc annotate storagecluster -n openshift-storage ocs-storagecluster uninstall.ocs.openshift.io/cleanup-policy="retain" --overwrite
storagecluster.ocs.openshift.io/ocs-storagecluster annotated
$ oc annotate storagecluster -n openshift-storage ocs-storagecluster uninstall.ocs.openshift.io/mode="forced" --overwrite
storagecluster.ocs.openshift.io/ocs-storagecluster annotated
Prerequisites
- Ensure that the OpenShift Container Storage cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Container Storage.
- Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Container Storage.
- If any custom resources (such as custom storage classes, cephblockpools) were created by the admin, they must be deleted by the admin after removing the resources which consumed them.
Procedure
Delete the volume snapshots that are using OpenShift Container Storage.
List the volume snapshots from all the namespaces.
$ oc get volumesnapshot --all-namespaces
From the output of the previous command, identify and delete the volume snapshots that are using OpenShift Container Storage.
$ oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>
Delete PVCs and OBCs that are using OpenShift Container Storage.
In the default uninstall mode (graceful), the uninstaller waits until all the PVCs and OBCs that use OpenShift Container Storage are deleted.
If you wish to delete the Storage Cluster without deleting the PVCs beforehand, you can set the uninstall mode annotation to "forced" and skip this step. Doing so results in orphaned PVCs and OBCs in the system.
Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Container Storage.
See Section 4.2, “Removing monitoring stack from OpenShift Container Storage”
Delete OpenShift Container Platform Registry PVCs using OpenShift Container Storage.
See Section 4.3, “Removing OpenShift Container Platform registry from OpenShift Container Storage”
Delete OpenShift Container Platform logging PVCs using OpenShift Container Storage.
See Section 4.4, “Removing the cluster logging operator from OpenShift Container Storage”
Delete other PVCs and OBCs provisioned using OpenShift Container Storage.
Given below is a sample script to identify the PVCs and OBCs provisioned using OpenShift Container Storage. The script ignores the PVCs that are used internally by OpenShift Container Storage.
Note: Omit RGW_PROVISIONER for cloud platforms.
Delete the OBCs.
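A minimal sketch of such a script is shown below. The provisioner names are the standard OCS 4.x values and the internal NooBaa PVC names are the defaults; verify both against your cluster before relying on the output:

```shell
#!/bin/bash
# Sketch: list PVCs and OBCs provisioned through OpenShift Container Storage
# storage classes, skipping the PVCs OCS uses internally (the NooBaa DB and
# default backing store).

RBD_PROVISIONER="openshift-storage.rbd.csi.ceph.com"
CEPHFS_PROVISIONER="openshift-storage.cephfs.csi.ceph.com"
NOOBAA_PROVISIONER="openshift-storage.noobaa.io/obc"
RGW_PROVISIONER="openshift-storage.ceph.rook.io/bucket"  # omit for cloud platforms

NOOBAA_DB_PVC="noobaa-db"
NOOBAA_BACKINGSTORE_PVC="noobaa-default-backing-store-noobaa-pvc"

# Exit quietly when no oc CLI is available.
if ! command -v oc >/dev/null 2>&1; then
  echo "oc CLI not found; run this from a cluster-connected shell" >&2
  exit 0
fi

# Storage classes backed by an OCS provisioner.
OCS_STORAGECLASSES=$(oc get storageclasses --no-headers 2>/dev/null \
  | grep -e "$RBD_PROVISIONER" -e "$CEPHFS_PROVISIONER" \
         -e "$NOOBAA_PROVISIONER" -e "$RGW_PROVISIONER" \
  | awk '{print $1}')

for SC in $OCS_STORAGECLASSES; do
  echo "==== PVCs and OBCs using storage class $SC ===="
  oc get pvc --all-namespaces --no-headers 2>/dev/null | grep "$SC" \
    | grep -v -e "$NOOBAA_DB_PVC" -e "$NOOBAA_BACKINGSTORE_PVC"
  oc get obc --all-namespaces --no-headers 2>/dev/null | grep "$SC"
done
```

Anything the script prints is user-created and must be deleted before a graceful uninstall can complete.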
$ oc delete obc <obc name> -n <project name>
Delete the PVCs.
$ oc delete pvc <pvc name> -n <project-name>
Note: Ensure that you have removed any custom backing stores, bucket classes, etc., created in the cluster.
Delete the Storage Cluster object and wait for the removal of the associated resources.
$ oc delete -n openshift-storage storagecluster --all --wait=true
Check for cleanup pods if the uninstall.ocs.openshift.io/cleanup-policy annotation was set to delete (the default) and ensure that their status is Completed.
$ oc get pods -n openshift-storage | grep -i cleanup
NAME                       READY   STATUS      RESTARTS   AGE
cluster-cleanup-job-<xx>   0/1     Completed   0          8m35s
cluster-cleanup-job-<yy>   0/1     Completed   0          8m35s
cluster-cleanup-job-<zz>   0/1     Completed   0          8m35s
Confirm that the directory /var/lib/rook is now empty. This directory is empty only if the uninstall.ocs.openshift.io/cleanup-policy annotation was set to delete (the default).
$ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host ls -l /var/lib/rook; done
If encryption was enabled at the time of install, remove the dm-crypt managed device-mapper mapping from the OSD devices on all the OpenShift Container Storage nodes.
Create a debug pod and chroot to the host on the storage node.
$ oc debug node/<node name>
$ chroot /host
Get the device names and make a note of the OpenShift Container Storage devices.
$ dmsetup ls
ocs-deviceset-0-data-0-57snx-block-dmcrypt	(253:1)
Remove the mapped device.
$ cryptsetup luksClose --debug --verbose ocs-deviceset-0-data-0-57snx-block-dmcrypt
If the above command gets stuck due to insufficient privileges, run the following commands:
- Press CTRL+Z to exit the above command.
- Find the PID of the cryptsetup process that was stuck.
  $ ps
  Example output:
  PID TTY          TIME CMD
  778825 ?        00:00:00 cryptsetup
  Take a note of the PID number to kill. In this example, the PID is 778825.
- Terminate the process using the kill command.
  $ kill -9 <PID>
- Verify that the device name is removed.
  $ dmsetup ls
Delete the namespace and wait until the deletion is complete. You will need to switch to another project if openshift-storage is the active project.
For example:
$ oc project default
$ oc delete project openshift-storage --wait=true --timeout=5m
The project is deleted if the following command returns a NotFound error.
$ oc get project openshift-storage
Note: While uninstalling OpenShift Container Storage, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated.
- Delete the local storage operator configurations if you have deployed OpenShift Container Storage using local storage devices. See Removing local storage operator configurations.
Unlabel the storage nodes.
$ oc label nodes --all cluster.ocs.openshift.io/openshift-storage-
$ oc label nodes --all topology.rook.io/rack-
Remove the OpenShift Container Storage taint if the nodes were tainted.
$ oc adm taint nodes --all node.ocs.openshift.io/storage-
Confirm that all PVs provisioned using OpenShift Container Storage are deleted. If any PV is left in the Released state, delete it.
$ oc get pv
$ oc delete pv <pv name>
Delete the Multicloud Object Gateway storage class.
$ oc delete storageclass openshift-storage.noobaa.io --wait=true --timeout=5m
Remove the CustomResourceDefinitions.
$ oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io --wait=true --timeout=5m
To ensure that OpenShift Container Storage is uninstalled completely, on the OpenShift Container Platform Web Console:
- Click Home → Overview to access the dashboard.
- Verify that the Persistent Storage and Object Service tabs no longer appear next to the Cluster tab.
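The procedure above can also be double-checked from the command line; a sketch, assuming the CRD group names listed in the earlier delete command:

```shell
# Exit quietly when no oc CLI is available.
if ! command -v oc >/dev/null 2>&1; then
  echo "oc CLI not found; run this from a cluster-connected shell" >&2
  exit 0
fi

# The project lookup should return a NotFound error after uninstall.
oc get project openshift-storage 2>&1 | grep -q NotFound \
  && echo "openshift-storage namespace removed"

# No OCS, NooBaa, or Rook-Ceph CRDs should remain.
oc get crd 2>/dev/null | grep -e ocs.openshift.io -e noobaa.io -e ceph.rook.io \
  || echo "no OpenShift Container Storage CRDs remain"
```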
4.1.1. Removing local storage operator configurations
Use the instructions in this section only if you have deployed OpenShift Container Storage using local storage devices.
For OpenShift Container Storage deployments only using localvolume resources, go directly to step 8.
Procedure
- Identify the LocalVolumeSet and the corresponding StorageClassName being used by OpenShift Container Storage.
- Set the variable SC to the StorageClass providing the LocalVolumeSet.
  $ export SC="<StorageClassName>"
- Delete the LocalVolumeSet.
  $ oc delete localvolumesets.local.storage.openshift.io <name-of-volumeset> -n openshift-local-storage
- Delete the local storage PVs for the given StorageClassName.
  $ oc get pv | grep $SC | awk '{print $1}' | xargs oc delete pv
- Delete the StorageClassName.
  $ oc delete sc $SC
- Delete the symlinks created by the LocalVolumeSet.
  $ [[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done
- Delete the LocalVolumeDiscovery.
  $ oc delete localvolumediscovery.local.storage.openshift.io/auto-discover-devices -n openshift-local-storage
- Remove LocalVolume resources (if any).
  Use the following steps to remove the LocalVolume resources that were used to provision PVs in the current or previous OpenShift Container Storage version. Also, ensure that these resources are not being used by other tenants on the cluster.
  For each of the local volumes, do the following:
- Identify the LocalVolume and the corresponding StorageClassName being used by OpenShift Container Storage.
- Set the variable LV to the name of the LocalVolume and the variable SC to the name of the StorageClass.
  For example:
  $ LV=local-block
  $ SC=localblock
- Delete the local volume resource.
  $ oc delete localvolume -n openshift-local-storage --wait=true $LV
- Delete the remaining PVs and StorageClasses if they exist.
  $ oc delete pv -l storage.openshift.com/local-volume-owner-name=${LV} --wait --timeout=5m
  $ oc delete storageclass $SC --wait --timeout=5m
- Clean up the artifacts from the storage nodes for that resource.
  $ [[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done
4.2. Removing monitoring stack from OpenShift Container Storage
Use this section to clean up the monitoring stack from OpenShift Container Storage.
The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.
Prerequisites
- The OpenShift Container Platform monitoring stack is configured to use PVCs backed by OpenShift Container Storage. For information, see configuring monitoring stack.
Procedure
List the pods and PVCs that are currently running in the openshift-monitoring namespace.
Edit the monitoring configmap.
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Remove any config sections that reference the OpenShift Container Storage storage classes as shown in the following example and save it.
Before editing
After editing
In this example, the alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Container Storage PVCs.
Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes.
$ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m
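A quick way to see which monitoring pods and PVCs exist, and which PVCs are backed by OpenShift Container Storage, is sketched below (it assumes the default ocs-storagecluster-* storage class names):

```shell
# Exit quietly when no oc CLI is available.
if ! command -v oc >/dev/null 2>&1; then
  echo "oc CLI not found; run this from a cluster-connected shell" >&2
  exit 0
fi

# All pods and PVCs currently in the monitoring namespace.
oc get pod,pvc -n openshift-monitoring

# PVCs bound to OCS storage classes are the ones that must be deleted.
oc get pvc -n openshift-monitoring --no-headers 2>/dev/null \
  | grep ocs-storagecluster || echo "no OCS-backed monitoring PVCs found"
```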
4.3. Removing OpenShift Container Platform registry from OpenShift Container Storage
Use this section to clean up the OpenShift Container Platform registry from OpenShift Container Storage. If you want to configure an alternative storage solution, see image registry.
The PVCs that are created as a part of configuring OpenShift Container Platform registry are in the openshift-image-registry namespace.
Prerequisites
- The image registry should have been configured to use an OpenShift Container Storage PVC.
Procedure
Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section.
$ oc edit configs.imageregistry.operator.openshift.io
Before editing
After editing
In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.
Delete the PVC.
$ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m
4.4. Removing the cluster logging operator from OpenShift Container Storage
Use this section to clean up the cluster logging operator from OpenShift Container Storage.
The PVCs that are created as a part of configuring cluster logging operator are in the openshift-logging namespace.
Prerequisites
- The cluster logging instance should have been configured to use OpenShift Container Storage PVCs.
Procedure
Remove the ClusterLogging instance in the namespace.
$ oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m
The PVCs in the openshift-logging namespace are now safe to delete.
Delete the PVCs.
$ oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m