Deploying OpenShift Data Foundation using Red Hat Virtualization platform
Instructions on deploying OpenShift Data Foundation on Red Hat Virtualization Platform
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Let us know how we can make it better. To give feedback:
For simple comments on specific passages:
- Make sure you are viewing the documentation in the Multi-page HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document.
- Use your mouse cursor to highlight the part of text that you want to comment on.
- Click the Add Feedback pop-up that appears below the highlighted text.
- Follow the displayed instructions.
For submitting more complex feedback, create a Bugzilla ticket:
- Go to the Bugzilla website.
- In the Component section, choose documentation.
- Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation.
- Click Submit Bug.
Preface
Red Hat OpenShift Data Foundation 4.9 supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) Red Hat Virtualization platform clusters.
Deploying OpenShift Data Foundation on OpenShift Container Platform using shared storage devices provided by Red Hat Virtualization installer-provisioned infrastructure (IPI) enables you to create internal cluster resources.
Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation.
Only internal OpenShift Data Foundation clusters are supported on Red Hat Virtualization platform. See Planning your deployment for more information about deployment requirements.
Based on your requirement, perform one of the following methods of deployment:
- Deploy using dynamic storage devices for the full deployment of OpenShift Data Foundation using dynamic storage devices.
- Deploy using local storage devices for the full deployment of OpenShift Data Foundation using local storage devices.
- Deploy standalone Multicloud Object Gateway component for deploying only the Multicloud Object Gateway component with OpenShift Data Foundation.
Chapter 1. Preparing to deploy OpenShift Data Foundation using Red Hat Virtualization platform
Before you begin the deployment of Red Hat OpenShift Data Foundation using dynamic or local storage, ensure that your resource requirements are met. See Planning your deployment.
Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS):
- Ensure that a policy with a token exists and that the key value backend path in Vault is enabled. See Enabling key value backend path and policy in Vault.
- Ensure that you are using signed certificates on your Vault servers.
Minimum starting node requirements [Technology Preview]
An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide.
Regional-DR requirements [Developer Preview]
Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution:
- A valid Red Hat OpenShift Data Foundation Advanced entitlement
- A valid Red Hat Advanced Cluster Management for Kubernetes subscription
To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.
For detailed requirements, see Regional-DR requirements and RHACM requirements.
- Ensure that the requirements for installing OpenShift Data Foundation using local storage devices are met.
1.1. Enabling key value backend path and policy in Vault
Prerequisites
- Administrator access to Vault.
- Carefully choose a unique path name as the backend path that follows the naming convention, since it cannot be changed later.
Procedure
Enable the Key/Value (KV) backend path in Vault.
For Vault KV secret engine API, version 1:
$ vault secrets enable -path=odf kv
For Vault KV secret engine API, version 2:
$ vault secrets enable -path=odf kv-v2
Create a policy that restricts users to performing write or delete operations on the secret, using the following command.
$ echo '
path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
  capabilities = ["read"]
}' | vault policy write odf -
Create a token matching the above policy.
$ vault token create -policy=odf -format json
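The command prints a JSON document; the value that you later supply as the Token in the OpenShift Data Foundation wizard is the client token inside it. A minimal sketch of extracting it, assuming the jq utility is available on the workstation (jq is not required by Vault itself):
$ vault token create -policy=odf -format json | jq -r '.auth.client_token'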
1.2. Requirements for installing OpenShift Data Foundation using local storage devices
Node requirements
The cluster must consist of at least three OpenShift Container Platform worker nodes, each with locally attached storage devices.
- Each of the three selected nodes must have at least one raw block device available to be used by OpenShift Data Foundation.
- The devices you use must be empty; the disks must not contain any physical volumes (PVs), volume groups (VGs), or logical volumes (LVs). A quick check is sketched below.
For more information, see the Resource requirements section in the Planning guide.
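One way to confirm that a candidate device is empty is to inspect it from a debug pod on the node. This is a minimal sketch, assuming worker-0 and /dev/sdb are placeholder node and device names; lsblk should show no partitions or children, and wipefs should report no filesystem, LVM, or other signatures:
$ oc debug node/worker-0 -- chroot /host lsblk /dev/sdb
$ oc debug node/worker-0 -- chroot /host wipefs /dev/sdb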
Regional-DR requirements [Developer Preview]
Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution:
- A valid Red Hat OpenShift Data Foundation Advanced entitlement
- A valid Red Hat Advanced Cluster Management for Kubernetes subscription
To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.
For detailed requirements, see Regional-DR requirements and RHACM requirements.
Arbiter stretch cluster requirements [Technology Preview]
In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This is a Technology Preview feature that is currently intended for deployment on on-premises OpenShift Container Platform.
For detailed requirements and instructions, see Configuring OpenShift Data Foundation for Metro-DR stretch cluster.
Flexible scaling and Arbiter cannot both be enabled at the same time because they have conflicting scaling logic. With flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster, whereas in an Arbiter cluster you must add at least one node in each of the two data zones.
Minimum starting node requirements [Technology Preview]
An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met.
For more information, see Resource requirements section in the Planning guide.
Chapter 2. Deploy using dynamic storage devices
Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Red Hat Virtualization gives you the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications.
Ensure that you have addressed the requirements in the Preparing to deploy OpenShift Data Foundation chapter before proceeding with the following steps to deploy using dynamic storage devices:
2.1. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions.
- You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster.
- For additional resource requirements, see the Planning your deployment guide.
When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
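If the openshift-storage namespace does not exist yet, create it first and then apply the blank node selector; a minimal sketch:
$ oc create namespace openshift-storage
$ oc annotate namespace openshift-storage openshift.io/node-selector=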
- Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. A command sketch follows this list.
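A hedged sketch of tainting a node for this purpose, assuming worker-0 is a placeholder node name; the taint key shown here is the one commonly used for OpenShift Data Foundation, but confirm the exact key and any required node labels in the Managing and Allocating Storage Resources guide:
$ oc adm taint nodes worker-0 node.ocs.openshift.io/storage=true:NoSchedule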
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.9.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
We recommend using all default settings. Changing them may result in unexpected behavior. Alter the defaults only if you understand the consequences.
Verification steps
- Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
- In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available.
If the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it.
For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin.
2.2. Creating an OpenShift Data Foundation cluster
Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator.
Prerequisites
- The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator
Procedure
In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.
Ensure that the Project selected is openshift-storage.
- Click on the OpenShift Data Foundation operator, and then click Create StorageSystem.
In the Backing storage page, select the following:
- Select the Use an existing StorageClass option.
- Expand Advanced and select Full Deployment for the Deployment type option.
- Click Next.
In the Capacity and nodes page, provide the necessary information:
Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default.
Note: Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (raw storage is three times the usable capacity). For example, a requested capacity of 2 TiB consumes approximately 6 TiB of raw storage across the default three data replicas.
- In the Select Nodes section, select at least three available nodes.
- Click Next.
Optional: In the Security and network page, configure the following based on your requirements:
- To enable encryption, select Enable data encryption for block and file storage.
Choose one or both of the following encryption levels:
- Cluster-wide encryption: Encrypts the entire cluster (block and file).
- StorageClass encryption: Creates encrypted persistent volumes (block only) using an encryption enabled storage class.
Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
- Key Management Service Provider is set to Vault by default.
- Enter Vault Service Name, host Address of Vault server ('https://<hostname or ip>'), Port number and Token.
- Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
  - Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
  - Optional: Enter TLS Server Name and Vault Enterprise Namespace.
  - Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
  - Click Save.
- Click Next.
In the Review and create page, review the configuration details.
To modify any configuration settings, click Back.
- Click Create StorageSystem.
Verification steps
To verify the final Status of the installed storage cluster:
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources.
- Verify that Status of StorageCluster is Ready and has a green tick mark next to it.
- To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying OpenShift Data Foundation deployment.
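As an alternative to the console, the same status can be read from the command line. A minimal sketch, assuming the default resource names created by the wizard; the PHASE column of the StorageCluster should report Ready:
$ oc get storagesystem -n openshift-storage
$ oc get storagecluster -n openshift-storage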
Additional resources
To enable Overprovision Control alerts, refer to Alerts in the Monitoring guide.
Chapter 3. Deploy using local storage devices
Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications.
Use this section to deploy OpenShift Data Foundation on Red Hat Virtualization where OpenShift Container Platform is already installed.
Also, ensure that you have addressed the requirements in the Preparing to deploy OpenShift Data Foundation chapter before proceeding with the next steps.
3.1. Installing Local Storage Operator
Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices.
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators and click on it.
Set the following options on the Install Operator page:
- Update channel as either 4.9 or stable.
- Installation mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-local-storage.
- Update approval as Automatic.
- Click Install.
Verification steps
- Verify that the Local Storage Operator shows a green tick indicating successful installation.
3.2. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions.
- You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster.
- For additional resource requirements, see the Planning your deployment guide.
When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
- Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide.
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.9.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
We recommend using all default settings. Changing them may result in unexpected behavior. Alter the defaults only if you understand the consequences.
Verification steps
- Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
- In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available.
If the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it.
For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin.
3.3. Creating OpenShift Data Foundation cluster on Red Hat Virtualization platform
Use this procedure to create an OpenShift Data Foundation Cluster using local storage devices after you install the OpenShift Data Foundation operator.
Prerequisites
- The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator.
- Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met.
Procedure
In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.
Ensure that the Project selected is openshift-storage.
- Click on the OpenShift Data Foundation operator and then click Create StorageSystem.
In the Backing storage page, perform the following:
- Select the Create a new StorageClass using the local storage devices option.
- Expand Advanced and select Full Deployment for the Deployment type option.
- Click Next.
Note: You are prompted to install the Local Storage Operator if it is not already installed. Click Install and follow the procedure as described in Installing Local Storage Operator.
In the Create local volume set page, provide the following information:
Enter a name for the LocalVolumeSet and the StorageClass.
By default, the local volume set name appears for the storage class name. You can change the name.
Choose one of the following:
- Disks on all nodes to use the available disks that match the selected filters on all nodes.
- Disks on selected nodes to use the available disks that match the selected filters only on selected nodes.
Important: The flexible scaling feature is enabled only when the storage cluster that you create with three or more nodes is spread across fewer than the minimum requirement of three availability zones.
For information about flexible scaling, see the Add capacity using YAML section in the Scaling Storage guide.
If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed.
For minimum starting node requirements, see the Resource requirements section in the Planning guide.
- From the available list of Disk Type, select SSD/NVMe.
- Expand the Advanced section and set the following options:
- Volume Mode: Block is selected by default.
- Device Type: Select one or more device types from the dropdown list.
- Disk Size: Set a minimum size of 100 GB for the device and the maximum available size of the device that needs to be included.
- Maximum Disks Limit: This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes.
Click Next.
A pop-up to confirm the creation of LocalVolumeSet is displayed.
- Click Yes to continue.
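For reference, the wizard creates a LocalVolumeSet resource in the openshift-local-storage namespace from these selections. The following is only an illustrative, hedged sketch of what such a resource can look like; the names, node selector values, and the exact fields generated by the console may differ:
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: localblock                      # illustrative name
  namespace: openshift-local-storage
spec:
  storageClassName: localblock          # storage class created for OpenShift Data Foundation
  volumeMode: Block                     # matches the default Volume Mode
  deviceInclusionSpec:
    deviceTypes:
      - disk                            # corresponds to the selected Device Type
    minSize: 100Gi                      # corresponds to the minimum Disk Size
  nodeSelector:                         # restricts the set to the selected nodes
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - worker-0                # placeholder node names
              - worker-1
              - worker-2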
In the Capacity and nodes page, configure the following:
- Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class.
- Click Next.
Optional: In the Security and network page, configure the following based on your requirement:
- To enable encryption, select Enable data encryption for block and file storage.
Choose one of the following encryption levels:
- Cluster-wide encryption to encrypt the entire cluster (block and file).
- StorageClass encryption to create encrypted persistent volumes (block only) using an encryption enabled storage class.
Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
- Key Management Service Provider is set to Vault by default.
- Enter Vault Service Name, host Address of Vault server ('https://<hostname or ip>'), Port number and Token.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Vault Enterprise Namespace.
- Provide CA Certificate, Client Certificate and Client Private Key by uploading the respective PEM encoded certificate file.
- Click Save.
- Click Next.
In the Review and create page, review the configuration details.
- To modify any configuration settings, click Back to go back to the previous configuration page.
- Click Create StorageSystem.
Verification steps
To verify the final Status of the installed storage cluster:
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources.
- Verify that Status of StorageCluster is Ready and has a green tick mark next to it.
To verify if flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled):
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources.
In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section. If flexibleScaling is true and failureDomain is set to host, the flexible scaling feature is enabled.
spec:
  flexibleScaling: true
[…]
status:
  failureDomain: host
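The same two fields can be read from the command line. A minimal sketch, assuming the StorageCluster is named ocs-storagecluster (the default name used by the wizard):
$ oc get storagecluster ocs-storagecluster -n openshift-storage \
    -o jsonpath='{.spec.flexibleScaling}{"\n"}{.status.failureDomain}{"\n"}'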
- To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment.
Additional resources
- To expand the capacity of the initial cluster, see Scaling Storage.
Chapter 4. Verifying OpenShift Data Foundation deployment
Use this section to verify that OpenShift Data Foundation is deployed correctly.
4.1. Verifying the state of the pods
Procedure
- Click Workloads → Pods from the OpenShift Web Console.
Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
For more information about the expected number of pods for each component and how it varies depending on the number of nodes, see Table 4.1, “Pods corresponding to OpenShift Data Foundation cluster”.
Click the Running and Completed tabs to verify that the following pods are in Running and Completed state:

Table 4.1. Pods corresponding to OpenShift Data Foundation cluster

OpenShift Data Foundation Operator
- ocs-operator-* (1 pod on any worker node)
- ocs-metrics-exporter-* (1 pod on any worker node)
- odf-operator-controller-manager-* (1 pod on any worker node)
- odf-console-* (1 pod on any worker node)

Rook-ceph Operator
- rook-ceph-operator-* (1 pod on any worker node)

Multicloud Object Gateway
- noobaa-operator-* (1 pod on any worker node)
- noobaa-core-* (1 pod on any storage node)
- noobaa-db-pg-* (1 pod on any storage node)
- noobaa-endpoint-* (1 pod on any storage node)

MON
- rook-ceph-mon-* (3 pods distributed across storage nodes)

MGR
- rook-ceph-mgr-* (1 pod on any storage node)

MDS
- rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes)

RGW
- rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node)

CSI
- cephfs
  - csi-cephfsplugin-* (1 pod on each worker node)
  - csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes)
- rbd
  - csi-rbdplugin-* (1 pod on each worker node)
  - csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes)

rook-ceph-crashcollector
- rook-ceph-crashcollector-* (1 pod on each storage node)

OSD
- rook-ceph-osd-* (1 pod for each device)
- rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device)
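The same check can be done from the command line; a minimal sketch that lists the pods in the openshift-storage project and then filters out anything that is not yet Running or Completed:
$ oc get pods -n openshift-storage
$ oc get pods -n openshift-storage --no-headers | grep -v -e Running -e Completed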
4.2. Verifying the OpenShift Data Foundation cluster is healthy
Procedure
- In the OpenShift Web Console, click Storage → OpenShift Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Block and File tab, verify that Storage Cluster has a green tick.
- In the Details card, verify that the cluster information is displayed.
For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation.
4.3. Verifying the Multicloud Object Gateway is healthy
Procedure
- In the OpenShift Web Console, click Storage → OpenShift Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the MCG information is displayed.
For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation.
4.4. Verifying that the OpenShift Data Foundation specific storage classes exist
Procedure
- Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation:
- ocs-storagecluster-ceph-rbd
- ocs-storagecluster-cephfs
- openshift-storage.noobaa.io
- ocs-storagecluster-ceph-rgw
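These storage classes can also be listed from the command line; a minimal sketch:
$ oc get storageclass | grep -e ocs-storagecluster -e openshift-storage.noobaa.io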
Chapter 5. Deploy standalone Multicloud Object Gateway
Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. You can deploy the Multicloud Object Gateway component either using dynamic storage devices or using local storage devices.
5.1. Deploy standalone Multicloud Object Gateway using dynamic storage devices
Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps:
- Installing Red Hat OpenShift Data Foundation Operator
- Creating standalone Multicloud Object Gateway
5.1.1. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions.
- You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster.
- For additional resource requirements, see the Planning your deployment guide.
When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
- Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide.
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.9.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
We recommend using all default settings. Changing them may result in unexpected behavior. Alter the defaults only if you understand the consequences.
Verification steps
- Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
- In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available.
If the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it.
For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin.
5.1.2. Creating standalone Multicloud Object Gateway
Use this section to create only the Multicloud Object Gateway component with OpenShift Data Foundation.
Prerequisites
- Ensure that OpenShift Data Foundation Operator is installed.
- (For deploying using local storage devices only) Ensure that Local Storage Operator is installed.
- Ensure that you have a storage class and that it is set as the default (a quick check is sketched below).
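A minimal sketch of checking which storage class is marked as the default, and of marking one as the default if none is; the storage class name is a placeholder:
$ oc get storageclass
$ oc patch storageclass <storage-class-name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'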
Procedure
In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.
Ensure that the Project selected is openshift-storage.
- Click OpenShift Data Foundation operator and then click Create StorageSystem.
- In the Backing storage page, expand Advanced.
- Select Multicloud Object Gateway for Deployment type.
- Click Next.
Optional: In the Security page, select Connect to an external key management service.
- Key Management Service Provider is set to Vault by default.
- Enter Vault Service Name, host Address of Vault server ('https://<hostname or ip>'), Port number, and Token.
- Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
  - Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation.
  - Optional: Enter TLS Server Name and Vault Enterprise Namespace.
  - Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
  - Click Save.
- Click Next.
In the Review and create page, review the configuration details:
To modify any configuration settings, click Back.
- Click Create StorageSystem.
Verification steps
- Verifying that the OpenShift Data Foundation cluster is healthy
- In the OpenShift Web Console, click Storage → OpenShift Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the MCG information is displayed.
- Verify the state of the pods
- Click Workloads → Pods from the OpenShift Web Console.
Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
OpenShift Data Foundation Operator
- ocs-operator-* (1 pod on any worker node)
- ocs-metrics-exporter-* (1 pod on any worker node)
- odf-operator-controller-manager-* (1 pod on any worker node)
- odf-console-* (1 pod on any worker node)

Rook-ceph Operator
- rook-ceph-operator-* (1 pod on any worker node)

Multicloud Object Gateway
- noobaa-operator-* (1 pod on any worker node)
- noobaa-core-* (1 pod on any worker node)
- noobaa-db-pg-* (1 pod on any worker node)
- noobaa-endpoint-* (1 pod on any worker node)
5.2. Deploy standalone Multicloud Object Gateway using local storage devices
Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps:
- Installing the Local Storage Operator
- Installing Red Hat OpenShift Data Foundation Operator
- Creating standalone Multicloud Object Gateway
5.2.1. Installing Local Storage Operator
Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices.
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators and click on it.
Set the following options on the Install Operator page:
- Update channel as either 4.9 or stable.
- Installation mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-local-storage.
- Update approval as Automatic.
- Click Install.
Verification steps
- Verify that the Local Storage Operator shows a green tick indicating successful installation.
5.2.2. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions.
- You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster.
- For additional resource requirements, see the Planning your deployment guide.
When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
- Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide.
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.9.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
We recommend using all default settings. Changing them may result in unexpected behavior. Alter the defaults only if you understand the consequences.
Verification steps
- Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
- In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available.
If the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it.
For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin.
5.2.3. Creating standalone Multicloud Object Gateway
Use this section to create only the Multicloud Object Gateway component with OpenShift Data Foundation.
Prerequisites
- Ensure that OpenShift Data Foundation Operator is installed.
- (For deploying using local storage devices only) Ensure that Local Storage Operator is installed.
- Ensure that you have a storage class and that it is set as the default.
Procedure
In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.
Ensure that the Project selected is openshift-storage.
- Click OpenShift Data Foundation operator and then click Create StorageSystem.
- In the Backing storage page, expand Advanced.
- Select Multicloud Object Gateway for Deployment type.
- Click Next.
Optional: In the Security page, select Connect to an external key management service.
- Key Management Service Provider is set to Vault by default.
- Enter Vault Service Name, host Address of Vault server ('https://<hostname or ip>'), Port number, and Token.
- Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
  - Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation.
  - Optional: Enter TLS Server Name and Vault Enterprise Namespace.
  - Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
  - Click Save.
- Click Next.
In the Review and create page, review the configuration details:
To modify any configuration settings, click Back.
- Click Create StorageSystem.
Verification steps
- Verifying that the OpenShift Data Foundation cluster is healthy
- In the OpenShift Web Console, click Storage → OpenShift Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the MCG information is displayed.
- Verify the state of the pods
- Click Workloads → Pods from the OpenShift Web Console.
Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
OpenShift Data Foundation Operator
- ocs-operator-* (1 pod on any worker node)
- ocs-metrics-exporter-* (1 pod on any worker node)
- odf-operator-controller-manager-* (1 pod on any worker node)
- odf-console-* (1 pod on any worker node)

Rook-ceph Operator
- rook-ceph-operator-* (1 pod on any worker node)

Multicloud Object Gateway
- noobaa-operator-* (1 pod on any worker node)
- noobaa-core-* (1 pod on any worker node)
- noobaa-db-pg-* (1 pod on any worker node)
- noobaa-endpoint-* (1 pod on any worker node)
Chapter 6. Uninstalling OpenShift Data Foundation
6.1. Uninstalling OpenShift Data Foundation in Internal mode
To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledge base article on Uninstalling OpenShift Data Foundation.