OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Chapter 1. Preparing to deploy OpenShift Data Foundation using Red Hat Virtualization platform
Before you begin the deployment of Red Hat OpenShift Data Foundation using dynamic or local storage, ensure that your resource requirements are met. See Planning your deployment.
Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS):
- Ensure that a policy with a token exists and that the key value backend path in Vault is enabled. See Enabling key value backend path and policy in Vault.
- Ensure that you are using signed certificates on your Vault servers.
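Before you proceed, you can confirm that the Vault CLI trusts the server certificate. The following is a minimal sketch, not part of the documented procedure; the address https://vault.example.com:8200 and the CA bundle path are placeholders for your environment:
$ export VAULT_ADDR=https://vault.example.com:8200
$ export VAULT_CACERT=/etc/pki/tls/certs/vault-ca.crt
$ vault status
If the server certificate is not signed by a trusted CA, vault status fails with a TLS verification error.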
Minimum starting node requirements [Technology Preview]
An OpenShift Data Foundation cluster is deployed with a minimum configuration when the standard deployment resource requirements are not met. See the Resource requirements section in the Planning guide.
Regional-DR requirements [Developer Preview]
Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution:
- A valid Red Hat OpenShift Data Foundation Advanced entitlement
- A valid Red Hat Advanced Cluster Management for Kubernetes subscription
To learn how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.
For detailed requirements, see Regional-DR requirements and RHACM requirements.
- Ensure that the requirements for installing OpenShift Data Foundation using local storage devices are met.
1.1. Enabling key value backend path and policy in Vault
Prerequisites
- Administrator access to Vault.
- Carefully choose a unique path name as the backend path that follows the naming convention, since it cannot be changed later.
Procedure
Enable the Key/Value (KV) backend path in Vault.
For Vault KV secret engine API, version 1:
$ vault secrets enable -path=odf kv
For Vault KV secret engine API, version 2:
$ vault secrets enable -path=odf kv-v2
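Either command mounts the KV secrets engine at the odf path. As an optional verification step that is not part of the documented procedure, you can list the enabled secrets engines to confirm the mount and its KV version:
$ vault secrets list -detailed | grep odf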
Create a policy to restrict users to performing write or delete operations on the secret, using the following command.
$ echo 'path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
  capabilities = ["read"]
}' | vault policy write odf -
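Optionally, you can read the policy back to confirm it was written as intended; this verification step is an addition to the documented procedure:
$ vault policy read odf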
Create a token matching the above policy.
$ vault token create -policy=odf -format json
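The -format json option prints the full token details as JSON. If you only need the client token value itself, for example to provide during OpenShift Data Foundation deployment, one way to extract it (assuming the jq utility is installed) is:
$ vault token create -policy=odf -format=json | jq -r '.auth.client_token'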
1.2. Requirements for installing OpenShift Data Foundation using local storage devices
Node requirements
The cluster must consist of at least three OpenShift Container Platform worker nodes with locally attached storage devices on each of them.
- Each of the three selected nodes must have at least one raw block device available to be used by OpenShift Data Foundation.
- The devices you use must be empty; the disks must not contain any remaining physical volumes (PVs), volume groups (VGs), or logical volumes (LVs). One way to verify this is shown below.
For more information, see the Resource requirements section in the Planning guide.
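The following is a rough sketch of one way to check that a device is empty before using it. The device name /dev/sdb is a placeholder for your environment, and wipefs is destructive, so run it only on disks you are certain are safe to erase:
$ lsblk -f /dev/sdb
$ sudo pvs | grep sdb
$ sudo wipefs --all /dev/sdb
lsblk -f should show no file system signatures or partitions, and pvs should return nothing for the device. Run wipefs only if leftover signatures need to be removed.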
Regional-DR requirements [Developer Preview]
Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution:
- A valid Red Hat OpenShift Data Foundation Advanced entitlement
- A valid Red Hat Advanced Cluster Management for Kubernetes subscription
To learn how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.
For detailed requirements, see Regional-DR requirements and RHACM requirements.
Arbiter stretch cluster requirements [Technology Preview]
In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This is a Technology Preview feature that is currently intended for deployment in on-premises OpenShift Container Platform environments.
For detailed requirements and instructions, see Configuring OpenShift Data Foundation for Metro-DR stretch cluster.
Flexible scaling and Arbiter cannot both be enabled at the same time because they have conflicting scaling logic. With flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster, whereas in an Arbiter cluster you need to add at least one node in each of the two data zones.
Minimum starting node requirements [Technology Preview]
An OpenShift Data Foundation cluster is deployed with a minimum configuration when the standard deployment resource requirements are not met.
For more information, see the Resource requirements section in the Planning guide.