Deploying OpenShift Data Foundation using IBM Cloud
Instructions on deploying Red Hat OpenShift Data Foundation using IBM Cloud
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Do let us know how we can make it better.
To give feedback, create a Bugzilla ticket:
- Go to the Bugzilla website.
- In the Component section, choose documentation.
- Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation.
- Click Submit Bug.
Red Hat OpenShift Data Foundation 4.10 supports deployment of Red Hat OpenShift on IBM Cloud clusters in connected environments.
Chapter 1. Deploying OpenShift Data Foundation using IBM Cloud
You can use Red Hat OpenShift Data Foundation for your workloads that run in IBM Cloud. These workloads might run in Red Hat OpenShift on IBM Cloud clusters that are in the public cloud or in your own IBM Cloud Satellite location.
1.1. Deploying on IBM Cloud public
When you create a Red Hat OpenShift on IBM Cloud cluster, you can choose between classic or Virtual Private Cloud (VPC) infrastructure. The Red Hat OpenShift Data Foundation managed cluster add-on supports both infrastructure providers. For classic clusters, the add-on deploys the OpenShift Data Foundation operator with the Local Storage operator. For VPC clusters, the add-on deploys the OpenShift Data Foundation operator which you can use with IBM Cloud Block Storage on VPC storage volumes.
Benefits of using the OpenShift Data Foundation managed cluster add-on to install OpenShift Data Foundation instead of installing from OperatorHub
- Deploy OpenShift Data Foundation from a single CRD instead of manually creating separate resources. For example, in the single CRD that the add-on enables, you configure the namespaces, the storage cluster, and the other resources that you need to run OpenShift Data Foundation (see the sketch after this list).
- Classic - Automatically create PVs using the storage devices that you specify in your OpenShift Data Foundation CRD.
- VPC - Dynamically provision IBM Cloud Block Storage on VPC storage volumes for your OpenShift Data Foundation storage cluster.
- Get patch updates automatically for the managed add-on.
- Update the OpenShift Data Foundation version by modifying a single field in the CRD.
- Integrate with IBM Cloud Object Storage by providing credentials in the CRD.
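To make the single-CRD workflow more concrete, the following is a minimal sketch of enabling the add-on from the IBM Cloud CLI and applying one custom resource. The add-on name, CLI flags, apiVersion, kind, and field names shown here are assumptions for illustration only; verify them against the IBM Cloud documentation for your add-on version.

```
# Sketch only: enable the OpenShift Data Foundation managed cluster add-on and
# deploy the storage cluster from a single custom resource. Verify the add-on
# name and available versions with `ibmcloud oc cluster addon versions`.
ibmcloud oc cluster addon enable openshift-data-foundation \
  --cluster <cluster_name_or_ID> \
  --version 4.10.0

# The apiVersion, kind, and fields below are illustrative assumptions, not a
# definitive schema.
cat <<EOF | oc apply -f -
apiVersion: ocs.ibm.io/v1
kind: OcsCluster
metadata:
  name: ocscluster
spec:
  ocsUpgrade: false      # change a single field like this one to update the version (assumed field)
  billingType: advanced  # assumed billing option field
EOF
```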
1.1.1. Deploying on classic infrastructure in IBM Cloud
You can deploy OpenShift Data Foundation on IBM Cloud classic clusters by using the managed cluster add-on to install the OpenShift Data Foundation operator and the Local Storage operator. After you install the OpenShift Data Foundation add-on in your IBM Cloud classic cluster, you create a single custom resource definition that contains your storage device configuration details.
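As a rough sketch only, a custom resource for a classic cluster that uses local disks might resemble the following. The field names (for example, osdDevicePaths and workerNodes) are assumptions for illustration; confirm the exact schema in the IBM Cloud add-on documentation.

```
# Hypothetical OcsCluster resource for a classic cluster with local disks
# (field names assumed; device IDs are placeholders).
cat <<EOF | oc apply -f -
apiVersion: ocs.ibm.io/v1
kind: OcsCluster
metadata:
  name: ocscluster-classic
spec:
  osdDevicePaths:
    - /dev/disk/by-id/<device_ID_1>
    - /dev/disk/by-id/<device_ID_2>
  numOfOsd: 1
  workerNodes: all
EOF
```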
For more information, see Preparing your cluster for OpenShift Data Foundation.
1.1.2. Deploying on VPC infrastructure in IBM Cloud
You can deploy OpenShift Data Foundation on IBM Cloud VPC clusters by using the managed cluster add-on to install the OpenShift Data Foundation operator. After you install the OpenShift Data Foundation add-on in your IBM Cloud VPC cluster, you create a custom resource definition that contains your worker node information and the IBM Cloud Block Storage for VPC storage classes that you want to use to dynamically provision the OpenShift Data Foundation storage devices.
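As a rough sketch only, a custom resource for a VPC cluster might resemble the following, with an IBM Cloud Block Storage for VPC storage class used to dynamically provision the storage volumes. The field and storage class names are assumptions for illustration; confirm them in the IBM Cloud add-on documentation.

```
# Hypothetical OcsCluster resource for a VPC cluster that dynamically provisions
# IBM Cloud Block Storage for VPC volumes (field and class names assumed).
cat <<EOF | oc apply -f -
apiVersion: ocs.ibm.io/v1
kind: OcsCluster
metadata:
  name: ocscluster-vpc
spec:
  osdStorageClassName: ibmc-vpc-block-metro-10iops-tier
  osdSize: 250Gi
  numOfOsd: 1
  workerNodes: all
EOF
```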
For more information, see Preparing your cluster for OpenShift Data Foundation.
1.2. Deploying on IBM Cloud Satellite
With IBM Cloud Satellite, you can create a location with your own infrastructure, such as an on-premises data center or another cloud provider, to bring IBM Cloud services anywhere, including where your data resides. If you store your data by using Red Hat OpenShift Data Foundation, you can use Satellite storage templates to consistently install OpenShift Data Foundation across the clusters in your Satellite location. The templates help you create a Satellite configuration of the various OpenShift Data Foundation parameters, such as the device paths to your local disks or the storage classes that you want to use to dynamically provision volumes. Then, you assign the Satellite configuration to the clusters where you want to install OpenShift Data Foundation.
Benefits of using Satellite storage to install OpenShift Data Foundation instead of installing from OperatorHub
- Create versions of your OpenShift Data Foundation configuration to install across multiple clusters or to expand your existing configuration.
- Update OpenShift Data Foundation across multiple clusters consistently.
- Standardize storage classes that developers can use for persistent storage across clusters.
- Use a similar deployment pattern for your apps with Satellite Config.
- Choose from templates for an OpenShift Data Foundation cluster using local disks on your worker nodes or an OpenShift Data Foundation cluster that uses dynamically provisioned volumes from your storage provider.
- Integrate with IBM Cloud Object Storage by providing credentials in the template.
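To illustrate the workflow that the templates enable, the following sketch lists the available storage templates and assigns an existing configuration to a cluster from the IBM Cloud CLI. Command names and flags are assumptions for illustration; check the IBM Cloud Satellite storage documentation for the exact syntax.

```
# Sketch only: discover storage templates and assign a configuration to a cluster
# (command names and flags assumed).
ibmcloud sat storage template ls

ibmcloud sat storage assignment create \
  --cluster <cluster_ID> \
  --config <odf_configuration_name> \
  --name <assignment_name>
```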
1.2.1. Using OpenShift Data Foundation with the local storage present on your worker nodes in IBM Cloud Satellite
For an OpenShift Data Foundation configuration that uses the local storage present on your worker nodes, you can use a Satellite storage template to create your OpenShift Data Foundation configuration. Your cluster must meet certain requirements, such as CPU and memory requirements and size requirements for the available raw, unformatted, and unmounted disks. Choose a local OpenShift Data Foundation configuration when you want to use the local storage devices already present on your worker nodes, or statically provisioned raw volumes that you attach to your worker nodes.
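A hedged sketch of creating a configuration from a local-disk template follows; the template name (odf-local) and the parameter names are assumptions for illustration, so confirm them in the IBM Cloud Satellite documentation.

```
# Hypothetical Satellite storage configuration from a local-disk template
# (template and parameter names assumed; device ID is a placeholder).
ibmcloud sat storage config create \
  --location <location_name> \
  --name odf-local-config \
  --template-name odf-local \
  --template-version <template_version> \
  -p "osd-device-path=/dev/disk/by-id/<device_ID>" \
  -p "num-of-osd=1"
```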
For more information, see the IBM Cloud Satellite local OpenShift Data Foundation storage documentation.
1.2.2. Using OpenShift Data Foundation with remote, dynamically provisioned storage volumes in IBM Cloud Satellite
For an OpenShift Data Foundation configuration that uses remote, dynamically provisioned storage volumes from your preferred storage provider, you can use a Satellite storage template to create your storage configuration. In your OpenShift Data Foundation configuration, you specify the storage classes that you want to use and the volume sizes that you want to provision. Your cluster must meet certain requirements, such as CPU and memory requirements. Choose the OpenShift Data Foundation-remote storage template when you want to use dynamically provisioned remote volumes from your storage provider in your OpenShift Data Foundation configuration.
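A hedged sketch of creating a configuration from a remote, dynamically provisioned storage template follows; the template name (odf-remote) and the parameter names are assumptions for illustration, so confirm them in the IBM Cloud Satellite documentation.

```
# Hypothetical Satellite storage configuration from a remote-volume template
# (template and parameter names assumed; storage class is a placeholder).
ibmcloud sat storage config create \
  --location <location_name> \
  --name odf-remote-config \
  --template-name odf-remote \
  --template-version <template_version> \
  -p "osd-storage-class=<storage_class_name>" \
  -p "osd-size=250Gi" \
  -p "num-of-osd=1"
```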
For more information, see the IBM Cloud Satellite remote OpenShift Data Foundation storage documentation.