Chapter 3. Deploying OpenShift Data Foundation on Azure Red Hat OpenShift
The Azure Red Hat OpenShift service enables you to deploy fully managed OpenShift clusters. You can deploy Red Hat OpenShift Data Foundation on the Azure Red Hat OpenShift service.
OpenShift Data Foundation on Azure Red Hat OpenShift is not a managed service offering. A Red Hat OpenShift Data Foundation subscription is required for the installation to be supported by the Red Hat support team. If you need assistance with Red Hat OpenShift Data Foundation on Azure Red Hat OpenShift, open a support case with Red Hat (not Microsoft) and choose Red Hat OpenShift Data Foundation as the product.
To install OpenShift Data Foundation on Azure Red Hat OpenShift, complete the following sections:
- Getting a Red Hat pull secret for a new deployment of Azure Red Hat OpenShift.
- Preparing a Red Hat pull secret for existing Azure Red Hat OpenShift clusters.
- Adding the pull secret to the cluster.
- Validating your Red Hat pull secret is working.
- Installing the Red Hat OpenShift Data Foundation Operator.
- Creating the OpenShift Data Foundation cluster.
3.1. Getting a Red Hat pull secret for new deployment of Azure Red Hat OpenShift
A Red Hat pull secret enables the cluster to access Red Hat container registries along with additional content.
Prerequisites
- A Red Hat portal account.
- OpenShift Data Foundation subscription.
Procedure
To get a Red Hat pull secret for a new deployment of Azure Red Hat OpenShift, follow the steps in the section Get a Red Hat pull secret in the official Microsoft Azure documentation.
Note that while creating the Azure Red Hat OpenShift cluster, you may need larger worker nodes, controlled by --worker-vm-size, or more worker nodes, controlled by --worker-count. The recommended worker-vm-size is Standard_D16s_v3. You can also use dedicated worker nodes. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and allocating storage resources guide.
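For example, a minimal az aro create invocation that applies the recommended worker size might look like the following sketch; the resource group, cluster name, virtual network, and subnet names are placeholders for your own environment:
$ az aro create \
    --resource-group myResourceGroup \
    --name myAROCluster \
    --vnet aro-vnet \
    --master-subnet master-subnet \
    --worker-subnet worker-subnet \
    --worker-vm-size Standard_D16s_v3 \
    --worker-count 3 \
    --pull-secret @pull-secret.txt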
3.2. Preparing a Red Hat pull secret for existing Azure Red Hat OpenShift clusters
When you create an Azure Red Hat OpenShift cluster without adding a Red Hat pull secret, a pull secret is still created on the cluster automatically. However, this pull secret is not fully populated.
Use this section to update the automatically created pull secret with the additional values from the Red Hat pull secret.
Prerequisites
- Existing Azure Red Hat OpenShift cluster without a Red Hat pull secret.
Procedure
To prepare a Red Hat pull secret for an existing Azure Red Hat OpenShift cluster, follow the steps in the section Prepare your pull secret in the official Microsoft Azure documentation.
3.3. Adding the pull secret to the cluster
Prerequisites
- A Red Hat pull secret.
Procedure
Run the following command to update your pull secret.
Note: Running this command causes the cluster nodes to restart one by one as they are updated.
$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./pull-secret.json
After the secret is set, you can enable the Red Hat Certified Operators.
3.3.1. Modifying the configuration files to enable Red Hat operators
To modify the configuration files to enable Red Hat operators, follow the steps in the section Modify the configuration files in the official Microsoft Azure documentation.
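As an illustrative sketch only (the linked Microsoft documentation is authoritative, and the exact fields on your cluster may differ), the change amounts to setting the samples operator to Managed and re-enabling the default Red Hat catalog sources:
$ oc patch configs.samples.operator.openshift.io cluster --type merge \
    -p '{"spec":{"managementState":"Managed"}}'
$ oc patch operatorhub cluster --type merge \
    -p '{"spec":{"sources":[{"name":"certified-operators","disabled":false},{"name":"redhat-operators","disabled":false}]}}'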
3.4. Validating your Red Hat pull secret is working
After you add the pull secret and modify the configuration files, the cluster can take several minutes to update.
To check whether the cluster has been updated, list the available catalog sources and confirm that the Certified Operators and Red Hat Operators sources are present. For example:
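$ oc get catalogsource -n openshift-marketplace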
If you do not see the Red Hat Operators, wait for a few minutes and try again.
To ensure that your pull secret has been updated and is working correctly, open Operator Hub and check for any Red Hat verified Operator. For example, check if the OpenShift Data Foundation Operator is available, and see if you have permissions to install it.
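You can also decode the secret to confirm that the expected registry entries are present. This sketch assumes the jq tool is available on your workstation:
$ oc get secret pull-secret -n openshift-config \
    -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | jq '.auths | keys'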
3.5. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
- You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster.
- For additional resource requirements, see the Planning your deployment guide.
- When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace first in this case):
  $ oc annotate namespace openshift-storage openshift.io/node-selector=
- Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide.
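For example, assuming the standard OpenShift Data Foundation storage taint, where <node_name> is a placeholder for your node:
$ oc adm taint nodes <node_name> node.ocs.openshift.io/storage=true:NoSchedule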
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.19.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
Verification steps
- After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
- In the Web Console:
- Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
- Navigate to Storage and verify that the Data Foundation dashboard is available.
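As a command-line alternative, you can verify that the operator's ClusterServiceVersion reports the Succeeded phase:
$ oc get csv -n openshift-storage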
3.6. Creating OpenShift Data Foundation cluster
Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator.
Prerequisites
- The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub.
If you want to use Azure Vault as the key management service provider, make sure to set up client authentication and fetch the client credentials from Azure using the following steps:
- Create Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in Microsoft product documentation.
- Create Service Principal with certificate based authentication. For more information, see Create an Azure service principal with Azure CLI in Microsoft product documentation.
- Set Azure Key Vault role-based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault in Microsoft product documentation.
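A minimal Azure CLI sketch of these three prerequisite steps; the vault name, resource group, location, and service principal name are placeholders, and Key Vault Crypto Officer is one reasonable role choice:
$ az keyvault create --name myODFVault --resource-group myResourceGroup \
    --location eastus --enable-rbac-authorization true
$ az ad sp create-for-rbac --name odf-kms-client --create-cert
$ az role assignment create --assignee <appId from previous step> \
    --role "Key Vault Crypto Officer" \
    --scope $(az keyvault show --name myODFVault --query id -o tsv)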
Procedure
- In the OpenShift Web Console, click Storage → Data Foundation → Storage Systems → Create StorageSystem.
- In the Backing storage page, select the following:
- Select Full Deployment for the Deployment type option.
- Select the Use an existing StorageClass option.
- Select the Storage Class. By default, it is set to managed-csi.
- Optional: Select the Use external PostgreSQL checkbox to use an external PostgreSQL [Technology Preview].
This provides a high availability solution for the Multicloud Object Gateway, where the PostgreSQL pod is otherwise a single point of failure.
Important: OpenShift Data Foundation ships PostgreSQL images maintained by Red Hat, which are used to store metadata for the Multicloud Object Gateway. This PostgreSQL usage is at the application level.
As a result, OpenShift Data Foundation does not perform database-level optimizations or in-depth insights.
If customers have their own PostgreSQL that is well-maintained and optimized, we recommend using it. OpenShift Data Foundation supports external PostgreSQL instances.
Any PostgreSQL-related issues requiring code changes or deep technical analysis may need to be addressed upstream. This could result in longer resolution times.
Provide the following connection details:
- Username
- Password
- Server name and Port
- Database name
- Select Enable TLS/SSL checkbox to enable encryption for the Postgres server.
- Click Next.
In the Capacity and nodes page, provide the necessary information:
- Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default.
Note: Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity; the raw storage consumed is three times the usable capacity. For example, the default 2 TiB of usable capacity consumes 6 TiB of raw storage.
- In the Select Nodes section, select at least three available nodes.
- In the Configure performance section, select one of the following performance profiles:
- Lean: Use this in a resource-constrained environment where the minimum resources available are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory.
- Balanced (default): Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads.
- Performance: Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads.
Note: You can configure the performance profile even after deployment by using the Configure performance option from the options menu of the StorageSystems tab.
Important: Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures.
For more information about resource requirements, see Resource requirement for performance profiles.
Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation.
For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones.
If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide.
Optional: Select the Enable automatic capacity scaling for your cluster checkbox.
When automatic capacity scaling is enabled, additional raw capacity equivalent to the configured deployment size is automatically added to the cluster when used capacity reaches 70%. This ensures your deployment scales seamlessly to meet demand.
This option is disabled in lean profile mode, LSO deployment, and external mode deployment.
Important: This may incur additional costs for the underlying storage.
- Set the cluster expansion limit from the dropdown. This is the maximum the cluster can expand in the cloud. Automatic scaling is suspended if this limit is exceeded.
- Click Next.
Optional: In the Security and network page, configure the following based on your requirements:
To enable encryption, select Enable data encryption for block and file storage.
Select either one or both of the encryption levels:
- Cluster-wide encryption: Encrypts the entire cluster (block and file).
- StorageClass encryption: Creates encrypted persistent volumes (block only) using an encryption-enabled storage class.
Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details:
Vault
Select an Authentication Method.
Using Token authentication method
- Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
- Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Vault Enterprise Namespace.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
- Click Save.
Using Kubernetes authentication method
- Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
- Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name, Authentication Path, and Vault Enterprise Namespace if applicable.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
- Click Save.
Note: If you need to enable key rotation for Vault KMS, run the following command after the storage cluster is created:
$ oc patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{"op": "add", "path":"/spec/encryption/keyRotation/enable", "value": true}]'
Thales CipherTrust Manager (using KMIP)
- Enter a unique Connection Name for the Key Management service within the project.
In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:
- Address: 123.34.3.2
- Port: 5696
- Upload the Client Certificate, CA certificate, and Client Private Key.
- If StorageClass encryption is enabled, enter the Unique Identifier, generated above, to be used for encryption and decryption.
- The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
Azure Key Vault
For information about setting up client authentication and fetching the client credentials in Azure platform, see the Prerequisites section of this procedure.
- Enter a unique Connection name for the key management service within the project.
- Enter Azure Vault URL.
- Enter Client ID.
- Enter Tenant ID.
- Upload the Certificate file in .PEM format. The certificate file must include a client certificate and a private key.
To enable in-transit encryption, select In-transit encryption.
- Select a Network.
- Click Next.
In the Review and create page, review the configuration details.
To modify any configuration settings, click Back.
- Click Create StorageSystem.
When your deployment has five or more nodes, racks, or rooms, and when five or more failure domains are present in the deployment, you can configure the Ceph monitor count based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitors. Use the Configure option in the alert to configure the Ceph monitor count. For more information, see Resolving low Ceph monitor count alert.
Verification steps
To verify the final Status of the installed storage cluster:
- In the OpenShift Web Console, navigate to Storage → Data Foundation → Storage Systems → ocs-storagecluster.
- Verify that the Status of the StorageCluster is Ready and has a green tick mark next to it. A command-line check is shown after this list.
- To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment.
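As a command-line check, the storage cluster phase should report Ready:
$ oc get storagecluster ocs-storagecluster -n openshift-storage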
Additional resources
To enable Overprovision Control alerts, see Alerts in the Monitoring guide.