Chapter 2. Deploy OpenShift Data Foundation using local storage devices
Use this section to deploy OpenShift Data Foundation on IBM Power infrastructure where OpenShift Container Platform is already installed.
You can also deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway.
Perform the following steps to deploy OpenShift Data Foundation:
2.1. Installing Local Storage Operator
Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices.
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Type local storage in the Filter by keyword… box to find the Local Storage Operator from the list of operators and click on it.
- Set the following options on the Install Operator page:
  - Update channel as stable.
  - Installation Mode as A specific namespace on the cluster.
  - Installed Namespace as Operator recommended namespace openshift-local-storage.
  - Approval Strategy as Automatic.
- Click Install.
Verification steps
- Verify that the Local Storage Operator shows a green tick indicating successful installation.
2.2. Installing Red Hat OpenShift Data Foundation Operator
You can install the Red Hat OpenShift Data Foundation Operator by using the Red Hat OpenShift Container Platform Operator Hub.
For information about the hardware and software requirements, see Planning your deployment.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions.
- You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster.
- When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace first in this case):

  $ oc annotate namespace openshift-storage openshift.io/node-selector=

- Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. A sketch of such a taint command follows this list.
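As a sketch, assuming the node.ocs.openshift.io/storage taint key that the OpenShift Data Foundation documentation uses for dedicated storage nodes (verify against the guide referenced above; <node_name> is a placeholder):

$ oc adm taint node <node_name> node.ocs.openshift.io/storage="true":NoSchedule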
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.14.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
Verification steps
- Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect.
- In the Web Console, navigate to Storage and verify that Data Foundation is available.
2.3. Enabling cluster-wide encryption with KMS using the Token authentication method
You can enable the key value backend path and policy in the vault for token authentication.
Prerequisites
- Administrator access to the vault.
- A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- Carefully select a unique path name as the backend path that follows the naming convention, since you cannot change it later.
Procedure
Enable the Key/Value (KV) backend path in the vault.
For vault KV secret engine API, version 1:
$ vault secrets enable -path=odf kv

For vault KV secret engine API, version 2:
$ vault secrets enable -path=odf kv-v2

Create a policy that restricts users to write or delete operations on the secret:
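The policy body was not preserved in this copy; the following is a minimal sketch, assuming the odf backend path enabled above (the sys/mounts read permission mirrors what the OpenShift Data Foundation documentation typically grants):

$ echo '
path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
  capabilities = ["read"]
}' | vault policy write odf -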
Create a token that matches the above policy:
$ vault token create -policy=odf -format json
2.4. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method
You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS).
Prerequisites
- Administrator access to Vault.
- A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- The OpenShift Data Foundation operator must be installed from the Operator Hub.
- Carefully select a unique path name as the backend path that follows the naming convention. You cannot change this path name later.
Procedure
Create a service account:
$ oc -n openshift-storage create serviceaccount <serviceaccount_name>

where <serviceaccount_name> specifies the name of the service account.

For example:
$ oc -n openshift-storage create serviceaccount odf-vault-auth

Create clusterrolebindings and clusterroles:

$ oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:<serviceaccount_name>

For example:
$ oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth

Create a secret for the serviceaccount token and CA certificate.
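The secret manifest was not preserved in this copy; the following is a minimal sketch using the odf-vault-auth-token name referenced by the commands below and the standard kubernetes.io/service-account-token secret type:

$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Secret
metadata:
  name: odf-vault-auth-token
  namespace: openshift-storage
  annotations:
    # Ties the generated token to the service account created earlier
    kubernetes.io/service-account.name: <serviceaccount_name>
type: kubernetes.io/service-account-token
EOF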
where <serviceaccount_name> is the service account created in the earlier step.

Get the token and the CA certificate from the secret:
$ SA_JWT_TOKEN=$(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath="{.data['token']}" | base64 --decode; echo)
$ SA_CA_CRT=$(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)

Retrieve the OCP cluster endpoint:
$ OCP_HOST=$(oc config view --minify --flatten -o jsonpath="{.clusters[0].cluster.server}")

Fetch the service account issuer:
$ oc proxy &
$ proxy_pid=$!
$ issuer="$(curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)"
$ kill $proxy_pid

Use the information collected in the previous steps to set up the Kubernetes authentication method in Vault:
$ vault auth enable kubernetes

$ vault write auth/kubernetes/config \
  token_reviewer_jwt="$SA_JWT_TOKEN" \
  kubernetes_host="$OCP_HOST" \
  kubernetes_ca_cert="$SA_CA_CRT" \
  issuer="$issuer"

Important

To configure the Kubernetes authentication method in Vault when the issuer is empty:
$ vault write auth/kubernetes/config \
  token_reviewer_jwt="$SA_JWT_TOKEN" \
  kubernetes_host="$OCP_HOST" \
  kubernetes_ca_cert="$SA_CA_CRT"

Enable the Key/Value (KV) backend path in Vault.
For Vault KV secret engine API, version 1:
$ vault secrets enable -path=odf kv

For Vault KV secret engine API, version 2:
$ vault secrets enable -path=odf kv-v2

Create a policy that restricts users to write or delete operations on the secret:
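As in Section 2.3, the policy body was not preserved in this copy; the same minimal sketch applies, assuming the odf backend path enabled above:

$ echo '
path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
  capabilities = ["read"]
}' | vault policy write odf -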
Generate the roles:
$ vault write auth/kubernetes/role/odf-rook-ceph-op \
  bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa \
  bound_service_account_namespaces=openshift-storage \
  policies=odf \
  ttl=1440h

The role odf-rook-ceph-op is used later when you configure the KMS connection details during the creation of the storage system.

$ vault write auth/kubernetes/role/odf-rook-ceph-osd \
  bound_service_account_names=rook-ceph-osd \
  bound_service_account_namespaces=openshift-storage \
  policies=odf \
  ttl=1440h
2.5. Finding available storage devices
Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating PVs for IBM Power.
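If the nodes are not yet labeled, a label of this form can be applied from the command line (a sketch using the standard oc label command; <node_name> is a placeholder):

$ oc label node <node_name> cluster.ocs.openshift.io/openshift-storage=''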
Procedure
List and verify the name of the worker nodes with the OpenShift Data Foundation label.
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=

Example output:
NAME       STATUS   ROLES    AGE     VERSION
worker-0   Ready    worker   2d11h   v1.23.3+e419edf
worker-1   Ready    worker   2d11h   v1.23.3+e419edf
worker-2   Ready    worker   2d11h   v1.23.3+e419edf

Log in to each worker node that is used for OpenShift Data Foundation resources and find the name of the additional disk that you attached while deploying OpenShift Container Platform.
$ oc debug node/<node name>
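The example output block did not survive in this copy. As a sketch of the workflow inside the debug shell (chroot /host is the standard way to use host binaries in an oc debug session; lsblk lists the block devices, and the prompt shown here is illustrative):

sh-4.4# chroot /host
sh-4.4# lsblk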
In this example, for worker-0, the available local devices of 500G are sda, sdc, sde, sdg, sdi, sdk, sdm, and sdo.
- Repeat the above step for all the other worker nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details.
2.6. Creating OpenShift Data Foundation cluster on IBM Power
Use this procedure to create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator.
Prerequisites
- Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met.
- You must have a minimum of three worker nodes with the same storage type and size attached to each node (for example, 200 GB SSD) to use local storage devices on IBM Power.
Verify your OpenShift Container Platform worker nodes are labeled for OpenShift Data Foundation:
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
To identify storage devices on each node, refer to Finding available storage devices.
Procedure
- Log in to the OpenShift Web Console.
- In the openshift-local-storage namespace, click Operators → Installed Operators to view the installed operators.
- Click the Local Storage installed operator.
- On the Operator Details page, click the Local Volume link.
- Click Create Local Volume.
- Click on YAML view for configuring Local Volume.
Define a LocalVolume custom resource for block PVs using the following YAML.
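The YAML block did not survive in this copy; the following is a minimal sketch consistent with the description below (localblock storage class, sda device on worker-0, worker-1, and worker-2; adjust these values for your environment):

apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: localblock
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - worker-0
              - worker-1
              - worker-2
  storageClassDevices:
    - storageClassName: localblock
      volumeMode: Block
      devicePaths:
        - /dev/sda   # add more devicePaths entries to use additional disks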
The above definition selects the sda local device from the worker-0, worker-1, and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda.

Important

Specify appropriate values of nodeSelector as per your environment. The device name should be the same on all the worker nodes. You can also specify more than one devicePath.
- Click Create.
Confirm whether diskmaker-manager pods and Persistent Volumes are created.

For Pods
- Click Workloads → Pods from the left pane of the OpenShift Web Console.
- Select openshift-local-storage from the Project drop-down list.
- Check if there are diskmaker-manager pods for each of the worker nodes that you used while creating the LocalVolume CR (see the command sketch after this list).
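These pods can also be listed from the command line (a sketch using a plain oc get with a filter; the diskmaker-manager name comes from the step above):

$ oc get pods -n openshift-local-storage | grep diskmaker-manager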
For Persistent Volumes
- Click Storage → PersistentVolumes from the left pane of the OpenShift Web Console.
- Check the Persistent Volumes with the name local-pv-*, as shown in the sketch below.
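These can also be listed from the command line (a sketch; the local-pv- prefix comes from the Local Storage Operator, as noted above):

$ oc get pv | grep local-pv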
The number of Persistent Volumes is equal to the product of the number of worker nodes and the number of storage devices provisioned while creating the LocalVolume CR.

Important

The flexible scaling feature is enabled only when the storage cluster that you create with three or more nodes is spread across fewer than the minimum requirement of three availability zones.
For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled.
- The flexible scaling feature is enabled at the time of deployment and cannot be enabled or disabled later.
In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage.
- Click the OpenShift Data Foundation operator and then click Create StorageSystem.
In the Backing storage page, perform the following:
- Select Full Deployment for the Deployment type option.
- Select the Use an existing StorageClass option.
Select the required Storage Class that you used while installing LocalVolume.
By default, it is set to none.
- Click Next.
In the Capacity and nodes page, configure the following:
Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up.
Note

Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times the raw storage).
- The Selected nodes list shows the nodes based on the storage class.
- Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation.
Optional [Technology Preview]: Select the Add replica-1 pool checkbox to deploy OpenShift Data Foundation with a single replica. This avoids redundant data copies and allows resiliency management at the application level. To enable the replica-1 pool, at least two storage disks must be attached to each of the storage nodes.
WarningEnabling this feature creates a single replica pool without data replication, increasing the risk of data loss, data corruption, and potential system instability if your application does not have its own replication.
ImportantSingle replica deployment is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information, see Technology Preview Features Support Scope.
- Click Next.
Optional: In the Security and network page, configure the following based on your requirements:
To enable encryption, select Enable data encryption for block and file storage.
Select one or both of the encryption levels:
Cluster-wide encryption
Encrypts the entire cluster (block and file).
StorageClass encryption
Creates encrypted persistent volumes (block only) using an encryption-enabled storage class.
Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
- From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP). If you selected Vault, go to the next step. If you selected Thales CipherTrust Manager (using KMIP), go to step iii.
Select an Authentication Method.
- Using Token authentication method
- Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Vault Enterprise Namespace.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
- Click Save and skip to step iv.
- Using Kubernetes authentication method
- Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Authentication Path if applicable.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
- Click Save and skip to step iv.
To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below:
- Enter a unique Connection Name for the Key Management service within the project.
In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:
- Address: 123.34.3.2
- Port: 5696
- Upload the Client Certificate, CA certificate, and Client Private Key.
- If StorageClass encryption is enabled, enter the Unique Identifier generated above to be used for encryption and decryption.
- The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
Select a Network.
- Select the Default (OVN) network, as Multus is not yet supported for OpenShift Data Foundation on IBM Power.
- Click Next.
To enable in-transit encryption, select In-transit encryption.
- Select a Network.
- Click Next.
- In the Data Protection page, if you are configuring the Regional-DR solution for OpenShift Data Foundation, select the Prepare cluster for disaster recovery (Regional-DR only) checkbox; otherwise, click Next.
In the Review and create page:
- Review the configuration details. To modify any configuration settings, click Back to go back to the previous configuration page.
- Click Create StorageSystem.
Verification steps
To verify the final Status of the installed storage cluster:
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources.
- Verify that the Status of the StorageCluster is Ready and has a green tick mark next to it (a CLI sketch follows).
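The same status can be checked from the command line (a sketch, assuming the default ocs-storagecluster name shown above; the PHASE column is assumed to report Ready on success):

$ oc get storagecluster ocs-storagecluster -n openshift-storage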
To verify if flexible scaling is enabled on your storage cluster, perform the following steps:
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources → ocs-storagecluster.
- In the YAML tab, search for the key flexibleScaling in the spec section and failureDomain in the status section. If flexibleScaling is true and failureDomain is set to host, the flexible scaling feature is enabled.
spec:
  flexibleScaling: true
[…]
status:
  failureDomain: host
- To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment.
Additional resources
- To expand the capacity of the initial cluster, see the Scaling Storage guide.