Chapter 2. Deploying OpenShift Data Foundation on Google Cloud
You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Google Cloud installer-provisioned infrastructure. This enables you to create internal cluster resources, which results in internal provisioning of the base services and makes additional storage classes available to applications.
Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway.
Only internal OpenShift Data Foundation clusters are supported on Google Cloud. See Planning your deployment for more information about deployment requirements.
Ensure that you have addressed the requirements in the Preparing to deploy OpenShift Data Foundation chapter before proceeding with the following steps for deploying using dynamic storage devices.
2.1. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
- You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Each node should include one disk, and the cluster requires three disks (PVs). However, one PV eventually remains unused by default. This is expected behavior.
- For additional resource requirements, see the Planning your deployment guide.
- When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
- Taint a node as infra to ensure that only Red Hat OpenShift Data Foundation resources are scheduled on that node (see the example following this list). This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide.
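A minimal sketch of the taint command, assuming a hypothetical node named infra-node-1; the node.ocs.openshift.io/storage taint key is the one OpenShift Data Foundation uses for dedicated storage nodes:
$ oc adm taint node infra-node-1 node.ocs.openshift.io/storage="true":NoSchedule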
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
- Set the following options on the Install Operator page:
  - Update Channel as stable-4.12.
  - Installation Mode as A specific namespace on the cluster.
  - Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.
  - Approval Strategy as Automatic or Manual.
    If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
    If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
Verification steps
- After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect.
- In the Web Console:
  - Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
  - Navigate to Storage and verify that the Data Foundation dashboard is available.
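As an optional command-line check, you can confirm that the operator's ClusterServiceVersion reports a Succeeded phase (the exact CSV name varies with the installed version):
$ oc get csv -n openshift-storage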
2.2. Enabling cluster-wide encryption with KMS using the Token authentication method
You can enable the key/value backend path and policy in Vault for token authentication.
Prerequisites
- Administrator access to Vault.
- A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- Carefully select a unique path name as the backend path that follows the naming convention, since you cannot change it later.
Procedure
Enable the Key/Value (KV) backend path in the vault.
For vault KV secret engine API, version 1:
$ vault secrets enable -path=odf kv
For vault KV secret engine API, version 2:
$ vault secrets enable -path=odf kv-v2
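To confirm that the backend path is mounted, you can list the enabled secrets engines; the odf/ path should appear in the output:
$ vault secrets list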
Create a policy that restricts users to performing write or delete operations on the secret:
$ echo '
path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
  capabilities = ["read"]
}' | vault policy write odf -
Create a token that matches the above policy:
$ vault token create -policy=odf -format json
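The command prints a JSON document containing the new token. As an illustrative sketch, assuming jq is available (it is used later in this chapter), you can capture the token in a shell variable:
$ VAULT_TOKEN=$(vault token create -policy=odf -format=json | jq -r '.auth.client_token')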
2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method
You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS).
Prerequisites
- Administrator access to Vault.
- A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- The OpenShift Data Foundation operator must be installed from the Operator Hub.
- Carefully select a unique path name as the backend path that follows the naming convention. You cannot change this path name later.
Procedure
Create a service account:
$ oc -n openshift-storage create serviceaccount <serviceaccount_name>
where <serviceaccount_name> specifies the name of the service account.
For example:
$ oc -n openshift-storage create serviceaccount odf-vault-auth
Create clusterrolebindings and clusterroles:
$ oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:<serviceaccount_name>
For example:
$ oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth
Create a secret for the serviceaccount token and CA certificate:
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Secret
metadata:
  name: odf-vault-auth-token
  namespace: openshift-storage
  annotations:
    kubernetes.io/service-account.name: <serviceaccount_name>
type: kubernetes.io/service-account-token
data: {}
EOF
where <serviceaccount_name> is the service account created in the earlier step.
Get the token and the CA certificate from the secret:
$ SA_JWT_TOKEN=$(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath="{.data['token']}" | base64 --decode; echo)
$ SA_CA_CRT=$(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
Retrieve the OCP cluster endpoint.
$ OCP_HOST=$(oc config view --minify --flatten -o jsonpath="{.clusters[0].cluster.server}")
Fetch the service account issuer:
$ oc proxy &
$ proxy_pid=$!
$ issuer="$(curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)"
$ kill $proxy_pid
Use the information collected in the previous steps to set up the Kubernetes authentication method in Vault:
$ vault auth enable kubernetes
$ vault write auth/kubernetes/config \
  token_reviewer_jwt="$SA_JWT_TOKEN" \
  kubernetes_host="$OCP_HOST" \
  kubernetes_ca_cert="$SA_CA_CRT" \
  issuer="$issuer"
Important: To configure the Kubernetes authentication method in Vault when the issuer is empty:
$ vault write auth/kubernetes/config \
  token_reviewer_jwt="$SA_JWT_TOKEN" \
  kubernetes_host="$OCP_HOST" \
  kubernetes_ca_cert="$SA_CA_CRT"
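You can read the configuration back from Vault to verify it:
$ vault read auth/kubernetes/config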
Enable the Key/Value (KV) backend path in Vault.
For Vault KV secret engine API, version 1:
$ vault secrets enable -path=odf kv
For Vault KV secret engine API, version 2:
$ vault secrets enable -path=odf kv-v2
Create a policy that restricts users to performing write or delete operations on the secret:
$ echo '
path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
  capabilities = ["read"]
}' | vault policy write odf -
Generate the roles:
$ vault write auth/kubernetes/role/odf-rook-ceph-op \
  bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa \
  bound_service_account_namespaces=openshift-storage \
  policies=odf \
  ttl=1440h
The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system.
$ vault write auth/kubernetes/role/odf-rook-ceph-osd \
  bound_service_account_names=rook-ceph-osd \
  bound_service_account_namespaces=openshift-storage \
  policies=odf \
  ttl=1440h
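To confirm the bound service accounts and policy, you can read back either role:
$ vault read auth/kubernetes/role/odf-rook-ceph-op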
2.4. Creating an OpenShift Data Foundation cluster
Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator.
Prerequisites
- The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator.
Be aware that the default storage class of the Google Cloud platform uses hard disk drives (HDD). To use solid state drive (SSD) based disks for better performance, you need to create a storage class using pd-ssd, as shown in the following ssd-storageclass.yaml example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: faster
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
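Assuming you saved the example as ssd-storageclass.yaml, apply it before creating the storage system so that it can be selected later:
$ oc create -f ssd-storageclass.yaml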
Procedure
- In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage.
- Click on the OpenShift Data Foundation operator, and then click Create StorageSystem.
- In the Backing storage page, select the following:
  - Select Full Deployment for the Deployment type option.
  - Select the Use an existing StorageClass option.
  - Select the Storage Class.
    By default, it is set to standard. However, if you created a storage class to use SSD based disks for better performance, you need to select that storage class.
  - Click Next.
In the Capacity and nodes page, provide the necessary information:
- Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default.
  Note: Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). For example, with the default three-way replication, 2 TiB of usable capacity corresponds to 6 TiB of raw storage.
- In the Select Nodes section, select at least three available nodes.
Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation.
For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones.
If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide.
- Click Next.
Optional: In the Security and network page, configure the following based on your requirements:
- To enable encryption, select Enable data encryption for block and file storage.
Select either one or both of the encryption levels:
- Cluster-wide encryption: Encrypts the entire cluster (block and file).
- StorageClass encryption: Creates an encrypted persistent volume (block only) using an encryption-enabled storage class.
Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
- From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP). If you selected Vault, go to the next step. If you selected Thales CipherTrust Manager (using KMIP), go to step iii.
Select an Authentication Method.
- Using Token authentication method
- Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Vault Enterprise Namespace.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
- Click Save and skip to step iv.
- Using Kubernetes authentication method
- Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Authentication Path if applicable.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
- Click Save and skip to step iv.
To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below:
- Enter a unique Connection Name for the Key Management service within the project.
In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:
- Address: 123.34.3.2
- Port: 5696
- Upload the Client Certificate, CA certificate, and Client Private Key.
- If StorageClass encryption is enabled, enter the Unique Identifier generated above to be used for encryption and decryption.
- The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
- Select a Network.
- Click Next.
In the Review and create page, review the configuration details.
To modify any configuration settings, click Back.
- Click Create StorageSystem.
Verification steps
To verify the final Status of the installed storage cluster:
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources.
- Verify that the Status of StorageCluster is Ready and has a green tick mark next to it.
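The same status can also be checked from the command line; the StorageCluster resource should report the Ready phase:
$ oc get storagecluster -n openshift-storage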
- To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment.
Additional resources
To enable Overprovision Control alerts, refer to Alerts in the Monitoring guide.
2.5. Verifying OpenShift Data Foundation deployment
Use this section to verify that OpenShift Data Foundation is deployed correctly.
2.5.1. Verifying the state of the pods
Procedure
- Click Workloads → Pods from the OpenShift Web Console.
- Select openshift-storage from the Project drop-down list.
  Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
  For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, “Pods corresponding to OpenShift Data Foundation cluster”.
- Set the filter for Running and Completed pods to verify that the following pods are in Running and Completed state:

Table 2.1. Pods corresponding to OpenShift Data Foundation cluster

OpenShift Data Foundation Operator:
- ocs-operator-* (1 pod on any storage node)
- ocs-metrics-exporter-* (1 pod on any storage node)
- odf-operator-controller-manager-* (1 pod on any storage node)
- odf-console-* (1 pod on any storage node)
- csi-addons-controller-manager-* (1 pod on any storage node)

Rook-ceph Operator:
- rook-ceph-operator-* (1 pod on any storage node)

Multicloud Object Gateway:
- noobaa-operator-* (1 pod on any storage node)
- noobaa-core-* (1 pod on any storage node)
- noobaa-db-pg-* (1 pod on any storage node)
- noobaa-endpoint-* (1 pod on any storage node)

MON:
- rook-ceph-mon-* (3 pods distributed across storage nodes)

MGR:
- rook-ceph-mgr-* (1 pod on any storage node)

MDS:
- rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes)

CSI:
- cephfs:
  - csi-cephfsplugin-* (1 pod on each storage node)
  - csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes)
- rbd:
  - csi-rbdplugin-* (1 pod on each storage node)
  - csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes)

rook-ceph-crashcollector:
- rook-ceph-crashcollector-* (1 pod on each storage node)

OSD:
- rook-ceph-osd-* (1 pod for each device)
- rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device)
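You can also list the pods and their states from the command line:
$ oc get pods -n openshift-storage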
2.5.2. Verifying the OpenShift Data Foundation cluster is healthy
Procedure
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Block and File tab, verify that Storage Cluster has a green tick.
- In the Details card, verify that the cluster information is displayed.
For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation.
2.5.3. Verifying the Multicloud Object Gateway is healthy
Procedure
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the MCG information is displayed.
For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation.
2.5.4. Verifying that the specific storage classes exist
Procedure
- Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
- Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation:
  - ocs-storagecluster-ceph-rbd
  - ocs-storagecluster-cephfs
  - openshift-storage.noobaa.io
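You can also confirm that all three storage classes exist from the command line:
$ oc get storageclass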