Deploying OpenShift Data Foundation using Amazon Web Services
Instructions for deploying OpenShift Data Foundation using Amazon Web Services for cloud storage
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Do let us know how we can make it better.
To give feedback, create a Jira ticket:
- Log in to Jira.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Select Documentation in the Components field.
- Click Create at the bottom of the dialogue.
Preface
Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) AWS clusters in connected or disconnected environments along with out-of-the-box support for proxy environments.
Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements.
To deploy OpenShift Data Foundation, start with the requirements in the Preparing to deploy OpenShift Data Foundation chapter, and then follow the deployment process appropriate for your environment:
Chapter 1. Preparing to deploy OpenShift Data Foundation
Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources.
Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps:
Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps:
- Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- If you select the Token authentication method for encryption, see Enabling cluster-wide encryption with the Token authentication using KMS.
- If you select the Kubernetes authentication method for encryption, see Enabling cluster-wide encryption with the Kubernetes authentication using KMS.
- Ensure that you are using signed certificates on your Vault servers.
Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server.
Create a KMIP client if one does not exist. From the user interface, select KMIP → Client Profile → Add Profile.
- Add the CipherTrust username to the Common Name field during profile creation.
- Create a token by navigating to KMIP → Registration Token → New Registration Token. Copy the token for the next step.
- To register the client, navigate to KMIP → Registered Clients → Add Client. Specify the Name. Paste the Registration Token from the previous step, then click Save.
- Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively.
To create a new KMIP interface, navigate to Admin Settings → Interfaces → Add Interface.
- Select KMIP Key Management Interoperability Protocol and click Next.
- Select a free Port.
- Select Network Interface as all.
- Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional.
- (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default.
- Select the certificate authority (CA) to be used, and click Save.
- To get the server CA certificate, click on the Action menu (⋮) on the right of the newly created interface, and click Download Certificate.
Optional: If you plan to enable StorageClass encryption during deployment, create a key to act as the Key Encryption Key (KEK):
- Navigate to Keys → Add Key.
- Enter Key Name.
- Set the Algorithm and Size to AES and 256 respectively.
- Enable Create a key in Pre-Active state and set the date and time for activation.
- Ensure that Encrypt and Decrypt are enabled under Key Usage.
- Copy the ID of the newly created Key to be used as the Unique Identifier during deployment.
Minimum starting node requirements
An OpenShift Data Foundation cluster is deployed with a minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide.
Disaster recovery requirements
Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution:
- A valid Red Hat OpenShift Data Foundation Advanced subscription
- A valid Red Hat Advanced Cluster Management for Kubernetes subscription
To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.
For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation.
Chapter 2. Deploy OpenShift Data Foundation using dynamic storage devices
You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Amazon Web Services (AWS) EBS (type gp2-csi or gp3-csi), which provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications.
Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway.
Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment for more information about deployment requirements.
Also, ensure that you have addressed the requirements in the Preparing to deploy OpenShift Data Foundation chapter before proceeding with the following steps for deploying using dynamic storage devices:
2.1. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
- You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster.
- For additional resource requirements, see the Planning your deployment guide.
When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
- Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide.
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.18.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
Verification steps
- After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect.
- In the Web Console:
- Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
- Navigate to Storage and verify that the Data Foundation dashboard is available.
2.2. Enabling cluster-wide encryption with KMS using the Token authentication method
You can enable the key/value backend path and policy in Vault for token authentication.
Prerequisites
- Administrator access to Vault.
- A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- Carefully select a unique path name as the backend path that follows the naming convention, since you cannot change it later.
Procedure
Enable the Key/Value (KV) backend path in Vault.
For Vault KV secret engine API, version 1:
$ vault secrets enable -path=odf kv
For Vault KV secret engine API, version 2:
$ vault secrets enable -path=odf kv-v2
Create a policy that restricts users to performing write or delete operations on the secret:
echo '
path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
  capabilities = ["read"]
}' | vault policy write odf -
Create a token that matches the above policy:
$ vault token create -policy=odf -format json
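Optionally, you can verify the policy contents and inspect the token before using it. This is a quick sanity check; pass the token value returned by the previous command to the lookup:
$ vault policy read odf
$ vault token lookup <token>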
2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method
You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS).
Prerequisites
- Administrator access to Vault.
- A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- The OpenShift Data Foundation operator must be installed from the Operator Hub.
- Carefully select a unique path name as the backend path that follows the naming convention. You cannot change this path name later.
Procedure
Create a service account:
$ oc -n openshift-storage create serviceaccount <serviceaccount_name>
where <serviceaccount_name> specifies the name of the service account.
For example:
$ oc -n openshift-storage create serviceaccount odf-vault-auth
Create clusterrolebindings and clusterroles:
$ oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:<serviceaccount_name>
For example:
$ oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth
Create a secret for the serviceaccount token and CA certificate.
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Secret
metadata:
  name: odf-vault-auth-token
  namespace: openshift-storage
  annotations:
    kubernetes.io/service-account.name: <serviceaccount_name>
type: kubernetes.io/service-account-token
data: {}
EOF
where <serviceaccount_name> is the service account created in the earlier step.
Get the token and the CA certificate from the secret.
$ SA_JWT_TOKEN=$(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath="{.data['token']}" | base64 --decode; echo)
$ SA_CA_CRT=$(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
Retrieve the OCP cluster endpoint.
$ OCP_HOST=$(oc config view --minify --flatten -o jsonpath="{.clusters[0].cluster.server}")
Fetch the service account issuer:
$ oc proxy &
$ proxy_pid=$!
$ issuer="$(curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)"
$ kill $proxy_pid
Use the information collected in the previous step to set up the Kubernetes authentication method in Vault:
$ vault auth enable kubernetes
$ vault write auth/kubernetes/config \
   token_reviewer_jwt="$SA_JWT_TOKEN" \
   kubernetes_host="$OCP_HOST" \
   kubernetes_ca_cert="$SA_CA_CRT" \
   issuer="$issuer"
Important: To configure the Kubernetes authentication method in Vault when the issuer is empty:
$ vault write auth/kubernetes/config \
   token_reviewer_jwt="$SA_JWT_TOKEN" \
   kubernetes_host="$OCP_HOST" \
   kubernetes_ca_cert="$SA_CA_CRT"
Enable the Key/Value (KV) backend path in Vault.
For Vault KV secret engine API, version 1:
$ vault secrets enable -path=odf kv
For Vault KV secret engine API, version 2:
$ vault secrets enable -path=odf kv-v2
Create a policy that restricts users to performing write or delete operations on the secret:
echo '
path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
  capabilities = ["read"]
}' | vault policy write odf -
Generate the roles:
$ vault write auth/kubernetes/role/odf-rook-ceph-op \
   bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa \
   bound_service_account_namespaces=openshift-storage \
   policies=odf \
   ttl=1440h
The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system.
$ vault write auth/kubernetes/role/odf-rook-ceph-osd \
   bound_service_account_names=rook-ceph-osd \
   bound_service_account_namespaces=openshift-storage \
   policies=odf \
   ttl=1440h
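Optionally, you can confirm that each role was created with the intended bindings; a quick check of the first role:
$ vault read auth/kubernetes/role/odf-rook-ceph-op
# The output should list the bound service accounts and namespaces, the odf policy, and the TTL.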
2.3.1. Enabling and disabling key rotation when using KMS
Common security practices require periodic rotation of the encryption keys. You can enable or disable key rotation when using KMS.
2.3.1.1. Enabling key rotation
To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to PersistentVolumeClaims, Namespace, or StorageClass (in the decreasing order of precedence).
<value> can be @hourly, @daily, @weekly, @monthly, or @yearly. If <value> is empty, the default is @weekly. The examples below use @weekly.
Key rotation is only supported for RBD-backed volumes.
Annotating Namespace
$ oc get namespace default
NAME      STATUS   AGE
default   Active   5d2h

$ oc annotate namespace default "keyrotation.csiaddons.openshift.io/schedule=@weekly"
namespace/default annotated
Annotating StorageClass
$ oc get storageclass rbd-sc
NAME     PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rbd-sc   rbd.csi.ceph.com   Delete          Immediate           true                   5d2h

$ oc annotate storageclass rbd-sc "keyrotation.csiaddons.openshift.io/schedule=@weekly"
storageclass.storage.k8s.io/rbd-sc annotated
Annotating PersistentVolumeClaims
$ oc get pvc data-pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-pvc   Bound    pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74   1Gi        RWO            default        20h

$ oc annotate pvc data-pvc "keyrotation.csiaddons.openshift.io/schedule=@weekly"
persistentvolumeclaim/data-pvc annotated

$ oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io
NAME                  SCHEDULE   SUSPEND   ACTIVE   LASTSCHEDULE   AGE
data-pvc-1642663516   @weekly                                      3s

$ oc annotate pvc data-pvc "keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *" --overwrite=true
persistentvolumeclaim/data-pvc annotated

$ oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io
NAME                  SCHEDULE      SUSPEND   ACTIVE   LASTSCHEDULE   AGE
data-pvc-1642664617   */1 * * * *                                     3s
2.3.1.2. Disabling key rotation
You can disable key rotation for the following:
- All the persistent volume claims (PVCs) of a storage class
- A specific PVC
Disabling key rotation for all PVCs of a storage class
To disable key rotation for all PVCs, update the annotation of the storage class:
$ oc get storageclass rbd-sc
NAME     PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rbd-sc   rbd.csi.ceph.com   Delete          Immediate           true                   5d2h

$ oc annotate storageclass rbd-sc "keyrotation.csiaddons.openshift.io/enable=false"
storageclass.storage.k8s.io/rbd-sc annotated
Disabling key rotation for a specific persistent volume claim
Identify the EncryptionKeyRotationCronJob CR for the PVC you want to disable key rotation on:
$ oc get encryptionkeyrotationcronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim=="<PVC_NAME>")]}{.metadata.name}{"\n"}{end}'
where <PVC_NAME> is the name of the PVC that you want to disable.
is the name of the PVC that you want to disable.Apply the following to the
EncryptionKeyRotationCronJob
CR from the previous step to disable the key rotation:Update the
csiaddons.openshift.io/state
annotation frommanaged
tounmanaged
:$ oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> "csiaddons.openshift.io/state=unmanaged" --overwrite=true
where <encryptionkeyrotationcronjob_name> is the name of the EncryptionKeyRotationCronJob CR.
Add suspend: true under the spec field:
$ oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{"spec": {"suspend": true}}' --type=merge
- Save and exit. The key rotation will be disabled for the PVC.
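To confirm that key rotation is suspended, you can read back the spec.suspend field of the CR; a minimal check, using the CR name identified earlier:
$ oc get encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -o jsonpath='{.spec.suspend}'
true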
2.4. Creating an OpenShift Data Foundation cluster
Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator.
Prerequisites
- The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator.
Procedure
In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.
Ensure that the Project selected is openshift-storage.
- Click on the OpenShift Data Foundation operator, and then click Create StorageSystem.
In the Backing storage page, select the following:
- Select Full Deployment for the Deployment type option.
- Select the Use an existing StorageClass option.
Select the Storage Class.
As of OpenShift Data Foundation version 4.12, you can choose gp2-csi or gp3-csi as the storage class.
Optional: Select the Use external PostgreSQL checkbox to use an external PostgreSQL [Technology Preview].
This provides a high availability solution for the Multicloud Object Gateway, where the PostgreSQL pod is otherwise a single point of failure.
Provide the following connection details:
- Username
- Password
- Server name and Port
- Database name
- Select Enable TLS/SSL checkbox to enable encryption for the Postgres server.
- Click Next.
In the Capacity and nodes page, provide the necessary information:
Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default.
Note: Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times the raw storage).
- In the Select Nodes section, select at least three available nodes.
In the Configure performance section, select one of the following performance profiles:
Lean
Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory.
Balanced (default)
Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads.
Performance
Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads.
Note: You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab.
Important: Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures.
For more information about resource requirements, see Resource requirement for performance profiles.
Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation.
For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones.
If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide.
- Click Next.
Optional: In the Security and network page, configure the following based on your requirements:
To enable encryption, select Enable data encryption for block and file storage.
Select either one or both the encryption levels:
Cluster-wide encryption
Encrypts the entire cluster (block and file).
StorageClass encryption
Creates encrypted persistent volume (block only) using encryption enabled storage class.
Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details:
Vault
Select an Authentication Method.
Using Token authentication method
- Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Vault Enterprise Namespace.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
- Click Save.
Using Kubernetes authentication method
- Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Authentication Path if applicable.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
Click Save.
Note: If you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created:
oc patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{"op": "add", "path":"/spec/encryption/keyRotation/enable", "value": true}]'
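You can optionally confirm that the flag was applied by reading the field back; a quick check, assuming the default storage cluster name:
oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.encryption.keyRotation.enable}'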
Thales CipherTrust Manager (using KMIP)
- Enter a unique Connection Name for the Key Management service within the project.
In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:
- Address: 123.34.3.2
- Port: 5696
- Upload the Client Certificate, CA certificate, and Client Private Key.
- If StorageClass encryption is enabled, enter the Unique Identifier generated above to be used for encryption and decryption.
- The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
To enable in-transit encryption, select In-transit encryption.
- Select a Network.
- Click Next.
In the Review and create page, review the configuration details.
To modify any configuration settings, click Back.
- Click Create StorageSystem.
When your deployment has five or more nodes, racks, or rooms, and when five or more failure domains are present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert.
Verification steps
To verify the final Status of the installed storage cluster:
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources.
- Verify that the Status of StorageCluster is Ready and has a green tick mark next to it.
- To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying OpenShift Data Foundation deployment.
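As an alternative to the console checks above, you can also query the storage cluster state from the command line; a minimal sketch, assuming the default resource names:
$ oc get storagecluster -n openshift-storage
# The PHASE column should report Ready when the deployment is complete.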
Additional resources
To enable Overprovision Control alerts, refer to Alerts in the Monitoring guide.
2.5. Verifying OpenShift Data Foundation deployment
To verify that OpenShift Data Foundation is deployed correctly:
2.5.1. Verifying the state of the pods
Procedure
- Click Workloads → Pods from the OpenShift Web Console.
Select openshift-storage from the Project drop-down list.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table:
- Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state:
| Component | Corresponding pods |
| OpenShift Data Foundation Operator | ocs-operator-* (1 pod on any storage node), ocs-metrics-exporter-* (1 pod on any storage node), odf-operator-controller-manager-* (1 pod on any storage node), odf-console-* (1 pod on any storage node), csi-addons-controller-manager-* (1 pod on any storage node) |
| Rook-ceph Operator | rook-ceph-operator-* (1 pod on any storage node) |
| Multicloud Object Gateway | noobaa-operator-* (1 pod on any storage node), noobaa-core-* (1 pod on any storage node), noobaa-db-pg-* (1 pod on any storage node), noobaa-endpoint-* (1 pod on any storage node) |
| MON | rook-ceph-mon-* (3 pods distributed across storage nodes) |
| MGR | rook-ceph-mgr-* (1 pod on any storage node) |
| MDS | rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) |
| CSI | cephfs: csi-cephfsplugin-* (1 pod on each storage node), csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes); rbd: csi-rbdplugin-* (1 pod on each storage node), csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) |
| rook-ceph-crashcollector | rook-ceph-crashcollector-* (1 pod on each storage node) |
| OSD | rook-ceph-osd-* (1 pod for each device), rook-ceph-osd-prepare-* (1 pod for each device) |
| ceph-csi-operator | ceph-csi-controller-manager-* (1 pod on any storage node) |
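If you prefer the command line over the console filter, you can list the pods directly; a quick sketch:
$ oc get pods -n openshift-storage
# All pods should show a STATUS of Running or Completed.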
2.5.2. Verifying the OpenShift Data Foundation cluster is healthy
Procedure
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick.
- In the Details card, verify that the cluster information is displayed.
For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation.
2.5.3. Verifying the Multicloud Object Gateway is healthy
Procedure
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the MCG information is displayed.
For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation.
The Multicloud Object Gateway has only a single copy of the database (NooBaa DB). If the NooBaa DB PVC gets corrupted and cannot be recovered, this can result in total data loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If the NooBaa DB fails and cannot be recovered, you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article.
2.5.4. Verifying that the specific storage classes exist
Procedure
- Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation:
- ocs-storagecluster-ceph-rbd
- ocs-storagecluster-cephfs
- openshift-storage.noobaa.io
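You can also confirm the storage classes from the command line; a quick check:
$ oc get storageclass
# The list should include ocs-storagecluster-ceph-rbd, ocs-storagecluster-cephfs, and openshift-storage.noobaa.io.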
Chapter 3. Deploy standalone Multicloud Object Gateway
Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. After deploying the component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser.
Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps:
- Installing Red Hat OpenShift Data Foundation Operator
- Creating standalone Multicloud Object Gateway
The Multicloud Object Gateway has only a single copy of the database (NooBaa DB). If the NooBaa DB PVC gets corrupted and cannot be recovered, this can result in total data loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If the NooBaa DB fails and cannot be recovered, you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article.
3.1. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions.
- You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster.
- For additional resource requirements, see the Planning your deployment guide.
When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
- Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide.
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.18.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
Verification steps
- After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect.
- In the Web Console:
- Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
- Navigate to Storage and verify that the Data Foundation dashboard is available.
3.2. Creating a standalone Multicloud Object Gateway
You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser.
Prerequisites
- Ensure that the OpenShift Data Foundation Operator is installed.
Procedure
In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.
Ensure that the Project selected is openshift-storage.
- Click OpenShift Data Foundation operator and then click Create StorageSystem.
In the Backing storage page, select the following:
- Select Multicloud Object Gateway for Deployment type.
- Select the Use an existing StorageClass option.
- Click Next.
Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
- From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP). If you selected Vault, go to the next step. If you selected Thales CipherTrust Manager (using KMIP), go to step iii.
Select an Authentication Method.
- Using Token authentication method
- Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Vault Enterprise Namespace.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
- Click Save and skip to step iv.
- Using Kubernetes authentication method
- Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Authentication Path if applicable.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
- Click Save and skip to step iv.
To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below:
- Enter a unique Connection Name for the Key Management service within the project.
In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:
- Address: 123.34.3.2
- Port: 5696
- Upload the Client Certificate, CA certificate, and Client Private Key.
- If StorageClass encryption is enabled, enter the Unique Identifier generated above to be used for encryption and decryption.
- The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
- Select a Network.
- Click Next.
In the Review and create page, review the configuration details:
To modify any configuration settings, click Back.
- Click Create StorageSystem.
Verification steps
- Verifying that the OpenShift Data Foundation cluster is healthy
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears.
- In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick.
- In the Details card, verify that the MCG information is displayed.
- Verifying the state of the pods
- Click Workloads → Pods from the OpenShift Web Console.
Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state.
Note: If the Show default projects option is disabled, use the toggle button to list all the default projects.
| Component | Corresponding pods |
| OpenShift Data Foundation Operator | ocs-operator-* (1 pod on any storage node), ocs-metrics-exporter-* (1 pod on any storage node), odf-operator-controller-manager-* (1 pod on any storage node), odf-console-* (1 pod on any storage node), csi-addons-controller-manager-* (1 pod on any storage node) |
| Rook-ceph Operator | rook-ceph-operator-* (1 pod on any storage node) |
| Multicloud Object Gateway | noobaa-operator-* (1 pod on any storage node), noobaa-core-* (1 pod on any storage node), noobaa-db-pg-* (1 pod on any storage node), noobaa-endpoint-* (1 pod on any storage node) |
Chapter 4. Creating an AWS-STS-backed backingstore
Amazon Web Services Security Token Service (AWS STS) is an AWS feature that lets you authenticate using short-lived credentials. Creating an AWS-STS-backed backingstore involves the following:
- Creating an AWS role using a script, which helps to get the temporary security credentials for the role session
- Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster
- Creating backingstore in AWS STS OpenShift cluster
4.1. Creating an AWS role using a script
You need to create a role and pass the role's Amazon Resource Name (ARN) while installing the OpenShift Data Foundation operator.
Prerequisites
- Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials.
Procedure
Create an AWS role using a script that matches OpenID Connect (OIDC) configuration for Multicloud Object Gateway (MCG) on OpenShift Data Foundation.
The following example shows the details that are required to create the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789123:oidc-provider/mybucket-oidc.s3.us-east-2.amazonaws.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "mybucket-oidc.s3.us-east-2.amazonaws.com:sub": [
            "system:serviceaccount:openshift-storage:noobaa",
            "system:serviceaccount:openshift-storage:noobaa-core",
            "system:serviceaccount:openshift-storage:noobaa-endpoint"
          ]
        }
      }
    }
  ]
}
where:
- 123456789123 is the AWS account ID
- mybucket is the bucket name (using public bucket configuration)
- us-east-2 is the AWS region
- openshift-storage is the namespace name
Sample script
#!/bin/bash
set -x

# This is a sample script to help you deploy MCG on AWS STS cluster.
# This script shows how to create role-policy and then create the role in AWS.
# For more information see: https://docs.openshift.com/rosa/authentication/assuming-an-aws-iam-role-for-a-service-account.html

# WARNING: This is a sample script. You need to adjust the variables based on your requirement.

# Variables:
# user variables - REPLACE these variables with your values:
ROLE_NAME="<role-name>" # role name that you pick in your AWS account
NAMESPACE="<namespace>" # namespace name where MCG is running. For OpenShift Data Foundation, it is openshift-storage.

# MCG variables
SERVICE_ACCOUNT_NAME_1="noobaa"          # The service account name of deployment operator
SERVICE_ACCOUNT_NAME_2="noobaa-endpoint" # The service account name of deployment endpoint
SERVICE_ACCOUNT_NAME_3="noobaa-core"     # The service account name of statefulset core

# AWS variables
# Make sure these values are not empty (AWS_ACCOUNT_ID, OIDC_PROVIDER)
# AWS_ACCOUNT_ID is your AWS account number
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
# If you want to create the role before using the cluster, replace this field too.
# The OIDC provider is in the structure:
# 1) <OIDC-bucket>.s3.<aws-region>.amazonaws.com. for OIDC bucket configurations in an S3 public bucket
# 2) <characters>.cloudfront.net for OIDC bucket configurations in an S3 private bucket with a public CloudFront distribution URL
OIDC_PROVIDER=$(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e "s/^https:\/\///")
# the permission (S3 full access)
POLICY_ARN_STRINGS="arn:aws:iam::aws:policy/AmazonS3FullAccess"

# Creating the role (with AWS command line interface)
read -r -d '' TRUST_RELATIONSHIP <<EOF
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": {
            "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
         },
         "Action": "sts:AssumeRoleWithWebIdentity",
         "Condition": {
            "StringEquals": {
               "${OIDC_PROVIDER}:sub": [
                  "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME_1}",
                  "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME_2}",
                  "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME_3}"
               ]
            }
         }
      }
   ]
}
EOF
echo "${TRUST_RELATIONSHIP}" > trust.json

aws iam create-role --role-name "$ROLE_NAME" --assume-role-policy-document file://trust.json --description "role for demo"

while IFS= read -r POLICY_ARN; do
   echo -n "Attaching $POLICY_ARN ... "
   aws iam attach-role-policy \
      --role-name "$ROLE_NAME" \
      --policy-arn "${POLICY_ARN}"
   echo "ok."
done <<< "$POLICY_ARN_STRINGS"
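One possible way to run the sample (the file name create-role.sh is illustrative): save the script, replace the placeholder variables, and execute it. Note the role ARN that aws iam create-role prints, because you need it when installing the operator.
$ chmod +x create-role.sh
$ ./create-role.sh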
4.1.1. Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster
Prerequisites
- Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials.
- Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script.
Procedure
Install OpenShift Data Foundation Operator from the Operator Hub.
- During the installation, add the role ARN in the ARN Details field.
- Make sure that the Update approval field is set to Manual.
4.1.2. Creating a new AWS STS backingstore
Prerequisites
- Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials.
- Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script.
- Install OpenShift Data Foundation Operator. For more information, see Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster.
Procedure
Install Multicloud Object Gateway (MCG).
It is installed with the default backingstore by using the short-lived credentials.
After the MCG system is ready, you can create more backingstores of the type aws-sts-s3 using the following MCG command line interface command:
$ noobaa backingstore create aws-sts-s3 <backingstore-name> --aws-sts-arn=<aws-sts-role-arn> --region=<region> --target-bucket=<target-bucket>
where:
- <backingstore-name> is the name of the backingstore
- <aws-sts-role-arn> is the AWS STS role ARN that will be assumed
- <region> is the AWS bucket region
- <target-bucket> is the target bucket name on the cloud
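For example, a hypothetical invocation with illustrative values:
$ noobaa backingstore create aws-sts-s3 sts-backingstore \
    --aws-sts-arn=arn:aws:iam::123456789123:role/mcg-sts-role \
    --region=us-east-2 \
    --target-bucket=my-target-bucket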
Chapter 5. View OpenShift Data Foundation Topology
The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster.
Procedure
On the OpenShift Web Console, navigate to Storage → Data Foundation → Topology.
The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts.
- Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon.
To view deployment details
- Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses.
- Click the Back to main view button in the modal's upper left corner to close and return to the previous view.
- Select a specific deployment to see more information about it. All relevant data is shown in the side panel.
- Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting.
- Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
Chapter 6. Uninstalling OpenShift Data Foundation
6.1. Uninstalling OpenShift Data Foundation in Internal mode
To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation.