OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Managing hybrid and multicloud resources
Instructions for how to manage storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa).
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Do let us know how we can make it better.
To give feedback, create a Bugzilla ticket:
- Go to the Bugzilla website.
- In the Component section, choose documentation.
- Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation.
- Click Submit Bug.
Chapter 1. About the Multicloud Object Gateway
The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage.
Chapter 2. Accessing the Multicloud Object Gateway with your applications
You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information.
For information on accessing the RADOS Object Gateway (RGW) S3 endpoint, see Accessing the RADOS Object Gateway S3 endpoint.
Prerequisites
- A running OpenShift Data Foundation Platform.
Download the MCG command-line interface for easier management.
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg

Note: Specify the appropriate architecture for enabling the repositories using the subscription manager.
- For IBM Power, use the following command:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms

- For IBM Z infrastructure, use the following command:

# subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms

Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found on the Download Red Hat OpenShift Data Foundation page.

Note: Choose the correct Product Variant according to your architecture.
You can access the relevant endpoint, access key, and secret access key in two ways:
Example 2.1. Example
- Accessing the MCG bucket(s) using the virtual-hosted style
- If the client application tries to access https://<bucket-name>.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com, where <bucket-name> is the name of the MCG bucket.
- For example, https://mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com
- A DNS entry is needed for mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com to point to the S3 service.
- Ensure that you have a DNS entry in order to point the client application to the MCG bucket(s) using the virtual-hosted style.
2.1. Accessing the Multicloud Object Gateway from the terminal
Procedure
Run the describe command to view information about the Multicloud Object Gateway (MCG) endpoint, including its access key (AWS_ACCESS_KEY_ID value) and secret access key (AWS_SECRET_ACCESS_KEY value).
oc describe noobaa -n openshift-storage
2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface
Prerequisites
Download the MCG command-line interface.
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg

Note: Specify the appropriate architecture for enabling the repositories using the subscription manager.

- For IBM Power, use the following command:

# subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms

- For IBM Z infrastructure, use the following command:

# subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms
Procedure
Run the status command to access the endpoint, access key, and secret access key:
noobaa status -n openshift-storage
You now have the relevant endpoint, access key, and secret access key in order to connect to your applications.
Example 2.2. Example
If the AWS S3 CLI is the application, the following command lists the buckets in OpenShift Data Foundation:
AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls
Chapter 3. Allowing user access to the Multicloud Object Gateway Console
To allow a user access to the Multicloud Object Gateway (MCG) Console, ensure that the user meets the following conditions:
- User is in cluster-admins group.
- User is in system:cluster-admins virtual group.
Prerequisites
- A running OpenShift Data Foundation Platform.
Procedure
Enable access to the MCG console.
Perform the following steps once on the cluster:

- Create a cluster-admins group.

  oc adm groups new cluster-admins

- Bind the group to the cluster-admin role.

  oc adm policy add-cluster-role-to-group cluster-admin cluster-admins

Add or remove users from the cluster-admins group to control access to the MCG console.

- To add a set of users to the cluster-admins group:

  oc adm groups add-users cluster-admins <user-name> <user-name> <user-name>...

  where <user-name> is the name of the user to be added.

  Note: If you are adding a set of users to the cluster-admins group, you do not need to bind the newly added users to the cluster-admin role to allow access to the OpenShift Data Foundation dashboard.

- To remove a set of users from the cluster-admins group:

  oc adm groups remove-users cluster-admins <user-name> <user-name> <user-name>...

  where <user-name> is the name of the user to be removed.
Verification steps
- On the OpenShift Web Console, log in as a user with access permission to the Multicloud Object Gateway Console.
- Navigate to Storage → Data Foundation.
- In the Storage Systems tab, select the storage system and then click Overview → Object tab.
- Select the Multicloud Object Gateway link.
- Click Allow selected permissions.
Chapter 4. Adding storage resources for hybrid or Multicloud
4.1. Creating a new backing store
Use this procedure to create a new backing store in OpenShift Data Foundation.
Prerequisites
- Administrator access to OpenShift Data Foundation.
Procedure
- In the OpenShift Web Console, click Storage → Data Foundation.
- Click the Backing Store tab.
- Click Create Backing Store.
On the Create New Backing Store page, perform the following:
- Enter a Backing Store Name.
- Select a Provider.
- Select a Region.
- Enter an Endpoint. This is optional.
Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets.
For more information on creating an OCP secret, see the section Creating the secret in the OpenShift Container Platform documentation.
Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see Section 4.2, “Adding storage resources for hybrid or Multicloud using the MCG command line interface” and follow the procedure for the addition of storage resources using a YAML.
Note: This menu is relevant for all providers except Google Cloud and local PVC.
- Enter the Target bucket. The target bucket is a container storage bucket hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system.
- Click Create Backing Store.
Verification steps
- In the OpenShift Web Console, click Storage → Data Foundation.
- Click the Backing Store tab to view all the backing stores.
4.2. Adding storage resources for hybrid or Multicloud using the MCG command line interface
The Multicloud Object Gateway (MCG) simplifies the process of spanning data across the cloud provider and clusters.
Add a backing storage that can be used by the MCG.
Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage:
- For creating an AWS-backed backingstore, see Section 4.2.1, “Creating an AWS-backed backingstore”
- For creating an IBM COS-backed backingstore, see Section 4.2.2, “Creating an IBM COS-backed backingstore”
- For creating an Azure-backed backingstore, see Section 4.2.3, “Creating an Azure-backed backingstore”
- For creating a GCP-backed backingstore, see Section 4.2.4, “Creating a GCP-backed backingstore”
- For creating a local Persistent Volume-backed backingstore, see Section 4.2.5, “Creating a local Persistent Volume-backed backingstore”
For VMware deployments, skip to Section 4.3, “Creating an s3 compatible Multicloud Object Gateway backingstore” for further instructions.
4.2.1. Creating an AWS-backed backingstore
Prerequisites
Download the Multicloud Object Gateway (MCG) command-line interface.
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg

Note: Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in the case of IBM Z infrastructure, use the following command:

# subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms

Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here: https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages

Note: Choose the correct Product Variant according to your architecture.
Procedure
From the MCG command-line interface, run the following command:
noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage
- Replace <backingstore_name> with the name of the backingstore.
- Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose.
- Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.

The output will be similar to the following:

INFO[0001] ✅ Exists: NooBaa "noobaa"
INFO[0002] ✅ Created: BackingStore "aws-resource"
INFO[0002] ✅ Created: Secret "backing-store-secret-aws-resource"
You can also add storage resources using a YAML:
Create a secret with the credentials:
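The secret manifest itself was not preserved in this extract. The following is a minimal sketch, using the data keys the NooBaa aws-s3 backing store expects; verify the field names against your installed ODF release:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <backingstore-secret-name>
  namespace: openshift-storage
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64>
  AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>
```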
- You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64>.
- Replace <backingstore-secret-name> with a unique name.
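For example, a credential value can be Base64-encoded on the command line. This is a generic sketch; 'my-access-key' below is a placeholder, not a real credential:

```shell
# Encode a placeholder credential for the secret's data fields.
# Substitute your real access key ID for 'my-access-key'.
printf '%s' 'my-access-key' | base64
# → bXktYWNjZXNzLWtleQ==

# Decode to verify the round trip.
printf '%s' 'bXktYWNjZXNzLWtleQ==' | base64 -d
# → my-access-key
```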
Apply the following YAML for a specific backing store:
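The BackingStore manifest was stripped from this extract. A sketch of the aws-s3 variant, based on the noobaa.io/v1alpha1 CRD from the upstream NooBaa operator (check the spec fields against the CRD installed on your cluster):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  finalizers:
  - noobaa.io/finalizer
  labels:
    app: noobaa
  name: bs
  namespace: openshift-storage
spec:
  awsS3:
    secret:
      name: <backingstore-secret-name>
      namespace: openshift-storage
    targetBucket: <bucket-name>
  type: aws-s3
```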
- Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
- Replace <backingstore-secret-name> with the name of the secret created in the previous step.
4.2.2. Creating an IBM COS-backed backingstore
Prerequisites
Download the Multicloud Object Gateway (MCG) command-line interface.
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg

Note: Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance:

- For IBM Power, use the following command:

# subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms

- For IBM Z infrastructure, use the following command:

# subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms

Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here: https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages

Note: Choose the correct Product Variant according to your architecture.
Procedure
From the MCG command-line interface, run the following command:
noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage
- Replace <backingstore_name> with the name of the backingstore.
- Replace <IBM ACCESS KEY>, <IBM SECRET ACCESS KEY>, and <IBM COS ENDPOINT> with an IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket.
  To generate the above keys on IBM Cloud, you must include HMAC credentials while creating the service credentials for your target bucket.
- Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.

The output will be similar to the following:

INFO[0001] ✅ Exists: NooBaa "noobaa"
INFO[0002] ✅ Created: BackingStore "ibm-resource"
INFO[0002] ✅ Created: Secret "backing-store-secret-ibm-resource"
You can also add storage resources using a YAML:
Create a secret with the credentials:
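The secret manifest was not preserved in this extract. A minimal sketch, using the data keys the NooBaa ibm-cos backing store expects (verify against your ODF release):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <backingstore-secret-name>
  namespace: openshift-storage
type: Opaque
data:
  IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64>
  IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>
```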
- You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>.
- Replace <backingstore-secret-name> with a unique name.
Apply the following YAML for a specific backing store:
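The BackingStore manifest was stripped from this extract. A sketch of the ibm-cos variant, based on the noobaa.io/v1alpha1 CRD, which reuses the s3Compatible spec block (check the fields and the signatureVersion against your cluster's CRD):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  finalizers:
  - noobaa.io/finalizer
  labels:
    app: noobaa
  name: bs
  namespace: openshift-storage
spec:
  s3Compatible:
    endpoint: <endpoint>
    secret:
      name: <backingstore-secret-name>
      namespace: openshift-storage
    signatureVersion: v2
    targetBucket: <bucket-name>
  type: ibm-cos
```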
- Replace <bucket-name> with an existing IBM COS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
- Replace <endpoint> with a regional endpoint that corresponds to the location of the existing IBM bucket name. This argument tells the Multicloud Object Gateway which endpoint to use for its backing store, and subsequently, data storage and administration.
- Replace <backingstore-secret-name> with the name of the secret created in the previous step.
4.2.3. Creating an Azure-backed backingstore
Prerequisites
Download the Multicloud Object Gateway (MCG) command-line interface.
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg

Note: Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in the case of IBM Z infrastructure, use the following command:

# subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms

Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here: https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages

Note: Choose the correct Product Variant according to your architecture.
Procedure
From the MCG command-line interface, run the following command:
noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name> -n openshift-storage
- Replace <backingstore_name> with the name of the backingstore.
- Replace <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> with an Azure account key and account name you created for this purpose.
- Replace <blob container name> with an existing Azure blob container name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.

The output will be similar to the following:

INFO[0001] ✅ Exists: NooBaa "noobaa"
INFO[0002] ✅ Created: BackingStore "azure-resource"
INFO[0002] ✅ Created: Secret "backing-store-secret-azure-resource"
You can also add storage resources using a YAML:
Create a secret with the credentials:
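The secret manifest was not preserved in this extract. A minimal sketch, using the data keys the NooBaa azure-blob backing store expects (verify against your ODF release):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <backingstore-secret-name>
  namespace: openshift-storage
type: Opaque
data:
  AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64>
  AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>
```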
- You must supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64>.
- Replace <backingstore-secret-name> with a unique name.
Apply the following YAML for a specific backing store:
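The BackingStore manifest was stripped from this extract. A sketch of the azure-blob variant, based on the noobaa.io/v1alpha1 CRD (check the spec fields against your cluster's CRD):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  finalizers:
  - noobaa.io/finalizer
  labels:
    app: noobaa
  name: bs
  namespace: openshift-storage
spec:
  azureBlob:
    secret:
      name: <backingstore-secret-name>
      namespace: openshift-storage
    targetBlobContainer: <blob-container-name>
  type: azure-blob
```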
- Replace <blob-container-name> with an existing Azure blob container name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
- Replace <backingstore-secret-name> with the name of the secret created in the previous step.
4.2.4. Creating a GCP-backed backingstore
Prerequisites
Download the Multicloud Object Gateway (MCG) command-line interface.
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg

Note: Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in the case of IBM Z infrastructure, use the following command:

# subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms

Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here: https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages

Note: Choose the correct Product Variant according to your architecture.
Procedure
From the MCG command-line interface, run the following command:
noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage
- Replace <backingstore_name> with the name of the backingstore.
- Replace <PATH TO GCP PRIVATE KEY JSON FILE> with a path to your GCP private key created for this purpose.
- Replace <GCP bucket name> with an existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.

The output will be similar to the following:

INFO[0001] ✅ Exists: NooBaa "noobaa"
INFO[0002] ✅ Created: BackingStore "google-gcp"
INFO[0002] ✅ Created: Secret "backing-store-google-cloud-storage-gcp"
You can also add storage resources using a YAML:
Create a secret with the credentials:
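The secret manifest was not preserved in this extract. A minimal sketch, using the data key the NooBaa google-cloud-storage backing store expects (verify against your ODF release):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <backingstore-secret-name>
  namespace: openshift-storage
type: Opaque
data:
  GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>
```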
- You must supply and encode your own GCP service account private key using Base64, and use the results in place of <GCP PRIVATE KEY ENCODED IN BASE64>.
- Replace <backingstore-secret-name> with a unique name.
Apply the following YAML for a specific backing store:
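The BackingStore manifest was stripped from this extract. A sketch of the google-cloud-storage variant, based on the noobaa.io/v1alpha1 CRD (check the spec fields against your cluster's CRD):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  finalizers:
  - noobaa.io/finalizer
  labels:
    app: noobaa
  name: bs
  namespace: openshift-storage
spec:
  googleCloudStorage:
    secret:
      name: <backingstore-secret-name>
      namespace: openshift-storage
    targetBucket: <target bucket>
  type: google-cloud-storage
```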
- Replace <target bucket> with an existing Google storage bucket. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
- Replace <backingstore-secret-name> with the name of the secret created in the previous step.
4.2.5. Creating a local Persistent Volume-backed backingstore
Prerequisites
Download the Multicloud Object Gateway (MCG) command-line interface.
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg

Note: Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in the case of IBM Z infrastructure, use the following command:

# subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms

Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here: https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages

Note: Choose the correct Product Variant according to your architecture.
Procedure
From the MCG command-line interface, run the following command:
Note: This command must be run from within the openshift-storage namespace.

noobaa backingstore create pv-pool <backingstore_name> --num-volumes=<NUMBER OF VOLUMES> --pv-size-gb=<VOLUME SIZE> --storage-class=<LOCAL STORAGE CLASS> -n openshift-storage

- Replace <backingstore_name> with the name of the backingstore.
- Replace <NUMBER OF VOLUMES> with the number of volumes you would like to create. Note that increasing the number of volumes scales up the storage.
- Replace <VOLUME SIZE> with the required size, in GB, of each volume.
- Replace <LOCAL STORAGE CLASS> with the local storage class; it is recommended to use ocs-storagecluster-ceph-rbd.

The output will be similar to the following:

INFO[0001] ✅ Exists: NooBaa "noobaa"
INFO[0002] ✅ Exists: BackingStore "local-mcg-storage"
You can also add storage resources using a YAML:
Apply the following YAML for a specific backing store:
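The BackingStore manifest was stripped from this extract. A sketch of the pv-pool variant, based on the noobaa.io/v1alpha1 CRD; the storage quantity keeps the trailing G, matching the note below (check the spec fields against your cluster's CRD):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  finalizers:
  - noobaa.io/finalizer
  labels:
    app: noobaa
  name: <backingstore_name>
  namespace: openshift-storage
spec:
  pvPool:
    numVolumes: <NUMBER OF VOLUMES>
    resources:
      requests:
        storage: <VOLUME SIZE>G
    storageClass: <LOCAL STORAGE CLASS>
  type: pv-pool
```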
- Replace <backingstore_name> with the name of the backingstore.
- Replace <NUMBER OF VOLUMES> with the number of volumes you would like to create. Note that increasing the number of volumes scales up the storage.
- Replace <VOLUME SIZE> with the required size, in GB, of each volume. Note that the letter G should remain.
- Replace <LOCAL STORAGE CLASS> with the local storage class; it is recommended to use ocs-storagecluster-ceph-rbd.
4.3. Creating an s3 compatible Multicloud Object Gateway backingstore
The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage’s RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage’s RGW. Note that when the RGW is deployed, the OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically.
Procedure
From the MCG command-line interface, run the following command:
Note: This command must be run from within the openshift-storage namespace.

noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage

To get the <RGW ACCESS KEY> and <RGW SECRET KEY>, run the following command using your RGW user secret name:

oc get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage

- Decode the access key ID and the access key from Base64 and keep them.
- Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the previous step.
- Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
- To get the <RGW endpoint>, see Accessing the RADOS Object Gateway S3 endpoint.

The output will be similar to the following:

INFO[0001] ✅ Exists: NooBaa "noobaa"
INFO[0002] ✅ Created: BackingStore "rgw-resource"
INFO[0002] ✅ Created: Secret "backing-store-secret-rgw-resource"
You can also create the backingstore using a YAML:
Create a CephObjectStore user. This also creates a secret containing the RGW credentials:
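The CephObjectStoreUser manifest was stripped from this extract. A sketch following the Rook ceph.rook.io/v1 CRD; the store name ocs-storagecluster-cephobjectstore is the ODF default object store and is an assumption here:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: <RGW-Username>
  namespace: openshift-storage
spec:
  store: ocs-storagecluster-cephobjectstore
  displayName: "<Display-name>"
```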
- Replace <RGW-Username> and <Display-name> with a unique username and display name.
Apply the following YAML for an S3-Compatible backing store:
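The BackingStore manifest was stripped from this extract. A sketch of the s3-compatible variant, based on the noobaa.io/v1alpha1 CRD (check the spec fields and the signatureVersion against your cluster's CRD):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  finalizers:
  - noobaa.io/finalizer
  labels:
    app: noobaa
  name: <backingstore-name>
  namespace: openshift-storage
spec:
  s3Compatible:
    endpoint: <RGW endpoint>
    secret:
      name: <backingstore-secret-name>
      namespace: openshift-storage
    signatureVersion: v4
    targetBucket: <bucket-name>
  type: s3-compatible
```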
- Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the previous step.
- Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
- To get the <RGW endpoint>, see Accessing the RADOS Object Gateway S3 endpoint.
4.4. Adding storage resources for hybrid and Multicloud using the user interface
Procedure
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Storage Systems tab, select the storage system and then click Overview → Object tab.
- Select the Multicloud Object Gateway link.
Select the Resources tab on the left. From the list that populates, select Add Cloud Resource.
Select Add new connection.
Select the relevant native cloud provider or S3 compatible option and fill in the details.
Select the newly created connection and map it to the existing bucket.
- Repeat these steps to create as many backing stores as needed.
Resources created in NooBaa UI cannot be used by OpenShift UI or MCG CLI.
4.5. Creating a new bucket class
Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Claim (OBC).
Use this procedure to create a bucket class in OpenShift Data Foundation.
Procedure
- In the OpenShift Web Console, click Storage → Data Foundation.
- Click the Bucket Class tab.
- Click Create Bucket Class.
On the Create new Bucket Class page, perform the following:
Select the bucket class type and enter a bucket class name.
Select the BucketClass type. Choose one of the following options:
- Standard: data is consumed by the Multicloud Object Gateway (MCG), deduplicated, compressed, and encrypted.
- Namespace: data is stored on the NamespaceStores without performing de-duplication, compression, or encryption.

By default, Standard is selected.
- Enter a Bucket Class Name.
- Click Next.
In Placement Policy, select Tier 1 - Policy Type and click Next. You can choose either of the options as per your requirements.
- Spread allows spreading of the data across the chosen resources.
- Mirror allows full duplication of the data across the chosen resources.
- Click Add Tier to add another policy tier.
Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click Next. Alternatively, you can also create a new backing store.
Note: You need to select at least two backing stores when you select the Policy Type as Mirror in the previous step.
- Review and confirm Bucket Class settings.
- Click Create Bucket Class.
Verification steps
- In the OpenShift Web Console, click Storage → Data Foundation.
- Click the Bucket Class tab and search for the new Bucket Class.
4.6. Editing a bucket class
Use the following procedure to edit the bucket class components through the YAML file by clicking the Edit button in the OpenShift Web Console.
Prerequisites
- Administrator access to OpenShift Web Console.
Procedure
- In the OpenShift Web Console, click Storage → Data Foundation.
- Click the Bucket Class tab.
- Click the Action Menu (⋮) next to the Bucket class you want to edit.
- Click Edit Bucket Class.
- You are redirected to the YAML file. Make the required changes in this file and click Save.
4.7. Editing backing stores for bucket class
Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class.
Prerequisites
- Administrator access to OpenShift Web Console.
- A bucket class.
- Backing stores.
Procedure
- In the OpenShift Web Console, click Storage → Data Foundation.
- Click the Bucket Class tab.
Click the Action Menu (⋮) next to the Bucket class you want to edit.
- Click Edit Bucket Class Resources.
On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies.
- To add a backing store to the bucket class, select the name of the backing store.
To remove a backing store from the bucket class, clear the name of the backing store.
- Click Save.
Chapter 5. Managing namespace buckets
Namespace buckets let you connect data repositories on different providers together, so you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider.
A namespace bucket can only be used if its write target is available and functional.
5.1. Amazon S3 API endpoints for objects in namespace buckets
You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API.
Red Hat OpenShift Data Foundation 4.6 onwards supports namespace bucket operations through the Amazon S3 API. See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them.
5.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML
For more information about namespace buckets, see Managing namespace buckets.
Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket:
5.2.1. Adding an AWS S3 namespace bucket using YAML
Prerequisites
- A running OpenShift Data Foundation Platform
- Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications.
Procedure
Create a secret with the credentials:
- You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64>.
- Replace <namespacestore-secret-name> with a unique name.
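A sketch of the secret, assuming the standard Kubernetes Secret layout and the credential key names MCG expects for AWS (verify against your version):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <namespacestore-secret-name>
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64>
  AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>
```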
Create a NamespaceStore resource using OpenShift Custom Resource Definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML:
- Replace <resource-name> with the name you want to give to the resource.
- Replace <namespacestore-secret-name> with the secret created in step 1.
- Replace <namespace-secret> with the namespace where the secret can be found.
- Replace <target-bucket> with the target bucket you created for the NamespaceStore.
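A sketch of the NamespaceStore resource, assuming the noobaa.io/v1alpha1 CRD schema for an aws-s3 store (field names may vary by version):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  name: <resource-name>
  namespace: openshift-storage
spec:
  type: aws-s3
  awsS3:
    secret:
      name: <namespacestore-secret-name>   # secret created in step 1
      namespace: <namespace-secret>
    targetBucket: <target-bucket>
```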
Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.

A namespace policy of type single requires the following configuration:
- Replace <my-bucket-class> with a unique namespace bucket class name.
- Replace <resource> with the name of a single namespace-store that defines the read and write target of the namespace bucket.
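A sketch of the single-policy bucket class, assuming the namespacePolicy fields of the NooBaa BucketClass CRD (verify the exact schema for your version):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: <my-bucket-class>
  namespace: openshift-storage
spec:
  namespacePolicy:
    type: Single
    single:
      resource: <resource>    # read and write target namespace-store
```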
A namespace policy of type multi requires the following configuration:
- Replace <my-bucket-class> with a unique bucket class name.
- Replace <write-resource> with the name of a single namespace-store that defines the write target of the namespace bucket.
- Replace <read-resources> with a list of the names of the namespace-stores that define the read targets of the namespace bucket.
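A sketch of the multi-policy bucket class, assuming the same BucketClass CRD schema (field names may vary by version):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: <my-bucket-class>
  namespace: openshift-storage
spec:
  namespacePolicy:
    type: Multi
    multi:
      writeResource: <write-resource>   # single write target
      readResources:                    # one or more read targets
        - <read-resources>
```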
Apply the following YAML to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2.
- Replace <resource-name> with the name you want to give to the resource.
- Replace <my-bucket> with the name you want to give to the bucket.
- Replace <my-bucket-class> with the bucket class created in the previous step.
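A sketch of the OBC, assuming the objectbucket.io/v1alpha1 schema and the default MCG object bucket storage class name (both may differ in your deployment):

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <resource-name>
  namespace: openshift-storage
spec:
  generateBucketName: <my-bucket>
  storageClassName: openshift-storage.noobaa.io   # assumed MCG storage class
  additionalConfig:
    bucketclass: <my-bucket-class>                # bucket class from step 2
```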
Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as the OBC.
5.2.2. Adding an IBM COS namespace bucket using YAML
Prerequisites
- A running OpenShift Data Foundation Platform.
- Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications.
Procedure
Create a secret with the credentials:
- You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>.
- Replace <namespacestore-secret-name> with a unique name.
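A sketch of the secret, assuming the standard Kubernetes Secret layout and the credential key names MCG expects for IBM COS (verify against your version):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <namespacestore-secret-name>
type: Opaque
data:
  IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64>
  IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>
```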
Create a NamespaceStore resource using OpenShift Custom Resource Definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML:
- Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint.
- Replace <namespacestore-secret-name> with the secret created in step 1.
- Replace <namespace-secret> with the namespace where the secret can be found.
- Replace <target-bucket> with the target bucket you created for the NamespaceStore.
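A sketch of the NamespaceStore resource for IBM COS, assuming the noobaa.io/v1alpha1 CRD schema; the resource name ibm-resource is a hypothetical example, and field names may vary by version:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  name: ibm-resource                       # hypothetical resource name
  namespace: openshift-storage
spec:
  type: ibm-cos
  s3Compatible:
    endpoint: <IBM COS ENDPOINT>           # regional IBM COS endpoint
    secret:
      name: <namespacestore-secret-name>   # secret created in step 1
      namespace: <namespace-secret>
    signatureVersion: v2
    targetBucket: <target-bucket>
```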
Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.

A namespace policy of type single requires the following configuration:
- Replace <my-bucket-class> with a unique namespace bucket class name.
- Replace <resource> with the name of a single namespace-store that defines the read and write target of the namespace bucket.
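A sketch of the single-policy bucket class, assuming the namespacePolicy fields of the NooBaa BucketClass CRD (verify the exact schema for your version):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: <my-bucket-class>
  namespace: openshift-storage
spec:
  namespacePolicy:
    type: Single
    single:
      resource: <resource>    # read and write target namespace-store
```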
A namespace policy of type multi requires the following configuration:
- Replace <my-bucket-class> with a unique bucket class name.
- Replace <write-resource> with the name of a single namespace-store that defines the write target of the namespace bucket.
- Replace <read-resources> with a list of the names of namespace-stores that define the read targets of the namespace bucket.
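A sketch of the multi-policy bucket class, assuming the same BucketClass CRD schema (field names may vary by version):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: <my-bucket-class>
  namespace: openshift-storage
spec:
  namespacePolicy:
    type: Multi
    multi:
      writeResource: <write-resource>   # single write target
      readResources:                    # one or more read targets
        - <read-resources>
```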
Apply the following YAML to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2.
- Replace <resource-name> with the name you want to give to the resource.
- Replace <my-bucket> with the name you want to give to the bucket.
- Replace <my-bucket-class> with the bucket class created in the previous step.
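A sketch of the OBC, assuming the objectbucket.io/v1alpha1 schema and the default MCG object bucket storage class name (both may differ in your deployment):

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <resource-name>
  namespace: openshift-storage
spec:
  generateBucketName: <my-bucket>
  storageClassName: openshift-storage.noobaa.io   # assumed MCG storage class
  additionalConfig:
    bucketclass: <my-bucket-class>                # bucket class from step 2
```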
Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as the OBC.
5.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI
Prerequisites
- A running OpenShift Data Foundation Platform.
- Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications.
- Download the MCG command-line interface:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg
Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, for IBM Z infrastructure, use the following command:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms
Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package.
Choose the correct Product Variant according to your architecture.
Procedure
Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. From the MCG command-line interface, run the following command:
$ noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage
- Replace <namespacestore> with the name of the NamespaceStore.
- Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose.
- Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.

Run the following command to create a namespace bucket class with a namespace policy of type single:

$ noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage

- Replace <my-bucket-class> with a unique bucket class name.
- Replace <resource> with a single namespace-store that defines the read and write target of the namespace bucket.
Run the following command to create a namespace bucket class with a namespace policy of type multi:

$ noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage

- Replace <my-bucket-class> with a unique bucket class name.
- Replace <write-resource> with a single namespace-store that defines the write target of the namespace bucket.
- Replace <read-resources> with a comma-separated list of namespace-stores that defines the read targets of the namespace bucket.
Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2.

$ noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>

- Replace my-bucket-claim with a bucket claim name of your choice.
- Replace <custom-bucket-class> with the name of the bucket class created in step 2.
Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as the OBC.
5.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI
Prerequisites
- A running OpenShift Data Foundation Platform.
- Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications.
Download the MCG command-line interface:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg

Note: Specify the appropriate architecture for enabling the repositories using subscription manager.

- For IBM Power, use the following command:

  # subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms

- For IBM Z infrastructure, use the following command:

  # subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms

Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package.

Note: Choose the correct Product Variant according to your architecture.
Procedure
Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. From the MCG command-line interface, run the following command:
$ noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage

- Replace <namespacestore> with the name of the NamespaceStore.
- Replace <IBM ACCESS KEY>, <IBM SECRET ACCESS KEY>, and <IBM COS ENDPOINT> with an IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket.
- Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.

Run the following command to create a namespace bucket class with a namespace policy of type single:

$ noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage

- Replace <my-bucket-class> with a unique bucket class name.
- Replace <resource> with a single namespace-store that defines the read and write target of the namespace bucket.
Run the following command to create a namespace bucket class with a namespace policy of type multi:

$ noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage

- Replace <my-bucket-class> with a unique bucket class name.
- Replace <write-resource> with a single namespace-store that defines the write target of the namespace bucket.
- Replace <read-resources> with a comma-separated list of namespace-stores that defines the read targets of the namespace bucket.
Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2.

$ noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>

- Replace my-bucket-claim with a bucket claim name of your choice.
- Replace <custom-bucket-class> with the name of the bucket class created in step 2.
Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as the OBC.
5.3. Adding a namespace bucket using the OpenShift Container Platform user interface
Starting with OpenShift Data Foundation 4.8, namespace buckets can be added using the OpenShift Container Platform user interface. For more information about namespace buckets, see Managing namespace buckets.
Prerequisites
- OpenShift Container Platform with the OpenShift Data Foundation operator installed.
- Access to the Multicloud Object Gateway (MCG).
Procedure
- Log into the OpenShift Web Console.
- Click Storage → Data Foundation.
Click the Namespace Store tab to create namespacestore resources to be used in the namespace bucket.
- Click Create namespace store.
- Enter a namespacestore name.
- Choose a provider.
- Choose a region.
- Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key.
- Choose a target bucket.
- Click Create.
- Verify the namespacestore is in the Ready state.
- Repeat these steps until you have the desired number of resources.
Click the Bucket Class tab → Create a new Bucket Class.
- Select the Namespace radio button.
- Enter a Bucket Class name.
- Add a description (optional).
- Click Next.
- Choose a namespace policy type for your namespace bucket, and then click Next.
Select the target resource(s).
- If your namespace policy type is Single, you need to choose a read resource.
- If your namespace policy type is Multi, you need to choose read resources and a write resource.
- If your namespace policy type is Cache, you need to choose a Hub namespace store that defines the read and write target of the namespace bucket.
- Click Next.
- Review your new bucket class, and then click Create Bucketclass.
- On the BucketClass page, verify that your newly created resource is in the Created phase.
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card, click Storage System and click the storage system link from the pop up that appears.
- In the Object tab, click Multicloud Object Gateway → Buckets → Namespace Buckets tab.
Click Create Namespace Bucket.
- On the Choose Name tab, specify a Name for the namespace bucket and click Next.
On the Set Placement tab:
- Under Read Policy, select the checkbox for each namespace resource created in step 5 that the namespace bucket should read data from.
- If the namespace policy type you are using is Multi, then under Write Policy, specify which namespace resource the namespace bucket should write data to.
- Click Next.
- Click Create.
Verification
- Verify that the namespace bucket is listed with a green check mark in the State column, the expected number of read resources, and the expected write resource name.
5.4. Sharing legacy application data with cloud native application using S3 protocol
Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using S3 operations. To share data you need to:
- Export the pre-existing file system datasets, that is, an RWX volume such as Ceph FileSystem (CephFS), or create new file system datasets using the S3 protocol.
- Access file system datasets from both the file system and the S3 protocol.
- Configure S3 accounts and map them to existing or new file system user identifiers (UIDs) and group identifiers (GIDs).
5.4.1. Creating a NamespaceStore to use a file system
Prerequisites
- OpenShift Container Platform with the OpenShift Data Foundation operator installed.
- Access to the Multicloud Object Gateway (MCG).
Procedure
- Log into the OpenShift Web Console.
- Click Storage → Data Foundation.
- Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket.
- Click Create namespacestore.
- Enter a name for the NamespaceStore.
- Choose Filesystem as the provider.
- Choose the Persistent volume claim.
Enter a folder name.

If the folder name exists, that folder is used to create the NamespaceStore; otherwise, a folder with that name is created.
- Click Create.
- Verify the NamespaceStore is in the Ready state.
5.4.2. Creating accounts with NamespaceStore filesystem configuration
You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML.
You cannot remove a NamespaceStore filesystem configuration from an account.
Prerequisites
Download the Multicloud Object Gateway (MCG) command-line interface:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg
Procedure
Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface.
$ noobaa account create <noobaa-account-name> [flags]

For example:
$ noobaa account create testaccount --full_permission --nsfs_account_config --gid 10001 --uid 10001 --default_resource fs_namespacestore

allow_bucket_create
Indicates whether the account is allowed to create new buckets. Supported values are true or false. The default value is true.

allowed_buckets
A comma-separated list of bucket names to which the user is allowed to have access and management rights.

default_resource
The NamespaceStore resource on which new buckets are created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC).

full_permission
Indicates whether the account should be allowed full permission or not. Supported values are true or false. The default value is false.

new_buckets_path
The filesystem path where directories corresponding to new buckets are created. The path is inside the filesystem of NamespaceStore filesystem PVCs, where new directories are created to act as the filesystem mapping of newly created object bucket classes.

nsfs_account_config
A mandatory field that indicates whether the account is used for NamespaceStore filesystem.

nsfs_only
Indicates whether the account is used only for NamespaceStore filesystem or not. Supported values are true or false. The default value is false. If it is set to true, it limits you from accessing other types of buckets.

uid
The user ID of the filesystem to which the MCG account is mapped; it is used to access and manage data on the filesystem.

gid
The group ID of the filesystem to which the MCG account is mapped; it is used to access and manage data on the filesystem.
The MCG system sends a response with the account configuration and its S3 credentials:
You can list all the custom resource definition (CRD) based accounts by using the following command:
$ noobaa account list
NAME          ALLOWED_BUCKETS   DEFAULT_RESOURCE               PHASE   AGE
testaccount   [*]               noobaa-default-backing-store   Ready   1m17s

If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name:
5.4.3. Accessing legacy application data from the openshift-storage namespace
When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses.
In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses.
Procedure
Display the application namespace with scc:

$ oc get ns <application_namespace> -o yaml | grep scc

<application_namespace>
Specify the name of the application namespace.
Example 5.1. Example
$ oc get ns testnamespace -o yaml | grep scc

Example 5.2. Example output
openshift.io/sa.scc.mcs: s0:c26,c5
openshift.io/sa.scc.supplemental-groups: 1000660000/10000
openshift.io/sa.scc.uid-range: 1000660000/10000
Navigate into the application namespace:
$ oc project <application_namespace>

Example 5.3. Example
$ oc project testnamespace

Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the NooBaa S3 endpoint using the MCG NSFS feature:
$ oc get pvc

Example 5.4. Example output
NAME                                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
cephfs-write-workload-generator-no-cache-pv-claim   Bound    pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a   10Gi       RWX            ocs-storagecluster-cephfs   12s

$ oc get pod

Example 5.5. Example output
NAME                                               READY   STATUS    RESTARTS   AGE
cephfs-write-workload-generator-no-cache-1-cv892   1/1     Running   0          11s

Check the mount point of the Persistent Volume (PV) inside your pod.
Get the volume name of the PV from the pod:
$ oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'

<pod_name>
Specify the name of the pod.
Example 5.6. Example
$ oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}'

Example 5.7. Example output
{"name":"app-persistent-storage","persistentVolumeClaim":{"claimName":"cephfs-write-workload-generator-no-cache-pv-claim"}}

In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim.
List all the mounts in the pod, and check for the mount point of the volume that you identified in the previous step:
$ oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'

Example 5.8. Example
$ oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}'

Example 5.9. Example output
[{"mountPath":"/mnt/pv","name":"app-persistent-storage"},{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"kube-api-access-8tnc5","readOnly":true}]
Confirm the mount point of the RWX PV in your pod:
$ oc exec -it <pod_name> -- df <mount_path>
<mount_path>- Specify the path to the mount point that you identified in the previous step.
Example 5.10. Example
$ oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv
Example 5.11. Example output
Filesystem                                                                                                                                                1K-blocks  Used  Available  Use%  Mounted on
172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c  10485760   0     10485760   0%    /mnt/pv
Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses:
$ oc exec -it <pod_name> -- ls -latrZ <mount_path>
Example 5.12. Example
$ oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/
Example 5.13. Example output
total 567
drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c26,c5      2 May 25 06:35 .
-rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log
drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c26,c5     30 May 25 06:35 ..
Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace:
$ oc get pv | grep <pv_name>
<pv_name>- Specify the name of the PV.
Example 5.14. Example
$ oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a
Example 5.15. Example output
pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a   10Gi   RWX   Delete   Bound   testnamespace/cephfs-write-workload-generator-no-cache-pv-claim   ocs-storagecluster-cephfs   47s
Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC.
Find the values of the subvolumePath and volumeHandle from the volumeAttributes. You can get these values from the YAML description of the legacy application PV:
$ oc get pv <pv_name> -o yaml
Example 5.16. Example
$ oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml
Example 5.17. Example output
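In the PV YAML, the values of interest appear under spec.csi, roughly in the following form. This is a trimmed sketch: the volumeHandle value is illustrative, while the subvolumePath matches the CephFS mount seen in the earlier df output.

```yaml
spec:
  csi:
    driver: openshift-storage.cephfs.csi.ceph.com
    # illustrative handle; take the real value from your own PV
    volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213
    volumeAttributes:
      subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c
```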
Use the subvolumePath and volumeHandle values that you identified in the previous step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV:
Example 5.18. Example YAML file
- 1
- The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV.
- 2
- The volume handle for the target PV that you create in openshift-storage needs to have a different handle than the original application PV, for example, add -clone at the end of the volume handle.
- 3
- The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC.
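The callouts above refer to a PV and PVC pair of roughly the following shape. This is a minimal sketch assuming a statically provisioned CephFS CSI volume: the rootPath must be the subvolumePath from the previous step, the volumeHandle must be the original handle with -clone appended, and the secret name, fsName, and handle value are illustrative and depend on your cluster.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv-legacy-openshift-storage
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Gi                    # 1 - same as the original PV
  csi:
    driver: openshift-storage.cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: rook-csi-cephfs-node     # illustrative secret name
      namespace: openshift-storage
    volumeAttributes:
      clusterID: openshift-storage
      fsName: ocs-storagecluster-cephfilesystem
      staticVolume: "true"
      # subvolumePath from the previous step
      rootPath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c
    # 2 - original volumeHandle with -clone appended (illustrative value)
    volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-legacy
  namespace: openshift-storage
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi                  # 3 - same as the original PVC
  volumeMode: Filesystem
  volumeName: cephfs-pv-legacy-openshift-storage
```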
Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the previous step:
$ oc create -f <YAML_file>
<YAML_file>- Specify the name of the YAML file.
Example 5.19. Example
$ oc create -f pv-openshift-storage.yaml
Example 5.20. Example output
persistentvolume/cephfs-pv-legacy-openshift-storage created
persistentvolumeclaim/cephfs-pvc-legacy created
Ensure that the PVC is available in the openshift-storage namespace:
$ oc get pvc -n openshift-storage
Example 5.21. Example output
NAME                STATUS   VOLUME                               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc-legacy   Bound    cephfs-pv-legacy-openshift-storage   10Gi       RWX                           14s
Navigate into the openshift-storage project:
$ oc project openshift-storage
Example 5.22. Example output
Now using project "openshift-storage" on server "https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443".
Create the NSFS namespacestore:
$ noobaa namespacestore create nsfs <nsfs_namespacestore> --pvc-name='<cephfs_pvc_name>' --fs-backend='CEPH_FS'
<nsfs_namespacestore>- Specify the name of the NSFS namespacestore.
<cephfs_pvc_name>- Specify the name of the CephFS PVC in the openshift-storage namespace.
Example 5.23. Example
$ noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'
Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example, the /nsfs/legacy-namespace mount point:
$ oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/<nsfs_namespacestore>
<noobaa_endpoint_pod_name>- Specify the name of the noobaa-endpoint pod.
Example 5.24. Example
$ oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace
Example 5.25. Example output
Filesystem                                                                                                                                                Size  Used  Avail  Use%  Mounted on
172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c   10G     0   10G    0%  /nsfs/legacy-namespace
Create an MCG user account:
$ noobaa account create <user_account> --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'
<user_account>- Specify the name of the MCG user account.
<gid_number>- Specify the GID number.
<uid_number>- Specify the UID number.
Example 5.26. Example
Important
Use the same UID and GID as those of the legacy application. You can find them in the previous output.
$ noobaa account create leguser --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'
Create an MCG bucket.
Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod:
$ oc exec -it <pod_name> -- mkdir <mount_path>/nsfs
Example 5.27. Example
$ oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs
Create the MCG bucket using the nsfs/ path:
Example 5.28. Example
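The exact invocation depends on your MCG CLI version; using the noobaa bucket API, the call looks roughly like this. The bucket and namespacestore names follow the earlier examples and are illustrative.

```
noobaa api bucket_api create_bucket '{
  "name": "legacy-bucket",
  "namespace": {
    "write_resource": { "resource": "legacy-namespace", "path": "nsfs/" },
    "read_resources": [ { "resource": "legacy-namespace", "path": "nsfs/" } ]
  }
}'
```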
Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces:
$ oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/<nsfs_namespacestore>
Example 5.29. Example
$ oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace
Example 5.30. Example output
total 567
drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c0,c26      2 May 25 06:35 .
-rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log
drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c0,c26     30 May 25 06:35 ..
$ oc exec -it <pod_name> -- ls -latrZ <mount_path>
Example 5.31. Example
$ oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/
Example 5.32. Example output
total 567
drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c26,c5      2 May 25 06:35 .
-rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log
drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c26,c5     30 May 25 06:35 ..
In these examples, you can see that the SELinux labels are not the same, which results in permission denied or access issues.
Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files. You can do this in one of two ways:
Delete the NSFS namespacestore:
Delete the MCG bucket:
$ noobaa bucket delete <bucket_name>
Example 5.33. Example
$ noobaa bucket delete legacy-bucket
Delete the MCG user account:
$ noobaa account delete <user_account>
Example 5.34. Example
$ noobaa account delete leguser
Delete the NSFS namespacestore:
$ noobaa namespacestore delete <nsfs_namespacestore>
Example 5.35. Example
$ noobaa namespacestore delete legacy-namespace
Delete the PV and PVC:
Important
Before you delete the PV and PVC, ensure that the PV has a retain policy configured.
$ oc delete pv <cephfs_pv_name>
$ oc delete pvc <cephfs_pvc_name>
<cephfs_pv_name>- Specify the CephFS PV name of the legacy application.
<cephfs_pvc_name>- Specify the CephFS PVC name of the legacy application.
Example 5.36. Example
$ oc delete pv cephfs-pv-legacy-openshift-storage
$ oc delete pvc cephfs-pvc-legacy
5.4.3.1. Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project
Display the current openshift-storage namespace with sa.scc.mcs:
$ oc get ns openshift-storage -o yaml | grep sa.scc.mcs
Example 5.37. Example output
openshift.io/sa.scc.mcs: s0:c26,c0
Edit the legacy application namespace, and modify the sa.scc.mcs with the value from the sa.scc.mcs of the openshift-storage namespace:
$ oc edit ns <application_namespace>
Example 5.38. Example
$ oc edit ns testnamespace
Verify the updated value:
$ oc get ns <application_namespace> -o yaml | grep sa.scc.mcs
Example 5.39. Example
$ oc get ns testnamespace -o yaml | grep sa.scc.mcs
Example 5.40. Example output
openshift.io/sa.scc.mcs: s0:c26,c0
Restart the legacy application pod. A relabel of all the files takes place and the SELinux labels now match the openshift-storage deployment.
5.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC
Create a new scc with the MustRunAs and seLinuxOptions options, using the Multi Category Security (MCS) that the openshift-storage project uses:
Example 5.41. Example YAML file
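A sketch of such an scc follows, assuming the MCS level s0:c26,c0 shown earlier and the scc name restricted-pvselinux used in the next step; the remaining strategy fields are modeled on the default restricted SCC and may need adjusting for your cluster.

```yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: restricted-pvselinux
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
  seLinuxOptions:
    level: s0:c26,c0        # MCS level of the openshift-storage project
supplementalGroups:
  type: RunAsAny
fsGroup:
  type: MustRunAs
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
```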
$ oc create -f scc.yaml
Create a service account for the deployment and add it to the newly created scc.
Create a service account:
$ oc create serviceaccount <service_account_name>
<service_account_name>- Specify the name of the service account.
Example 5.42. Example
$ oc create serviceaccount testnamespacesa
Add the service account to the newly created scc:
$ oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>
Example 5.43. Example
$ oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa
Patch the legacy application deployment so that it uses the newly created service account. This allows you to specify the SELinux label in the deployment:
$ oc patch dc/<pod_name> --patch '{"spec":{"template":{"spec":{"serviceAccountName": "<service_account_name>"}}}}'
Example 5.44. Example
$ oc patch dc/cephfs-write-workload-generator-no-cache --patch '{"spec":{"template":{"spec":{"serviceAccountName": "testnamespacesa"}}}}'
Edit the deployment to specify the security context to use for the SELinux label in the deployment configuration:
$ oc edit dc <pod_name> -n <application_namespace>
Add the following lines:
<security_context_value>- You can find this value when you execute the command to create a dedicated folder for S3 inside the NSFS share, on the CephFS PV and PVC of the legacy application pod.
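The lines to add sit under the pod template of the deployment configuration, in roughly the following form; <security_context_value> is the MCS level, for example s0:c26,c0, as shown in the verification output later in this procedure.

```yaml
spec:
  template:
    spec:
      securityContext:
        seLinuxOptions:
          level: <security_context_value>   # for example, s0:c26,c0
```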
Example 5.45. Example
$ oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace
Ensure that the security context used for the SELinux label is specified correctly in the deployment configuration:
$ oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext
Example 5.46. Example
$ oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext
Example 5.47. Example output
securityContext:
  seLinuxOptions:
    level: s0:c26,c0
The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace.
Chapter 6. Changing the default account credentials to ensure better security in the Multicloud Object Gateway
Change and rotate your Multicloud Object Gateway (MCG) account credentials using the command-line interface to prevent issues with applications, and to ensure better account security.
For more information on how to change the default MCG account credentials, see the Red Hat Knowledgebase solution How to change the default account credentials to ensure better security in the Multicloud Object Gateway?.
Chapter 7. Mirroring data for hybrid and Multicloud buckets
The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud providers and clusters.
Prerequisites
- You must first add a backing storage that can be used by the MCG, see Chapter 4, Adding storage resources for hybrid or Multicloud.
Then you create a bucket class that reflects the data management policy, mirroring.
Procedure
You can set up mirroring data in three ways:
7.1. Creating bucket classes to mirror data using the MCG command-line interface
From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy:
$ noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror
Set the newly created bucket class to a new bucket claim, generating a new bucket that will be mirrored between two locations:
$ noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws
7.2. Creating bucket classes to mirror data using a YAML
Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS:
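A sketch of such a bucket class, assuming one local Ceph backingstore and one AWS backingstore; the backingstore names in the tier are illustrative placeholders for resources you created earlier.

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  labels:
    app: noobaa
  name: mirror-to-aws
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
    - backingStores:
      - <local-ceph-backingstore>   # illustrative name
      - <aws-backingstore>          # illustrative name
      placement: Mirror
```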
Add the following lines to your standard Object Bucket Claim (OBC):
additionalConfig:
  bucketclass: mirror-to-aws
For more information about OBCs, see Chapter 10, Object Bucket Claim.
7.3. Configuring buckets to mirror data using the user interface
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card, click Storage System and click the storage system link from the pop up that appears.
- In the Object tab, click the Multicloud Object Gateway link.
On the NooBaa page, click the buckets icon on the left side. You can see a list of your buckets:
- Click the bucket you want to update.
Click Edit Tier 1 Resources:
Select Mirror and check the relevant resources you want to use for this bucket. In the following example, the data is mirrored between noobaa-default-backing-store, which is on RGW, and AWS-backingstore, which is on AWS.
- Click Save.
Note
Resources created in the NooBaa UI cannot be used by the OpenShift UI or the Multicloud Object Gateway (MCG) CLI.
Chapter 8. Bucket policies in the Multicloud Object Gateway
OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them.
8.1. About bucket policies
Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview.
8.2. Using bucket policies
Prerequisites
- A running OpenShift Data Foundation Platform.
- Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications
Procedure
To use bucket policies in the MCG:
Create the bucket policy in JSON format. See the following example:
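A sketch of a bucket policy in the standard S3 policy language, assuming a NooBaa account named john.doe@example.com (such as the one created in Section 8.3) and a bucket named MyBucket; adjust the actions and resources to your needs.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": ["john.doe@example.com"] },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::MyBucket/*"]
    }
  ]
}
```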
There are many available elements for bucket policies with regard to access permissions.
For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview.
For more examples of bucket policies, see AWS Bucket Policy Examples.
Instructions for creating S3 users can be found in Section 8.3, “Creating an AWS S3 user in the Multicloud Object Gateway”.
Using the AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket:
# aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy BucketPolicy
- Replace ENDPOINT with the S3 endpoint.
- Replace MyBucket with the bucket to set the policy on.
- Replace BucketPolicy with the bucket policy JSON file.
- Add --no-verify-ssl if you are using the default self-signed certificates.
For example:
# aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file://BucketPolicy
For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy.
Note
The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io.
Note
Bucket policy conditions are not supported.
8.3. Creating an AWS S3 user in the Multicloud Object Gateway
Prerequisites
- A running OpenShift Data Foundation Platform.
- Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications
Procedure
- In the OpenShift Web Console, click Storage → Data Foundation.
- In the Status card, click Storage System and click the storage system link from the pop up that appears.
- In the Object tab, click the Multicloud Object Gateway link.
Under the Accounts tab, click Create Account.
Select S3 Access Only, provide the Account Name, for example, john.doe@example.com. Click Next.
Select S3 default placement, for example, noobaa-default-backing-store. Select Buckets Permissions. A specific bucket or all buckets can be selected. Click Create.
Chapter 9. Multicloud Object Gateway bucket replication
Data replication from one Multicloud Object Gateway (MCG) bucket to another MCG bucket provides higher resiliency and better collaboration options. These buckets can be either data buckets or namespace buckets backed by any supported storage solution (S3, Azure, etc.).
A replication policy is composed of a list of replication rules. Each rule defines the destination bucket, and can specify a filter based on an object key prefix. Configuring a complementing replication policy on the second bucket results in bidirectional replication.
Prerequisites
- A running OpenShift Data Foundation Platform.
- Access to the Multicloud Object Gateway, see Accessing the Multicloud Object Gateway with your applications.
Download the Multicloud Object Gateway (MCG) command-line interface:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg
Important
Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in the case of IBM Power, use the following command:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms
Alternatively, you can install the mcg package from the OpenShift Data Foundation RPMs found at https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages.
Important
Choose the correct Product Variant according to your architecture.
Note
Certain MCG features are only available in certain MCG versions, and the appropriate MCG CLI tool version must be used to fully utilize MCG’s features.
To replicate a bucket, see Replicating a bucket to another bucket.
To set a bucket class replication policy, see Setting a bucket class replication policy.
9.1. Replicating a bucket to another bucket
You can set the bucket replication policy in two ways:
9.1.1. Replicating a bucket to another bucket using the MCG command-line interface
Applications that require a Multicloud Object Gateway (MCG) bucket to have a specific replication policy can create an Object Bucket Claim (OBC) and define the replication policy parameter in a JSON file.
Procedure
From the MCG command-line interface, run the following command to create an OBC with a specific replication policy:
$ noobaa obc create <bucket-claim-name> -n openshift-storage --replication-policy /path/to/json-file.json
<bucket-claim-name>- Specify the name of the bucket claim.
/path/to/json-file.json- Is the path to a JSON file which defines the replication policy.
Example JSON file:
[{ "rule_id": "rule-1", "destination_bucket": "first.bucket", "filter": {"prefix": "repl"}}]
"prefix"- Is optional. It is the prefix of the object keys that should be replicated; you can even leave it empty, for example, {"prefix": ""}.
Example 9.1. Example
$ noobaa obc create my-bucket-claim -n openshift-storage --replication-policy /path/to/json-file.json
9.1.2. Replicating a bucket to another bucket using a YAML
Applications that require a Multicloud Object Gateway (MCG) data bucket to have a specific replication policy can create an Object Bucket Claim (OBC) and add the spec.additionalConfig.replication-policy parameter to the OBC.
Procedure
Apply the following YAML:
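A sketch of such an OBC, assuming the MCG object bucket claim storage class; the replication policy is passed as a JSON string in spec.additionalConfig.replication-policy, and the angle-bracket placeholders match the callouts that follow.

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <desired-bucket-claim>
  namespace: <desired-namespace>
spec:
  generateBucketName: <desired-bucket-name>
  storageClassName: openshift-storage.noobaa.io
  additionalConfig:
    replication-policy: '[{ "rule_id": "rule-1", "destination_bucket": "first.bucket", "filter": {"prefix": "repl"}}]'
```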
<desired-bucket-claim>- Specify the name of the bucket claim.
<desired-namespace>- Specify the namespace.
<desired-bucket-name>- Specify the prefix of the bucket name.
"rule_id"-
Specify the ID number of the rule, for example,
{"rule_id": "rule-1"}. "destination_bucket"-
Specify the name of the destination bucket, for example,
{"destination_bucket": "first.bucket"}. "prefix"-
Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example,
{"prefix": ""}.
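For reference, a minimal sketch of the OBC manifest for this step, assuming the standard objectbucket.io/v1alpha1 ObjectBucketClaim schema and the openshift-storage.noobaa.io storage class; verify the field names against the CRD installed in your cluster:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <desired-bucket-claim>
  namespace: <desired-namespace>
spec:
  generateBucketName: <desired-bucket-name>
  storageClassName: openshift-storage.noobaa.io
  additionalConfig:
    # The replication policy is passed as a JSON string.
    replication-policy: '[{ "rule_id": "rule-1", "destination_bucket": "first.bucket", "filter": {"prefix": "repl"}}]'
```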
Additional information
- For more information about OBCs, see Object Bucket Claim.
9.2. Setting a bucket class replication policy
It is possible to set up a replication policy that automatically applies to all the buckets created under a certain bucket class. You can do this in two ways:
9.2.1. Setting a bucket class replication policy using the MCG command-line interface
Applications that require a Multicloud Object Gateway (MCG) bucket class to have a specific replication policy can create a bucketclass and define the replication-policy parameter in a JSON file.
It is possible to set a bucket class replication policy for two types of bucket classes:
- Placement
- Namespace
Procedure
From the MCG command-line interface, run the following command:
noobaa -n openshift-storage bucketclass create placement-bucketclass <bucketclass-name> --backingstores <backingstores> --replication-policy=/path/to/json-file.json

- <bucketclass-name> - Specify the name of the bucket class.
- <backingstores> - Specify the name of a backingstore. You can pass several backingstores separated by commas.
- /path/to/json-file.json - Specify the path to a JSON file that defines the replication policy.
Example JSON file:
[{ "rule_id": "rule-1", "destination_bucket": "first.bucket", "filter": {"prefix": "repl"}}]

- "prefix" - Optional. The prefix of the object keys that should be replicated. You can leave it empty, for example, {"prefix": ""}.
Example 9.2. Example
noobaa -n openshift-storage bucketclass create placement-bucketclass bc --backingstores azure-blob-ns --replication-policy=/path/to/json-file.json

This example creates a placement bucket class with a specific replication policy defined in the JSON file.
9.2.2. Setting a bucket class replication policy using a YAML
Applications that require a Multicloud Object Gateway (MCG) bucket class to have a specific replication policy can create a bucket class using the spec.replicationPolicy field.
Procedure
Apply the following YAML:
This YAML example creates a placement bucket class. Each object uploaded to the bucket is filtered based on the prefix and is replicated to first.bucket.

- <desired-app-label> - Specify a label for the app.
- <desired-bucketclass-name> - Specify the bucket class name.
- <desired-namespace> - Specify the namespace in which the bucket class is created.
- <backingstore> - Specify the name of a backingstore. You can pass several backingstores.
- "rule_id" - Specify the ID of the rule, for example, {"rule_id": "rule-1"}.
- "destination_bucket" - Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"}.
- "prefix" - Optional. The prefix of the object keys that should be replicated. You can leave it empty, for example, {"prefix": ""}.
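For reference, a minimal sketch of the bucket class manifest for this step, assuming the noobaa.io/v1alpha1 BucketClass schema; verify the field names against the CRD installed in your cluster:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  labels:
    app: <desired-app-label>
  name: <desired-bucketclass-name>
  namespace: <desired-namespace>
spec:
  placementPolicy:
    tiers:
    - backingStores:
      - <backingstore>
  # The replication policy is passed as a JSON string.
  replicationPolicy: '[{ "rule_id": "rule-1", "destination_bucket": "first.bucket", "filter": {"prefix": "repl"}}]'
```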
Chapter 10. Object Bucket Claim
An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads.
You can create an Object Bucket Claim in three ways: dynamically from your application's YAML, using the MCG command-line interface, or from the OpenShift Web Console.
An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can’t create new buckets by default.
10.1. Dynamic Object Bucket Claim
Similar to Persistent Volumes, you can add the details of the Object Bucket Claim (OBC) to your application's YAML and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into the environment variables of your application.
The Multicloud Object Gateway endpoints use self-signed certificates only if OpenShift uses self-signed certificates. Using signed certificates in OpenShift automatically replaces the Multicloud Object Gateway endpoint certificates with signed certificates. Get the certificate currently used by the Multicloud Object Gateway by accessing the endpoint through a browser. See Accessing the Multicloud Object Gateway with your applications for more information.
Procedure
Add the following lines to your application YAML:
These lines are the OBC itself.
- Replace <obc-name> with a unique OBC name.
- Replace <obc-bucket-name> with a unique bucket name for your OBC.
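The OBC lines typically take this shape, assuming the standard objectbucket.io/v1alpha1 ObjectBucketClaim schema and the MCG storage class:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <obc-name>
spec:
  # The bucket name is generated with this value as its prefix.
  generateBucketName: <obc-bucket-name>
  storageClassName: openshift-storage.noobaa.io
```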
You can add more lines to the YAML file to automate the use of the OBC. The example below maps the bucket claim result, that is, a configuration map with the endpoint data and a secret with the credentials, into an application. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account.
- Replace all instances of <obc-name> with your OBC name.
- Replace <your application image> with your application image.
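A sketch of such a Job, assuming a hypothetical Job name testjob; envFrom injects the OBC configuration map (BUCKET_HOST, BUCKET_PORT, BUCKET_NAME) and secret (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) into the container environment:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: testjob   # hypothetical name
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: app
        image: <your application image>
        envFrom:
        # Endpoint details from the OBC configuration map.
        - configMapRef:
            name: <obc-name>
        # S3 credentials from the OBC secret.
        - secretRef:
            name: <obc-name>
```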
Apply the updated YAML file:
oc apply -f <yaml.file>

Replace <yaml.file> with the name of your YAML file.

To view the new configuration map, run the following command:

oc get cm <obc-name> -o yaml

Replace <obc-name> with the name of your OBC.

You can expect the following environment variables in the output:
- BUCKET_HOST - Endpoint to use in the application.
- BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST. For example, if the BUCKET_HOST is https://my.example.com, and the BUCKET_PORT is 443, the endpoint for the object service is https://my.example.com:443.
- BUCKET_NAME - Requested or generated bucket name.
- AWS_ACCESS_KEY_ID - Access key that is part of the credentials.
- AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials.
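As an illustration of how these variables combine into the endpoint your application should target (the host and port values here are hypothetical):

```shell
# Hypothetical values, as they would be injected from the OBC configuration map.
BUCKET_HOST="s3.openshift-storage.svc"
BUCKET_PORT="443"

# The object service endpoint is the host combined with the port.
ENDPOINT="https://${BUCKET_HOST}:${BUCKET_PORT}"
echo "${ENDPOINT}"
```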
Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. These names are used for compatibility with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write, or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them.

oc get secret <obc_name> -o yaml

- <obc_name> - Specify the name of the object bucket claim.
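For example, decoding a key with the base64 utility; the encoded value below is illustrative, derived from the well-known AWS documentation example key, not a real credential:

```shell
# Illustrative Base64-encoded access key, as it would appear under .data in the OBC secret.
ENCODED_KEY="QUtJQUlPU0ZPRE5ON0VYQU1QTEU="

# Decode the key before passing it to an S3 client or SDK.
DECODED_KEY=$(echo "${ENCODED_KEY}" | base64 -d)
echo "${DECODED_KEY}"
```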
10.2. Creating an Object Bucket Claim using the command line interface
When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service.
Prerequisites
Download the Multicloud Object Gateway (MCG) command-line interface.
subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
yum install mcg

Note: Specify the appropriate architecture for enabling the repositories using the subscription manager.
- For IBM Power, use the following command:
subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms

- For IBM Z infrastructure, use the following command:
subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms
Procedure
Use the command-line interface to generate the details of a new bucket and credentials. Run the following command:
noobaa obc create <obc-name> -n openshift-storage

Replace <obc-name> with a unique OBC name, for example, myappobc.

Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret are created, for example, myapp-namespace.

Example output:
INFO[0001] ✅ Created: ObjectBucketClaim "test21obc"

The MCG command-line interface has created the necessary configuration and has informed OpenShift about the new OBC.
Run the following command to view the OBC:
oc get obc -n openshift-storage

Example output:
NAME        STORAGE-CLASS                 PHASE   AGE
test21obc   openshift-storage.noobaa.io   Bound   38s

Run the following command to view the YAML file for the new OBC:
oc get obc test21obc -o yaml -n openshift-storage

Example output:

Inside your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The configuration map and the secret have the same name as the OBC. Run the following command to view the secret:

oc get -n openshift-storage secret test21obc -o yaml
Example output:

The secret gives you the S3 access credentials.
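The secret for an OBC named test21obc typically takes this shape; the data values are placeholders, not real credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test21obc
  namespace: openshift-storage
type: Opaque
data:
  # Both values are Base64-encoded; decode them before use.
  AWS_ACCESS_KEY_ID: <Base64-encoded access key>
  AWS_SECRET_ACCESS_KEY: <Base64-encoded secret key>
```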
Run the following command to view the configuration map:
oc get -n openshift-storage cm test21obc -o yaml
Example output:

The configuration map contains the S3 endpoint information for your application.
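The configuration map for an OBC named test21obc typically carries the endpoint details under data; the values below are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test21obc
  namespace: openshift-storage
data:
  BUCKET_HOST: <in-cluster S3 service host>
  BUCKET_NAME: <requested or generated bucket name>
  BUCKET_PORT: "<port>"
```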
10.3. Creating an Object Bucket Claim using the OpenShift Web Console
You can create an Object Bucket Claim (OBC) using the OpenShift Web Console.
Prerequisites
- Administrative access to the OpenShift Web Console.
- In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 10.1, “Dynamic Object Bucket Claim”.
Procedure
- Log into the OpenShift Web Console.
On the left navigation bar, click Storage → Object Bucket Claims → Create Object Bucket Claim.
Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu:
- Internal mode
The following storage classes, which were created after deployment, are available for use:
- ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW)
- openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG)
- External mode
The following storage classes, which were created after deployment, are available for use:
- ocs-external-storagecluster-ceph-rgw uses the RGW
- openshift-storage.noobaa.io uses the MCG

Note: The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from previous OpenShift Data Foundation releases.
Click Create.
Once you create the OBC, you are redirected to its details page.
10.4. Attaching an Object Bucket Claim to a deployment
Once created, Object Bucket Claims (OBCs) can be attached to specific deployments.
Prerequisites
- Administrative access to the OpenShift Web Console.
Procedure
- On the left navigation bar, click Storage → Object Bucket Claims.
Click the Action menu (⋮) next to the OBC you created.
From the drop-down menu, select Attach to Deployment.
Select the desired deployment from the Deployment Name list, then click Attach.
10.5. Viewing object buckets using the OpenShift Web Console
You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console.
Prerequisites
- Administrative access to the OpenShift Web Console.
Procedure
- Log into the OpenShift Web Console.
On the left navigation bar, click Storage → Object Buckets.
Alternatively, you can navigate to the details page of a specific OBC and click the Resource link to view the object buckets for that OBC.
- Select the object bucket you want to see details for. You are navigated to the Object Bucket Details page.
10.6. Deleting Object Bucket Claims
Prerequisites
- Administrative access to the OpenShift Web Console.
Procedure
- On the left navigation bar, click Storage → Object Bucket Claims.
Click the Action menu (⋮) next to the Object Bucket Claim (OBC) you want to delete.
- Select Delete Object Bucket Claim.
- Click Delete.
Chapter 11. Caching policy for object buckets
A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway (MCG) bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket.
11.1. Creating an AWS cache bucket
Prerequisites
Download the Multicloud Object Gateway (MCG) command-line interface.
subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
yum install mcg

Note: Specify the appropriate architecture for enabling the repositories using the subscription manager. In case of IBM Z infrastructure, use the following command:
subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms

Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package.
Note: Choose the correct Product Variant according to your architecture.
Procedure
Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command:
noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>

- Replace <namespacestore> with the name of the namespacestore.
- Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with the AWS access key ID and secret access key you created for this purpose.
- Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.

You can also add storage resources by applying a YAML. First, create a secret with the credentials:
You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64>.

Replace <namespacestore-secret-name> with a unique name.

Then apply the following YAML:

- Replace <namespacestore> with a unique name.
- Replace <namespacestore-secret-name> with the secret created in the previous step.
- Replace <namespace-secret> with the namespace used to create the secret in the previous step.
- Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore.
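For reference, sketches of the two manifests for this step; the field names assume the noobaa.io/v1alpha1 NamespaceStore schema, so verify them against the CRD installed in your cluster:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <namespacestore-secret-name>
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64>
  AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>
---
apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  name: <namespacestore>
  namespace: openshift-storage
spec:
  type: aws-s3
  awsS3:
    targetBucket: <target-bucket>
    secret:
      name: <namespacestore-secret-name>
      namespace: <namespace-secret>
```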
Run the following command to create a bucket class:
noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>

- Replace <my-cache-bucket-class> with a unique bucket class name.
- Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field.
- Replace <namespacestore> with the namespacestore created in the previous step.
Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2.
noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>

- Replace <my-bucket-claim> with a unique name.
- Replace <custom-bucket-class> with the name of the bucket class created in step 2.
11.2. Creating an IBM COS cache bucket
Prerequisites
Download the Multicloud Object Gateway (MCG) command-line interface.
subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
yum install mcg

Note: Specify the appropriate architecture for enabling the repositories using the subscription manager.
- For IBM Power, use the following command:
subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms

- For IBM Z infrastructure, use the following command:
subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms

Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package.
Note: Choose the correct Product Variant according to your architecture.
Procedure
Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command:
noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>

- Replace <namespacestore> with the name of the NamespaceStore.
- Replace <IBM ACCESS KEY>, <IBM SECRET ACCESS KEY>, and <IBM COS ENDPOINT> with an IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket.
- Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.

You can also add storage resources by applying a YAML. First, create a secret with the credentials:
You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>.

Replace <namespacestore-secret-name> with a unique name.

Then apply the following YAML:

- Replace <namespacestore> with a unique name.
- Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint.
- Replace <namespacestore-secret-name> with the secret created in the previous step.
- Replace <namespace-secret> with the namespace used to create the secret in the previous step.
- Replace <target-bucket> with the IBM COS bucket you created for the namespacestore.
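For reference, sketches of the two manifests for this step; the secret key names and the s3Compatible block are assumptions based on the noobaa.io/v1alpha1 NamespaceStore schema, so verify them against the CRD installed in your cluster:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <namespacestore-secret-name>
type: Opaque
data:
  IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64>
  IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>
---
apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  name: <namespacestore>
  namespace: openshift-storage
spec:
  type: ibm-cos
  s3Compatible:
    endpoint: <IBM COS ENDPOINT>
    targetBucket: <target-bucket>
    secret:
      name: <namespacestore-secret-name>
      namespace: <namespace-secret>
```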
Run the following command to create a bucket class:
noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>

- Replace <my-bucket-class> with a unique bucket class name.
- Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field.
- Replace <namespacestore> with the namespacestore created in the previous step.
Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2.
noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>

- Replace <my-bucket-claim> with a unique name.
- Replace <custom-bucket-class> with the name of the bucket class created in step 2.
Chapter 12. Scaling Multicloud Object Gateway performance by adding endpoints
The Multicloud Object Gateway (MCG) performance may vary from one environment to another. In some cases, specific applications require faster performance which can be easily addressed by scaling S3 endpoints.
The MCG resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default:
- Storage service
- S3 endpoint service
S3 endpoint service
The S3 endpoint is a service that every Multicloud Object Gateway (MCG) provides by default. It handles the heavy lifting of data digestion in the MCG: inline data chunking, deduplication, compression, and encryption. It also accepts data placement instructions from the MCG.
12.1. Automatic scaling of Multicloud Object Gateway endpoints
The number of Multicloud Object Gateway (MCG) endpoints scales automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. By default, each MCG endpoint pod is configured with a request of 1 CPU and 2Gi memory, with limits matching the request. When the CPU load on the endpoint exceeds an 80% usage threshold for a consistent period of time, a second endpoint is deployed, lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves the performance and serviceability of the MCG.
12.2. Scaling the Multicloud Object Gateway with storage nodes
Prerequisites
- A running OpenShift Data Foundation cluster on OpenShift Container Platform with access to the Multicloud Object Gateway (MCG).
A storage node in the MCG is a NooBaa daemon container attached to one or more Persistent Volumes (PVs) and used for local object service data storage. NooBaa daemons can be deployed on Kubernetes nodes. This can be done by creating a Kubernetes pool consisting of StatefulSet pods.
Procedure
- Log in to OpenShift Web Console.
- From the MCG user interface, click Overview → Add Storage Resources.
- In the window, click Deploy Kubernetes Pool.
- In the Create Pool step, create the target pool for the nodes that will be installed.
- In the Configure step, configure the number of requested pods and the size of each PV. One PV is created for each new pod.
- In the Review step, you can find the details of the new pool and select the deployment method you wish to use: local or external deployment. If local deployment is selected, the Kubernetes nodes deploy within the cluster. If external deployment is selected, you are provided with a YAML file to run externally.
- All nodes will be assigned to the pool you chose in the first step, and can be found under Resources → Storage resources → Resource name.
Chapter 13. Accessing the RADOS Object Gateway S3 endpoint
Users can access the RADOS Object Gateway (RGW) endpoint directly.
In previous versions of Red Hat OpenShift Data Foundation, the RGW service needed to be manually exposed to create an RGW public route. As of OpenShift Data Foundation version 4.7, the RGW route is created by default and is named rook-ceph-rgw-ocs-storagecluster-cephobjectstore.