Chapter 4. Managing namespace buckets
Namespace buckets let you connect data repositories on different providers together, so that you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider.
A namespace bucket can only be used if its write target is available and functional.
4.1. Amazon S3 API endpoints for objects in namespace buckets
You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API.
Ensure that the credentials provided for the Multicloud Object Gateway (MCG) enable you to perform the AWS S3 namespace bucket operations. You can use the AWS tool, aws-cli, to verify that all the operations can be performed on the target bucket. Also, listing the buckets with this MCG account shows the target bucket.
Red Hat OpenShift Data Foundation supports the following namespace bucket operations:
Additional resources
- See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them.
4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML
For more information about namespace buckets, see Managing namespace buckets.
Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket:
4.2.1. Adding an AWS S3 namespace bucket using YAML
Prerequisites
- OpenShift Container Platform with OpenShift Data Foundation operator installed.
- Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications.
Procedure
Create a secret with the credentials:
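The secret manifest itself was elided from this copy of the document; the following is a minimal sketch, assuming the standard MCG NamespaceStore secret layout (an Opaque secret holding Base64-encoded AWS credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <namespacestore-secret-name>
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64>
  AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>
```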
where:
<namespacestore-secret-name> - A unique NamespaceStore name.
You must provide and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64>.
Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs).
A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML:
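The YAML block was elided from this copy; the following is a sketch of an AWS S3 NamespaceStore, based on the noobaa.io/v1alpha1 CRD (verify field names against the CRD in your cluster):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  name: <resource-name>
  namespace: openshift-storage
spec:
  type: aws-s3
  awsS3:
    targetBucket: <target-bucket>
    secret:
      name: <namespacestore-secret-name>
      namespace: <namespace-secret>
```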
<resource-name> - The name you want to give to the resource.
<namespacestore-secret-name> - The secret created in the previous step.
<namespace-secret> - The namespace where the secret can be found.
<target-bucket> - The target bucket you created for the NamespaceStore.
Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.
A namespace policy of type single requires the following configuration:
<my-bucket-class> - The unique namespace bucket class name.
<resource> - The name of a single NamespaceStore that defines the read and write target of the namespace bucket.
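The single policy configuration above can be sketched as the following BucketClass; the YAML was elided from this copy, and the field names follow the noobaa.io/v1alpha1 CRD:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: <my-bucket-class>
  namespace: openshift-storage
spec:
  namespacePolicy:
    type: Single
    single:
      resource: <resource>
```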
A namespace policy of type multi requires the following configuration:
<my-bucket-class> - A unique bucket class name.
<write-resource> - The name of a single NamespaceStore that defines the write target of the namespace bucket.
<read-resources> - A list of the names of the NamespaceStores that define the read targets of the namespace bucket.
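The multi policy configuration above can be sketched as the following BucketClass; the YAML was elided from this copy, and the field names follow the noobaa.io/v1alpha1 CRD (the read-resource names are placeholders):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: <my-bucket-class>
  namespace: openshift-storage
spec:
  namespacePolicy:
    type: Multi
    multi:
      writeResource: <write-resource>
      readResources:
      - <read-resource-1>
      - <read-resource-2>
```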
Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML:
<resource-name> - The name you want to give to the resource.
<my-bucket> - The name you want to give to the bucket.
<my-bucket-class> - The bucket class created in the previous step.
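The OBC YAML itself was elided from this copy; a sketch, based on the standard ObjectBucketClaim CRD, with the bucket class passed through additionalConfig:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <resource-name>
  namespace: openshift-storage
spec:
  generateBucketName: <my-bucket>
  storageClassName: openshift-storage.noobaa.io
  additionalConfig:
    bucketclass: <my-bucket-class>
```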
After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC.
4.2.2. Adding an IBM COS namespace bucket using YAML
Prerequisites
- OpenShift Container Platform with OpenShift Data Foundation operator installed.
- Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications.
Procedure
Create a secret with the credentials:
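The secret manifest itself was elided from this copy of the document; the following is a minimal sketch, assuming the standard MCG NamespaceStore secret layout (an Opaque secret holding Base64-encoded IBM COS credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <namespacestore-secret-name>
type: Opaque
data:
  IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64>
  IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>
```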
where:
<namespacestore-secret-name> - A unique NamespaceStore name.
You must provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>.
Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs).
A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML:
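The YAML block was elided from this copy; the following is a sketch of an IBM COS NamespaceStore. The type ibm-cos and the ibmCos spec block are assumptions based on the noobaa.io/v1alpha1 CRD; verify them against the CRD in your cluster:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  name: <resource-name>
  namespace: openshift-storage
spec:
  type: ibm-cos
  ibmCos:
    endpoint: <IBM COS ENDPOINT>
    targetBucket: <target-bucket>
    secret:
      name: <namespacestore-secret-name>
      namespace: <namespace-secret>
```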
<IBM COS ENDPOINT> - The appropriate IBM COS endpoint.
<namespacestore-secret-name> - The secret created in the previous step.
<namespace-secret> - The namespace where the secret can be found.
<target-bucket> - The target bucket you created for the NamespaceStore.
Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.
The namespace policy of type single requires the following configuration:
<my-bucket-class> - The unique namespace bucket class name.
<resource> - The name of a single NamespaceStore that defines the read and write target of the namespace bucket.
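The single policy configuration above can be sketched as the following BucketClass; the YAML was elided from this copy, and the field names follow the noobaa.io/v1alpha1 CRD:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: <my-bucket-class>
  namespace: openshift-storage
spec:
  namespacePolicy:
    type: Single
    single:
      resource: <resource>
```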
The namespace policy of type multi requires the following configuration:
<my-bucket-class> - The unique bucket class name.
<write-resource> - The name of a single NamespaceStore that defines the write target of the namespace bucket.
<read-resources> - A list of the NamespaceStore names that define the read targets of the namespace bucket.
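The multi policy configuration above can be sketched as the following BucketClass; the YAML was elided from this copy, and the field names follow the noobaa.io/v1alpha1 CRD (the read-resource names are placeholders):

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: <my-bucket-class>
  namespace: openshift-storage
spec:
  namespacePolicy:
    type: Multi
    multi:
      writeResource: <write-resource>
      readResources:
      - <read-resource-1>
      - <read-resource-2>
```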
To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the previous step, apply the following YAML:
<resource-name> - The name you want to give to the resource.
<my-bucket> - The name you want to give to the bucket.
<my-bucket-class> - The bucket class created in the previous step.
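The OBC YAML itself was elided from this copy; a sketch, based on the standard ObjectBucketClaim CRD, with the bucket class passed through additionalConfig:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <resource-name>
  namespace: openshift-storage
spec:
  generateBucketName: <my-bucket>
  storageClassName: openshift-storage.noobaa.io
  additionalConfig:
    bucketclass: <my-bucket-class>
```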
After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC.
4.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI
Prerequisites
- OpenShift Container Platform with OpenShift Data Foundation operator installed.
- Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications.
- Download the MCG command-line interface:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg
Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z use the following command:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms
Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package.
Choose the correct Product Variant according to your architecture.
Procedure
In the MCG command-line interface, create a NamespaceStore resource.
A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets.

$ noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage

<namespacestore> - The name of the NamespaceStore.
<AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> - The AWS access key ID and secret access key you created for this purpose.
<bucket-name>- The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi.
To create a namespace bucket class with a namespace policy of type single:

$ noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage

<my-bucket-class> - A unique bucket class name.
<resource> - A single namespace-store that defines the read and write target of the namespace bucket.
To create a namespace bucket class with a namespace policy of type multi:

$ noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage

<my-bucket-class> - A unique bucket class name.
<write-resource> - A single namespace-store that defines the write target of the namespace bucket.
<read-resources> - A comma-separated list of namespace-stores that defines the read targets of the namespace bucket.
Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the previous step.
$ noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>

my-bucket-claim - A bucket claim name of your choice.
<custom-bucket-class> - The name of the bucket class created in the previous step.
After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC.
4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI
Prerequisites
- OpenShift Container Platform with OpenShift Data Foundation operator installed.
- Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications.
Download the MCG command-line interface:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg

Note: Specify the appropriate architecture for enabling the repositories using subscription manager.
- For IBM Power, use the following command:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms

- For IBM Z, use the following command:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms

Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package.
Note: Choose the correct Product Variant according to your architecture.
Procedure
In the MCG command-line interface, create a NamespaceStore resource.
A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets.

$ noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage

<namespacestore> - The name of the NamespaceStore.
<IBM ACCESS KEY>, <IBM SECRET ACCESS KEY>, <IBM COS ENDPOINT> - An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket.
<bucket-name>- An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.
To create a namespace bucket class with a namespace policy of type single:

$ noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage

<my-bucket-class> - A unique bucket class name.
<resource> - A single NamespaceStore that defines the read and write target of the namespace bucket.
To create a namespace bucket class with a namespace policy of type multi:

$ noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage

<my-bucket-class> - A unique bucket class name.
<write-resource> - A single NamespaceStore that defines the write target of the namespace bucket.
<read-resources> - A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket.
Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step.
$ noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>

my-bucket-claim - A bucket claim name of your choice.
<custom-bucket-class> - The name of the bucket class created in the previous step.
After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC.
4.3. Adding a namespace bucket using the OpenShift Container Platform user interface
You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets.
Prerequisites
- Ensure that OpenShift Container Platform with OpenShift Data Foundation operator is already installed.
- Access to the Multicloud Object Gateway (MCG).
Procedure
On the OpenShift Web Console, navigate to Storage → Object Storage → Namespace Store tab.
Click Create namespace store to create a namespacestore resource to be used in the namespace bucket.
- Enter a namespacestore name.
- Choose a provider and region.
- Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key.
- Enter a target bucket.
- Click Create.
On the Namespace Store tab, verify that the newly created namespacestore is in the Ready state.
Repeat the previous two steps until you have created all the desired resources.
Navigate to the Bucket Class tab and click Create Bucket Class.
- Choose the NamespaceBucketClass type radio button.
- Enter a BucketClass name and click Next.
- Choose a Namespace Policy Type for your namespace bucket, and then click Next.
  - If your namespace policy type is Single, you need to choose a read resource.
  - If your namespace policy type is Multi, you need to choose read resources and a write resource.
  - If your namespace policy type is Cache, you need to choose a Hub namespace store that defines the read and write target of the namespace bucket.
- Select one Read and Write NamespaceStore which defines the read and write targets of the namespace bucket and click Next.
- Review your new bucket class details, and then click Create Bucket Class.
Navigate to the Bucket Class tab and verify that your newly created resource is in the Ready phase.
Navigate to the Object Bucket Claims tab and click Create Object Bucket Claim.
- Enter an ObjectBucketClaim Name for the namespace bucket.
- Select StorageClass as openshift-storage.noobaa.io.
- Select the BucketClass that you created earlier for your namespacestore from the list. By default, noobaa-default-bucket-class is selected.
- Click Create. The namespace bucket is created along with an Object Bucket Claim for your namespace.
Navigate to the Object Bucket Claims tab and verify that the Object Bucket Claim is in the Bound state.
Navigate to the Object Buckets tab and verify that your namespace bucket is present in the list and is in the Bound state.
4.4. Sharing legacy application data with cloud native application using S3 protocol
Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using S3 operations. To share data, you need to do the following:
- Export the pre-existing file system datasets, that is, a RWX volume such as Ceph FileSystem (CephFS), or create new file system datasets using the S3 protocol.
- Access file system datasets from both the file system and the S3 protocol.
- Configure S3 accounts and map them to existing or new file system unique identifiers (UIDs) and group identifiers (GIDs).
4.4.1. Creating a NamespaceStore to use a file system
Prerequisites
- OpenShift Container Platform with OpenShift Data Foundation operator installed.
- Access to the Multicloud Object Gateway (MCG).
Procedure
- Log into the OpenShift Web Console.
- Click Storage → Object Storage.
- Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket.
- Click Create namespacestore.
- Enter a name for the NamespaceStore.
- Choose Filesystem as the provider.
- Choose the Persistent volume claim.
Enter a folder name.
If the folder name exists, that folder is used to create the NamespaceStore; otherwise, a folder with that name is created.
- Click Create.
- Verify the NamespaceStore is in the Ready state.
4.4.2. Creating accounts with NamespaceStore filesystem configuration
You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML.
You cannot remove a NamespaceStore filesystem configuration from an account.
Prerequisites
Download the Multicloud Object Gateway (MCG) command-line interface:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg
Procedure
Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface.
$ noobaa account create <noobaa-account-name> [flags]

For example:

$ noobaa account create testaccount --full_permission --nsfs_account_config --gid 10001 --uid 10001 --default_resource fs_namespacestore

allow_bucket_create - Indicates whether the account is allowed to create new buckets. Supported values are true or false. Default value is true.
allowed_buckets - A comma separated list of bucket names to which the user is allowed to have access and management rights.
default_resource - The NamespaceStore resource on which new buckets are created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC).
full_permission - Indicates whether the account should be allowed full permission or not. Supported values are true or false. Default value is false.
new_buckets_path - The filesystem path where directories corresponding to new buckets are created. The path is inside the filesystem of NamespaceStore filesystem PVCs where new directories are created to act as the filesystem mapping of newly created object bucket classes.
nsfs_account_config - A mandatory field that indicates if the account is used for NamespaceStore filesystem.
nsfs_only - Indicates whether the account is used only for NamespaceStore filesystem or not. Supported values are true or false. Default value is false. If it is set to true, it limits you from accessing other types of buckets.
uid - The user ID of the filesystem to which the MCG account is mapped; it is used to access and manage data on the filesystem.
gid - The group ID of the filesystem to which the MCG account is mapped; it is used to access and manage data on the filesystem.
The MCG system sends a response with the account configuration and its S3 credentials:
You can list all the custom resource definition (CRD) based accounts by using the following command:

$ noobaa account list
NAME          ALLOWED_BUCKETS   DEFAULT_RESOURCE               PHASE   AGE
testaccount   [*]               noobaa-default-backing-store   Ready   1m17s

If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name.
4.4.3. Accessing legacy application data from the openshift-storage namespace
When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses.
In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses.
Procedure
Display the application namespace with scc:

$ oc get ns <application_namespace> -o yaml | grep scc

<application_namespace> - Specify the name of the application namespace.
For example:
$ oc get ns testnamespace -o yaml | grep scc
openshift.io/sa.scc.mcs: s0:c26,c5
openshift.io/sa.scc.supplemental-groups: 1000660000/10000
openshift.io/sa.scc.uid-range: 1000660000/10000
Navigate into the application namespace:
$ oc project <application_namespace>

For example:

$ oc project testnamespace

Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature:
$ oc get pod
NAME                                               READY   STATUS    RESTARTS   AGE
cephfs-write-workload-generator-no-cache-1-cv892   1/1     Running   0          11s
Get the volume name of the PV from the pod:
$ oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'

<pod_name> - Specify the name of the pod.

For example:

$ oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}'
{"name":"app-persistent-storage","persistentVolumeClaim":{"claimName":"cephfs-write-workload-generator-no-cache-pv-claim"}}

In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim.
List all the mounts in the pod, and check for the mount point of the volume that you identified in the previous step:
$ oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'

For example:

$ oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}'
[{"mountPath":"/mnt/pv","name":"app-persistent-storage"},{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"kube-api-access-8tnc5","readOnly":true}]
Confirm the mount point of the RWX PV in your pod:
$ oc exec -it <pod_name> -- df <mount_path>

<mount_path> - Specify the path to the mount point that you identified in the previous step.
For example:
Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses:
$ oc exec -it <pod_name> -- ls -latrZ <mount_path>

For example:

Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace:

$ oc get pv | grep <pv_name>

<pv_name> - Specify the name of the PV.
For example:
$ oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a
pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a   10Gi   RWX   Delete   Bound   testnamespace/cephfs-write-workload-generator-no-cache-pv-claim   ocs-storagecluster-cephfs   47s
Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC.
Find the values of the subvolumePath and volumeHandle from the volumeAttributes. You can get these values from the YAML description of the legacy application PV:

$ oc get pv <pv_name> -o yaml

For example:

Use the subvolumePath and volumeHandle values that you identified in the previous step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV. In the example YAML file:
1. The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV.
2. The volume handle for the target PV that you create in openshift-storage needs to have a different handle than the original application PV, for example, add -clone at the end of the volume handle.
3. The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC.
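The example YAML file itself was elided from this copy. The following sketch of a static CephFS PV and PVC pair is an assumption based on the CephFS CSI static-provisioning convention and the resource names that appear later in this procedure; verify the driver, secret, and filesystem names against your cluster before using it:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv-legacy-openshift-storage
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Gi                        # (1) must match the original PV
  csi:
    driver: openshift-storage.cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: rook-csi-cephfs-node
      namespace: openshift-storage
    volumeAttributes:
      fsName: ocs-storagecluster-cephfilesystem
      staticVolume: "true"
      rootPath: <subvolumePath>          # from the legacy application PV
    volumeHandle: <volumeHandle>-clone   # (2) must differ from the original PV
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-legacy
  namespace: openshift-storage
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi                      # (3) must match the original PVC
  volumeMode: Filesystem
  volumeName: cephfs-pv-legacy-openshift-storage
```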
Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the previous step:

$ oc create -f <YAML_file>

<YAML_file> - Specify the name of the YAML file.
For example:
$ oc create -f pv-openshift-storage.yaml
persistentvolume/cephfs-pv-legacy-openshift-storage created
persistentvolumeclaim/cephfs-pvc-legacy created
Ensure that the PVC is available in the openshift-storage namespace:

$ oc get pvc -n openshift-storage
NAME                STATUS   VOLUME                               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc-legacy   Bound    cephfs-pv-legacy-openshift-storage   10Gi       RWX                           14s

Navigate into the openshift-storage project:

$ oc project openshift-storage
Now using project "openshift-storage" on server "https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443".

Create the NSFS namespacestore:
$ noobaa namespacestore create nsfs <nsfs_namespacestore> --pvc-name='<cephfs_pvc_name>' --fs-backend='CEPH_FS'

<nsfs_namespacestore> - Specify the name of the NSFS namespacestore.
<cephfs_pvc_name> - Specify the name of the CephFS PVC in the openshift-storage namespace.

For example:
$ noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'
Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example,
/nsfs/legacy-namespacemountpoint:oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/<nsfs_namespacestore>
$ oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/<nsfs_namespacestore>Copy to Clipboard Copied! Toggle word wrap Toggle overflow <noobaa_endpoint_pod_name>Specify the name of the noobaa-endpoint pod.
For example:
$ oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace
Filesystem                                                                                                                                                 Size  Used  Avail  Use%  Mounted on
172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c   10G   0     10G    0%   /nsfs/legacy-namespace
Create an MCG user account:
$ noobaa account create <user_account> --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'

<user_account>
    Specify the name of the MCG user account.
<gid_number>
    Specify the GID number.
<uid_number>
    Specify the UID number.
Important: Use the same UID and GID as those of the legacy application. You can find them in the previous output.

For example:
$ noobaa account create leguser --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'
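The UID and GID for the account come from the legacy application pod (for example, from the `id` output inside that pod). A small sketch of turning such output into the flags used above; the sample `id` line is an assumption chosen to match the example values:

```shell
# Sample `id` output from the legacy application pod (assumption):
id_output="uid=1000660000(1000660000) gid=0(root) groups=0(root)"

# Extract the numeric UID and GID for the noobaa account create flags:
uid=$(echo "$id_output" | sed 's/^uid=\([0-9]*\).*/\1/')
gid=$(echo "$id_output" | sed 's/.*gid=\([0-9]*\).*/\1/')

echo "--uid $uid --gid $gid"
```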
Create an MCG bucket.
Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod:
$ oc exec -it <pod_name> -- mkdir <mount_path>/nsfs

For example:
$ oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs

Create the MCG bucket using the nsfs/ path.
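The exact bucket-creation call is not shown above. One way to create a namespace bucket over the NSFS namespacestore is through the MCG bucket API with a JSON payload such as the following sketch; the `noobaa api bucket_api create_bucket` form and the payload shape are assumptions, not confirmed by this section:

```shell
# Build a namespace-bucket payload (assumed shape) that reads from and
# writes to the NSFS namespacestore, rooted at the nsfs/ folder created above.
bucket_name="legacy-bucket"
namespacestore="legacy-namespace"
payload='{
  "name": "'"${bucket_name}"'",
  "namespace": {
    "write_resource": { "resource": "'"${namespacestore}"'", "path": "nsfs/" },
    "read_resources": [ { "resource": "'"${namespacestore}"'", "path": "nsfs/" } ]
  }
}'
# Hypothetical invocation against a live cluster (not run here):
# noobaa api bucket_api create_bucket "${payload}"
echo "${payload}"
```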
Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces:

$ oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/<nsfs_namespacestore>

$ oc exec -it <pod_name> -- ls -latrZ <mount_path>

When the SELinux labels in the two listings are not the same, the result is permission denied or access issues.
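A quick way to see why mismatched labels fail is to compare only the Multi Category Security (MCS) level, the trailing `s0:cX,cY` part of each context. The two sample contexts below are illustrative assumptions, not real command output:

```shell
# Illustrative SELinux contexts (assumptions): one from the openshift-storage
# mount, one from the legacy application PVC.
storage_label="system_u:object_r:container_file_t:s0:c26,c0"
app_label="system_u:object_r:container_file_t:s0:c10,c26"

# The MCS level is the fourth colon-separated field onward.
storage_mcs=$(echo "$storage_label" | cut -d: -f4-)
app_mcs=$(echo "$app_label" | cut -d: -f4-)

if [ "$storage_mcs" != "$app_mcs" ]; then
  echo "SELinux MCS mismatch: $storage_mcs vs $app_mcs"
fi
```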
Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files. You can do this in one of the following ways:

- Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project (Section 4.4.3.1).
- Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC (Section 4.4.3.2).
Delete the NSFS namespacestore:
Delete the MCG bucket:
$ noobaa bucket delete <bucket_name>

For example:
$ noobaa bucket delete legacy-bucket

Delete the MCG user account:
$ noobaa account delete <user_account>

For example:
$ noobaa account delete leguser

Delete the NSFS namespacestore:
$ noobaa namespacestore delete <nsfs_namespacestore>

For example:
$ noobaa namespacestore delete legacy-namespace
Delete the PV and PVC:
Important: Before you delete the PV and PVC, ensure that the PV has a retain policy configured.
$ oc delete pv <cephfs_pv_name>

$ oc delete pvc <cephfs_pvc_name>

<cephfs_pv_name>
    Specify the CephFS PV name of the legacy application.
<cephfs_pvc_name>
    Specify the CephFS PVC name of the legacy application.
For example:
$ oc delete pv cephfs-pv-legacy-openshift-storage

$ oc delete pvc cephfs-pvc-legacy
4.4.3.1. Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project
Display the sa.scc.mcs annotation of the current openshift-storage namespace:

$ oc get ns openshift-storage -o yaml | grep sa.scc.mcs
openshift.io/sa.scc.mcs: s0:c26,c0

Edit the legacy application namespace, and modify the sa.scc.mcs annotation with the value from the sa.scc.mcs annotation of the openshift-storage namespace:

$ oc edit ns <application_namespace>

For example:
$ oc edit ns testnamespace

Verify the annotation:

$ oc get ns <application_namespace> -o yaml | grep sa.scc.mcs

For example:
$ oc get ns testnamespace -o yaml | grep sa.scc.mcs
openshift.io/sa.scc.mcs: s0:c26,c0

Restart the legacy application pod. A relabel of all the files takes place, and the SELinux labels now match those of the openshift-storage deployment.
4.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC
Create a new SCC with the MustRunAs and seLinuxOptions options, using the Multi Category Security (MCS) level that the openshift-storage project uses. Create the SCC from a YAML file, for example scc.yaml:

$ oc create -f scc.yaml

Create a service account for the deployment and add it to the newly created SCC.

Create a service account:
$ oc create serviceaccount <service_account_name>

<service_account_name>
    Specify the name of the service account.
For example:
$ oc create serviceaccount testnamespacesa
Add the service account to the newly created SCC:

$ oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>

For example:
$ oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa
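For reference, the scc.yaml created at the start of this section could look like the following minimal sketch. The restricted-pvselinux name and the s0:c26,c0 MCS level come from the commands in this procedure; the remaining fields are assumptions modeled on the built-in restricted SCC:

```yaml
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: restricted-pvselinux
seLinuxContext:
  type: MustRunAs
  seLinuxOptions:
    level: s0:c26,c0        # MCS level of the openshift-storage project
runAsUser:
  type: MustRunAsRange
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - projected
  - secret
```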
Patch the legacy application deployment so that it uses the newly created service account. This allows you to specify the SELinux label in the deployment:
$ oc patch dc/<pod_name> --patch '{"spec":{"template":{"spec":{"serviceAccountName": "<service_account_name>"}}}}'

For example:
$ oc patch dc/cephfs-write-workload-generator-no-cache --patch '{"spec":{"template":{"spec":{"serviceAccountName": "testnamespacesa"}}}}'

Edit the deployment to specify the SELinux label in the security context of the deployment configuration:
$ oc edit dc <pod_name> -n <application_namespace>

Add the following lines:

<security_context_value>
    You can find this value when you execute the command to create a dedicated folder for S3 inside the NSFS share, on the CephFS PV and PVC of the legacy application pod.
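The lines to add could look like the following minimal sketch; the placement under the pod template's securityContext is an assumption consistent with the `grep -A 2 securityContext` output shown in the verification step:

```yaml
spec:
  template:
    spec:
      securityContext:
        seLinuxOptions:
          level: <security_context_value>
```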
For example:
$ oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace
Ensure that the SELinux label in the security context of the deployment configuration is specified correctly:

$ oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext

For example:
$ oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext
      securityContext:
        seLinuxOptions:
          level: s0:c26,c0

The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace.