Managing hybrid and multicloud resources


Red Hat OpenShift Data Foundation 4.18

Instructions for how to manage storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa).

Red Hat Storage Documentation Team

Abstract

This document explains how to manage storage resources across a hybrid cloud or multicloud environment.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Providing feedback on Red Hat documentation

We appreciate your input on our documentation. Do let us know how we can make it better.

To give feedback, create a Jira ticket:

  1. Log in to Jira.
  2. Click Create in the top navigation bar.
  3. Enter a descriptive title in the Summary field.
  4. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
  5. Select Documentation in the Components field.
  6. Click Create at the bottom of the dialogue.

Chapter 1. About the Multicloud Object Gateway

The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage.

Chapter 2. Accessing the Multicloud Object Gateway with your applications

You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information.

For information on accessing the RADOS Object Gateway (RGW) S3 endpoint, see Accessing the RADOS Object Gateway S3 endpoint.

Prerequisites

  • A running OpenShift Data Foundation Platform.

2.1. Accessing the Multicloud Object Gateway from the terminal

Procedure

Run the describe command to view information about the Multicloud Object Gateway (MCG) endpoint, including its access key (AWS_ACCESS_KEY_ID value) and secret access key (AWS_SECRET_ACCESS_KEY value).

# oc describe noobaa -n openshift-storage

The output will look similar to the following:

Name:         noobaa
Namespace:    openshift-storage
Labels:       <none>
Annotations:  <none>
API Version:  noobaa.io/v1alpha1
Kind:         NooBaa
Metadata:
  Creation Timestamp:  2019-07-29T16:22:06Z
  Generation:          1
  Resource Version:    6718822
  Self Link:           /apis/noobaa.io/v1alpha1/namespaces/openshift-storage/noobaas/noobaa
  UID:                 019cfb4a-b21d-11e9-9a02-06c8de012f9e
Spec:
Status:
  Accounts:
    Admin:
      Secret Ref:
        Name:           noobaa-admin
        Namespace:      openshift-storage
  Actual Image:         noobaa/noobaa-core:4.0
  Observed Generation:  1
  Phase:                Ready
  Readme:

    Welcome to NooBaa!
    -----------------
    NooBaa Core Version:
    NooBaa Operator Version:

    Lets get started:

    1. Connect to Management console:

      Read your mgmt console login information (email & password) from secret: "noobaa-admin".

        kubectl get secret noobaa-admin -n openshift-storage -o json | jq '.data|map_values(@base64d)'

      Open the management console service - take External IP/DNS or Node Port or use port forwarding:

        kubectl port-forward -n openshift-storage service/noobaa-mgmt 11443:443 &
        open https://localhost:11443

    2. Test S3 client:

      kubectl port-forward -n openshift-storage service/s3 10443:443 &
      NOOBAA_ACCESS_KEY=$(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d') 1
      NOOBAA_SECRET_KEY=$(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d') 2
      alias s3='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3'
      s3 ls


    Services:
      Service Mgmt:
        External DNS:
          https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com
          https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443
        Internal DNS:
          https://noobaa-mgmt.openshift-storage.svc:443
        Internal IP:
          https://172.30.235.12:443
        Node Ports:
          https://10.0.142.103:31385
        Pod Ports:
          https://10.131.0.19:8443
      Service S3:
        External DNS: 3
          https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com
          https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443
        Internal DNS:
          https://s3.openshift-storage.svc:443
        Internal IP:
          https://172.30.86.41:443
        Node Ports:
          https://10.0.142.103:31011
        Pod Ports:
          https://10.131.0.19:6443
1  access key (AWS_ACCESS_KEY_ID value)
2  secret access key (AWS_SECRET_ACCESS_KEY value)
3  MCG endpoint
Note

The output from the oc describe noobaa command lists the internal and external DNS names that are available. When using the internal DNS, the traffic is free. The external DNS uses a load balancer to process the traffic, and therefore has a cost per hour.
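
If you need only the S3 endpoint and credentials, you can also extract them directly. This is a minimal sketch that assumes the default noobaa-admin secret and the s3 route created by the operator, both of which appear in the output above:

# oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}'
# oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
# oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d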

2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface

Prerequisites

  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure

Run the status command to access the endpoint, access key, and secret access key:

noobaa status -n openshift-storage

The output will look similar to the following:

INFO[0000] Namespace: openshift-storage
INFO[0000]
INFO[0000] CRD Status:
INFO[0003] ✅ Exists: CustomResourceDefinition "noobaas.noobaa.io"
INFO[0003] ✅ Exists: CustomResourceDefinition "backingstores.noobaa.io"
INFO[0003] ✅ Exists: CustomResourceDefinition "bucketclasses.noobaa.io"
INFO[0004] ✅ Exists: CustomResourceDefinition "objectbucketclaims.objectbucket.io"
INFO[0004] ✅ Exists: CustomResourceDefinition "objectbuckets.objectbucket.io"
INFO[0004]
INFO[0004] Operator Status:
INFO[0004] ✅ Exists: Namespace "openshift-storage"
INFO[0004] ✅ Exists: ServiceAccount "noobaa"
INFO[0005] ✅ Exists: Role "ocs-operator.v0.0.271-6g45f"
INFO[0005] ✅ Exists: RoleBinding "ocs-operator.v0.0.271-6g45f-noobaa-f9vpj"
INFO[0006] ✅ Exists: ClusterRole "ocs-operator.v0.0.271-fjhgh"
INFO[0006] ✅ Exists: ClusterRoleBinding "ocs-operator.v0.0.271-fjhgh-noobaa-pdxn5"
INFO[0006] ✅ Exists: Deployment "noobaa-operator"
INFO[0006]
INFO[0006] System Status:
INFO[0007] ✅ Exists: NooBaa "noobaa"
INFO[0007] ✅ Exists: StatefulSet "noobaa-core"
INFO[0007] ✅ Exists: Service "noobaa-mgmt"
INFO[0008] ✅ Exists: Service "s3"
INFO[0008] ✅ Exists: Secret "noobaa-server"
INFO[0008] ✅ Exists: Secret "noobaa-operator"
INFO[0008] ✅ Exists: Secret "noobaa-admin"
INFO[0009] ✅ Exists: StorageClass "openshift-storage.noobaa.io"
INFO[0009] ✅ Exists: BucketClass "noobaa-default-bucket-class"
INFO[0009] ✅ (Optional) Exists: BackingStore "noobaa-default-backing-store"
INFO[0010] ✅ (Optional) Exists: CredentialsRequest "noobaa-cloud-creds"
INFO[0010] ✅ (Optional) Exists: PrometheusRule "noobaa-prometheus-rules"
INFO[0010] ✅ (Optional) Exists: ServiceMonitor "noobaa-service-monitor"
INFO[0011] ✅ (Optional) Exists: Route "noobaa-mgmt"
INFO[0011] ✅ (Optional) Exists: Route "s3"
INFO[0011] ✅ Exists: PersistentVolumeClaim "db-noobaa-core-0"
INFO[0011] ✅ System Phase is "Ready"
INFO[0011] ✅ Exists:  "noobaa-admin"

#------------------#
#- Mgmt Addresses -#
#------------------#

ExternalDNS : [https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443]
ExternalIP  : []
NodePorts   : [https://10.0.142.103:31385]
InternalDNS : [https://noobaa-mgmt.openshift-storage.svc:443]
InternalIP  : [https://172.30.235.12:443]
PodPorts    : [https://10.131.0.19:8443]

#--------------------#
#- Mgmt Credentials -#
#--------------------#

email    : admin@noobaa.io
password : HKLbH1rSuVU0I/souIkSiA==

#----------------#
#- S3 Addresses -#
#----------------#

ExternalDNS : [https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443] 1
ExternalIP  : []
NodePorts   : [https://10.0.142.103:31011]
InternalDNS : [https://s3.openshift-storage.svc:443]
InternalIP  : [https://172.30.86.41:443]
PodPorts    : [https://10.131.0.19:6443]

#------------------#
#- S3 Credentials -#
#------------------#

AWS_ACCESS_KEY_ID     : jVmAsu9FsvRHYmfjTiHV 2
AWS_SECRET_ACCESS_KEY : E//420VNedJfATvVSmDz6FMtsSAzuBv6z180PT5c 3

#------------------#
#- Backing Stores -#
#------------------#

NAME                           TYPE     TARGET-BUCKET                                               PHASE   AGE
noobaa-default-backing-store   aws-s3   noobaa-backing-store-15dc896d-7fe0-4bed-9349-5942211b93c9   Ready   141h35m32s

#------------------#
#- Bucket Classes -#
#------------------#

NAME                          PLACEMENT                                                             PHASE   AGE
noobaa-default-bucket-class   {Tiers:[{Placement: BackingStores:[noobaa-default-backing-store]}]}   Ready   141h35m33s

#-----------------#
#- Bucket Claims -#
#-----------------#

No OBC's found.
1  endpoint
2  access key
3  secret access key

You now have the relevant endpoint, access key, and secret access key needed to connect your applications.

For example:

If the AWS S3 CLI is the application, the following command lists the buckets in OpenShift Data Foundation:

AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> \
AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> \
aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls
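
As a quick end-to-end check, you can export the credentials and then create a bucket and copy an object into it. The bucket and file names below are only placeholders:

export AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
aws --endpoint <ENDPOINT> --no-verify-ssl s3 mb s3://my-test-bucket
aws --endpoint <ENDPOINT> --no-verify-ssl s3 cp ./test-file.txt s3://my-test-bucket/
aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls s3://my-test-bucket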

2.3. Support of Multicloud Object Gateway data bucket APIs

The following table lists the Multicloud Object Gateway (MCG) data bucket APIs and their support levels.

Data buckets                 Support               Comments
List buckets                 Supported
Delete bucket                Supported             Replication configuration is part of MCG bucket class configuration
Create bucket                Supported             A different set of canned ACLs
Post bucket                  Not supported
Put bucket                   Partially supported   Replication configuration is part of MCG bucket class configuration
Bucket lifecycle             Partially supported   Object expiration only
Policy (Buckets, Objects)    Partially supported   Bucket policies are supported
Bucket Website               Supported
Bucket ACLs (Get, Put)       Supported             A different set of canned ACLs
Bucket Location              Partially supported   Returns a default value only
Bucket Notification          Not supported
Bucket Object Versions       Supported
Get Bucket Info (HEAD)       Supported
Bucket Request Payment       Partially supported   Returns the bucket owner
Put Object                   Supported
Delete Object                Supported
Get Object                   Supported
Object ACLs (Get, Put)       Supported
Get Object Info (HEAD)       Supported
POST Object                  Supported
Copy Object                  Supported
Multipart Uploads            Supported
Object Tagging               Supported
Storage Class                Not supported

Note

No support for the cors, metrics, inventory, analytics, logging, notifications, accelerate, replication, request payment, or locks verbs.
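
For example, the following aws-cli calls exercise one of the supported operations (object tagging) against the MCG S3 endpoint and credentials from Chapter 2. The bucket name, object key, and tag values are only placeholders:

aws --endpoint <ENDPOINT> --no-verify-ssl s3api put-object-tagging --bucket <bucket-name> --key <object-key> --tagging 'TagSet=[{Key=team,Value=storage}]'
aws --endpoint <ENDPOINT> --no-verify-ssl s3api get-object-tagging --bucket <bucket-name> --key <object-key>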

Chapter 3. Adding storage resources for hybrid or Multicloud

3.1. Creating a new backing store

Use this procedure to create a new backing store in OpenShift Data Foundation.

Prerequisites

  • Administrator access to OpenShift Data Foundation.

Procedure

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Backing Store tab.
  3. Click Create Backing Store.
  4. On the Create New Backing Store page, perform the following:

    1. Enter a Backing Store Name.
    2. Select a Provider.
    3. Select a Region.
    4. Optional: Enter an Endpoint.
    5. Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets.

      For more information on creating an OCP secret, see the section Creating the secret in the OpenShift Container Platform documentation.

      Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see Section 3.3, “Adding storage resources for hybrid or Multicloud using the MCG command line interface” and follow the procedure for the addition of storage resources using a YAML.

      Note

      This menu is relevant for all providers except Google Cloud and local PVC.

    6. Enter the Target bucket. The target bucket is a container for storage that is hosted on the remote cloud service. It creates a connection that tells the MCG that it can use this bucket for the system.
  5. Click Create Backing Store.

Verification steps

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Backing Store tab to view all the backing stores.

3.2. Overriding the default backing store

You can use the manualDefaultBackingStore flag to override the default NooBaa backing store and remove it if you do not want to use the default backing store configuration. This gives you the flexibility to customize your backing store configuration for your environment.

Prerequisites

  • OpenShift Container Platform with the OpenShift Data Foundation operator installed.
  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure

  1. Check if noobaa-default-backing-store is present:

    $ oc get backingstore
    NAME                           TYPE      PHASE      AGE
    noobaa-default-backing-store   pv-pool   Creating   102s
  2. Patch the NooBaa CR to enable manualDefaultBackingStore:

    $ oc patch noobaa/noobaa --type json --patch='[{"op":"add","path":"/spec/manualDefaultBackingStore","value":true}]'
    Important

    Use the Multicloud Object Gateway CLI to create a new backing store and update accounts.

  3. Create a new default backing store to override the default backing store. For example:

    $ noobaa backingstore create pv-pool NEW-DEFAULT-BACKING-STORE --num-volumes 1 --pv-size-gb 16
    1. Replace NEW-DEFAULT-BACKING-STORE with the name you want for your new default backing store.
  4. Update the admin account to use the new default backing store as its default resource:

    $ noobaa account update admin@noobaa.io --new_default_resource=NEW-DEFAULT-BACKING-STORE
    1. Replace NEW-DEFAULT-BACKING-STORE with the name of the backing store from the previous step.

      Updating the default resource for admin accounts ensures that the new configuration is used throughout your system.

  5. Configure the default-bucketclass to use the new default backingstore:

    $ oc patch Bucketclass noobaa-default-bucket-class -n openshift-storage --type=json --patch='[{"op": "replace", "path": "/spec/placementPolicy/tiers/0/backingStores/0", "value": "NEW-DEFAULT-BACKING-STORE"}]'
  6. Optional: Delete the noobaa-default-backing-store.

    1. Delete all instances of, and buckets associated with, noobaa-default-backing-store, and update the accounts that use it as their resource.
    2. Delete the noobaa-default-backing-store:

      $ oc delete backingstore noobaa-default-backing-store -n openshift-storage | oc patch -n openshift-storage backingstore/noobaa-default-backing-store --type json --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'

      You must enable the manualDefaultBackingStore flag before proceeding. Additionally, it is crucial to update all accounts that use the default resource, and to delete all instances of, and buckets associated with, the default backing store to ensure a smooth transition.
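
To verify that the override took effect, you can check the backing stores and the placement of the default bucket class, for example:

$ oc get backingstore -n openshift-storage
$ oc get bucketclass noobaa-default-bucket-class -n openshift-storage -o jsonpath='{.spec.placementPolicy.tiers[0].backingStores}'

The second command should list only NEW-DEFAULT-BACKING-STORE.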

3.3. Adding storage resources for hybrid or Multicloud using the MCG command line interface

The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud providers and clusters.

Add a backing store that can be used by the MCG.

Depending on the type of your deployment, you can choose one of the following procedures to create a backing store:

For VMware deployments, skip to Section 3.4, “Creating an s3 compatible Multicloud Object Gateway backingstore” for further instructions.

3.3.1. Creating an AWS-backed backingstore

Prerequisites

  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure

Using MCG command-line interface

  • From the MCG command-line interface, run the following command:

    noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage
    <backingstore_name>
    The name of the backingstore.
    <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY>
    The AWS access key ID and secret access key you created for this purpose.
    <bucket-name>

    The existing AWS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.

    The output will be similar to the following:

    INFO[0001] ✅ Exists: NooBaa "noobaa"
    INFO[0002] ✅ Created: BackingStore "aws-resource"
    INFO[0002] ✅ Created: Secret "backing-store-secret-aws-resource"

Adding storage resources using a YAML

  1. Create a secret with the credentials:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <backingstore-secret-name>
      namespace: openshift-storage
    type: Opaque
    data:
      AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64>
      AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>
    <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64>
    Supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of these attributes respectively.
    <backingstore-secret-name>
    A unique name for the backingstore secret.
  2. Apply the following YAML for a specific backing store:

    apiVersion: noobaa.io/v1alpha1
    kind: BackingStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: bs
      namespace: openshift-storage
    spec:
      awsS3:
        secret:
          name: <backingstore-secret-name>
          namespace: openshift-storage
        targetBucket: <bucket-name>
      type: aws-s3
    <bucket-name>
    The existing AWS bucket name.
    <backingstore-secret-name>
    The name of the backingstore secret created in the previous step.
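
To apply the YAML examples above, encode the credentials, save each manifest to a file, and create the resources. The file names below are only placeholders:

$ echo -n '<AWS ACCESS KEY ID>' | base64
$ echo -n '<AWS SECRET ACCESS KEY>' | base64
$ oc apply -f secret.yaml
$ oc apply -f backingstore.yaml
$ oc get backingstore -n openshift-storage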

3.3.2. Creating an AWS-STS-backed backingstore

Amazon Web Services Security Token Service (AWS STS) is an AWS feature that provides a way to authenticate using short-lived credentials. Creating an AWS-STS-backed backingstore involves the following:

  • Creating an AWS role using a script, which helps to get the temporary security credentials for the role session
  • Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster
  • Creating backingstore in AWS STS OpenShift cluster
3.3.2.1. Creating an AWS role using a script

You need to create a role and pass the role Amazon resource name (ARN) while installing the OpenShift Data Foundation operator.

Prerequisites

Procedure

  • Create an AWS role using a script that matches OpenID Connect (OIDC) configuration for Multicloud Object Gateway (MCG) on OpenShift Data Foundation.

    The following example shows the details that are required to create the role:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Federated": "arn:aws:iam::123456789123:oidc-provider/mybucket-oidc.s3.us-east-2.amazonaws.com"
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                    "StringEquals": {
                        "mybucket-oidc.s3.us-east-2.amazonaws.com:sub": [
                            "system:serviceaccount:openshift-storage:noobaa",
                            "system:serviceaccount:openshift-storage:noobaa-core",
                            "system:serviceaccount:openshift-storage:noobaa-endpoint"
                        ]
                    }
                }
            }
        ]
    }

    where

    123456789123
    Is the AWS account ID
    mybucket
    Is the bucket name (using public bucket configuration)
    us-east-2
    Is the AWS region
    openshift-storage
    Is the namespace name

Sample script

#!/bin/bash
set -x

# This is a sample script to help you deploy MCG on AWS STS cluster.
# This script shows how to create role-policy and then create the role in AWS.
# For more information see: https://docs.openshift.com/rosa/authentication/assuming-an-aws-iam-role-for-a-service-account.html

# WARNING: This is a sample script. You need to adjust the variables based on your requirement.

# Variables :
# user variables - REPLACE these variables with your values:
ROLE_NAME="<role-name>" # role name that you pick in your AWS account
NAMESPACE="<namespace>" # namespace name where MCG is running. For OpenShift Data Foundation, it is openshift-storage.

# MCG variables
SERVICE_ACCOUNT_NAME_1="noobaa" # The service account name of deployment operator
SERVICE_ACCOUNT_NAME_2="noobaa-endpoint" # The service account name of deployment endpoint
SERVICE_ACCOUNT_NAME_3="noobaa-core" # The service account name of statefulset core

# AWS variables
# Make sure these values are not empty (AWS_ACCOUNT_ID, OIDC_PROVIDER)
# AWS_ACCOUNT_ID is your AWS account number
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
# If you want to create the role before using the cluster, replace this field too.
# The OIDC provider is in the structure:
# 1) <OIDC-bucket>.s3.<aws-region>.amazonaws.com for OIDC bucket configurations in an S3 public bucket
# 2) <characters>.cloudfront.net for OIDC bucket configurations in an S3 private bucket with a public CloudFront distribution URL
OIDC_PROVIDER=$(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e "s/^https:\/\///")
# the permission (S3 full access)
POLICY_ARN_STRINGS="arn:aws:iam::aws:policy/AmazonS3FullAccess"

# Creating the role (with AWS command line interface)

read -r -d '' TRUST_RELATIONSHIP <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": [
            "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME_1}",
            "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME_2}",
            "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME_3}"
          ]
        }
      }
    }
  ]
}
EOF

echo "${TRUST_RELATIONSHIP}" > trust.json

aws iam create-role --role-name "$ROLE_NAME" --assume-role-policy-document file://trust.json --description "role for demo"

while IFS= read -r POLICY_ARN; do
  echo -n "Attaching $POLICY_ARN ... "
  aws iam attach-role-policy \
    --role-name "$ROLE_NAME" \
    --policy-arn "${POLICY_ARN}"
  echo "ok."
done <<< "$POLICY_ARN_STRINGS"
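
After the script completes, you can confirm that the role exists and has the policy attached, and capture the role ARN that is required later during the operator installation:

$ aws iam get-role --role-name "<role-name>" --query 'Role.Arn' --output text
$ aws iam list-attached-role-policies --role-name "<role-name>"
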
3.3.2.2. Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster

Prerequisites

Procedure

  • Install OpenShift Data Foundation Operator from the Operator Hub.

    • During the installation add the role ARN in the ARN Details field.
    • Make sure that the Update approval field is set to Manual.
3.3.2.3. Creating a new AWS STS backingstore

Prerequisites

Procedure

  1. Install Multicloud Object Gateway (MCG).

    MCG is installed with the default backingstore, which uses the short-lived credentials.

  2. After the MCG system is ready, you can create more backingstores of the type aws-sts-s3 using the following MCG command line interface command:

    $ noobaa backingstore create aws-sts-s3 <backingstore-name> --aws-sts-arn=<aws-sts-role-arn> --region=<region> --target-bucket=<target-bucket>

    where

    backingstore-name
    Name of the backingstore
    aws-sts-role-arn
    The ARN of the AWS STS role to assume
    region
    The AWS bucket region
    target-bucket
    The target bucket name on the cloud

3.3.3. Creating an IBM COS-backed backingstore

Prerequisites

  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure

Using command-line interface

  1. From the MCG command-line interface, run the following command:

    noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage
    <backingstore_name>
    The name of the backingstore.
    <IBM ACCESS KEY>, <IBM SECRET ACCESS KEY>, and <IBM COS ENDPOINT>

    An IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket.

    To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket.

    <bucket-name>

    An existing IBM bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.

    The output will be similar to the following:

    INFO[0001] ✅ Exists: NooBaa "noobaa"
    INFO[0002] ✅ Created: BackingStore "ibm-resource"
    INFO[0002] ✅ Created: Secret "backing-store-secret-ibm-resource"

Adding storage resources using a YAML

  1. Create a secret with the credentials:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <backingstore-secret-name>
      namespace: openshift-storage
    type: Opaque
    data:
      IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64>
      IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>
    <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>
    Provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of these attributes respectively.
    <backingstore-secret-name>
    The name of the backingstore secret.
  2. Apply the following YAML for a specific backing store:

    apiVersion: noobaa.io/v1alpha1
    kind: BackingStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: bs
      namespace: openshift-storage
    spec:
      ibmCos:
        endpoint: <endpoint>
        secret:
          name: <backingstore-secret-name>
          namespace: openshift-storage
        targetBucket: <bucket-name>
      type: ibm-cos
    <bucket-name>
    An existing IBM COS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration.
    <endpoint>
    A regional endpoint that corresponds to the location of the existing IBM bucket name. This argument indicates to the MCG which endpoint to use for its backingstore, and subsequently, data storage and administration.
    <backingstore-secret-name>
    The name of the secret created in the previous step.

3.3.4. Creating an Azure-backed backingstore

Prerequisites

  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure

Using the MCG command-line interface

  • From the MCG command-line interface, run the following command:

    noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name> -n openshift-storage
    <backingstore_name>
    The name of the backingstore.
    <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME>
    An Azure account key and account name that you created for this purpose.
    <blob container name>

    An existing Azure blob container name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration.

    The output will be similar to the following:

    INFO[0001] ✅ Exists: NooBaa "noobaa"
    INFO[0002] ✅ Created: BackingStore "azure-resource"
    INFO[0002] ✅ Created: Secret "backing-store-secret-azure-resource"

Adding storage resources using a YAML

  1. Create a secret with the credentials:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <backingstore-secret-name>
    type: Opaque
    data:
      AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64>
      AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>
    <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64>
    Supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of these attributes respectively.
    <backingstore-secret-name>
    A unique name of backingstore secret.
  2. Apply the following YAML for a specific backing store:

    apiVersion: noobaa.io/v1alpha1
    kind: BackingStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: bs
      namespace: openshift-storage
    spec:
      azureBlob:
        secret:
          name: <backingstore-secret-name>
          namespace: openshift-storage
        targetBlobContainer: <blob-container-name>
      type: azure-blob
    <blob-container-name>
    An existing Azure blob container name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration.
    <backingstore-secret-name>
    The name of the secret created in the previous step.

3.3.5. Creating a GCP-backed backingstore

Prerequisites

  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure

Using the MCG command-line interface

  • From the MCG command-line interface, run the following command:

    noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage
    <backingstore_name>
    Name of the backingstore.
    <PATH TO GCP PRIVATE KEY JSON FILE>
    A path to your GCP private key created for this purpose.
    <GCP bucket name>

    An existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.

    The output will be similar to the following:

    INFO[0001] ✅ Exists: NooBaa "noobaa"
    INFO[0002] ✅ Created: BackingStore "google-gcp"
    INFO[0002] ✅ Created: Secret "backing-store-google-cloud-storage-gcp"

Adding storage resources using a YAML

  1. Create a secret with the credentials:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <backingstore-secret-name>
    type: Opaque
    data:
      GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>
    <GCP PRIVATE KEY ENCODED IN BASE64>
    Provide and encode your own GCP service account private key using Base64, and use the results for this attribute.
    <backingstore-secret-name>
    A unique name of the backingstore secret.
  2. Apply the following YAML for a specific backing store:

    apiVersion: noobaa.io/v1alpha1
    kind: BackingStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: bs
      namespace: openshift-storage
    spec:
      googleCloudStorage:
        secret:
          name: <backingstore-secret-name>
          namespace: openshift-storage
        targetBucket: <target bucket>
      type: google-cloud-storage
    <target bucket>
    An existing Google storage bucket. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
    <backingstore-secret-name>
    The name of the secret created in the previous step.
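
As an alternative to encoding the key manually, you can let the cluster encode it by creating the secret directly from the JSON key file. This is a minimal sketch; the data key matches the GoogleServiceAccountPrivateKeyJson attribute used in the YAML above:

$ oc create secret generic <backingstore-secret-name> -n openshift-storage --from-file=GoogleServiceAccountPrivateKeyJson=<PATH TO GCP PRIVATE KEY JSON FILE>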

3.3.6. Creating a local Persistent Volume-backed backingstore

Prerequisites

  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure

Adding storage resources using the MCG command-line interface

  • From the MCG command-line interface, run the following command:

    Note

    This command must be run from within the openshift-storage namespace.

    $ noobaa -n openshift-storage backingstore create pv-pool <backingstore_name> --num-volumes <NUMBER OF VOLUMES>  --pv-size-gb <VOLUME SIZE> --request-cpu <CPU REQUEST> --request-memory <MEMORY REQUEST> --limit-cpu <CPU LIMIT> --limit-memory <MEMORY LIMIT> --storage-class <LOCAL STORAGE CLASS>
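
    For example, a minimal invocation that creates a three-volume pool of 50 GB volumes on the recommended storage class might look like the following; the backingstore name is only a placeholder:

    $ noobaa -n openshift-storage backingstore create pv-pool my-pv-backingstore --num-volumes 3 --pv-size-gb 50 --storage-class ocs-storagecluster-ceph-rbd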

Adding storage resources using YAML

  • Apply the following YAML for a specific backing store:

    apiVersion: noobaa.io/v1alpha1
    kind: BackingStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: <backingstore_name>
      namespace: openshift-storage
    spec:
       pvPool:
        numVolumes: <NUMBER OF VOLUMES>
        resources:
          requests:
            storage: <VOLUME SIZE>
            cpu: <CPU REQUEST>
            memory: <MEMORY REQUEST>
          limits:
            cpu: <CPU LIMIT>
            memory: <MEMORY LIMIT>
        storageClass: <LOCAL STORAGE CLASS>
      type: pv-pool
    <backingstore_name>
    The name of the backingstore.
    <NUMBER OF VOLUMES>
    The number of volumes you would like to create. Note that increasing the number of volumes scales up the storage.
    <VOLUME SIZE>
    Required size in GB of each volume.
    <CPU REQUEST>
    Guaranteed amount of CPU requested in CPU unit m.
    <MEMORY REQUEST>
    Guaranteed amount of memory requested.
    <CPU LIMIT>
    Maximum amount of CPU that can be consumed in CPU unit m.
    <MEMORY LIMIT>
    Maximum amount of memory that can be consumed.
    <LOCAL STORAGE CLASS>

    The local storage class name. It is recommended to use ocs-storagecluster-ceph-rbd.

    The output will be similar to the following:

    INFO[0001] ✅ Exists: NooBaa "noobaa"
    INFO[0002] ✅ Exists: BackingStore "local-mcg-storage"

3.4. Creating an s3 compatible Multicloud Object Gateway backingstore

The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage’s RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage’s RGW. Note that when the RGW is deployed, OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically.

Procedure

  1. From the MCG command-line interface, run the following command:

    Note

    This command must be run from within the openshift-storage namespace.

    noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage
    1. To get the <RGW ACCESS KEY> and <RGW SECRET KEY>, run the following command using your RGW user secret name:

      oc get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage
    2. Decode the access key ID and the access key from Base64 and keep them.
    3. Replace <RGW ACCESS KEY> and <RGW SECRET KEY> with the appropriate, decoded data from the previous step.
    4. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
    5. To get the <RGW endpoint>, see Accessing the RADOS Object Gateway S3 endpoint.

      The output will be similar to the following:

      INFO[0001] ✅ Exists: NooBaa "noobaa"
      INFO[0002] ✅ Created: BackingStore "rgw-resource"
      INFO[0002] ✅ Created: Secret "backing-store-secret-rgw-resource"
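
To extract the RGW access and secret keys non-interactively, you can decode them directly from the user secret. This is a minimal sketch that assumes the Rook-created object user secret stores the credentials under the AccessKey and SecretKey data keys:

oc get secret <RGW USER SECRET NAME> -n openshift-storage -o jsonpath='{.data.AccessKey}' | base64 -d
oc get secret <RGW USER SECRET NAME> -n openshift-storage -o jsonpath='{.data.SecretKey}' | base64 -d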

You can also create the backingstore using a YAML:

  1. Create a CephObjectStore user. This also creates a secret containing the RGW credentials:

    apiVersion: ceph.rook.io/v1
    kind: CephObjectStoreUser
    metadata:
      name: <RGW-Username>
      namespace: openshift-storage
    spec:
      store: ocs-storagecluster-cephobjectstore
      displayName: "<Display-name>"
    1. Replace <RGW-Username> and <Display-name> with a unique username and display name.
  2. Apply the following YAML for an S3-Compatible backing store:

    apiVersion: noobaa.io/v1alpha1
    kind: BackingStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: <backingstore-name>
      namespace: openshift-storage
    spec:
      s3Compatible:
        endpoint: <RGW endpoint>
        secret:
          name: <backingstore-secret-name>
          namespace: openshift-storage
        signatureVersion: v4
        targetBucket: <RGW-bucket-name>
      type: s3-compatible
    1. Replace <backingstore-secret-name> with the name of the secret that was created with the CephObjectStoreUser in the previous step.
    2. Replace <RGW-bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
    3. To get the <RGW endpoint>, see Accessing the RADOS Object Gateway S3 endpoint.

3.5. Creating a new bucket class

A bucket class is a CRD that represents a class of buckets defining tiering policies and data placements for an Object Bucket Claim (OBC).

Use this procedure to create a bucket class in OpenShift Data Foundation.

Procedure

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Bucket Class tab.
  3. Click Create Bucket Class.
  4. On the Create new Bucket Class page, perform the following:

    1. Select the bucket class type and enter a bucket class name.

      1. Select the BucketClass type. Choose one of the following options:

        • Standard: data is consumed by the Multicloud Object Gateway (MCG), deduplicated, compressed, and encrypted.
        • Namespace: data is stored on the NamespaceStores without performing deduplication, compression, or encryption.

          By default, Standard is selected.

      2. Enter a Bucket Class Name.
      3. Click Next.
    2. In Placement Policy, select Tier 1 - Policy Type and click Next. You can choose either of the options as per your requirements.

      • Spread allows spreading of the data across the chosen resources.
      • Mirror allows full duplication of the data across the chosen resources.
      • Click Add Tier to add another policy tier.
    3. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click Next. Alternatively, you can also create a new backing store.

      Note

      You need to select at least two backing stores when you select the Policy Type as Mirror in the previous step.

    4. Review and confirm Bucket Class settings.
    5. Click Create Bucket Class.

Verification steps

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Bucket Class tab and search for the new Bucket Class.

3.6. Editing a bucket class

Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the OpenShift web console.

Prerequisites

  • Administrator access to OpenShift Web Console.

Procedure

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Bucket Class tab.
  3. Click the Action Menu (⋮) next to the Bucket class you want to edit.
  4. Click Edit Bucket Class.
  5. You are redirected to the YAML file. Make the required changes in this file and click Save.

3.7. Editing backing stores for bucket class

Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class.

Prerequisites

  • Administrator access to OpenShift Web Console.
  • A bucket class.
  • Backing stores.

Procedure

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Bucket Class tab.
  3. Click the Action Menu (⋮) next to the Bucket class you want to edit.
  4. Click Edit Bucket Class Resources.
  5. On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies.

    • To add a backing store to the bucket class, select the name of the backing store.
    • To remove a backing store from the bucket class, uncheck the name of the backing store.
  6. Click Save.

Chapter 4. Creating and managing buckets using MCG object browser

The Multicloud Object Gateway (MCG) object browser within the OpenShift console enables you to create and manage buckets to keep your data organized and accessible. You can navigate to your S3 and OpenShift managed buckets and perform the following activities:

  • Add new objects
  • View objects
  • Download objects
  • Share objects

4.1. Creating new buckets using MCG object browser

Prerequisites

  • Administrator access to OpenShift Data Foundation.
  • Make sure the MCG cluster is deployed and the noobaa-endpoint-* pod is running in the MCG namespace.

Procedure

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Buckets tab.
  3. Select one of the following options to create the buckets:

    • Create via Object Bucket Claim

      1. Select a namespace.
      2. Enter the ObjectBucketClaim name.
      3. Select the storage class.
      4. Select the bucket class.
      5. Select Enable replication to obtain higher resiliency for the objects stored in the buckets.
      6. Click Create using ObjectBucketClaim.
    • Create via S3 API

      1. Enter a name for the bucket.
      2. Add tags with different key and value criteria to tag your bucket.
      3. Click Create.

4.2. Listing bucket details using MCG object browser

Prerequisites

  • Administrator access to OpenShift Data Foundation.
  • Make sure the MCG cluster is deployed and the noobaa-endpoint-* pod is running in the MCG namespace.

Procedure

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Buckets tab.

    All the MCG buckets that are created are listed on the page.

  3. Select the required bucket.
  4. Click the Details tab.

4.3. Listing objects and object details using MCG object browser

Prerequisites

  • Administrator access to OpenShift Data Foundation.
  • Make sure the MCG cluster is deployed and the noobaa-endpoint-* pod is running in the MCG namespace.

Procedure

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Buckets tab and select the required bucket.
  3. Click the Objects tab.
  4. Select the required object.

    A sidebar pops up with the object details.

4.4. Downloading or previewing objects using MCG object browser

Prerequisites

  • Administrator access to OpenShift Data Foundation.
  • Make sure the MCG cluster is deployed and the noobaa-endpoint-* pod is running in the MCG namespace.

Procedure

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Buckets tab and select the required bucket.
  3. Select the Objects tab.
  4. In the options menu corresponding to the required object, select either Download or Preview based on your requirement.
Note

Preview is a browser-dependent feature. If the browser does not recognize the file or the object's type or extension, such as .zip, .rar, and so on, and is unable to display it in a separate tab, then the file or the object is downloaded by default.

4.5. Generating the presigned URL to share the object

Prerequisites

  • Administrator access to OpenShift Data Foundation.
  • Make sure the MCG cluster is deployed and the noobaa-endpoint-* pod is running in the MCG namespace.

Procedure

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Buckets tab and select the required bucket.
  3. Select the Objects tab.
  4. In the options menu corresponding to the required object, select Share with presigned URL.
  5. Select the expiration time to set the validity period of the presigned URL.
  6. Click Create.
  7. Copy the generated URL.

4.6. Uploading objects using MCG object browser

Prerequisites

  • Administrator access to OpenShift Data Foundation.
  • Make sure the MCG cluster is deployed and the noobaa-endpoint-* pod is running in the MCG namespace.

Procedure

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Buckets tab and select the required bucket.
  3. Select the Objects tab.
  4. In the Add objects section, do one of the following:

    • To upload a folder, click Upload and select the folder that you want to upload.
    • To upload files, drag and drop the files.

4.7. Creating folder using MCG object browser

Prerequisites

  • Administrator access to OpenShift Data Foundation.
  • Make sure the MCG cluster is deployed and the noobaa-endpoint-* pod is running in the MCG namespace.

Procedure

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Buckets tab and select the required bucket.
  3. Select the Objects tab.
  4. Click Create folder.
  5. Enter a name for the folder that satisfies the requirements.
  6. Click Create.

    You are redirected inside the new folder.

Note

The new folder persists only if you upload objects to it. If you navigate away without uploading the objects, the folder disappears.

4.8. Deleting objects using MCG object browser

Prerequisites

  • Administrator access to OpenShift Data Foundation.
  • Make sure the MCG cluster is deployed and the noobaa-endpoint-* pod is running in the MCG namespace.

Procedure

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Buckets tab and select the required bucket.
  3. Select the Objects tab.
  4. To delete the object, do one of the following:

    • To delete multiple objects together, select multiple objects using the checkbox and then from the Actions menu select Delete.
    • To delete a single object, select the Action menu corresponding to the object and select Delete.
Note

You cannot delete a folder directly.

4.9. Deleting or emptying bucket using MCG object browser

Prerequisites

  • Administrator access to OpenShift Data Foundation.
  • Make sure the MCG cluster is deployed and the noobaa-endpoint-* pod is running in the MCG namespace.

Procedure

  1. In the OpenShift Web Console, click Storage → Object Storage.
  2. Click the Buckets tab.
  3. In the options menu corresponding to the required bucket, select Empty bucket.
  4. Enter the bucket name for confirmation and click Empty bucket.
  5. After the bucket is empty, in the options menu corresponding to the bucket, select Delete bucket.
  6. Enter the bucket name for confirmation and click Delete bucket.

Chapter 5. Managing namespace buckets

Namespace buckets let you connect data repositories on different providers together, so that you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider.

Note

A namespace bucket can only be used if its write target is available and functional.

5.1. Amazon S3 API endpoints for objects in namespace buckets

You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API.

Ensure that the credentials provided for the Multicloud Object Gateway (MCG) enable you to perform the AWS S3 namespace bucket operations. You can use the AWS tool, aws-cli, to verify that all the operations can be performed on the target bucket. Also, listing the buckets of this MCG account shows the target bucket.

Red Hat OpenShift Data Foundation supports the following namespace bucket operations:

See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them.
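
For example, you can run a few representative operations with aws-cli against the MCG S3 endpoint and credentials from Chapter 2. The endpoint, bucket, and object names below are only placeholders:

aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls
aws --endpoint <ENDPOINT> --no-verify-ssl s3 cp ./example.txt s3://<namespace-bucket>/
aws --endpoint <ENDPOINT> --no-verify-ssl s3api head-object --bucket <namespace-bucket> --key example.txt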

5.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML

For more information about namespace buckets, see Managing namespace buckets.

Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket:

5.2.1. Adding an AWS S3 namespace bucket using YAML

Prerequisites

Procedure

  1. Create a secret with the credentials:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <namespacestore-secret-name>
    type: Opaque
    data:
      AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64>
      AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>

    where <namespacestore-secret-name> is a unique name for the NamespaceStore secret.

    You must provide and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64>.

  2. Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs).

    A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets.

    To create a NamespaceStore resource, apply the following YAML:

    apiVersion: noobaa.io/v1alpha1
    kind: NamespaceStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: <resource-name>
      namespace: openshift-storage
    spec:
      awsS3:
        secret:
          name: <namespacestore-secret-name>
          namespace: <namespace-secret>
        targetBucket: <target-bucket>
      type: aws-s3
    <resource-name>
    The name you want to give to the resource.
    <namespacestore-secret-name>
    The secret created in the previous step.
    <namespace-secret>
    The namespace where the secret can be found.
    <target-bucket>
    The target bucket you created for the NamespaceStore.
  3. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.

    • A namespace policy of type single requires the following configuration:

      apiVersion: noobaa.io/v1alpha1
      kind: BucketClass
      metadata:
        labels:
          app: noobaa
        name: <my-bucket-class>
        namespace: openshift-storage
      spec:
        namespacePolicy:
          type: Single
          single:
            resource: <resource>
      <my-bucket-class>
      The unique namespace bucket class name.
      <resource>
      The name of a single NamespaceStore that defines the read and write target of the namespace bucket.
    • A namespace policy of type multi requires the following configuration:

      apiVersion: noobaa.io/v1alpha1
      kind: BucketClass
      metadata:
        labels:
          app: noobaa
        name: <my-bucket-class>
        namespace: openshift-storage
      spec:
        namespacePolicy:
          type: Multi
          multi:
            writeResource: <write-resource>
            readResources:
            - <read-resources>
            - <read-resources>
      <my-bucket-class>
      A unique bucket class name.
      <write-resource>
      The name of a single NamespaceStore that defines the write target of the namespace bucket.
      <read-resources>
      A list of the names of the NamespaceStores that define the read targets of the namespace bucket.
  4. Create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the earlier step, using the following YAML:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: <resource-name>
      namespace: openshift-storage
    spec:
      generateBucketName: <my-bucket>
      storageClassName: openshift-storage.noobaa.io
      additionalConfig:
        bucketclass: <my-bucket-class>
    <resource-name>
    The name you want to give to the resource.
    <my-bucket>
    The name you want to give to the bucket.
    <my-bucket-class>
    The bucket class created in the previous step.

After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC.
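
After the claim is bound, you can read the bucket coordinates and credentials from those generated resources. This is a minimal sketch that assumes the standard keys written by the object bucket provisioner (BUCKET_NAME and BUCKET_HOST in the ConfigMap, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the Secret):

$ oc get configmap <resource-name> -n openshift-storage -o jsonpath='{.data.BUCKET_NAME}'
$ oc get configmap <resource-name> -n openshift-storage -o jsonpath='{.data.BUCKET_HOST}'
$ oc get secret <resource-name> -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
$ oc get secret <resource-name> -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d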

5.2.2. Adding an IBM COS namespace bucket using YAML

Prerequisites

Procedure

  1. Create a secret with the credentials:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <namespacestore-secret-name>
    type: Opaque
    data:
      IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64>
      IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>
    <namespacestore-secret-name>

    A unique name for the NamespaceStore secret.

    You must provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>.

  2. Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs).

    A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets.

    To create a NamespaceStore resource, apply the following YAML:

    apiVersion: noobaa.io/v1alpha1
    kind: NamespaceStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: bs
      namespace: openshift-storage
    spec:
      s3Compatible:
        endpoint: <IBM COS ENDPOINT>
        secret:
          name: <namespacestore-secret-name>
          namespace: <namespace-secret>
        signatureVersion: v2
        targetBucket: <target-bucket>
      type: ibm-cos
    <IBM COS ENDPOINT>
    The appropriate IBM COS endpoint.
    <namespacestore-secret-name>
    The secret created in the previous step.
    <namespace-secret>
    The namespace where the secret can be found.
    <target-bucket>
    The target bucket you created for the NamespaceStore.
  3. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.

    • The namespace policy of type single requires the following configuration:

      apiVersion: noobaa.io/v1alpha1
      kind: BucketClass
      metadata:
        labels:
          app: noobaa
        name: <my-bucket-class>
        namespace: openshift-storage
      spec:
        namespacePolicy:
          type: Single
          single:
            resource: <resource>
      <my-bucket-class>
      The unique namespace bucket class name.
      <resource>
      The name of a single NamespaceStore that defines the read and write target of the namespace bucket.
    • The namespace policy of type multi requires the following configuration:

      apiVersion: noobaa.io/v1alpha1
      kind: BucketClass
      metadata:
        labels:
          app: noobaa
        name: <my-bucket-class>
        namespace: openshift-storage
      spec:
        namespacePolicy:
          type: Multi
          multi:
            writeResource: <write-resource>
            readResources:
            - <read-resources>
            - <read-resources>
      <my-bucket-class>
      The unique bucket class name.
      <write-resource>
      The name of a single NamespaceStore that defines the write target of the namespace bucket.
      <read-resources>
      A list of the names of the NamespaceStores that define the read targets of the namespace bucket.
  4. To create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the previous step, apply the following YAML:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: <resource-name>
      namespace: openshift-storage
    spec:
      generateBucketName: <my-bucket>
      storageClassName: openshift-storage.noobaa.io
      additionalConfig:
        bucketclass: <my-bucket-class>
    <resource-name>
    The name you want to give to the resource.
    <my-bucket>
    The name you want to give to the bucket.
    <my-bucket-class>

    The bucket class created in the previous step.

    After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC.
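
The Base64 values required for the secret in step 1 can be produced with standard shell tools. This is a minimal sketch assuming placeholder credentials; substitute your own IBM COS keys:

    $ echo -n '<IBM COS ACCESS KEY ID>' | base64
    $ echo -n '<IBM COS SECRET ACCESS KEY>' | base64

Use the printed values in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>.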

5.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI

Prerequisites

  • OpenShift Container Platform with OpenShift Data Foundation operator installed.
  • Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications.
  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure

  1. In the MCG command-line interface, create a NamespaceStore resource.

    A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets.

    $ noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage
    <namespacestore>
    The name of the NamespaceStore.
    <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY>
    The AWS access key ID and secret access key you created for this purpose.
    <bucket-name>
    The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
  2. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi.

    • To create a namespace bucket class with a namespace policy of type single:

      $ noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage
      <my-bucket-class>
      A unique bucket class name.
      <resource>
      A single namespace-store that defines the read and write target of the namespace bucket.
    • To create a namespace bucket class with a namespace policy of type multi:

      $ noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage
      <my-bucket-class>
      A unique bucket class name.
      <write-resource>
      A single namespace-store that defines the write target of the namespace bucket.
      <read-resources>
      A comma-separated list of namespace-stores that define the read targets of the namespace bucket.
  3. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the previous step.

    $ noobaa obc create <bucket-name> -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>
    <bucket-name>
    A bucket name of your choice.
    <custom-bucket-class>
    The name of the bucket class created in the previous step.

    After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC.
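
Because NamespaceStore, BucketClass, and ObjectBucketClaim are all custom resources, you can also verify each step with oc. A minimal sketch, assuming the resources were created in the openshift-storage namespace:

    $ oc get namespacestore -n openshift-storage
    $ oc get bucketclass -n openshift-storage
    $ oc get obc -n openshift-storage

Each resource should report a Ready or Bound phase before you move on to the next step.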

5.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI

Prerequisites

  • OpenShift Container Platform with OpenShift Data Foundation operator installed.
  • Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications.
  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

Procedure

  1. In the MCG command-line interface, create a NamespaceStore resource.

    A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets.

    $ noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage
    <namespacestore>
    The name of the NamespaceStore.
    <IBM ACCESS KEY>, <IBM SECRET ACCESS KEY>, <IBM COS ENDPOINT>
    An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket.
    <bucket-name>
    An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.
  2. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi.

    • To create a namespace bucket class with a namespace policy of type single:

      $ noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage
      <my-bucket-class>
      A unique bucket class name.
      <resource>
      A single NamespaceStore that defines the read and write target of the namespace bucket.
    • To create a namespace bucket class with a namespace policy of type multi:

      $ noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage
      <my-bucket-class>
      A unique bucket class name.
      <write-resource>
      A single NamespaceStore that defines the write target of the namespace bucket.
      <read-resources>
      A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket.
  3. Create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the previous step.

    $ noobaa obc create <bucket-name> -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>
    <bucket-name>
    A bucket name of your choice.
    <custom-bucket-class>
    The name of the bucket class created in the previous step.

After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC.

5.3. Adding a namespace bucket using the OpenShift Container Platform user interface

You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets.

Prerequisites

  • Ensure that OpenShift Container Platform with the OpenShift Data Foundation operator is installed.
  • Access to the Multicloud Object Gateway (MCG).

Procedure

  1. On the OpenShift Web Console, navigate to Storage → Object Storage → Namespace Store tab.
  2. Click Create namespace store to create a namespacestore resource to be used in the namespace bucket.

    1. Enter a namespacestore name.
    2. Choose a provider and region.
    3. Either select an existing secret, or click Switch to credentials to create a secret by entering an access key and a secret key.
    4. Enter a target bucket.
    5. Click Create.
  3. On the Namespace Store tab, verify that the newly created namespacestore is in the Ready state.
  4. Repeat steps 2 and 3 until you have created all the resources that you need.
  5. Navigate to Bucket Class tab and click Create Bucket Class.

    1. Choose Namespace BucketClass type radio button.
    2. Enter a BucketClass name and click Next.
    3. Choose a Namespace Policy Type for your namespace bucket, and then click Next.

      • If your namespace policy type is Single, you need to choose a read resource.
      • If your namespace policy type is Multi, you need to choose read resources and a write resource.
      • If your namespace policy type is Cache, you need to choose a Hub namespace store that defines the read and write target of the namespace bucket.
    4. Select one Read and Write NamespaceStore which defines the read and write targets of the namespace bucket and click Next.
    5. Review your new bucket class details, and then click Create Bucket Class.
  6. Navigate to Bucket Class tab and verify that your newly created resource is in the Ready phase.
  7. Navigate to Object Bucket Claims tab and click Create Object Bucket Claim.

    1. Enter ObjectBucketClaim Name for the namespace bucket.
    2. Select StorageClass as openshift-storage.noobaa.io.
    3. Select the BucketClass that you created earlier for your namespacestore from the list. By default, noobaa-default-bucket-class gets selected.
    4. Click Create. The namespace bucket is created along with Object Bucket Claim for your namespace.
  8. Navigate to Object Bucket Claims tab and verify that the Object Bucket Claim created is in Bound state.
  9. Navigate to Object Buckets tab and verify that your namespace bucket is present in the list and is in the Bound state.
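
You can also confirm the same state from the command line. A minimal sketch, assuming the Object Bucket Claim was created in the openshift-storage project:

    $ oc get obc -n openshift-storage
    $ oc get objectbucket

The Object Bucket Claim and its Object Bucket should both report the Bound phase.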

5.4. Sharing legacy application data with cloud native application using S3 protocol

Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using S3 operations. To share the data, you need to do the following:

  • Export the pre-existing file system datasets, that is, an RWX volume such as Ceph FileSystem (CephFS), or create new file system datasets using the S3 protocol.
  • Access the file system datasets from both the file system and the S3 protocol.
  • Configure S3 accounts and map them to existing or new file system user identifiers (UIDs) and group identifiers (GIDs).

5.4.1. Creating a NamespaceStore to use a file system

Prerequisites

  • OpenShift Container Platform with OpenShift Data Foundation operator installed.
  • Access to the Multicloud Object Gateway (MCG).

Procedure

  1. Log into the OpenShift Web Console.
  2. Click StorageObject Storage.
  3. Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket.
  4. Click Create namespacestore.
  5. Enter a name for the NamespaceStore.
  6. Choose Filesystem as the provider.
  7. Choose the Persistent volume claim.
  8. Enter a folder name.

    If a folder with that name already exists, it is used to create the NamespaceStore; otherwise, a folder with that name is created.

  9. Click Create.
  10. Verify the NamespaceStore is in the Ready state.
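
Alternatively, the same filesystem-backed NamespaceStore can be created with the MCG command-line interface, as used later in this chapter. A minimal sketch, assuming an RWX PVC in the openshift-storage namespace:

    $ noobaa namespacestore create nsfs <nsfs_namespacestore> --pvc-name='<cephfs_pvc_name>' --fs-backend='CEPH_FS'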

5.4.2. Creating accounts with NamespaceStore filesystem configuration

You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML.

Note

You cannot remove a NamespaceStore filesystem configuration from an account.

Prerequisites

  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS.

Procedure

  • Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface.

    $ noobaa account create <noobaa-account-name> [flags]

    For example:

    $ noobaa account create testaccount --full_permission --nsfs_account_config --gid 10001 --uid 10001 --default_resource fs_namespacestore

    allow_bucket_create

    Indicates whether the account is allowed to create new buckets. Supported values are true or false. Default value is true.

    allowed_buckets

    A comma-separated list of bucket names to which the user is allowed to have access and management rights.

    default_resource

    The NamespaceStore resource on which the new buckets will be created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC).

    full_permission

    Indicates whether the account should be allowed full permission or not. Supported values are true or false. Default value is false.

    new_buckets_path

    The filesystem path where directories corresponding to new buckets will be created. The path is inside the filesystem of NamespaceStore filesystem PVCs where new directories are created to act as the filesystem mapping of newly created object bucket classes.

    nsfs_account_config

    A mandatory field that indicates if the account is used for NamespaceStore filesystem.

    nsfs_only

    Indicates whether the account is used only for the NamespaceStore filesystem. Supported values are true or false. Default value is false. If set to true, the account cannot access other types of buckets.

    uid

    The user ID of the filesystem to which the MCG account will be mapped. It is used to access and manage data on the filesystem.

    gid

    The group ID of the filesystem to which the MCG account will be mapped. It is used to access and manage data on the filesystem.

    The MCG system sends a response with the account configuration and its S3 credentials:

    # NooBaaAccount spec:
    allow_bucket_creation: true
    Allowed_buckets:
      full_permission: true
      permission_list: []
    default_resource: noobaa-default-namespace-store
    Nsfs_account_config:
      gid: 10001
      new_buckets_path: /
      nsfs_only: true
      uid: 10001
    INFO[0006] ✅ Exists: Secret "noobaa-account-testaccount"
    Connection info:
      AWS_ACCESS_KEY_ID      : <aws-access-key-id>
      AWS_SECRET_ACCESS_KEY  : <aws-secret-access-key>

    You can list all the custom resource (CR) based accounts by using the following command:

    $ noobaa account list
    NAME          ALLOWED_BUCKETS   DEFAULT_RESOURCE               PHASE   AGE
    testaccount   [*]               noobaa-default-backing-store   Ready   1m17s

    If you are interested in a particular account, you can read its custom resource (CR) directly by the account name:

    $ oc get noobaaaccount/testaccount -o yaml
    spec:
      allow_bucket_creation: true
      allowed_buckets:
        full_permission: true
        permission_list: []
      default_resource: noobaa-default-namespace-store
      nsfs_account_config:
        gid: 10001
        new_buckets_path: /
        nsfs_only: true
        uid: 10001
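
    Once the account exists, its S3 credentials can be used with any S3 client against the MCG endpoint. A minimal sketch with the AWS CLI, assuming the credentials returned above and the MCG S3 route as the endpoint:

    $ AWS_ACCESS_KEY_ID=<aws-access-key-id> \
      AWS_SECRET_ACCESS_KEY=<aws-secret-access-key> \
      aws --endpoint <MCG S3 endpoint> --no-verify-ssl s3 mb s3://<bucket-name>

    Buckets created this way are placed on the account's default_resource, which must be backed by an RWX persistent volume claim as described above.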

5.4.3. Accessing legacy application data from the openshift-storage namespace

When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses.

In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses.

Procedure

  1. Display the application namespace with scc:

    $ oc get ns <application_namespace> -o yaml | grep scc
    <application_namespace>

    Specify the name of the application namespace.

    For example:

    $ oc get ns testnamespace -o yaml | grep scc
    
    openshift.io/sa.scc.mcs: s0:c26,c5
    openshift.io/sa.scc.supplemental-groups: 1000660000/10000
    openshift.io/sa.scc.uid-range: 1000660000/10000
  2. Navigate into the application namespace:

    $ oc project <application_namespace>

    For example:

    $ oc project testnamespace
  3. Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature:

    $ oc get pvc
    
    NAME                                               STATUS VOLUME
    CAPACITY ACCESS MODES STORAGECLASS              AGE
    cephfs-write-workload-generator-no-cache-pv-claim  Bound  pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a
    10Gi     RWX          ocs-storagecluster-cephfs 12s
    $ oc get pod
    
    NAME                                                READY   STATUS              RESTARTS   AGE
    cephfs-write-workload-generator-no-cache-1-cv892    1/1     Running             0          11s
  4. Check the mount point of the Persistent Volume (PV) inside your pod.

    1. Get the volume name of the PV from the pod:

      $ oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'
      <pod_name>

      Specify the name of the pod.

      For example:

      $ oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}'
      
      {"name":"app-persistent-storage","persistentVolumeClaim":{"claimName":"cephfs-write-workload-generator-no-cache-pv-claim"}}

      In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim.

    2. List all the mounts in the pod, and check for the mount point of the volume that you identified in the previous step:

      $ oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'

      For example:

      $ oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}'
      
      [{"mountPath":"/mnt/pv","name":"app-persistent-storage"},{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"kube-api-access-8tnc5","readOnly":true}]
  5. Confirm the mount point of the RWX PV in your pod:

    $ oc exec -it <pod_name> -- df <mount_path>
    <mount_path>

    Specify the path to the mount point that you identified in the previous step.

    For example:

    $ oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv
    
    main
    Filesystem
    1K-blocks Used Available  Use%  Mounted on
    172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c
    10485760  0    10485760   0%    /mnt/pv
  6. Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses:

    $ oc exec -it <pod_name> -- ls -latrZ <mount_path>

    For example:

    $ oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/
    
    total 567
    drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c26,c5      2 May 25 06:35 .
    -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log
    drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c26,c5     30 May 25 06:35 ..
  7. Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace:

    $ oc get pv | grep <pv_name>
    <pv_name>

    Specify the name of the PV.

    For example:

    $ oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a
    
    pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a   10Gi       RWX            Delete           Bound    testnamespace/cephfs-write-workload-generator-no-cache-pv-claim   ocs-storagecluster-cephfs              47s
  8. Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC.

    1. Find the values of the subvolumePath and volumeHandle from the volumeAttributes. You can get these values from the YAML description of the legacy application PV:

      $ oc get pv <pv_name> -o yaml

      For example:

      $ oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml
      
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        annotations:
          pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com
        creationTimestamp: "2022-05-25T06:27:49Z"
        finalizers:
        - kubernetes.io/pv-protection
        name: pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a
        resourceVersion: "177458"
        uid: 683fa87b-5192-4ccf-af2f-68c6bcf8f500
      spec:
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        claimRef:
          apiVersion: v1
          kind: PersistentVolumeClaim
          name: cephfs-write-workload-generator-no-cache-pv-claim
          namespace: testnamespace
          resourceVersion: "177453"
          uid: aa58fb91-c3d2-475b-bbee-68452a613e1a
        csi:
          controllerExpandSecretRef:
            name: rook-csi-cephfs-provisioner
            namespace: openshift-storage
          driver: openshift-storage.cephfs.csi.ceph.com
          nodeStageSecretRef:
            name: rook-csi-cephfs-node
            namespace: openshift-storage
          volumeAttributes:
            clusterID: openshift-storage
            fsName: ocs-storagecluster-cephfilesystem
            storage.kubernetes.io/csiProvisionerIdentity: 1653458225664-8081-openshift-storage.cephfs.csi.ceph.com
            subvolumeName: csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213
            subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c
          volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213
        persistentVolumeReclaimPolicy: Delete
        storageClassName: ocs-storagecluster-cephfs
        volumeMode: Filesystem
      status:
        phase: Bound
    2. Use the subvolumePath and volumeHandle values that you identified in the previous step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV:

      Example YAML file:

      $ cat << EOF >> pv-openshift-storage.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: cephfs-pv-legacy-openshift-storage
      spec:
        storageClassName: ""
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi     1
        csi:
          driver: openshift-storage.cephfs.csi.ceph.com
          nodeStageSecretRef:
            name: rook-csi-cephfs-node
            namespace: openshift-storage
          volumeAttributes:
          # Volume Attributes can be copied from the Source testnamespace PV
            "clusterID": "openshift-storage"
            "fsName": "ocs-storagecluster-cephfilesystem"
            "staticVolume": "true"
          # rootPath is the subvolumePath you copied from the Source testnamespace PV
            "rootPath": /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c
          volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone   2
        persistentVolumeReclaimPolicy: Retain
        volumeMode: Filesystem
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: cephfs-pvc-legacy
        namespace: openshift-storage
      spec:
        storageClassName: ""
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi     3
        volumeMode: Filesystem
        # volumeName should be same as PV name
        volumeName: cephfs-pv-legacy-openshift-storage
      EOF
      1
      The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV.
      2
      The volume handle for the target PV that you create in openshift-storage needs to have a different handle than the original application PV, for example, add -clone at the end of the volume handle.
      3
      The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC.
    3. Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the previous step:

      $ oc create -f <YAML_file>
      <YAML_file>

      Specify the name of the YAML file.

      For example:

      $ oc create -f pv-openshift-storage.yaml
      
      persistentvolume/cephfs-pv-legacy-openshift-storage created
      persistentvolumeclaim/cephfs-pvc-legacy created
    4. Ensure that the PVC is available in the openshift-storage namespace:

      $ oc get pvc -n openshift-storage
      
      NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
      cephfs-pvc-legacy                     Bound    cephfs-pv-legacy-openshift-storage         10Gi       RWX                                          14s
    5. Navigate into the openshift-storage project:

      $ oc project openshift-storage
      
      Now using project "openshift-storage" on server "https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443".
    6. Create the NSFS namespacestore:

      $ noobaa namespacestore create nsfs <nsfs_namespacestore> --pvc-name='<cephfs_pvc_name>' --fs-backend='CEPH_FS'
      <nsfs_namespacestore>
      Specify the name of the NSFS namespacestore.
      <cephfs_pvc_name>

      Specify the name of the CephFS PVC in the openshift-storage namespace.

      For example:

      $ noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'
    7. Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example, /nsfs/legacy-namespace mountpoint:

      $ oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/<nsfs_namespacestore>
      <noobaa_endpoint_pod_name>

      Specify the name of the noobaa-endpoint pod.

      For example:

      $ oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace
      
      Filesystem                                                                                                                                                Size  Used Avail Use% Mounted on
      172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c   10G     0   10G   0% /nsfs/legacy-namespace
    8. Create an MCG user account:

      $ noobaa account create <user_account> --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'
      <user_account>
      Specify the name of the MCG user account.
      <gid_number>
      Specify the GID number.
      <uid_number>

      Specify the UID number.

      Important

      Use the same UID and GID as those of the legacy application. You can find them in the previous output.

      For example:

      $ noobaa account create leguser --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'
    9. Create an MCG bucket.

      1. Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod:

        $ oc exec -it <pod_name> -- mkdir <mount_path>/nsfs

        For example:

        $ oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs
      2. Create the MCG bucket using the nsfs/ path:

        $ noobaa api bucket_api create_bucket '{
          "name": "<bucket_name>",
          "namespace":{
            "write_resource": { "resource": "<nsfs_namespacestore>", "path": "nsfs/" },
            "read_resources": [ { "resource": "<nsfs_namespacestore>", "path": "nsfs/" }]
          }
        }'

        For example:

        $ noobaa api bucket_api create_bucket '{
          "name": "legacy-bucket",
          "namespace":{
            "write_resource": { "resource": "legacy-namespace", "path": "nsfs/" },
            "read_resources": [ { "resource": "legacy-namespace", "path": "nsfs/" }]
          }
        }'
    10. Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces:

      $ oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/<nsfs_namespacestore>

      For example:

      $ oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace
      
      total 567
      drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c0,c26      2 May 25 06:35 .
      -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log
      drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c0,c26     30 May 25 06:35 ..
      $ oc exec -it <pod_name> -- ls -latrZ <mount_path>

      For example:

      $ oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/
      
      total 567
      drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c26,c5      2 May 25 06:35 .
      -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log
      drwxrwxrwx. 3 root       root system_u:object_r:container_file_t:s0:c26,c5     30 May 25 06:35 ..

      In these examples, you can see that the SELinux labels are not the same, which results in permission denied or access issues.

  9. Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files.

    You can do this in one of the following ways:

    • Change the default SELinux label on the legacy application project to match the one in the openshift-storage project. See Section 5.4.3.1.
    • Modify the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC. See Section 5.4.3.2.

  10. Delete the NSFS namespacestore:

    1. Delete the MCG bucket:

      $ noobaa bucket delete <bucket_name>

      For example:

      $ noobaa bucket delete legacy-bucket
    2. Delete the MCG user account:

      $ noobaa account delete <user_account>

      For example:

      $ noobaa account delete leguser
    3. Delete the NSFS namespacestore:

      $ noobaa namespacestore delete <nsfs_namespacestore>

      For example:

      $ noobaa namespacestore delete legacy-namespace
  11. Delete the PV and PVC:

    Important

    Before you delete the PV and PVC, ensure that the PV has a retain policy configured.

    $ oc delete pv <cephfs_pv_name>
    $ oc delete pvc <cephfs_pvc_name>
    <cephfs_pv_name>
    Specify the CephFS PV name of the legacy application.
    <cephfs_pvc_name>

    Specify the CephFS PVC name of the legacy application.

    For example:

    $ oc delete pv cephfs-pv-legacy-openshift-storage
    $ oc delete pvc cephfs-pvc-legacy
5.4.3.1. Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project
  1. Display the current openshift-storage namespace with sa.scc.mcs:

    $ oc get ns openshift-storage -o yaml | grep sa.scc.mcs
    
    openshift.io/sa.scc.mcs: s0:c26,c0
  2. Edit the legacy application namespace, and modify the sa.scc.mcs annotation with the value from the sa.scc.mcs of the openshift-storage namespace (see the annotation sketch after this procedure):

    $ oc edit ns <application_namespace>

    For example:

    $ oc edit ns testnamespace
    $ oc get ns <application_namespace> -o yaml | grep sa.scc.mcs

    For example:

    $ oc get ns testnamespace -o yaml | grep sa.scc.mcs
    
    openshift.io/sa.scc.mcs: s0:c26,c0
  3. Restart the legacy application pod. A relabel of all the files takes place, and the SELinux labels now match those of the openshift-storage deployment.
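
In the namespace YAML that oc edit opens, the change amounts to setting the annotation to the value read from the openshift-storage namespace. A minimal sketch, using the example value from this procedure:

    metadata:
      annotations:
        # value copied from the openshift-storage namespace
        openshift.io/sa.scc.mcs: s0:c26,c0
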
5.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC
  1. Create a new scc with the MustRunAs and seLinuxOptions options, with the Multi Category Security (MCS) that the openshift-storage project uses.

    Example YAML file:

    $ cat << EOF >> scc.yaml
    allowHostDirVolumePlugin: false
    allowHostIPC: false
    allowHostNetwork: false
    allowHostPID: false
    allowHostPorts: false
    allowPrivilegeEscalation: true
    allowPrivilegedContainer: false
    allowedCapabilities: null
    apiVersion: security.openshift.io/v1
    defaultAddCapabilities: null
    fsGroup:
      type: MustRunAs
    groups:
    - system:authenticated
    kind: SecurityContextConstraints
    metadata:
      annotations:
      name: restricted-pvselinux
    priority: null
    readOnlyRootFilesystem: false
    requiredDropCapabilities:
    - KILL
    - MKNOD
    - SETUID
    - SETGID
    runAsUser:
      type: MustRunAsRange
    seLinuxContext:
      seLinuxOptions:
        level: s0:c26,c0
      type: MustRunAs
    supplementalGroups:
      type: RunAsAny
    users: []
    volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - persistentVolumeClaim
    - projected
    - secret
    EOF
    $ oc create -f scc.yaml
  2. Create a service account for the deployment and add it to the newly created scc.

    1. Create a service account:

      $ oc create serviceaccount <service_account_name>
      <service_account_name>

      Specify the name of the service account.

      For example:

      $ oc create serviceaccount testnamespacesa
    2. Add the service account to the newly created scc:

      $ oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>

      For example:

      $ oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa
  3. Patch the legacy application deployment so that it uses the newly created service account. This allows you to specify the SELinux label in the deployment:

    $ oc patch dc/<pod_name> --patch '{"spec":{"template":{"spec":{"serviceAccountName": "<service_account_name>"}}}}'

    For example:

    $ oc patch dc/cephfs-write-workload-generator-no-cache --patch '{"spec":{"template":{"spec":{"serviceAccountName": "testnamespacesa"}}}}'
  4. Edit the deployment to specify the security context to be used as the SELinux label in the deployment configuration:

    $ oc edit dc <pod_name> -n <application_namespace>

    Add the following lines:

    spec:
     template:
        metadata:
          securityContext:
            seLinuxOptions:
              level: <security_context_value>
    <security_context_value>

    Use the MCS value of the openshift-storage namespace that you displayed earlier with the sa.scc.mcs annotation (for example, s0:c26,c0).

    For example:

    $ oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace
    spec:
     template:
        metadata:
          securityContext:
            seLinuxOptions:
              level: s0:c26,c0
  5. Ensure that the security context to be used as the SELinux label is specified correctly in the deployment configuration:

    $ oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext

    For example"

    $ oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext
    
          securityContext:
            seLinuxOptions:
              level: s0:c26,c0

    The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace.

Chapter 6. Securing Multicloud Object Gateway

6.1. Changing the default account credentials to ensure better security in the Multicloud Object Gateway

Change and rotate your Multicloud Object Gateway (MCG) account credentials using the command-line interface to prevent issues with applications, and to ensure better account security.

6.1.1. Resetting the noobaa account password

Prerequisites

  • A running OpenShift Data Foundation cluster.
  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS.

Procedure

  • To reset the noobaa account password, run the following command:

    $ noobaa account passwd <noobaa_account_name> [options]
    $ noobaa account passwd
    FATA[0000] ❌ Missing expected arguments: <noobaa_account_name>
    
    Options:
        --new-password='': New Password for authentication - the best practice is to omit this flag, in that
        case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in
        the shell history
        --old-password='': Old Password for authentication - the best practice is to omit this flag, in that
        case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in
        the shell history
        --retype-new-password='': Retype new Password for authentication - the best practice is to omit this flag, in that case the CLI will prompt to prompt and read it securely from the terminal to avoid
        leaking secrets in the shell history
    
    
    Usage:
        noobaa account passwd <noobaa-account-name> [flags] [options]
    
    Use "noobaa options" for a list of global command-line options (applies to all commands).

    Example:

    $ noobaa account passwd admin@noobaa.io

    Example output:

    Enter old-password: [got 24 characters]
    Enter new-password: [got 7 characters]
    Enter retype-new-password: [got 7 characters]
    INFO[0017] ✅ Exists: Secret "noobaa-admin"
    INFO[0017] ✅ Exists: NooBaa "noobaa"
    INFO[0017] ✅ Exists: Service "noobaa-mgmt"
    INFO[0017] ✅ Exists: Secret "noobaa-operator"
    INFO[0017] ✅ Exists: Secret "noobaa-admin"
    INFO[0017] ✈️  RPC: account.reset_password() Request: {Email:admin@noobaa.io VerificationPassword:* Password:*}
    WARN[0017] RPC: GetConnection creating connection to wss://localhost:58460/rpc/ 0xc000402ae0
    INFO[0017] RPC: Connecting websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0
    Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>}
    INFO[0017] RPC: Connected websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0
    Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>}
    INFO[0020] ✅ RPC: account.reset_password() Response OK: took 2907.1ms
    INFO[0020] ✅ Updated:  "noobaa-admin"
    INFO[0020] ✅ Successfully reset the password for the account "admin@noobaa.io"
    Important

    To access the admin account credentials, run the noobaa status command from the terminal:
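
    For example:

    $ noobaa status -n openshift-storage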

    --------------------
    - Mgmt Credentials -
    --------------------
    
    email    : admin@noobaa.io
    password : ***

6.1.2. Setting Multicloud Object Gateway account credentials using CLI command

You can update and verify the Multicloud Object Gateway (MCG) account credentials manually by using the MCG CLI command.

Prerequisites

Ensure that the following prerequisites are met:

  • A running OpenShift Data Foundation cluster.
  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.
Note

Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS.

Procedure

  • To update the MCG account credentials, run the following command:

    $ noobaa account credentials <noobaa-account-name> [options]

    Example:

    $ noobaa account credentials admin@noobaa.io

    Example output:

    $ noobaa account credentials admin@noobaa.io
    Enter access-key: [got 20 characters]
    Enter secret-key: [got 40 characters]
    INFO[0026] ❌ Not Found: NooBaaAccount "admin@noobaa.io"
    INFO[0026] ✅ Exists: NooBaa "noobaa"
    INFO[0026] ✅ Exists: Service "noobaa-mgmt"
    INFO[0026] ✅ Exists: Secret "noobaa-operator"
    INFO[0026] ✅ Exists: Secret "noobaa-admin"
    INFO[0026] ✈️  RPC: account.update_account_keys() Request: {Email:admin@noobaa.io AccessKeys:{AccessKey:* SecretKey:}}
    WARN[0026] RPC: GetConnection creating connection to wss://localhost:33495/rpc/ 0xc000cd9980
    INFO[0026] RPC: Connecting websocket (0xc000cd9980) &{RPC:0xc0001655e0 Address:wss://localhost:33495/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>}
    INFO[0026] RPC: Connected websocket (0xc000cd9980) &{RPC:0xc0001655e0 Address:wss://localhost:33495/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>}
    INFO[0026] ✅ RPC: account.update_account_keys() Response OK: took 42.7ms
    INFO[0026] ✈️  RPC: account.read_account() Request: {Email:admin@noobaa.io}
    INFO[0026] ✅ RPC: account.read_account() Response OK: took 2.0ms
    INFO[0026] ✅ Updated: "noobaa-admin"
    INFO[0026] ✅ Successfully updated s3 credentials for the account "admin@noobaa.io"
    INFO[0026] ✅ Exists: Secret "noobaa-admin"
    Connection info:
      AWS_ACCESS_KEY_ID      : *
      AWS_SECRET_ACCESS_KEY  : *

    Credential complexity requirements:

    Access key
    The account access key must be 20 characters in length and it must contain only alphanumeric characters.
    Secret key

    The secret key must be 40 characters in length and it must contain alphanumeric characters and "+", "/".

    For example:

    $ noobaa account credentials my-account --access-key=ABCDEF1234567890ABCD --secret-key=ABCDE12345+FGHIJ67890/KLMNOPQRSTUV123456
  • To verify the credentials, run the following command:

    noobaa account status <noobaa-account-name> --show-secrets
Note

You cannot have a duplicate access-key. Each user must have a unique access-key and secret-key.

6.1.3. Regenerating the S3 credentials for the accounts

Prerequisites

  • A running OpenShift Data Foundation cluster.
  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS.

Procedure

  1. Get the account name.

    For listing the accounts, run the following command:

    $ noobaa account list

    Example output:

    NAME           ALLOWED_BUCKETS   DEFAULT_RESOURCE               PHASE   AGE
    account-test   [*]               noobaa-default-backing-store   Ready   14m17s
    test2          [first.bucket]    noobaa-default-backing-store   Ready   3m12s

    Alternatively, run the oc get noobaaaccount command from the terminal:

    $ oc get noobaaaccount

    Example output:

    NAME           PHASE   AGE
    account-test   Ready   15m
    test2          Ready   3m59s
  2. To regenerate the noobaa account S3 credentials, run the following command:

    $ noobaa account regenerate <noobaa_account_name> [options]
    $ noobaa account regenerate
    FATA[0000] ❌ Missing expected arguments: <noobaa-account-name>
    
    Usage:
        noobaa account regenerate <noobaa-account-name> [flags] [options]
    
    Use "noobaa options" for a list of global command-line options (applies to all commands).
  3. When you run the noobaa account regenerate command, it displays a warning that says, "This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials.", and asks for confirmation:

    Example:

    $ noobaa account regenerate account-test

    Example output:

    INFO[0000] You are about to regenerate an account's security credentials.
    INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials.
    INFO[0000] are you sure? y/n
  4. After you approve, it regenerates the credentials and prints them:

    INFO[0015] ✅ Exists: Secret "noobaa-account-account-test"
    Connection info:
    AWS_ACCESS_KEY_ID      : ***
    AWS_SECRET_ACCESS_KEY  : ***

6.1.4. Regenerating the S3 credentials for the OBC

Prerequisites

  • A running OpenShift Data Foundation cluster.
  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS.

Procedure

  1. To get the OBC name, run the following command:

    $ noobaa obc list

    Example output:

    NAMESPACE   NAME       BUCKET-NAME                                     STORAGE-CLASS       BUCKET-CLASS                  PHASE
    default     obc-test   obc-test-35800e50-8978-461f-b7e0-7793080e26ba   default.noobaa.io   noobaa-default-bucket-class   Bound

    Alternatively, run the oc get obc command from the terminal:

    $ oc get obc

    Example output:

    NAME       STORAGE-CLASS       PHASE   AGE
    obc-test   default.noobaa.io   Bound   38s
  2. To regenerate the noobaa OBC S3 credentials, run the following command:

    $ noobaa obc regenerate <bucket_claim_name> [options]
    $ noobaa obc regenerate
    FATA[0000] ❌ Missing expected arguments: <bucket-claim-name>
    
    Usage:
       noobaa obc regenerate <bucket-claim-name> [flags] [options]
    
    Use "noobaa options" for a list of global command-line options (applies to all commands).
  3. When you run the noobaa obc regenerate command, it displays a warning that says, "This will invalidate all connections between the S3 clients and NooBaa which are connected using the current credentials.", and asks for confirmation:

    Example:

    $ noobaa obc regenerate obc-test

    Example output:

    INFO[0000] You are about to regenerate an OBC's security credentials.
    INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials.
    INFO[0000] are you sure? y/n
  4. After you approve, it regenerates the credentials and prints them:

    INFO[0022] ✅ RPC: bucket.read_bucket() Response OK: took 95.4ms
    
    ObjectBucketClaim info:
      Phase                  : Bound
      ObjectBucketClaim      : kubectl get -n default objectbucketclaim obc-test
      ConfigMap              : kubectl get -n default configmap obc-test
      Secret                 : kubectl get -n default secret obc-test
      ObjectBucket           : kubectl get objectbucket obc-default-obc-test
      StorageClass           : kubectl get storageclass default.noobaa.io
      BucketClass            : kubectl get -n default bucketclass noobaa-default-bucket-class
    
    Connection info:
     BUCKET_HOST            : s3.default.svc
     BUCKET_NAME            : obc-test-35800e50-8978-461f-b7e0-7793080e26ba
        BUCKET_PORT            : 443
        AWS_ACCESS_KEY_ID      : ***
        AWS_SECRET_ACCESS_KEY  : ***
    
    Shell commands:
      AWS S3 Alias           : alias s3='AWS_ACCESS_KEY_ID=***
    AWS_SECRET_ACCESS_KEY=*** aws s3 --no-verify-ssl --endpoint-url ***'
    
    Bucket status:
      Name                   : obc-test-35800e50-8978-461f-b7e0-7793080e26ba
      Type                   : REGULAR
      Mode                   : OPTIMAL
      ResiliencyStatus       : OPTIMAL
      QuotaStatus            : QUOTA_NOT_SET
      Num Objects            : 0
      Data Size              : 0.000 B
      Data Size Reduced      : 0.000 B
      Data Space Avail       : 13.261 GB
      Num Objects Avail      : 9007199254740991

6.2. Enabling secured mode deployment for Multicloud Object Gateway

You can specify a range of IP addresses that should be allowed to reach the Multicloud Object Gateway (MCG) load balancer services to enable secure mode deployment. This helps to control the IP addresses that can access the MCG services.

Note

You can disable the MCG load balancer usage by setting the disableLoadBalancerService variable in the storagecluster custom resource definition (CRD) while deploying OpenShift Data Foundation using the command line interface. This helps to restrict MCG from creating any public resources for private clusters and to disable the MCG service EXTERNAL-IP. For more information, see the Red Hat Knowledgebase article Install Red Hat OpenShift Data Foundation 4.X in internal mode using command line interface. For information about disabling MCG load balancer service after deploying OpenShift Data Foundation, see Disabling Multicloud Object Gateway external service after deploying OpenShift Data Foundation.

Prerequisites

  • A running OpenShift Data Foundation cluster.
  • In case of a bare metal deployment, ensure that the load balancer controller supports setting the loadBalancerSourceRanges attribute in the Kubernetes services.

Procedure

  • Edit the NooBaa custom resource (CR) to specify the range of IP addresses that can access the MCG services after deploying OpenShift Data Foundation.

    $ oc edit noobaa -n openshift-storage noobaa
    noobaa
    The NooBaa CR type that controls the NooBaa system deployment.
    noobaa

    The name of the NooBaa CR.

    For example:

    ...
    spec:
      ...
      loadBalancerSourceSubnets:
        s3: ["10.0.0.0/16", "192.168.10.0/32"]
        sts:
          - "10.0.0.0/16"
          - "192.168.10.0/32"
    ...
    loadBalancerSourceSubnets

    A new field that can be added under spec in the NooBaa CR to specify the IP addresses that should have access to the NooBaa services.

    In this example, all the IP addresses in the subnets 10.0.0.0/16 and 192.168.10.0/32 are able to access the MCG S3 and security token service (STS) endpoints, while other IP addresses are not allowed access.

Verification steps

  • To verify that the specified IP addresses are set, run the following command and check whether the output matches the IP addresses provided to MCG:

    $ oc get svc -n openshift-storage <s3 | sts> -o=go-template='{{ .spec.loadBalancerSourceRanges }}'

Chapter 7. Mirroring data for hybrid and Multicloud buckets

You can use the simplified process of the Multicloud Object Gateway (MCG) to span data across cloud providers and clusters. Before you create a bucket class that reflects the data management policy and mirroring, you must add a backing storage that can be used by the MCG. For information, see Adding storage resources for hybrid or Multicloud.

You can set up data mirroring by using the OpenShift UI, YAML, or the MCG command-line interface.

See the following sections:

7.1. Creating bucket classes to mirror data using the MCG command-line-interface

Prerequisites

  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

Procedure

  1. From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy:

    $ noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror
  2. Set the newly created bucket class on a new bucket claim to generate a new bucket that is mirrored between the two locations:

    $ noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws

7.2. Creating bucket classes to mirror data using a YAML

  1. Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS:

    apiVersion: noobaa.io/v1alpha1
    kind: BucketClass
    metadata:
      labels:
        app: noobaa
      name: <bucket-class-name>
      namespace: openshift-storage
    spec:
      placementPolicy:
        tiers:
        - backingStores:
          - <backing-store-1>
          - <backing-store-2>
          placement: Mirror
  2. Add the following lines to your standard Object Bucket Claim (OBC):

    additionalConfig:
      bucketclass: mirror-to-aws

    For more information about OBCs, see Chapter 11, Object Bucket Claim.
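
For reference, a complete OBC that consumes the mirroring bucket class could look like the following sketch, which reuses the OBC fields shown earlier in this guide; the names are illustrative:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: mirrored-bucket-claim
      namespace: openshift-storage
    spec:
      generateBucketName: mirrored-bucket
      storageClassName: openshift-storage.noobaa.io
      additionalConfig:
        bucketclass: mirror-to-aws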

Chapter 8. Bucket policies in the Multicloud Object Gateway

OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them.

8.1. Introduction to bucket policies

Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview.

8.2. Using bucket policies in Multicloud Object Gateway

Prerequisites

Procedure

To use bucket policies in the MCG:

  1. Create the bucket policy in JSON format.

    For example:

    {
        "Version": "NewVersion",
        "Statement": [
            {
                "Sid": "Example",
                "Effect": "Allow",
                "Principal": [
                        "john.doe@example.com"
                ],
                "Action": [
                    "s3:GetObject"
                ],
                "Resource": [
                    "arn:aws:s3:::john_bucket"
                ]
            }
        ]
    }

    Replace john.doe@example.com with a valid Multicloud Object Gateway user account.

  2. Using an AWS S3 client, run the put-bucket-policy command to apply the bucket policy to your S3 bucket:

    # aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file://BucketPolicy
    1. Replace ENDPOINT with the S3 endpoint.
    2. Replace MyBucket with the bucket to set the policy on.
    3. Replace BucketPolicy with the bucket policy JSON file.
    4. Add --no-verify-ssl if you are using the default self-signed certificates.

      For example:

      # aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file://BucketPolicy

      For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy.

      Note

      The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io.

      Note

      Bucket policy conditions are not supported.
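
      To confirm that the policy is in effect, you can read it back with the matching get-bucket-policy call, using the same placeholders:

      # aws --endpoint ENDPOINT --no-verify-ssl s3api get-bucket-policy --bucket MyBucket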

Additional resources

  • There are many available elements for bucket policies with regard to access permissions.
  • For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview.
  • For more examples of bucket policies, see AWS Bucket Policy Examples.
  • OpenShift Data Foundation version 4.17 introduces the bucket policy elements NotPrincipal, NotAction, and NotResource. For more information on these elements, see IAM JSON policy elements reference.

8.3. Creating a user in the Multicloud Object Gateway

Prerequisites

  • A running OpenShift Data Foundation Platform.
  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS.

Procedure

Execute the following command to create an MCG user account:

noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--allowed_buckets=[]] [--default_resource=''] [--full_permission=false]
<noobaa-account-name>
Specify the name of the new MCG user account.
--allow_bucket_create
Allows the user to create new buckets.
--allowed_buckets
Sets the user’s allowed bucket list (use commas or multiple flags).
--default_resource
Sets the default resource. The new buckets are created on this default resource (including the future ones).
--full_permission
Allows this account to access all existing and future buckets.
Important

You need to provide permission to access at least one bucket or full permission to access all the buckets.
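
For example, the following sketch creates an account that could be referenced as a principal in the bucket policy shown earlier; the account name and default resource are illustrative:

noobaa account create john.doe@example.com --full_permission --default_resource='noobaa-default-backing-store'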

Chapter 9. Bucket notification in Multicloud Object Gateway

Bucket notifications allow you to send a notification to an external server whenever there is an event in the bucket. Multicloud Object Gateway (MCG) supports several bucket event notification types. You can use bucket notifications to drive activities such as workflow creation, data ingestion, and so on. MCG supports the following event types that can be specified in the notification configuration:

  • s3:TestEvent
  • s3:ObjectCreated:*
  • s3:ObjectCreated:Put
  • s3:ObjectCreated:Post
  • s3:ObjectCreated:Copy
  • s3:ObjectCreated:CompleteMultipartUpload
  • s3:ObjectRemoved:*
  • s3:ObjectRemoved:Delete
  • s3:ObjectRemoved:DeleteMarkerCreated
  • s3:LifecycleExpiration:*
  • s3:LifecycleExpiration:Delete
  • s3:LifecycleExpiration:DeleteMarkerCreated
  • s3:ObjectRestore:*
  • s3:ObjectRestore:Post
  • s3:ObjectRestore:Completed
  • s3:ObjectRestore:Delete
  • s3:ObjectTagging:*
  • s3:ObjectTagging:Put
  • s3:ObjectTagging:Delete

9.1. Configuring bucket notification in Multicloud Object Gateway

Prerequisites

  • Ensure that you have one of the following before configuring bucket notifications:

    • A Kafka cluster is deployed.

      For example, you can deploy Kafka using AMQ/strimzi by referring to Getting Started with AMQ Streams on OpenShift.

    • An HTTP(S) server is set up and reachable.

      For example, you can set up an HTTP server to log incoming HTTP requests so that you can observe them using the oc logs command as follows:

      $ cat http_logging_server.yaml
      apiVersion: v1
      kind: List
      metadata: {}
      items:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: http-logger
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: http-logger
          template:
            metadata:
              labels:
                app: http-logger
            spec:
              containers:
              - name: http-logger
                image: registry.redhat.io/ubi9/python-39:latest
                command:
                  - /bin/sh
                  - -c
                  - |
                    set -e  # Fail on any error
                    mkdir -p /tmp/app
                    pip install flask
                    cat <<EOF > /tmp/app/server.py
                    from flask import Flask, request
                    app = Flask(__name__)
                    @app.route("/", methods=["POST", "GET"])
                    def log_request():
                        body = request.get_data(as_text=True)
                        print(body)  # Simple one-line logging per request
                        return "", 200
                    if __name__ == "__main__":
                        app.run(host="0.0.0.0", port=8676, debug=True)
                    EOF
                    exec python /tmp/app/server.py
                ports:
                - containerPort: 8676
                  protocol: TCP
                securityContext:
                  runAsNonRoot: true
                  allowPrivilegeEscalation: false
      - apiVersion: v1
        kind: Service
        metadata:
          name: http-logger
          labels:
            app: http-logger
        spec:
          selector:
            app: http-logger
          ports:
            - port: 8676
              targetPort: 8676
              protocol: TCP
              name: http
      
      $ oc create -f http_logging_server.yaml -n <http-server-namespace>
  • Make sure that the server is accessible from the MCG core pod, which sends the notifications.
  • Fetch the MCG credentials:

    NOOBAA_ACCESS_KEY=$(oc extract secret/noobaa-admin -n openshift-storage --keys=AWS_ACCESS_KEY_ID --to=- 2>/dev/null); \
    NOOBAA_SECRET_KEY=$(oc extract secret/noobaa-admin -n openshift-storage --keys=AWS_SECRET_ACCESS_KEY --to=- 2>/dev/null); \
    S3_ENDPOINT=https://$(oc get route s3 -n openshift-storage -o json | jq -r ".spec.host")
    alias aws_alias='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint $S3_ENDPOINT --no-verify-ssl'

Procedure

  1. Describe the connection using a JSON file, for example, connect.json, on your local machine:

    For Kafka:

    {
      "name": "kafka_notif_conn_file", <-- any string
      "notification_protocol": "kafka",
      "metadata.broker.list": "<kafka-service-name>.<project>.svc.cluster.local:<kafka-service-port>",
      "topic": "my-topic" <-- refers to an existing KafkaTopic resource
    }

    The structure under metadata.broker.list must be <service-name>.<namespace>.svc.cluster.local:9092, and topic must refer to the name of an existing KafkaTopic resource in the namespace.

    For HTTP(s):

    {
      "name": "http_notif_connection_config",
      "notification_protocol": "http", <-- or "https"
      "agent_request_object": {
        "host": "<http-service>.<http-server-namespace>.svc.cluster.local",
        "port": <http-server-port>
       }
    }

    Additional options:

    request_options_object
    The value is a JSON object that is passed to the Node.js http(s) request (optional).

    Any field supported by the Node.js http(s) request options can be used, for example:

    'path'
    Used to specify the URL path.
    'auth'
    Used for HTTP basic authentication. The syntax for the value of 'auth' is <name>:<password>.
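
    For example, a connection file that uses these optional fields might look like the following; the placement of request_options_object and the path and credentials shown are illustrative:

    {
      "name": "http_notif_connection_config",
      "notification_protocol": "https",
      "agent_request_object": {
        "host": "<http-service>.<http-server-namespace>.svc.cluster.local",
        "port": <http-server-port>
      },
      "request_options_object": {
        "path": "/notifications",
        "auth": "<name>:<password>"
      }
    }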
  2. Create a secret from the file in the openshift-storage namespace:

    $ oc create secret generic <connection-secret> --from-file=connect.json -n openshift-storage
  3. Update the NooBaa CR in the openshift-storage namespace with the connection secret and an optional CephFS RWX PVC. If a PVC is not provided, MCG creates one automatically as needed:

    $ oc get pvc bn-pvc -n openshift-storage
    NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                VOLUMEATTRIBUTESCLASS   AGE
    bn-pvc   Bound    pvc-24683f8d-48e4-4c6b-b108-d507ca2f4fd1   25Gi       RWX            ocs-storagecluster-cephfs   <unset>                 25h
    $ oc patch noobaa noobaa --type='merge' -n openshift-storage -p '{
      "spec": {
        "bucketNotifications": {
          "connections": [
            {
              "name": <connection-secret>,
              "namespace": "openshift-storage"
            }
          ],
          "enabled": true,
          "pvc": "bn-pvc" <-- optional
        }
      }
    }'
    Caution

    Names of the connection secrets must be unique in the list, even if the namespaces are different.

    Wait for the noobaa-core and noobaa-endpoint pods to restart before proceeding with the next step.

  4. Use the S3 PutBucketNotification operation on an MCG bucket, using the noobaa-admin credentials and the S3 endpoint through the S3 alias.

    $ aws_alias s3api put-bucket-notification --bucket first.bucket --notification-configuration '{
      "TopicConfiguration": {
        "Id": "<notif_event_kafka>", <-- Unique string
        "Events": ["s3:ObjectCreated:*"], <-- a filter for events
        "Topic": "<connection-secret/connect.json>"
      }
    }'

Verification steps

  1. Verify that the bucket notification configuration is set on the bucket:

    $ aws_alias s3api get-bucket-notification-configuration --bucket first.bucket
    {
        "TopicConfigurations": [
            {
                "Id": "notif_event_kafka",
                "TopicArn": "kafka-connection-secret/connect.json",
                "Events": [
                    "s3:ObjectCreated:*"
                ]
            }
        ]
    }
  2. Add some objects to the bucket:

    echo 'a' | aws_alias s3 cp - s3://first.bucket/a

    Wait for a while and query the topic messages to verify that the expected notification has been sent and received.

    For example:

    For Kafka

    $ oc -n your-kafka-project rsh my-cluster-kafka-0 bin/kafka-console-consumer.sh \
      --bootstrap-server my-cluster-kafka-bootstrap.myproject.svc.cluster.local:9092 \
      --topic my-topic \
      --from-beginning --timeout-ms 10000 | grep '^{.*}' | jq -c '.' | jq

    For HTTP(s)

    $ oc logs deployment/http-logger -n <http-server-namespace> | grep '^{.*}' | jq

    Output

    {
      "Records": [
        {
          "eventVersion": "2.3",
          "eventSource": "noobaa:s3",
          "eventTime": "2024-11-27T12:44:21.987Z",
          "s3": {
            "s3SchemaVersion": "1.0",
            "object": {
              "sequencer": 10,
              "key": "a",
              "eTag": "60b725f10c9c85c70d97880dfe8191b3"
            },
            "bucket": {
              "name": "second.bucket",
              "ownerIdentity": {
                "principalId": "admin@noobaa.io"
              },
              "arn": "arn:aws:s3:::first.bucket"
            }
          },
          "eventName": "ObjectCreated:Put",
          "userIdentity": {
            "principalId": "noobaa"
          },
          "requestParameters": {
            "sourceIPAddress": "100.64.0.3"
          },
          "responseElements": {
            "x-amz-request-id": "m3zvo0cm-5239xd-bhj",
            "x-amz-id-2": "m3zvo0cm-5239xd-bhj"
          }
        }
      ]
    }

Chapter 10. Multicloud Object Gateway bucket replication

Data replication from one Multicloud Object Gateway (MCG) bucket to another MCG bucket provides higher resiliency and better collaboration options. These buckets can be either data buckets or namespace buckets backed by any supported storage solution (AWS S3, Azure, and so on).

A replication policy is composed of a list of replication rules. Each rule defines the destination bucket, and can specify a filter based on an object key prefix. Configuring a complementing replication policy on the second bucket results in bidirectional replication.

Prerequisites

  • A running OpenShift Data Foundation Platform.
  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

To replicate a bucket, see Replicating a bucket to another bucket.

To set a bucket class replication policy, see Setting a bucket class replication policy.

10.1. Replicating a bucket to another bucket

You can set the bucket replication policy in two ways:

10.1.1. Replicating a bucket to another bucket using the MCG command-line interface

You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of an object bucket claim (OBC). You must define the replication policy parameter in a JSON file.

Procedure

From the MCG command-line interface, run the following command to create an OBC with a specific replication policy:

noobaa obc create <bucket-claim-name> -n openshift-storage --replication-policy /path/to/json-file.json
<bucket-claim-name>
Specify the name of the bucket claim.
/path/to/json-file.json
Is the path to a JSON file which defines the replication policy.

Example JSON file:

[{ "rule_id": "rule-1", "destination_bucket": "first.bucket", "filter": {"prefix": "repl"}}]
"prefix"
Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""}.

For example:

noobaa obc create my-bucket-claim -n openshift-storage --replication-policy /path/to/json-file.json

10.1.2. Replicating a bucket to another bucket using a YAML

You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the object bucket claim (OBC), or you can edit the YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure.

Procedure

  • Apply the following YAML:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: <desired-bucket-claim>
      namespace: <desired-namespace>
    spec:
      generateBucketName: <desired-bucket-name>
      storageClassName: openshift-storage.noobaa.io
      additionalConfig:
        replicationPolicy: {"rules": [{ "rule_id": "", "destination_bucket": "", "filter": {"prefix": ""}}]}
    <desired-bucket-claim>
    Specify the name of the bucket claim.
    <desired-namespace>
    Specify the namespace.
    <desired-bucket-name>
    Specify the prefix of the bucket name.
    "rule_id"
    Specify the ID number of the rule, for example, {"rule_id": "rule-1"}.
    "destination_bucket"
    Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"}.
    "prefix"
    Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""}.

10.2. Setting a bucket class replication policy

It is possible to set up a replication policy that automatically applies to all the buckets created under a certain bucket class. You can do this in two ways:

10.2.1. Setting a bucket class replication policy using the MCG command-line interface

You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the bucket class. You must define the replication-policy parameter in a JSON file. You can set a bucket class replication policy for the Placement and Namespace bucket classes.

Procedure

  • From the MCG command-line interface, run the following command:

    noobaa -n openshift-storage bucketclass create placement-bucketclass <bucketclass-name> --backingstores <backingstores> --replication-policy=/path/to/json-file.json
    <bucketclass-name>
    Specify the name of the bucket class.
    <backingstores>
    Specify the name of a backingstore. You can pass many backingstores separated by commas.
    /path/to/json-file.json

    Is the path to a JSON file which defines the replication policy.

    Example JSON file:

    [{ "rule_id": "rule-1", "destination_bucket": "first.bucket", "filter": {"prefix": "repl"}}]
    "prefix"

    Is optional. Only the object keys that begin with the prefix are replicated. You can leave it empty, for example, {"prefix": ""}.

    For example:

    noobaa -n openshift-storage bucketclass create placement-bucketclass bc --backingstores azure-blob-ns --replication-policy=/path/to/json-file.json

    This example creates a placement bucket class with a specific replication policy defined in the JSON file.

10.2.2. Setting a bucket class replication policy using a YAML

You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the bucket class, or you can edit its YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure.

Procedure

  1. Apply the following YAML:

    apiVersion: noobaa.io/v1alpha1
    kind: BucketClass
    metadata:
      labels:
        app: <desired-app-label>
      name: <desired-bucketclass-name>
      namespace: <desired-namespace>
    spec:
      placementPolicy:
        tiers:
        - backingstores:
          - <backingstore>
          placement: Spread
      replicationPolicy: [{ "rule_id": "<rule id>", "destination_bucket": "first.bucket", "filter": {"prefix": "<object name prefix>"}}]

    This YAML is an example that creates a placement bucket class. Each object that is uploaded to a bucket claimed through an object bucket claim (OBC) of this bucket class is filtered based on the prefix and is replicated to first.bucket.

    <desired-app-label>
    Specify a label for the app.
    <desired-bucketclass-name>
    Specify the bucket class name.
    <desired-namespace>
    Specify the namespace in which the bucket class gets created.
    <backingstore>
    Specify the name of a backingstore. You can pass many backingstores.
    "rule_id"
    Specify the ID number of the rule, for example, {"rule_id": "rule-1"}.
    "destination_bucket"
    Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"}.
    "prefix"
    Is optional. Only the object keys that begin with the prefix are replicated. You can leave it empty, for example, {"prefix": ""}.

10.3. Enabling log based bucket replication

When creating a bucket replication policy, you can use logs so that recent data is replicated more quickly, while the default scan-based replication works on replicating the rest of the data.

Important

This feature requires setting up bucket logs on AWS or Azure. For more information about setting up AWS logs, see Enabling Amazon S3 server access logging. The AWS logs bucket needs to be created in the same region as the source NamespaceStore AWS bucket.

Note

This feature is only supported in buckets that are backed by a NamespaceStore. Buckets backed by BackingStores cannot utilize log-based replication.

10.3.1. Enabling log based bucket replication for new namespace buckets using OpenShift Web Console in Amazon Web Service environment

You can optimize replication by using the event logs of the Amazon Web Services (AWS) cloud environment. You can enable log-based bucket replication for new namespace buckets by using the web console during the creation of namespace buckets.

Prerequisites

  • Ensure that object logging is enabled in AWS. For more information, see the “Using the S3 console” section in Enabling Amazon S3 server access logging.
  • Administrator access to OpenShift Web Console.

Procedure

  1. In the OpenShift Web Console, navigate to Storage → Object Storage → Object Bucket Claims.
  2. Click Create ObjectBucketClaim.
  3. Enter a name for the ObjectBucketClaim and select the StorageClass and BucketClass.
  4. Select the Enable replication check box to enable replication.
  5. In the Replication policy section, select the Optimize replication using event logs checkbox.
  6. Enter the name of the bucket that will contain the logs under Event log Bucket.

    If the logs are not stored in the root of the bucket, provide the full path without s3://

  7. Enter a prefix to replicate only the objects whose name begins with the given prefix.

10.3.2. Enabling log based bucket replication for existing namespace buckets using YAML

You can enable log-based bucket replication for existing buckets that were created using the command-line interface or by applying a YAML, but not for buckets that were created using AWS S3 commands.

Procedure

  • Edit the YAML of the bucket’s OBC to enable log based bucket replication. Add the following under spec:

    replicationPolicy: '{"rules":[{"rule_id":"<RULE ID>", "destination_bucket":"<DEST>", "filter": {"prefix": "<PREFIX>"}}], "log_replication_info": {"logs_location": {"logs_bucket": "<LOGS_BUCKET>"}}}'
Note

It is also possible to add this to the YAML of an OBC before it is created.

rule_id
Specify an ID of your choice for identifying the rule
destination_bucket
Specify the name of the target MCG bucket that the objects are copied to
(optional) {"filter": {"prefix": <>}}
Specify a prefix string that you can set to filter the objects that are replicated
log_replication_info
Specify an object that contains data related to log-based replication optimization. {"logs_location": {"logs_bucket": <>}} is set to the location of the AWS S3 server access logs.
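
For example, a filled-in policy string might look like the following; the rule ID, prefix, and logs bucket name are illustrative:

replicationPolicy: '{"rules":[{"rule_id":"rule-1", "destination_bucket":"first.bucket", "filter": {"prefix": "repl"}}], "log_replication_info": {"logs_location": {"logs_bucket": "<logs-bucket-name>"}}}'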

10.3.3. Enabling log based bucket replication in Microsoft Azure

Prerequisites

  • Refer to Microsoft Azure documentation and ensure that you have completed the following tasks in the Microsoft Azure portal:

    1. Ensure that you have created a new application and noted down the name, application (client) ID, and directory (tenant) ID.

      For information, see Register an application.

    2. Ensure that a new client secret is created and the application secret is noted down.
    3. Ensure that a new Log Analytics workspace is created and its name and workspace ID are noted down.

      For information, see Create a Log Analytics workspace.

    4. Ensure that the Reader role is assigned under Access control, that members are selected, and that the name of the application that you registered in the previous step is provided.

      For more information, see Assign Azure roles using the Azure portal.

    5. Ensure that a new storage account is created and the Access keys are noted down.
    6. In the Monitoring section of the storage account that you created, select a blob, and in the Diagnostic settings screen, select only StorageWrite and StorageDelete. In the destination details, add the Log Analytics workspace that you created earlier.

      For more information, see Diagnostic settings in Azure Monitor.

    7. Ensure that two new containers for object source and object destination are created.
  • Administrator access to OpenShift Web Console.

Procedure

  1. Create a secret with credentials to be used by the namespacestores.

    apiVersion: v1
    kind: Secret
    metadata:
      name: <namespacestore-secret-name>
    type: Opaque
    data:
        TenantID: <AZURE TENANT ID ENCODED IN BASE64>
        ApplicationID: <AZURE APPLICATION ID ENCODED IN BASE64>
        ApplicationSecret: <AZURE APPLICATION SECRET ENCODED IN BASE64>
        LogsAnalyticsWorkspaceID: <AZURE LOG ANALYTICS WORKSPACE ID ENCODED IN BASE64>
        AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64>
        AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>
  2. Create a NamespaceStore backed by a container created in Azure.

    For more information, see Adding a namespace bucket using the OpenShift Container Platform user interface.

  3. Create a new Namespace-Bucketclass and OBC that utilizes it.
  4. Check the object bucket name by looking in the YAML of the target OBC, or by listing all S3 buckets, for example, with the s3 ls command.
  5. Use the following template to apply an Azure replication policy on your source OBC by adding the following in its YAML, under .spec:

    replicationPolicy: '{"rules":[ {"rule_id":"ID goes here", "sync_deletions": <true or false>, "destination_bucket":"<object bucket name>"}
     ], "log_replication_info":{"endpoint_type":"AZURE"}}'
    sync_deletions
    Specify a boolean value, true or false.
    destination_bucket
    Make sure to use the name of the object bucket, and not the claim. The name can be retrieved using the s3 ls command, or by looking for the value in an OBC’s YAML.

Verification steps

  1. Write objects to the source bucket.
  2. Wait until MCG replicates them.
  3. Delete the objects from the source bucket.
  4. Verify the objects were removed from the target bucket.

10.3.4. Enabling log-based bucket replication deletion

Prerequisites

  • Administrator access to OpenShift Web Console.
  • AWS Server Access Logging configured for the desired bucket.

Procedure

  1. In the OpenShift Web Console, navigate to Storage → Object Storage → Object Bucket Claims.
  2. Click Create new Object bucket claim.
  3. (Optional) In the Replication rules section, select the Sync deletion checkbox for each rule separately.
  4. Enter the name of the bucket that will contain the logs under Event log Bucket.

    If the logs are not stored in the root of the bucket, provide the full path without s3://

  5. Enter a prefix to replicate only the objects whose name begins with the given prefix.

10.4. Bucket logging for Multicloud Object Gateway

Bucket logging helps you to record the S3 operations that are performed against the Multicloud Object Gateway (MCG) bucket for compliance, auditing, and optimization purposes.

Bucket logging supports the following two options:

  • Best-effort - Bucket logging is recorded using UDP on a best-effort basis.
  • Guaranteed - Bucket logging with this option creates a PVC attached to the MCG pods, saves the logs to this PVC on a guaranteed basis, and then moves them from the PVC to the log buckets. With this option, logging takes place twice for every S3 operation as follows:

    • At the start of processing the request
    • At the end with the result of the S3 operation

10.4.1. Enabling bucket logging for Multicloud Object Gateway using the Best-effort option

Prerequisites

Procedure

  1. Create a data bucket where you can upload the objects.

    nb bucket create data.bucket
  2. Create a log bucket where you want to store the logs for bucket operations by using the following command:

    nb bucket create log.bucket
  3. Configure bucket logging on data bucket with log bucket in one of the following ways:

    • Using the NooBaa API

      nb api bucket_api put_bucket_logging '{
         "name": "data.bucket",
         "log_bucket": "log.bucket",
         "log_prefix": "data-bucket-logs"
      }'
    • Using the S3 API

      alias s3api_alias='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3api'
      1. Create a file called setlogging.json in the following format:

        {
          "LoggingEnabled": {
             "TargetBucket": "<log-bucket-name>",
             "TargetPrefix": "<prefix/empty-string>"
          }
        }
      2. Run the following command:

        s3api_alias put-bucket-logging --endpoint <ep> --bucket <source-bucket> --bucket-logging-status file://setlogging.json --no-verify-ssl
  4. Verify if the bucket logging is set for the data bucket in one of the following ways:

    • Using the NooBaa API

      nb api bucket_api get_bucket_logging '{
         "name": "data.bucket"
      }'
    • Using the S3 API

      s3api_alias get-bucket-logging --no-verify-ssl --endpoint <ep> --bucket <source-bucket>

      The S3 operations can take up to 24 hours to get recorded in the logs bucket. The following example shows the recorded logs and how to download them:

      Example

      s3_alias cp s3://logs.bucket/data-bucket-logs/logs.bucket.bucket_data-bucket-logs_1719230150.log - | tail -n 2
      
      Jun 24 14:00:02 10-XXX-X-XXX.sts.openshift-storage.svc.cluster.local  {"noobaa_bucket_logging":"true","op":"GET","bucket_owner":"operator@noobaa.io","source_bucket":"data.bucket","object_key":"/data.bucket?list-type=2&prefix=data-bucket-logs&delimiter=%2F&encoding-type=url","log_bucket":"logs.bucket","remote_ip":"100.XX.X.X","request_uri":"/data.bucket?list-type=2&prefix=data-bucket-logs&delimiter=%2F&encoding-type=url","request_id":"luv2XXXX-ctyg2k-12gs"} Jun 24 14:00:06 10-XXX-X-XXX.s3.openshift-storage.svc.cluster.local  {"noobaa_bucket_logging":"true","op":"PUT","bucket_owner":"operator@noobaa.io","source_bucket":"data.bucket","object_key":"/data.bucket/B69EC83F-0177-44D8-A8D1-4A10C5A5AB0F.file","log_bucket":"logs.bucket","remote_ip":"100.XX.X.X","request_uri":"/data.bucket/B69EC83F-0177-44D8-A8D1-4A10C5A5AB0F.file","request_id":"luv2XXXX-9syea5-x5z"}

  5. (Optional) To disable bucket logging, use the following command:

    nb api bucket_api delete_bucket_logging '{
       "name": "data.bucket"
    }'

10.4.2. Enabling bucket logging using the Guaranteed option

Procedure

  • Enable Guaranteed bucket logging using the NooBaa CR in one of the following ways:

    • Using the default CephFS storage class, update the NooBaa CR spec:

      bucketLogging:
      {
      loggingType: guaranteed
      }
    • Using the RWX PVC that you created:

      Note

      Make sure that the PVC supports RWX

      bucketLogging:
      {
      loggingType: guaranteed
      bucketLoggingPVC: <pvc-name>
      }
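
For example, the following is a minimal sketch of applying the Guaranteed setting with an oc patch command; it assumes the NooBaa CR is named noobaa in the openshift-storage namespace and uses a placeholder PVC name:

$ oc patch noobaa noobaa --type='merge' -n openshift-storage -p '{
  "spec": {
    "bucketLogging": {
      "loggingType": "guaranteed",
      "bucketLoggingPVC": "<pvc-name>"
    }
  }
}'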

10.5. Synchronizing versions in Multicloud Object Gateway bucket replication

Prerequisites

  • A Multicloud Object Gateway (MCG) source bucket, which is created from an object bucket claim (OBC), and any MCG target bucket. For example, you can create the two buckets from OBCs by using the MCG command-line interface (CLI):

    • Create the source and target buckets using OBCs:

      $ mcg-cli obc create source-bucket --exact
      $ mcg-cli obc create target-bucket --exact

      where --exact is optional.

  • Ensure that S3 client aliases with MCG credentials and endpoint are set up.

    $ NOOBAA_ACCESS_KEY=$(oc extract secret/noobaa-admin -n openshift-storage --keys=AWS_ACCESS_KEY_ID --to=- 2>/dev/null); \
    NOOBAA_SECRET_KEY=$(oc extract secret/noobaa-admin -n openshift-storage --keys=AWS_SECRET_ACCESS_KEY --to=- 2>/dev/null); \
    S3_ENDPOINT=https://$(oc get route s3 -n openshift-storage -o json | jq -r ".spec.host")
    $ alias common_s3='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint $S3_ENDPOINT --no-verify-ssl'; \
    alias s3_alias='common_s3 s3'; \
    alias s3api_alias='common_s3 s3api'
  • Make sure to enable versioning on both the source and target bucket by using the put-bucket-versioning command in the AWS S3 client:

    $ s3api_alias put-bucket-versioning --bucket source-bucket --versioning-configuration Status=Enabled
    $ s3api_alias put-bucket-versioning --bucket target-bucket --versioning-configuration Status=Enabled

Procedure

  1. Patch or edit the source bucket’s OBC to add a replication policy with sync_versions: true:

    $ oc patch obc source-bucket -n openshift-storage --type=merge -p '{"spec": {"additionalConfig": {"replicationPolicy": "{\"rules\":[{\"rule_id\":\"replication-rule-a\",\"destination_bucket\":\"target-bucket\", \"sync_versions\": true}]}"}}}'

    Normal bucket replication replicates only the latest version. Using sync_versions adds the ability to replicate older versions as well, in their original order.

Note
  • Delete markers are replicated if configured and log-based replication is used.
  • Deletion of specific version IDs is not replicated, to avoid human errors.

Verification steps

  • Verify the replication to the target bucket by adding a few versions under an object key and waiting for them to get replicated to the target bucket:

    $ echo 'version_a' | s3_alias cp - s3://source-bucket/versioned_obj.txt
    $ echo 'version_b' | s3_alias cp - s3://source-bucket/versioned_obj.txt
    $ echo 'version_c' | s3_alias cp - s3://source-bucket/versioned_obj.txt

    After a few minutes, compare the versions of the object on both the buckets:

    $ s3api_alias list-object-versions --bucket source-bucket --prefix versioned_obj.txt | jq -r ".Versions[].ETag"
    "aaabf266d38a8e995cef03c13ee9a7f1"
    "181d8a23f59939c0cddec1692e05cdf3"
    "ebc538cc6ffa04a39263f4b1be2f832f"
    $ s3api_alias list-object-versions --bucket target-bucket --prefix versioned_obj.txt | jq -r ".Versions[].ETag"
    "aaabf266d38a8e995cef03c13ee9a7f1"
    "181d8a23f59939c0cddec1692e05cdf3"
    "ebc538cc6ffa04a39263f4b1be2f832f"

Chapter 11. Object Bucket Claim

An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads.

You can create an Object Bucket Claim in three ways:

An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can’t create new buckets by default.

11.1. Dynamic Object Bucket Claim

Similar to Persistent Volumes, you can add the details of the Object Bucket claim (OBC) to your application’s YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application.

Note

The Multicloud Object Gateway endpoints use self-signed certificates only if OpenShift uses self-signed certificates. Using signed certificates in OpenShift automatically replaces the Multicloud Object Gateway endpoint certificates with signed certificates. Get the certificate currently used by the Multicloud Object Gateway by accessing the endpoint via the browser. See Accessing the Multicloud Object Gateway with your applications for more information.

Procedure

  1. Add the following lines to your application YAML:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: <obc-name>
    spec:
      generateBucketName: <obc-bucket-name>
      storageClassName: openshift-storage.noobaa.io

    These lines are the OBC itself.

    1. Replace <obc-name> with a unique OBC name.
    2. Replace <obc-bucket-name> with a unique bucket name for your OBC.
  2. To automate the use of the OBC, add more lines to the YAML file.

    For example:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: testjob
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - image: <your application image>
              name: test
              env:
                - name: BUCKET_NAME
                  valueFrom:
                    configMapKeyRef:
                      name: <obc-name>
                      key: BUCKET_NAME
                - name: BUCKET_HOST
                  valueFrom:
                    configMapKeyRef:
                      name: <obc-name>
                      key: BUCKET_HOST
                - name: BUCKET_PORT
                  valueFrom:
                    configMapKeyRef:
                      name: <obc-name>
                      key: BUCKET_PORT
                - name: AWS_ACCESS_KEY_ID
                  valueFrom:
                    secretKeyRef:
                      name: <obc-name>
                      key: AWS_ACCESS_KEY_ID
                - name: AWS_SECRET_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: <obc-name>
                      key: AWS_SECRET_ACCESS_KEY

    The example shows the mapping between the bucket claim result, which is a configuration map with the bucket data and a secret with the credentials, and the application's environment variables. This specific job claims the object bucket from NooBaa, which creates a bucket and an account.

    1. Replace all instances of <obc-name> with your OBC name.
    2. Replace <your application image> with your application image.
  3. Apply the updated YAML file:

    # oc apply -f <yaml.file>

    Replace <yaml.file> with the name of your YAML file.

  4. To view the new configuration map, run the following:

    # oc get cm <obc-name> -o yaml

    Replace <obc-name> with the name of your OBC.

    You can expect the following environment variables in the output:

    • BUCKET_HOST - Endpoint to use in the application.
    • BUCKET_PORT - The port available for the application.

    • BUCKET_NAME - Requested or generated bucket name.
    • AWS_ACCESS_KEY_ID - Access key that is part of the credentials.
    • AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials.
Important

Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. These names are used so that the credentials are compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write, or list objects from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them.

# oc get secret <obc_name> -o yaml
<obc_name>
Specify the name of the object bucket claim.
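
For example, the following is a minimal sketch of decoding the two keys directly from the secret; the namespace flag assumes the OBC and its secret are in your application namespace:

# oc get secret <obc_name> -n <app-namespace> -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode
# oc get secret <obc_name> -n <app-namespace> -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode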

11.2. Creating an Object Bucket Claim using the command line interface

When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service.

Prerequisites

  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure

  1. Use the command-line interface to generate the details of a new bucket and credentials.

    Run the following command:

    # noobaa obc create <obc-name> -n openshift-storage

    Replace <obc-name> with a unique OBC name, for example, myappobc.

    Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace.

    For example:

    INFO[0001] ✅ Created: ObjectBucketClaim "test21obc"

    The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC.

  2. Run the following command to view the OBC:

    # oc get obc -n openshift-storage

    For example:

    NAME        STORAGE-CLASS                 PHASE   AGE
    test21obc   openshift-storage.noobaa.io   Bound   38s
  3. Run the following command to view the YAML file for the new OBC:

    # oc get obc test21obc -o yaml -n openshift-storage

    For example:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      creationTimestamp: "2019-10-24T13:30:07Z"
      finalizers:
      - objectbucket.io/finalizer
      generation: 2
      labels:
        app: noobaa
        bucket-provisioner: openshift-storage.noobaa.io-obc
        noobaa-domain: openshift-storage.noobaa.io
      name: test21obc
      namespace: openshift-storage
      resourceVersion: "40756"
      selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc
      uid: 64f04cba-f662-11e9-bc3c-0295250841af
    spec:
      ObjectBucketName: obc-openshift-storage-test21obc
      bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4
      generateBucketName: test21obc
      storageClassName: openshift-storage.noobaa.io
    status:
      phase: Bound
  4. Inside your openshift-storage namespace, you can find the configuration map and the secret needed to use this OBC. The configuration map and the secret have the same name as the OBC.

    Run the following command to view the secret:

    # oc get -n openshift-storage secret test21obc -o yaml

    For example:

    apiVersion: v1
    data:
      AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY=
      AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ==
    kind: Secret
    metadata:
      creationTimestamp: "2019-10-24T13:30:07Z"
      finalizers:
      - objectbucket.io/finalizer
      labels:
        app: noobaa
        bucket-provisioner: openshift-storage.noobaa.io-obc
        noobaa-domain: openshift-storage.noobaa.io
      name: test21obc
      namespace: openshift-storage
      ownerReferences:
      - apiVersion: objectbucket.io/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: ObjectBucketClaim
        name: test21obc
        uid: 64f04cba-f662-11e9-bc3c-0295250841af
      resourceVersion: "40751"
      selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc
      uid: 65117c1c-f662-11e9-9094-0a5305de57bb
    type: Opaque

    The secret gives you the S3 access credentials.

  5. Run the following command to view the configuration map:

    # oc get -n openshift-storage cm test21obc -o yaml

    For example:

    apiVersion: v1
    data:
      BUCKET_HOST: 10.0.171.35
      BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4
      BUCKET_PORT: "31242"
      BUCKET_REGION: ""
      BUCKET_SUBREGION: ""
    kind: ConfigMap
    metadata:
      creationTimestamp: "2019-10-24T13:30:07Z"
      finalizers:
      - objectbucket.io/finalizer
      labels:
        app: noobaa
        bucket-provisioner: openshift-storage.noobaa.io-obc
        noobaa-domain: openshift-storage.noobaa.io
      name: test21obc
      namespace: openshift-storage
      ownerReferences:
      - apiVersion: objectbucket.io/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: ObjectBucketClaim
        name: test21obc
        uid: 64f04cba-f662-11e9-bc3c-0295250841af
      resourceVersion: "40752"
      selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc
      uid: 651c6501-f662-11e9-9094-0a5305de57bb

    The configuration map contains the S3 endpoint information for your application.

11.3. Creating an Object Bucket Claim using the OpenShift Web Console

You can create an Object Bucket Claim (OBC) using the OpenShift Web Console.

Prerequisites

  • Administrative access to the OpenShift Web Console.
  • In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 11.1, “Dynamic Object Bucket Claim”.

Procedure

  1. Log into the OpenShift Web Console.
  2. On the left navigation bar, click Storage → Object Storage → Object Bucket Claims → Create Object Bucket Claim.

    1. Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu:

      Internal mode

      The following storage classes, which were created after deployment, are available for use:

      • ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW)
      • openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG)
      External mode

      The following storage classes, which were created after deployment, are available for use:

      • ocs-external-storagecluster-ceph-rgw uses the RGW
      • openshift-storage.noobaa.io uses the MCG

        Note

        The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from previous OpenShift Data Foundation releases.

    2. Click Create.

      Once you create the OBC, you are redirected to its detail page.

11.4. Attaching an Object Bucket Claim to a deployment

Once created, Object Bucket Claims (OBCs) can be attached to specific deployments.

Prerequisites

  • Administrative access to the OpenShift Web Console.

Procedure

  1. On the left navigation bar, click Storage → Object Storage → Object Bucket Claims.
  2. Click the Action menu (⋮) next to the OBC you created.

    1. From the drop-down menu, select Attach to Deployment.
    2. Select the desired deployment from the Deployment Name list, then click Attach.

11.5. Viewing object buckets using the OpenShift Web Console

You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console.

Prerequisites

  • Administrative access to the OpenShift Web Console.

Procedure

  1. Log into the OpenShift Web Console.
  2. On the left navigation bar, click Storage → Object Storage → Object Buckets.

    Optional: You can also navigate to the details page of a specific OBC, and click the Resource link to view the object buckets for that OBC.

  3. Select the object bucket for which you want to see the details. Once selected, you are navigated to the Object Bucket Details page.

11.6. Deleting Object Bucket Claims

Prerequisites

  • Administrative access to the OpenShift Web Console.

Procedure

  1. On the left navigation bar, click Storage → Object Storage → Object Bucket Claims.
  2. Click the Action menu (⋮) next to the Object Bucket Claim (OBC) you want to delete.

    1. Select Delete Object Bucket Claim.
    2. Click Delete.

Chapter 12. Caching policy for object buckets

A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway (MCG) bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket.

12.1. Creating an AWS cache bucket

Prerequisites

  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure

  1. Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command:

    noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>
    1. Replace <namespacestore> with the name of the namespacestore.
    2. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose.
    3. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.

      You can also add storage resources by applying a YAML. First create a secret with credentials:

      apiVersion: v1
      kind: Secret
      metadata:
        name: <namespacestore-secret-name>
      type: Opaque
      data:
        AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64>
        AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>

      You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64>.

      Replace <namespacestore-secret-name> with a unique name.

      Then apply the following YAML:

      apiVersion: noobaa.io/v1alpha1
      kind: NamespaceStore
      metadata:
        finalizers:
        - noobaa.io/finalizer
        labels:
          app: noobaa
        name: <namespacestore>
        namespace: openshift-storage
      spec:
        awsS3:
          secret:
            name: <namespacestore-secret-name>
            namespace: <namespace-secret>
          targetBucket: <target-bucket>
        type: aws-s3
    4. Replace <namespacestore> with a unique name.
    5. Replace <namespacestore-secret-name> with the secret created in the previous step.
    6. Replace <namespace-secret> with the namespace used to create the secret in the previous step.
    7. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore.
  2. Run the following command to create a bucket class:

    noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>
    1. Replace <my-cache-bucket-class> with a unique bucket class name.
    2. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field.
    3. Replace <namespacestore> with the namespacestore created in the previous step.
  3. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2.

    noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>
    1. Replace <my-bucket-claim> with a unique name.
    2. Replace <custom-bucket-class> with the name of the bucket class created in step 2.

12.2. Creating an IBM COS cache bucket

Prerequisites

  • Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable.

    Note

    Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS.

Procedure

  1. Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command:

    noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>
    1. Replace <namespacestore> with the name of the NamespaceStore.
    2. Replace <IBM ACCESS KEY>, <IBM SECRET ACCESS KEY>, <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket.
    3. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration.

      You can also add storage resources by applying a YAML. First, create a secret with the credentials:

      apiVersion: v1
      kind: Secret
      metadata:
        name: <namespacestore-secret-name>
      type: Opaque
      data:
        IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64>
        IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>

      You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>.

      Replace <namespacestore-secret-name> with a unique name.

      Then apply the following YAML:

      apiVersion: noobaa.io/v1alpha1
      kind: NamespaceStore
      metadata:
        finalizers:
        - noobaa.io/finalizer
        labels:
          app: noobaa
        name: <namespacestore>
        namespace: openshift-storage
      spec:
        s3Compatible:
          endpoint: <IBM COS ENDPOINT>
          secret:
            name: <backingstore-secret-name>
            namespace: <namespace-secret>
          signatureVersion: v2
          targetBucket: <target-bucket>
        type: ibm-cos
    4. Replace <namespacestore> with a unique name.
    5. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint.
    6. Replace <backingstore-secret-name> with the secret created in the previous step.
    7. Replace <namespace-secret> with the namespace used to create the secret in the previous step.
    8. Replace <target-bucket> with the IBM COS bucket you created for the namespacestore.
  2. Run the following command to create a bucket class:

    noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>
    1. Replace <my-bucket-class> with a unique bucket class name.
    2. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field.
    3. Replace <namespacestore> with the namespacestore created in the previous step.
  3. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2.

    noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>
    1. Replace <my-bucket-claim> with a unique name.
    2. Replace <custom-bucket-class> with the name of the bucket class created in step 2.

Chapter 13. Lifecycle bucket configuration in Multicloud Object Gateway

Multicloud Object Gateway (MCG) lifecycle provides a way to reduce storage costs due to accumulated data objects.

Deletion of expired objects is a simple way to handle unused data. Data expiration is a part of Amazon Web Services (AWS) lifecycle management and sets an expiration date for automatic deletion. The minimal time resolution of the lifecycle expiration is one day. For more information, see Expiring objects.

The AWS S3 API is used to configure the lifecycle of a bucket in MCG. For information about the data bucket APIs and their support level, see Support of Multicloud Object Gateway data bucket APIs.

There are a few limitations with the expiration rule API for MCG in comparison with AWS:

  • ExpiredObjectDeleteMarker is accepted but it is not processed.
  • No option to define specific non-current version’s expiration conditions
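
For example, the following is a minimal sketch of setting a single expiration rule with the AWS S3 API; it assumes the s3api_alias alias defined earlier, and the bucket name, rule ID, and prefix are illustrative:

$ cat lifecycle.json
{
  "Rules": [
    {
      "ID": "expire-tmp-objects",
      "Status": "Enabled",
      "Filter": { "Prefix": "tmp/" },
      "Expiration": { "Days": 30 }
    }
  ]
}

$ s3api_alias put-bucket-lifecycle-configuration --bucket first.bucket --lifecycle-configuration file://lifecycle.json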

Chapter 14. Scaling Multicloud Object Gateway performance

The Multicloud Object Gateway (MCG) performance may vary from one environment to another. In some cases, specific applications require faster performance which can be easily addressed by scaling S3 endpoints.

The MCG resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default:

  • Storage service
  • S3 endpoint service

S3 endpoint service

The S3 endpoint is a service that every Multicloud Object Gateway (MCG) provides by default that handles the heavy lifting data digestion in the MCG. The endpoint service handles the inline data chunking, deduplication, compression, and encryption, and it accepts data placement instructions from the MCG.

14.1. Automatic scaling of MultiCloud Object Gateway endpoints

The number of MultiCloud Object Gateway (MCG) endpoints scale automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG.

You can scale the Horizontal Pod Autoscaler (HPA) for noobaa-endpoint using the following oc patch command, for example:

# oc patch -n openshift-storage storagecluster ocs-storagecluster \
    --type merge \
    --patch '{"spec": {"multiCloudGateway": {"endpoints": {"minCount": 3,"maxCount": 10}}}}'

The example above sets the minCount to 3 and the maxCount to 10.

14.2. Increasing CPU and memory for PV pool resources

MCG default configuration supports low resource consumption. However, when you need to increase CPU and memory to accommodate specific workloads and to increase MCG performance for the workloads, you can configure the required values for CPU and memory in the OpenShift Web Console.

Procedure

  1. In the OpenShift Web Console, navigate to Storage → Object Storage → Backing Store.
  2. Select the relevant backing store and click on YAML.
  3. Scroll down until you find spec: and update pvPool with CPU and memory. Add a new property of limits and then add cpu and memory.

    Example reference:

    spec:
      pvPool:
        resources:
          limits:
            cpu: 1000m
            memory: 4000Mi
          requests:
            cpu: 800m
            memory: 800Mi
            storage: 50Gi
  4. Click Save.

Verification steps

  • To verify, you can check the resource values of the PV pool pods.

Chapter 15. Accessing the RADOS Object Gateway S3 endpoint

Users can access the RADOS Object Gateway (RGW) endpoint directly.

In previous versions of Red Hat OpenShift Data Foundation, the RGW service needed to be manually exposed to create an RGW public route. As of OpenShift Data Foundation version 4.7, the RGW route is created by default and is named rook-ceph-rgw-ocs-storagecluster-cephobjectstore.
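
For example, the following is a minimal sketch of retrieving the RGW S3 endpoint host from the default route; it assumes the route exists in the openshift-storage namespace:

$ oc get route rook-ceph-rgw-ocs-storagecluster-cephobjectstore -n openshift-storage -o jsonpath='{.spec.host}'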

Chapter 16. Using TLS certificates for applications accessing RGW

Most S3 applications require a TLS certificate in one of the following forms: as an option included in the Deployment configuration file, as a file passed in the request, or stored in /etc/pki paths.

TLS certificates for the RADOS Object Gateway (RGW) are stored as a Kubernetes secret, and you need to fetch the details from the secret.

Prerequisites

A running OpenShift Data Foundation cluster.

Procedure

  • For internal RGW server

    • Get the TLS certificate and key from the kubernetes secret:

      $ oc get secrets/<secret_name> -o jsonpath='{.data..tls\.crt}' | base64 -d
      
      $ oc get secrets/<secret_name> -o jsonpath='{.data..tls\.key}' | base64 -d
      <secret_name>
      The default kubernetes secret name is <objectstore_name>-cos-ceph-rgw-tls-cert. Specify the name of the object store.
  • For external RGW server

    • Get the TLS certificate from the Kubernetes secret:

      $ oc get secrets/<secret_name> -o jsonpath='{.data.cert}' | base64 -d
      <secret_name>
      The default kubernetes secret name is ceph-rgw-tls-cert and it is an opaque type of secret. The key value for storing the TLS certificates is cert.

16.1. Accessing External RGW server in OpenShift Data Foundation

Accessing External RGW server using Object Bucket Claims

The S3 credentials, such as the access key and secret key, are stored in the secret generated during Object Bucket Claim (OBC) creation. You can fetch them by using the following commands:

# oc get secret <object bucket claim name> -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode
# oc get secret <object bucket claim name> -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode

Similarly, you can fetch the endpoint details from the configmap of OBC:

# oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_HOST}'
# oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_PORT}'
# oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_NAME}'

Accessing External RGW server using the Ceph Object Store User CR

You can fetch the S3 Credentials and endpoint details from the secret generated as part of the Ceph Object Store User CR:

# oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.AccessKey}' | base64 --decode
# oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.SecretKey}' | base64 --decode
# oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.Endpoint}' | base64 --decode
Important

For both the access mechanisms, you can either request for new certificates from the administrator or reuse the certificates from the Kubernetes secret, ceph-rgw-tls-cert.

Chapter 17. Using the Multicloud Object Gateway’s Security Token Service to assume the role of another user

Multicloud Object Gateway (MCG) provides support for a security token service (STS) similar to the one provided by Amazon Web Services.

To allow other users to assume the role of a certain user, you need to assign a role configuration to the user. You can manage the configuration of roles using the MCG CLI tool.

The following example shows role configuration that allows two MCG users (assumer@mcg.test and assumer2@mcg.test) to assume a certain user’s role:

'{"role_name": "AllowTwoAssumers", "assume_role_policy": {"version": "2012-10-17", "statement": [ {"action": ["sts:AssumeRole"], "effect": "allow", "principal": ["assumer@mcg.test", "assumer2@mcg.test"]}]}}'
  1. Assign the role configuration by using the MCG CLI tool.

    mcg sts assign-role --email <assumed user's username> --role_config '{"role_name": "AllowTwoAssumers", "assume_role_policy": {"version": "2012-10-17", "statement": [ {"action": ["sts:AssumeRole"], "effect": "allow", "principal": ["assumer@mcg.test", "assumer2@mcg.test"]}]}}'
  2. Collect the following information before proceeding to assume the role as it is needed for the subsequent steps:

    • The access key ID and secret access key of the assumer (the user who assumes the role)
    • The MCG STS endpoint, which can be retrieved by using the command:

      $ oc -n openshift-storage get route
    • The access key ID of the assumed user.
    • The value of role_name in your role configuration.
    • A name of your choice for the role session
  3. After the role configuration is ready, assume the role by running the following command, filling in the data that you collected in the previous step:
AWS_ACCESS_KEY_ID=<aws-access-key-id> AWS_SECRET_ACCESS_KEY=<aws-secret-access-key1> aws --endpoint-url <mcg-sts-endpoint> sts assume-role --role-arn arn:aws:sts::<assumed-user-access-key-id>:role/<role-name> --role-session-name <role-session-name>
Note

Adding --no-verify-ssl might be necessary depending on your cluster’s configuration.

The resulting output contains the access key ID, secret access key, and session token that can be used for executing actions while assuming the other user’s role.

You can use the credentials generated after the assume role steps as shown in the following example:

AWS_ACCESS_KEY_ID=<aws-access-key-id> AWS_SECRET_ACCESS_KEY=<aws-secret-access-key1> AWS_SESSION_TOKEN=<session token> aws --endpoint-url <mcg-s3-endpoint> s3 ls