Chapter 8. Multicloud Object Gateway bucket replication

Data replication from one Multicloud Object Gateway (MCG) bucket to another MCG bucket provides higher resiliency and better collaboration options. These buckets can be either data buckets or namespace buckets backed by any supported storage solution (AWS S3, Azure, and so on).

A replication policy is composed of a list of replication rules. Each rule defines the destination bucket, and can specify a filter based on an object key prefix. Configuring a complementing replication policy on the second bucket results in bidirectional replication.
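
For example, a replication policy is expressed as a JSON array of rules, in the same format used throughout this chapter. The following sketch shows a policy with two rules; the bucket names and prefixes are illustrative:

    [
      { "rule_id": "rule-1", "destination_bucket": "archive.bucket", "filter": {"prefix": "logs/"} },
      { "rule_id": "rule-2", "destination_bucket": "analytics.bucket", "filter": {"prefix": ""} }
    ]

To make the replication bidirectional, you would configure a complementing policy of the same form on the destination bucket that points back to the source bucket.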

Prerequisites

  • A running OpenShift Data Foundation Platform.
  • Access to the Multicloud Object Gateway. For more information, see Accessing the Multicloud Object Gateway with your applications.
  • Download the Multicloud Object Gateway (MCG) command-line interface:

    # subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
    # yum install mcg
    Important

    Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Power use the following command:

    # subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms
  • Alternatively, you can install the mcg package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages

    Important

    Choose the correct Product Variant according to your architecture.

    Note

    Certain MCG features are only available in certain MCG versions, and the appropriate MCG CLI tool version must be used to fully utilize MCG’s features.
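
    You can check which version of the MCG command-line interface is installed, for example (a quick check; the output format can vary between versions):

    # noobaa version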

To replicate a bucket, see Replicating a bucket to another bucket.

To set a bucket class replication policy, see Setting a bucket class replication policy.

8.1. Replicating a bucket to another bucket

You can set the bucket replication policy in two ways:

  • Replicating a bucket to another bucket using the MCG command-line interface
  • Replicating a bucket to another bucket using a YAML

8.1.1. Replicating a bucket to another bucket using the MCG command-line interface

You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the object bucket claim (OBC). You must define the replication policy parameter in a JSON file.

Procedure

From the MCG command-line interface, run the following command to create an OBC with a specific replication policy:

noobaa obc create <bucket-claim-name> -n openshift-storage --replication-policy /path/to/json-file.json
<bucket-claim-name>
Specify the name of the bucket claim.
/path/to/json-file.json
Is the path to a JSON file which defines the replication policy.

Example JSON file:

[{ "rule_id": "rule-1", "destination_bucket": "first.bucket", "filter": {"prefix": "repl"}}]
"prefix"
Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""}.

For example:

noobaa obc create my-bucket-claim -n openshift-storage --replication-policy /path/to/json-file.json
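
After the OBC is created, you can optionally inspect the generated ObjectBucketClaim resource to confirm that the claim exists and review its configuration, for example:

    oc get obc my-bucket-claim -n openshift-storage -o yaml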

8.1.2. Replicating a bucket to another bucket using a YAML

You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the object bucket claim (OBC), or you can edit the YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure.

Procedure

  • Apply the following YAML:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: <desired-bucket-claim>
      namespace: <desired-namespace>
    spec:
      generateBucketName: <desired-bucket-name>
      storageClassName: openshift-storage.noobaa.io
      additionalConfig:
        replicationPolicy: '{"rules": [{ "rule_id": "", "destination_bucket": "", "filter": {"prefix": ""}}]}'
    <desired-bucket-claim>
    Specify the name of the bucket claim.
    <desired-namespace>
    Specify the namespace.
    <desired-bucket-name>
    Specify the prefix of the bucket name.
    "rule_id"
    Specify the ID number of the rule, for example, {"rule_id": "rule-1"}.
    "destination_bucket"
    Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"}.
    "prefix"
    Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""}.
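
    For reference, the following is a filled-in sketch of the same OBC using the example values from the command-line procedure; the claim name, namespace, and bucket name prefix are illustrative:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: my-bucket-claim
      namespace: openshift-storage
    spec:
      generateBucketName: my-bucket
      storageClassName: openshift-storage.noobaa.io
      additionalConfig:
        replicationPolicy: '{"rules": [{ "rule_id": "rule-1", "destination_bucket": "first.bucket", "filter": {"prefix": "repl"}}]}'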

8.2. Setting a bucket class replication policy

It is possible to set up a replication policy that automatically applies to all the buckets created under a certain bucket class. You can do this in two ways:

  • Setting a bucket class replication policy using the MCG command-line interface
  • Setting a bucket class replication policy using a YAML

8.2.1. Setting a bucket class replication policy using the MCG command-line interface

You can set a replication policy for Multicloud Object Gateway (MCG) data buckets at the time of creation of a bucket class. You must define the replication-policy parameter in a JSON file. You can set a bucket class replication policy for the Placement and Namespace bucket classes.

Procedure

  • From the MCG command-line interface, run the following command:

    noobaa -n openshift-storage bucketclass create placement-bucketclass <bucketclass-name> --backingstores <backingstores> --replication-policy=/path/to/json-file.json
    <bucketclass-name>
    Specify the name of the bucket class.
    <backingstores>
    Specify the name of a backingstore. You can pass many backingstores separated by commas.
    /path/to/json-file.json

    Is the path to a JSON file which defines the replication policy.

    Example JSON file:

    [{ "rule_id": "rule-1", "destination_bucket": "first.bucket", "filter": {"prefix": "repl"}}]
    "prefix"

    Is optional. It is the prefix of the object keys that should be replicated. You can leave it empty, for example, {"prefix": ""}.

    For example:

    noobaa -n openshift-storage bucketclass create placement-bucketclass bc --backingstores azure-blob-ns --replication-policy=/path/to/json-file.json

    This example creates a placement bucket class with a specific replication policy defined in the JSON file.
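
    To confirm that the bucket class exists and review its configuration, you can inspect the resource, for example:

    oc get bucketclasses.noobaa.io bc -n openshift-storage -o yaml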

8.2.2. Setting a bucket class replication policy using a YAML

You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of a bucket class, or you can edit the YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure.

Procedure

  1. Apply the following YAML:

    apiVersion: noobaa.io/v1alpha1
    kind: BucketClass
    metadata:
      labels:
        app: <desired-app-label>
      name: <desired-bucketclass-name>
      namespace: <desired-namespace>
    spec:
      placementPolicy:
        tiers:
        - backingstores:
          - <backingstore>
          placement: Spread
      replicationPolicy: '[{ "rule_id": "<rule id>", "destination_bucket": "first.bucket", "filter": {"prefix": "<object name prefix>"}}]'

    This YAML is an example that creates a placement bucket class. Each object that is uploaded to an object bucket claim (OBC) bucket of this bucket class is filtered based on the prefix and replicated to first.bucket.

    <desired-app-label>
    Specify a label for the app.
    <desired-bucketclass-name>
    Specify the bucket class name.
    <desired-namespace>
    Specify the namespace in which the bucket class gets created.
    <backingstore>
    Specify the name of a backingstore. You can pass many backingstores.
    "rule_id"
    Specify the ID number of the rule, for example, {"rule_id": "rule-1"}.
    "destination_bucket"
    Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"}.
    "prefix"
    Is optional. It is the prefix of the object keys that should be replicated. You can leave it empty, for example, {"prefix": ""}.
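
    For reference, the following is a filled-in sketch of this bucket class using the example values from the command-line procedure; the label, names, and backingstore are illustrative:

    apiVersion: noobaa.io/v1alpha1
    kind: BucketClass
    metadata:
      labels:
        app: noobaa
      name: bc
      namespace: openshift-storage
    spec:
      placementPolicy:
        tiers:
        - backingstores:
          - noobaa-default-backing-store
          placement: Spread
      replicationPolicy: '[{ "rule_id": "rule-1", "destination_bucket": "first.bucket", "filter": {"prefix": "repl"}}]'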

8.3. Enabling log based bucket replication

When creating a bucket replication policy, you can use logs so that recent data is replicated more quickly, while the default scan-based replication works on replicating the rest of the data.

Important

This feature requires setting up bucket logs on AWS or Azure. For more information about setting up AWS logs, see Enabling Amazon S3 server access logging. The AWS logs bucket needs to be created in the same region as the source NamespaceStore AWS bucket.

Note

This feature is only supported in buckets that are backed by a NamespaceStore. Buckets backed by BackingStores cannot utilize log-based replication.

8.3.1. Enabling log based bucket replication for new namespace buckets using OpenShift Web Console in Amazon Web Service environment

You can optimize replication by using the event logs of the Amazon Web Services (AWS) cloud environment. You enable log based bucket replication for new namespace buckets by using the web console during the creation of the namespace buckets.

Prerequisites

  • Ensure that object logging is enabled in AWS. For more information, see the “Using the S3 console” section in Enabling Amazon S3 server access logging.
  • Administrator access to OpenShift Web Console.

Procedure

  1. In the OpenShift Web Console, navigate to Storage → Object Storage → Object Bucket Claims.
  2. Click Create ObjectBucketClaim.
  3. Enter a name for the ObjectBucketClaim and select the StorageClass and BucketClass.
  4. Select the Enable replication check box to enable replication.
  5. In the Replication policy section, select the Optimize replication using event logs checkbox.
  6. Enter the name of the bucket that will contain the logs under Event log Bucket.

    If the logs are not stored in the root of the bucket, provide the full path without the s3:// prefix.

  7. Enter a prefix to replicate only the objects whose name begins with the given prefix.

8.3.2. Enabling log based bucket replication for existing namespace buckets using YAML

You can enable log based bucket replication for the existing buckets that are created using the command-line interface or by applying a YAML, but not for the buckets that are created using AWS S3 commands.

Procedure

  • Edit the YAML of the bucket’s OBC to enable log based bucket replication. Add the following under spec:

    replicationPolicy: '{"rules":[{"rule_id":"<RULE ID>", "destination_bucket":"<DEST>", "filter": {"prefix": "<PREFIX>"}}], "log_replication_info": {"logs_location": {"logs_bucket": "<LOGS_BUCKET>"}}}'
Note

It is also possible to add this to the YAML of an OBC before it is created.

rule_id
Specify an ID of your choice for identifying the rule.
destination_bucket
Specify the name of the target MCG bucket that the objects are copied to.
(optional) {"filter": {"prefix": <>}}
Specify a prefix string that you can set to filter the objects that are replicated.
log_replication_info
Specify an object that contains data related to log-based replication optimization. {"logs_location": {"logs_bucket": <>}} is set to the location of the AWS S3 server access logs.
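
For example, a filled-in version of the policy string; the rule ID, destination bucket, prefix, and logs bucket name are illustrative:

    replicationPolicy: '{"rules":[{"rule_id":"rule-1", "destination_bucket":"first.bucket", "filter": {"prefix": "repl"}}], "log_replication_info": {"logs_location": {"logs_bucket": "my-access-logs"}}}'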

8.3.3. Enabling log based bucket replication in Microsoft Azure

Prerequisites

  • Refer to Microsoft Azure documentation and ensure that you have completed the following tasks in the Microsoft Azure portal:

    1. Ensure that you have created a new application and noted down the name, application (client) ID, and directory (tenant) ID.

      For information, see Register an application.

    2. Ensure that a new client secret is created and the application secret is noted down.
    3. Ensure that a new Log Analytics workspace is created and its name and workspace ID are noted down.

      For information, see Create a Log Analytics workspace.

    4. Ensure that the Reader role is assigned under Access control, that members are selected, and that the name of the application that you registered in the previous step is provided.

      For more information, see Assign Azure roles using the Azure portal.

    5. Ensure that a new storage account is created and the Access keys are noted down.
    6. In the Monitoring section of the storage account that you created, select a blob. In the Diagnostic settings screen, select only StorageWrite and StorageDelete, and in the destination details, add the Log Analytics workspace that you created earlier.

      For more information, see Diagnostic settings in Azure Monitor.

    7. Ensure that two new containers for object source and object destination are created.
  • Administrator access to OpenShift Web Console.

Procedure

  1. Create a secret with credentials to be used by the namespacestores.

    apiVersion: v1
    kind: Secret
    metadata:
      name: <namespacestore-secret-name>
    type: Opaque
    data:
        TenantID: <AZURE TENANT ID ENCODED IN BASE64>
        ApplicationID: <AZURE APPLICATION ID ENCODED IN BASE64>
        ApplicationSecret: <AZURE APPLICATION SECRET ENCODED IN BASE64>
        LogsAnalyticsWorkspaceID: <AZURE LOG ANALYTICS WORKSPACE ID ENCODED IN BASE64>
        AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64>
        AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>
  2. Create a NamespaceStore backed by a container created in Azure.

    For more information, see Adding a namespace bucket using the OpenShift Container Platform user interface.

  3. Create a new Namespace-Bucketclass and OBC that utilizes it.
  4. Check the object bucket name by looking in the YAML of the target OBC, or by listing all S3 buckets, for example, by running s3 ls.
  5. Use the following template to apply an Azure replication policy on your source OBC by adding the following in its YAML, under .spec:

    replicationPolicy: '{"rules":[ {"rule_id":"<RULE ID>", "sync_deletions": <true or false>, "destination_bucket":"<object bucket name>"} ], "log_replication_info":{"endpoint_type":"AZURE"}}'
    sync_deletions
    Specify a boolean value, true or false.
    destination_bucket
    Make sure to use the name of the object bucket, and not the claim. The name can be retrieved using the s3 ls command, or by looking for the value in an OBC’s YAML.
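
    For example, a filled-in version of the policy; the rule ID and object bucket name are illustrative:

    replicationPolicy: '{"rules":[ {"rule_id":"rule-1", "sync_deletions": true, "destination_bucket":"my-bucket-45e3a1b2"} ], "log_replication_info":{"endpoint_type":"AZURE"}}'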

Verification steps

  1. Write objects to the source bucket.
  2. Wait until MCG replicates them.
  3. Delete the objects from the source bucket.
  4. Verify the objects were removed from the target bucket.
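
If you use the s3 alias mentioned earlier in this chapter (or the AWS CLI pointed at the MCG S3 endpoint), the verification can look like the following; the bucket and object names are illustrative:

    s3 cp ./test-object s3://<source-bucket>/test-object
    s3 ls s3://<destination-bucket>
    s3 rm s3://<source-bucket>/test-object
    s3 ls s3://<destination-bucket>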

8.3.4. Enabling log-based bucket replication deletion

Prerequisites

  • Administrator access to OpenShift Web Console.
  • AWS Server Access Logging configured for the desired bucket.

Procedure

  1. In the OpenShift Web Console, navigate to Storage → Object Storage → Object Bucket Claims.
  2. Click Create ObjectBucketClaim.
  3. (Optional) In the Replication rules section, select the Sync deletion checkbox for each rule separately.
  4. Enter the name of the bucket that will contain the logs under Event log Bucket.

    If the logs are not stored in the root of the bucket, provide the full path without the s3:// prefix.

  5. Enter a prefix to replicate only the objects whose name begins with the given prefix.