Chapter 3. Deployment


As a storage administrator, you can deploy the Ceph Object Gateway using the Ceph Orchestrator with the command line interface or the service specification. You can also configure multi-site Ceph Object Gateways, and remove the Ceph Object Gateway using the Ceph Orchestrator.

The cephadm command deploys the Ceph Object Gateway as a collection of daemons that manages a single-cluster deployment or a particular realm and zone in a multi-site deployment.

Note

With cephadm, the Ceph Object Gateway daemons are configured using the Ceph Monitor configuration database instead of the ceph.conf file or the command line options. If the configuration is not in the client.rgw section, then the Ceph Object Gateway daemons start up with default settings and bind to port 80.
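
For example, you can set and verify a Ceph Object Gateway option in the Ceph Monitor configuration database with the ceph config commands. This is a minimal sketch; the option shown is illustrative:

    [ceph: root@host01 /]# ceph config set client.rgw rgw_enable_usage_log true  # illustrative option
    [ceph: root@host01 /]# ceph config get client.rgw rgw_enable_usage_log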

This section covers the following administrative tasks:

  • Deploying the Ceph Object Gateway using the command line interface
  • Deploying NFS service with Ceph Object Storage backend
  • Deploying the Ceph Object Gateway using the service specification
  • Deploying a multi-site Ceph Object Gateway using the Ceph Orchestrator
  • Removing the Ceph Object Gateway using the Ceph Orchestrator
  • Using the Ceph Manager rgw module

Prerequisites

  • A running and healthy Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • Available nodes on the storage cluster.
  • All the managers, monitors, and OSDs are deployed in the storage cluster.

3.1. Deploying the Ceph Object Gateway using the command line interface

Using the Ceph Orchestrator, you can deploy the Ceph Object Gateway with the ceph orch command in the command line interface.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • Hosts are added to the cluster.
  • All manager, monitor, and OSD daemons are deployed.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. You can deploy the Ceph Object Gateway daemons in three different ways:

Method 1

  • Create realm, zone group, zone, and then use the placement specification with the host name:

    1. Create a realm:

      Syntax

      radosgw-admin realm create --rgw-realm=REALM_NAME --default

      Example

      [ceph: root@host01 /]# radosgw-admin realm create --rgw-realm=test_realm --default

    2. Create a zone group:

      Syntax

      radosgw-admin zonegroup create --rgw-zonegroup=ZONE_GROUP_NAME  --master --default

      Example

      [ceph: root@host01 /]# radosgw-admin zonegroup create --rgw-zonegroup=default  --master --default

    3. Create a zone:

      Syntax

      radosgw-admin zone create --rgw-zonegroup=ZONE_GROUP_NAME --rgw-zone=ZONE_NAME --master --default

      Example

      [ceph: root@host01 /]# radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --master --default

    4. Commit the changes:

      Syntax

      radosgw-admin period update --rgw-realm=REALM_NAME --commit

      Example

      [ceph: root@host01 /]# radosgw-admin period update --rgw-realm=test_realm --commit

    5. Run the ceph orch apply command:

      Syntax

      ceph orch apply rgw NAME [--realm=REALM_NAME] [--zone=ZONE_NAME] [--zonegroup=ZONE_GROUP_NAME] --placement="NUMBER_OF_DAEMONS [HOST_NAME_1 HOST_NAME_2]"

      Example

      [ceph: root@host01 /]# ceph orch apply rgw test --realm=test_realm --zone=test_zone --zonegroup=default --placement="2 host01 host02"

Method 2

  • Use an arbitrary service name to deploy two Ceph Object Gateway daemons for a single cluster deployment:

    Syntax

    ceph orch apply rgw SERVICE_NAME

    Example

    [ceph: root@host01 /]# ceph orch apply rgw foo

Method 3

  • Use an arbitrary service name on a labeled set of hosts:

    Syntax

    ceph orch host label add HOST_NAME_1 LABEL_NAME
    ceph orch host label add HOST_NAME_2 LABEL_NAME
    ceph orch apply rgw SERVICE_NAME --placement="label:LABEL_NAME count-per-host:NUMBER_OF_DAEMONS" --port=8000

    Note

    NUMBER_OF_DAEMONS controls the number of Ceph Object Gateway daemons deployed on each host. To achieve the highest performance without incurring an additional cost, set this value to 2.

    Example

    [ceph: root@host01 /]# ceph orch host label add host01 rgw  # the 'rgw' label can be anything
    [ceph: root@host01 /]# ceph orch host label add host02 rgw
    [ceph: root@host01 /]# ceph orch apply rgw foo --placement="label:rgw count-per-host:2" --port=8000

Verification

  • List the service:

    Example

    [ceph: root@host01 /]# ceph orch ls

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps --daemon_type=DAEMON_NAME

    Example

    [ceph: root@host01 /]# ceph orch ps --daemon_type=rgw

3.2. Deploying NFS service with Ceph Object Storage backend

You can deploy the NFS service in Ceph Object Gateway using the Ceph Orchestrator.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the bootstrapped host.
  • Hosts are added to the cluster.
  • All Manager, Monitor, Ceph Object Gateway, and OSD daemons are deployed.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Create an NFS specification file with the relevant data, including the host on which the NFS service needs to be installed:

    Example

    [root@host01 ~]# cat nfs-conf.yml
    
    service_type: nfs
    service_id: nfs-rgw-service
    placement:
      hosts: ['host1']
    spec:
      port: 2049

  3. Apply the NFS service via the specification file created in step 2:

    Example

    [root@host01 ~]# ceph orch apply -i nfs-conf.yml

  4. Verify that the NFS service was created successfully:

    Example

    [root@host01 ~]# ceph orch ls --service_name nfs.nfs-rgw-service --service_type nfs
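
  5. Optional: Export a Ceph Object Gateway bucket through the NFS service. The following is a sketch of the ceph nfs export create rgw command; the bucket name and pseudo-path are illustrative and assume the bucket already exists:

    Example

    [root@host01 ~]# ceph nfs export create rgw --cluster-id nfs-rgw-service --pseudo-path /mybucket --bucket mybucket  # bucket and pseudo-path are illustrative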

3.3. Deploying the Ceph Object Gateway using the service specification

You can deploy the Ceph Object Gateway using the service specification with either the default or the custom realms, zones, and zone groups.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the bootstrapped host.
  • Hosts are added to the cluster.
  • All manager, monitor, and OSD daemons are deployed.

Procedure

  1. As a root user, create a specification file:

    Example

    [root@host01 ~]# touch radosgw.yml

  2. Set rgw_graceful_stop to true so that, during a Ceph Object Gateway shutdown or restart, S3 requests wait for the duration defined in the rgw_exit_timeout_secs parameter for all outstanding requests to complete.

    Syntax

    ceph config set client.rgw rgw_graceful_stop true
    
    ceph config set client.rgw rgw_exit_timeout_secs 120

    Note

    In containerized deployments, an additional extra_container_args configuration of --stop-timeout=120 (or the value of the rgw_exit_timeout_secs configuration, if not default) is also necessary for the service to work as expected with the ceph orch stop and ceph orch restart commands.

    Example

    [root@host01 ~]# cat rgw_spec.yaml
    service_type: rgw
    service_id: foo
    placement:
      count_per_host: 1
      hosts:
        - rgw_node
    spec:
      rgw_frontend_port: 8081
    extra_container_args:
      - "--stop-timeout=120"
  3. Edit the radosgw.yml file to include the following details for the default realm, zone, and zone group:

    Syntax

    service_type: rgw
    service_id: REALM_NAME.ZONE_NAME
    placement:
      hosts:
      - HOST_NAME_1
      - HOST_NAME_2
      count_per_host: NUMBER_OF_DAEMONS
    spec:
      rgw_realm: REALM_NAME
      rgw_zone: ZONE_NAME
      rgw_zonegroup: ZONE_GROUP_NAME
      rgw_frontend_port: FRONT_END_PORT
    networks:
      -  NETWORK_CIDR # Ceph Object Gateway service binds to a specific network

    Note

    NUMBER_OF_DAEMONS controls the number of Ceph Object Gateways deployed on each host. To achieve the highest performance without incurring an additional cost, set this value to 2.

    Example

    service_type: rgw
    service_id: default
    placement:
      hosts:
      - host01
      - host02
      - host03
      count_per_host: 2
    spec:
      rgw_realm: default
      rgw_zone: default
      rgw_zonegroup: default
      rgw_frontend_port: 1234
    networks:
      - 192.169.142.0/24

  4. Optional: For custom realm, zone, and zone group, create the resources and then create the radosgw.yml file:

    1. Create the custom realm, zone, and zone group:

      Example

      [root@host01 ~]# radosgw-admin realm create --rgw-realm=test_realm --default
      [root@host01 ~]# radosgw-admin zonegroup create --rgw-zonegroup=test_zonegroup --default
      [root@host01 ~]# radosgw-admin zone create --rgw-zonegroup=test_zonegroup --rgw-zone=test_zone --default
      [root@host01 ~]# radosgw-admin period update --rgw-realm=test_realm --commit

    2. Create the radosgw.yml file with the following details:

      Example

      service_type: rgw
      service_id: test_realm.test_zone
      placement:
        hosts:
        - host01
        - host02
        - host03
        count_per_host: 2
      spec:
        rgw_realm: test_realm
        rgw_zone: test_zone
        rgw_zonegroup: test_zonegroup
        rgw_frontend_port: 1234
      networks:
        - 192.169.142.0/24

  5. Mount the radosgw.yml file under a directory in the container:

    Example

    [root@host01 ~]# cephadm shell --mount radosgw.yml:/var/lib/ceph/radosgw/radosgw.yml

    Note

    Every time you exit the shell, you have to mount the file in the container before deploying the daemon.

  6. Deploy the Ceph Object Gateway using the service specification:

    Syntax

    ceph orch apply -i FILE_NAME.yml

    Example

    [ceph: root@host01 /]# ceph orch apply -i /var/lib/ceph/radosgw/radosgw.yml

Verification

  • List the service:

    Example

    [ceph: root@host01 /]# ceph orch ls

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps --daemon_type=DAEMON_NAME

    Example

    [ceph: root@host01 /]# ceph orch ps --daemon_type=rgw

3.4. Deploying a multi-site Ceph Object Gateway using the Ceph Orchestrator

The Ceph Orchestrator supports multi-site configuration options for the Ceph Object Gateway.

You can configure each object gateway to work in an active-active zone configuration, which allows writes to a non-primary zone. The multi-site configuration is stored within a container called a realm.

The realm stores zone groups, zones, and a time period. The rgw daemons handle the synchronization, eliminating the need for a separate synchronization agent, and operate with an active-active configuration.

You can also deploy multi-site zones using the command line interface (CLI).
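
To inspect the hierarchy that a realm stores, you can list each level with radosgw-admin; a minimal sketch:

    [ceph: root@host01 /]# radosgw-admin realm list
    [ceph: root@host01 /]# radosgw-admin zonegroup list
    [ceph: root@host01 /]# radosgw-admin zone list
    [ceph: root@host01 /]# radosgw-admin period get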

Note

The following configuration assumes at least two Red Hat Ceph Storage clusters are in geographically separate locations. However, the configuration also works on the same site.

Prerequisites

  • At least two running Red Hat Ceph Storage clusters.
  • At least two Ceph Object Gateway instances, one for each Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • Nodes or containers are added to the storage cluster.
  • All Ceph Manager, Monitor, and OSD daemons are deployed.

Procedure

  1. In the cephadm shell, configure the primary zone:

    1. Create a realm:

      Syntax

      radosgw-admin realm create --rgw-realm=REALM_NAME --default

      Example

      [ceph: root@host01 /]# radosgw-admin realm create --rgw-realm=test_realm --default

      If the storage cluster has a single realm, then specify the --default flag.

    2. Create a primary zone group:

      Syntax

      radosgw-admin zonegroup create --rgw-zonegroup=ZONE_GROUP_NAME --endpoints=http://RGW_PRIMARY_HOSTNAME:RGW_PRIMARY_PORT_NUMBER_1 --master --default

      Example

      [ceph: root@host01 /]# radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --master --default

    3. Create a primary zone:

      Syntax

      radosgw-admin zone create --rgw-zonegroup=PRIMARY_ZONE_GROUP_NAME --rgw-zone=PRIMARY_ZONE_NAME --endpoints=http://RGW_PRIMARY_HOSTNAME:RGW_PRIMARY_PORT_NUMBER_1 --access-key=SYSTEM_ACCESS_KEY --secret-key=SYSTEM_SECRET_KEY

      Example

      [ceph: root@host01 /]# radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-1 --endpoints=http://rgw1:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ

    4. Optional: Delete the default zone, zone group, and the associated pools.

      Important

      Do not delete the default zone and its pools if you are using the default zone and zone group to store data. Also, removing the default zone group deletes the system user.

      To access old data in the default zone and zonegroup, use --rgw-zone default and --rgw-zonegroup default in radosgw-admin commands.

      Example

      [ceph: root@host01 /]# radosgw-admin zonegroup delete --rgw-zonegroup=default
      [ceph: root@host01 /]# ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
      [ceph: root@host01 /]# ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it
      [ceph: root@host01 /]# ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
      [ceph: root@host01 /]# ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
      [ceph: root@host01 /]# ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it

    5. Create a system user:

      Syntax

      radosgw-admin user create --uid=USER_NAME --display-name="USER_NAME" --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY --system

      Example

      [ceph: root@host01 /]# radosgw-admin user create --uid=zone.user --display-name="Zone user" --system

      Make a note of the access_key and secret_key.
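
      If you need to retrieve the keys later, the following sketch reads them back from the user record, assuming jq is available:

      [ceph: root@host01 /]# radosgw-admin user info --uid=zone.user | jq -r '.keys[0].access_key, .keys[0].secret_key'  # requires jq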

    6. Add the access key and secret key to the primary zone:

      Syntax

      radosgw-admin zone modify --rgw-zone=PRIMARY_ZONE_NAME --access-key=ACCESS_KEY --secret=SECRET_KEY

      Example

      [ceph: root@host01 /]# radosgw-admin zone modify --rgw-zone=us-east-1 --access-key=NE48APYCAODEPLKBCZVQ --secret=u24GHQWRE3yxxNBnFBzjM4jn14mFIckQ4EKL6LoW

    7. Commit the changes:

      Syntax

      radosgw-admin period update --commit

      Example

      [ceph: root@host01 /]# radosgw-admin period update --commit

    8. Outside the Cephadm shell, find the FSID of the storage cluster and the name of the Ceph Object Gateway daemon process:

      Example

      [root@host01 ~]# systemctl list-units | grep ceph

    9. Start the Ceph Object Gateway daemon:

      Syntax

      systemctl start ceph-FSID@DAEMON_NAME
      systemctl enable ceph-FSID@DAEMON_NAME

      Example

      [root@host01 ~]# systemctl start ceph-62a081a6-88aa-11eb-a367-001a4a000672@rgw.test_realm.us-east-1.host01.ahdtsw.service
      [root@host01 ~]# systemctl enable ceph-62a081a6-88aa-11eb-a367-001a4a000672@rgw.test_realm.us-east-1.host01.ahdtsw.service

  2. In the Cephadm shell, configure the secondary zone:

    1. Pull the primary realm configuration from the host:

      Syntax

      radosgw-admin realm pull --rgw-realm=PRIMARY_REALM --url=URL_TO_PRIMARY_ZONE_GATEWAY --access-key=ACCESS_KEY --secret-key=SECRET_KEY --default

      Example

      [ceph: root@host04 /]# radosgw-admin realm pull --rgw-realm=test_realm --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ --default

    2. Pull the primary period configuration from the host:

      Syntax

      radosgw-admin period pull --url=URL_TO_PRIMARY_ZONE_GATEWAY --access-key=ACCESS_KEY --secret-key=SECRET_KEY

      Example

      [ceph: root@host04 /]# radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ

    3. Configure a secondary zone:

      Syntax

      radosgw-admin zone create --rgw-zonegroup=ZONE_GROUP_NAME \
                   --rgw-zone=SECONDARY_ZONE_NAME \
                   --endpoints=http://RGW_SECONDARY_HOSTNAME:RGW_SECONDARY_PORT_NUMBER_1 \
                   --access-key=SYSTEM_ACCESS_KEY --secret-key=SYSTEM_SECRET_KEY \
                   [--read-only]

      Example

      [ceph: root@host04 /]# radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ

    4. Optional: Delete the default zone.

      Important

      Do not delete the default zone and its pools if you are using the default zone and zone group to store data.

      To access old data in the default zone and zonegroup, use --rgw-zone default and --rgw-zonegroup default in radosgw-admin commands.

      Example

      [ceph: root@host04 /]# radosgw-admin zone rm --rgw-zone=default
      [ceph: root@host04 /]# ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
      [ceph: root@host04 /]# ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it
      [ceph: root@host04 /]# ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
      [ceph: root@host04 /]# ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
      [ceph: root@host04 /]# ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it

    5. Update the Ceph configuration database:

      Syntax

      ceph config set SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME

      Example

      [ceph: root@host04 /]# ceph config set rgw rgw_zone us-east-2

    6. Commit the changes:

      Syntax

      radosgw-admin period update --commit

      Example

      [ceph: root@host04 /]# radosgw-admin period update --commit

    7. Outside the Cephadm shell, find the FSID of the storage cluster and the name of the Ceph Object Gateway daemon process:

      Example

      [root@host04 ~]# systemctl list-units | grep ceph

    8. Start the Ceph Object Gateway daemon:

      Syntax

      systemctl start ceph-FSID@DAEMON_NAME
      systemctl enable ceph-FSID@DAEMON_NAME

      Example

      [root@host04 ~]# systemctl start ceph-62a081a6-88aa-11eb-a367-001a4a000672@rgw.test_realm.us-east-2.host04.ahdtsw.service
      [root@host04 ~]# systemctl enable ceph-62a081a6-88aa-11eb-a367-001a4a000672@rgw.test_realm.us-east-2.host04.ahdtsw.service

  3. Optional: Deploy multi-site Ceph Object Gateways using the placement specification:

    Syntax

    ceph orch apply rgw NAME --realm=REALM_NAME --zone=PRIMARY_ZONE_NAME --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2"

    Example

    [ceph: root@host04 /]# ceph orch apply rgw east --realm=test_realm --zone=us-east-1 --placement="2 host01 host02"

Verification

  • Check the synchronization status to verify the deployment:

    Example

    [ceph: root@host04 /]# radosgw-admin sync status

3.5. Removing the Ceph Object Gateway using the Ceph Orchestrator

You can remove the Ceph Object Gateway daemons using the ceph orch rm command.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • Hosts are added to the cluster.
  • At least one Ceph Object Gateway daemon deployed on the hosts.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. List the service:

    Example

    [ceph: root@host01 /]# ceph orch ls

  3. Remove the service:

    Syntax

    ceph orch rm SERVICE_NAME

    Example

    [ceph: root@host01 /]# ceph orch rm rgw.test_realm.test_zone_bb

Verification

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps

    Example

    [ceph: root@host01 /]# ceph orch ps

3.6. Using the Ceph Manager rgw module

As a storage administrator, you can deploy the Ceph Object Gateway, single site and multi-site, using the rgw module. The module helps with bootstrapping and configuring the Ceph Object Gateway realm, zonegroup, and the different related entities.

You can use the available tokens for newly created or existing realms. A token is a base64-encoded string that encapsulates the realm information and its master zone endpoint authentication data.

In a multi-site configuration, you can use these tokens with the ceph rgw zone create command to pull a realm and create a secondary zone on a different cluster that syncs with the master zone on the primary cluster.
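
Because the token is base64-encoded JSON, you can inspect its contents before using it; a minimal sketch, assuming the token is stored in the TOKEN shell variable:

    [ceph: root@host01 /]# echo "$TOKEN" | base64 -d   # TOKEN holds a token from 'ceph rgw realm tokens'
    {
        "realm_name": "myrealm",
        "realm_id": "d07c00ef-9041-4f6e-8804-7d40240556ae",
        "endpoint": "http://vm-00:4321",
        "access_key": "9565TVR1QVLLEG7U4R1D",
        "secret": "d7oAIAvk4GXyzrwt6AVzlFMBcgDnwEWL0qCzq7r5"
    }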

3.6.1. Deploying Ceph Object Gateway using the rgw module

Bootstrapping a Ceph Object Gateway realm creates a new realm entity, a new zonegroup, and a new zone. The rgw module instructs the orchestrator to create and deploy the corresponding Ceph Object Gateway daemons.

Enable the rgw module using the ceph mgr module enable rgw command. After enabling the rgw module, either pass the arguments in the command line or use the yaml specification file to bootstrap the realm.

Prerequisites

  • A running Red Hat Ceph Storage cluster with at least one OSD deployed.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Enable the rgw module:

    Example

    [ceph: root@host01 /]# ceph mgr module enable rgw

  3. Bootstrap the Ceph Object Gateway realm using either the command-line or the yaml specification file:

    • Option 1: Use the command-line interface:

      Syntax

      ceph rgw realm bootstrap [--realm-name REALM_NAME] [--zonegroup-name ZONEGROUP_NAME] [--zone-name ZONE_NAME] [--port PORT_NUMBER] [--placement HOSTNAME] [--start-radosgw]

      Example

      [ceph: root@host01 /]# ceph rgw realm bootstrap --realm-name myrealm --zonegroup-name myzonegroup --zone-name myzone --port 5500 --placement="host01 host02" --start-radosgw
      Realm(s) created correctly. Please, use 'ceph rgw realm tokens' to get the token.

    • Option 2: Use the yaml specification file:

      1. As a root user, create the yaml file:

        Syntax

        rgw_realm: REALM_NAME
        rgw_zonegroup: ZONEGROUP_NAME
        rgw_zone: ZONE_NAME
        placement:
          hosts:
           - HOSTNAME_1
           - HOSTNAME_2

        Example

        [root@host01 ~]# cat rgw.yaml
        
        rgw_realm: myrealm
        rgw_zonegroup: myzonegroup
        rgw_zone: myzone
        placement:
          hosts:
           - host01
           - host02

      2. Optional: You can add the zonegroup_hostnames parameter to the zonegroup during realm bootstrap:

        Syntax

        service_type: rgw
        placement:
          hosts:
          - HOST_NAME_1
          - HOST_NAME_2
        spec:
          rgw_realm: REALM_NAME
          rgw_zonegroup: ZONEGROUP_NAME
          rgw_zone: ZONE_NAME
          zonegroup_hostnames:
          - ZONEGROUP_HOSTNAME_1
          - ZONEGROUP_HOSTNAME_2

        Example

        service_type: rgw
        placement:
          hosts:
          - host01
          - host02
        spec:
          rgw_realm: my_realm
          rgw_zonegroup: my_zonegroup
          rgw_zone: my_zone
          zonegroup_hostnames:
          - foo
          - bar

      3. Mount the YAML file under a directory in the container:

        Example

        [root@host01 ~]# cephadm shell --mount rgw.yaml:/var/lib/ceph/rgw/rgw.yaml

      4. Bootstrap the realm:

        Example

        [ceph: root@host01 /]# ceph rgw realm bootstrap -i /var/lib/ceph/rgw/rgw.yaml

        Note

        The specification file used by the rgw module has the same format as the one used by the orchestrator. Thus, you can provide any orchestration supported Ceph Object Gateway parameters including advanced configuration features such as SSL certificates.

  4. List the available tokens:

    Example

    [ceph: root@host01 /]# ceph rgw realm tokens | jq
    
    [
      {
        "realm": "myrealm",
        "token": "ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbSIsCiAgICAicmVhbG1faWQiOiAiZDA3YzAwZWYtOTA0MS00ZjZlLTg4MDQtN2Q0MDI0MDU1NmFlIiwKICAgICJlbmRwb2ludCI6ICJodHRwOi8vdm0tMDA6NDMyMSIsCiAgICAiYWNjZXNzX2tleSI6ICI5NTY1VFZSMVFWTExFRzdVNFIxRCIsCiAgICAic2VjcmV0IjogImQ3b0FJQXZrNEdYeXpyd3Q2QVZ6bEZNQmNnRG53RVdMMHFDenE3cjUiCn1="
      }
    ]

    Note

    If you run the above command before the Ceph Object Gateway daemons get deployed, it displays a message that there are no tokens as there are no endpoints yet.

Verification

  • Verify Object Gateway deployment:

    Example

    [ceph: root@host01 /]# ceph orch list --daemon-type=rgw
    NAME                                                                HOST                                    PORTS  STATUS          REFRESHED   AGE  MEM USE  MEM LIM  VERSION          IMAGE ID      CONTAINER ID
    rgw.myrealm.myzonegroup.ceph-saya-6-osd-host01.eburst  ceph-saya-6-osd-host01  *:80   running (111m)     9m ago  111m    82.3M        -  17.2.6-22.el9cp  2d5b080de0b0  2f3eaca7e88e

  • Verify the hostnames added via realm bootstrap:

    Syntax

    radosgw-admin zonegroup get --rgw-zonegroup ZONE_GROUP_NAME

    Example

    [ceph: root@host01 /]# radosgw-admin zonegroup get --rgw-zonegroup my_zonegroup
    
    {
        "id": "02a175e2-7f23-4882-8651-6fbb15d25046",
        "name": "my_zonegroup_ck",
        "api_name": "my_zonegroup_ck",
        "is_master": true,
        "endpoints": [
            "http://vm-00:80"
        ],
        "hostnames": [
            "foo"
            "bar"
        ],
        "hostnames_s3website": [],
        "master_zone": "f42fea84-a89e-4995-996e-61b7223fb0b0",
        "zones": [
            {
                "id": "f42fea84-a89e-4995-996e-61b7223fb0b0",
                "name": "my_zone_ck",
                "endpoints": [
                    "http://vm-00:80"
                ],
                "log_meta": false,
                "log_data": false,
                "bucket_index_max_shards": 11,
                "read_only": false,
                "tier_type": "",
                "sync_from_all": true,
                "sync_from": [],
                "redirect_zone": "",
                "supported_features": [
                    "compress-encrypted",
                    "resharding"
                ]
            }
        ],
        "placement_targets": [
            {
                "name": "default-placement",
                "tags": [],
                "storage_classes": [
                    "STANDARD"
                ]
            }
        ],
        "default_placement": "default-placement",
        "realm_id": "439e9c37-4ddc-43a3-99e9-ea1f3825bb51",
        "sync_policy": {
            "groups": []
        },
        "enabled_features": [
            "resharding"
        ]
    }

    See the hostnames section of the zonegroup for the list of host names specified in zonegroup_hostnames in the Ceph Object Gateway specification file.

3.6.2. Deploying Ceph Object Gateway multi-site using the rgw module

Bootstrapping a Ceph Object Gateway realm creates a new realm entity, a new zonegroup, and a new zone. It configures a new system user that can be used for multi-site sync operations. The rgw module instructs the orchestrator to create and deploy the corresponding Ceph Object Gateway daemons.

Enable the rgw module using the ceph mgr module enable rgw command. After enabling the rgw module, either pass the arguments in the command line or use the yaml specification file to bootstrap the realm.

Prerequisites

  • A running Red Hat Ceph Storage cluster with at least one OSD deployed.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Enable the rgw module:

    Example

    [ceph: root@host01 /]# ceph mgr module enable rgw

  3. Bootstrap the Ceph Object Gateway realm using either the command-line or the yaml specification file:

    • Option 1: Use the command-line interface:

      Syntax

      ceph rgw realm bootstrap [--realm-name REALM_NAME] [--zonegroup-name ZONEGROUP_NAME] [--zone-name ZONE_NAME] [--port PORT_NUMBER] [--placement HOSTNAME] [--start-radosgw]

      Example

      [ceph: root@host01 /]# ceph rgw realm bootstrap --realm-name myrealm --zonegroup-name myzonegroup --zone-name myzone --port 5500 --placement="host01 host02" --start-radosgw
      Realm(s) created correctly. Please, use 'ceph rgw realm tokens' to get the token.

    • Option 2: Use the yaml specification file:

      1. As a root user, create the yaml file:

        Syntax

        rgw_realm: REALM_NAME
        rgw_zonegroup: ZONEGROUP_NAME
        rgw_zone: ZONE_NAME
        placement:
          hosts:
           - HOSTNAME_1
           - HOSTNAME_2
        spec:
          rgw_frontend_port: PORT_NUMBER
        zone_endpoints: http://RGW_HOSTNAME_1:RGW_PORT_NUMBER_1, http://RGW_HOSTNAME_2:RGW_PORT_NUMBER_2

        Example

        [root@host01 ~]# cat rgw.yaml
        
        rgw_realm: myrealm
        rgw_zonegroup: myzonegroup
        rgw_zone: myzone
        placement:
          hosts:
           - host01
           - host02
        spec:
          rgw_frontend_port: 5500
        zone_endpoints: http://<rgw_host1>:<rgw_port1>, http://<rgw_host2>:<rgw_port2>

      2. Mount the YAML file under a directory in the container:

        Example

        [root@host01 ~]# cephadm shell --mount rgw.yaml:/var/lib/ceph/rgw/rgw.yaml

      3. Bootstrap the realm:

        Example

        [ceph: root@host01 /]# ceph rgw realm bootstrap -i /var/lib/ceph/rgw/rgw.yaml

        Note

        The specification file used by the rgw module has the same format as the one used by the orchestrator. Thus, you can provide any orchestration supported Ceph Object Gateway parameters including advanced configuration features such as SSL certificates.

  4. List the available tokens:

    Example

    [ceph: root@host01 /]# ceph rgw realm tokens | jq
    
    [
      {
        "realm": "myrealm",
        "token": "ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbSIsCiAgICAicmVhbG1faWQiOiAiZDA3YzAwZWYtOTA0MS00ZjZlLTg4MDQtN2Q0MDI0MDU1NmFlIiwKICAgICJlbmRwb2ludCI6ICJodHRwOi8vdm0tMDA6NDMyMSIsCiAgICAiYWNjZXNzX2tleSI6ICI5NTY1VFZSMVFWTExFRzdVNFIxRCIsCiAgICAic2VjcmV0IjogImQ3b0FJQXZrNEdYeXpyd3Q2QVZ6bEZNQmNnRG53RVdMMHFDenE3cjUiCn1="
      }
    ]

    Note

    If you run the above command before the Ceph Object Gateway daemons get deployed, it displays a message that there are no tokens as there are no endpoints yet.

  5. Create the secondary zone by using the token to join the existing realm:

    1. As a root user, create the yaml file:

      Example

      [root@host01 ~]# cat zone-spec.yaml
      rgw_zone: my-secondary-zone
      rgw_realm_token: <token>
      placement:
        hosts:
         - ceph-node-1
         - ceph-node-2
      spec:
        rgw_frontend_port: 5500

    2. Mount the zone-spec.yaml file under a directory in the container:

      Example

      [root@host01 ~]# cephadm shell --mount zone-spec.yaml:/var/lib/ceph/radosgw/zone-spec.yaml

    3. Enable the rgw module on the secondary zone:

      Example

      [ceph: root@host01 /]# ceph mgr module enable rgw

    4. Create the secondary zone:

      Example

      [ceph: root@host01 /]# ceph rgw zone create -i /var/lib/ceph/radosgw/zone-spec.yaml

Verification

  • Verify Object Gateway multi-site deployment:

    Example

    [ceph: root@host01 /]# radosgw-admin realm list
    {
        "default_info": "d07c00ef-9041-4f6e-8804-7d40240556ae",
        "realms": [
            "myrealm"
        ]
    }
