Chapter 3. Administration
Administrators can manage the Ceph Object Gateway using the radosgw-admin command-line interface.
3.1. Administrative Data Storage
A Ceph Object Gateway stores administrative data in a series of pools defined in an instance’s zone configuration. For example, the buckets, users, user quotas and usage statistics discussed in the subsequent sections are stored in pools in the Ceph Storage Cluster. By default, Ceph Object Gateway will create the following pools and map them to the default zone.
- .rgw
- .rgw.control
- .rgw.gc
- .log
- .intent-log
- .usage
- .users
- .users.email
- .users.swift
- .users.uid
You should consider creating these pools manually so that you can set the CRUSH ruleset and the number of placement groups. In a typical configuration, the pools that store the Ceph Object Gateway’s administrative data will often use the same CRUSH ruleset and use fewer placement groups, because there are 10 pools for the administrative data. See Pools and the Storage Strategies guide for Red Hat Ceph Storage 3 for additional details.
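For instance, a minimal sketch of creating two of these pools manually; the CRUSH rule name rgw-service and the placement group counts are assumptions to adapt to your cluster:

# ceph osd pool create .rgw.control 8 8 replicated rgw-service
# ceph osd pool create .usage 8 8 replicated rgw-service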
Also see Ceph Placement Groups (PGs) per Pool Calculator for placement group calculation details. The mon_pg_warn_max_per_osd setting warns you if you assign too many placement groups to an OSD (300 by default). You may adjust the value to suit your needs and the capabilities of your hardware, where n is the maximum number of PGs per OSD:
mon_pg_warn_max_per_osd = n
3.2. Creating Storage Policies
The Ceph Object Gateway stores the client bucket and object data by identifying placement targets, and storing buckets and objects in the pools associated with a placement target. If you don’t configure placement targets and map them to pools in the instance’s zone configuration, the Ceph Object Gateway will use default targets and pools, for example, default_placement.
Storage policies give Ceph Object Gateway clients a way of accessing a storage strategy, that is, the ability to target a particular type of storage, for example, SSDs, SAS drives, or SATA drives, and a particular way of ensuring durability, such as replication or erasure coding. For details, see the Storage Strategies guide for Red Hat Ceph Storage 3.
To create a storage policy, use the following procedure:
Create a new pool .rgw.buckets.special with the desired storage strategy. For example, a pool customized with erasure-coding, a particular CRUSH ruleset, the number of replicas, and the pg_num and pgp_num count.
Get the zone group configuration and store it in a file, for example, zonegroup.json:
Syntax
[root@master-zone]# radosgw-admin zonegroup --rgw-zonegroup=<zonegroup_name> get > zonegroup.json
Example
[root@master-zone]# radosgw-admin zonegroup --rgw-zonegroup=default get > zonegroup.json
Add a special-placement entry under placement_target in the zonegroup.json file, as sketched after this procedure.
Set the zone group with the modified zonegroup.json file:
[root@master-zone]# radosgw-admin zonegroup set < zonegroup.json
Get the zone configuration and store it in a file, for example, zone.json:
[root@master-zone]# radosgw-admin zone get > zone.json
Edit the zone file and add the new placement policy key under placement_pool, as sketched after this procedure.
Set the new zone configuration:
[root@master-zone]# radosgw-admin zone set < zone.json
Update the zone group map:
[root@master-zone]# radosgw-admin period update --commit
The special-placement entry is listed as a placement_target.
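As a sketch, assuming a data pool named .rgw.buckets.special, the two edits might look like the following. In zonegroup.json, add the new target under placement_targets:

"placement_targets": [
    {
        "name": "default-placement",
        "tags": []
    },
    {
        "name": "special-placement",
        "tags": []
    }
],
"default_placement": "default-placement"

In zone.json, add the matching key under placement_pools:

"placement_pools": [
    {
        "key": "default-placement",
        "val": {
            "index_pool": ".rgw.buckets.index",
            "data_pool": ".rgw.buckets",
            "data_extra_pool": ".rgw.buckets.non-ec"
        }
    },
    {
        "key": "special-placement",
        "val": {
            "index_pool": ".rgw.buckets.index",
            "data_pool": ".rgw.buckets.special",
            "data_extra_pool": ".rgw.buckets.extra"
        }
    }
]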
To specify the storage policy when making a request:
Example:
$ curl -i http://10.0.0.1/swift/v1/TestContainer/file.txt -X PUT -H "X-Storage-Policy: special-placement" -H "X-Auth-Token: AUTH_rgwtxxxxxx"
3.3. Creating Indexless Buckets
It is possible to configure a placement target where created buckets do not use the bucket index to store an index of objects; that is, indexless buckets. Placement targets that do not use data replication or listing may implement indexless buckets.
Indexless buckets provide a mechanism in which the placement target does not track objects in specific buckets. This removes the resource contention that happens whenever an object write happens and reduces the number of round trips that the Ceph Object Gateway needs to make to the Ceph Storage cluster. This can have a positive effect on concurrent operations and small object write performance.
To specify a placement target as indexless, use the following procedure:
Get the configuration for zone.json:
$ radosgw-admin zone get --rgw-zone=<zone> > zone.json
Modify zone.json by adding a new placement target or by modifying an existing one to have "index_type": 1, as sketched after this procedure.
Set the configuration for zone.json:
$ radosgw-admin zone set --rgw-zone=<zone> --infile zone.json
Make sure the zonegroup refers to the new placement target if you created a new placement target:
$ radosgw-admin zonegroup get --rgw-zonegroup=<zonegroup> > zonegroup.json
Set the zonegroup's default_placement:
$ radosgw-admin zonegroup placement default --placement-id indexless
Modify the zonegroup.json as needed, then set the zone group:
$ radosgw-admin zonegroup set --rgw-zonegroup=<zonegroup> < zonegroup.json
Update and commit the period if the cluster is in a multi-site configuration:
$ radosgw-admin period update --commit
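As a sketch, an indexless placement target entry under placement_pools in zone.json might look like the following, assuming a placement ID of indexless and default pool names:

"placement_pools": [
    {
        "key": "indexless",
        "val": {
            "index_pool": ".rgw.buckets.index",
            "data_pool": ".rgw.buckets",
            "data_extra_pool": ".rgw.buckets.non-ec",
            "index_type": 1
        }
    }
]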
In this example, the buckets created in the "indexless" target will be indexless buckets.
The bucket index will not reflect the correct state of the bucket, and listing these buckets will not correctly return their list of objects. This affects multiple features. Specifically, these buckets will not be synced in a multi-zone environment because the bucket index is not used to store change information. It is not recommended to use S3 object versioning on indexless buckets because the bucket index is necessary for this feature.
Using indexless buckets removes the limit of the max number of objects in a single bucket.
Objects in indexless buckets cannot be listed from NFS.
3.4. Configuring Bucket Sharding
The Ceph Object Gateway stores bucket index data in the index pool (index_pool), which defaults to .rgw.buckets.index. When the client puts many objects—hundreds of thousands to millions of objects—in a single bucket without having set quotas for the maximum number of objects per bucket, the index pool can suffer significant performance degradation.
Bucket index sharding helps prevent performance bottlenecks when allowing a high number of objects per bucket.
You can configure bucket index sharding for new buckets or change the bucket index on already existing ones.
To configure bucket index sharding:
- For new buckets in simple configurations, use the rgw_override_bucket_index_max_shards option. See Section 3.4.2, “Configuring Bucket Index Sharding in Simple Configurations”
- For new buckets in multi-site configurations, use the bucket_index_max_shards option. See Section 3.4.3, “Configuring Bucket Index Sharding in Multisite Configurations”
To reshard a bucket:
- Dynamically, see Section 3.4.4, “Dynamic Bucket Index Resharding”
- Manually, see Section 3.4.5, “Manual Bucket Index Resharding”
- In multi-site configurations, see Manually Resharding Buckets with Multi-site
3.4.1. Bucket Sharding Limitations
Use the following limitations with caution. There are implications related to your hardware selections, so you should always discuss these requirements with your Red Hat account team.
- Maximum number of objects in one bucket before it needs sharding: Red Hat recommends a maximum of 102,400 objects per bucket index shard. To take full advantage of sharding, provide a sufficient number of OSDs in the Ceph Object Gateway bucket index pool to get maximum parallelism.
- Maximum number of objects when using sharding: Based on prior testing, the number of bucket index shards currently supported is 65521. Red Hat quality assurance has NOT performed full scalability testing on bucket sharding.
3.4.2. Configuring Bucket Index Sharding in Simple Configurations
To enable and configure bucket index sharding on all new buckets, use the rgw_override_bucket_index_max_shards parameter. Set the parameter to:
- 0 to disable bucket index sharding. This is the default value.
- A value greater than 0 to enable bucket sharding and to set the maximum number of shards.
Prerequisites
- Read the bucket sharding limitations.
Procedure
Calculate the recommended number of shards. To do so, use the following formula:
number of objects expected in a bucket / 100,000
Note that the maximum number of shards is 65521.
Add rgw_override_bucket_index_max_shards to the Ceph configuration file:
rgw_override_bucket_index_max_shards = value
Replace value with the recommended number of shards calculated in the previous step, for example:
rgw_override_bucket_index_max_shards = 10
- To configure bucket index sharding for all instances of the Ceph Object Gateway, add rgw_override_bucket_index_max_shards under the [global] section.
- To configure bucket index sharding only for a particular instance of the Ceph Object Gateway, add rgw_override_bucket_index_max_shards under the instance. See the sketch after this list.
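For example, a ceph.conf sketch; the instance section name client.rgw.rgw1 is an assumption and should match the name of your gateway instance:

# Applies to all Ceph Object Gateway instances:
[global]
rgw_override_bucket_index_max_shards = 10

# Or applies only to one instance:
[client.rgw.rgw1]
rgw_override_bucket_index_max_shards = 10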
Restart the Ceph Object Gateway:
# systemctl restart ceph-radosgw.target
Additional resources
3.4.3. Configuring Bucket Index Sharding in Multisite Configurations
In multisite configurations, each zone can have a different index_pool setting to manage failover. To configure a consistent shard count for zones in one zone group, set the bucket_index_max_shards setting in the configuration for that zone group. Set the parameter to:
- 0 to disable bucket index sharding. This is the default value.
- A value greater than 0 to enable bucket sharding and to set the maximum number of shards.
Mapping the index pool (for each zone, if applicable) to a CRUSH ruleset of SSD-based OSDs might also help with bucket index performance.
Prerequisites
- Read the bucket sharding limitations.
Procedure
Calculate the recommended number of shards. To do so, use the following formula:
number of objects expected in a bucket / 100,000
Note that the maximum number of shards is 65521.
Extract the zone group configuration to the zonegroup.json file:
$ radosgw-admin zonegroup get > zonegroup.json
In the zonegroup.json file, set the bucket_index_max_shards setting for each named zone (see the sketch after this procedure):
bucket_index_max_shards = value
Replace value with the recommended number of shards calculated in the previous step, for example:
bucket_index_max_shards = 10
Reset the zone group:
$ radosgw-admin zonegroup set < zonegroup.json
Update the period:
$ radosgw-admin period update --commit
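As a sketch of where the setting lives, the zones array in zonegroup.json might look like the following after editing; the zone names are placeholders and the other per-zone fields are omitted here:

"zones": [
    {
        "name": "us-east",
        "bucket_index_max_shards": 10
    },
    {
        "name": "us-west",
        "bucket_index_max_shards": 10
    }
]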
Additional resources
3.4.4. Dynamic Bucket Index Resharding
The process for dynamic bucket resharding periodically checks all the Ceph Object Gateway buckets and detects buckets that require resharding. If a bucket has grown larger than the value specified in the rgw_max_objs_per_shard parameter, the Ceph Object Gateway reshards the bucket dynamically in the background. The default value for rgw_max_objs_per_shard is 100k objects per shard.
Currently, Red Hat does not support dynamic bucket resharding in multi-site configurations. To reshard a bucket index in such a configuration, see Manually Resharding Buckets with Multi-site.
Prerequisites
- Read the bucket sharding limitations.
Procedure
To enable dynamic bucket index resharding
- Set the rgw_dynamic_resharding setting in the Ceph configuration file to true, which is the default value.
- Optional. Change the following parameters in the Ceph configuration file if needed (see the sketch after this list):
  - rgw_reshard_num_logs: The number of shards for the resharding log. The default value is 16.
  - rgw_reshard_bucket_lock_duration: The duration of the lock on a bucket during resharding. The default value is 120 seconds.
  - rgw_dynamic_resharding: Enables or disables dynamic resharding. The default value is true.
  - rgw_max_objs_per_shard: The maximum number of objects per shard. The default value is 100000 objects per shard.
  - rgw_reshard_thread_interval: The maximum time between rounds of reshard thread processing. The default value is 600 seconds.
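For example, a ceph.conf sketch that restates the defaults listed above; placing the options under [global] is an assumption, and they can also be set under a specific gateway instance section:

[global]
rgw_dynamic_resharding = true
rgw_max_objs_per_shard = 100000
rgw_reshard_num_logs = 16
rgw_reshard_bucket_lock_duration = 120
rgw_reshard_thread_interval = 600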
To add a bucket to the resharding queue:
radosgw-admin reshard add --bucket BUCKET_NAME --num-shards NUMBER
Replace:
- BUCKET_NAME with the name of the bucket to reshard.
- NUMBER with the new number of shards.
Example:
$ radosgw-admin reshard add --bucket data --num-shards 10
To list the resharding queue:
$ radosgw-admin reshard list
To check bucket resharding status:
radosgw-admin reshard status --bucket BUCKET_NAME
Replace:
- BUCKET_NAME with the name of the bucket to reshard
Example:
$ radosgw-admin reshard status --bucket data
Note: The radosgw-admin reshard status command will display one of the following status identifiers:
- not-resharding
- in-progress
- done
To process entries on the resharding queue immediately:
$ radosgw-admin reshard process
To cancel pending bucket resharding:
radosgw-admin reshard cancel --bucket BUCKET_NAME
Replace:
- BUCKET_NAME with the name of the pending bucket.
Example:
$ radosgw-admin reshard cancel --bucket data
Important: You can only cancel pending resharding operations. Do not cancel ongoing resharding operations.
- If you use Red Hat Ceph Storage 3.1 and previous versions, remove stale bucket entries as described in the Cleaning stale instances after resharding section.
Additional resources
3.4.5. Manual Bucket Index Resharding
If a bucket has grown larger than the initial configuration was optimized for, reshard the bucket index pool by using the radosgw-admin bucket reshard command. This command:
- Creates a new set of bucket index objects for the specified bucket.
- Distributes object entries across these bucket index objects.
- Creates a new bucket instance.
- Links the new bucket instance with the bucket so that all new index operations go through the new bucket indexes.
- Prints the old and the new bucket ID to the command output.
Use this procedure only in simple configurations. To reshard buckets in multi-site configurations, see Manually Resharding Buckets with Multi-site.
Prerequisites
- Read the bucket sharding limitations.
Procedure
Back up the original bucket index:
radosgw-admin bi list --bucket=BUCKET > BUCKET.list.backup
Replace:
- BUCKET with the name of the bucket to reshard
For example, for a bucket named data, enter:
$ radosgw-admin bi list --bucket=data > data.list.backup
Reshard the bucket index:
radosgw-admin bucket reshard --bucket=BUCKET --num-shards=NUMBER
Replace:
- BUCKET with the name of the bucket to reshard
- NUMBER with the new number of shards
For example, for a bucket named data and the required number of shards being 100, enter:
$ radosgw-admin bucket reshard --bucket=data --num-shards=100
- If you use Red Hat Ceph Storage 3.1 and previous versions, remove stale bucket entries as described in the Cleaning stale instances after resharding section.
3.4.6. Cleaning stale instances after resharding
In Red Hat Ceph Storage 3.1 and previous versions, the resharding process does not clean stale instances of bucket entries automatically. These stale instances can impact performance of the cluster if they are not cleaned manually.
Use this procedure only in simple configurations, not in multi-site clusters.
Prerequisites
- Ceph Object Gateway installed.
Procedure
List stale instances:
$ radosgw-admin reshard stale-instances list
Clean the stale instances:
$ radosgw-admin reshard stale-instances rm
3.5. Enabling Compression
The Ceph Object Gateway supports server-side compression of uploaded objects using any of Ceph’s compression plugins. These include:
- zlib: Supported.
- snappy: Technology Preview.
- zstd: Technology Preview.
The snappy and zstd compression plugins are Technology Preview features and as such they are not fully supported, as Red Hat has not completed quality assurance testing on them yet.
Configuration
To enable compression on a zone’s placement target, provide the --compression=<type> option to the radosgw-admin zone placement modify command. The compression type refers to the name of the compression plugin to use when writing new object data.
Each compressed object stores the compression type. Changing the setting does not hinder the ability to decompress existing compressed objects, nor does it force the Ceph Object Gateway to recompress existing objects.
This compression setting applies to all new objects uploaded to buckets using this placement target.
To disable compression on a zone’s placement target, provide the --compression=<type> option to the radosgw-admin zone placement modify command and specify an empty string or none.
For example:
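A sketch, assuming the default zone and the default-placement target:

[root@master-zone]# radosgw-admin zone placement modify --rgw-zone=default --placement-id=default-placement --compression=zlib

To disable it again, set the compression type to an empty string or none:

[root@master-zone]# radosgw-admin zone placement modify --rgw-zone=default --placement-id=default-placement --compression=none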
After enabling or disabling compression, restart the Ceph Object Gateway instance so the change will take effect.
Ceph Object Gateway creates a default zone and a set of pools. For production deployments, see the Ceph Object Gateway for Production guide, more specifically, the Creating a Realm section first. See also Multisite.
Statistics
While all existing commands and APIs continue to report object and bucket sizes based on their uncompressed data, the radosgw-admin bucket stats command includes compression statistics for a given bucket.
The size_utilized and size_kb_utilized fields represent the total size of compressed data in bytes and kilobytes respectively.
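As a sketch, the compression-related part of the radosgw-admin bucket stats output looks like the following; the numbers are illustrative only:

"usage": {
    "rgw.main": {
        "size": 1075028,
        "size_actual": 1331200,
        "size_utilized": 592035,
        "size_kb": 1050,
        "size_kb_actual": 1300,
        "size_kb_utilized": 579,
        "num_objects": 104
    }
},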
3.6. User Management
Ceph Object Storage user management refers to users that are client applications of the Ceph Object Storage service; not the Ceph Object Gateway as a client application of the Ceph Storage Cluster. You must create a user, access key and secret to enable client applications to interact with the Ceph Object Gateway service.
There are two user types:
- User: The term 'user' reflects a user of the S3 interface.
- Subuser: The term 'subuser' reflects a user of the Swift interface. A subuser is associated to a user.
You can create, modify, view, suspend and remove users and subusers.
When managing users in a multi-site deployment, ALWAYS execute the radosgw-admin command on a Ceph Object Gateway node within the master zone of the master zone group to ensure that users synchronize throughout the multi-site cluster. DO NOT create, modify or delete users on a multi-site cluster from a secondary zone or a secondary zone group. This document uses [root@master-zone]# as a command line convention for a host in the master zone of the master zone group.
In addition to creating user and subuser IDs, you may add a display name and an email address for a user. You can specify a key and secret, or generate a key and secret automatically. When generating or specifying keys, note that user IDs correspond to an S3 key type and subuser IDs correspond to a swift key type. Swift keys also have access levels of read, write, readwrite and full.
User management command-line syntax generally follows the pattern user <command> <user-id> where <user-id> is either the --uid= option followed by the user’s ID (S3) or the --subuser= option followed by the user name (Swift). For example:
[root@master-zone]# radosgw-admin user <create|modify|info|rm|suspend|enable|check|stats> <--uid={id}|--subuser={name}> [other-options]
Additional options may be required depending on the command you execute.
3.6.1. Multi Tenancy
In Red Hat Ceph Storage 2 and later, the Ceph Object Gateway supports multi-tenancy for both the S3 and Swift APIs, where each user and bucket lies under a "tenant." Multi tenancy prevents namespace clashing when multiple tenants are using common bucket names, such as "test", "main" and so forth.
Each user and bucket lies under a tenant. For backward compatibility, a "legacy" tenant with an empty name is added. Whenever referring to a bucket without specifically specifying a tenant, the Swift API will assume the "legacy" tenant. Existing users are also stored under the legacy tenant, so they will access buckets and objects the same way as earlier releases.
Tenants as such do not have any operations on them. They appear and disappear as needed, when users are administered. In order to create, modify, and remove users with explicit tenants, either an additional option --tenant is supplied, or a syntax "<tenant>$<user>" is used in the parameters of the radosgw-admin command.
To create a user testx$tester for S3, execute the following:
[root@master-zone]# radosgw-admin --tenant testx --uid tester \
    --display-name "Test User" --access_key TESTER \
    --secret test123 user create
To create a user testx$tester for Swift, execute one of the following:
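As a sketch based on the syntax described above, either create the user and Swift subuser in one step, or add a Swift key to an existing subuser; the subuser name tester:test and the secret value are assumptions:

[root@master-zone]# radosgw-admin --tenant testx --uid tester \
    --display-name "Test User" --subuser tester:test \
    --key-type swift --access full subuser create

[root@master-zone]# radosgw-admin key create --subuser 'testx$tester:test' \
    --key-type swift --secret test123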
The subuser with explicit tenant had to be quoted in the shell.
3.6.2. Create a User
Use the user create command to create an S3-interface user. You MUST specify a user ID and a display name. You may also specify an email address. If you DO NOT specify a key or secret, radosgw-admin will generate them for you automatically. However, you may specify a key and/or a secret if you prefer not to use generated key/secret pairs.
[root@master-zone]# radosgw-admin user create --uid=<id> \
    [--key-type=<type>] [--gen-access-key|--access-key=<key>] \
    [--gen-secret | --secret=<key>] \
    [--email=<email>] --display-name=<name>
For example:
[root@master-zone]# radosgw-admin user create --uid=janedoe --display-name="Jane Doe" --email=jane@example.com
Check the key output. Sometimes radosgw-admin generates a JSON escape (\) character, and some clients do not know how to handle JSON escape characters. Remedies include removing the JSON escape character (\), encapsulating the string in quotes, regenerating the key and ensuring that it does not have a JSON escape character, or specifying the key and secret manually.
3.6.3. Create a Subuser
To create a subuser (Swift interface), you must specify the user ID (--uid={username}), a subuser ID and the access level for the subuser. If you DO NOT specify a key or secret, radosgw-admin will generate them for you automatically. However, you may specify a key and/or a secret if you prefer not to use generated key/secret pairs.
full is not readwrite, as it also includes the access control policy.
[root@master-zone]# radosgw-admin subuser create --uid={uid} --subuser={uid} --access=[ read | write | readwrite | full ]
For example:
[root@master-zone]# radosgw-admin subuser create --uid=janedoe --subuser=janedoe:swift --access=full
3.6.4. Get User Information
To get information about a user, you must specify user info and the user ID (--uid={username}).
# radosgw-admin user info --uid=janedoe
3.6.5. Modify User Information
To modify information about a user, you must specify the user ID (--uid={username}) and the attributes you want to modify. Typical modifications are to keys and secrets, email addresses, display names and access levels. For example:
[root@master-zone]# radosgw-admin user modify --uid=janedoe --display-name="Jane E. Doe"
To modify subuser values, specify subuser modify and the subuser ID. For example:
[root@master-zone]# radosgw-admin subuser modify --subuser=janedoe:swift --access=full
3.6.6. Enable and Suspend Users
When you create a user, the user is enabled by default. However, you may suspend user privileges and re-enable them at a later time. To suspend a user, specify user suspend and the user ID.
[root@master-zone]# radosgw-admin user suspend --uid=johndoe
To re-enable a suspended user, specify user enable and the user ID:
[root@master-zone]# radosgw-admin user enable --uid=johndoe
Disabling the user disables the subuser.
3.6.7. Remove a User
When you remove a user, the user and subuser are removed from the system. However, you may remove just the subuser if you wish. To remove a user (and subuser), specify user rm and the user ID.
[root@master-zone]# radosgw-admin user rm --uid=<uid> [--purge-keys] [--purge-data]
For example:
[root@master-zone]# radosgw-admin user rm --uid=johndoe --purge-data
To remove the subuser only, specify subuser rm and the subuser name.
[root@master-zone]# radosgw-admin subuser rm --subuser=johndoe:swift --purge-keys
Options include:
- Purge Data: The --purge-data option purges all data associated to the UID.
- Purge Keys: The --purge-keys option purges all keys associated to the UID.
3.6.8. Remove a Subuser
When you remove a subuser, you are removing access to the Swift interface. The user will remain in the system. To remove the subuser, specify subuser rm and the subuser ID.
[root@master-zone]# radosgw-admin subuser rm --subuser=johndoe:test
Options include:
- Purge Keys: The --purge-keys option purges all keys associated to the UID.
3.6.9. Rename a User
To change the name of a user, use the radosgw-admin user rename command. The time that this command takes depends on the number of buckets and objects that the user has. If the number is large, Red Hat recommends using the command in the Screen utility provided by the screen package.
Prerequisites
- A working Ceph cluster
- root or sudo access
- Installed Ceph Object Gateway
Procedure
Rename a user:
radosgw-admin user rename --uid=current-user-name --new-uid=new-user-name
For example, to rename user1 to user2:
# radosgw-admin user rename --uid=user1 --new-uid=user2
If a user is inside a tenant, use the tenant$user-name format:
radosgw-admin user rename --uid=tenant$current-user-name --new-uid=tenant$new-user-name
For example, to rename user1 to user2 inside a test tenant:
# radosgw-admin user rename --uid=test$user1 --new-uid=test$user2
Verify that the user has been renamed successfully:
radosgw-admin user info --uid=new-user-name
For example:
# radosgw-admin user info --uid=user2
If a user is inside a tenant, use the tenant$user-name format:
radosgw-admin user info --uid=tenant$new-user-name
For example:
# radosgw-admin user info --uid=test$user2
Additional Resources
- The screen(1) manual page
3.6.10. Create a Key
To create a key for a user, you must specify key create. For a user, specify the user ID and the s3 key type. To create a key for a subuser, you must specify the subuser ID and the swift key type. For example:
[root@master-zone]# radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret
3.6.11. Add and Remove Access Keys
Users and subusers must have access keys to use the S3 and Swift interfaces. When you create a user or subuser and you do not specify an access key and secret, the key and secret get generated automatically. You may create a key and either specify or generate the access key and/or secret. You may also remove an access key and secret. Options include:
- --secret=<key> specifies a secret key (for example, manually generated).
- --gen-access-key generates a random access key (for an S3 user by default).
- --gen-secret generates a random secret key.
- --key-type=<type> specifies a key type. The options are: swift, s3.
To add a key, specify the user:
[root@master-zone]# radosgw-admin key create --uid=johndoe --key-type=s3 --gen-access-key --gen-secret
You may also specify a key and a secret.
To remove an access key, you need to specify the user and the key:
Find the access key for the specific user:
[root@master-zone]# radosgw-admin user info --uid=<testid>
The access key is the "access_key" value in the output (see the sketch after this procedure).
Specify the user ID and the access key from the previous step to remove the access key:
[root@master-zone]# radosgw-admin key rm --uid=<user_id> --access-key <access_key>
For example:
[root@master-zone]# radosgw-admin key rm --uid=johndoe --access-key 0555b35654ad1656d804
3.6.12. Add and Remove Administrative Capabilities
The Ceph Storage Cluster provides an administrative API that enables users to execute administrative functions via the REST API. By default, users DO NOT have access to this API. To enable a user to exercise administrative functionality, provide the user with administrative capabilities.
To add administrative capabilities to a user:
[root@master-zone]# radosgw-admin caps add --uid={uid} --caps={caps}
You can add read, write or all capabilities to users, buckets, metadata and usage (utilization). For example:
--caps="[users|buckets|metadata|usage|zone]=[*|read|write|read, write]"
For example:
[root@master-zone]# radosgw-admin caps add --uid=johndoe --caps="users=*"
To remove administrative capabilities from a user:
[root@master-zone]# radosgw-admin caps rm --uid=johndoe --caps={caps}
3.7. Quota Management
The Ceph Object Gateway enables you to set quotas on users and buckets owned by users. Quotas include the maximum number of objects in a bucket and the maximum storage size in megabytes.
- Bucket: The --bucket option allows you to specify a quota for buckets the user owns.
- Maximum Objects: The --max-objects setting allows you to specify the maximum number of objects. A negative value disables this setting.
- Maximum Size: The --max-size option allows you to specify a quota for the maximum number of bytes. A negative value disables this setting.
- Quota Scope: The --quota-scope option sets the scope for the quota. The options are bucket and user. Bucket quotas apply to buckets a user owns. User quotas apply to a user.
Buckets with a large number of objects can cause serious performance issues. The recommended maximum number of objects in a one bucket is 100,000. To increase this number, configure bucket index sharding. See Section 3.4, “Configuring Bucket Sharding” for details.
3.7.1. Set User Quotas
Before you enable a quota, you must first set the quota parameters. For example:
[root@master-zone]# radosgw-admin quota set --quota-scope=user --uid=<uid> [--max-objects=<num objects>] [--max-size=<max size>]
For example:
[root@master-zone]# radosgw-admin quota set --quota-scope=user --uid=johndoe --max-objects=1024 --max-size=1024
A negative value for num objects and / or max size means that the specific quota attribute check is disabled.
3.7.2. Enable and Disable User Quotas
Once you set a user quota, you may enable it. For example:
[root@master-zone]# radosgw-admin quota enable --quota-scope=user --uid=<uid>
You may disable an enabled user quota. For example:
[root@master-zone]# radosgw-admin quota disable --quota-scope=user --uid=<uid>
3.7.3. Set Bucket Quotas
Bucket quotas apply to the buckets owned by the specified uid. They are independent of the user.
[root@master-zone]# radosgw-admin quota set --uid=<uid> --quota-scope=bucket [--max-objects=<num objects>] [--max-size=<max size>]
A negative value for num objects and / or max size means that the specific quota attribute check is disabled.
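For example, a sketch that limits each bucket owned by johndoe to 1024 objects and 1024 bytes, mirroring the user quota example above:

[root@master-zone]# radosgw-admin quota set --uid=johndoe --quota-scope=bucket --max-objects=1024 --max-size=1024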
3.7.4. Enable and Disable Bucket Quotas
Once you set a bucket quota, you can enable it. For example:
[root@master-zone]# radosgw-admin quota enable --quota-scope=bucket --uid=<uid>
To disable an enabled bucket quota:
[root@master-zone]# radosgw-admin quota disable --quota-scope=bucket --uid=<uid>
3.7.5. Get Quota Settings
You may access each user’s quota settings via the user information API. To read user quota setting information with the CLI interface, execute the following:
# radosgw-admin user info --uid=<uid>
3.7.6. Update Quota Stats
Quota stats get updated asynchronously. You can update quota statistics for all users and all buckets manually to retrieve the latest quota stats.
[root@master-zone]# radosgw-admin user stats --uid=<uid> --sync-stats
3.7.7. Get User Quota Usage Stats
To see how much of the quota a user has consumed, execute the following:
# radosgw-admin user stats --uid=<uid>
You should execute radosgw-admin user stats with the --sync-stats option to receive the latest data.
3.7.8. Quota Cache
Quota statistics are cached for each Ceph Gateway instance. If there are multiple instances, then the cache can keep quotas from being perfectly enforced, as each instance will have a different view of the quotas. The options that control this are rgw bucket quota ttl, rgw user quota bucket sync interval and rgw user quota sync interval. The higher these values are, the more efficient quota operations are, but the more out-of-sync multiple instances will be. The lower these values are, the closer to perfect enforcement multiple instances will achieve. If all three are 0, then quota caching is effectively disabled, and multiple instances will have perfect quota enforcement. See Chapter 4, Configuration Reference for more details on these options.
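A ceph.conf sketch that restates these trade-offs; the values shown are illustrative only, not recommendations:

[global]
# Higher values mean fewer sync round trips but looser enforcement across instances.
rgw bucket quota ttl = 600
rgw user quota bucket sync interval = 180
rgw user quota sync interval = 180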
3.7.9. Reading and Writing Global Quotas
You can read and write quota settings in a zonegroup map. To get a zonegroup map:
[root@master-zone]# radosgw-admin global quota get
The global quota settings can be manipulated with the global quota counterparts of the quota set, quota enable, and quota disable commands, for example:
[root@master-zone]# radosgw-admin global quota set --quota-scope bucket --max-objects 1024
[root@master-zone]# radosgw-admin global quota enable --quota-scope bucket
In a multi-site configuration, where there is a realm and period present, changes to the global quotas must be committed using period update --commit. If there is no period present, the Ceph Object Gateways must be restarted for the changes to take effect.
3.8. Usage
The Ceph Object Gateway logs usage for each user. You can track user usage within date ranges too.
Options include:
- Start Date: The --start-date option allows you to filter usage stats from a particular start date (format: yyyy-mm-dd[HH:MM:SS]).
- End Date: The --end-date option allows you to filter usage up to a particular date (format: yyyy-mm-dd[HH:MM:SS]).
- Log Entries: The --show-log-entries option allows you to specify whether or not to include log entries with the usage stats (options: true | false).
You may specify time with minutes and seconds, but it is stored with 1 hour resolution.
3.8.1. Show Usage
To show usage statistics, specify usage show. To show usage for a particular user, you must specify a user ID. You may also specify a start date, end date, and whether or not to show log entries.
# radosgw-admin usage show \
    --uid=johndoe --start-date=2012-03-01 \
    --end-date=2012-04-01
You may also show a summary of usage information for all users by omitting a user ID.
# radosgw-admin usage show --show-log-entries=false
3.8.2. Trim Usage
With heavy use, usage logs can begin to take up storage space. You can trim usage logs for all users and for specific users. You may also specify date ranges for trim operations.
[root@master-zone]# radosgw-admin usage trim --start-date=2010-01-01 \
    --end-date=2010-12-31
[root@master-zone]# radosgw-admin usage trim --uid=johndoe
[root@master-zone]# radosgw-admin usage trim --uid=johndoe --end-date=2013-12-31
3.8.3. Finding Orphan Objects
Normally, in a healthy storage cluster you should not have any leaking objects, but in some cases leaky objects can occur. For example, if the RADOS Gateway goes down in the middle of an operation, this may cause some RADOS objects to become orphans. Also, unknown bugs may cause these orphan objects to occur. The radosgw-admin command provides a tool to search for these orphan objects and clean them up. With the --pool option, you can specify which pool to scan for leaky RADOS objects. With the --num-shards option, you may specify the number of shards to use for keeping temporary scan data.
Create a new log pool:
Example
# rados mkpool .log
Search for orphan objects:
Syntax
# radosgw-admin orphans find --pool=<data_pool> --job-id=<job_name> [--num-shards=<num_shards>] [--orphan-stale-secs=<seconds>]
Example
# radosgw-admin orphans find --pool=.rgw.buckets --job-id=abc123
Clean up the search data:
Syntax
# radosgw-admin orphans finish --job-id=<job_name>
Example
# radosgw-admin orphans finish --job-id=abc123
3.9. Bucket management
As a storage administrator, when using the Ceph Object Gateway you can manage buckets by moving them between users and renaming them.
3.9.1. Moving buckets
The radosgw-admin bucket utility provides the ability to move buckets between users. To do so, link the bucket to a new user and change the ownership of the bucket to the new user.
You can move buckets between non-tenanted users, between tenanted users, and from non-tenanted users to tenanted users.
3.9.1.1. Prerequisites
- A running Red Hat Ceph Storage cluster
- Ceph Object Gateway is installed
- A bucket
- Various tenanted and non-tenanted users
3.9.1.2. Moving buckets between non-tenanted users
The radosgw-admin bucket chown command provides the ability to change the ownership of buckets and all objects they contain from one user to another. To do so, unlink a bucket from the current user, link it to a new user, and change the ownership of the bucket to the new user.
Procedure
Link the bucket to a new user:
radosgw-admin bucket link --uid=user --bucket=bucket
Replace:
- user with the user name of the user to link the bucket to
- bucket with the name of the bucket
For example, to link the data bucket to the user named user2:
# radosgw-admin bucket link --uid=user2 --bucket=data
Verify that the bucket has been linked to user2 successfully:
# radosgw-admin bucket list --uid=user2
[
    "data"
]
Change the ownership of the bucket to the new user:
radosgw-admin bucket chown --uid=user --bucket=bucket
Replace:
- user with the user name of the user to change the bucket ownership to
- bucket with the name of the bucket
For example, to change the ownership of the data bucket to user2:
# radosgw-admin bucket chown --uid=user2 --bucket=data
Verify that the ownership of the data bucket has been successfully changed by checking the owner line in the output of the following command:
# radosgw-admin bucket list --bucket=data
3.9.1.3. Moving buckets between tenanted users
You can move buckets from one tenanted user to another.
Procedure
Link the bucket to a new user:
radosgw-admin bucket link --bucket=current-tenant/bucket --uid=new-tenant$user
Replace:
- current-tenant with the name of the tenant the bucket is in
- bucket with the name of the bucket to link
- new-tenant with the name of the tenant where the new user is
- user with the user name of the new user
For example, to link the data bucket from the test tenant to the user named user2 in the test2 tenant:
# radosgw-admin bucket link --bucket=test/data --uid=test2$user2
Verify that the bucket has been linked to user2 successfully:
# radosgw-admin bucket list --uid=test2$user2
[
    "data"
]
Change the ownership of the bucket to the new user:
radosgw-admin bucket chown --bucket=new-tenant/bucket --uid=new-tenant$user
Replace:
- bucket with the name of the bucket to link
- new-tenant with the name of the tenant where the new user is
- user with the user name of the new user
For example, to change the ownership of the data bucket to user2 inside the test2 tenant:
# radosgw-admin bucket chown --bucket='test2/data' --uid='test2$user2'
Verify that the ownership of the data bucket has been successfully changed by checking the owner line in the output of the following command:
# radosgw-admin bucket list --bucket=test2/data
3.9.1.4. Moving buckets from non-tenanted users to tenanted users
You can move buckets from a non-tenanted user to a tenanted user.
Procedure
Optional. If you do not already have multiple tenants, you can create them by enabling rgw_keystone_implicit_tenants and accessing the Ceph Object Gateway from an external tenant:
Open and edit the Ceph configuration file, by default /etc/ceph/ceph.conf. Enable the rgw_keystone_implicit_tenants option:
rgw_keystone_implicit_tenants = true
Access the Ceph Object Gateway from an external tenant using either the s3cmd or swift command:
# swift list
Or use s3cmd:
# s3cmd ls
The first access from an external tenant creates an equivalent Ceph Object Gateway user.
Move a bucket to a tenanted user:
radosgw-admin bucket link --bucket=/bucket --uid='tenant$user'
Replace:
- bucket with the name of the bucket
- tenant with the name of the tenant where the new user is
- user with the user name of the new user
For example, to move the data bucket to the tenanted-user inside the test tenant:
# radosgw-admin bucket link --bucket=/data --uid='test$tenanted-user'
Verify that the data bucket has been linked to tenanted-user successfully:
# radosgw-admin bucket list --uid='test$tenanted-user'
[
    "data"
]
Change the ownership of the bucket to the new user:
radosgw-admin bucket chown --bucket='tenant/bucket name' --uid='tenant$user'
Replace:
- bucket with the name of the bucket
- tenant with the name of the tenant where the new user is
- user with the user name of the new user
For example, to change the ownership of the data bucket to tenanted-user that is inside the test tenant:
# radosgw-admin bucket chown --bucket='test/data' --uid='test$tenanted-user'
Verify that the ownership of the data bucket has been successfully changed by checking the owner line in the output of the following command:
# radosgw-admin bucket list --bucket=test/data
3.9.2. Renaming buckets
You can rename buckets.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Ceph Object Gateway is installed.
- A bucket.
Procedure
List the buckets:
radosgw-admin bucket list
For example, note a bucket from the output, such as s3bucket1.
Rename the bucket:
radosgw-admin bucket link --bucket=original-name --bucket-new-name=new-name --uid=user-ID
For example, to rename the
s3bucket1 bucket to s3newb:
# radosgw-admin bucket link --bucket=s3bucket1 --bucket-new-name=s3newb --uid=testuser
If the bucket is inside a tenant, specify the tenant as well:
radosgw-admin bucket link --bucket=tenant/original-name --bucket-new-name=new-name --uid=tenant$user-ID
For example:
# radosgw-admin bucket link --bucket=test/s3bucket1 --bucket-new-name=s3newb --uid=test$testuser
radosgw-admin bucket list
For example, a bucket named s3newb now appears in the output.
3.9.3. Additional Resources
- See Using Keystone to Authenticate Ceph Object Gateway Users for more information.
- See the Developer Guide for more information.
3.10. Optimize the Ceph Object Gateway's garbage collection
When new data objects are written into the storage cluster, the Ceph Object Gateway immediately allocates the storage for these new objects. After you delete or overwrite data objects in the storage cluster, the Ceph Object Gateway deletes those objects from the bucket index. Some time afterward, the Ceph Object Gateway then purges the space that was used to store the objects in the storage cluster. The process of purging the deleted object data from the storage cluster is known as Garbage Collection, or GC.
Garbage collection operations typically run in the background. You can configure these operations to either execute continuously, or to run only during intervals of low activity and light workloads. By default, the Ceph Object Gateway conducts GC operations continuously. Because GC operations are a normal part of Ceph Object Gateway operations, deleted objects that are eligible for garbage collection exist most of the time.
3.10.1. Viewing the garbage collection queue
Before you purge deleted and overwritten objects from the storage cluster, use radosgw-admin to view the objects awaiting garbage collection.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the Ceph Object Gateway.
Procedure
To view the queue of objects awaiting garbage collection:
Example
[root@rgw ~]# radosgw-admin gc list
To list all entries in the queue, including unexpired entries, use the --include-all option.
3.10.2. Adjusting garbage collection for delete-heavy workloads
Some workloads may temporarily or permanently outpace the rate of garbage collection (GC) activity. This is especially true of delete-heavy workloads, where many objects get stored for a short period of time and are then deleted. For these types of workloads, consider increasing the priority of garbage collection operations relative to other operations. Contact Red Hat Support with any additional questions about Ceph Object Gateway Garbage Collection.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all nodes in the storage cluster.
Procedure
- Open /etc/ceph/ceph.conf for editing.
- Set the value of rgw_gc_max_concurrent_io to 20, and the value of rgw_gc_max_trim_chunk to 64:
  rgw_gc_max_concurrent_io = 20
  rgw_gc_max_trim_chunk = 64
- Save and close the file.
- Restart the Ceph Object Gateway to allow the changed settings to take effect.
- Monitor the storage cluster during GC activity to verify that the increased values do not adversely affect performance.
Never modify the value for the rgw_gc_max_objs option in a running cluster. You should only change this value before deploying the RGW nodes.
Additional Resources