Developer Guide
Using the various application programming interfaces for Red Hat Ceph Storage
Abstract
Chapter 1. Ceph Object Gateway administrative API
As a developer, you can administer the Ceph Object Gateway by interacting with its RESTful application programming interface (API). The Ceph Object Gateway exposes the features of the radosgw-admin command in a RESTful API. You can manage users, data, quotas, and usage, and integrate these capabilities with other management platforms.
Red Hat recommends using the command-line interface when configuring the Ceph Object Gateway.
The administrative API provides the following functionality:
- Authentication Requests
- User Account Management
- User Capabilities Management
- Key Management
- Bucket Management
- Object Management
- Getting Usage Information
- Removing Usage Information
- Standard Error Responses
1.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- A RESTful client.
1.2. Administration operations
An administrative Application Programming Interface (API) request is made on a URI that starts with the configurable 'admin' resource entry point. Authorization for the administrative API duplicates the S3 authorization mechanism. Some operations require the user to hold special administrative capabilities. The response entity type, either XML or JSON, can be specified with the 'format' option in the request and defaults to JSON if not specified.
Example
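For instance, a request for usage information against the default admin entry point, with FULLY_QUALIFIED_DOMAIN_NAME as a placeholder as elsewhere in this chapter, might look like this:

```
GET /admin/usage?format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
```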
1.3. Administration authentication requests
Amazon’s S3 service uses the access key and a hash of the request header and the secret key to authenticate the request. It has the benefit of providing an authenticated request, especially for large uploads, without SSL overhead.
Most use cases for the S3 API involve using open source S3 clients such as the AmazonS3Client in the Amazon SDK for Java or Python Boto. These libraries do not support the Ceph Object Gateway Admin API. You can subclass and extend these libraries to support the Ceph Admin API. Alternatively, you can create a unique Gateway client.
Creating an execute() method
The CephAdminAPI example class in this section illustrates how to create an execute() method that can take request parameters, authenticate the request, call the Ceph Admin API and receive a response.
The CephAdminAPI class example is not supported or intended for commercial use. It is for illustrative purposes only.
Calling the Ceph Object Gateway
The client code contains five calls to the Ceph Object Gateway to demonstrate CRUD operations:
- Create a User
- Get a User
- Modify a User
- Create a Subuser
- Delete a User
To use this example, download the Apache HTTP Components client library, httpcomponents-client-4.5.3, for example from http://hc.apache.org/downloads.cgi. Then extract the tar file, navigate to its lib directory, and copy the contents to the /jre/lib/ext directory of JAVA_HOME, or to a custom classpath.
As you examine the CephAdminAPI class example, notice that the execute() method takes an HTTP method, a request path, an optional subresource (null if not specified), and a map of parameters. To execute with subresources, for example subuser and key, you need to specify the subresource as an argument in the execute() method.
The example method:
- Builds a URI.
- Builds an HTTP header string.
- Instantiates an HTTP request, for example, `PUT`, `POST`, `GET`, `DELETE`.
- Adds the `Date` header to the HTTP header string and the request header.
- Adds the `Authorization` header to the HTTP request header.
- Instantiates an HTTP client and passes it the instantiated HTTP request.
- Makes a request.
- Returns a response.
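The CephAdminAPI example class is Java; purely as an illustrative sketch, the same flow can be expressed in Python with the standard library. The helper names `build_header_string`, `base64_sha1_hmac`, and `build_request` are inventions of this sketch, not Ceph or AWS APIs:

```python
import base64
import hashlib
import hmac
from email.utils import formatdate
from urllib.request import Request

def build_header_string(method, date, request_path):
    # Simplified S3-style string to sign: upper-case method, blank Content-MD5
    # and Content-Type slots, the GMT date, and the request path.
    return "{}\n\n\n{}\n{}".format(method.strip().upper(), date, request_path)

def base64_sha1_hmac(header_string, secret_key):
    # SHA1 HMAC of the header string with the admin secret key, base-64 encoded.
    digest = hmac.new(secret_key.encode(), header_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def build_request(method, endpoint, request_path, access_key, secret_key):
    # The Date header MUST be GMT; usegmt=True emits an RFC 1123 date ending in "GMT".
    date = formatdate(usegmt=True)
    signature = base64_sha1_hmac(build_header_string(method, date, request_path), secret_key)
    headers = {
        "Date": date,
        "Authorization": "AWS {}:{}".format(access_key, signature),
    }
    return Request(endpoint + request_path + "?format=json", headers=headers, method=method)

req = build_request("GET", "http://rgw.example.com", "/admin/user",
                    "ACCESS_KEY", "SECRET_KEY")
print(req.get_header("Authorization"))
```

A real client would pass the resulting Request object to urllib.request.urlopen() to send it; error handling and query-string canonicalization are omitted here.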
Building the header string
Building the header string is the portion of the process that involves Amazon’s S3 authentication procedure. Specifically, the example method does the following:
- Adds a request type, for example, `PUT`, `POST`, `GET`, `DELETE`.
- Adds the date.
- Adds the requestPath.
The request type should be upper case with no leading or trailing white space. If you do not trim white space, authentication will fail. The date MUST be expressed in GMT, or authentication will fail.
The example method does not have any other headers. The Amazon S3 authentication procedure sorts x-amz headers lexicographically, so if you are adding x-amz headers, be sure to add them lexicographically.
Once you have built the header string, the next step is to instantiate an HTTP request and pass it the URI. The example method uses PUT for creating a user and subuser, GET for getting a user, POST for modifying a user, and DELETE for deleting a user.
Once you instantiate a request, add the Date header followed by the Authorization header. Amazon’s S3 authentication uses the standard Authorization header, and has the following structure:
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
The CephAdminAPI example class has a base64Sha1Hmac() method, which takes the header string and the secret key for the admin user, and returns a SHA1 HMAC as a base-64 encoded string. Each execute() call will invoke the same line of code to build the Authorization header:
httpRequest.addHeader("Authorization", "AWS " + this.getAccessKey() + ":" + base64Sha1Hmac(headerString.toString(), this.getSecretKey()));
The following CephAdminAPI example class requires you to pass the access key, secret key and an endpoint to the constructor. The class provides accessor methods to change them at runtime.
Example
The subsequent CephAdminAPIClient example illustrates how to instantiate the CephAdminAPI class, build a map of request parameters, and use the execute() method to create, get, update and delete a user.
Example
Additional Resources
- See the S3 Authentication section in the Red Hat Ceph Storage Developer Guide for additional details.
- For a more extensive explanation of the Amazon S3 authentication procedure, consult the Signing and Authenticating REST Requests section of Amazon Simple Storage Service documentation.
1.4. Creating an administrative user
To run the radosgw-admin command from the Ceph Object Gateway node, ensure the node has the admin key. The admin key can be copied from any Ceph Monitor node.
Prerequisites
- Root-level access to the Ceph Object Gateway node.
Procedure
Create an object gateway user:
Syntax
radosgw-admin user create --uid="USER_NAME" --display-name="DISPLAY_NAME"

Example

[user@client ~]$ radosgw-admin user create --uid="admin-api-user" --display-name="Admin API User"

The radosgw-admin command-line interface will return the user.

Example output

Assign administrative capabilities to the user you create:
Syntax
radosgw-admin caps add --uid="USER_NAME" --caps="users=*"

Example

[user@client ~]$ radosgw-admin caps add --uid=admin-api-user --caps="users=*"

The radosgw-admin command-line interface will return the user. The "caps": section will show the capabilities you assigned to the user:

Example output

Now you have a user with administrative privileges.
1.5. Get user information
Get the user’s information.
Capabilities
users=read
Syntax
GET /admin/user?format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| uid | The user for which the information is requested. | String | | Yes |
| Name | Description | Type | Parent |
|---|---|---|---|
| user | A container for the user data information. | Container | N/A |
| user_id | The user ID. | String | user |
| display_name | Display name for the user. | String | user |
| suspended | True if the user is suspended. | Boolean | user |
| max_buckets | The maximum number of buckets to be owned by the user. | Integer | user |
| subusers | Subusers associated with this user account. | Container | user |
| keys | S3 keys associated with this user account. | Container | user |
| swift_keys | Swift keys associated with this user account. | Container | user |
| caps | User capabilities. | Container | user |
If successful, the response contains the user information.
Special Error Responses
None.
1.6. Create a user
Create a new user. By default, an S3 key pair will be created automatically and returned in the response. If only one of access-key or secret-key is provided, the omitted key will be automatically generated. By default, a generated key is added to the keyring without replacing an existing key pair. If access-key is specified and refers to an existing key owned by the user, then it will be modified.
Capabilities
`users=write`
Syntax
PUT /admin/user?format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| uid | The user ID to be created. | String | | Yes |
| display-name | The display name of the user to be created. | String | | Yes |
| email | The email address associated with the user. | String | | No |
| key-type | Key type to be generated, options are: swift, s3 (default). | String | | No |
| access-key | Specify access key. | String | | No |
| secret-key | Specify secret key. | String | | No |
| user-caps | User capabilities. | String | | No |
| generate-key | Generate a new key pair and add to the existing keyring. | Boolean | True [True] | No |
| max-buckets | Specify the maximum number of buckets the user can own. | Integer | 500 [1000] | No |
| suspended | Specify whether the user should be suspended. | Boolean | False [False] | No |
| Name | Description | Type | Parent |
|---|---|---|---|
| user | A container for the user data information. | Container | N/A |
| user_id | The user ID. | String | user |
| display_name | Display name for the user. | String | user |
| suspended | True if the user is suspended. | Boolean | user |
| max_buckets | The maximum number of buckets to be owned by the user. | Integer | user |
| subusers | Subusers associated with this user account. | Container | user |
| keys | S3 keys associated with this user account. | Container | user |
| swift_keys | Swift keys associated with this user account. | Container | user |
| caps | User capabilities. | Container | user |
If successful, the response contains the user information.
| Name | Description | Code |
|---|---|---|
| UserExists | Attempt to create existing user. | 409 Conflict |
| InvalidAccessKey | Invalid access key specified. | 400 Bad Request |
| InvalidKeyType | Invalid key type specified. | 400 Bad Request |
| InvalidSecretKey | Invalid secret key specified. | 400 Bad Request |
| KeyExists | Provided access key exists and belongs to another user. | 409 Conflict |
| EmailExists | Provided email address exists. | 409 Conflict |
| InvalidCapability | Attempt to grant invalid admin capability. | 400 Bad Request |
Additional Resources
- See the Red Hat Ceph Storage Developer Guide for creating subusers.
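As a rough illustration, the create-user parameters in the table above combine into a query string on the request line. This sketch uses Python's standard library; the parameter values are placeholders:

```python
from urllib.parse import urlencode

# Hypothetical values; uid and display-name are the two required parameters.
params = {
    "format": "json",
    "uid": "admin-api-user",
    "display-name": "Admin API User",
    "key-type": "s3",
}
request_line = "PUT /admin/user?{} HTTP/1.1".format(urlencode(params))
print(request_line)
```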
1.7. Modify a user
Modify an existing user.
Capabilities
`users=write`
Syntax
POST /admin/user?format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| uid | The user ID to be modified. | String | | Yes |
| display-name | The display name of the user to be modified. | String | | No |
| email | The email address to be associated with the user. | String | | No |
| generate-key | Generate a new key pair and add to the existing keyring. | Boolean | True [False] | No |
| access-key | Specify access key. | String | | No |
| secret-key | Specify secret key. | String | | No |
| key-type | Key type to be generated, options are: swift, s3 (default). | String | | No |
| user-caps | User capabilities. | String | | No |
| max-buckets | Specify the maximum number of buckets the user can own. | Integer | 500 [1000] | No |
| suspended | Specify whether the user should be suspended. | Boolean | False [False] | No |
| Name | Description | Type | Parent |
|---|---|---|---|
| user | A container for the user data information. | Container | N/A |
| user_id | The user ID. | String | user |
| display_name | Display name for the user. | String | user |
| suspended | True if the user is suspended. | Boolean | user |
| max_buckets | The maximum number of buckets to be owned by the user. | Integer | user |
| subusers | Subusers associated with this user account. | Container | user |
| keys | S3 keys associated with this user account. | Container | user |
| swift_keys | Swift keys associated with this user account. | Container | user |
| caps | User capabilities. | Container | user |
If successful, the response contains the user information.
| Name | Description | Code |
|---|---|---|
| InvalidAccessKey | Invalid access key specified. | 400 Bad Request |
| InvalidKeyType | Invalid key type specified. | 400 Bad Request |
| InvalidSecretKey | Invalid secret key specified. | 400 Bad Request |
| KeyExists | Provided access key exists and belongs to another user. | 409 Conflict |
| EmailExists | Provided email address exists. | 409 Conflict |
| InvalidCapability | Attempt to grant invalid admin capability. | 400 Bad Request |
Additional Resources
- See the Red Hat Ceph Storage Developer Guide for modifying subusers.
1.8. Remove a user
Remove an existing user.
Capabilities
`users=write`
Syntax
DELETE /admin/user?format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| uid | The user ID to be removed. | String | | Yes |
| purge-data | When specified the buckets and objects belonging to the user will also be removed. | Boolean | True | No |
Response Entities
None.
Special Error Responses
None.
Additional Resources
- See Red Hat Ceph Storage Developer Guide for removing subusers.
1.9. Create a subuser
Create a new subuser, primarily useful for clients using the Swift API.
Either gen-subuser or subuser is required for a valid request. In general, for a subuser to be useful, it must be granted permissions by specifying access. As with user creation, if subuser is specified without a secret key, then a secret key will be automatically generated.
Capabilities
`users=write`
Syntax
PUT /admin/user?subuser&format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| uid | The user ID under which a subuser is to be created. | String | | Yes |
| subuser | Specify the subuser ID to be created. | String | | Yes (or gen-subuser) |
| gen-subuser | Specify the subuser ID to be created. | String | | Yes (or subuser) |
| secret-key | Specify secret key. | String | | No |
| key-type | Key type to be generated, options are: swift (default), s3. | String | | No |
| access | Set access permissions for sub-user, should be one of read, write, readwrite, full. | String | | No |
| generate-secret | Generate the secret key. | Boolean | True [False] | No |
| Name | Description | Type | Parent |
|---|---|---|---|
| subusers | Subusers associated with the user account. | Container | N/A |
| id | Subuser ID. | String | subusers |
| permissions | Subuser access to user account. | String | subusers |
If successful, the response contains the subuser information.
| Name | Description | Code |
|---|---|---|
| SubuserExists | Specified subuser exists. | 409 Conflict |
| InvalidKeyType | Invalid key type specified. | 400 Bad Request |
| InvalidSecretKey | Invalid secret key specified. | 400 Bad Request |
| InvalidAccess | Invalid subuser access specified. | 400 Bad Request |
1.10. Modify a subuser
Modify an existing subuser.
Capabilities
`users=write`
Syntax
POST /admin/user?subuser&format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| uid | The user ID under which the subuser is to be modified. | String | | Yes |
| subuser | The subuser ID to be modified. | String | | Yes |
| generate-secret | Generate a new secret key for the subuser, replacing the existing key. | Boolean | True [False] | No |
| secret | Specify secret key. | String | | No |
| key-type | Key type to be generated, options are: swift (default), s3. | String | | No |
| access | Set access permissions for sub-user, should be one of read, write, readwrite, full. | String | | No |
| Name | Description | Type | Parent |
|---|---|---|---|
| subusers | Subusers associated with the user account. | Container | N/A |
| id | Subuser ID. | String | subusers |
| permissions | Subuser access to user account. | String | subusers |
If successful, the response contains the subuser information.
| Name | Description | Code |
|---|---|---|
| InvalidKeyType | Invalid key type specified. | 400 Bad Request |
| InvalidSecretKey | Invalid secret key specified. | 400 Bad Request |
| InvalidAccess | Invalid subuser access specified. | 400 Bad Request |
1.11. Remove a subuser
Remove an existing subuser.
Capabilities
`users=write`
Syntax
DELETE /admin/user?subuser&format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| uid | The user ID under which the subuser is to be removed. | String | | Yes |
| subuser | The subuser ID to be removed. | String | | Yes |
| purge-keys | Remove keys belonging to the subuser. | Boolean | True [True] | No |
Response Entities
None.
Special Error Responses
None.
1.12. Add capabilities to a user
Add an administrative capability to a specified user.
Capabilities
`users=write`
Syntax
PUT /admin/user?caps&format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| uid | The user ID to add an administrative capability to. | String | | Yes |
| user-caps | The administrative capability to add to the user. | String | | Yes |
| Name | Description | Type | Parent |
|---|---|---|---|
| user | A container for the user data information. | Container | N/A |
| user_id | The user ID. | String | user |
| caps | User capabilities. | Container | user |
If successful, the response contains the user’s capabilities.
| Name | Description | Code |
|---|---|---|
| InvalidCapability | Attempt to grant invalid admin capability. | 400 Bad Request |
1.13. Remove capabilities from a user
Remove an administrative capability from a specified user.
Capabilities
`users=write`
Syntax
DELETE /admin/user?caps&format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| uid | The user ID to remove an administrative capability from. | String | | Yes |
| user-caps | The administrative capabilities to remove from the user. | String | | Yes |
| Name | Description | Type | Parent |
|---|---|---|---|
| user | A container for the user data information. | Container | N/A |
| user_id | The user ID. | String | user |
| caps | User capabilities. | Container | user |
If successful, the response contains the user’s capabilities.
| Name | Description | Code |
|---|---|---|
| InvalidCapability | Attempt to remove an invalid admin capability. | 400 Bad Request |
| NoSuchCap | User does not possess specified capability. | 404 Not Found |
1.14. Create a key
Create a new key. If a subuser is specified, then by default the created keys will be of swift type. If only one of access-key or secret-key is provided, the omitted key will be automatically generated; that is, if only secret-key is specified, then access-key will be automatically generated. By default, a generated key is added to the keyring without replacing an existing key pair. If access-key is specified and refers to an existing key owned by the user, then it will be modified. The response is a container listing all keys of the same type as the key created.
When creating a swift key, specifying the option access-key will have no effect. Additionally, only one swift key may be held by each user or subuser.
Capabilities
`users=write`
Syntax
PUT /admin/user?key&format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| uid | The user ID to receive the new key. | String | | Yes |
| subuser | The subuser ID to receive the new key. | String | | No |
| key-type | Key type to be generated, options are: swift, s3 (default). | String | | No |
| access-key | Specify the access key. | String | | No |
| secret-key | Specify the secret key. | String | | No |
| generate-key | Generate a new key pair and add to the existing keyring. | Boolean | True [True] | No |
| Name | Description | Type | Parent |
|---|---|---|---|
| keys | Keys of type created associated with this user account. | Container | N/A |
| user | The user account associated with the key. | String | keys |
| access-key | The access key. | String | keys |
| secret-key | The secret key. | String | keys |
| Name | Description | Code |
|---|---|---|
| InvalidAccessKey | Invalid access key specified. | 400 Bad Request |
| InvalidKeyType | Invalid key type specified. | 400 Bad Request |
| InvalidSecretKey | Invalid secret key specified. | 400 Bad Request |
| KeyExists | Provided access key exists and belongs to another user. | 409 Conflict |
1.15. Remove a key
Remove an existing key.
Capabilities
`users=write`
Syntax
DELETE /admin/user?key&format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| access-key | The S3 access key belonging to the S3 key pair to remove. | String | | Yes |
| uid | The user to remove the key from. | String | | No |
| subuser | The subuser to remove the key from. | String | | No |
| key-type | Key type to be removed, options are: swift, s3. Note: Required to remove swift key. | String | | No |
Response Entities
None.
Special Error Responses
None.
1.16. Bucket notifications
As a storage administrator, you can use these APIs to provide configuration and control interfaces for the bucket notification mechanism. In this API, topics are named objects that contain the definition of a specific endpoint. Bucket notifications associate topics with a specific bucket. The S3 bucket operations section gives more details on bucket notifications.
In all topic actions, the parameters are URL encoded and sent in the message body using the application/x-www-form-urlencoded content type.
Any bucket notification already associated with the topic needs to be re-created for the topic update to take effect.
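As a sketch of the encoding described above, the topic attributes can be assembled into a request body with Python's standard library. The topic name, endpoint URL, and opaque data here are placeholders:

```python
from urllib.parse import urlencode

# Illustrative CreateTopic attributes; all values are placeholders.
attributes = {
    "Action": "CreateTopic",
    "Name": "my-topic",
    "push-endpoint": "kafka://kafka.example.com:9092",
    "OpaqueData": "tenant-a",
}
# The body is sent with the application/x-www-form-urlencoded content type.
body = urlencode(attributes)
print(body)
```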
1.16.1. Prerequisites
- Create bucket notifications on the Ceph Object Gateway.
1.16.2. Creating a topic
You can create topics before creating bucket notifications. A topic is a Simple Notification Service (SNS) entity and all the topic operations, that is, create, delete, list and get, are SNS operations. The topic needs to have endpoint parameters that are used when a bucket notification is created. Once the request is successful, the response includes the topic Amazon Resource Name (ARN) that can be used later to reference this topic in the bucket notification request.
A topic_arn provides the bucket notification configuration, and is generated after a topic is created.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access.
- Installation of the Ceph Object Gateway.
- User access key and secret key.
- Endpoint parameters.
Procedure
Create a topic with the following request format:
Syntax
Here are the request parameters:
- Endpoint: URL of an endpoint to send notifications to.
- OpaqueData: opaque data is set in the topic configuration and added to all notifications triggered by the topic.
- HTTP endpoint:
  - URL: http[s]://FQDN[:PORT]
  - port defaults to: Use 80/443 for HTTP[S] accordingly.
  - verify-ssl: Indicates whether the server certificate is validated by the client or not. By default, it is true.
- AMQP0.9.1 endpoint:
  - URL: amqp://[USER:PASSWORD@]FQDN[:PORT][/VHOST]
  - User and password default to guest and guest respectively.
  - User and password can only be provided with HTTPS. Otherwise, the topic creation request is rejected.
  - port defaults to: 5672.
  - vhost defaults to: "/"
  - amqp-exchange: The exchanges must exist and be able to route messages based on topics. This is a mandatory parameter for AMQP0.9.1. Different topics pointing to the same endpoint must use the same exchange.
  - amqp-ack-level: No end to end acknowledgement is required, as messages may persist in the broker before being delivered to their final destination. Three acknowledgement methods exist:
    - none: Message is considered delivered if sent to the broker.
    - broker: By default the message is considered delivered if acknowledged by the broker.
    - routable: Message is considered delivered if the broker can route it to a consumer.

Note: The key and value of a specific parameter do not have to reside in the same line, or in any specific order, but must use the same index. Attribute indexing does not need to be sequential or start from any specific value.

Note: The topic-name is used for the AMQP topic.
- Kafka endpoint:
  - URL: kafka://[USER:PASSWORD@]FQDN[:PORT]
  - use-ssl is set to false by default. If use-ssl is set to true, a secure connection is used for connecting with the broker.
  - If ca-location is provided, and a secure connection is used, the specified CA will be used, instead of the default one, to authenticate the broker.
  - User and password can only be provided over HTTP[S]. If not, the topic creation request will be rejected.
  - User and password may only be provided together with use-ssl; if not, the connection to the broker will fail.
  - port defaults to: 9092.
  - kafka-ack-level: No end to end acknowledgement is required, as messages may persist in the broker before being delivered to their final destination. Two acknowledgement methods exist:
    - none: Message is considered delivered if sent to the broker.
    - broker: By default, the message is considered delivered if acknowledged by the broker.
The response is in the following format:
Syntax
Note: The topic Amazon Resource Name (ARN) in the response will have the following format: arn:aws:sns:ZONE_GROUP:TENANT:TOPIC

The following is an example of an AMQP0.9.1 endpoint:
Example

"client.create_topic(Name='my-topic', Attributes={'push-endpoint': 'amqp://127.0.0.1:5672', 'amqp-exchange': 'ex1', 'amqp-ack-level': 'broker'})"
1.16.3. Getting topic information
Returns information about a specific topic. This includes endpoint information, if it was provided.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access.
- Installation of the Ceph Object Gateway.
- User access key and secret key.
- Endpoint parameters.
Procedure
Get topic information with the following request format:
Syntax
POST Action=GetTopic &TopicArn=TOPIC_ARN
Here is an example of the response format:

These are the tags and their definitions:
- User: Name of the user that created the topic.
- Name: Name of the topic.
- EndpointAddress: The endpoint URL. If the endpoint URL contains user and password information, the request must be made over HTTPS. If not, the topic get request will be rejected.
- EndPointArgs: The endpoint arguments.
- EndpointTopic: The topic name that will be sent to the endpoint, which can be different from the above topic name.
- TopicArn: Topic ARN.
1.16.4. Listing topics
List the topics that the user has defined.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access.
- Installation of the Ceph Object Gateway.
- User access key and secret key.
- Endpoint parameters.
Procedure
List topic information with the following request format:
POST Action=ListTopics
Here is an example of the response format:

Note: If the endpoint URL contains user and password information, in any of the topics, the request must be made over HTTPS. If not, the topic list request is rejected.
1.16.5. Deleting topics
Removing a topic that has already been deleted results in no operation, not a failure.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access.
- Installation of the Ceph Object Gateway.
- User access key and secret key.
- Endpoint parameters.
Procedure
Delete a topic with the following request format:
Syntax
POST Action=DeleteTopic &TopicArn=TOPIC_ARN
Here is an example of the response format:
1.16.6. Event record
An event holds information about the operation performed by the Ceph Object Gateway and is sent as a payload over the chosen endpoint, such as HTTP, HTTPS, Kafka, or AMQP0.9.1. The event record is in JSON format.
Example
These are the event record keys and their definitions:
- awsRegion: Zonegroup.
- eventTime: Timestamp that indicates when the event was triggered.
- eventName: The type of the event.
- userIdentity.principalId: The identity of the user that triggered the event.
- requestParameters.sourceIPAddress: The IP address of the client that triggered the event. This field is not supported.
- responseElements.x-amz-request-id: The request ID that triggered the event.
- responseElements.x_amz_id_2: The identity of the Ceph Object Gateway on which the event was triggered. The identity format is RGWID-ZONE-ZONEGROUP.
- s3.configurationId: The notification ID that created the event.
- s3.bucket.name: The name of the bucket.
- s3.bucket.ownerIdentity.principalId: The owner of the bucket.
- s3.bucket.arn: Amazon Resource Name (ARN) of the bucket.
- s3.bucket.id: Identity of the bucket.
- s3.object.key: The object key.
- s3.object.size: The size of the object.
- s3.object.eTag: The object etag.
- s3.object.version: The object version in a versioned bucket.
- s3.object.sequencer: Monotonically increasing identifier of the change per object, in hexadecimal format.
- s3.object.metadata: Any metadata set on the object sent as x-amz-meta.
- s3.object.tags: Any tags set on the object.
- s3.eventId: Unique identity of the event.
- s3.opaqueData: Opaque data is set in the topic configuration and added to all notifications triggered by the topic.
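A consumer at the endpoint walks these keys to act on the notification. The following sketch parses an abbreviated payload whose values are invented for illustration; the Records wrapper follows the Amazon event message structure:

```python
import json

# Abbreviated event payload with invented values, following the key layout above.
payload = json.loads("""
{
  "Records": [
    {
      "awsRegion": "default",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": {"name": "my-bucket"},
        "object": {"key": "photo.jpg", "size": 1024}
      }
    }
  ]
}
""")

for record in payload["Records"]:
    # Pick out the bucket and object that triggered the notification.
    print(record["eventName"], record["s3"]["bucket"]["name"], record["s3"]["object"]["key"])
```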
Additional Resources
- See the Event Message Structure for more information.
- See the Supported event types section of the Red Hat Ceph Storage Developer Guide for more information.
1.16.7. Supported event types
The following event types are supported:
- s3:ObjectCreated:*
- s3:ObjectCreated:Put
- s3:ObjectCreated:Post
- s3:ObjectCreated:Copy
- s3:ObjectCreated:CompleteMultipartUpload
- s3:ObjectRemoved:*
- s3:ObjectRemoved:Delete
- s3:ObjectRemoved:DeleteMarkerCreated
1.16.8. Additional Resources
- See the Creating bucket notifications section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more details.
1.17. Get bucket information
Get information about a subset of the existing buckets. If uid is specified without bucket then all buckets belonging to the user will be returned. If bucket alone is specified, information for that particular bucket will be retrieved.
Capabilities
`buckets=read`
Syntax
GET /admin/bucket?format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| `bucket` | The bucket to return info on. | String | | No |
| `uid` | The user to retrieve bucket information for. | String | | No |
| `stats` | Return bucket statistics. | Boolean | True [False] | No |
| Name | Description | Type | Parent |
|---|---|---|---|
| `stats` | Per bucket information. | Container | N/A |
| `buckets` | Contains a list of one or more bucket containers. | Container | N/A |
| `bucket` | Container for single bucket information. | Container | `buckets` |
| `name` | The name of the bucket. | String | `bucket` |
| `pool` | The pool the bucket is stored in. | String | `bucket` |
| `id` | The unique bucket ID. | String | `bucket` |
| `marker` | Internal bucket tag. | String | `bucket` |
| `owner` | The user ID of the bucket owner. | String | `bucket` |
| `usage` | Storage usage information. | Container | `bucket` |
If successful, the request returns a `buckets` container containing the desired bucket information.
| Name | Description | Code |
|---|---|---|
| `IndexRepairFailed` | Bucket index repair failed. | 409 Conflict |
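A request for this operation can be sketched in Python. Only the path construction is shown; the host, port, and the S3-style authentication headers described earlier must still be added by your client before sending:

```python
from urllib.parse import parse_qs, urlencode, urlsplit

def bucket_info_path(bucket=None, uid=None, stats=False):
    """Build the request path for the get-bucket-information call,
    mirroring the request parameters table above."""
    params = {"format": "json"}
    if bucket:
        params["bucket"] = bucket
    if uid:
        params["uid"] = uid
    if stats:
        params["stats"] = "True"
    return "/admin/bucket?" + urlencode(params)

path = bucket_info_path(uid="testuser", stats=True)
query = parse_qs(urlsplit(path).query)  # round-trip to inspect the query
print(path)
```

Omitting both `bucket` and `uid` is valid for other operations but returns nothing useful here; per the description above, `uid` alone lists all of that user's buckets.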
1.18. Check a bucket index
Check the index of an existing bucket.
To check multipart object accounting with `check-objects`, `fix` must be set to True.
Capabilities
buckets=write
Syntax
GET /admin/bucket?index&format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| `bucket` | The bucket to return info on. | String | | Yes |
| `check-objects` | Check multipart object accounting. | Boolean | True [False] | No |
| `fix` | Also fix the bucket index when checking. | Boolean | False [False] | No |
| Name | Description | Type |
|---|---|---|
| `index` | Status of bucket index. | String |
| Name | Description | Code |
|---|---|---|
| `IndexRepairFailed` | Bucket index repair failed. | 409 Conflict |
1.19. Remove a bucket
Removes an existing bucket.
Capabilities
`buckets=write`
Syntax
DELETE /admin/bucket?format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| `bucket` | The bucket to remove. | String | | Yes |
| `purge-objects` | Remove a bucket's objects before deletion. | Boolean | True [False] | No |
Response Entities
None.
| Name | Description | Code |
|---|---|---|
| `BucketNotEmpty` | Attempted to delete non-empty bucket. | 409 Conflict |
| `ObjectRemovalFailed` | Unable to remove objects. | 409 Conflict |
1.20. Link a bucket
Link a bucket to a specified user, unlinking the bucket from any previous user.
Capabilities
`buckets=write`
Syntax
PUT /admin/bucket?format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| `bucket` | The bucket to unlink. | String | | Yes |
| `uid` | The user ID to link the bucket to. | String | | Yes |
| Name | Description | Type | Parent |
|---|---|---|---|
| `bucket` | Container for single bucket information. | Container | N/A |
| `name` | The name of the bucket. | String | `bucket` |
| `pool` | The pool the bucket is stored in. | String | `bucket` |
| `id` | The unique bucket ID. | String | `bucket` |
| `marker` | Internal bucket tag. | String | `bucket` |
| `owner` | The user ID of the bucket owner. | String | `bucket` |
| `usage` | Storage usage information. | Container | `bucket` |
| `index` | Status of bucket index. | String | `bucket` |
| Name | Description | Code |
|---|---|---|
| `BucketUnlinkFailed` | Unable to unlink bucket from specified user. | 409 Conflict |
| `BucketLinkFailed` | Unable to link bucket to specified user. | 409 Conflict |
1.21. Unlink a bucket
Unlink a bucket from a specified user. Primarily useful for changing bucket ownership.
Capabilities
`buckets=write`
Syntax
POST /admin/bucket?format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| `bucket` | The bucket to unlink. | String | | Yes |
| `uid` | The user ID to unlink the bucket from. | String | | Yes |
Response Entities
None.
| Name | Description | Code |
|---|---|---|
| `BucketUnlinkFailed` | Unable to unlink bucket from specified user. | 409 Conflict |
1.22. Get a bucket or object policy
Read the policy of an object or bucket.
Capabilities
`buckets=read`
Syntax
GET /admin/bucket?policy&format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| `bucket` | The bucket to read the policy from. | String | | Yes |
| `object` | The object to read the policy from. | String | | No |
| Name | Description | Type | Parent |
|---|---|---|---|
| `policy` | Access control policy. | Container | N/A |
If successful, returns the object or bucket policy.
| Name | Description | Code |
|---|---|---|
| `IncompleteBody` | Either bucket was not specified for a bucket policy request or bucket and object were not specified for an object policy request. | 400 Bad Request |
1.23. Remove an object
Remove an existing object.
Does not require owner to be non-suspended.
Capabilities
`buckets=write`
Syntax
DELETE /admin/bucket?object&format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| `bucket` | The bucket containing the object to be removed. | String | | Yes |
| `object` | The object to remove. | String | | Yes |
Response Entities
None.
| Name | Description | Code |
|---|---|---|
| `NoSuchObject` | Specified object does not exist. | 404 Not Found |
| `ObjectRemovalFailed` | Unable to remove objects. | 409 Conflict |
1.24. Quotas
The administrative Operations API enables you to set quotas on users and on buckets owned by users. Quotas include the maximum number of objects in a bucket and the maximum storage size in megabytes.
To view quotas, the user must have a users=read capability. To set, modify or disable a quota, the user must have users=write capability.
Valid parameters for quotas include:
- Bucket: The `bucket` option allows you to specify a quota for buckets owned by a user.
- Maximum Objects: The `max-objects` setting allows you to specify the maximum number of objects. A negative value disables this setting.
- Maximum Size: The `max-size` option allows you to specify a quota for the maximum number of bytes. A negative value disables this setting.
- Quota Scope: The `quota-scope` option sets the scope for the quota. The options are `bucket` and `user`.
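The "negative value disables" rule can be illustrated with a small sketch. This models only the semantics described above, not the gateway's internal enforcement:

```python
from dataclasses import dataclass

@dataclass
class Quota:
    """Quota settings as described above; a negative value disables a limit."""
    max_objects: int = -1  # maximum number of objects; < 0 disables the check
    max_size: int = -1     # maximum size in bytes; < 0 disables the check

def allows(quota: Quota, num_objects: int, total_bytes: int) -> bool:
    """Return True if the given usage stays within the quota."""
    if quota.max_objects >= 0 and num_objects > quota.max_objects:
        return False
    if quota.max_size >= 0 and total_bytes > quota.max_size:
        return False
    return True
```

With the defaults, both limits are negative and any usage is allowed; setting either field to zero or above enforces that limit.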
1.25. Get a user quota
To get a quota, the user must have users capability set with read permission.
Syntax
GET /admin/user?quota&uid=UID&quota-type=user
1.26. Set a user quota
To set a quota, the user must have users capability set with write permission.
Syntax
PUT /admin/user?quota&uid=UID&quota-type=user
The content must include a JSON representation of the quota settings as encoded in the corresponding read operation.
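The request body can be sketched as below. The field names (`enabled`, `max_objects`, `max_size_kb`) are an assumption based on the quota layout returned by the corresponding read operation; confirm them against a GET response from your own gateway before relying on them:

```python
import json

# Quota settings to send as the request body of the PUT call above.
# Field names are assumed from the read operation's output; verify them
# against a GET /admin/user?quota response from your own gateway.
quota = {
    "enabled": True,
    "max_objects": 1000,  # a negative value disables the limit
    "max_size_kb": 4096,  # a negative value disables the limit
}

body = json.dumps(quota)
print(body)
```

The same encode-what-you-read pattern applies to the bucket-quota PUT calls below.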
1.27. Get a bucket quota
To get a bucket quota, the user must have users capability set with read permission.
Syntax
GET /admin/user?quota&uid=UID&quota-type=bucket
1.28. Set a bucket quota
To set a quota, the user must have users capability set with write permission.
Syntax
PUT /admin/user?quota&uid=UID&quota-type=bucket
The content must include a JSON representation of the quota settings as encoded in the corresponding read operation.
1.29. Set quota for an individual bucket
To set a quota, the user must have buckets capability set with write permission.
Syntax
PUT /admin/bucket?quota&uid=UID&bucket=BUCKET_NAME&quota
The content must include a JSON representation of the quota settings.
1.30. Get usage information
Request bandwidth usage information.
Capabilities
`usage=read`
Syntax
GET /admin/usage?format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Required |
|---|---|---|---|
| `uid` | The user for which the information is requested. | String | Yes |
| `start` | Date and (optional) time that specifies the start time of the requested data. For example, `2012-09-25 16:00:00`. | String | No |
| `end` | Date and (optional) time that specifies the end time of the requested data (non-inclusive). For example, `2012-09-25 16:00:00`. | String | No |
| `show-entries` | Specifies whether data entries should be returned. | Boolean | No |
| `show-summary` | Specifies whether data summary should be returned. | Boolean | No |
| Name | Description | Type |
|---|---|---|
| `usage` | A container for the usage information. | Container |
| `entries` | A container for the usage entries information. | Container |
| `user` | A container for the user data information. | Container |
| `owner` | The name of the user that owns the buckets. | String |
| `bucket` | The bucket name. | String |
| `time` | Time lower bound for which data is being specified (rounded to the beginning of the first relevant hour). | String |
| `epoch` | The time specified in seconds since `1/1/1970`. | String |
| `categories` | A container for stats categories. | Container |
| `entry` | A container for stats entry. | Container |
| `category` | Name of request category for which the stats are provided. | String |
| `bytes_sent` | Number of bytes sent by the Ceph Object Gateway. | Integer |
| `bytes_received` | Number of bytes received by the Ceph Object Gateway. | Integer |
| `ops` | Number of operations. | Integer |
| `successful_ops` | Number of successful operations. | Integer |
| `summary` | A container for stats summary. | Container |
| `total` | A container for stats summary aggregated total. | Container |
If successful, the response contains the requested information.
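Walking the containers in the table above can be sketched as follows. The nesting and values are illustrative placeholders modeled on the response entities, not a captured gateway response:

```python
# Aggregate a usage response into a single total. The structure below is
# illustrative, modeled on the response entities table; all numbers are
# made-up placeholders.
usage = {
    "entries": [{
        "user": "testuser",
        "buckets": [{
            "bucket": "mybucket",
            "categories": [
                {"category": "put_obj", "bytes_sent": 0,
                 "bytes_received": 1024, "ops": 2, "successful_ops": 2},
                {"category": "get_obj", "bytes_sent": 2048,
                 "bytes_received": 0, "ops": 3, "successful_ops": 3},
            ],
        }],
    }]
}

def total_ops(usage: dict) -> int:
    """Sum successful operations across all entries, buckets, and categories."""
    return sum(
        cat["successful_ops"]
        for entry in usage["entries"]
        for bucket in entry.get("buckets", [])
        for cat in bucket.get("categories", [])
    )

print(total_ops(usage))
```

When `show-summary` is enabled, the gateway returns comparable aggregated totals itself in the `summary` container.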
1.31. Remove usage information
Remove usage information. With no dates specified, removes all usage information.
Capabilities
`usage=write`
Syntax
DELETE /admin/usage?format=json HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
| Name | Description | Type | Example | Required |
|---|---|---|---|---|
| `uid` | The user for which the information is requested. | String | | No |
| `start` | Date and (optional) time that specifies the start time of the requested data. | String | | No |
| `end` | Date and (optional) time that specifies the end time of the requested data (non-inclusive). | String | | No |
| `remove-all` | Required when `uid` is not specified, in order to acknowledge multi-user data removal. | Boolean | True [False] | No |
1.32. Standard error responses
The following table details standard error responses and their descriptions.
| Name | Description | Code |
|---|---|---|
| `AccessDenied` | Access denied. | 403 Forbidden |
| `InternalError` | Internal server error. | 500 Internal Server Error |
| `NoSuchUser` | User does not exist. | 404 Not Found |
| `NoSuchBucket` | Bucket does not exist. | 404 Not Found |
| `NoSuchKey` | No such access key. | 404 Not Found |
Chapter 2. Ceph Object Gateway and the S3 API
As a developer, you can use a RESTful application programming interface (API) that is compatible with the Amazon S3 data access model. You can manage the buckets and objects stored in a Red Hat Ceph Storage cluster through the Ceph Object Gateway.
2.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- A RESTful client.
2.2. S3 limitations
Consider the following limitations carefully. They have implications for your hardware selection, so you should always discuss these requirements with your Red Hat account team.
- Maximum object size when using Amazon S3: Individual Amazon S3 objects can range in size from a minimum of 0B to a maximum of 5TB. The largest object that can be uploaded in a single `PUT` is 5GB. For objects larger than 100MB, you should consider using the Multipart Upload capability.
- Maximum metadata size when using Amazon S3: There is no defined limit on the total size of user metadata that can be applied to an object, but a single HTTP request is limited to 16,000 bytes.
- The amount of data overhead Red Hat Ceph Storage cluster produces to store S3 objects and metadata: The estimate here is 200-300 bytes plus the length of the object name. Versioned objects consume additional space proportional to the number of versions. Also, transient overhead is produced during multi-part upload and other transactional updates, but these overheads are recovered during garbage collection.
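These size limits interact when choosing a part size for a multipart upload. The sketch below assumes the standard S3 constraints of at most 10,000 parts and a 5 MB minimum part size, which this guide does not state explicitly:

```python
import math

# Standard S3 multipart constraints (assumed, not stated in this guide).
MAX_PARTS = 10_000
MIN_PART = 5 * 1024**2   # 5 MB minimum for every part except the last
MAX_PART = 5 * 1024**3   # 5 GB, the largest single PUT

def choose_part_size(object_size: int) -> int:
    """Pick the smallest allowed part size that keeps the upload
    within MAX_PARTS parts."""
    size = max(MIN_PART, math.ceil(object_size / MAX_PARTS))
    if size > MAX_PART:
        raise ValueError("object too large for a multipart upload")
    return size

# A 1 TB object: the part size ends up well above the 5 MB floor.
part = choose_part_size(1024**4)
print(part, math.ceil(1024**4 / part))
```

Clients like the AWS SDKs perform a similar calculation automatically when you hand them a large object.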
Additional Resources
- See the Red Hat Ceph Storage Developer Guide for details on the unsupported header fields.
2.3. Accessing the Ceph Object Gateway with the S3 API
As a developer, you must configure access to the Ceph Object Gateway and the Secure Token Service (STS) before you can start using the Amazon S3 API.
2.3.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- A running Ceph Object Gateway.
- A RESTful client.
2.3.2. S3 authentication
Requests to the Ceph Object Gateway can be either authenticated or unauthenticated. Ceph Object Gateway assumes unauthenticated requests are sent by an anonymous user. Ceph Object Gateway supports canned ACLs.
For most use cases, clients use existing open source libraries like the Amazon SDK's AmazonS3Client for Java, and Python Boto. With open source libraries, you simply pass in the access key and secret key, and the library builds the request header and authentication signature for you. However, you can also create and sign requests yourself.
Authenticating a request requires including an access key and a base 64-encoded hash-based Message Authentication Code (HMAC) in the request before it is sent to the Ceph Object Gateway server. Ceph Object Gateway uses an S3-compatible authentication approach.
Example
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
In the above example, replace ACCESS_KEY with the value for the access key ID followed by a colon (:). Replace HASH_OF_HEADER_AND_SECRET with a hash of a canonicalized header string and the secret corresponding to the access key ID.
Generate hash of header string and secret
To generate the hash of the header string and secret:
- Get the value of the header string.
- Normalize the request header string into canonical form.
- Generate an HMAC using a SHA-1 hashing algorithm.
- Encode the HMAC result as base-64.
Normalize header
To normalize the header into canonical form:
- Get all `content-` headers.
- Remove all `content-` headers except for `content-type` and `content-md5`.
- Ensure the `content-` header names are lowercase.
- Sort the `content-` headers lexicographically.
- Ensure you have a `Date` header AND ensure the specified date uses GMT and not an offset.
- Get all headers beginning with `x-amz-`.
- Ensure that the `x-amz-` headers are all lowercase.
- Sort the `x-amz-` headers lexicographically.
- Combine multiple instances of the same field name into a single field and separate the field values with a comma.
- Replace white space and line breaks in header values with a single space.
- Remove white space before and after colons.
- Append a new line after each header.
- Merge the headers back into the request header.
Replace the HASH_OF_HEADER_AND_SECRET with the base-64 encoded HMAC string.
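The hashing steps above can be sketched in Python. The canonical string and the secret key below are made-up placeholders; a real client must build the canonical string exactly as described in the normalization steps:

```python
import base64
import hashlib
import hmac

def sign(secret_key: str, string_to_sign: str) -> str:
    """HMAC-SHA1 the canonical header string with the secret key and
    base-64 encode the digest, per the steps above."""
    digest = hmac.new(
        secret_key.encode("utf-8"),
        string_to_sign.encode("utf-8"),
        hashlib.sha1,
    ).digest()
    return base64.b64encode(digest).decode("utf-8")

# A placeholder canonical string: verb, content-md5, content-type, date,
# then the resource. Both the string and the key are made-up examples.
string_to_sign = "GET\n\n\nMon, 02 Jan 2012 00:01:01 +0000\n/my-bucket/"
signature = sign("MY_SECRET_KEY", string_to_sign)
print(f"Authorization: AWS MY_ACCESS_KEY:{signature}")
```

The resulting base-64 string is what takes the place of HASH_OF_HEADER_AND_SECRET in the Authorization header.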
Additional Resources
- For additional details, consult the Signing and Authenticating REST Requests section of Amazon Simple Storage Service documentation.
2.3.3. S3 server-side encryption
The Ceph Object Gateway supports server-side encryption of uploaded objects for the S3 application programming interface (API). Server-side encryption means that the S3 client sends data over HTTP in its unencrypted form, and the Ceph Object Gateway stores that data in the Red Hat Ceph Storage cluster in encrypted form.
Red Hat does NOT support S3 object encryption of Static Large Object (SLO) or Dynamic Large Object (DLO).
To use encryption, client requests MUST send requests over an SSL connection. Red Hat does not support S3 encryption from a client unless the Ceph Object Gateway uses SSL. However, for testing purposes, administrators can disable SSL by doing one of the following:
- Setting the `rgw_crypt_require_ssl` configuration setting to `false` at runtime.
- Setting it to `false` in the Ceph configuration file and restarting the gateway instance.
- Setting it to `false` in the Ansible configuration files and replaying the Ansible playbooks for the Ceph Object Gateway.
In a production environment, it might not be possible to send encrypted requests over SSL. In such a case, send requests using HTTP with server-side encryption.
For information about how to configure HTTP with server-side encryption, see the Additional Resources section below.
There are two options for the management of encryption keys:
Customer-provided Keys
When using customer-provided keys, the S3 client passes an encryption key along with each request to read or write encrypted data. It is the customer’s responsibility to manage those keys. Customers must remember which key the Ceph Object Gateway used to encrypt each object.
Ceph Object Gateway implements the customer-provided key behavior in the S3 API according to the Amazon SSE-C specification.
Since the customer handles the key management and the S3 client passes keys to the Ceph Object Gateway, the Ceph Object Gateway requires no special configuration to support this encryption mode.
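For the customer-provided key mode, the client sends the SSE-C request headers defined by the Amazon SSE-C specification referenced above with each read or write. This sketch builds them for a throwaway example key:

```python
import base64
import hashlib

def sse_c_headers(key: bytes) -> dict:
    """Build the SSE-C request headers for a 256-bit customer-provided
    key, per the Amazon SSE-C specification."""
    if len(key) != 32:
        raise ValueError("SSE-C requires a 256-bit (32-byte) key")
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key":
            base64.b64encode(key).decode("utf-8"),
        "x-amz-server-side-encryption-customer-key-MD5":
            base64.b64encode(hashlib.md5(key).digest()).decode("utf-8"),
    }

headers = sse_c_headers(b"0" * 32)  # throwaway example key, never reuse
print(headers["x-amz-server-side-encryption-customer-algorithm"])
```

The same key and headers must accompany every later GET for that object, which is why the text above stresses that customers must track which key encrypted each object.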
Key Management Service
When using a key management service, the secure key management service stores the keys and the Ceph Object Gateway retrieves them on demand to serve requests to encrypt or decrypt data.
Ceph Object Gateway implements the key management service behavior in the S3 API according to the Amazon SSE-KMS specification.
Currently, the only tested key management implementations are HashiCorp Vault and OpenStack Barbican. However, OpenStack Barbican is a Technology Preview and is not supported for use in production systems.
Additional Resources
2.3.4. S3 access control lists
Ceph Object Gateway supports S3-compatible Access Control Lists (ACL) functionality. An ACL is a list of access grants that specify which operations a user can perform on a bucket or on an object. Each grant has a different meaning when applied to a bucket versus applied to an object:
| Permission | Bucket | Object |
|---|---|---|
| `READ` | Grantee can list the objects in the bucket. | Grantee can read the object. |
| `WRITE` | Grantee can write or delete objects in the bucket. | N/A |
| `READ_ACP` | Grantee can read bucket ACL. | Grantee can read the object ACL. |
| `WRITE_ACP` | Grantee can write bucket ACL. | Grantee can write to the object ACL. |
| `FULL_CONTROL` | Grantee has full permissions for object in the bucket. | Grantee can read or write to the object ACL. |
2.3.5. Preparing access to the Ceph Object Gateway using S3
Complete the following prerequisite steps on the Ceph Object Gateway node before attempting to access the gateway server.
Do not modify the Ceph configuration file to use port 80; let Civetweb use the default Ansible-configured port of 8080.
Prerequisites
- Installation of the Ceph Object Gateway software.
- Root-level access to the Ceph Object Gateway node.
Procedure
As `root`, open port `8080` on the firewall:

[root@rgw ~]# firewall-cmd --zone=public --add-port=8080/tcp --permanent
[root@rgw ~]# firewall-cmd --reload

Add a wildcard to the DNS server that you are using for the gateway as mentioned in the Object Gateway Configuration and Administration Guide.
You can also set up the gateway node for local DNS caching. To do so, execute the following steps:
As `root`, install and set up dnsmasq:

[root@rgw ~]# yum install dnsmasq
[root@rgw ~]# echo "address=/.FQDN_OF_GATEWAY_NODE/IP_OF_GATEWAY_NODE" | tee --append /etc/dnsmasq.conf
[root@rgw ~]# systemctl start dnsmasq
[root@rgw ~]# systemctl enable dnsmasq

Replace IP_OF_GATEWAY_NODE and FQDN_OF_GATEWAY_NODE with the IP address and FQDN of the gateway node.

As `root`, stop NetworkManager:

[root@rgw ~]# systemctl stop NetworkManager
[root@rgw ~]# systemctl disable NetworkManager
root, set the gateway server’s IP as the nameserver:echo "DNS1=IP_OF_GATEWAY_NODE" | tee --append /etc/sysconfig/network-scripts/ifcfg-eth0 echo "IP_OF_GATEWAY_NODE FQDN_OF_GATEWAY_NODE" | tee --append /etc/hosts systemctl restart network systemctl enable network systemctl restart dnsmasq
[root@rgw ~]# echo "DNS1=IP_OF_GATEWAY_NODE" | tee --append /etc/sysconfig/network-scripts/ifcfg-eth0 [root@rgw ~]# echo "IP_OF_GATEWAY_NODE FQDN_OF_GATEWAY_NODE" | tee --append /etc/hosts [root@rgw ~]# systemctl restart network [root@rgw ~]# systemctl enable network [root@rgw ~]# systemctl restart dnsmasqCopy to Clipboard Copied! Toggle word wrap Toggle overflow Replace
IP_OF_GATEWAY_NODEandFQDN_OF_GATEWAY_NODEwith the IP address and FQDN of the gateway node.Verify subdomain requests:
ping mybucket.FQDN_OF_GATEWAY_NODE
[user@rgw ~]$ ping mybucket.FQDN_OF_GATEWAY_NODECopy to Clipboard Copied! Toggle word wrap Toggle overflow Replace
FQDN_OF_GATEWAY_NODEwith the FQDN of the gateway node.WarningSetting up the gateway server for local DNS caching is for testing purposes only. You won’t be able to access outside network after doing this. It is strongly recommended to use a proper DNS server for the Red Hat Ceph Storage cluster and gateway node.
- Create the `radosgw` user for S3 access as mentioned in the Object Gateway Configuration and Administration Guide and copy the generated `access_key` and `secret_key`. You will need these keys for S3 access and subsequent bucket management tasks.
2.3.6. Accessing the Ceph Object Gateway using Ruby AWS S3
You can use the Ruby programming language with the aws-s3 gem for S3 access. Execute the steps mentioned below on the node used for accessing the Ceph Object Gateway server with Ruby AWS::S3.
Prerequisites
- User-level access to Ceph Object Gateway.
- Root-level access to the node accessing the Ceph Object Gateway.
- Internet access.
Procedure
Install the ruby package:

[root@dev ~]# yum install ruby

Note: The above command will install ruby and its essential dependencies, such as rubygems and ruby-libs. If the command does not install all the dependencies, install them separately.

Install the aws-s3 Ruby package:

[root@dev ~]# gem install aws-s3

Create a project directory:
[user@dev ~]$ mkdir ruby_aws_s3
[user@dev ~]$ cd ruby_aws_s3

Create the connection file:

[user@dev ~]$ vim conn.rb

Paste the following contents into the conn.rb file:

Syntax

Replace FQDN_OF_GATEWAY_NODE with the FQDN of the Ceph Object Gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when you created the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide.

Example

Save the file and exit the editor.

Make the file executable:

[user@dev ~]$ chmod +x conn.rb

Run the file:

[user@dev ~]$ ./conn.rb | echo $?

If you have provided the values correctly in the file, the output of the command will be 0.

Create a new file for creating a bucket:
[user@dev ~]$ vim create_bucket.rb

Paste the following contents into the file:

#!/usr/bin/env ruby
load 'conn.rb'
AWS::S3::Bucket.create('my-new-bucket1')

Save the file and exit the editor.

Make the file executable:

[user@dev ~]$ chmod +x create_bucket.rb

Run the file:

[user@dev ~]$ ./create_bucket.rb

If the output of the command is true, the bucket my-new-bucket1 was created successfully.

Create a new file for listing owned buckets:
[user@dev ~]$ vim list_owned_buckets.rb

Paste the following content into the file:

Save the file and exit the editor.

Make the file executable:

[user@dev ~]$ chmod +x list_owned_buckets.rb

Run the file:

[user@dev ~]$ ./list_owned_buckets.rb

The output should look something like this:

my-new-bucket1 2020-01-21 10:33:19 UTC

Create a new file for creating an object:
[user@dev ~]$ vim create_object.rb

Paste the following contents into the file:

Save the file and exit the editor.

Make the file executable:

[user@dev ~]$ chmod +x create_object.rb

Run the file:

[user@dev ~]$ ./create_object.rb

This will create a file hello.txt with the string Hello World!.

Create a new file for listing a bucket's content:
[user@dev ~]$ vim list_bucket_content.rb

Paste the following content into the file:

Save the file and exit the editor.

Make the file executable:

[user@dev ~]$ chmod +x list_bucket_content.rb

Run the file:

[user@dev ~]$ ./list_bucket_content.rb

The output will look something like this:

hello.txt 12 Fri, 22 Jan 2020 15:54:52 GMT

Create a new file for deleting an empty bucket:
[user@dev ~]$ vim del_empty_bucket.rb

Paste the following contents into the file:

#!/usr/bin/env ruby
load 'conn.rb'
AWS::S3::Bucket.delete('my-new-bucket1')

Save the file and exit the editor.

Make the file executable:

[user@dev ~]$ chmod +x del_empty_bucket.rb

Run the file:

[user@dev ~]$ ./del_empty_bucket.rb | echo $?

If the bucket is successfully deleted, the command will return 0 as output.

Note: Edit the create_bucket.rb file to create empty buckets, for example my-new-bucket4 and my-new-bucket5, then edit the del_empty_bucket.rb file accordingly before trying to delete them.

Create a new file for deleting non-empty buckets:
[user@dev ~]$ vim del_non_empty_bucket.rb

Paste the following contents into the file:

#!/usr/bin/env ruby
load 'conn.rb'
AWS::S3::Bucket.delete('my-new-bucket1', :force => true)

Save the file and exit the editor.

Make the file executable:

[user@dev ~]$ chmod +x del_non_empty_bucket.rb

Run the file:

[user@dev ~]$ ./del_non_empty_bucket.rb | echo $?

If the bucket is successfully deleted, the command will return 0 as output.

Create a new file for deleting an object:
[user@dev ~]$ vim delete_object.rb

Paste the following contents into the file:

#!/usr/bin/env ruby
load 'conn.rb'
AWS::S3::S3Object.delete('hello.txt', 'my-new-bucket1')

Save the file and exit the editor.

Make the file executable:

[user@dev ~]$ chmod +x delete_object.rb

Run the file:

[user@dev ~]$ ./delete_object.rb

This will delete the object hello.txt.
2.3.7. Accessing the Ceph Object Gateway using Ruby AWS SDK
You can use the Ruby programming language with the aws-sdk gem for S3 access. Execute the steps mentioned below on the node used for accessing the Ceph Object Gateway server with Ruby AWS::SDK.
Prerequisites
- User-level access to Ceph Object Gateway.
- Root-level access to the node accessing the Ceph Object Gateway.
- Internet access.
Procedure
Install the ruby package:

[root@dev ~]# yum install ruby

Note: The above command will install ruby and its essential dependencies, such as rubygems and ruby-libs. If the command does not install all the dependencies, install them separately.

Install the aws-sdk Ruby package:

[root@dev ~]# gem install aws-sdk

Create a project directory:
mkdir ruby_aws_sdk cd ruby_aws_sdk
[user@dev ~]$ mkdir ruby_aws_sdk [user@dev ~]$ cd ruby_aws_sdkCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create the connection file:
[user@ruby_aws_sdk]$ vim conn.rb

Paste the following contents into the conn.rb file:
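A minimal conn.rb might look like the following. This is a sketch based on the aws-sdk gem's Aws.config.update interface; the port 8080 and the region value are assumptions, and the original guide's listing may differ:

```ruby
#!/usr/bin/env ruby
require 'aws-sdk'

# Global configuration for the aws-sdk gem. The gateway ignores the AWS
# region, but the gem requires one to be set.
Aws.config.update(
  endpoint: 'http://FQDN_OF_GATEWAY_NODE:8080',
  access_key_id: 'MY_ACCESS_KEY',
  secret_access_key: 'MY_SECRET_KEY',
  force_path_style: true,  # use path-style addressing with the gateway
  region: 'us-east-1'
)
```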
Replace FQDN_OF_GATEWAY_NODE with the FQDN of the Ceph Object Gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when you created the radosgw user for S3 access, as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide.
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x conn.rb

Run the file:

[user@ruby_aws_sdk]$ ./conn.rb | echo $?

If you have provided the values correctly in the file, the output of the command is 0.

Create a new file for creating a bucket:
[user@ruby_aws_sdk]$ vim create_bucket.rb

Paste the following contents into the file:
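A create_bucket.rb along these lines should work with the aws-sdk gem; this is a sketch, with the bucket name my-new-bucket2 taken from the surrounding text:

```ruby
#!/usr/bin/env ruby
load 'conn.rb'

# Create the bucket and print true on success.
s3_client = Aws::S3::Client.new
resp = s3_client.create_bucket(bucket: 'my-new-bucket2')
puts resp.successful?
```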
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x create_bucket.rb

Run the file:

[user@ruby_aws_sdk]$ ./create_bucket.rb

If the output of the command is true, the bucket my-new-bucket2 was created successfully.

Create a new file for listing owned buckets:
[user@ruby_aws_sdk]$ vim list_owned_buckets.rb

Paste the following content into the file:
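A list_owned_buckets.rb sketch using the aws-sdk gem's list_buckets call, matching the name-and-date output shown below:

```ruby
#!/usr/bin/env ruby
load 'conn.rb'

# Print each owned bucket's name and creation time.
s3_client = Aws::S3::Client.new
s3_client.list_buckets.buckets.each do |bucket|
  puts "#{bucket.name}\t#{bucket.creation_date}"
end
```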
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x list_owned_buckets.rb

Run the file:

[user@ruby_aws_sdk]$ ./list_owned_buckets.rb

The output should look something like this:

my-new-bucket2 2022-04-21 10:33:19 UTC

Create a new file for creating an object:
[user@ruby_aws_sdk]$ vim create_object.rb

Paste the following contents into the file:
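A create_object.rb sketch using the aws-sdk gem's put_object call; the bucket and object names come from the surrounding text:

```ruby
#!/usr/bin/env ruby
load 'conn.rb'

# Upload the string "Hello World!" as the object hello.txt.
s3_client = Aws::S3::Client.new
s3_client.put_object(
  bucket: 'my-new-bucket2',
  key: 'hello.txt',
  body: 'Hello World!'
)
```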
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x create_object.rb

Run the file:

[user@ruby_aws_sdk]$ ./create_object.rb

This creates an object hello.txt with the string Hello World!.

Create a new file for listing a bucket's content:
[user@ruby_aws_sdk]$ vim list_bucket_content.rb

Paste the following content into the file:
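A list_bucket_content.rb sketch using the aws-sdk gem's list_objects call, matching the key, size, and last-modified output shown below:

```ruby
#!/usr/bin/env ruby
load 'conn.rb'

# Print each object's key, size, and last-modified time.
s3_client = Aws::S3::Client.new
s3_client.list_objects(bucket: 'my-new-bucket2').contents.each do |obj|
  puts "#{obj.key}\t#{obj.size}\t#{obj.last_modified}"
end
```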
Save the file and exit the editor.
Make the file executable:

[user@ruby_aws_sdk]$ chmod +x list_bucket_content.rb

Run the file:

[user@ruby_aws_sdk]$ ./list_bucket_content.rb

The output will look something like this:

hello.txt 12 Fri, 22 Apr 2022 15:54:52 GMT

Create a new file for deleting an empty bucket:
[user@ruby_aws_sdk]$ vim del_empty_bucket.rb

Paste the following contents into the file:
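A del_empty_bucket.rb sketch using the aws-sdk gem's delete_bucket call:

```ruby
#!/usr/bin/env ruby
load 'conn.rb'

# Deleting a bucket fails unless the bucket is already empty.
s3_client = Aws::S3::Client.new
s3_client.delete_bucket(bucket: 'my-new-bucket2')
```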
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x del_empty_bucket.rb

Run the file:

[user@ruby_aws_sdk]$ ./del_empty_bucket.rb | echo $?

If the bucket is successfully deleted, the command returns 0 as output.

Note: Edit the create_bucket.rb file to create empty buckets, for example my-new-bucket6 and my-new-bucket7, and then edit the del_empty_bucket.rb file accordingly before trying to delete them.

Create a new file for deleting a non-empty bucket:
[user@ruby_aws_sdk]$ vim del_non_empty_bucket.rb

Paste the following contents into the file:
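A del_non_empty_bucket.rb sketch. It uses the aws-sdk gem's resource interface, where Aws::S3::Bucket#clear! deletes every object version in the bucket before the bucket itself is removed:

```ruby
#!/usr/bin/env ruby
load 'conn.rb'

# Remove every object (and delete marker) first, then the bucket itself.
s3_client = Aws::S3::Client.new
Aws::S3::Bucket.new('my-new-bucket2', client: s3_client).clear!
s3_client.delete_bucket(bucket: 'my-new-bucket2')
```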
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x del_non_empty_bucket.rb

Run the file:

[user@ruby_aws_sdk]$ ./del_non_empty_bucket.rb | echo $?

If the bucket is successfully deleted, the command returns 0 as output.

Create a new file for deleting an object:
[user@ruby_aws_sdk]$ vim delete_object.rb

Paste the following contents into the file:
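A delete_object.rb sketch using the aws-sdk gem's delete_object call:

```ruby
#!/usr/bin/env ruby
load 'conn.rb'

# Delete a single object by bucket and key.
s3_client = Aws::S3::Client.new
s3_client.delete_object(bucket: 'my-new-bucket2', key: 'hello.txt')
```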
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x delete_object.rb

Run the file:

[user@ruby_aws_sdk]$ ./delete_object.rb

This deletes the object hello.txt.
2.3.8. Accessing the Ceph Object Gateway using PHP
You can use PHP scripts for S3 access. This procedure provides some example PHP scripts to do various tasks, such as deleting a bucket or an object.
Important: The examples given below are tested against PHP v5.4.16 and aws-sdk v2.8.24. Do not use the latest version of aws-sdk for PHP, because it requires PHP 5.5 or later. PHP 5.5 is not available in the default repositories of RHEL 7; to use PHP 5.5, you must enable EPEL and other third-party repositories. The configuration options for PHP 5.5 and the latest version of aws-sdk also differ.
Prerequisites
- Root-level access to a development workstation.
- Internet access.
Procedure
Install the php package:

[root@dev ~]# yum install php

Download the zip archive of aws-sdk for PHP and extract it.

Create a project directory:
[user@dev ~]$ mkdir php_s3
[user@dev ~]$ cd php_s3

Copy the extracted aws directory to the project directory. For example:

[user@php_s3]$ cp -r ~/Downloads/aws/ ~/php_s3/

Create the connection file:
[user@php_s3]$ vim conn.php

Paste the following contents into the conn.php file:
Replace FQDN_OF_GATEWAY_NODE with the FQDN of the gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when creating the radosgw user for S3 access, as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide. Replace PATH_TO_AWS with the absolute path to the extracted aws directory that you copied to the php project directory.

Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f conn.php | echo $?

If you have provided the values correctly in the file, the output of the command is 0.

Create a new file for creating a bucket:
[user@php_s3]$ vim create_bucket.php

Paste the following contents into the new file:
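A create_bucket.php sketch using the aws-sdk v2 createBucket() call; the bucket name my-new-bucket3 comes from the surrounding text:

```php
<?php
include 'conn.php';

// Create the bucket used by the rest of the examples.
$client->createBucket(array('Bucket' => 'my-new-bucket3'));
?>
```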
Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f create_bucket.php

Create a new file for listing owned buckets:
[user@php_s3]$ vim list_owned_buckets.php

Paste the following content into the file:
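A list_owned_buckets.php sketch using the aws-sdk v2 listBuckets() call, matching the name-and-date output shown below:

```php
<?php
include 'conn.php';

// Print each owned bucket's name and creation time.
$blist = $client->listBuckets();
echo "Buckets belonging to {$blist['Owner']['ID']}:\n";
foreach ($blist['Buckets'] as $b) {
    echo "{$b['Name']}\t{$b['CreationDate']}\n";
}
?>
```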
Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f list_owned_buckets.php

The output should look similar to this:

my-new-bucket3 2022-04-21 10:33:19 UTC

Create an object by first creating a source file named hello.txt:
[user@php_s3]$ echo "Hello World!" > hello.txt

Create a new PHP file:

[user@php_s3]$ vim create_object.php

Paste the following contents into the file:
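A create_object.php sketch using the aws-sdk v2 upload() helper, which takes the bucket, key, body, and ACL:

```php
<?php
include 'conn.php';

// Upload the local hello.txt file as an object in my-new-bucket3.
$key    = 'hello.txt';
$source = fopen('./hello.txt', 'r');
$client->upload('my-new-bucket3', $key, $source, 'private');
?>
```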
Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f create_object.php

This creates the object hello.txt in the bucket my-new-bucket3.

Create a new file for listing a bucket's content:
[user@php_s3]$ vim list_bucket_content.php

Paste the following content into the file:
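A list_bucket_content.php sketch using the aws-sdk v2 ListObjects iterator, matching the key, size, and last-modified output shown below:

```php
<?php
include 'conn.php';

// Iterate over the bucket's objects, printing key, size, and mtime.
$o_iter = $client->getIterator('ListObjects', array(
    'Bucket' => 'my-new-bucket3'
));
foreach ($o_iter as $o) {
    echo "{$o['Key']}\t{$o['Size']}\t{$o['LastModified']}\n";
}
?>
```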
Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f list_bucket_content.php

The output will look similar to this:

hello.txt 12 Fri, 22 Apr 2022 15:54:52 GMT

Create a new file for deleting an empty bucket:
[user@php_s3]$ vim del_empty_bucket.php

Paste the following contents into the file:
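A del_empty_bucket.php sketch using the aws-sdk v2 deleteBucket() call:

```php
<?php
include 'conn.php';

// Deleting a bucket fails unless the bucket is already empty.
$client->deleteBucket(array('Bucket' => 'my-new-bucket3'));
?>
```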
Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f del_empty_bucket.php | echo $?

If the bucket is successfully deleted, the command returns 0 as output.

Note: Edit the create_bucket.php file to create empty buckets, for example my-new-bucket4 and my-new-bucket5, and then edit the del_empty_bucket.php file accordingly before trying to delete them.

Important: Deleting a non-empty bucket is currently not supported in PHP 2 and newer versions of aws-sdk.

Create a new file for deleting an object:
[user@php_s3]$ vim delete_object.php

Paste the following contents into the file:
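A delete_object.php sketch using the aws-sdk v2 deleteObject() call:

```php
<?php
include 'conn.php';

// Delete a single object by bucket and key.
$client->deleteObject(array(
    'Bucket' => 'my-new-bucket3',
    'Key'    => 'hello.txt'
));
?>
```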
Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f delete_object.php

This deletes the object hello.txt.
2.3.9. Accessing the Ceph Object Gateway using AWS CLI
You can use the AWS CLI for S3 access. This procedure provides steps for installing AWS CLI and some example commands to perform various tasks, such as deleting an object from an MFA-Delete enabled bucket.
Prerequisites
- User-level access to Ceph Object Gateway.
- Root-level access to a development workstation.
-
A multi-factor authentication (MFA) TOTP token created using radosgw-admin mfa create.
Procedure
Install the awscli package:

[user@dev]$ pip3 install --user awscli

Configure awscli to access Ceph Object Storage using the AWS CLI:
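As a sketch, the interactive configuration typically looks like the following; the values are placeholders, and the region and output format can be left empty:

```shell
aws configure --profile=MY_PROFILE_NAME
# The command prompts for the credentials:
#   AWS Access Key ID [None]: MY_ACCESS_KEY
#   AWS Secret Access Key [None]: MY_SECRET_KEY
#   Default region name [None]:
#   Default output format [None]:
```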
Replace MY_PROFILE_NAME with the name you want to use to identify this profile. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when creating the radosgw user for S3 access, as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide.
Create an alias to point to the FQDN of your Ceph Object Gateway node:
Syntax
alias aws="aws --endpoint-url=http://FQDN_OF_GATEWAY_NODE:8080"

Replace FQDN_OF_GATEWAY_NODE with the FQDN of the Ceph Object Gateway node.

Example

[user@dev]$ alias aws="aws --endpoint-url=http://testclient.englab.pnq.redhat.com:8080"

Create a new bucket:
Syntax
aws --profile=MY_PROFILE_NAME s3api create-bucket --bucket BUCKET_NAME

Replace MY_PROFILE_NAME with the name you created to use this profile. Replace BUCKET_NAME with a name for your new bucket.

Example

[user@dev]$ aws --profile=ceph s3api create-bucket --bucket mybucket

List owned buckets:
Syntax
aws --profile=MY_PROFILE_NAME s3api list-buckets

Replace MY_PROFILE_NAME with the name you created to use this profile.

Example
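The original example response is not reproduced above; an illustrative run might look like the following, with bucket names and dates specific to your cluster:

```shell
aws --profile=ceph s3api list-buckets
# Illustrative response:
# {
#     "Buckets": [
#         {
#             "Name": "mybucket",
#             "CreationDate": "2022-04-21T10:33:19.000Z"
#         }
#     ],
#     "Owner": {
#         "DisplayName": "test-user",
#         "ID": "testuser"
#     }
# }
```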
Configure a bucket for MFA-Delete:
Syntax
aws --profile=MY_PROFILE_NAME s3api put-bucket-versioning --bucket BUCKET_NAME --versioning-configuration '{"Status":"Enabled","MFADelete":"Enabled"}' --mfa 'TOTP_SERIAL TOTP_PIN'
- Replace MY_PROFILE_NAME with the name you created to use this profile.
- Replace BUCKET_NAME with the name of your new bucket.
- Replace TOTP_SERIAL with the string that represents the ID for the TOTP token, and replace TOTP_PIN with the current PIN displayed on your MFA authentication device.
- The TOTP_SERIAL is the string that was specified when you created the radosgw user for S3.
- See the Creating a new multi-factor authentication TOTP token section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more details on creating an MFA TOTP token.
See the Creating a seed for multi-factor authentication using oathtool section in the Red Hat Ceph Storage Developer Guide for details on creating an MFA seed with oathtool.
Example
[user@dev]$ aws --profile=ceph s3api put-bucket-versioning --bucket mybucket --versioning-configuration '{"Status":"Enabled","MFADelete":"Enabled"}' --mfa 'MFAtest 232009'
View the bucket's versioning state and MFA-Delete status:
Syntax
aws --profile=MY_PROFILE_NAME s3api get-bucket-versioning --bucket BUCKET_NAME

Replace MY_PROFILE_NAME with the name you created to use this profile. Replace BUCKET_NAME with the name of your new bucket.

Example
[user@dev]$ aws --profile=ceph s3api get-bucket-versioning --bucket mybucket
{
    "Status": "Enabled",
    "MFADelete": "Enabled"
}

Add an object to the MFA-Delete enabled bucket:
Syntax
aws --profile=MY_PROFILE_NAME s3api put-object --bucket BUCKET_NAME --key OBJECT_KEY --body LOCAL_FILE
- Replace MY_PROFILE_NAME with the name you created to use this profile.
- Replace BUCKET_NAME with the name of your new bucket.
- Replace OBJECT_KEY with the name that will uniquely identify the object in a bucket.
- Replace LOCAL_FILE with the name of the local file to upload.

Example
[user@dev]$ aws --profile=ceph s3api put-object --bucket mybucket --key example --body testfile
{
    "ETag": "\"5679b828547a4b44cfb24a23fd9bb9d5\"",
    "VersionId": "3VyyYPTEuIofdvMPWbr1znlOu7lJE3r"
}
List the object versions for a specific object:
Syntax
aws --profile=MY_PROFILE_NAME s3api list-object-versions --bucket BUCKET_NAME --key OBJECT_KEY
- Replace MY_PROFILE_NAME with the name you created to use this profile.
- Replace BUCKET_NAME with the name of your new bucket.
- Replace OBJECT_KEY with the name that was specified to uniquely identify the object in a bucket.

Example
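An illustrative response for the version listing might look like the following; the VersionId matches the one returned by put-object, and the exact fields depend on your gateway version:

```shell
aws --profile=ceph s3api list-object-versions --bucket mybucket --key example
# Illustrative response:
# {
#     "Versions": [
#         {
#             "ETag": "\"5679b828547a4b44cfb24a23fd9bb9d5\"",
#             "Key": "example",
#             "VersionId": "3VyyYPTEuIofdvMPWbr1znlOu7lJE3r",
#             "IsLatest": true,
#             "StorageClass": "STANDARD",
#             "Owner": { "ID": "testuser" }
#         }
#     ]
# }
```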
Delete an object in an MFA-Delete enabled bucket:
Syntax
aws --profile=MY_PROFILE_NAME s3api delete-object --bucket BUCKET_NAME --key OBJECT_KEY --version-id VERSION_ID --mfa 'TOTP_SERIAL TOTP_PIN'
- Replace MY_PROFILE_NAME with the name you created to use this profile.
- Replace BUCKET_NAME with the name of the bucket that contains the object to delete.
- Replace OBJECT_KEY with the name that uniquely identifies the object in a bucket.
- Replace VERSION_ID with the VersionId of the specific version of the object you want to delete.
- Replace TOTP_SERIAL with the string that represents the ID for the TOTP token, and replace TOTP_PIN with the current PIN displayed on your MFA authentication device.

Example
[user@dev]$ aws --profile=ceph s3api delete-object --bucket mybucket --key example --version-id 3VyyYPTEuIofdvMPWbr1znlOu7lJE3r --mfa 'MFAtest 420797'
{
    "VersionId": "3VyyYPTEuIofdvMPWbr1znlOu7lJE3r"
}

If the MFA token is not included, the request fails with the error shown below.
Example
[user@dev]$ aws --profile=ceph s3api delete-object --bucket mybucket --key example --version-id 3VyyYPTEuIofdvMPWbr1znlOu7lJE3r

An error occurred (AccessDenied) when calling the DeleteObject operation: Unknown
List the object versions to verify that the object was deleted from the MFA-Delete enabled bucket:
Syntax
aws --profile=MY_PROFILE_NAME s3api list-object-versions --bucket BUCKET_NAME --key OBJECT_KEY
- Replace MY_PROFILE_NAME with the name you created to use this profile.
- Replace BUCKET_NAME with the name of your bucket.
- Replace OBJECT_KEY with the name that uniquely identifies the object in a bucket.
2.3.10. Creating a seed for multi-factor authentication using the oathtool command
To set up multi-factor authentication (MFA), you must create a secret seed for use by the time-based one time password (TOTP) generator and the back-end MFA system. You can use oathtool to generate the hexadecimal seed and optionally qrencode to create a QR code to import the token into your MFA device.
Prerequisites
- A Linux system.
- Access to the command line shell.
- root or sudo access to the Linux system.
Procedure
Install the oathtool package:

[root@dev]# dnf install oathtool

Install the qrencode package:

[root@dev]# dnf install qrencode

Generate a 30-character seed from the urandom Linux device file and store it in the shell variable SEED:

Example
[user@dev]$ SEED=$(head -10 /dev/urandom | sha512sum | cut -b 1-30)

Print the seed by running echo on the SEED variable:

Example
[user@dev]$ echo $SEED
BA6GLJBJIKC3D7W7YFYXXAQ7

Feed the SEED into the oathtool command:

Syntax
oathtool -v -d6 $SEED

Example
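An illustrative oathtool run might look like the following; your secret and one-time password will differ:

```shell
oathtool -v -d6 $SEED
# Illustrative output:
# Hex secret: (hex form of the seed)
# Base32 secret: BA6GLJBJIKC3D7W7YFYXXAQ7
# Digits: 6
# Window size: 0
# Start counter: 0x0 (0)
# 823816
```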
Note: The base32 secret is needed to add a token to the authenticator application on your MFA device. You can either use the QR code to import the token into the authenticator application or use the base32 secret to add it manually.
Optional: Create a QR code image file to add the token to the authenticator:
Syntax
qrencode -o /tmp/user.png 'otpauth://totp/TOTP_SERIAL?secret=BASE32_SECRET'

Replace TOTP_SERIAL with the string that represents the ID for the TOTP token, and BASE32_SECRET with the base32 secret generated by oathtool.

Example
[user@dev]$ qrencode -o /tmp/user.png 'otpauth://totp/MFAtest?secret=BA6GLJBJIKC3D7W7YFYXXAQ7'

- Scan the generated QR code image file to add the token to the authenticator application on your MFA device.
- Create the multi-factor authentication TOTP token for the user using the radosgw-admin command.
2.3.11. Secure Token Service
The Amazon Web Services Secure Token Service (STS) returns a set of temporary security credentials for authenticating users. The Ceph Object Gateway implements a subset of the STS application programming interfaces (APIs) to provide temporary credentials for identity and access management (IAM). These temporary credentials are used to authenticate S3 calls through the STS engine in the Ceph Object Gateway. You can restrict the temporary credentials further with an IAM policy, which is a parameter passed to the STS APIs.
Additional Resources
- Amazon Web Services Secure Token Service welcome page.
- See the Configuring and using STS Lite with Keystone section of the Red Hat Ceph Storage Developer Guide for details on STS Lite and Keystone.
- See the Working around the limitations of using STS Lite with Keystone section of the Red Hat Ceph Storage Developer Guide for details on the limitations of STS Lite and Keystone.
2.3.11.1. The Secure Token Service application programming interfaces
The Ceph Object Gateway implements the following Secure Token Service (STS) application programming interfaces (APIs):
AssumeRole
This API returns a set of temporary credentials for cross-account access. The temporary credentials are governed both by the permission policies attached to the role and by the policy passed to the AssumeRole API. The RoleArn and RoleSessionName request parameters are required; the other request parameters are optional.
RoleArn- Description
- The role to assume for the Amazon Resource Name (ARN) with a length of 20 to 2048 characters.
- Type
- String
- Required
- Yes
RoleSessionName- Description
-
Identifies the role session name to assume. The role session name can uniquely identify a session when different principals or different reasons assume a role. This parameter's value has a length of 2 to 64 characters. The =, ,, ., @, and - characters are allowed, but spaces are not. - Type
- String
- Required
- Yes
Policy- Description
An identity and access management (IAM) policy in JSON format for use in an inline session. This parameter's value has a length of 1 to 2048 characters.
- Type
- String
- Required
- No
DurationSeconds- Description
-
The duration of the session in seconds, with a minimum value of 900 seconds and a maximum value of 43200 seconds. The default value is 3600 seconds. - Type
- Integer
- Required
- No
ExternalId- Description
- When assuming a role for another account, provide the unique external identifier if available. This parameter’s value has a length of 2 to 1224 characters.
- Type
- String
- Required
- No
SerialNumber- Description
- A user’s identification number from their associated multi-factor authentication (MFA) device. The parameter’s value can be the serial number of a hardware device or a virtual device, with a length of 9 to 256 characters.
- Type
- String
- Required
- No
TokenCode- Description
The value generated from the multi-factor authentication (MFA) device, if the trust policy requires MFA. If an MFA device is required and this parameter's value is empty or expired, the AssumeRole call returns an "access denied" error message. This parameter's value has a fixed length of 6 characters.
- Type
- String
- Required
- No
AssumeRoleWithWebIdentity
This API returns a set of temporary credentials for users who have been authenticated by an application with an identity provider, such as OpenID Connect or an OAuth 2.0 identity provider. The RoleArn and RoleSessionName request parameters are required; the other request parameters are optional.
RoleArn- Description
- The role to assume for the Amazon Resource Name (ARN) with a length of 20 to 2048 characters.
- Type
- String
- Required
- Yes
RoleSessionName- Description
-
Identifies the role session name to assume. The role session name can uniquely identify a session when different principals or different reasons assume a role. This parameter's value has a length of 2 to 64 characters. The =, ,, ., @, and - characters are allowed, but spaces are not. - Type
- String
- Required
- Yes
Policy- Description
An identity and access management (IAM) policy in JSON format for use in an inline session. This parameter's value has a length of 1 to 2048 characters.
- Type
- String
- Required
- No
DurationSeconds- Description
-
The duration of the session in seconds, with a minimum value of 900 seconds and a maximum value of 43200 seconds. The default value is 3600 seconds. - Type
- Integer
- Required
- No
ProviderId- Description
- The fully qualified host component of the domain name from the identity provider. This parameter’s value is only valid for OAuth 2.0 access tokens, with a length of 4 to 2048 characters.
- Type
- String
- Required
- No
WebIdentityToken- Description
- The OpenID Connect identity token or OAuth 2.0 access token provided from an identity provider. This parameter’s value has a length of 4 to 2048 characters.
- Type
- String
- Required
- No
Additional Resources
- See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details.
- Amazon Web Services Security Token Service, the AssumeRole action.
- Amazon Web Services Security Token Service, the AssumeRoleWithWebIdentity action.
2.3.11.2. Configuring the Secure Token Service
Configure the Secure Token Service (STS) for use with the Ceph Object Gateway using Ceph Ansible.
The S3 and STS APIs co-exist in the same namespace, and both can be accessed from the same endpoint in the Ceph Object Gateway.
Prerequisites
- A Ceph Ansible administration node.
- A running Red Hat Ceph Storage cluster.
- A running Ceph Object Gateway.
Procedure
Open the group_vars/rgws.yml file for editing.

Add the following lines:
rgw_sts_key = STS_KEY
rgw_s3_auth_use_sts = true
-
STS_KEYwith the key used to encrypted the session token.
-
-
Save the changes to the group_vars/rgws.yml file.

Rerun the appropriate Ceph Ansible playbook:
Bare-metal deployments:

[user@admin ceph-ansible]$ ansible-playbook site.yml --limit rgws

Container deployments:

[user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit rgws
Additional Resources
- See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs.
2.3.11.3. Creating a user for an OpenID Connect provider
To establish trust between the Ceph Object Gateway and the OpenID Connect provider, create a user entity and a role trust policy.
Prerequisites
- User-level access to the Ceph Object Gateway node.
Procedure
Create a new Ceph user:
Syntax
radosgw-admin --uid USER_NAME --display-name "DISPLAY_NAME" --access_key USER_NAME --secret SECRET user create

Example

[user@rgw ~]$ radosgw-admin --uid TESTER --display-name "TestUser" --access_key TESTER --secret test123 user create

Configure the Ceph user capabilities:
Syntax
radosgw-admin caps add --uid="USER_NAME" --caps="oidc-provider=*"

Example

[user@rgw ~]$ radosgw-admin caps add --uid="TESTER" --caps="oidc-provider=*"

Add a condition to the role trust policy using the Secure Token Service (STS) API:
Syntax
"{\"Version\":\"2020-01-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Federated\":[\"arn:aws:iam:::oidc-provider/IDP_URL\"]},\"Action\":[\"sts:AssumeRoleWithWebIdentity\"],\"Condition\":{\"StringEquals\":{\"IDP_URL:app_id\":\"AUD_FIELD\"}}}]}"
Important: The app_id in the syntax example above must match the AUD_FIELD field of the incoming token.
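The same trust policy can be assembled programmatically before passing it to the role-creation command. This is a sketch only; the IDP_URL and AUD_FIELD values below are placeholders that must be replaced with your provider's URL and the token's aud field:

```python
import json

# Placeholder values; substitute the real IDP URL and the token's "aud" field.
IDP_URL = "www.example.com:8000/auth"
AUD_FIELD = "customer-portal"

trust_policy = {
    "Version": "2020-01-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": ["arn:aws:iam:::oidc-provider/" + IDP_URL]},
        "Action": ["sts:AssumeRoleWithWebIdentity"],
        # app_id must match the AUD_FIELD field of the incoming token
        "Condition": {"StringEquals": {IDP_URL + ":app_id": AUD_FIELD}},
    }],
}
policy_doc = json.dumps(trust_policy)  # pass as the assume-role policy document
print(policy_doc)
```

Building the document as a dictionary and serializing it avoids the shell-escaping errors that hand-written policy strings are prone to.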
Additional Resources
- See the Obtaining the Root CA Thumbprint for an OpenID Connect Identity Provider article on Amazon’s website.
- See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs.
- See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details.
2.3.11.4. Obtaining a thumbprint of an OpenID Connect provider Copy linkLink copied to clipboard!
Use this procedure to get the OpenID Connect provider's (IDP) configuration document and obtain the certificate thumbprint.
Prerequisites
- Installation of the openssl and curl packages.
Procedure
Get the configuration document from the IDP’s URL:
Syntax
curl -k -v \
     -X GET \
     -H "Content-Type: application/x-www-form-urlencoded" \
     "IDP_URL:8000/CONTEXT/realms/REALM/.well-known/openid-configuration" \
     | jq .
[user@client ~]$ curl -k -v \
     -X GET \
     -H "Content-Type: application/x-www-form-urlencoded" \
     "http://www.example.com:8000/auth/realms/quickstart/.well-known/openid-configuration" \
     | jq .
Get the IDP certificate:
Syntax
curl -k -v \
     -X GET \
     -H "Content-Type: application/x-www-form-urlencoded" \
     "IDP_URL/CONTEXT/realms/REALM/protocol/openid-connect/certs" \
     | jq .
[user@client ~]$ curl -k -v \
     -X GET \
     -H "Content-Type: application/x-www-form-urlencoded" \
     "http://www.example.com/auth/realms/quickstart/protocol/openid-connect/certs" \
     | jq .
Copy the result of the "x5c" response from the previous command and paste it into the
certificate.crt file. Include -----BEGIN CERTIFICATE----- at the beginning and -----END CERTIFICATE----- at the end. Get the certificate thumbprint:
Syntax
openssl x509 -in CERT_FILE -fingerprint -noout
Example
[user@client ~]$ openssl x509 -in certificate.crt -fingerprint -noout
SHA1 Fingerprint=F7:D7:B3:51:5D:D0:D3:19:DD:21:9A:43:A9:EA:72:7A:D6:06:52:87
- Remove all the colons from the SHA1 fingerprint and use this as the input for creating the IDP entity in the IAM request.
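The same computation can be done without openssl. The sketch below assumes the "x5c" value is the standard base64-encoded DER certificate (as in RFC 7517); the dummy bytes stand in for a real certificate:

```python
import base64
import hashlib

def thumbprint_from_x5c(x5c_value):
    """Base64-decode the first "x5c" entry to DER and return the SHA1
    fingerprint as uppercase hex with no colons, which is the form the
    IAM request expects."""
    der = base64.b64decode(x5c_value)
    return hashlib.sha1(der).hexdigest().upper()

# Demonstration with dummy bytes in place of a real DER certificate:
dummy = base64.b64encode(b"not-a-real-certificate").decode()
print(thumbprint_from_x5c(dummy))  # 40 hex characters, no colons
```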
Additional Resources
- See the Obtaining the Root CA Thumbprint for an OpenID Connect Identity Provider article on Amazon’s website.
- See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs.
- See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details.
2.3.11.5. Configuring and using STS Lite with Keystone (Technology Preview) Copy linkLink copied to clipboard!
The Amazon Secure Token Service (STS) and S3 APIs co-exist in the same namespace. The STS options can be configured in conjunction with the Keystone options.
Both S3 and STS APIs can be accessed using the same endpoint in Ceph Object Gateway.
Prerequisites
- Red Hat Ceph Storage 3.2 or higher.
- A running Ceph Object Gateway.
- Installation of the Boto Python module, version 3 or higher.
Procedure
Open and edit the group_vars/rgws.yml file with the following options:
rgw_sts_key = STS_KEY
rgw_s3_auth_use_sts = true
Replace:
- STS_KEY with the key used to encrypt the session token.
Rerun the appropriate Ceph Ansible playbook:
Bare-metal deployments:
[user@admin ceph-ansible]$ ansible-playbook site.yml --limit rgws
Container deployments:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit rgws
Generate the EC2 credentials.
Use the generated credentials to get back a set of temporary security credentials with the GetSessionToken API.
The temporary credentials can then be used for making S3 calls.
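A minimal boto3 sketch of this flow. The endpoint URL and key names are hypothetical, and the client calls require a live gateway, so they are shown as commented usage:

```python
def s3_client_kwargs(creds, endpoint):
    """Map the Credentials section of a GetSessionToken response to
    boto3 S3 client keyword arguments."""
    return {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        "aws_session_token": creds["SessionToken"],
        "endpoint_url": endpoint,
    }

# Usage against a live gateway (hypothetical endpoint and keys):
# import boto3
# endpoint = "http://rgw.example.com:8000"
# sts = boto3.client("sts", endpoint_url=endpoint,
#                    aws_access_key_id=EC2_ACCESS, aws_secret_access_key=EC2_SECRET)
# tmp = sts.get_session_token(DurationSeconds=3600)["Credentials"]
# s3 = boto3.client("s3", **s3_client_kwargs(tmp, endpoint))
# s3.create_bucket(Bucket="my-new-bucket")
```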
Create a new S3Access role and configure a policy.
Assign a user with administrative CAPS:
Syntax
radosgw-admin caps add --uid="USER" --caps="roles=*"
Example
[user@client]$ radosgw-admin caps add --uid="gwadmin" --caps="roles=*"
Create the S3Access role:
Syntax
radosgw-admin role create --role-name=ROLE_NAME --path=PATH --assume-role-policy-doc=TRUST_POLICY_DOC
Example
[user@client]$ radosgw-admin role create --role-name=S3Access --path=/application_abc/component_xyz/ --assume-role-policy-doc=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"AWS\":\[\"arn:aws:iam:::user/TESTER\"\]\},\"Action\":\[\"sts:AssumeRole\"\]\}\]\}
Attach a permission policy to the S3Access role:
Syntax
radosgw-admin role-policy put --role-name=ROLE_NAME --policy-name=POLICY_NAME --policy-doc=PERMISSION_POLICY_DOC
Example
[user@client]$ radosgw-admin role-policy put --role-name=S3Access --policy-name=Policy --policy-doc=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Action\":\[\"s3:*\"\],\"Resource\":\"arn:aws:s3:::example_bucket\"\}\]\}
Now another user can assume the role of the
gwadminuser. For example, thegwuseruser can assume the permissions of thegwadminuser. Make a note of the assuming user’s
access_keyandsecret_keyvalues.Example
radosgw-admin user info --uid=gwuser | grep -A1 access_key
[user@client]$ radosgw-admin user info --uid=gwuser | grep -A1 access_keyCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Use the AssumeRole API call, providing the access_key and secret_key values from the assuming user.
Important: The AssumeRole API requires the S3Access role.
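A boto3 sketch of the AssumeRole call. The role ARN format arn:aws:iam:::role/PATH/NAME is an assumption based on the role created earlier, and the endpoint is hypothetical:

```python
def assume_role_args(role_name, path="/", session_name="Bob"):
    """Build AssumeRole parameters for the role created above; the ARN
    format used here is an assumption."""
    return {
        "RoleArn": "arn:aws:iam:::role" + path + role_name,
        "RoleSessionName": session_name,
        "DurationSeconds": 3600,
    }

# Usage against a live gateway (hypothetical endpoint), with the assuming
# user's (gwuser) keys:
# import boto3
# sts = boto3.client("sts", endpoint_url="http://rgw.example.com:8000",
#                    aws_access_key_id=GWUSER_ACCESS, aws_secret_access_key=GWUSER_SECRET)
# resp = sts.assume_role(**assume_role_args("S3Access", path="/application_abc/component_xyz/"))
# creds = resp["Credentials"]  # temporary credentials scoped by the role's policy
```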
Additional Resources
- See the Test S3 Access section in the Red Hat Ceph Storage Object Gateway Guide for more information on installing the Boto Python module.
- See the Create a User section in the Red Hat Ceph Storage Object Gateway Guide for more information.
2.3.11.6. Working around the limitations of using STS Lite with Keystone (Technology Preview) Copy linkLink copied to clipboard!
A limitation of Keystone is that it does not support STS requests. Another limitation is that the payload hash is not included with the request. To work around these two limitations, the Boto authentication code must be modified.
Prerequisites
- A running Red Hat Ceph Storage cluster, version 3.2 or higher.
- A running Ceph Object Gateway.
- Installation of the Boto Python module, version 3 or higher.
Procedure
Open and edit Boto's auth.py file.
Add the following four lines to the code block:
Add the following two lines to the code block:
Additional Resources
- See the Test S3 Access section in the Red Hat Ceph Storage Object Gateway Guide for more information on installing the Boto Python module.
2.3.12. Session tags for Attribute-based access control (ABAC) in STS Copy linkLink copied to clipboard!
Session tags are key-value pairs that can be passed while federating a user. They are passed as aws:PrincipalTag in the session or temporary credentials that are returned by the secure token service (STS). These principal tags consist of session tags that come in as part of the web token and tags that are attached to the role being assumed.
Currently, the session tags are only supported as part of the web token passed to AssumeRoleWithWebIdentity.
The tags must always be specified in the following namespace: https://aws.amazon.com/tags.
The trust policy must have sts:TagSession permission if the web token passed in by the federated user contains session tags. Otherwise, the AssumeRoleWithWebIdentity action fails.
Example of the trust policy with sts:TagSession:
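A sketch of what such a trust policy can look like, with a hypothetical Keycloak IDP URL; the essential part is that sts:TagSession appears alongside sts:AssumeRoleWithWebIdentity in the Action list:

```python
import json

IDP_URL = "www.example.com:8000/auth"  # hypothetical Keycloak realm URL

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": ["arn:aws:iam:::oidc-provider/" + IDP_URL]},
        # sts:TagSession is required when the web token carries session tags
        "Action": ["sts:AssumeRoleWithWebIdentity", "sts:TagSession"],
    }],
}
print(json.dumps(trust_policy))
```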
Properties
The following are the properties of session tags:
- Session tags can be multi-valued.
Note: Multi-valued session tags are not supported in Amazon Web Services (AWS).
- Keycloak can be set up as an OpenID Connect Identity Provider (IDP) with a maximum of 50 session tags.
- The maximum size of a key allowed is 128 characters.
- The maximum size of a value allowed is 256 characters.
- The tag key or value cannot start with aws:.
2.3.12.1. Tag keys Copy linkLink copied to clipboard!
The following are the tag keys that can be used in the role trust policy or the role permission policy.
aws:RequestTag- Description
Compares the key-value pair passed in the request with the key-value pair in the role’s trust policy.
In the case of AssumeRoleWithWebIdentity, session tags can be used as aws:RequestTag in the role trust policy. Those session tags are passed by Keycloak in the web token. As a result, a federated user can assume a role.
aws:PrincipalTag- Description
Compares the key-value pair attached to the principal with the key-value pair in the policy.
In the case of AssumeRoleWithWebIdentity, session tags appear as principal tags in the temporary credentials once a user is authenticated. Those session tags are passed by Keycloak in the web token. They can be used as aws:PrincipalTag in the role permission policy.
iam:ResourceTag- Description
Compares the key-value pair attached to the resource with the key-value pair in the policy.
In the case of AssumeRoleWithWebIdentity, tags attached to the role are compared with those in the trust policy to allow a user to assume a role.
Note: The Ceph Object Gateway now supports RESTful APIs for tagging, listing tags, and untagging actions on a role.
aws:TagKeys- Description
Compares tags in the request with the tags in the policy.
In the case of AssumeRoleWithWebIdentity, tags are used to check the tag keys in a role trust policy or permission policy before a user is allowed to assume a role.
s3:ResourceTag- Description
Compares tags present on the S3 resource, that is, a bucket or an object, with the tags in the role's permission policy.
It can be used for authorizing an S3 operation in the Ceph Object Gateway. However, this is not allowed in AWS.
It is a key used to refer to tags that have been attached to an object or a bucket. Tags can be attached to an object or a bucket using RESTful APIs available for the same.
2.3.12.2. S3 resource tags Copy linkLink copied to clipboard!
The following list shows which S3 resource tag type is supported for authorizing a particular operation.
- Tag type: Object tags
- Operations
- GetObject, GetObjectTags, DeleteObjectTags, DeleteObject, PutACLs, InitMultipart, AbortMultipart, ListMultipart, GetAttrs, PutObjectRetention, GetObjectRetention, PutObjectLegalHold, GetObjectLegalHold
- Tag type: Bucket tags
- Operations
- PutObjectTags, GetBucketTags, PutBucketTags, DeleteBucketTags, GetBucketReplication, DeleteBucketReplication, GetBucketVersioning, SetBucketVersioning, GetBucketWebsite, SetBucketWebsite, DeleteBucketWebsite, StatBucket, ListBucket, GetBucketLogging, GetBucketLocation, DeleteBucket, GetLC, PutLC, DeleteLC, GetCORS, PutCORS, GetRequestPayment, SetRequestPayment, PutBucketPolicy, GetBucketPolicy, DeleteBucketPolicy, PutBucketObjectLock, GetBucketObjectLock, GetBucketPolicyStatus, PutBucketPublicAccessBlock, GetBucketPublicAccessBlock, DeleteBucketPublicAccessBlock
- Tag type: Bucket tags for bucket ACLs, Object tags for object ACLs
- Operations
- GetACLs, PutACLs
- Tag type: Object tags of source object, Bucket tags of destination bucket
- Operations
- PutObject, CopyObject
2.4. S3 bucket operations Copy linkLink copied to clipboard!
As a developer, you can perform bucket operations with the Amazon S3 application programming interface (API) through the Ceph Object Gateway.
The following table lists the Amazon S3 functional operations for buckets, along with each function's support status.
| Feature | Status | Notes |
|---|---|---|
| Supported | ||
| Supported | Different set of canned ACLs. | |
| Partially Supported |
| |
| Partially Supported |
| |
| Supported | ||
| Supported | ||
| Supported | ||
| Supported | ||
| Supported | ||
| Supported | ||
| Supported | Different set of canned ACLs | |
| Supported | Different set of canned ACLs | |
| Supported | ||
| Supported | ||
| Supported | ||
| Supported | ||
| Supported | ||
| Supported | ||
| Partially Supported | ||
| Supported | ||
| Supported | ||
| Supported |
2.4.1. Prerequisites Copy linkLink copied to clipboard!
- A running Red Hat Ceph Storage cluster.
- A RESTful client.
2.4.2. S3 create bucket notifications Copy linkLink copied to clipboard!
Create bucket notifications at the bucket level. The notification configuration specifies the Red Hat Ceph Storage Object Gateway S3 events, ObjectCreated and ObjectRemoved, to publish and the destination to which the bucket notifications are sent. Bucket notifications are S3 operations.
To create a bucket notification for s3:objectCreate and s3:objectRemove events, use PUT:
Example
Red Hat supports ObjectCreate events, such as put, post, multipartUpload, and copy. Red Hat also supports ObjectRemove events, such as object_delete and s3_multi_object_delete.
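With boto3, the request body can be expressed as a NotificationConfiguration dictionary. The topic ARN below is hypothetical and the topic must already exist; the put call itself needs a live gateway, so it is shown as commented usage:

```python
def topic_configuration(notification_id, topic_arn, events=None, prefix=None):
    """Build one TopicConfiguration entry. If events is omitted, all
    events are handled; prefix adds an S3Key filter rule."""
    cfg = {"Id": notification_id, "TopicArn": topic_arn}
    if events:
        cfg["Events"] = events
    if prefix is not None:
        cfg["Filter"] = {"Key": {"FilterRules": [{"Name": "prefix", "Value": prefix}]}}
    return cfg

notification = {"TopicConfigurations": [
    topic_configuration("testnotification",
                        "arn:aws:sns:default::mytopic",  # hypothetical topic ARN
                        events=["s3:ObjectCreated:*", "s3:ObjectRemoved:*"])]}
# s3.put_bucket_notification_configuration(Bucket="testbucket",
#                                          NotificationConfiguration=notification)
```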
Request Entities
NotificationConfiguration- Description
- List of TopicConfiguration entities.
- Container
- Required
- Yes
TopicConfiguration- Description
- Id, Topic, and list of Event entities.
- Container
- Required
- Yes
id- Description
- Name of the notification.
- Type
- String
- Required
- Yes
Topic- Description
Topic Amazon Resource Name (ARN)
Note: The topic must be created beforehand.
- Type
- String
- Required
- Yes
Event- Description
- List of supported events. Multiple event entities can be used. If omitted, all events are handled.
- Type
- String
- Required
- No
Filter- Description
- S3Key, S3Metadata, and S3Tags entities.
- Container
- Required
- No
S3Key- Description
- A list of FilterRule entities, for filtering based on the object key. At most, 3 entities may be in the list, for example Name would be prefix, suffix, or regex. All filter rules in the list must match for the filter to match.
- Container
- Required
- No
S3Metadata- Description
- A list of FilterRule entities, for filtering based on object metadata. All filter rules in the list must match the metadata defined on the object. However, the object still matches if it has other metadata entries not listed in the filter.
- Container
- Required
- No
S3Tags- Description
- A list of FilterRule entities, for filtering based on object tags. All filter rules in the list must match the tags defined on the object. However, the object still matches if it has other tags not listed in the filter.
- Container
- Required
- No
S3Key.FilterRule- Description
- Name and Value entities. Name is prefix, suffix, or regex. The Value would hold the key prefix, key suffix, or a regular expression for matching the key, accordingly.
- Container
- Required
- Yes
S3Metadata.FilterRule- Description
- Name and Value entities. Name is the name of the metadata attribute, for example x-amz-meta-xxx. The value is the expected value for this attribute.
- Container
- Required
- Yes
S3Tags.FilterRule- Description
- Name and Value entities. Name is the tag key, and the value is the tag value.
- Container
- Required
- Yes
HTTP response
400- Status Code
-
MalformedXML - Description
- The XML is not well-formed.
400- Status Code
-
InvalidArgument - Description
- Missing Id or missing or invalid topic ARN or invalid event.
404- Status Code
-
NoSuchBucket - Description
- The bucket does not exist.
404- Status Code
-
NoSuchKey - Description
- The topic does not exist.
2.4.3. S3 get bucket notifications Copy linkLink copied to clipboard!
Get a specific notification or list all the notifications configured on a bucket.
Syntax
GET /BUCKET?notification=NOTIFICATION_ID HTTP/1.1
Host: cname.domain.com
Date: date
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
Example
GET /testbucket?notification=testnotificationID HTTP/1.1
Host: cname.domain.com
Date: date
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
Example Response
The notification subresource returns the bucket notification configuration or an empty NotificationConfiguration element. The caller must be the bucket owner.
Request Entities
notification-id- Description
- Name of the notification. All notifications are listed if the ID is not provided.
- Type
- String
NotificationConfiguration- Description
- List of TopicConfiguration entities.
- Container
- Required
- Yes
TopicConfiguration- Description
- Id, Topic, and list of Event entities.
- Container
- Required
- Yes
id- Description
- Name of the notification.
- Type
- String
- Required
- Yes
Topic- Description
Topic Amazon Resource Name (ARN)
Note: The topic must be created beforehand.
- Type
- String
- Required
- Yes
Event- Description
- Handled event. Multiple event entities may exist.
- Type
- String
- Required
- Yes
Filter- Description
- The filters for the specified configuration.
- Type
- Container
- Required
- No
HTTP response
404- Status Code
-
NoSuchBucket - Description
- The bucket does not exist.
404- Status Code
-
NoSuchKey - Description
The notification does not exist, if a notification ID was provided.
2.4.4. S3 delete bucket notifications Copy linkLink copied to clipboard!
Delete a specific or all notifications from a bucket.
Notification deletion is an extension to the S3 notification API. Any defined notifications on a bucket are deleted when the bucket is deleted. Deleting an unknown notification, for example a double delete, is not considered an error.
To delete a specific or all notifications use DELETE:
Syntax
DELETE /BUCKET?notification=NOTIFICATION_ID HTTP/1.1
Example
DELETE /testbucket?notification=testnotificationID HTTP/1.1
Request Entities
notification-id- Description
- Name of the notification. All notifications on the bucket are deleted if the notification ID is not provided.
- Type
- String
HTTP response
404- Status Code
-
NoSuchBucket - Description
- The bucket does not exist.
2.4.5. Accessing bucket host names Copy linkLink copied to clipboard!
There are two different modes of accessing the buckets. The first, and preferred method identifies the bucket as the top-level directory in the URI.
Example
GET /mybucket HTTP/1.1
Host: cname.domain.com
The second method identifies the bucket via a virtual bucket host name.
Example
GET / HTTP/1.1
Host: mybucket.cname.domain.com
Red Hat prefers the first method, because the second method requires expensive domain certification and DNS wildcards.
2.4.6. S3 list buckets Copy linkLink copied to clipboard!
GET / returns a list of buckets created by the user making the request. GET / only returns buckets created by an authenticated user. You cannot make an anonymous request.
Syntax
GET / HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
| Name | Type | Description |
|---|---|---|
|
| Container | Container for list of buckets. |
|
| Container | Container for bucket information. |
|
| String | Bucket name. |
|
| Date | UTC time when the bucket was created. |
|
| Container | A container for the result. |
|
| Container |
A container for the bucket owner’s |
|
| String | The bucket owner’s ID. |
|
| String | The bucket owner’s display name. |
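With boto3, this request is exposed as list_buckets; the helper below extracts the bucket names from the response dictionary (the endpoint and keys in the usage comment are hypothetical):

```python
def bucket_names(response):
    """Extract bucket names from a list_buckets response dictionary."""
    return [b["Name"] for b in response.get("Buckets", [])]

# Usage against a live gateway:
# import boto3
# s3 = boto3.client("s3", endpoint_url="http://cname.domain.com",
#                   aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
# print(bucket_names(s3.list_buckets()))
```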
2.4.7. S3 return a list of bucket objects Copy linkLink copied to clipboard!
Returns a list of bucket objects.
Syntax
GET /BUCKET?max-keys=25 HTTP/1.1
Host: cname.domain.com
| Name | Type | Description |
|---|---|---|
|
| String | Only returns objects that contain the specified prefix. |
|
| String | The delimiter between the prefix and the rest of the object name. |
|
| String | A beginning index for the list of objects returned. |
|
| Integer | The maximum number of keys to return. Default is 1000. |
| HTTP Status | Status Code | Description |
|---|---|---|
|
| OK | Buckets retrieved |
GET /BUCKET returns a container for buckets with the following fields:
| Name | Type | Description |
|---|---|---|
|
| Entity | The container for the list of objects. |
|
| String | The name of the bucket whose contents will be returned. |
|
| String | A prefix for the object keys. |
|
| String | A beginning index for the list of objects returned. |
|
| Integer | The maximum number of keys returned. |
|
| String |
If set, objects with the same prefix will appear in the |
|
| Boolean |
If |
|
| Container | If multiple objects contain the same prefix, they will appear in this list. |
The ListBucketResult contains objects, where each object is within a Contents container.
| Name | Type | Description |
|---|---|---|
|
| Object | A container for the object. |
|
| String | The object’s key. |
|
| Date | The object’s last-modified date/time. |
|
| String | An MD5 hash of the object (entity tag). |
|
| Integer | The object’s size. |
|
| String |
Should always return |
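With boto3, this request maps to list_objects_v2; a paginator handles the IsTruncated/marker continuation automatically. The endpoint in the usage comment is hypothetical:

```python
def iter_keys(pages):
    """Yield object keys from a sequence of list_objects_v2 response pages."""
    for page in pages:
        for obj in page.get("Contents", []):
            yield obj["Key"]

# Usage against a live gateway:
# import boto3
# s3 = boto3.client("s3", endpoint_url="http://cname.domain.com",
#                   aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
# pages = s3.get_paginator("list_objects_v2").paginate(
#     Bucket="testbucket", Prefix="logs/", Delimiter="/", MaxKeys=25)
# print(list(iter_keys(pages)))
```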
2.4.8. S3 create a new bucket Copy linkLink copied to clipboard!
Creates a new bucket. To create a bucket, you must have a user ID and a valid AWS Access Key ID to authenticate requests. You cannot create buckets as an anonymous user.
Constraints
In general, bucket names should follow domain name constraints.
- Bucket names must be unique.
- Bucket names must begin and end with a lowercase letter.
- Bucket names can contain a dash (-).
Syntax
PUT /BUCKET HTTP/1.1
Host: cname.domain.com
x-amz-acl: public-read-write
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
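A sketch that checks the naming constraints above before issuing the request with boto3. The regex reflects only the rules listed here; the gateway may enforce additional rules, and the endpoint in the usage comment is hypothetical:

```python
import re

def valid_bucket_name(name):
    """True if the name begins and ends with a lowercase letter and
    contains only lowercase letters, digits, and dashes in between."""
    return re.fullmatch(r"[a-z](?:[a-z0-9-]*[a-z])?", name) is not None

# Usage against a live gateway:
# import boto3
# s3 = boto3.client("s3", endpoint_url="http://cname.domain.com",
#                   aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
# if valid_bucket_name("my-new-bucket"):
#     s3.create_bucket(Bucket="my-new-bucket", ACL="public-read-write")
```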
| Name | Description | Valid Values | Required |
|---|---|---|---|
|
| Canned ACLs. |
| No |
HTTP Response
If the bucket name is unique, within constraints, and unused, the operation will succeed. If a bucket with the same name already exists and the user is the bucket owner, the operation will succeed. If the bucket name is already in use, the operation will fail.
| HTTP Status | Status Code | Description |
|---|---|---|
|
| BucketAlreadyExists | Bucket already exists under different user’s ownership. |
2.4.9. S3 delete a bucket Copy linkLink copied to clipboard!
Deletes a bucket. You can reuse bucket names following a successful bucket removal.
Syntax
DELETE /BUCKET HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
| HTTP Status | Status Code | Description |
|---|---|---|
|
| No Content | Bucket removed. |
2.4.10. S3 bucket lifecycle Copy linkLink copied to clipboard!
You can use a bucket lifecycle configuration to manage your objects so they are stored effectively throughout their lifetime. The S3 API in the Ceph Object Gateway supports a subset of the AWS bucket lifecycle actions:
- Expiration: This defines the lifespan of objects within a bucket. It takes the number of days the object should live or an expiration date, at which point the Ceph Object Gateway deletes the object. If the bucket does not enable versioning, the Ceph Object Gateway deletes the object permanently. If the bucket enables versioning, the Ceph Object Gateway creates a delete marker for the current version, and then deletes the current version.
- NoncurrentVersionExpiration: This defines the lifespan of non-current object versions within a bucket. To use this feature, the bucket must enable versioning. It takes the number of days a non-current object should live, at which point the Ceph Object Gateway deletes the non-current object.
- AbortIncompleteMultipartUpload: This defines the number of days an incomplete multipart upload should live before it is aborted.
The lifecycle configuration contains one or more rules using the <Rule> element.
Example
A lifecycle rule can apply to all or a subset of objects in a bucket based on the <Filter> element that you specify in the lifecycle rule. You can specify a filter several ways:
- Key prefixes
- Object tags
- Both key prefix and one or more object tags
Key prefixes
You can apply a lifecycle rule to a subset of objects based on the key name prefix. For example, specifying <keypre/> would apply to objects that begin with keypre/:
You can also apply different lifecycle rules to objects with different key prefixes:
Object tags
You can apply a lifecycle rule to only objects with a specific tag using the <Key> and <Value> elements:
Both prefix and one or more tags
In a lifecycle rule, you can specify a filter based on both the key prefix and one or more tags. They must be wrapped in the <And> element. A filter can have only one prefix, and zero or more tags:
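In boto3 terms, a rule combining one prefix with a tag under <And> looks like the following dictionary; the rule ID and values are hypothetical, and the put call needs a live gateway:

```python
lifecycle = {"Rules": [{
    "ID": "expire-tagged-logs",            # hypothetical rule name
    "Status": "Enabled",
    "Filter": {"And": {                    # one prefix plus one or more tags
        "Prefix": "logs/",
        "Tags": [{"Key": "retention", "Value": "short"}],
    }},
    "Expiration": {"Days": 30},
}]}
# s3.put_bucket_lifecycle_configuration(Bucket="testbucket",
#                                       LifecycleConfiguration=lifecycle)
```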
Additional Resources
- See the Red Hat Ceph Storage Developer Guide for details on getting a bucket lifecycle.
- See the Red Hat Ceph Storage Developer Guide for details on creating a bucket lifecycle.
- See the Red Hat Ceph Storage Developer Guide for details to delete a bucket lifecycle.
2.4.11. S3 GET bucket lifecycle Copy linkLink copied to clipboard!
To get a bucket lifecycle, use GET and specify a destination bucket.
Syntax
GET /BUCKET?lifecycle HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
Request Headers
See the Common Request Headers for more information.
Response
The response contains the bucket lifecycle and its elements.
2.4.12. S3 create or replace a bucket lifecycle Copy linkLink copied to clipboard!
To create or replace a bucket lifecycle, use PUT and specify a destination bucket and a lifecycle configuration. The Ceph Object Gateway only supports a subset of the S3 lifecycle functionality.
Syntax
| Name | Description | Valid Values | Required |
|---|---|---|---|
| content-md5 | A base64-encoded MD5 hash of the message. | A string. No defaults or constraints. | No |
Additional Resources
- See the Red Hat Ceph Storage Developer Guide for details on common Amazon S3 request headers.
- See the Red Hat Ceph Storage Developer Guide for details on Amazon S3 bucket lifecycles.
2.4.13. S3 delete a bucket lifecycle Copy linkLink copied to clipboard!
To delete a bucket lifecycle, use DELETE and specify a destination bucket.
Syntax
DELETE /BUCKET?lifecycle HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
Request Headers
The request does not contain any special elements.
Response
The response returns common response status.
Additional Resources
- See Appendix A for Amazon S3 common request headers.
- See Appendix B for Amazon S3 common response status codes.
2.4.14. S3 get bucket location Copy linkLink copied to clipboard!
Retrieves the bucket’s zone group. The user needs to be the bucket owner to call this. A bucket can be constrained to a zone group by providing LocationConstraint during a PUT request.
Add the location subresource to the bucket resource as shown below.
Syntax
GET /BUCKET?location HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
| Name | Type | Description |
|---|---|---|
|
| String | The zone group where the bucket resides; an empty string for the default zone group. |
2.4.15. S3 get bucket versioning Copy linkLink copied to clipboard!
Retrieves the versioning state of a bucket. The user needs to be the bucket owner to call this.
Add the versioning subresource to the bucket resource as shown below.
Syntax
GET /BUCKET?versioning HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
2.4.16. S3 put the bucket versioning Copy linkLink copied to clipboard!
This subresource sets the versioning state of an existing bucket. The user needs to be the bucket owner to set the versioning state. If the versioning state has never been set on a bucket, then it has no versioning state, and a GET versioning request does not return a versioning state value.
Setting the bucket versioning state:
- Enabled: Enables versioning for the objects in the bucket. All objects added to the bucket receive a unique version ID.
- Suspended: Disables versioning for the objects in the bucket. All objects added to the bucket receive the version ID null.
Syntax
PUT /BUCKET?versioning HTTP/1.1
| Name | Type | Description |
|---|---|---|
| VersioningConfiguration | Container | A container for the request. |
| Status | String | Sets the versioning state of the bucket. Valid Values: Suspended/Enabled |
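The request body is a small XML document wrapping the Status element. A sketch of building it with the standard library:

```python
import xml.etree.ElementTree as ET


def versioning_body(status: str) -> str:
    """Body for PUT /BUCKET?versioning; status must be 'Enabled' or 'Suspended'."""
    if status not in ("Enabled", "Suspended"):
        raise ValueError("status must be 'Enabled' or 'Suspended'")
    root = ET.Element("VersioningConfiguration")
    ET.SubElement(root, "Status").text = status
    return ET.tostring(root, encoding="unicode")


body = versioning_body("Enabled")
```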
2.4.17. S3 get bucket access control lists
Retrieves the bucket access control list. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket.
Add the acl subresource to the bucket request as shown below.
Syntax
GET /BUCKET?acl HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
| Name | Type | Description |
|---|---|---|
| AccessControlPolicy | Container | A container for the response. |
| AccessControlList | Container | A container for the ACL information. |
| Owner | Container | A container for the bucket owner’s ID and DisplayName. |
| ID | String | The bucket owner’s ID. |
| DisplayName | String | The bucket owner’s display name. |
| Grant | Container | A container for Grantee and Permission. |
| Grantee | Container | A container for the DisplayName and ID of the user receiving a grant of permission. |
| Permission | String | The permission given to the Grantee. |
2.4.18. S3 put bucket Access Control Lists
Sets an access control to an existing bucket. The user needs to be the bucket owner or to have been granted WRITE_ACP permission on the bucket.
Add the acl subresource to the bucket request as shown below.
Syntax
PUT /BUCKET?acl HTTP/1.1
| Name | Type | Description |
|---|---|---|
| AccessControlPolicy | Container | A container for the request. |
| AccessControlList | Container | A container for the ACL information. |
| Owner | Container | A container for the bucket owner’s ID and DisplayName. |
| ID | String | The bucket owner’s ID. |
| DisplayName | String | The bucket owner’s display name. |
| Grant | Container | A container for Grantee and Permission. |
| Grantee | Container | A container for the DisplayName and ID of the user receiving a grant of permission. |
| Permission | String | The permission given to the Grantee. |
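The ACL request body is an AccessControlPolicy XML document. A standard-library sketch of assembling one (the ID and display name are hypothetical; a production request also sets the `xsi:type` attribute on Grantee, omitted here for brevity):

```python
import xml.etree.ElementTree as ET


def build_acl(owner_id: str, display_name: str, permission: str = "FULL_CONTROL") -> str:
    """Build a minimal AccessControlPolicy granting `permission` to the owner."""
    policy = ET.Element("AccessControlPolicy")
    owner = ET.SubElement(policy, "Owner")
    ET.SubElement(owner, "ID").text = owner_id
    ET.SubElement(owner, "DisplayName").text = display_name
    acl = ET.SubElement(policy, "AccessControlList")
    grant = ET.SubElement(acl, "Grant")
    grantee = ET.SubElement(grant, "Grantee")  # real requests also set xsi:type
    ET.SubElement(grantee, "ID").text = owner_id
    ET.SubElement(grantee, "DisplayName").text = display_name
    ET.SubElement(grant, "Permission").text = permission
    return ET.tostring(policy, encoding="unicode")


body = build_acl("testuser", "Test User")
```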
2.4.19. S3 get bucket cors
Retrieves the cors configuration information set for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket.
Add the cors subresource to the bucket request as shown below.
Syntax
GET /BUCKET?cors HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
2.4.20. S3 put bucket cors
Sets the cors configuration for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket.
Add the cors subresource to the bucket request as shown below.
Syntax
PUT /BUCKET?cors HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
2.4.21. S3 delete a bucket cors
Deletes the cors configuration information set for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket.
Add the cors subresource to the bucket request as shown below.
Syntax
DELETE /BUCKET?cors HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
2.4.22. S3 list bucket object versions
Returns a list of metadata about all the versions of objects within a bucket. Requires READ access to the bucket.
Add the versions subresource to the bucket request as shown below.
Syntax
GET /BUCKET?versions HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
You can specify parameters for GET /BUCKET?versions, but none of them are required.
| Name | Type | Description |
|---|---|---|
| prefix | String | Returns in-progress uploads whose keys contain the specified prefix. |
| delimiter | String | The delimiter between the prefix and the rest of the object name. |
| key-marker | String | The beginning marker for the list of uploads. |
| max-keys | Integer | The maximum number of in-progress uploads. The default is 1000. |
| version-id-marker | String | Specifies the object version to begin the list. |
| Name | Type | Description |
|---|---|---|
| KeyMarker | String | The key marker specified by the key-marker request parameter (if any). |
| NextKeyMarker | String | The key marker to use in a subsequent request if IsTruncated is true. |
| NextUploadIdMarker | String | The upload ID marker to use in a subsequent request if IsTruncated is true. |
| IsTruncated | Boolean | If true, only a subset of the bucket’s upload contents were returned. |
| Size | Integer | The size of the uploaded part. |
| DisplayName | String | The owner’s display name. |
| ID | String | The owner’s ID. |
| Owner | Container | A container for the ID and DisplayName of the user who owns the object. |
| StorageClass | String | The method used to store the resulting object. |
| Version | Container | Container for the version information. |
| versionId | String | Version ID of an object. |
| versionIdMarker | String | The last version of the key in a truncated response. |
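The response is a ListVersionsResult XML document; a sketch of pulling each key and version ID out of it with the standard library (the sample document is illustrative, and the real response carries an XML namespace that is omitted here for brevity):

```python
import xml.etree.ElementTree as ET

# Illustrative response body; a live listing includes more elements per Version.
SAMPLE = """<ListVersionsResult>
  <Name>mybucket</Name>
  <IsTruncated>false</IsTruncated>
  <Version><Key>photo.jpg</Key><VersionId>abc123</VersionId><IsLatest>true</IsLatest></Version>
  <Version><Key>photo.jpg</Key><VersionId>def456</VersionId><IsLatest>false</IsLatest></Version>
</ListVersionsResult>"""

root = ET.fromstring(SAMPLE)
versions = [(v.findtext("Key"), v.findtext("VersionId")) for v in root.iter("Version")]
```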
2.4.23. S3 head bucket
Calls HEAD on a bucket to determine if it exists and if the caller has access permissions. Returns 200 OK if the bucket exists and the caller has permissions; 404 Not Found if the bucket does not exist; and 403 Forbidden if the bucket exists but the caller does not have access permissions.
Syntax
HEAD /BUCKET HTTP/1.1
Host: cname.domain.com
Date: date
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
2.4.24. S3 list multipart uploads
GET /?uploads returns a list of the current in-progress multipart uploads, that is, uploads that the application has initiated but that the service has not yet completed.
Syntax
GET /BUCKET?uploads HTTP/1.1
You can specify parameters for GET /BUCKET?uploads, but none of them are required.
| Name | Type | Description |
|---|---|---|
| prefix | String | Returns in-progress uploads whose keys contain the specified prefix. |
| delimiter | String | The delimiter between the prefix and the rest of the object name. |
| key-marker | String | The beginning marker for the list of uploads. |
| max-keys | Integer | The maximum number of in-progress uploads. The default is 1000. |
| max-uploads | Integer | The maximum number of multipart uploads. The range is 1-1000. The default is 1000. |
| upload-id-marker | String | Ignored if key-marker is not specified. Specifies the ID of the first upload to list in lexicographical order at or following the ID. |
| Name | Type | Description |
|---|---|---|
| ListMultipartUploadsResult | Container | A container for the results. |
| Prefix | String | The prefix specified by the prefix request parameter (if any). |
| Bucket | String | The bucket that will receive the bucket contents. |
| KeyMarker | String | The key marker specified by the key-marker request parameter (if any). |
| UploadIdMarker | String | The marker specified by the upload-id-marker request parameter (if any). |
| NextKeyMarker | String | The key marker to use in a subsequent request if IsTruncated is true. |
| NextUploadIdMarker | String | The upload ID marker to use in a subsequent request if IsTruncated is true. |
| MaxUploads | Integer | The max uploads specified by the max-uploads request parameter. |
| Delimiter | String | If set, objects with the same prefix will appear in the CommonPrefixes list. |
| IsTruncated | Boolean | If true, only a subset of the bucket’s upload contents were returned. |
| Upload | Container | A container for the Key, UploadId, Initiator, Owner, StorageClass, and Initiated elements. |
| Key | String | The key of the object once the multipart upload is complete. |
| UploadId | String | The ID that identifies the multipart upload. |
| Initiator | Container | Contains the ID and DisplayName of the user who initiated the upload. |
| DisplayName | String | The initiator’s display name. |
| ID | String | The initiator’s ID. |
| Owner | Container | A container for the ID and DisplayName of the user who owns the uploaded object. |
| StorageClass | String | The method used to store the resulting object. |
| Initiated | Date | The date and time the user initiated the upload. |
| CommonPrefixes | Container | If multiple objects contain the same prefix, they will appear in this list. |
| CommonPrefixes.Prefix | String | The substring of the key after the prefix as defined by the prefix request parameter. |
2.4.25. S3 bucket policies
The Ceph Object Gateway supports a subset of the Amazon S3 policy language applied to buckets.
Creation and Removal
Ceph Object Gateway manages S3 bucket policies through standard S3 operations rather than using the radosgw-admin CLI tool. Administrators may use the s3cmd command to set or delete a policy.
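As an illustration, a policy document granting another tenant’s user access might be assembled like this (the tenant `usfolks`, user `fred`, and bucket `happybucket` are placeholder names):

```python
import json

# Hypothetical principal: user 'fred' under tenant 'usfolks'. The Gateway uses
# the tenant identifier where AWS would use a twelve-digit account ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::usfolks:user/fred"]},
        "Action": "s3:PutObjectAcl",
        "Resource": ["arn:aws:s3:::happybucket/*"],
    }],
}

with open("examplepol", "w") as f:
    json.dump(policy, f, indent=2)
```

The saved file can then be applied with `s3cmd setpolicy examplepol s3://happybucket` and removed with `s3cmd delpolicy s3://happybucket`.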
Limitations
Ceph Object Gateway only supports the following S3 actions:
- s3:AbortMultipartUpload
- s3:CreateBucket
- s3:DeleteBucketPolicy
- s3:DeleteBucket
- s3:DeleteBucketWebsite
- s3:DeleteObject
- s3:DeleteObjectVersion
- s3:GetBucketAcl
- s3:GetBucketCORS
- s3:GetBucketLocation
- s3:GetBucketPolicy
- s3:GetBucketRequestPayment
- s3:GetBucketVersioning
- s3:GetBucketWebsite
- s3:GetLifecycleConfiguration
- s3:GetObjectAcl
- s3:GetObject
- s3:GetObjectTorrent
- s3:GetObjectVersionAcl
- s3:GetObjectVersion
- s3:GetObjectVersionTorrent
- s3:ListAllMyBuckets
- s3:ListBucketMultiPartUploads
- s3:ListBucket
- s3:ListBucketVersions
- s3:ListMultipartUploadParts
- s3:PutBucketAcl
- s3:PutBucketCORS
- s3:PutBucketPolicy
- s3:PutBucketRequestPayment
- s3:PutBucketVersioning
- s3:PutBucketWebsite
- s3:PutLifecycleConfiguration
- s3:PutObjectAcl
- s3:PutObject
- s3:PutObjectVersionAcl
Ceph Object Gateway does not support setting policies on users, groups, or roles.
The Ceph Object Gateway uses the RGW ‘tenant’ identifier in place of the Amazon twelve-digit account ID. Ceph Object Gateway administrators who want to use policies between Amazon Web Services (AWS) S3 and Ceph Object Gateway S3 will have to use the Amazon account ID as the tenant ID when creating users.
With AWS S3, all tenants share a single namespace. By contrast, Ceph Object Gateway gives every tenant its own namespace of buckets. At present, Ceph Object Gateway clients trying to access a bucket belonging to another tenant MUST address it as tenant:bucket in the S3 request.
In AWS, a bucket policy can grant access to another account, and that account owner can then grant access to individual users with user permissions. Since Ceph Object Gateway does not yet support user, role, and group permissions, account owners will need to grant access directly to individual users.
Granting an entire account access to a bucket grants access to ALL users in that account.
Bucket policies do NOT support string interpolation.
Ceph Object Gateway supports the following condition keys:
- aws:CurrentTime
- aws:EpochTime
- aws:PrincipalType
- aws:Referer
- aws:SecureTransport
- aws:SourceIp
- aws:UserAgent
- aws:username
Ceph Object Gateway ONLY supports the following condition keys for the ListBucket action:
- s3:prefix
- s3:delimiter
- s3:max-keys
Impact on Swift
Ceph Object Gateway provides no functionality to set bucket policies under the Swift API. However, bucket policies that have been set with the S3 API govern Swift as well as S3 operations.
Ceph Object Gateway matches Swift credentials against Principals specified in a policy.
2.4.26. S3 get the request payment configuration on a bucket
Uses the requestPayment subresource to return the request payment configuration of a bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket.
Add the requestPayment subresource to the bucket request as shown below.
Syntax
GET /BUCKET?requestPayment HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
2.4.27. S3 set the request payment configuration on a bucket
Uses the requestPayment subresource to set the request payment configuration of a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner to specify that the person requesting the download will be charged for the request and the data download from the bucket.
Add the requestPayment subresource to the bucket request as shown below.
Syntax
PUT /BUCKET?requestPayment HTTP/1.1
Host: cname.domain.com
| Name | Type | Description |
|---|---|---|
| Payer | Enum | Specifies who pays for the download and request fees. |
| RequestPaymentConfiguration | Container | A container for Payer. |
2.4.28. Multi-tenant bucket operations
When a client application accesses buckets, it always operates with the credentials of a particular user. In a Red Hat Ceph Storage cluster, every user belongs to a tenant. Consequently, every bucket operation has an implicit tenant in its context if no tenant is specified explicitly. Thus, multi-tenancy is completely backward compatible with previous releases, as long as the referred buckets and the referring user belong to the same tenant.
Extensions employed to specify an explicit tenant differ according to the protocol and authentication system used.
In the following example, a colon character separates tenant and bucket. Thus a sample URL would be:
https://rgw.domain.com/tenant:bucket
By contrast, a simple Python example separates the tenant and bucket in the bucket method itself:
Example
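The original sample is not reproduced here; a minimal sketch of the same idea (with the hypothetical tenant `testx`), showing how the tenant and bucket are joined before being passed wherever a plain bucket name would normally go, for example to a boto-style `get_bucket()` call:

```python
def tenant_bucket(tenant: str, bucket: str) -> str:
    """Join an explicit tenant and bucket the way the Gateway expects."""
    return f"{tenant}:{bucket}" if tenant else bucket


# With a legacy boto connection, the joined name is passed to
# connection.get_bucket() in place of a plain bucket name (connection
# setup against the Gateway endpoint omitted).
name = tenant_bucket("testx", "tenant-bucket")
```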
It is not possible to use S3-style subdomains with multi-tenancy, since host names cannot contain colons or any other separators that are not already valid in bucket names. Using a period creates an ambiguous syntax. Therefore, the bucket-in-URL-path format must be used with multi-tenancy.
Additional Resources
- See Multi Tenancy for additional details.
2.4.29. Additional Resources
- See the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for details on configuring a bucket website.
2.5. S3 object operations
As a developer, you can perform object operations with the Amazon S3 application programming interface (API) through the Ceph Object Gateway.
The following table lists the Amazon S3 functional operations for objects, along with the function’s support status.
| Feature | Status |
|---|---|
| Get Object | Supported |
| Get Object Information | Supported |
| Put Object | Supported |
| Delete Object | Supported |
| Delete Multiple Objects | Supported |
| Get Object ACLs | Supported |
| Put Object ACLs | Supported |
| Copy Object | Supported |
| Post Object | Supported |
| Options Object | Supported |
| Initiate Multipart Upload | Supported |
| Add a Part to a Multipart Upload | Supported |
| List Parts of a Multipart Upload | Supported |
| Assemble Multipart Upload | Supported |
| Copy Multipart Upload | Supported |
| Abort Multipart Upload | Supported |
| Multi-Tenancy | Supported |
2.5.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- A RESTful client.
2.5.2. S3 get an object from a bucket
Retrieves an object from a bucket:
Syntax
GET /BUCKET/OBJECT HTTP/1.1
Add the versionId subresource to retrieve a particular version of the object:
Syntax
GET /BUCKET/OBJECT?versionId=VERSION_ID HTTP/1.1
| Name | Description | Valid Values | Required |
|---|---|---|---|
| range | The range of the object to retrieve. | Range: bytes=beginbyte-endbyte | No |
| if-modified-since | Gets only if modified since the timestamp. | Timestamp | No |
| if-unmodified-since | Gets only if not modified since the timestamp. | Timestamp | No |
| if-match | Gets only if object ETag matches ETag. | Entity Tag | No |
| if-none-match | Gets only if object ETag doesn’t match ETag. | Entity Tag | No |
| Name | Description |
|---|---|
| Content-Range | The data range; returned only if the range header field was specified in the request |
| x-amz-version-id | Returns the version ID or null. |
2.5.3. S3 get information on an object
Returns information about an object. This request will return the same header information as with the Get Object request, but will include the metadata only, not the object data payload.
Retrieves the current version of the object:
Syntax
HEAD /BUCKET/OBJECT HTTP/1.1
Add the versionId subresource to retrieve info for a particular version:
Syntax
HEAD /BUCKET/OBJECT?versionId=VERSION_ID HTTP/1.1
| Name | Description | Valid Values | Required |
|---|---|---|---|
| range | The range of the object to retrieve. | Range: bytes=beginbyte-endbyte | No |
| if-modified-since | Gets only if modified since the timestamp. | Timestamp | No |
| if-unmodified-since | Gets only if not modified since the timestamp. | Timestamp | No |
| if-match | Gets only if object ETag matches ETag. | Entity Tag | No |
| if-none-match | Gets only if object ETag doesn’t match ETag. | Entity Tag | No |
| Name | Description |
|---|---|
| x-amz-version-id | Returns the version ID or null. |
2.5.4. S3 add an object to a bucket
Adds an object to a bucket. You must have write permissions on the bucket to perform this operation.
Syntax
PUT /BUCKET/OBJECT HTTP/1.1
| Name | Description | Valid Values | Required |
|---|---|---|---|
| content-md5 | A base64 encoded MD5 hash of the message. | A string. No defaults or constraints. | No |
| content-type | A standard MIME type. | Any MIME type. Default: binary/octet-stream | No |
| x-amz-meta-<…> | User metadata. Stored with the object. | A string up to 8kb. No defaults. | No |
| x-amz-acl | A canned ACL. | private, public-read, public-read-write, authenticated-read | No |
| Name | Description |
|---|---|
| x-amz-version-id | Returns the version ID or null. |
2.5.5. S3 delete an object
Removes an object. Requires WRITE permission set on the containing bucket. If object versioning is on, deleting an object creates a delete marker instead of removing the data.
Syntax
DELETE /BUCKET/OBJECT HTTP/1.1
DELETE /BUCKET/OBJECT HTTP/1.1
To delete an object when versioning is on, you must specify the versionId subresource and the version of the object to delete.
DELETE /BUCKET/OBJECT?versionId=VERSION_ID HTTP/1.1
2.5.6. S3 delete multiple objects
This API call deletes multiple objects from a bucket.
Syntax
POST /BUCKET/OBJECT?delete HTTP/1.1
2.5.7. S3 get an object’s Access Control List (ACL)
Returns the ACL for the current version of the object:
Syntax
GET /BUCKET/OBJECT?acl HTTP/1.1
Add the versionId subresource to retrieve the ACL for a particular version:
Syntax
GET /BUCKET/OBJECT?versionId=VERSION_ID&acl HTTP/1.1
| Name | Description |
|---|---|
| x-amz-version-id | Returns the version ID or null. |
| Name | Type | Description |
|---|---|---|
| AccessControlPolicy | Container | A container for the response. |
| AccessControlList | Container | A container for the ACL information. |
| Owner | Container | A container for the object owner’s ID and DisplayName. |
| ID | String | The object owner’s ID. |
| DisplayName | String | The object owner’s display name. |
| Grant | Container | A container for Grantee and Permission. |
| Grantee | Container | A container for the DisplayName and ID of the user receiving a grant of permission. |
| Permission | String | The permission given to the Grantee. |
2.5.8. S3 set an object’s Access Control List (ACL)
Sets an object ACL for the current version of the object.
Syntax
PUT /BUCKET/OBJECT?acl
| Name | Type | Description |
|---|---|---|
| AccessControlPolicy | Container | A container for the request. |
| AccessControlList | Container | A container for the ACL information. |
| Owner | Container | A container for the object owner’s ID and DisplayName. |
| ID | String | The object owner’s ID. |
| DisplayName | String | The object owner’s display name. |
| Grant | Container | A container for Grantee and Permission. |
| Grantee | Container | A container for the DisplayName and ID of the user receiving a grant of permission. |
| Permission | String | The permission given to the Grantee. |
2.5.9. S3 copy an object
To copy an object, use PUT and specify a destination bucket and the object name.
Syntax
PUT /DEST_BUCKET/DEST_OBJECT HTTP/1.1
x-amz-copy-source: SOURCE_BUCKET/SOURCE_OBJECT
| Name | Description | Valid Values | Required |
|---|---|---|---|
| x-amz-copy-source | The source bucket name + object name. | BUCKET/OBJECT | Yes |
| x-amz-acl | A canned ACL. | private, public-read, public-read-write, authenticated-read | No |
| x-amz-copy-if-modified-since | Copies only if modified since the timestamp. | Timestamp | No |
| x-amz-copy-if-unmodified-since | Copies only if unmodified since the timestamp. | Timestamp | No |
| x-amz-copy-if-match | Copies only if object ETag matches ETag. | Entity Tag | No |
| x-amz-copy-if-none-match | Copies only if object ETag doesn’t match. | Entity Tag | No |
| Name | Type | Description |
|---|---|---|
| CopyObjectResult | Container | A container for the response elements. |
| LastModified | Date | The last modified date of the source object. |
| ETag | String | The ETag of the new object. |
2.5.10. S3 add an object to a bucket using HTML forms
Adds an object to a bucket using HTML forms. You must have write permissions on the bucket to perform this operation.
Syntax
POST /BUCKET/OBJECT HTTP/1.1
2.5.11. S3 determine options for a request
A preflight request to determine if an actual request can be sent with the specific origin, HTTP method, and headers.
Syntax
OPTIONS /OBJECT HTTP/1.1
2.5.12. S3 initiate a multipart upload
Initiates a multi-part upload process. Returns a UploadId, which you can specify when adding additional parts, listing parts, and completing or abandoning a multi-part upload.
Syntax
POST /BUCKET/OBJECT?uploads
| Name | Description | Valid Values | Required |
|---|---|---|---|
| content-md5 | A base64 encoded MD5 hash of the message. | A string. No defaults or constraints. | No |
| content-type | A standard MIME type. | Any MIME type. Default: binary/octet-stream | No |
| x-amz-meta-<…> | User metadata. Stored with the object. | A string up to 8kb. No defaults. | No |
| x-amz-acl | A canned ACL. | private, public-read, public-read-write, authenticated-read | No |
| Name | Type | Description |
|---|---|---|
| InitiatedMultipartUploadsResult | Container | A container for the results. |
| Bucket | String | The bucket that will receive the object contents. |
| Key | String | The key specified by the key request parameter (if any). |
| UploadId | String | The ID specified by the upload-id request parameter identifying the multipart upload (if any). |
2.5.13. S3 add a part to a multipart upload
Adds a part to a multi-part upload.
Specify the uploadId subresource and the upload ID to add a part to a multi-part upload:
Syntax
PUT /BUCKET/OBJECT?partNumber=PART_NUMBER&uploadId=UPLOAD_ID HTTP/1.1
The following HTTP response might be returned:
| HTTP Status | Status Code | Description |
|---|---|---|
| 404 | NoSuchUpload | Specified upload-id does not match any initiated upload on this object |
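Clients typically split the object into fixed-size parts and number them starting at 1. A sketch of that arithmetic (the 5 MB part size is an assumption; check your Gateway’s configured minimum part size):

```python
import math


def plan_parts(object_size: int, part_size: int = 5 * 1024 * 1024):
    """Split an object into (part_number, offset, length) tuples, 1-indexed."""
    count = math.ceil(object_size / part_size)
    return [
        (n + 1, n * part_size, min(part_size, object_size - n * part_size))
        for n in range(count)
    ]


# A 12 MB object uploaded in 5 MB parts: two full parts and one 2 MB tail.
parts = plan_parts(12 * 1024 * 1024)
```

Each tuple maps to one `PUT …?partNumber=PART_NUMBER&uploadId=UPLOAD_ID` request carrying that byte range.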
2.5.14. S3 list the parts of a multipart upload
Specify the uploadId subresource and the upload ID to list the parts of a multi-part upload:
Syntax
GET /BUCKET/OBJECT?uploadId=UPLOAD_ID HTTP/1.1
| Name | Type | Description |
|---|---|---|
| ListPartsResult | Container | A container for the results. |
| Bucket | String | The bucket that will receive the object contents. |
| Key | String | The key specified by the key request parameter (if any). |
| UploadId | String | The ID specified by the upload-id request parameter identifying the multipart upload (if any). |
| Initiator | Container | Contains the ID and DisplayName of the user who initiated the upload. |
| ID | String | The initiator’s ID. |
| DisplayName | String | The initiator’s display name. |
| Owner | Container | A container for the ID and DisplayName of the user who owns the uploaded object. |
| StorageClass | String | The method used to store the resulting object. |
| PartNumberMarker | String | The part marker to use in a subsequent request if IsTruncated is true. Precedes the list. |
| NextPartNumberMarker | String | The next part marker to use in a subsequent request if IsTruncated is true. The end of the list. |
| MaxParts | Integer | The max parts allowed in the response as specified by the max-parts request parameter. |
| IsTruncated | Boolean | If true, only a subset of the object’s upload contents were returned. |
| Part | Container | A container for the PartNumber, ETag, and Size elements. |
| PartNumber | Integer | The identification number of the part. |
| ETag | String | The part’s entity tag. |
| Size | Integer | The size of the uploaded part. |
2.5.15. S3 assemble the uploaded parts
Assembles uploaded parts and creates a new object, thereby completing a multipart upload.
Specify the uploadId subresource and the upload ID to complete a multi-part upload:
Syntax
POST /BUCKET/OBJECT?uploadId=UPLOAD_ID HTTP/1.1
| Name | Type | Description | Required |
|---|---|---|---|
| CompleteMultipartUpload | Container | A container consisting of one or more parts. | Yes |
| Part | Container | A container for the PartNumber and ETag. | Yes |
| PartNumber | Integer | The identifier of the part. | Yes |
| ETag | String | The part’s entity tag. | Yes |
| Name | Type | Description |
|---|---|---|
| CompleteMultipartUploadResult | Container | A container for the response. |
| Location | URI | The resource identifier (path) of the new object. |
| Bucket | String | The name of the bucket that contains the new object. |
| Key | String | The object’s key. |
| ETag | String | The entity tag of the new object. |
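The request body pairs each part number with the ETag returned when that part was uploaded. A standard-library sketch of assembling it (the ETag values below are placeholders):

```python
import xml.etree.ElementTree as ET


def complete_body(parts) -> str:
    """Build the CompleteMultipartUpload body from (part_number, etag) pairs."""
    root = ET.Element("CompleteMultipartUpload")
    for number, etag in parts:
        part = ET.SubElement(root, "Part")
        ET.SubElement(part, "PartNumber").text = str(number)
        ET.SubElement(part, "ETag").text = etag
    return ET.tostring(root, encoding="unicode")


# Placeholder ETags; real values come from each upload-part response.
body = complete_body([(1, '"aaa"'), (2, '"bbb"')])
```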
2.5.16. S3 copy a multipart upload
Uploads a part by copying data from an existing object as data source.
Specify the uploadId subresource and the upload ID to perform a multi-part upload copy:
Syntax
PUT /BUCKET/OBJECT?partNumber=PartNumber&uploadId=UPLOAD_ID HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
| Name | Description | Valid Values | Required |
|---|---|---|---|
| x-amz-copy-source | The source bucket name and object name. | BUCKET/OBJECT | Yes |
| x-amz-copy-source-range | The range of bytes to copy from the source object. | Range: bytes=first-last, where first and last are the zero-based byte offsets to copy. For example, bytes=0-9 copies the first 10 bytes of the source. | No |
| Name | Type | Description |
|---|---|---|
| CopyPartResult | Container | A container for all response elements. |
| ETag | String | Returns the ETag of the new part. |
| LastModified | String | Returns the date the part was last modified. |
Additional Resources
- For more information about this feature, see the Amazon S3 site.
2.5.17. S3 abort a multipart upload
Aborts a multipart upload.
Specify the uploadId subresource and the upload ID to abort a multi-part upload:
Syntax
DELETE /BUCKET/OBJECT?uploadId=UPLOAD_ID HTTP/1.1
2.5.18. S3 Hadoop interoperability
For data analytics applications that require Hadoop Distributed File System (HDFS) access, the Ceph Object Gateway can be accessed using the Apache S3A connector for Hadoop. The S3A connector is an open source tool that presents S3-compatible object storage to applications as an HDFS file system, with HDFS read and write semantics, while the data is stored in the Ceph Object Gateway.
Ceph Object Gateway is fully compatible with the S3A connector that ships with Hadoop 2.7.3.
2.5.19. Additional Resources
- See the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for details on multi-tenancy.
2.6. Additional Resources
- See Appendix A for Amazon S3 common request headers.
- See Appendix B for Amazon S3 common response status codes.
- See Appendix C for unsupported header fields.
Chapter 3. Ceph Object Gateway and the Swift API
As a developer, you can use a RESTful application programming interface (API) that is compatible with the Swift API data access model. You can manage the buckets and objects stored in a Red Hat Ceph Storage cluster through the Ceph Object Gateway.
The following table describes the support status for current Swift functional features:
| Feature | Status | Remarks |
|---|---|---|
| Authentication | Supported | |
| Get Account Metadata | Supported | No custom metadata |
| Swift ACLs | Supported | Supports a subset of Swift ACLs |
| List Containers | Supported | |
| Delete Container | Supported | |
| Create Container | Supported | |
| Temp URL Operations | Supported | |
| Get Container Metadata | Supported | |
| Update Container Metadata | Supported | |
| Delete Container Metadata | Supported | |
| List Objects | Supported | |
| Create Object | Supported | |
| Create Large Object | Supported | |
| Delete Object | Supported | |
| Get Object | Supported | |
| Copy Object | Supported | |
| Get Object Metadata | Supported | |
| Update Object Metadata | Supported | |
| CORS | Not Supported | |
| Expiring Objects | Supported | |
| Object Versioning | Not Supported | |
| Static Website | Not Supported | |
3.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- A RESTful client.
3.2. Swift API limitations
Consider the following limitations carefully. They have implications for your hardware selections, so you should always discuss these requirements with your Red Hat account team.
- Maximum object size when using Swift API: 5GB
- Maximum metadata size when using Swift API: There is no defined limit on the total size of user metadata that can be applied to an object, but a single HTTP request is limited to 16,000 bytes.
3.3. Create a Swift user
To test the Swift interface, create a Swift subuser. Creating a Swift user is a two-step process. The first step is to create the user. The second step is to create the secret key.
In a multi-site deployment, always create a user on a host in the master zone of the master zone group.
Prerequisites
- Installation of the Ceph Object Gateway.
- Root-level access to the Ceph Object Gateway node.
Procedure
1. Create the Swift user:
Syntax
radosgw-admin subuser create --uid=NAME --subuser=NAME:swift --access=full
Replace NAME with the Swift user name.
2. Create the secret key:
Syntax
radosgw-admin key create --subuser=NAME:swift --key-type=swift --gen-secret
Replace NAME with the Swift user name.
3.4. Swift authenticating a user
To authenticate a user, make a request containing an X-Auth-User and an X-Auth-Key in the header.
Syntax
GET /auth HTTP/1.1
Host: swift.example.com
X-Auth-User: johndoe
X-Auth-Key: R7UUOLFDI2ZI9PRCQ53K
Example Response
You can retrieve data about Ceph’s Swift-compatible service by executing GET requests using the X-Storage-Url value during authentication.
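Assembling the authentication request is straightforward; a trivial sketch using the placeholder credentials from the syntax above (the response to this request carries the X-Auth-Token and X-Storage-Url headers that subsequent requests use):

```python
def auth_headers(user: str, key: str) -> dict:
    """Headers for GET /auth; the response returns X-Auth-Token and X-Storage-Url."""
    return {"X-Auth-User": user, "X-Auth-Key": key}


# Placeholder credentials from the syntax example above.
headers = auth_headers("johndoe", "R7UUOLFDI2ZI9PRCQ53K")
```

Later requests are sent to the X-Storage-Url base with the returned token in an X-Auth-Token header.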
Additional Resources
- See the Red Hat Ceph Storage Developer Guide for Swift request headers.
- See the Red Hat Ceph Storage Developer Guide for Swift response headers.
3.5. Swift container operations
As a developer, you can perform container operations with the Swift application programming interface (API) through the Ceph Object Gateway. You can list, create, update, and delete containers. You can also add or update the container’s metadata.
3.5.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- A RESTful client.
3.5.2. Swift container operations
A container is a mechanism for storing data objects. An account can have many containers, but container names must be unique. This API enables a client to create a container, set access controls and metadata, retrieve a container’s contents, and delete a container. Since this API makes requests related to information in a particular user’s account, all requests in this API must be authenticated unless a container’s access control is deliberately made publicly accessible, that is, allows anonymous requests.
The Amazon S3 API uses the term 'bucket' to describe a data container. When you hear someone refer to a 'bucket' within the Swift API, the term 'bucket' might be construed as the equivalent of the term 'container.'
One facet of object storage is that it does not support hierarchical paths or directories. Instead, it supports one level consisting of one or more containers, where each container might have objects. The RADOS Gateway’s Swift-compatible API supports the notion of 'pseudo-hierarchical containers', which is a means of using object naming to emulate a container or directory hierarchy without actually implementing one in the storage system. You can name objects with pseudo-hierarchical names, for example, photos/buildings/empire-state.jpg, but container names cannot contain a forward slash (/) character.
When uploading large objects to versioned Swift containers, use the --leave-segments option with the python-swiftclient utility. Not using --leave-segments overwrites the manifest file. Consequently, an existing object is overwritten, which leads to data loss.
3.5.3. Swift update a container’s Access Control List (ACL)
When a user creates a container, the user has read and write access to the container by default. To allow other users to read a container’s contents or write to a container, you must specifically enable the user. You can also specify * in the X-Container-Read or X-Container-Write settings, which effectively enables all users to either read from or write to the container. Setting * makes the container public, that is, it enables anonymous users to either read from or write to the container.
Syntax
POST /API_VERSION/ACCOUNT/TENANT:CONTAINER HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
X-Auth-Token: AUTH_TOKEN
X-Container-Read: *
X-Container-Write: UID1, UID2, UID3
| Name | Description | Type | Required |
|---|---|---|---|
| X-Container-Read | The user IDs with read permissions for the container. | Comma-separated string values of user IDs. | No |
| X-Container-Write | The user IDs with write permissions for the container. | Comma-separated string values of user IDs. | No |
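As a minimal sketch of setting these headers from a client, the following uses the python-swiftclient library; the endpoint, credentials, and container name are hypothetical placeholders, and the call requires a reachable Ceph Object Gateway.

```python
# ACL headers as described above: public read, writes restricted to two users.
acl_headers = {
    "X-Container-Read": "*",            # anonymous users may read
    "X-Container-Write": "uid1, uid2",  # only these user IDs may write
}

def update_acl():
    """Send the POST request. Requires a running gateway; values are placeholders."""
    from swiftclient.client import Connection
    conn = Connection(
        authurl="https://objectstore.example.com/auth/v1.0",  # hypothetical endpoint
        user="tenant:user",                                   # hypothetical user
        key="secret",                                         # hypothetical key
    )
    # post_container issues a POST against the container with the given headers.
    conn.post_container("my-container", headers=acl_headers)
```

An equivalent request can be made with any RESTful client by sending the headers shown in the syntax block above.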
3.5.4. Swift list containers
A GET request that specifies the API version and the account will return a list of containers for a particular user account. Since the request returns a particular user’s containers, the request requires an authentication token. The request cannot be made anonymously.
Syntax
GET /API_VERSION/ACCOUNT HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
X-Auth-Token: AUTH_TOKEN
| Name | Description | Type | Required | Valid Values |
|---|---|---|---|---|
| limit | Limits the number of results to the specified value. | Integer | No | N/A |
| format | Defines the format of the result. | String | No | json or xml |
| marker | Returns a list of results greater than the marker value. | String | No | N/A |
The response contains a list of containers, or returns with an HTTP 204 response code.
| Name | Description | Type |
|---|---|---|
| account | A list for account information. | Container |
| container | The list of containers. | Container |
| name | The name of a container. | String |
| bytes | The size of the container. | Integer |
3.5.5. Swift list a container’s objects
To list the objects within a container, make a GET request with the API version, account, and the name of the container. You can specify query parameters to filter the full list, or leave out the parameters to return a list of the first 10,000 object names stored in the container.
Syntax
GET /API_VERSION/TENANT:CONTAINER HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
X-Auth-Token: AUTH_TOKEN
| Name | Description | Type | Valid Values | Required |
|---|---|---|---|---|
| format | Defines the format of the result. | String | json or xml | No |
| prefix | Limits the result set to objects beginning with the specified prefix. | String | N/A | No |
| marker | Returns a list of results greater than the marker value. | String | N/A | No |
| limit | Limits the number of results to the specified value. | Integer | 0 - 10,000 | No |
| delimiter | The delimiter between the prefix and the rest of the object name. | String | N/A | No |
| path | The pseudo-hierarchical path of the objects. | String | N/A | No |
| Name | Description | Type |
|---|---|---|
| container | The container. | Container |
| object | An object within the container. | Container |
| name | The name of an object within the container. | String |
| hash | A hash code of the object’s contents. | String |
| last_modified | The last time the object’s contents were modified. | Date |
| content_type | The type of content within the object. | String |
3.5.6. Swift create a container
To create a new container, make a PUT request with the API version, account, and the name of the new container. The container name must be unique, must not contain a forward slash (/) character, and should be less than 256 bytes. You can include access control headers and metadata headers in the request. You can also include a storage policy identifying a key for a set of placement pools. For example, execute radosgw-admin zone get to see a list of available keys under placement_pools. A storage policy enables you to specify a special set of pools for the container, for example, SSD-based storage. The operation is idempotent. If you make a request to create a container that already exists, it will return with an HTTP 202 return code, but will not create another container.
Syntax
PUT /API_VERSION/ACCOUNT/TENANT:CONTAINER HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
X-Auth-Token: AUTH_TOKEN
| Name | Description | Type | Required |
|---|---|---|---|
| X-Container-Read | The user IDs with read permissions for the container. | Comma-separated string values of user IDs. | No |
| X-Container-Write | The user IDs with write permissions for the container. | Comma-separated string values of user IDs. | No |
| X-Container-Meta-KEY | A user-defined meta data key that takes an arbitrary string value. | String | No |
| X-Storage-Policy | The key that identifies the storage policy under placement_pools for the Ceph Object Gateway. | String | No |
If a container with the same name already exists, and the user is the container owner, then the operation will succeed. Otherwise, the operation will fail.
| Name | Description | Status Code |
|---|---|---|
| 409 | The container already exists under a different user’s ownership. | BucketAlreadyExists |
3.5.7. Swift delete a container
To delete a container, make a DELETE request with the API version, account, and the name of the container. The container must be empty. If you’d like to check if the container is empty, execute a HEAD request against the container. Once you’ve successfully removed the container, you’ll be able to reuse the container name.
Syntax
DELETE /API_VERSION/ACCOUNT/TENANT:CONTAINER HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
X-Auth-Token: AUTH_TOKEN
| Name | Description | Status Code |
|---|---|---|
| 204 | The container was removed. | NoContent |
3.5.8. Swift add or update the container metadata
To add metadata to a container, make a POST request with the API version, account, and container name. You must have write permissions on the container to add or update metadata.
Syntax
POST /API_VERSION/ACCOUNT/TENANT:CONTAINER HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
X-Auth-Token: AUTH_TOKEN
X-Container-Meta-Color: red
X-Container-Meta-Taste: salty
| Name | Description | Type | Required |
|---|---|---|---|
| X-Container-Meta-KEY | A user-defined meta data key that takes an arbitrary string value. | String | No |
3.6. Swift object operations
As a developer, you can perform object operations with the Swift application programming interface (API) through the Ceph Object Gateway. You can list, create, update, and delete objects. You can also add or update the object’s metadata.
3.6.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- A RESTful client.
3.6.2. Swift object operations
An object is a container for storing data and metadata. A container might have many objects, but the object names must be unique. This API enables a client to create an object, set access controls and metadata, retrieve an object’s data and metadata, and delete an object. Since this API makes requests related to information in a particular user’s account, all requests in this API must be authenticated unless the container or object’s access control is deliberately made publicly accessible, that is, allows anonymous requests.
3.6.3. Swift get an object
To retrieve an object, make a GET request with the API version, account, container and object name. You must have read permissions on the container to retrieve an object within it.
Syntax
GET /API_VERSION/ACCOUNT/TENANT:CONTAINER/OBJECT HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
X-Auth-Token: AUTH_TOKEN
| Name | Description | Type | Required |
|---|---|---|---|
| range | To retrieve a subset of an object’s contents, you can specify a byte range. | Range | No |
| if-modified-since | Only gets if modified since the date/time of the source object’s last_modified attribute. | Date | No |
| if-unmodified-since | Only gets if not modified since the date/time of the source object’s last_modified attribute. | Date | No |
| if-match | Only gets if the ETag in the request matches the source object’s ETag. | ETag | No |
| if-none-match | Only gets if the ETag in the request does not match the source object’s ETag. | ETag | No |
| Name | Description |
|---|---|
| Content-Range | The range of the subset of object contents. Returned only if the range header field was specified in the request. |
3.6.4. Swift create or update an object
To create a new object, make a PUT request with the API version, account, container name and the name of the new object. You must have write permission on the container to create or update an object. The object name must be unique within the container. The PUT request is not idempotent, so if you do not use a unique name, the request will update the object. However, you can use pseudo-hierarchical syntax in the object name to distinguish it from another object of the same name if it is under a different pseudo-hierarchical directory. You can include access control headers and metadata headers in the request.
Syntax
PUT /API_VERSION/ACCOUNT/TENANT:CONTAINER/OBJECT HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
X-Auth-Token: AUTH_TOKEN
| Name | Description | Type | Required | Valid Values |
|---|---|---|---|---|
| ETag | An MD5 hash of the object’s contents. Recommended. | String | No | N/A |
| content-type | The type of content the object contains. | String | No | N/A |
| transfer-encoding | Indicates whether the object is part of a larger aggregate object. | String | No | chunked |
3.6.5. Swift delete an object
To delete an object, make a DELETE request with the API version, account, container and object name. You must have write permissions on the container to delete an object within it. Once you’ve successfully deleted the object, you will be able to reuse the object name.
Syntax
DELETE /API_VERSION/ACCOUNT/TENANT:CONTAINER/OBJECT HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
X-Auth-Token: AUTH_TOKEN
3.6.6. Swift copy an object
Copying an object allows you to make a server-side copy of an object, so that you do not have to download it and upload it under another container. To copy the contents of one object to another object, you can make either a PUT request or a COPY request with the API version, account, and the container name.
For a PUT request, use the destination container and object name in the request, and the source container and object in the request header.
For a COPY request, use the source container and object in the request, and the destination container and object in the request header. You must have write permission on the container to copy an object. The destination object name must be unique within the container. The request is not idempotent, so if you do not use a unique name, the request will update the destination object. You can use pseudo-hierarchical syntax in the object name to distinguish the destination object from the source object of the same name if it is under a different pseudo-hierarchical directory. You can include access control headers and metadata headers in the request.
Syntax
PUT /API_VERSION/ACCOUNT/TENANT:DEST_CONTAINER/DEST_OBJECT HTTP/1.1
X-Copy-From: TENANT:SOURCE_CONTAINER/SOURCE_OBJECT
Host: FULLY_QUALIFIED_DOMAIN_NAME
X-Auth-Token: AUTH_TOKEN
or alternatively:
Syntax
COPY /API_VERSION/ACCOUNT/TENANT:SOURCE_CONTAINER/SOURCE_OBJECT HTTP/1.1
Destination: TENANT:DEST_CONTAINER/DEST_OBJECT
| Name | Description | Type | Required |
|---|---|---|---|
| X-Copy-From | Used with a PUT request to define the source container/object path. | String | Yes, if using PUT |
| Destination | Used with a COPY request to define the destination container/object path. | String | Yes, if using COPY |
| if-modified-since | Only copies if modified since the date/time of the source object’s last_modified attribute. | Date | No |
| if-unmodified-since | Only copies if not modified since the date/time of the source object’s last_modified attribute. | Date | No |
| copy-if-match | Copies only if the ETag in the request matches the source object’s ETag. | ETag | No |
| copy-if-none-match | Copies only if the ETag in the request does not match the source object’s ETag. | ETag | No |
3.6.7. Swift get object metadata
To retrieve an object’s metadata, make a HEAD request with the API version, account, container and object name. You must have read permissions on the container to retrieve metadata from an object within the container. This request returns the same header information as the request for the object itself, but it does not return the object’s data.
Syntax
HEAD /API_VERSION/ACCOUNT/TENANT:CONTAINER/OBJECT HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
X-Auth-Token: AUTH_TOKEN
3.6.8. Swift add or update object metadata
To add metadata to an object, make a POST request with the API version, account, container and object name. You must have write permissions on the parent container to add or update metadata.
Syntax
POST /API_VERSION/ACCOUNT/TENANT:CONTAINER/OBJECT HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
X-Auth-Token: AUTH_TOKEN
| Name | Description | Type | Required |
|---|---|---|---|
| X-Object-Meta-KEY | A user-defined meta data key that takes an arbitrary string value. | String | No |
3.7. Swift temporary URL operations
The Swift endpoint of the Ceph Object Gateway supports temporary URL functionality, which allows temporary access, for example with GET requests, to objects without the need to share credentials.
To use this functionality, initially set the value of X-Account-Meta-Temp-URL-Key and, optionally, X-Account-Meta-Temp-URL-Key-2. The temporary URL functionality relies on an HMAC-SHA1 signature against these secret keys.
3.7.1. Swift get temporary URL objects
A temporary URL uses a cryptographic HMAC-SHA1 signature, which includes the following elements:
- The value of the Request method, "GET" for instance
- The expiry time, in format of seconds since the epoch, that is, Unix time
- The request path starting from "v1" onwards
The above items are normalized with newlines appended between them, and an HMAC is generated using the SHA-1 hashing algorithm against one of the Temp URL Keys posted earlier.
A sample python script to demonstrate the above is given below:
Example
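The script body was not preserved in this rendering of the guide; the following is a minimal sketch of the signature construction described above. The secret key, object path, and endpoint are hypothetical placeholders.

```python
import hmac
from hashlib import sha1
from time import time

# Hypothetical values; substitute your own Temp URL key and object path.
key = b"secret"                              # value of X-Account-Meta-Temp-URL-Key
expires = int(time() + 600)                  # expiry as seconds since the epoch
path = "/v1/your-bucket/your-object"         # request path from "v1" onwards
method = "GET"

# Normalize method, expiry, and path with newlines, then sign with HMAC-SHA1.
hmac_body = "{}\n{}\n{}".format(method, expires, path).encode("utf-8")
sig = hmac.new(key, hmac_body, sha1).hexdigest()

print("https://objectstore.example.com{}?temp_url_sig={}&temp_url_expires={}".format(
    path, sig, expires))
```

The printed URL has the same shape as the example output below; the actual signature value depends on the key and expiry time used.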
Example Output
https://objectstore.example.com/v1/your-bucket/your-object?temp_url_sig=ff4657876227fc6025f04fcf1e82818266d022c6&temp_url_expires=1423200992
3.7.2. Swift POST temporary URL keys
A POST request to the Swift account with the required key will set the secret temporary URL key for the account, against which temporary URL access can be provided. Up to two keys are supported, and signatures are checked against both keys, if present, so that keys can be rotated without invalidating the temporary URLs.
Syntax
POST /API_VERSION/ACCOUNT HTTP/1.1
Host: FULLY_QUALIFIED_DOMAIN_NAME
X-Auth-Token: AUTH_TOKEN
| Name | Description | Type | Required |
|---|---|---|---|
| X-Account-Meta-Temp-URL-Key | A user-defined key that takes an arbitrary string value. | String | Yes |
| X-Account-Meta-Temp-URL-Key-2 | A user-defined key that takes an arbitrary string value. | String | No |
3.8. Swift multi-tenancy container operations
When a client application accesses containers, it always operates with the credentials of a particular user. In a Red Hat Ceph Storage cluster, every user belongs to a tenant. Consequently, every container operation has an implicit tenant in its context if no tenant is specified explicitly. Thus, multi-tenancy is completely backward compatible with previous releases, as long as the referred containers and referring user belong to the same tenant.
Extensions employed to specify an explicit tenant differ according to the protocol and authentication system used.
A colon character separates tenant and container, thus a sample URL would be:
Example
https://rgw.domain.com/tenant:container
By contrast, in a create_container() method, simply separate the tenant and container in the container name itself:
Example
create_container("tenant:container")
3.9. Additional Resources
- See the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for details on multi-tenancy.
- See Appendix D for Swift request headers.
- See Appendix E for Swift response headers.
Appendix A. S3 common request headers
The following table lists the valid common request headers and their descriptions.
| Request Header | Description |
|---|---|
| CONTENT_LENGTH | Length of the request body. |
| DATE | Request time and date (in UTC). |
| HOST | The name of the host server. |
| AUTHORIZATION | Authorization token. |
Appendix B. S3 common response status codes
The following table lists the valid common HTTP response statuses and their corresponding codes.
| HTTP Status | Response Code |
|---|---|
| 100 | Continue |
| 200 | Success |
| 201 | Created |
| 202 | Accepted |
| 204 | NoContent |
| 206 | Partial content |
| 304 | NotModified |
| 400 | InvalidArgument |
| 400 | InvalidDigest |
| 400 | BadDigest |
| 400 | InvalidBucketName |
| 400 | InvalidObjectName |
| 400 | UnresolvableGrantByEmailAddress |
| 400 | InvalidPart |
| 400 | InvalidPartOrder |
| 400 | RequestTimeout |
| 400 | EntityTooLarge |
| 403 | AccessDenied |
| 403 | UserSuspended |
| 403 | RequestTimeTooSkewed |
| 404 | NoSuchKey |
| 404 | NoSuchBucket |
| 404 | NoSuchUpload |
| 405 | MethodNotAllowed |
| 408 | RequestTimeout |
| 409 | BucketAlreadyExists |
| 409 | BucketNotEmpty |
| 411 | MissingContentLength |
| 412 | PreconditionFailed |
| 416 | InvalidRange |
| 422 | UnprocessableEntity |
| 500 | InternalError |
Appendix C. S3 unsupported header fields
| Name | Type |
|---|---|
| x-amz-security-token | Request |
| Server | Response |
| x-amz-delete-marker | Response |
| x-amz-id-2 | Response |
| x-amz-request-id | Response |
| x-amz-version-id | Response |
Appendix D. Swift request headers
| Name | Description | Type | Required |
|---|---|---|---|
| X-Auth-User | The key Ceph Object Gateway username to authenticate. | String | Yes |
| X-Auth-Key | The key associated to a Ceph Object Gateway username. | String | Yes |
Appendix E. Swift response headers
The response from the server should include an X-Auth-Token value. The response might also contain an X-Storage-Url that provides the API_VERSION/ACCOUNT prefix that is specified in other requests throughout the API documentation.
| Name | Description | Type |
|---|---|---|
| X-Auth-Token | The authorization token for the X-Auth-User specified in the request. | String |
| X-Storage-Url | The URL and API_VERSION/ACCOUNT path for the user. | String |
Appendix F. Examples using the Secure Token Service APIs
These examples use Python’s boto3 module to interface with the Ceph Object Gateway’s implementation of the Secure Token Service (STS). In these examples, TESTER2 assumes a role created by TESTER1, in order to access S3 resources owned by TESTER1 based on the permission policy attached to the role.
The AssumeRole example creates a role, assigns a policy to the role, then assumes a role to get temporary credentials and access to S3 resources using those temporary credentials.
The AssumeRoleWithWebIdentity example authenticates users with an external application using Keycloak, an OpenID Connect identity provider, and then assumes a role to get temporary credentials and access S3 resources according to the permission policy of the role.
AssumeRole Example
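The example code was not preserved in this rendering of the guide; the following is a minimal sketch of the AssumeRole flow it describes. The endpoint URL, access keys, role name, and ARNs are hypothetical placeholders, and the boto3 calls require a running Ceph Object Gateway configured for STS.

```python
import json

# Trust policy: allows TESTER2 to assume the role (ARN format is illustrative).
trust_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam:::user/TESTER2"]},
        "Action": ["sts:AssumeRole"],
    }]
})

# Permission policy: grants S3 access while the role is assumed.
permission_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:*"],
        "Resource": ["arn:aws:s3:::*"],
    }]
})

def assume_role_demo(endpoint="https://objectstore.example.com"):
    """Create a role as TESTER1, assume it as TESTER2, and use the temporary
    credentials. Requires a gateway with STS enabled; credentials are placeholders."""
    import boto3
    iam = boto3.client("iam", endpoint_url=endpoint,
                       aws_access_key_id="TESTER1_ACCESS_KEY",
                       aws_secret_access_key="TESTER1_SECRET_KEY")
    iam.create_role(RoleName="S3Access", AssumeRolePolicyDocument=trust_policy)
    iam.put_role_policy(RoleName="S3Access", PolicyName="Policy1",
                        PolicyDocument=permission_policy)

    sts = boto3.client("sts", endpoint_url=endpoint,
                       aws_access_key_id="TESTER2_ACCESS_KEY",
                       aws_secret_access_key="TESTER2_SECRET_KEY")
    resp = sts.assume_role(RoleArn="arn:aws:iam:::role/S3Access",
                           RoleSessionName="Session1")
    creds = resp["Credentials"]
    # Access TESTER1's S3 resources with the temporary credentials.
    s3 = boto3.client("s3", endpoint_url=endpoint,
                      aws_access_key_id=creds["AccessKeyId"],
                      aws_secret_access_key=creds["SecretAccessKey"],
                      aws_session_token=creds["SessionToken"])
    return s3.list_buckets()
```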
AssumeRoleWithWebIdentity Example
Additional Resources
- See the Test S3 Access section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more details on using Python’s boto module.
Appendix G. Examples using session tags for Attribute-based access control in STS
The following list contains examples of the usage of session tags for Attribute-based access control (ABAC) in STS.
Example of session tags that are passed in by Keycloak in the web token
Example of aws:RequestTag
Example of aws:PrincipalTag
Example of aws:ResourceTag
Example of aws:TagKeys
- 1
ForAllValues:StringEquals tests whether every tag key in the request is a subset of the tag keys in the policy. Therefore, the condition restricts the tag keys passed in the request.
Example of s3:ResourceTag
- 1
- For the above to work, you need to attach the ‘Department=Engineering’ tag to the bucket or object on which you want this policy to be applied.
Example of aws:RequestTag with iam:ResourceTag
- 1
- This is to assume a role by matching the tags in the incoming request with the tag attached to the role.
aws:RequestTag is the incoming tag in the JSON Web Token (JWT) and iam:ResourceTag is the tag attached to the role being assumed.
Example of aws:PrincipalTag with s3:ResourceTag
- 1
- This is to evaluate a role permission policy by matching principal tags with S3 resource tags.
aws:PrincipalTag is the tag passed in along with the temporary credentials and s3:ResourceTag is the tag attached to the S3 resource, that is, an object or bucket.
Appendix H. Sample code demonstrating usage of session tags
The following is sample code for tagging a role, bucket, or object and using tag keys in a role trust policy and role permission policy.
The example assumes that a tag Department=Engineering is passed in the JSON Web Token (JWT) access token by Keycloak.