Chapter 2. Ceph Object Gateway and the S3 API
As a developer, you can use a RESTful application programming interface (API) that is compatible with the Amazon S3 data access model. You can manage the buckets and objects stored in a Red Hat Ceph Storage cluster through the Ceph Object Gateway.
2.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- A RESTful client.
2.2. S3 limitations
Treat the following limitations with caution. They have implications for your hardware selections, so you should always discuss these requirements with your Red Hat account team.
- Maximum object size when using Amazon S3: Individual Amazon S3 objects can range in size from a minimum of 0 B to a maximum of 5 TB. The largest object that can be uploaded in a single `PUT` is 5 GB. For objects larger than 100 MB, you should consider using the multipart upload capability, as shown in the sketch after this list.
- Maximum metadata size when using Amazon S3: There is no defined limit on the total size of user metadata that can be applied to an object, but a single HTTP request is limited to 16,000 bytes.
- The amount of data overhead the Red Hat Ceph Storage cluster produces to store S3 objects and metadata: The estimate here is 200-300 bytes plus the length of the object name. Versioned objects consume additional space proportional to the number of versions. Also, transient overhead is produced during multipart upload and other transactional updates, but these overheads are recovered during garbage collection.
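For the multipart recommendation above, boto3's managed transfer layer can switch to multipart upload automatically. The following is a minimal sketch, not part of this guide's procedures; the endpoint, credentials, bucket, and file names are placeholder assumptions.

Example

import boto3
from boto3.s3.transfer import TransferConfig

# Placeholder Ceph Object Gateway endpoint and credentials.
s3 = boto3.client(
    's3',
    endpoint_url='http://gateway.example.com:8080',
    aws_access_key_id='MY_ACCESS_KEY',
    aws_secret_access_key='MY_SECRET_KEY',
)

# Switch to multipart upload for files larger than 100 MB.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024)

# upload_file transparently performs a multipart upload when the file
# exceeds the configured threshold.
s3.upload_file('large-file.bin', 'my-bucket', 'large-file.bin', Config=config)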
Additional Resources
- See the Red Hat Ceph Storage Developer Guide for details on the unsupported header fields.
2.3. Accessing the Ceph Object Gateway with the S3 API
As a developer, you must configure access to the Ceph Object Gateway and the Secure Token Service (STS) before you can start using the Amazon S3 API.
2.3.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- A running Ceph Object Gateway.
- A RESTful client.
2.3.2. S3 authentication
Requests to the Ceph Object Gateway can be either authenticated or unauthenticated. Ceph Object Gateway assumes unauthenticated requests are sent by an anonymous user. Ceph Object Gateway supports canned ACLs.
For most use cases, clients use existing open source libraries like the Amazon SDK's `AmazonS3Client` for Java or Python Boto. With open source libraries, you simply pass in the access key and secret key, and the library builds the request header and authentication signature for you. However, you can also create requests and sign them yourself.
Authenticating a request requires including an access key and a base 64-encoded hash-based Message Authentication Code (HMAC) in the request before it is sent to the Ceph Object Gateway server. Ceph Object Gateway uses an S3-compatible authentication approach.
Example
PUT /buckets/bucket/object.mpeg HTTP/1.1
Host: cname.domain.com
Date: Mon, 2 Jan 2012 00:01:01 +0000
Content-Encoding: mpeg
Content-Length: 9999999
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
In the above example, replace `ACCESS_KEY` with the value for the access key ID followed by a colon (`:`). Replace `HASH_OF_HEADER_AND_SECRET` with a hash of a canonicalized header string and the secret corresponding to the access key ID.
Generate hash of header string and secret
To generate the hash of the header string and secret:
- Get the value of the header string.
- Normalize the request header string into canonical form.
- Generate an HMAC using a SHA-1 hashing algorithm.
- Encode the `hmac` result as base-64.
Normalize header
To normalize the header into canonical form:
- Get all `content-` headers.
- Remove all `content-` headers except for `content-type` and `content-md5`.
- Ensure the `content-` header names are lowercase.
- Sort the `content-` headers lexicographically.
- Ensure you have a `Date` header AND ensure the specified date uses GMT and not an offset.
- Get all headers beginning with `x-amz-`.
- Ensure that the `x-amz-` headers are all lowercase.
- Sort the `x-amz-` headers lexicographically.
- Combine multiple instances of the same field name into a single field and separate the field values with a comma.
- Replace white space and line breaks in header values with a single space.
- Remove white space before and after colons.
- Append a new line after each header.
- Merge the headers back into the request header.
Replace `HASH_OF_HEADER_AND_SECRET` with the base-64 encoded HMAC string.
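As an illustration of the signing steps above, here is a minimal Python sketch; the `string_to_sign` value is a hypothetical canonicalized header string, and the key values are placeholders.

Example

import base64
import hmac
from hashlib import sha1

# Placeholder secret key and canonicalized header string built by
# following the normalization steps described above.
secret_key = b'MY_SECRET_KEY'
string_to_sign = b'PUT\n\nmpeg\nMon, 2 Jan 2012 00:01:01 +0000\n/bucket/object.mpeg'

# Generate an HMAC of the canonical string using SHA-1, then encode it as base-64.
signature = base64.b64encode(hmac.new(secret_key, string_to_sign, sha1).digest()).decode()

# The Authorization header combines the access key ID and the signature.
print('Authorization: AWS MY_ACCESS_KEY:' + signature)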
Additional Resources
- For additional details, consult the Signing and Authenticating REST Requests section of Amazon Simple Storage Service documentation.
2.3.3. S3 server-side encryption
The Ceph Object Gateway supports server-side encryption of uploaded objects for the S3 application programming interface (API). Server-side encryption means that the S3 client sends data over HTTP in its unencrypted form, and the Ceph Object Gateway stores that data in the Red Hat Ceph Storage cluster in encrypted form.
Red Hat does NOT support S3 object encryption of Static Large Object (SLO) or Dynamic Large Object (DLO).
To use encryption, client requests MUST send requests over an SSL connection. Red Hat does not support S3 encryption from a client unless the Ceph Object Gateway uses SSL. However, for testing purposes, administrators may disable SSL by setting the `rgw_crypt_require_ssl` configuration setting to `false` at runtime, setting it to `false` in the Ceph configuration file and restarting the gateway instance, or setting it to `false` in the Ansible configuration files and replaying the Ansible playbooks for the Ceph Object Gateway.
In a production environment, it might not be possible to send encrypted requests over SSL. In such a case, send requests using HTTP with server-side encryption.
For information about how to configure HTTP with server-side encryption, see the Additional Resources section below.
There are two options for the management of encryption keys:
Customer-provided Keys
When using customer-provided keys, the S3 client passes an encryption key along with each request to read or write encrypted data. It is the customer’s responsibility to manage those keys. Customers must remember which key the Ceph Object Gateway used to encrypt each object.
Ceph Object Gateway implements the customer-provided key behavior in the S3 API according to the Amazon SSE-C specification.
Since the customer handles the key management and the S3 client passes keys to the Ceph Object Gateway, the Ceph Object Gateway requires no special configuration to support this encryption mode.
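For illustration, a boto3 client can pass a customer-provided key with each request as in the following sketch; the endpoint, credentials, bucket, and key material are placeholder assumptions.

Example

import boto3

# Placeholder Ceph Object Gateway endpoint (SSL) and credentials.
s3 = boto3.client(
    's3',
    endpoint_url='https://gateway.example.com',
    aws_access_key_id='MY_ACCESS_KEY',
    aws_secret_access_key='MY_SECRET_KEY',
)

# A 256-bit key managed by the customer; the same key must be supplied
# again to read the object back.
customer_key = b'0' * 32

s3.put_object(
    Bucket='my-bucket',
    Key='hello.txt',
    Body=b'Hello World!',
    SSECustomerAlgorithm='AES256',
    SSECustomerKey=customer_key,
)

The same `SSECustomerAlgorithm` and `SSECustomerKey` parameters must accompany the matching `get_object` call.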
Key Management Service
When using a key management service, the secure key management service stores the keys and the Ceph Object Gateway retrieves them on demand to serve requests to encrypt or decrypt data.
Ceph Object Gateway implements the key management service behavior in the S3 API according to the Amazon SSE-KMS specification.
Currently, the only tested key management implementations are HashiCorp Vault, and OpenStack Barbican. However, OpenStack Barbican is a Technology Preview and is not supported for use in production systems.
Additional Resources
2.3.4. S3 access control lists
Ceph Object Gateway supports S3-compatible Access Control Lists (ACL) functionality. An ACL is a list of access grants that specify which operations a user can perform on a bucket or on an object. Each grant has a different meaning when applied to a bucket versus applied to an object:
Permission | Bucket | Object |
---|---|---|
`READ` | Grantee can list the objects in the bucket. | Grantee can read the object. |
`WRITE` | Grantee can write or delete objects in the bucket. | N/A |
`READ_ACP` | Grantee can read bucket ACL. | Grantee can read the object ACL. |
`WRITE_ACP` | Grantee can write bucket ACL. | Grantee can write to the object ACL. |
`FULL_CONTROL` | Grantee has full permissions for object in the bucket. | Grantee can read or write to the object ACL. |
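For example, grants can be applied through canned ACLs in the S3 API; the following boto3 sketch uses placeholder endpoint, credential, bucket, and object values.

Example

import boto3

# Placeholder Ceph Object Gateway endpoint and credentials.
s3 = boto3.client(
    's3',
    endpoint_url='http://gateway.example.com:8080',
    aws_access_key_id='MY_ACCESS_KEY',
    aws_secret_access_key='MY_SECRET_KEY',
)

# Grant READ on the bucket to all users with the 'public-read' canned ACL.
s3.put_bucket_acl(Bucket='my-bucket', ACL='public-read')

# Restrict an object to its owner with the 'private' canned ACL.
s3.put_object_acl(Bucket='my-bucket', Key='hello.txt', ACL='private')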
2.3.5. Preparing access to the Ceph Object Gateway using S3
You must complete some prerequisites on the Ceph Object Gateway node before attempting to access the gateway server.

Important: DO NOT modify the Ceph configuration file to use port `80`; let Civetweb use the default Ansible-configured port of `8080`.
Prerequisites
- Installation of the Ceph Object Gateway software.
- Root-level access to the Ceph Object Gateway node.
Procedure
As `root`, open port `8080` on the firewall:

[root@rgw ~]# firewall-cmd --zone=public --add-port=8080/tcp --permanent
[root@rgw ~]# firewall-cmd --reload
Add a wildcard to the DNS server that you are using for the gateway as mentioned in the Object Gateway Configuration and Administration Guide.
You can also set up the gateway node for local DNS caching. To do so, execute the following steps:
As `root`, install and set up `dnsmasq`:

[root@rgw ~]# yum install dnsmasq
[root@rgw ~]# echo "address=/.FQDN_OF_GATEWAY_NODE/IP_OF_GATEWAY_NODE" | tee --append /etc/dnsmasq.conf
[root@rgw ~]# systemctl start dnsmasq
[root@rgw ~]# systemctl enable dnsmasq
Replace `IP_OF_GATEWAY_NODE` and `FQDN_OF_GATEWAY_NODE` with the IP address and FQDN of the gateway node.

As `root`, stop NetworkManager:

[root@rgw ~]# systemctl stop NetworkManager
[root@rgw ~]# systemctl disable NetworkManager

As `root`, set the gateway server's IP as the nameserver:

[root@rgw ~]# echo "DNS1=IP_OF_GATEWAY_NODE" | tee --append /etc/sysconfig/network-scripts/ifcfg-eth0
[root@rgw ~]# echo "IP_OF_GATEWAY_NODE FQDN_OF_GATEWAY_NODE" | tee --append /etc/hosts
[root@rgw ~]# systemctl restart network
[root@rgw ~]# systemctl enable network
[root@rgw ~]# systemctl restart dnsmasq
Replace `IP_OF_GATEWAY_NODE` and `FQDN_OF_GATEWAY_NODE` with the IP address and FQDN of the gateway node.

Verify subdomain requests:

[user@rgw ~]$ ping mybucket.FQDN_OF_GATEWAY_NODE

Replace `FQDN_OF_GATEWAY_NODE` with the FQDN of the gateway node.

Warning: Setting up the gateway server for local DNS caching is for testing purposes only. You will not be able to access the outside network after doing this. It is strongly recommended to use a proper DNS server for the Red Hat Ceph Storage cluster and gateway node.
Create the `radosgw` user for `S3` access carefully as mentioned in the Object Gateway Configuration and Administration Guide and copy the generated `access_key` and `secret_key`. You will need these keys for `S3` access and subsequent bucket management tasks.
2.3.6. Accessing the Ceph Object Gateway using Ruby AWS S3
You can use the Ruby programming language along with the `aws-s3` gem for `S3` access. Execute the steps mentioned below on the node used for accessing the Ceph Object Gateway server with Ruby `AWS::S3`.
Prerequisites
- User-level access to Ceph Object Gateway.
- Root-level access to the node accessing the Ceph Object Gateway.
- Internet access.
Procedure
Install the `ruby` package:

[root@dev ~]# yum install ruby

Note: The above command installs `ruby` and its essential dependencies, such as `rubygems` and `ruby-libs`. If the command does not install all the dependencies, install them separately.

Install the `aws-s3` Ruby package:

[root@dev ~]# gem install aws-s3
Create a project directory:
[user@dev ~]$ mkdir ruby_aws_s3
[user@dev ~]$ cd ruby_aws_s3
Create the connection file:
[user@dev ~]$ vim conn.rb
Paste the following contents into the `conn.rb` file:

Syntax

#!/usr/bin/env ruby

require 'aws/s3'
require 'resolv-replace'

AWS::S3::Base.establish_connection!(
    :server            => 'FQDN_OF_GATEWAY_NODE',
    :port              => '8080',
    :access_key_id     => 'MY_ACCESS_KEY',
    :secret_access_key => 'MY_SECRET_KEY'
)
Replace `FQDN_OF_GATEWAY_NODE` with the FQDN of the Ceph Object Gateway node. Replace `MY_ACCESS_KEY` and `MY_SECRET_KEY` with the `access_key` and `secret_key` that were generated when you created the `radosgw` user for `S3` access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide.

Example

#!/usr/bin/env ruby

require 'aws/s3'
require 'resolv-replace'

AWS::S3::Base.establish_connection!(
    :server            => 'testclient.englab.pnq.redhat.com',
    :port              => '8080',
    :access_key_id     => '98J4R9P22P5CDL65HKP8',
    :secret_access_key => '6C+jcaP0dp0+FZfrRNgyGA9EzRy25pURldwje049'
)
Save the file and exit the editor.
Make the file executable:
[user@dev ~]$ chmod +x conn.rb
Run the file:
[user@dev ~]$ ./conn.rb; echo $?
If you have provided the values correctly in the file, the output of the command will be `0`.

Create a new file for creating a bucket:
[user@dev ~]$ vim create_bucket.rb
Paste the following contents into the file:
#!/usr/bin/env ruby

load 'conn.rb'

AWS::S3::Bucket.create('my-new-bucket1')
Save the file and exit the editor.
Make the file executable:
[user@dev ~]$ chmod +x create_bucket.rb
Run the file:
[user@dev ~]$ ./create_bucket.rb
If the output of the command is `true`, it means that bucket `my-new-bucket1` was created successfully.

Create a new file for listing owned buckets:
[user@dev ~]$ vim list_owned_buckets.rb
Paste the following content into the file:
#!/usr/bin/env ruby

load 'conn.rb'

AWS::S3::Service.buckets.each do |bucket|
    puts "#{bucket.name}\t#{bucket.creation_date}"
end
Save the file and exit the editor.
Make the file executable:
[user@dev ~]$ chmod +x list_owned_buckets.rb
Run the file:
[user@dev ~]$ ./list_owned_buckets.rb
The output should look something like this:
my-new-bucket1 2020-01-21 10:33:19 UTC
Create a new file for creating an object:
[user@dev ~]$ vim create_object.rb
Paste the following contents into the file:
#!/usr/bin/env ruby

load 'conn.rb'

AWS::S3::S3Object.store(
    'hello.txt',
    'Hello World!',
    'my-new-bucket1',
    :content_type => 'text/plain'
)
Save the file and exit the editor.
Make the file executable:
[user@dev ~]$ chmod +x create_object.rb
Run the file:
[user@dev ~]$ ./create_object.rb
This will create a file `hello.txt` with the string `Hello World!`.

Create a new file for listing a bucket's content:
[user@dev ~]$ vim list_bucket_content.rb
Paste the following content into the file:
#!/usr/bin/env ruby

load 'conn.rb'

new_bucket = AWS::S3::Bucket.find('my-new-bucket1')
new_bucket.each do |object|
    puts "#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}"
end
Save the file and exit the editor.
Make the file executable.
[user@dev ~]$ chmod +x list_bucket_content.rb
Run the file:
[user@dev ~]$ ./list_bucket_content.rb
The output will look something like this:
hello.txt 12 Fri, 22 Jan 2020 15:54:52 GMT
Create a new file for deleting an empty bucket:
[user@dev ~]$ vim del_empty_bucket.rb
Paste the following contents into the file:
#!/usr/bin/env ruby

load 'conn.rb'

AWS::S3::Bucket.delete('my-new-bucket1')
Save the file and exit the editor.
Make the file executable:
[user@dev ~]$ chmod +x del_empty_bucket.rb
Run the file:
[user@dev ~]$ ./del_empty_bucket.rb; echo $?
If the bucket is successfully deleted, the command will return `0` as output.

Note: Edit the `create_bucket.rb` file to create empty buckets, for example: `my-new-bucket4`, `my-new-bucket5`. Next, edit the above-mentioned `del_empty_bucket.rb` file accordingly before trying to delete empty buckets.

Create a new file for deleting non-empty buckets:
[user@dev ~]$ vim del_non_empty_bucket.rb
Paste the following contents into the file:
#!/usr/bin/env ruby

load 'conn.rb'

AWS::S3::Bucket.delete('my-new-bucket1', :force => true)
Save the file and exit the editor.
Make the file executable:
[user@dev ~]$ chmod +x del_non_empty_bucket.rb
Run the file:
[user@dev ~]$ ./del_non_empty_bucket.rb; echo $?
If the bucket is successfully deleted, the command will return `0` as output.

Create a new file for deleting an object:
[user@dev ~]$ vim delete_object.rb
Paste the following contents into the file:
#!/usr/bin/env ruby

load 'conn.rb'

AWS::S3::S3Object.delete('hello.txt', 'my-new-bucket1')
Save the file and exit the editor.
Make the file executable:
[user@dev ~]$ chmod +x delete_object.rb
Run the file:
[user@dev ~]$ ./delete_object.rb
This will delete the object `hello.txt`.
2.3.7. Accessing the Ceph Object Gateway using Ruby AWS SDK
You can use the Ruby programming language along with the `aws-sdk` gem for `S3` access. Execute the steps mentioned below on the node used for accessing the Ceph Object Gateway server with Ruby `AWS::SDK`.
Prerequisites
- User-level access to Ceph Object Gateway.
- Root-level access to the node accessing the Ceph Object Gateway.
- Internet access.
Procedure
Install the `ruby` package:

[root@dev ~]# yum install ruby

Note: The above command installs `ruby` and its essential dependencies, such as `rubygems` and `ruby-libs`. If the command does not install all the dependencies, install them separately.

Install the `aws-sdk` Ruby package:

[root@dev ~]# gem install aws-sdk
Create a project directory:
[user@dev ~]$ mkdir ruby_aws_sdk
[user@dev ~]$ cd ruby_aws_sdk
Create the connection file:
[user@ruby_aws_sdk]$ vim conn.rb
Paste the following contents into the `conn.rb` file:

Syntax

#!/usr/bin/env ruby

require 'aws-sdk'
require 'resolv-replace'

Aws.config.update(
    endpoint: 'http://FQDN_OF_GATEWAY_NODE:8080',
    access_key_id: 'MY_ACCESS_KEY',
    secret_access_key: 'MY_SECRET_KEY',
    force_path_style: true,
    region: 'us-east-1'
)
Replace `FQDN_OF_GATEWAY_NODE` with the FQDN of the Ceph Object Gateway node. Replace `MY_ACCESS_KEY` and `MY_SECRET_KEY` with the `access_key` and `secret_key` that were generated when you created the `radosgw` user for `S3` access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide.

Example

#!/usr/bin/env ruby

require 'aws-sdk'
require 'resolv-replace'

Aws.config.update(
    endpoint: 'http://testclient.englab.pnq.redhat.com:8080',
    access_key_id: '98J4R9P22P5CDL65HKP8',
    secret_access_key: '6C+jcaP0dp0+FZfrRNgyGA9EzRy25pURldwje049',
    force_path_style: true,
    region: 'us-east-1'
)
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x conn.rb
Run the file:
[user@ruby_aws_sdk]$ ./conn.rb; echo $?
If you have provided the values correctly in the file, the output of the command will be `0`.

Create a new file for creating a bucket:
[user@ruby_aws_sdk]$ vim create_bucket.rb
Paste the following contents into the file:
Syntax
#!/usr/bin/env ruby

load 'conn.rb'

s3_client = Aws::S3::Client.new
s3_client.create_bucket(bucket: 'my-new-bucket2')
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x create_bucket.rb
Run the file:
[user@ruby_aws_sdk]$ ./create_bucket.rb
If the output of the command is `true`, this means that bucket `my-new-bucket2` was created successfully.

Create a new file for listing owned buckets:
[user@ruby_aws_sdk]$ vim list_owned_buckets.rb
Paste the following content into the file:
#!/usr/bin/env ruby

load 'conn.rb'

s3_client = Aws::S3::Client.new
s3_client.list_buckets.buckets.each do |bucket|
    puts "#{bucket.name}\t#{bucket.creation_date}"
end
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x list_owned_buckets.rb
Run the file:
[user@ruby_aws_sdk]$ ./list_owned_buckets.rb
The output should look something like this:
my-new-bucket2 2022-04-21 10:33:19 UTC
Create a new file for creating an object:
[user@ruby_aws_sdk]$ vim create_object.rb
Paste the following contents into the file:
#!/usr/bin/env ruby

load 'conn.rb'

s3_client = Aws::S3::Client.new
s3_client.put_object(
    key: 'hello.txt',
    body: 'Hello World!',
    bucket: 'my-new-bucket2',
    content_type: 'text/plain'
)
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x create_object.rb
Run the file:
[user@ruby_aws_sdk]$ ./create_object.rb
This will create a file `hello.txt` with the string `Hello World!`.

Create a new file for listing a bucket's content:
[user@ruby_aws_sdk]$ vim list_bucket_content.rb
Paste the following content into the file:
#!/usr/bin/env ruby

load 'conn.rb'

s3_client = Aws::S3::Client.new
s3_client.list_objects(bucket: 'my-new-bucket2').contents.each do |object|
    puts "#{object.key}\t#{object.size}"
end
Save the file and exit the editor.
Make the file executable.
[user@ruby_aws_sdk]$ chmod +x list_bucket_content.rb
Run the file:
[user@ruby_aws_sdk]$ ./list_bucket_content.rb
The output will look something like this:
hello.txt 12 Fri, 22 Apr 2022 15:54:52 GMT
Create a new file for deleting an empty bucket:
[user@ruby_aws_sdk]$ vim del_empty_bucket.rb
Paste the following contents into the file:
#!/usr/bin/env ruby

load 'conn.rb'

s3_client = Aws::S3::Client.new
s3_client.delete_bucket(bucket: 'my-new-bucket2')
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x del_empty_bucket.rb
Run the file:
[user@ruby_aws_sdk]$ ./del_empty_bucket.rb; echo $?
If the bucket is successfully deleted, the command will return `0` as output.

Note: Edit the `create_bucket.rb` file to create empty buckets, for example: `my-new-bucket6`, `my-new-bucket7`. Next, edit the above-mentioned `del_empty_bucket.rb` file accordingly before trying to delete empty buckets.

Create a new file for deleting a non-empty bucket:
[user@ruby_aws_sdk]$ vim del_non_empty_bucket.rb
Paste the following contents into the file:
#!/usr/bin/env ruby

load 'conn.rb'

s3_client = Aws::S3::Client.new
Aws::S3::Bucket.new('my-new-bucket2', client: s3_client).clear!
s3_client.delete_bucket(bucket: 'my-new-bucket2')
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x del_non_empty_bucket.rb
Run the file:
[user@ruby_aws_sdk]$ ./del_non_empty_bucket.rb; echo $?
If the bucket is successfully deleted, the command will return `0` as output.

Create a new file for deleting an object:
[user@ruby_aws_sdk]$ vim delete_object.rb
Paste the following contents into the file:
#!/usr/bin/env ruby

load 'conn.rb'

s3_client = Aws::S3::Client.new
s3_client.delete_object(key: 'hello.txt', bucket: 'my-new-bucket2')
Save the file and exit the editor.
Make the file executable:
[user@ruby_aws_sdk]$ chmod +x delete_object.rb
Run the file:
[user@ruby_aws_sdk]$ ./delete_object.rb
This will delete the object `hello.txt`.
2.3.8. Accessing the Ceph Object Gateway using PHP
You can use PHP scripts for S3 access. This procedure provides some example PHP scripts to do various tasks, such as deleting a bucket or an object.
The examples given below are tested against `php v5.4.16` and `aws-sdk v2.8.24`. DO NOT use the latest version of `aws-sdk` for `php`, as it requires `php >= 5.5+`. `php 5.5` is not available in the default repositories of `RHEL 7`. If you want to use `php 5.5`, you will have to enable `epel` and other third-party repositories. Also, the configuration options for `php 5.5` and the latest version of `aws-sdk` are different.
Prerequisites
- Root-level access to a development workstation.
- Internet access.
Procedure
Install the `php` package:

[root@dev ~]# yum install php

Download the zip archive of `aws-sdk` for PHP and extract it.

Create a project directory:

[user@dev ~]$ mkdir php_s3
[user@dev ~]$ cd php_s3

Copy the extracted `aws` directory to the project directory. For example:

[user@php_s3]$ cp -r ~/Downloads/aws/ ~/php_s3/
Create the connection file:
[user@php_s3]$ vim conn.php
Paste the following contents in the `conn.php` file:

Syntax

<?php
define('AWS_KEY', 'MY_ACCESS_KEY');
define('AWS_SECRET_KEY', 'MY_SECRET_KEY');
define('HOST', 'FQDN_OF_GATEWAY_NODE');
define('PORT', '8080');

// require the AWS SDK for php library
require '/PATH_TO_AWS/aws-autoloader.php';

use Aws\S3\S3Client;

// Establish connection with host using S3 Client
$client = S3Client::factory(array(
    'base_url' => HOST,
    'port'     => PORT,
    'key'      => AWS_KEY,
    'secret'   => AWS_SECRET_KEY
));
?>
Replace `FQDN_OF_GATEWAY_NODE` with the FQDN of the gateway node. Replace `MY_ACCESS_KEY` and `MY_SECRET_KEY` with the `access_key` and `secret_key` that were generated when creating the `radosgw` user for `S3` access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide. Replace `PATH_TO_AWS` with the absolute path to the extracted `aws` directory that you copied to the `php` project directory.

Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f conn.php; echo $?
If you have provided the values correctly in the file, the output of the command will be `0`.

Create a new file for creating a bucket:
[user@php_s3]$ vim create_bucket.php
Paste the following contents into the new file:
Syntax
<?php

include 'conn.php';

$client->createBucket(array('Bucket' => 'my-new-bucket3'));

?>
Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f create_bucket.php
Create a new file for listing owned buckets:
[user@php_s3]$ vim list_owned_buckets.php
Paste the following content into the file:
Syntax
<?php

include 'conn.php';

$blist = $client->listBuckets();
echo "Buckets belonging to " . $blist['Owner']['ID'] . ":\n";
foreach ($blist['Buckets'] as $b) {
    echo "{$b['Name']}\t{$b['CreationDate']}\n";
}

?>
Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f list_owned_buckets.php
The output should look similar to this:
my-new-bucket3 2022-04-21 10:33:19 UTC
Create an object by first creating a source file named `hello.txt`:

[user@php_s3]$ echo "Hello World!" > hello.txt
Create a new php file:
[user@php_s3]$ vim create_object.php
Paste the following contents into the file:
Syntax
<?php

include 'conn.php';

$key         = 'hello.txt';
$source_file = './hello.txt';
$acl         = 'private';
$bucket      = 'my-new-bucket3';

$client->upload($bucket, $key, fopen($source_file, 'r'), $acl);

?>
Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f create_object.php
This will create the object `hello.txt` in bucket `my-new-bucket3`.
.Create a new file for listing a bucket’s content:
[user@php_s3]$ vim list_bucket_content.php
Paste the following content into the file:
Syntax
<?php

include 'conn.php';

$o_iter = $client->getIterator('ListObjects', array(
    'Bucket' => 'my-new-bucket3'
));

foreach ($o_iter as $o) {
    echo "{$o['Key']}\t{$o['Size']}\t{$o['LastModified']}\n";
}

?>
Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f list_bucket_content.php
The output will look similar to this:
hello.txt 12 Fri, 22 Apr 2022 15:54:52 GMT
Create a new file for deleting an empty bucket:
[user@php_s3]$ vim del_empty_bucket.php
Paste the following contents into the file:
Syntax
<?php

include 'conn.php';

$client->deleteBucket(array('Bucket' => 'my-new-bucket3'));

?>
Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f del_empty_bucket.php; echo $?
If the bucket is successfully deleted, the command will return `0` as output.

Note: Edit the `create_bucket.php` file to create empty buckets, for example: `my-new-bucket4`, `my-new-bucket5`. Next, edit the above-mentioned `del_empty_bucket.php` file accordingly before trying to delete empty buckets.

Important: Deleting a non-empty bucket is currently not supported in PHP 2 and newer versions of `aws-sdk`.

Create a new file for deleting an object:
[user@php_s3]$ vim delete_object.php
Paste the following contents into the file:
Syntax
<?php

include 'conn.php';

$client->deleteObject(array(
    'Bucket' => 'my-new-bucket3',
    'Key'    => 'hello.txt',
));

?>
Save the file and exit the editor.
Run the file:
[user@php_s3]$ php -f delete_object.php
This will delete the object `hello.txt`.
2.3.9. Accessing the Ceph Object Gateway using AWS CLI
You can use the AWS CLI for S3 access. This procedure provides steps for installing AWS CLI and some example commands to perform various tasks, such as deleting an object from an MFA-Delete enabled bucket.
Prerequisites
- User-level access to Ceph Object Gateway.
- Root-level access to a development workstation.
- A multi-factor authentication (MFA) TOTP token created using `radosgw-admin mfa create`.
Procedure
Install the `awscli` package:

[user@dev]$ pip3 install --user awscli

Configure `awscli` to access Ceph Object Storage using AWS CLI:

Syntax

aws configure --profile=MY_PROFILE_NAME

AWS Access Key ID [None]: MY_ACCESS_KEY
AWS Secret Access Key [None]: MY_SECRET_KEY
Default region name [None]:
Default output format [None]:
Replace `MY_PROFILE_NAME` with the name you want to use to identify this profile. Replace `MY_ACCESS_KEY` and `MY_SECRET_KEY` with the `access_key` and `secret_key` that were generated when creating the `radosgw` user for `S3` access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide.

Example
[user@dev]$ aws configure --profile=ceph

AWS Access Key ID [None]: 12345
AWS Secret Access Key [None]: 67890
Default region name [None]:
Default output format [None]:
Create an alias to point to the FQDN of your Ceph Object Gateway node:
Syntax
alias aws="aws --endpoint-url=http://FQDN_OF_GATEWAY_NODE:8080"
Replace `FQDN_OF_GATEWAY_NODE` with the FQDN of the Ceph Object Gateway node.

Example
[user@dev]$ alias aws="aws --endpoint-url=http://testclient.englab.pnq.redhat.com:8080"
Create a new bucket:
Syntax
aws --profile=MY_PROFILE_NAME s3api create-bucket --bucket BUCKET_NAME
Replace `MY_PROFILE_NAME` with the name you created to use this profile. Replace `BUCKET_NAME` with a name for your new bucket.

Example
[user@dev]$ aws --profile=ceph s3api create-bucket --bucket mybucket
List owned buckets:
Syntax
aws --profile=MY_PROFILE_NAME s3api list-buckets
Replace `MY_PROFILE_NAME` with the name you created to use this profile.

Example
[user@dev]$ aws --profile=ceph s3api list-buckets
{
    "Buckets": [
        {
            "Name": "mybucket",
            "CreationDate": "2021-08-31T16:46:15.257Z"
        }
    ],
    "Owner": {
        "DisplayName": "User",
        "ID": "user"
    }
}
Configure a bucket for MFA-Delete:
Syntax
aws --profile=MY_PROFILE_NAME s3api put-bucket-versioning --bucket BUCKET_NAME --versioning-configuration '{"Status":"Enabled","MFADelete":"Enabled"}' --mfa 'TOTP_SERIAL TOTP_PIN'
- Replace `MY_PROFILE_NAME` with the name you created to use this profile.
- Replace `BUCKET_NAME` with the name of your new bucket.
- Replace `TOTP_SERIAL` with the string that represents the ID for the TOTP token and replace `TOTP_PIN` with the current pin displayed on your MFA authentication device.
- The `TOTP_SERIAL` is the string that was specified when you created the radosgw user for S3.
- See the Creating a new multi-factor authentication TOTP token section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more details on creating an MFA TOTP token.
- See the Creating a seed for multi-factor authentication using oathtool section in the Red Hat Ceph Storage Developer Guide for details on creating an MFA seed with oathtool.
Example
[user@dev]$ aws --profile=ceph s3api put-bucket-versioning --bucket mybucket --versioning-configuration '{"Status":"Enabled","MFADelete":"Enabled"}' --mfa 'MFAtest 232009'
View MFA-Delete status of the bucket versioning state:
Syntax
aws --profile=MY_PROFILE_NAME s3api get-bucket-versioning --bucket BUCKET_NAME
Replace `MY_PROFILE_NAME` with the name you created to use this profile. Replace `BUCKET_NAME` with the name of your new bucket.

Example
[user@dev]$ aws --profile=ceph s3api get-bucket-versioning --bucket mybucket
{
    "Status": "Enabled",
    "MFADelete": "Enabled"
}
Add an object to the MFA-Delete enabled bucket:
Syntax
aws --profile=MY_PROFILE_NAME s3api put-object --bucket BUCKET_NAME --key OBJECT_KEY --body LOCAL_FILE
- Replace `MY_PROFILE_NAME` with the name you created to use this profile.
- Replace `BUCKET_NAME` with the name of your new bucket.
- Replace `OBJECT_KEY` with the name that will uniquely identify the object in a bucket.
- Replace `LOCAL_FILE` with the name of the local file to upload.

Example
[user@dev]$ aws --profile=ceph s3api put-object --bucket mybucket --key example --body testfile
{
    "ETag": "\"5679b828547a4b44cfb24a23fd9bb9d5\"",
    "VersionId": "3VyyYPTEuIofdvMPWbr1znlOu7lJE3r"
}
List the object versions for a specific object:
Syntax
aws --profile=MY_PROFILE_NAME s3api list-object-versions --bucket BUCKET_NAME --key OBJECT_KEY
- Replace `MY_PROFILE_NAME` with the name you created to use this profile.
- Replace `BUCKET_NAME` with the name of your new bucket.
- Replace `OBJECT_KEY` with the name that was specified to uniquely identify the object in a bucket.

Example
[user@dev]$ aws --profile=ceph s3api list-object-versions --bucket mybucket --key example
{
    "IsTruncated": false,
    "KeyMarker": "example",
    "VersionIdMarker": "",
    "Versions": [
        {
            "ETag": "\"5679b828547a4b44cfb24a23fd9bb9d5\"",
            "Size": 196,
            "StorageClass": "STANDARD",
            "Key": "example",
            "VersionId": "3VyyYPTEuIofdvMPWbr1znlOu7lJE3r",
            "IsLatest": true,
            "LastModified": "2021-08-31T17:48:45.484Z",
            "Owner": {
                "DisplayName": "User",
                "ID": "user"
            }
        }
    ],
    "Name": "mybucket",
    "Prefix": "",
    "MaxKeys": 1000,
    "EncodingType": "url"
}
Delete an object in an MFA-Delete enabled bucket:
Syntax
aws --profile=MY_PROFILE_NAME s3api delete-object --bucket BUCKET_NAME --key OBJECT_KEY --version-id VERSION_ID --mfa 'TOTP_SERIAL TOTP_PIN'
- Replace `MY_PROFILE_NAME` with the name you created to use this profile.
- Replace `BUCKET_NAME` with the name of your bucket that contains the object to delete.
- Replace `OBJECT_KEY` with the name that uniquely identifies the object in a bucket.
- Replace `VERSION_ID` with the VersionID of the specific version of the object you want to delete.
- Replace `TOTP_SERIAL` with the string that represents the ID for the TOTP token and `TOTP_PIN` with the current pin displayed on your MFA authentication device.

Example
[user@dev]$ aws --profile=ceph s3api delete-object --bucket mybucket --key example --version-id 3VyyYPTEuIofdvMPWbr1znlOu7lJE3r --mfa 'MFAtest 420797'
{
    "VersionId": "3VyyYPTEuIofdvMPWbr1znlOu7lJE3r"
}
If the MFA token is not included, the request fails with the error shown below.
Example
[user@dev]$ aws --profile=ceph s3api delete-object --bucket mybucket --key example --version-id 3VyyYPTEuIofdvMPWbr1znlOu7lJE3r

An error occurred (AccessDenied) when calling the DeleteObject operation: Unknown
List object versions to verify object was deleted from MFA-Delete enabled bucket:
Syntax
aws --profile=MY_PROFILE_NAME s3api list-object-versions --bucket BUCKET_NAME --key OBJECT_KEY
- Replace `MY_PROFILE_NAME` with the name you created to use this profile.
- Replace `BUCKET_NAME` with the name of your bucket.
- Replace `OBJECT_KEY` with the name that uniquely identifies the object in a bucket.

Example
[user@dev]$ aws --profile=ceph s3api list-object-versions --bucket mybucket --key example
{
    "IsTruncated": false,
    "KeyMarker": "example",
    "VersionIdMarker": "",
    "Name": "mybucket",
    "Prefix": "",
    "MaxKeys": 1000,
    "EncodingType": "url"
}
2.3.10. Creating a seed for multi-factor authentication using the oathtool command
To set up multi-factor authentication (MFA), you must create a secret seed for use by the time-based one-time password (TOTP) generator and the back-end MFA system. You can use `oathtool` to generate the hexadecimal seed and, optionally, `qrencode` to create a QR code to import the token into your MFA device.
Prerequisites
- A Linux system.
- Access to the command line shell.
- `root` or `sudo` access to the Linux system.
Procedure
Install the `oathtool` package:

[root@dev]# dnf install oathtool

Install the `qrencode` package:

[root@dev]# dnf install qrencode

Generate a 30-character seed from the `urandom` Linux device file and store it in the shell variable `SEED`:

Example
[user@dev]$ SEED=$(head -10 /dev/urandom | sha512sum | cut -b 1-30)
Print the seed by running echo on the `SEED` variable:

Example
[user@dev]$ echo $SEED
BA6GLJBJIKC3D7W7YFYXXAQ7
Feed the `SEED` into the oathtool command:

Syntax
oathtool -v -d6 $SEED
Example
[user@dev]$ oathtool -v -d6 $SEED
Hex secret: 083c65a4294285b1fedfc1717b821f
Base32 secret: BA6GLJBJIKC3D7W7YFYXXAQ7
Digits: 6
Window size: 0
Start counter: 0x0 (0)

823182
Note: The base32 secret is needed to add a token to the authenticator application on your MFA device. You can either use the QR code to import the token into the authenticator application or use the base32 secret to add it manually.
Optional: Create a QR code image file to add the token to the authenticator:
Syntax
qrencode -o /tmp/user.png 'otpauth://totp/TOTP_SERIAL?secret=BASE32_SECRET'

Replace `TOTP_SERIAL` with the string that represents the ID for the TOTP token and `BASE32_SECRET` with the Base32 secret generated by oathtool.

Example
[user@dev]$ qrencode -o /tmp/user.png 'otpauth://totp/MFAtest?secret=BA6GLJBJIKC3D7W7YFYXXAQ7'
- Scan the generated QR code image file to add the token to the authenticator application on your MFA device.
Create the multi-factor authentication TOTP token for the user using the `radosgw-admin` command.
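A minimal sketch of that command follows; the user ID, serial, and seed values are placeholders, and the exact options should be verified against your version of `radosgw-admin`.

Syntax

radosgw-admin mfa create --uid=USER_ID --totp-serial=TOTP_SERIAL --totp-seed=SEED

Example

[root@rgw ~]# radosgw-admin mfa create --uid=johndoe --totp-serial=MFAtest --totp-seed=BA6GLJBJIKC3D7W7YFYXXAQ7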
Additional Resources
- See the Creating a new multi-factor authentication TOTP token section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more details on creating an MFA TOTP token.
2.3.11. Secure Token Service
The Amazon Web Services Secure Token Service (STS) returns a set of temporary security credentials for authenticating users. The Ceph Object Gateway implements a subset of the STS application programming interfaces (APIs) to provide temporary credentials for identity and access management (IAM). These temporary credentials are used to authenticate S3 calls through the STS engine in the Ceph Object Gateway. You can restrict the temporary credentials even further by using an IAM policy, which is a parameter passed to the STS APIs.
Additional Resources
- Amazon Web Services Secure Token Service welcome page.
- See the Configuring and using STS Lite with Keystone section of the Red Hat Ceph Storage Developer Guide for details on STS Lite and Keystone.
- See the Working around the limitations of using STS Lite with Keystone section of the Red Hat Ceph Storage Developer Guide for details on the limitations of STS Lite and Keystone.
2.3.11.1. The Secure Token Service application programming interfaces
The Ceph Object Gateway implements the following Secure Token Service (STS) application programming interfaces (APIs):
AssumeRole
This API returns a set of temporary credentials for cross-account access. The temporary credentials are constrained by both the permission policies attached to the role and the policy passed to the AssumeRole API call. The `RoleArn` and the `RoleSessionName` request parameters are required, but the other request parameters are optional.
RoleArn
- Description
- The role to assume for the Amazon Resource Name (ARN) with a length of 20 to 2048 characters.
- Type
- String
- Required
- Yes
RoleSessionName
- Description
- Identifying the role session name to assume. The role session name can uniquely identify a session when different principals or different reasons assume a role. This parameter's value has a length of 2 to 64 characters. The `=`, `,`, `.`, `@`, and `-` characters are allowed, but no spaces are allowed.
- Type
- String
- Required
- Yes
Policy
- Description
- An identity and access management policy (IAM) in a JSON format for use in an inline session. This parameter’s value has a length of 1 to 2048 characters.
- Type
- String
- Required
- No
DurationSeconds
- Description
- The duration of the session in seconds, with a minimum value of `900` seconds and a maximum value of `43200` seconds. The default value is `3600` seconds.
- Type
- Integer
- Required
- No
ExternalId
- Description
- When assuming a role for another account, provide the unique external identifier if available. This parameter’s value has a length of 2 to 1224 characters.
- Type
- String
- Required
- No
SerialNumber
- Description
- A user’s identification number from their associated multi-factor authentication (MFA) device. The parameter’s value can be the serial number of a hardware device or a virtual device, with a length of 9 to 256 characters.
- Type
- String
- Required
- No
TokenCode
- Description
The value generated from the multi-factor authentication (MFA) device, if the trust policy requires MFA. If an MFA device is required and this parameter's value is empty or expired, the AssumeRole call returns an "access denied" error message. This parameter's value has a fixed length of 6 characters.
- Type
- String
- Required
- No
AssumeRoleWithWebIdentity
This API returns a set of temporary credentials for users who have been authenticated by an application, such as an OpenID Connect or OAuth 2.0 identity provider. The `RoleArn` and the `RoleSessionName` request parameters are required, but the other request parameters are optional. A call sketch follows the parameter list below.
RoleArn
- Description
- The role to assume for the Amazon Resource Name (ARN) with a length of 20 to 2048 characters.
- Type
- String
- Required
- Yes
RoleSessionName
- Description
- Identifying the role session name to assume. The role session name can uniquely identify a session when different principals or different reasons assume a role. This parameter's value has a length of 2 to 64 characters. The `=`, `,`, `.`, `@`, and `-` characters are allowed, but no spaces are allowed.
- Type
- String
- Required
- Yes
Policy
- Description
- An identity and access management policy (IAM) in a JSON format for use in an inline session. This parameter’s value has a length of 1 to 2048 characters.
- Type
- String
- Required
- No
DurationSeconds
- Description
- The duration of the session in seconds, with a minimum value of `900` seconds and a maximum value of `43200` seconds. The default value is `3600` seconds.
- Type
- Integer
- Required
- No
ProviderId
- Description
- The fully qualified host component of the domain name from the identity provider. This parameter’s value is only valid for OAuth 2.0 access tokens, with a length of 4 to 2048 characters.
- Type
- String
- Required
- No
WebIdentityToken
- Description
- The OpenID Connect identity token or OAuth 2.0 access token provided from an identity provider. This parameter’s value has a length of 4 to 2048 characters.
- Type
- String
- Required
- No
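To show how these parameters fit together, here is a hedged boto3 sketch of an AssumeRoleWithWebIdentity call against the Ceph Object Gateway; the endpoint, role ARN, and token values are placeholder assumptions.

Example

import boto3

# Placeholder Ceph Object Gateway endpoint; the web identity token
# itself authenticates the caller.
sts = boto3.client(
    'sts',
    aws_access_key_id='',
    aws_secret_access_key='',
    endpoint_url='http://gateway.example.com:8080',
    region_name='',
)

response = sts.assume_role_with_web_identity(
    RoleArn='arn:aws:iam:::role/S3Access',
    RoleSessionName='web-session',
    DurationSeconds=3600,
    WebIdentityToken='TOKEN_FROM_IDENTITY_PROVIDER',
)

# The temporary credentials are returned under response['Credentials'].
print(response['Credentials']['AccessKeyId'])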
Additional Resources
- See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details.
- Amazon Web Services Security Token Service, the AssumeRole action.
- Amazon Web Services Security Token Service, the AssumeRoleWithWebIdentity action.
2.3.11.2. Configuring the Secure Token Service
Configure the Secure Token Service (STS) for use with the Ceph Object Gateway using Ceph Ansible.
The S3 and STS APIs co-exist in the same namespace, and both can be accessed from the same endpoint in the Ceph Object Gateway.
Prerequisites
- A Ceph Ansible administration node.
- A running Red Hat Ceph Storage cluster.
- A running Ceph Object Gateway.
Procedure
Open for editing the `group_vars/rgws.yml` file.

Add the following lines:

rgw_sts_key = STS_KEY
rgw_s3_auth_use_sts = true

Replace `STS_KEY` with the key used to encrypt the session token.

Save the changes to the `group_vars/rgws.yml` file.

Rerun the appropriate Ceph Ansible playbook:
Bare-metal deployments:
[user@admin ceph-ansible]$ ansible-playbook site.yml --limit rgws
Container deployments:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit rgws
Additional Resources
- See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs.
2.3.11.3. Creating a user for an OpenID Connect provider
To establish trust between the Ceph Object Gateway and the OpenID Connect provider, create a user entity and a role trust policy.
Prerequisites
- User-level access to the Ceph Object Gateway node.
Procedure
Create a new Ceph user:
Syntax
radosgw-admin --uid USER_NAME --display-name "DISPLAY_NAME" --access_key USER_NAME --secret SECRET user create
Example
[user@rgw ~]$ radosgw-admin --uid TESTER --display-name "TestUser" --access_key TESTER --secret test123 user create
Configure the Ceph user capabilities:
Syntax
radosgw-admin caps add --uid="USER_NAME" --caps="oidc-provider=*"
Example
[user@rgw ~]$ radosgw-admin caps add --uid="TESTER" --caps="oidc-provider=*"
Add a condition to the role trust policy using the Secure Token Service (STS) API:
Syntax
"{\"Version\":\"2020-01-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Federated\":[\"arn:aws:iam:::oidc-provider/IDP_URL\"]},\"Action\":[\"sts:AssumeRoleWithWebIdentity\"],\"Condition\":{\"StringEquals\":{\"IDP_URL:app_id\":\"AUD_FIELD\"\}\}\}\]\}"
Important: The `app_id` in the syntax example above must match the `AUD_FIELD` field of the incoming token.
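For illustration, this trust policy is typically supplied when creating the role; a sketch using the `radosgw-admin role create` command covered later in this guide follows, with the role name and path as placeholders.

Example

[user@rgw ~]$ radosgw-admin role create --role-name=S3Access --path=/ --assume-role-policy-doc=\{\"Version\":\"2020-01-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"Federated\":\[\"arn:aws:iam:::oidc-provider/IDP_URL\"\]\},\"Action\":\[\"sts:AssumeRoleWithWebIdentity\"\],\"Condition\":\{\"StringEquals\":\{\"IDP_URL:app_id\":\"AUD_FIELD\"\}\}\}\]\}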
Additional Resources
- See the Obtaining the Root CA Thumbprint for an OpenID Connect Identity Provider article on Amazon’s website.
- See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs.
- See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details.
2.3.11.4. Obtaining a thumbprint of an OpenID Connect provider
Follow this procedure to obtain the OpenID Connect provider's (IDP) configuration document and certificate thumbprint.
Prerequisites
- Installation of the `openssl` and `curl` packages.
Procedure
Get the configuration document from the IDP’s URL:
Syntax
curl -k -v \
     -X GET \
     -H "Content-Type: application/x-www-form-urlencoded" \
     "IDP_URL:8000/CONTEXT/realms/REALM/.well-known/openid-configuration" \
| jq .
Example
[user@client ~]$ curl -k -v \
     -X GET \
     -H "Content-Type: application/x-www-form-urlencoded" \
     "http://www.example.com:8000/auth/realms/quickstart/.well-known/openid-configuration" \
| jq .
Get the IDP certificate:
Syntax
curl -k -v \
     -X GET \
     -H "Content-Type: application/x-www-form-urlencoded" \
     "IDP_URL/CONTEXT/realms/REALM/protocol/openid-connect/certs" \
| jq .
Example
[user@client ~]$ curl -k -v \
     -X GET \
     -H "Content-Type: application/x-www-form-urlencoded" \
     "http://www.example.com/auth/realms/quickstart/protocol/openid-connect/certs" \
| jq .
Copy the result of the "x5c" response from the previous command and paste it into the `certificate.crt` file. Include `-----BEGIN CERTIFICATE-----` at the beginning and `-----END CERTIFICATE-----` at the end.

Get the certificate thumbprint:
Syntax
openssl x509 -in CERT_FILE -fingerprint -noout
Example
[user@client ~]$ openssl x509 -in certificate.crt -fingerprint -noout
SHA1 Fingerprint=F7:D7:B3:51:5D:D0:D3:19:DD:21:9A:43:A9:EA:72:7A:D6:06:52:87
- Remove all the colons from the SHA1 fingerprint and use this as the input for creating the IDP entity in the IAM request.
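As an illustration, the colon-free thumbprint can then be supplied when registering the IDP entity; the following boto3 sketch assumes the gateway exposes the IAM-compatible endpoint and uses placeholder URL, client ID, and thumbprint values.

Example

import boto3

# Placeholder gateway endpoint and credentials of a user with
# oidc-provider capabilities.
iam = boto3.client(
    'iam',
    endpoint_url='http://gateway.example.com:8080',
    aws_access_key_id='TESTER',
    aws_secret_access_key='test123',
    region_name='',
)

# The thumbprint is the SHA1 fingerprint from openssl with the colons removed.
response = iam.create_open_id_connect_provider(
    Url='http://www.example.com:8000/auth/realms/quickstart',
    ClientIDList=['app-profile-jsp'],
    ThumbprintList=['F7D7B3515DD0D319DD219A43A9EA727AD6065287'],
)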
Additional Resources
- See the Obtaining the Root CA Thumbprint for an OpenID Connect Identity Provider article on Amazon’s website.
- See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs.
- See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details.
2.3.11.5. Configuring and using STS Lite with Keystone (Technology Preview)
The Amazon Secure Token Service (STS) and S3 APIs co-exist in the same namespace. The STS options can be configured in conjunction with the Keystone options.
Both S3 and STS APIs can be accessed using the same endpoint in Ceph Object Gateway.
Prerequisites
- Red Hat Ceph Storage 3.2 or higher.
- A running Ceph Object Gateway.
- Installation of the Boto Python module, version 3 or higher.
Procedure
Open and edit the `group_vars/rgws.yml` file with the following options:

rgw_sts_key = STS_KEY
rgw_s3_auth_use_sts = true

Replace `STS_KEY` with the key used to encrypt the session token.
Rerun the appropriate Ceph Ansible playbook:
Bare-metal deployments:
[user@admin ceph-ansible]$ ansible-playbook site.yml --limit rgws
Container deployments:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit rgws
Generate the EC2 credentials:
Example
[user@osp ~]$ openstack ec2 credentials create

+------------+--------------------------------------------------------+
| Field      | Value                                                  |
+------------+--------------------------------------------------------+
| access     | b924dfc87d454d15896691182fdeb0ef                       |
| links      | {u'self': u'http://192.168.0.15/identity/v3/users/     |
|            | 40a7140e424f493d8165abc652dc731c/credentials/          |
|            | OS-EC2/b924dfc87d454d15896691182fdeb0ef'}              |
| project_id | c703801dccaf4a0aaa39bec8c481e25a                       |
| secret     | 6a2142613c504c42a94ba2b82147dc28                       |
| trust_id   | None                                                   |
| user_id    | 40a7140e424f493d8165abc652dc731c                       |
+------------+--------------------------------------------------------+
Use the generated credentials to get back a set of temporary security credentials using the GetSessionToken API.
Example
import boto3

access_key = 'b924dfc87d454d15896691182fdeb0ef'
secret_key = '6a2142613c504c42a94ba2b82147dc28'

client = boto3.client('sts',
                      aws_access_key_id=access_key,
                      aws_secret_access_key=secret_key,
                      endpoint_url='https://www.example.com/rgw',
                      region_name='')

response = client.get_session_token(
    DurationSeconds=43200
)
The temporary credentials obtained can then be used for making S3 calls:
Example
s3client = boto3.client('s3',
                        aws_access_key_id=response['Credentials']['AccessKeyId'],
                        aws_secret_access_key=response['Credentials']['SecretAccessKey'],
                        aws_session_token=response['Credentials']['SessionToken'],
                        endpoint_url='https://www.example.com/s3',
                        region_name='')

bucket = s3client.create_bucket(Bucket='my-new-shiny-bucket')
response = s3client.list_buckets()
for bucket in response["Buckets"]:
    print("{name}\t{created}".format(
        name=bucket['Name'],
        created=bucket['CreationDate'],
    ))
Create a new S3Access role and configure a policy.
Assign a user with administrative CAPS:
Syntax
radosgw-admin caps add --uid="USER" --caps="roles=*"
Example
[user@client]$ radosgw-admin caps add --uid="gwadmin" --caps="roles=*"
Create the S3Access role:
Syntax
radosgw-admin role create --role-name=ROLE_NAME --path=PATH --assume-role-policy-doc=TRUST_POLICY_DOC
Example
[user@client]$ radosgw-admin role create --role-name=S3Access --path=/application_abc/component_xyz/ --assume-role-policy-doc=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"AWS\":\[\"arn:aws:iam:::user/TESTER\"\]\},\"Action\":\[\"sts:AssumeRole\"\]\}\]\}
Attach a permission policy to the S3Access role:
Syntax
radosgw-admin role-policy put --role-name=ROLE_NAME --policy-name=POLICY_NAME --policy-doc=PERMISSION_POLICY_DOC
Example
[user@client]$ radosgw-admin role-policy put --role-name=S3Access --policy-name=Policy --policy-doc=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Action\":\[\"s3:*\"\],\"Resource\":\"arn:aws:s3:::example_bucket\"\}\]\}
Now another user can assume the role of the `gwadmin` user. For example, the `gwuser` user can assume the permissions of the `gwadmin` user.

Make a note of the assuming user's `access_key` and `secret_key` values.

Example
[user@client]$ radosgw-admin user info --uid=gwuser | grep -A1 access_key
Use the AssumeRole API call, providing the `access_key` and `secret_key` values from the assuming user:

Example
import boto3

access_key = '11BS02LGFB6AL6H1ADMW'
secret_key = 'vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY'

client = boto3.client('sts',
                      aws_access_key_id=access_key,
                      aws_secret_access_key=secret_key,
                      endpoint_url='https://www.example.com/rgw',
                      region_name='')

response = client.assume_role(
    RoleArn='arn:aws:iam:::role/application_abc/component_xyz/S3Access',
    RoleSessionName='Bob',
    DurationSeconds=3600
)
ImportantThe AssumeRole API requires the S3Access role.
Additional Resources
- See the Test S3 Access section in the Red Hat Ceph Storage Object Gateway Guide for more information on installing the Boto Python module.
- See the Create a User section in the Red Hat Ceph Storage Object Gateway Guide for more information.
2.3.11.6. Working around the limitations of using STS Lite with Keystone (Technology Preview)
A limitation of Keystone is that it does not support STS requests. Another limitation is that the payload hash is not included with the request. To work around these two limitations, the Boto authentication code must be modified.
Prerequisites
- A running Red Hat Ceph Storage cluster, version 3.2 or higher.
- A running Ceph Object Gateway.
- Installation of Boto Python module, version 3 or higher.
Procedure
Open and edit Boto's `auth.py` file.

Add the following four lines to the code block:
class SigV4Auth(BaseSigner):
    """
    Sign a request with Signature V4.
    """
    REQUIRES_REGION = True

    def __init__(self, credentials, service_name, region_name):
        self.credentials = credentials
        # We initialize these value here so the unit tests can have
        # valid values. But these will get overriden in ``add_auth``
        # later for real requests.
        self._region_name = region_name
        if service_name == 'sts':              # (1)
            self._service_name = 's3'          # (2)
        else:                                  # (3)
            self._service_name = service_name  # (4)
Add the following two lines to the code block:
def _modify_request_before_signing(self, request):
    if 'Authorization' in request.headers:
        del request.headers['Authorization']
    self._set_necessary_date_headers(request)
    if self.credentials.token:
        if 'X-Amz-Security-Token' in request.headers:
            del request.headers['X-Amz-Security-Token']
        request.headers['X-Amz-Security-Token'] = self.credentials.token

    if not request.context.get('payload_signing_enabled', True):
        if 'X-Amz-Content-SHA256' in request.headers:
            del request.headers['X-Amz-Content-SHA256']
        request.headers['X-Amz-Content-SHA256'] = UNSIGNED_PAYLOAD  # (1)
    else:                                                           # (2)
        request.headers['X-Amz-Content-SHA256'] = self.payload(request)
Additional Resources
- See the Test S3 Access section in the Red Hat Ceph Storage Object Gateway Guide for more information on installing the Boto Python module.
2.3.12. Session tags for Attribute-based access control (ABAC) in STS
Session tags are key-value pairs that can be passed while federating a user. They are passed as `aws:PrincipalTag` in the session or temporary credentials that are returned by the secure token service (STS). These principal tags consist of session tags that come in as part of the web token and tags that are attached to the role being assumed.
Currently, the session tags are only supported as part of the web token passed to `AssumeRoleWithWebIdentity`.
The tags must always be specified in the following namespace: `https://aws.amazon.com/tags`.
The trust policy must have `sts:TagSession` permission if the web token passed in by the federated user contains session tags. Otherwise, the `AssumeRoleWithWebIdentity` action fails.
Example of the trust policy with `sts:TagSession`:
{ "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":["sts:AssumeRoleWithWebIdentity","sts:TagSession"], "Principal":{"Federated":["arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart"]}, "Condition":{"StringEquals":{"localhost:8080/auth/realms/quickstart:sub":"test"}} }] }
Properties
The following are the properties of session tags:
- Session tags can be multi-valued.

  Note: Multi-valued session tags are not supported in Amazon Web Services (AWS).
- Keycloak can be set up as an OpenID Connect Identity Provider (IDP) with a maximum of 50 session tags.
- The maximum size of a key allowed is 128 characters.
- The maximum size of a value allowed is 256 characters.
- The tag or the value cannot start with `aws:`.
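To make the namespace requirement concrete, session tags might appear in the web token claims as in the following hedged sketch; the tag key and values are placeholders, and the exact claim layout should be verified against your identity provider configuration.

Example

{
    "https://aws.amazon.com/tags": {
        "principal_tags": {
            "Department": ["Engineering", "Marketing"]
        }
    }
}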
Additional Resources
- See the Secure Token Service section in the Red Hat Ceph Storage Developer Guide for more information about secure token service.
2.3.12.1. Tag keys
The following are the tag keys that can be used in the role trust policy or the role permission policy.
aws:RequestTag
- Description
Compares the key-value pair passed in the request with the key-value pair in the role’s trust policy.
In the case of `AssumeRoleWithWebIdentity`, session tags can be used as `aws:RequestTag` in the role trust policy. Those session tags are passed by Keycloak in the web token. As a result, a federated user can assume a role.
aws:PrincipalTag
- Description
Compares the key-value pair attached to the principal with the key-value pair in the policy.
In the case of `AssumeRoleWithWebIdentity`, session tags appear as principal tags in the temporary credentials once a user is authenticated. Those session tags are passed by Keycloak in the web token. They can be used as `aws:PrincipalTag` in the role permission policy.
iam:ResourceTag
- Description
Compares the key-value pair attached to the resource with the key-value pair in the policy.
In the case of `AssumeRoleWithWebIdentity`, tags attached to the role are compared with those in the trust policy to allow a user to assume a role.

Note: The Ceph Object Gateway now supports RESTful APIs for tagging, listing tags, and untagging actions on a role.
aws:TagKeys
- Description
Compares tags in the request with the tags in the policy.
In the case of `AssumeRoleWithWebIdentity`, tags are used to check the tag keys in a role trust policy or permission policy before a user is allowed to assume a role.
s3:ResourceTag
- Description
Compares tags present on the S3 resource, that is, a bucket or an object, with the tags in the role's permission policy.
It can be used for authorizing an S3 operation in the Ceph Object Gateway. However, this is not allowed in AWS.
It is a key used to refer to tags that have been attached to an object or a bucket. Tags can be attached to an object or a bucket using RESTful APIs available for the same.
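For illustration, a role permission policy that authorizes an S3 operation based on a resource tag might look like the following sketch; the tag key and value are placeholder assumptions.

Example

{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Effect":"Allow",
            "Action":["s3:GetObject"],
            "Resource":["arn:aws:s3:::*"],
            "Condition":{"StringEquals":{"s3:ResourceTag/Department":"Engineering"}}
        }
    ]
}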
2.3.12.2. S3 resource tags
The following list shows which S3 resource tag type is supported for authorizing a particular operation.
- Tag type: Object tags
- Operations
- `GetObject`, `GetObjectTags`, `DeleteObjectTags`, `DeleteObject`, `PutACLs`, `InitMultipart`, `AbortMultipart`, `ListMultipart`, `GetAttrs`, `PutObjectRetention`, `GetObjectRetention`, `PutObjectLegalHold`, `GetObjectLegalHold`
- Tag type: Bucket tags
- Operations
- `PutObjectTags`, `GetBucketTags`, `PutBucketTags`, `DeleteBucketTags`, `GetBucketReplication`, `DeleteBucketReplication`, `GetBucketVersioning`, `SetBucketVersioning`, `GetBucketWebsite`, `SetBucketWebsite`, `DeleteBucketWebsite`, `StatBucket`, `ListBucket`, `GetBucketLogging`, `GetBucketLocation`, `DeleteBucket`, `GetLC`, `PutLC`, `DeleteLC`, `GetCORS`, `PutCORS`, `GetRequestPayment`, `SetRequestPayment`, `PutBucketPolicy`, `GetBucketPolicy`, `DeleteBucketPolicy`, `PutBucketObjectLock`, `GetBucketObjectLock`, `GetBucketPolicyStatus`, `PutBucketPublicAccessBlock`, `GetBucketPublicAccessBlock`, `DeleteBucketPublicAccessBlock`
- Tag type: Bucket tags for bucket ACLs, Object tags for object ACLs
- Operations
- GetACLs, PutACLs
- Tag type: Object tags of source object, Bucket tags of destination bucket
- Operations
- PutObject, CopyObject
2.4. S3 bucket operations
As a developer, you can perform bucket operations with the Amazon S3 application programming interface (API) through the Ceph Object Gateway.
The following table lists the Amazon S3 functional operations for buckets, along with the function’s support status.
Feature | Status | Notes |
---|---|---|
Supported | ||
Supported | Different set of canned ACLs. | |
Partially Supported |
| |
Partially Supported |
| |
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | Different set of canned ACLs | |
Supported | Different set of canned ACLs | |
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Partially Supported | ||
Supported | ||
Supported | ||
Supported |
2.4.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- A RESTful client.
2.4.2. S3 create bucket notifications
Create bucket notifications at the bucket level. The notification configuration contains the Red Hat Ceph Storage Object Gateway S3 events, ObjectCreated and ObjectRemoved, that are published, and the destination to which the bucket notifications are sent. Bucket notifications are S3 operations.
To create a bucket notification for s3:objectCreate and s3:objectRemove events, use PUT:
Example
client.put_bucket_notification_configuration(
    Bucket=bucket_name,
    NotificationConfiguration={
        'TopicConfigurations': [{
            'Id': notification_name,
            'TopicArn': topic_arn,
            'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*']
        }]
    }
)
Red Hat supports ObjectCreate events, such as put, post, multipartUpload, and copy. Red Hat also supports ObjectRemove events, such as object_delete and s3_multi_object_delete.
Request Entities
NotificationConfiguration
- Description
- List of TopicConfiguration entities.
- Type
- Container
- Required
- Yes
TopicConfiguration
- Description
- Id, Topic, and a list of Event entities.
- Type
- Container
- Required
- Yes
id
- Description
- Name of the notification.
- Type
- String
- Required
- Yes
Topic
- Description
Topic Amazon Resource Name (ARN).

Note: The topic must be created beforehand.
- Type
- String
- Required
- Yes
Event
- Description
- List of supported events. Multiple event entities can be used. If omitted, all events are handled.
- Type
- String
- Required
- No
Filter
- Description
- S3Key, S3Metadata, and S3Tags entities.
- Type
- Container
- Required
- No
S3Key
- Description
- A list of FilterRule entities for filtering based on the object key. At most, three entities may be in the list, for example, Name would be prefix, suffix, or regex. All filter rules in the list must match for the filter to match.
- Type
- Container
- Required
- No
S3Metadata
- Description
- A list of FilterRule entities for filtering based on object metadata. All filter rules in the list must match the metadata defined on the object. However, the object still matches if it has other metadata entries not listed in the filter.
- Type
- Container
- Required
- No
S3Tags
- Description
- A list of FilterRule entities for filtering based on object tags. All filter rules in the list must match the tags defined on the object. However, the object still matches if it has other tags not listed in the filter.
- Type
- Container
- Required
- No
S3Key.FilterRule
- Description
- Name and Value entities. Name is prefix, suffix, or regex. The Value holds the key prefix, key suffix, or a regular expression for matching the key, accordingly.
- Type
- Container
- Required
- Yes
S3Metadata.FilterRule
- Description
- Name and Value entities. Name is the name of the metadata attribute, for example, x-amz-meta-xxx. The value is the expected value for this attribute.
- Type
- Container
- Required
- Yes
S3Tags.FilterRule
- Description
- Name and Value entities. Name is the tag key, and the value is the tag value.
- Type
- Container
- Required
- Yes
HTTP response
400
- Status Code
- MalformedXML
- Description
- The XML is not well-formed.
400
- Status Code
- InvalidArgument
- Description
- Missing Id, missing or invalid topic ARN, or invalid event.
404
- Status Code
- NoSuchBucket
- Description
- The bucket does not exist.
404
- Status Code
- NoSuchKey
- Description
- The topic does not exist.
id="s3-get-bucket-notifications_dev"]
2.4.3. S3 get bucket notifications
Get a specific notification or list all the notifications configured on a bucket.
Syntax
GET /BUCKET?notification=NOTIFICATION_ID HTTP/1.1
Host: cname.domain.com
Date: date
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
Example
GET /testbucket?notification=testnotificationID HTTP/1.1
Host: cname.domain.com
Date: date
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
Example Response
<NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <TopicConfiguration>
        <Id></Id>
        <Topic></Topic>
        <Event></Event>
        <Filter>
            <S3Key>
                <FilterRule>
                    <Name></Name>
                    <Value></Value>
                </FilterRule>
            </S3Key>
            <S3Metadata>
                <FilterRule>
                    <Name></Name>
                    <Value></Value>
                </FilterRule>
            </S3Metadata>
            <S3Tags>
                <FilterRule>
                    <Name></Name>
                    <Value></Value>
                </FilterRule>
            </S3Tags>
        </Filter>
    </TopicConfiguration>
</NotificationConfiguration>
The notification
subresource returns the bucket notification configuration or an empty NotificationConfiguration
element. The caller must be the bucket owner.
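The same configuration can also be fetched programmatically. The following is a minimal boto3 sketch; the endpoint URL, credentials, and bucket name are placeholders, not values defined by this guide:

Example

import boto3

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

# Returns the notification configuration of the bucket; the
# 'TopicConfigurations' key is absent when no notifications are defined.
response = client.get_bucket_notification_configuration(Bucket='testbucket')
for conf in response.get('TopicConfigurations', []):
    print(conf['Id'], conf['TopicArn'], conf['Events'])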
Request Entities
notification-id
- Description
- Name of the notification. All notifications are listed if the ID is not provided.
- Type
- String
NotificationConfiguration
- Description
- List of TopicConfiguration entities.
- Type
- Container
- Required
- Yes
TopicConfiguration
- Description
- Id, Topic, and a list of Event entities.
- Type
- Container
- Required
- Yes
id
- Description
- Name of the notification.
- Type
- String
- Required
- Yes
Topic
- Description
Topic Amazon Resource Name (ARN).

Note: The topic must be created beforehand.
- Type
- String
- Required
- Yes
Event
- Description
- Handled event. Multiple event entities may exist.
- Type
- String
- Required
- Yes
Filter
- Description
- The filters for the specified configuration.
- Type
- Container
- Required
- No
HTTP response
404
- Status Code
- NoSuchBucket
- Description
- The bucket does not exist.
404
- Status Code
- NoSuchKey
- Description
- The notification does not exist, if a notification ID was provided.
2.4.4. S3 delete bucket notifications
Delete a specific or all notifications from a bucket.
Notification deletion is an extension to the S3 notification API. Any notifications defined on a bucket are deleted when the bucket is deleted. Deleting an unknown notification, for example a double delete, is not considered an error.
To delete a specific or all notifications use DELETE:
Syntax
DELETE /BUCKET?notification=NOTIFICATION_ID HTTP/1.1
Example
DELETE /testbucket?notification=testnotificationID HTTP/1.1
Request Entities
notification-id
- Description
- Name of the notification. All notifications on the bucket are deleted if the notification ID is not provided.
- Type
- String
HTTP response
404
- Status Code
- NoSuchBucket
- Description
- The bucket does not exist.
2.4.5. Accessing bucket host names
There are two different modes of accessing buckets. The first, and preferred, method identifies the bucket as the top-level directory in the URI.
Example
GET /mybucket HTTP/1.1 Host: cname.domain.com
The second method identifies the bucket via a virtual bucket host name.
Example
GET / HTTP/1.1 Host: mybucket.cname.domain.com
Red Hat prefers the first method, because the second method requires expensive domain certification and DNS wildcards.
2.4.6. S3 list buckets
GET /
returns a list of buckets created by the user making the request. GET /
only returns buckets created by an authenticated user. You cannot make an anonymous request.
Syntax
GET / HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
Name | Type | Description |
---|---|---|
| Container | Container for list of buckets. |
| Container | Container for bucket information. |
| String | Bucket name. |
| Date | UTC time when the bucket was created. |
| Container | A container for the result. |
| Container |
A container for the bucket owner’s |
| String | The bucket owner’s ID. |
| String | The bucket owner’s display name. |
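For illustration, the following is a minimal boto3 sketch of the same request; the endpoint and credentials are placeholders:

Example

import boto3

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

# Each entry corresponds to a Bucket element with Name and CreationDate.
for bucket in client.list_buckets()['Buckets']:
    print(bucket['Name'], bucket['CreationDate'])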
2.4.7. S3 return a list of bucket objects
Returns a list of bucket objects.
Syntax
GET /BUCKET?max-keys=25 HTTP/1.1
Host: cname.domain.com
Name | Type | Description |
---|---|---|
| String | Only returns objects that contain the specified prefix. |
| String | The delimiter between the prefix and the rest of the object name. |
| String | A beginning index for the list of objects returned. |
| Integer | The maximum number of keys to return. Default is 1000. |
HTTP Status | Status Code | Description |
---|---|---|
| OK | Buckets retrieved |
GET /BUCKET
returns a container for buckets with the following fields:
Name | Type | Description |
---|---|---|
| Entity | The container for the list of objects. |
| String | The name of the bucket whose contents will be returned. |
| String | A prefix for the object keys. |
| String | A beginning index for the list of objects returned. |
| Integer | The maximum number of keys returned. |
| String |
If set, objects with the same prefix will appear in the |
| Boolean |
If |
| Container | If multiple objects contain the same prefix, they will appear in this list. |
The ListBucketResult
contains objects, where each object is within a Contents
container.
Name | Type | Description |
---|---|---|
| Object | A container for the object. |
| String | The object’s key. |
| Date | The object’s last-modified date/time. |
| String | An MD-5 hash of the object (entity tag). |
| Integer | The object’s size. |
| String |
Should always return |
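For illustration, the following is a minimal boto3 sketch that lists objects with a prefix and a max-keys limit; the bucket name, prefix, endpoint, and credentials are placeholders:

Example

import boto3

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

# Each Contents entry maps to the Key, LastModified, ETag, and Size fields above.
response = client.list_objects_v2(Bucket='mybucket', Prefix='keypre/', MaxKeys=25)
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'], obj['ETag'])

# IsTruncated indicates whether more keys remain beyond max-keys.
print(response['IsTruncated'])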
2.4.8. S3 create a new bucket
Creates a new bucket. To create a bucket, you must have a user ID and a valid AWS Access Key ID to authenticate requests. You cannot create buckets as an anonymous user.
Constraints
In general, bucket names should follow domain name constraints.
- Bucket names must be unique.
- Bucket names must begin and end with a lowercase letter.
- Bucket names can contain a dash (-).
Syntax
PUT /BUCKET HTTP/1.1
Host: cname.domain.com
x-amz-acl: public-read-write
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
Name | Description | Valid Values | Required |
---|---|---|---|
| Canned ACLs. |
| No |
HTTP Response
If the bucket name is unique, satisfies the constraints, and is unused, the operation succeeds. If a bucket with the same name already exists and the user is the bucket owner, the operation succeeds. If the bucket name is already in use by another user, the operation fails.
HTTP Status | Status Code | Description |
---|---|---|
| BucketAlreadyExists | Bucket already exists under different user’s ownership. |
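For illustration, the following is a minimal boto3 sketch of the create operation, including the error case from the table above; the bucket name, endpoint, and credentials are placeholders:

Example

import boto3
from botocore.exceptions import ClientError

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

try:
    # The ACL parameter maps to the x-amz-acl header and is optional.
    client.create_bucket(Bucket='mybucket', ACL='private')
except ClientError as e:
    # 409 BucketAlreadyExists: the name is owned by a different user.
    print(e.response['Error']['Code'])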
2.4.9. S3 delete a bucket
Deletes a bucket. You can reuse bucket names following a successful bucket removal.
Syntax
DELETE /BUCKET HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
HTTP Status | Status Code | Description |
---|---|---|
| No Content | Bucket removed. |
2.4.10. S3 bucket lifecycle
You can use a bucket lifecycle configuration to manage your objects so they are stored effectively throughout their lifetime. The S3 API in the Ceph Object Gateway supports a subset of the AWS bucket lifecycle actions:
- Expiration: This defines the lifespan of objects within a bucket. It takes the number of days the object should live or an expiration date, at which point Ceph Object Gateway will delete the object. If the bucket does not have versioning enabled, Ceph Object Gateway will delete the object permanently. If the bucket has versioning enabled, Ceph Object Gateway will create a delete marker for the current version, and then delete the current version.
- NoncurrentVersionExpiration: This defines the lifespan of non-current object versions within a bucket. To use this feature, the bucket must have versioning enabled. It takes the number of days a non-current object should live, at which point Ceph Object Gateway will delete the non-current object.
- AbortIncompleteMultipartUpload: This defines the number of days an incomplete multipart upload should live before it is aborted.
The lifecycle configuration contains one or more rules using the <Rule>
element.
Example
<LifecycleConfiguration>
    <Rule>
        <Prefix/>
        <Status>Enabled</Status>
        <Expiration>
            <Days>10</Days>
        </Expiration>
    </Rule>
</LifecycleConfiguration>
A lifecycle rule can apply to all or a subset of objects in a bucket based on the <Filter>
element that you specify in the lifecycle rule. You can specify a filter several ways:
- Key prefixes
- Object tags
- Both key prefix and one or more object tags
Key prefixes
You can apply a lifecycle rule to a subset of objects based on the key name prefix. For example, specifying a prefix of keypre/ applies the rule to objects whose keys begin with keypre/:
<LifecycleConfiguration>
    <Rule>
        <Status>Enabled</Status>
        <Filter>
            <Prefix>keypre/</Prefix>
        </Filter>
    </Rule>
</LifecycleConfiguration>
You can also apply different lifecycle rules to objects with different key prefixes:
<LifecycleConfiguration>
    <Rule>
        <Status>Enabled</Status>
        <Filter>
            <Prefix>keypre/</Prefix>
        </Filter>
    </Rule>
    <Rule>
        <Status>Enabled</Status>
        <Filter>
            <Prefix>mypre/</Prefix>
        </Filter>
    </Rule>
</LifecycleConfiguration>
Object tags
You can apply a lifecycle rule to only objects with a specific tag using the <Key>
and <Value>
elements:
<LifecycleConfiguration>
    <Rule>
        <Status>Enabled</Status>
        <Filter>
            <Tag>
                <Key>key</Key>
                <Value>value</Value>
            </Tag>
        </Filter>
    </Rule>
</LifecycleConfiguration>
Both prefix and one or more tags
In a lifecycle rule, you can specify a filter based on both the key prefix and one or more tags. They must be wrapped in the <And>
element. A filter can have only one prefix, and zero or more tags:
<LifecycleConfiguration>
    <Rule>
        <Status>Enabled</Status>
        <Filter>
            <And>
                <Prefix>key-prefix</Prefix>
                <Tag>
                    <Key>key1</Key>
                    <Value>value1</Value>
                </Tag>
                <Tag>
                    <Key>key2</Key>
                    <Value>value2</Value>
                </Tag>
                ...
            </And>
        </Filter>
    </Rule>
</LifecycleConfiguration>
Additional Resources
- See the Red Hat Ceph Storage Developer Guide for details on getting a bucket lifecycle.
- See the Red Hat Ceph Storage Developer Guide for details on creating a bucket lifecycle.
- See the Red Hat Ceph Storage Developer Guide for details on deleting a bucket lifecycle.
2.4.11. S3 GET bucket lifecycle
To get a bucket lifecycle, use GET
and specify a destination bucket.
Syntax
GET /BUCKET?lifecycle HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
Request Headers
See the Common Request Headers for more information.
Response
The response contains the bucket lifecycle and its elements.
2.4.12. S3 create or replace a bucket lifecycle
To create or replace a bucket lifecycle, use PUT
and specify a destination bucket and a lifecycle configuration. The Ceph Object Gateway only supports a subset of the S3 lifecycle functionality.
Syntax
PUT /BUCKET?lifecycle HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET

<LifecycleConfiguration>
    <Rule>
        <Expiration>
            <Days>10</Days>
        </Expiration>
    </Rule>
    ...
    <Rule>
    </Rule>
</LifecycleConfiguration>
Name | Description | Valid Values | Required |
---|---|---|---|
content-md5 | A base64 encoded MD-5 hash of the message. | A string. No defaults or constraints. | No |
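For illustration, the equivalent request can be made with boto3, which computes the content-md5 header automatically. The rule ID, bucket name, endpoint, and credentials below are placeholders:

Example

import boto3

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

client.put_bucket_lifecycle_configuration(
    Bucket='mybucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'expire-after-10-days',  # hypothetical rule name
            'Filter': {'Prefix': ''},      # an empty prefix matches every object
            'Status': 'Enabled',
            'Expiration': {'Days': 10}
        }]
    }
)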
Additional Resources
- See the Red Hat Ceph Storage Developer Guide for details on common Amazon S3 request headers.
- See the Red Hat Ceph Storage Developer Guide for details on Amazon S3 bucket lifecycles.
2.4.13. S3 delete a bucket lifecycle
To delete a bucket lifecycle, use DELETE
and specify a destination bucket.
Syntax
DELETE /BUCKET?lifecycle HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
Request Headers
The request does not contain any special elements.
Response
The response returns common response status.
Additional Resources
- See Appendix A for Amazon S3 common request headers.
- See Appendix B for Amazon S3 common response status codes.
2.4.14. S3 get bucket location
Retrieves the bucket’s zone group. The user needs to be the bucket owner to call this. A bucket can be constrained to a zone group by providing LocationConstraint
during a PUT request.
Add the location
subresource to the bucket resource as shown below.
Syntax
GET /BUCKET?location HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
Name | Type | Description |
---|---|---|
| String | The zone group where the bucket resides; an empty string indicates the default zone group. |
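For illustration, a minimal boto3 sketch with a placeholder endpoint, credentials, and bucket name:

Example

import boto3

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

# An empty or None LocationConstraint indicates the default zone group.
response = client.get_bucket_location(Bucket='mybucket')
print(response['LocationConstraint'])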
2.4.15. S3 get bucket versioning
Retrieves the versioning state of a bucket. The user needs to be the bucket owner to call this.
Add the versioning
subresource to the bucket resource as shown below.
Syntax
GET /BUCKET?versioning HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
2.4.16. S3 put the bucket versioning
This subresource sets the versioning state of an existing bucket. The user needs to be the bucket owner to set the versioning state. If the versioning state has never been set on a bucket, it has no versioning state, and a GET versioning request does not return a versioning state value.
Setting the bucket versioning state:
- Enabled: Enables versioning for the objects in the bucket. All objects added to the bucket receive a unique version ID.
- Suspended: Disables versioning for the objects in the bucket. All objects added to the bucket receive the version ID null.
Syntax
PUT /BUCKET?versioning HTTP/1.1
Name | Type | Description |
---|---|---|
| container | A container for the request. |
| String | Sets the versioning state of the bucket. Valid Values: Suspended/Enabled |
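For illustration, a minimal boto3 sketch that enables versioning; the bucket name, endpoint, and credentials are placeholders:

Example

import boto3

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

# Status may be 'Enabled' or 'Suspended', matching the table above.
client.put_bucket_versioning(
    Bucket='mybucket',
    VersioningConfiguration={'Status': 'Enabled'}
)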
2.4.17. S3 get bucket access control lists
Retrieves the bucket access control list. The user needs to be the bucket owner or to have been granted READ_ACP
permission on the bucket.
Add the acl
subresource to the bucket request as shown below.
Syntax
GET /BUCKET?acl HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
Name | Type | Description |
---|---|---|
| Container | A container for the response. |
| Container | A container for the ACL information. |
| Container |
A container for the bucket owner’s |
| String | The bucket owner’s ID. |
| String | The bucket owner’s display name. |
| Container |
A container for |
| Container |
A container for the |
| String |
The permission given to the |
2.4.18. S3 put bucket Access Control Lists
Sets an access control to an existing bucket. The user needs to be the bucket owner or to have been granted WRITE_ACP
permission on the bucket.
Add the acl
subresource to the bucket request as shown below.
Syntax
PUT /BUCKET?acl HTTP/1.1
Name | Type | Description |
---|---|---|
| Container | A container for the request. |
| Container | A container for the ACL information. |
| Container |
A container for the bucket owner’s |
| String | The bucket owner’s ID. |
| String | The bucket owner’s display name. |
| Container |
A container for |
| Container |
A container for the |
| String |
The permission given to the |
2.4.19. S3 get bucket cors
Retrieves the cors configuration information set for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP
permission on the bucket.
Add the cors
subresource to the bucket request as shown below.
Syntax
GET /BUCKET?cors HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
2.4.20. S3 put bucket cors
Sets the cors configuration for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP
permission on the bucket.
Add the cors
subresource to the bucket request as shown below.
Syntax
PUT /BUCKET?cors HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
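For illustration, a minimal boto3 sketch that sets a CORS rule; the origin, bucket name, endpoint, and credentials are placeholders:

Example

import boto3

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

client.put_bucket_cors(
    Bucket='mybucket',
    CORSConfiguration={
        'CORSRules': [{
            'AllowedOrigins': ['https://www.example.com'],  # hypothetical origin
            'AllowedMethods': ['GET', 'PUT'],
            'AllowedHeaders': ['*'],
            'MaxAgeSeconds': 3000
        }]
    }
)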
2.4.21. S3 delete a bucket cors
Deletes the cors configuration information set for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP
permission on the bucket.
Add the cors
subresource to the bucket request as shown below.
Syntax
DELETE /BUCKET?cors HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
2.4.22. S3 list bucket object versions
Returns a list of metadata about all the versions of objects within a bucket. Requires READ access to the bucket.
Add the versions
subresource to the bucket request as shown below.
Syntax
GET /BUCKET?versions HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
You can specify parameters for GET /BUCKET?versions
, but none of them are required.
Name | Type | Description |
---|---|---|
| String | Returns objects whose keys contain the specified prefix. |
| String | The delimiter between the prefix and the rest of the object name. |
| String | The beginning marker for the list of uploads. |
| Integer | The maximum number of in-progress uploads. The default is 1000. |
| String | Specifies the object version to begin the list. |
Name | Type | Description |
---|---|---|
| String |
The key marker specified by the |
| String |
The key marker to use in a subsequent request if |
| String |
The upload ID marker to use in a subsequent request if |
| Boolean |
If |
| Integer | The size of the uploaded part. |
| String | The owner’s display name. |
| String | The owner’s ID. |
| Container |
A container for the |
| String |
The method used to store the resulting object. |
| Container | Container for the version information. |
| String | Version ID of an object. |
| String | The last version of the key in a truncated response. |
2.4.23. S3 head bucket
Calls HEAD on a bucket to determine if it exists and if the caller has access permissions. Returns 200 OK
if the bucket exists and the caller has permissions; 404 Not Found
if the bucket does not exist; and, 403 Forbidden
if the bucket exists but the caller does not have access permissions.
Syntax
HEAD /BUCKET HTTP/1.1
Host: cname.domain.com
Date: date
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
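For illustration, a minimal boto3 sketch that maps the three response codes above onto client behavior; the bucket name, endpoint, and credentials are placeholders:

Example

import boto3
from botocore.exceptions import ClientError

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

try:
    client.head_bucket(Bucket='mybucket')
    print('200 OK: bucket exists and is accessible')
except ClientError as e:
    # '404' if the bucket does not exist, '403' if access is denied.
    print(e.response['Error']['Code'])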
2.4.24. S3 list multipart uploads
GET /?uploads returns a list of the current in-progress multipart uploads, that is, multipart uploads that an application has initiated but has not yet completed or aborted.
Syntax
GET /BUCKET?uploads HTTP/1.1
You can specify parameters for GET /BUCKET?uploads
, but none of them are required.
Name | Type | Description |
---|---|---|
| String | Returns in-progress uploads whose keys contain the specified prefix. |
| String | The delimiter between the prefix and the rest of the object name. |
| String | The beginning marker for the list of uploads. |
| Integer | The maximum number of in-progress uploads. The default is 1000. |
| Integer | The maximum number of multipart uploads. The range is 1-1000. The default is 1000. |
| String |
Ignored if |
Name | Type | Description |
---|---|---|
| Container | A container for the results. |
| String |
The prefix specified by the |
| String | The bucket that will receive the bucket contents. |
| String |
The key marker specified by the |
| String |
The marker specified by the |
| String |
The key marker to use in a subsequent request if |
| String |
The upload ID marker to use in a subsequent request if |
| Integer |
The max uploads specified by the |
| String |
If set, objects with the same prefix will appear in the |
| Boolean |
If |
| Container |
A container for |
| String | The key of the object once the multipart upload is complete. |
| String |
The |
| Container |
Contains the |
| String | The initiator’s display name. |
| String | The initiator’s ID. |
| Container |
A container for the |
| String |
The method used to store the resulting object. |
| Date | The date and time the user initiated the upload. |
| Container | If multiple objects contain the same prefix, they will appear in this list. |
| String |
The substring of the key after the prefix as defined by the |
2.4.25. S3 bucket policies
The Ceph Object Gateway supports a subset of the Amazon S3 policy language applied to buckets.
Creation and Removal
Ceph Object Gateway manages S3 Bucket policies through standard S3 operations rather than using the radosgw-admin
CLI tool.
Administrators may use the s3cmd
command to set or delete a policy.
Example
$ cat > examplepol
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::usfolks:user/fred"]},
        "Action": "s3:PutObjectAcl",
        "Resource": ["arn:aws:s3:::happybucket/*"]
    }]
}
$ s3cmd setpolicy examplepol s3://happybucket
$ s3cmd delpolicy s3://happybucket
Limitations
Ceph Object Gateway only supports the following S3 actions:
-
s3:AbortMultipartUpload
-
s3:CreateBucket
-
s3:DeleteBucketPolicy
-
s3:DeleteBucket
-
s3:DeleteBucketWebsite
-
s3:DeleteObject
-
s3:DeleteObjectVersion
-
s3:GetBucketAcl
-
s3:GetBucketCORS
-
s3:GetBucketLocation
-
s3:GetBucketPolicy
-
s3:GetBucketRequestPayment
-
s3:GetBucketVersioning
-
s3:GetBucketWebsite
-
s3:GetLifecycleConfiguration
-
s3:GetObjectAcl
-
s3:GetObject
-
s3:GetObjectTorrent
-
s3:GetObjectVersionAcl
-
s3:GetObjectVersion
-
s3:GetObjectVersionTorrent
-
s3:ListAllMyBuckets
-
s3:ListBucketMultiPartUploads
-
s3:ListBucket
-
s3:ListBucketVersions
-
s3:ListMultipartUploadParts
-
s3:PutBucketAcl
-
s3:PutBucketCORS
-
s3:PutBucketPolicy
-
s3:PutBucketRequestPayment
-
s3:PutBucketVersioning
-
s3:PutBucketWebsite
-
s3:PutLifecycleConfiguration
-
s3:PutObjectAcl
-
s3:PutObject
-
s3:PutObjectVersionAcl
Ceph Object Gateway does not support setting policies on users, groups, or roles.
The Ceph Object Gateway uses the RGW ‘tenant’ identifier in place of the Amazon twelve-digit account ID. Ceph Object Gateway administrators who want to use policies between Amazon Web Services (AWS) S3 and Ceph Object Gateway S3 will have to use the Amazon account ID as the tenant ID when creating users.
With AWS S3, all tenants share a single namespace. By contrast, Ceph Object Gateway gives every tenant its own namespace of buckets. At present, Ceph Object Gateway clients trying to access a bucket belonging to another tenant MUST address it as tenant:bucket
in the S3 request.
In AWS, a bucket policy can grant access to another account, and that account owner can then grant access to individual users with user permissions. Since Ceph Object Gateway does not yet support user, role, and group permissions, account owners need to grant access directly to individual users.
Granting an entire account access to a bucket grants access to ALL users in that account.
Bucket policies do NOT support string interpolation.
Ceph Object Gateway supports the following condition keys:
-
aws:CurrentTime
-
aws:EpochTime
-
aws:PrincipalType
-
aws:Referer
-
aws:SecureTransport
-
aws:SourceIp
-
aws:UserAgent
-
aws:username
Ceph Object Gateway ONLY supports the following condition keys for the ListBucket
action:
-
s3:prefix
-
s3:delimiter
-
s3:max-keys
Impact on Swift
Ceph Object Gateway provides no functionality to set bucket policies under the Swift API. However, bucket policies that have been set with the S3 API govern Swift as well as S3 operations.
Ceph Object Gateway matches Swift credentials against Principals specified in a policy.
2.4.26. S3 get the request payment configuration on a bucket
Uses the requestPayment
subresource to return the request payment configuration of a bucket. The user needs to be the bucket owner or to have been granted READ_ACP
permission on the bucket.
Add the requestPayment
subresource to the bucket request as shown below.
Syntax
GET /BUCKET?requestPayment HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
2.4.27. S3 set the request payment configuration on a bucket
Uses the requestPayment
subresource to set the request payment configuration of a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner to specify that the person requesting the download will be charged for the request and the data download from the bucket.
Add the requestPayment
subresource to the bucket request as shown below.
Syntax
PUT /BUCKET?requestPayment HTTP/1.1
Host: cname.domain.com
Name | Type | Description |
---|---|---|
| Enum | Specifies who pays for the download and request fees. |
| Container |
A container for |
2.4.28. Multi-tenant bucket operations
When a client application accesses buckets, it always operates with the credentials of a particular user. In a Red Hat Ceph Storage cluster, every user belongs to a tenant. Consequently, every bucket operation has an implicit tenant in its context if no tenant is specified explicitly. Thus multi-tenancy is completely backward compatible with previous releases, as long as the referred buckets and the referring user belong to the same tenant.
Extensions employed to specify an explicit tenant differ according to the protocol and authentication system used.
In the following example, a colon character separates tenant and bucket. Thus a sample URL would be:
https://rgw.domain.com/tenant:bucket
By contrast, a simple Python example separates the tenant and bucket in the bucket method itself:
Example
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

c = S3Connection(
    aws_access_key_id="TESTER",
    aws_secret_access_key="test123",
    host="rgw.domain.com",
    calling_format=OrdinaryCallingFormat()
)
bucket = c.get_bucket("tenant:bucket")
It’s not possible to use S3-style subdomains using multi-tenancy, since host names cannot contain colons or any other separators that are not already valid in bucket names. Using a period creates an ambiguous syntax. Therefore, the bucket-in-URL-path
format has to be used with multi-tenancy.
Additional Resources
- See Multi Tenancy for additional details.
2.4.29. Additional Resources
- See the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for details on configuring a bucket website.
2.5. S3 object operations
As a developer, you can perform object operations with the Amazon S3 application programming interface (API) through the Ceph Object Gateway.
The following table lists the Amazon S3 functional operations for objects, along with the function’s support status.
Get Object | Supported | |
---|---|---|
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Supported | ||
Multi-Tenancy | Supported |
2.5.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- A RESTful client.
2.5.2. S3 get an object from a bucket
Retrieves an object from a bucket:
Syntax
GET /BUCKET/OBJECT HTTP/1.1
Add the versionId
subresource to retrieve a particular version of the object:
Syntax
GET /BUCKET/OBJECT?versionId=VERSION_ID HTTP/1.1
Name | Description | Valid Values | Required |
---|---|---|---|
range | The range of the object to retrieve. | Range: bytes=beginbyte-endbyte | No |
if-modified-since | Gets only if modified since the timestamp. | Timestamp | No |
if-unmodified-since | Gets only if not modified since the timestamp. | Timestamp | No |
if-match | Gets only if object ETag matches ETag. | Entity Tag | No |
if-none-match | Gets only if object ETag doesn’t match ETag. | Entity Tag | No
Name | Description |
---|---|
Content-Range | Data range; returned only if the range header field was specified in the request |
x-amz-version-id | Returns the version ID or null. |
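For illustration, a minimal boto3 sketch that retrieves a byte range of an object; the bucket, key, endpoint, and credentials are placeholders:

Example

import boto3

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

# The Range parameter maps to the range request header.
response = client.get_object(Bucket='mybucket', Key='object.mpeg',
                             Range='bytes=0-99')
data = response['Body'].read()  # the first 100 bytes of the object

# x-amz-version-id surfaces as VersionId when versioning is enabled.
print(len(data), response.get('VersionId'))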
2.5.3. S3 get information on an object
Returns information about an object. This request will return the same header information as with the Get Object request, but will include the metadata only, not the object data payload.
Retrieves the current version of the object:
Syntax
HEAD /BUCKET/OBJECT HTTP/1.1
Add the versionId
subresource to retrieve info for a particular version:
Syntax
HEAD /BUCKET/OBJECT?versionId=VERSION_ID HTTP/1.1
Name | Description | Valid Values | Required |
---|---|---|---|
range | The range of the object to retrieve. | Range: bytes=beginbyte-endbyte | No |
if-modified-since | Gets only if modified since the timestamp. | Timestamp | No |
if-unmodified-since | Gets only if not modified since the timestamp. | Timestamp | No |
if-match | Gets only if object ETag matches ETag. | Entity Tag | No |
if-none-match | Gets only if object ETag doesn’t match ETag. | Entity Tag | No
Name | Description |
---|---|
x-amz-version-id | Returns the version ID or null. |
2.5.4. S3 add an object to a bucket
Adds an object to a bucket. You must have write permissions on the bucket to perform this operation.
Syntax
PUT /BUCKET/OBJECT HTTP/1.1
Name | Description | Valid Values | Required |
---|---|---|---|
content-md5 | A base64 encoded MD-5 hash of the message. | A string. No defaults or constraints. | No |
content-type | A standard MIME type. |
Any MIME type. Default: | No |
x-amz-meta-<…> | User metadata. Stored with the object. | A string up to 8kb. No defaults. | No |
x-amz-acl | A canned ACL. |
| No |
Name | Description |
---|---|
x-amz-version-id | Returns the version ID or null. |
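For illustration, a minimal boto3 sketch that uploads an object with the headers from the table above; the file name, bucket, endpoint, and credentials are placeholders:

Example

import boto3

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

with open('object.mpeg', 'rb') as body:
    client.put_object(
        Bucket='mybucket',
        Key='object.mpeg',
        Body=body,
        ContentType='video/mpeg',      # content-type header
        Metadata={'project': 'demo'},  # sent as x-amz-meta-project
        ACL='private'                  # x-amz-acl canned ACL
    )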
2.5.5. S3 delete an object
Removes an object. Requires WRITE permission set on the containing bucket. If object versioning is on, deleting an object creates a delete marker for the current version.
Syntax
DELETE /BUCKET/OBJECT HTTP/1.1
To delete an object when versioning is on, you must specify the versionId
subresource and the version of the object to delete.
DELETE /BUCKET/OBJECT?versionId=VERSION_ID HTTP/1.1
2.5.6. S3 delete multiple objects
This API call deletes multiple objects from a bucket.
Syntax
POST /BUCKET/OBJECT?delete HTTP/1.1
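For illustration, a minimal boto3 sketch of a multi-object delete; the keys, bucket, endpoint, and credentials are placeholders:

Example

import boto3

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

response = client.delete_objects(
    Bucket='mybucket',
    Delete={
        'Objects': [{'Key': 'object1'}, {'Key': 'object2'}],
        'Quiet': True  # suppress per-key results for successful deletes
    }
)

# Keys that could not be deleted are reported under 'Errors'.
print(response.get('Errors', []))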
2.5.7. S3 get an object’s Access Control List (ACL)
Returns the ACL for the current version of the object:
Syntax
GET /BUCKET/OBJECT?acl HTTP/1.1
Add the versionId
subresource to retrieve the ACL for a particular version:
Syntax
GET /BUCKET/OBJECT?versionId=VERSION_ID&acl HTTP/1.1
Name | Description |
---|---|
x-amz-version-id | Returns the version ID or null. |
Name | Type | Description |
---|---|---|
| Container | A container for the response. |
| Container | A container for the ACL information. |
| Container |
A container for the object owner’s |
| String | The object owner’s ID. |
| String | The object owner’s display name. |
| Container |
A container for |
| Container |
A container for the |
| String |
The permission given to the |
2.5.8. S3 set an object’s Access Control List (ACL)
Sets an object ACL for the current version of the object.
Syntax
PUT /BUCKET/OBJECT?acl HTTP/1.1
Name | Type | Description |
---|---|---|
| Container | A container for the response. |
| Container | A container for the ACL information. |
| Container |
A container for the object owner’s |
| String | The object owner’s ID. |
| String | The object owner’s display name. |
| Container |
A container for |
| Container |
A container for the |
| String |
The permission given to the |
2.5.9. S3 copy an object
To copy an object, use PUT
and specify a destination bucket and the object name.
Syntax
PUT /DEST_BUCKET/DEST_OBJECT HTTP/1.1
x-amz-copy-source: SOURCE_BUCKET/SOURCE_OBJECT
Name | Description | Valid Values | Required |
---|---|---|---|
x-amz-copy-source | The source bucket name + object name. | BUCKET/OBJECT | Yes |
x-amz-acl | A canned ACL. |
| No |
x-amz-copy-if-modified-since | Copies only if modified since the timestamp. | Timestamp | No |
x-amz-copy-if-unmodified-since | Copies only if unmodified since the timestamp. | Timestamp | No |
x-amz-copy-if-match | Copies only if object ETag matches ETag. | Entity Tag | No |
x-amz-copy-if-none-match | Copies only if object ETag doesn’t match. | Entity Tag | No |
Name | Type | Description |
---|---|---|
CopyObjectResult | Container | A container for the response elements. |
LastModified | Date | The last modified date of the source object. |
Etag | String | The ETag of the new object. |
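For illustration, a minimal boto3 sketch of a server-side copy; the bucket and object names, endpoint, and credentials are placeholders:

Example

import boto3

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

# CopySource maps to the x-amz-copy-source header.
client.copy_object(
    Bucket='destbucket',
    Key='destobject',
    CopySource={'Bucket': 'srcbucket', 'Key': 'srcobject'}
)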
2.5.10. S3 add an object to a bucket using HTML forms
Adds an object to a bucket using HTML forms. You must have write permissions on the bucket to perform this operation.
Syntax
POST /BUCKET/OBJECT HTTP/1.1
2.5.11. S3 determine options for a request
A preflight request to determine if an actual request can be sent with the specific origin, HTTP method, and headers.
Syntax
OPTIONS /OBJECT HTTP/1.1
2.5.12. S3 initiate a multipart upload
Initiates a multi-part upload process. Returns a UploadId
, which you can specify when adding additional parts, listing parts, and completing or abandoning a multi-part upload.
Syntax
POST /BUCKET/OBJECT?uploads
Name | Description | Valid Values | Required |
---|---|---|---|
| A base64 encoded MD-5 hash of the message. | A string. No defaults or constraints. | No |
| A standard MIME type. |
Any MIME type. Default: | No |
| User metadata. Stored with the object. | A string up to 8kb. No defaults. | No |
| A canned ACL. |
| No |
Name | Type | Description |
---|---|---|
| Container | A container for the results. |
| String | The bucket that will receive the object contents. |
| String |
The key specified by the |
| String |
The ID specified by the |
2.5.13. S3 add a part to a multipart upload
Adds a part to a multi-part upload.
Specify the uploadId
subresource and the upload ID to add a part to a multi-part upload:
Syntax
PUT /BUCKET/OBJECT?partNumber=PART_NUMBER&uploadId=UPLOAD_ID HTTP/1.1
The following HTTP response might be returned:
HTTP Status | Status Code | Description |
---|---|---|
| NoSuchUpload | Specified upload-id does not match any initiated upload on this object |
2.5.14. S3 list the parts of a multipart upload
Specify the uploadId
subresource and the upload ID to list the parts of a multi-part upload:
Syntax
GET /BUCKET/OBJECT?uploadId=UPLOAD_ID HTTP/1.1
Name | Type | Description |
---|---|---|
| Container | A container for the results. |
| String | The bucket that will receive the object contents. |
| String |
The key specified by the |
| String |
The ID specified by the |
| Container |
Contains the |
| String | The initiator’s ID. |
| String | The initiator’s display name. |
| Container |
A container for the |
| String |
The method used to store the resulting object. |
| String |
The part marker to use in a subsequent request if |
| String |
The next part marker to use in a subsequent request if |
| Integer |
The max parts allowed in the response as specified by the |
| Boolean |
If |
| Container |
A container for |
| Integer | The identification number of the part. |
| String | The part’s entity tag. |
| Integer | The size of the uploaded part. |
2.5.15. S3 assemble the uploaded parts
Assembles uploaded parts and creates a new object, thereby completing a multipart upload.
Specify the uploadId
subresource and the upload ID to complete a multi-part upload:
Syntax
POST /BUCKET/OBJECT?uploadId=UPLOAD_ID HTTP/1.1
Name | Type | Description | Required |
---|---|---|---|
| Container | A container consisting of one or more parts. | Yes |
| Container |
A container for the | Yes |
| Integer | The identifier of the part. | Yes |
| String | The part’s entity tag. | Yes |
Name | Type | Description |
---|---|---|
| Container | A container for the response. |
| URI | The resource identifier (path) of the new object. |
| String | The name of the bucket that contains the new object. |
| String | The object’s key. |
| String | The entity tag of the new object. |
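For illustration, the following is a minimal boto3 sketch of the full multipart flow: initiate the upload, add the parts, and assemble them. The file name, bucket, endpoint, and credentials are placeholders:

Example

import boto3

# Placeholder endpoint and credentials; substitute your own.
client = boto3.client('s3',
                      endpoint_url='http://rgw.domain.com:8080',
                      aws_access_key_id='ACCESS_KEY',
                      aws_secret_access_key='SECRET_KEY')

def read_chunks(path, chunk_size=5 * 1024 * 1024):
    """Yield file chunks; every part except the last must be at least 5MB."""
    with open(path, 'rb') as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            yield data

# 1. Initiate the upload and record the UploadId.
upload = client.create_multipart_upload(Bucket='mybucket', Key='bigobject')
upload_id = upload['UploadId']

# 2. Upload each part, recording its PartNumber and ETag.
parts = []
for part_number, chunk in enumerate(read_chunks('bigobject.bin'), start=1):
    result = client.upload_part(Bucket='mybucket', Key='bigobject',
                                PartNumber=part_number, UploadId=upload_id,
                                Body=chunk)
    parts.append({'PartNumber': part_number, 'ETag': result['ETag']})

# 3. Assemble the parts into the final object.
client.complete_multipart_upload(Bucket='mybucket', Key='bigobject',
                                 UploadId=upload_id,
                                 MultipartUpload={'Parts': parts})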
2.5.16. S3 copy a multipart upload
Uploads a part by copying data from an existing object as the data source.
Specify the uploadId
subresource and the upload ID to perform a multi-part upload copy:
Syntax
PUT /BUCKET/OBJECT?partNumber=PART_NUMBER&uploadId=UPLOAD_ID HTTP/1.1
Host: cname.domain.com
Authorization: AWS ACCESS_KEY:HASH_OF_HEADER_AND_SECRET
Name | Description | Valid Values | Required |
---|---|---|---|
| The source bucket name and object name. | BUCKET/OBJECT | Yes |
| The range of bytes to copy from the source object. |
Range: | No |
Name | Type | Description |
---|---|---|
| Container | A container for all response elements. |
| String | Returns the ETag of the new part. |
| String | Returns the date the part was last modified. |
Additional Resources
- For more information about this feature, see the Amazon S3 site.
2.5.17. S3 abort a multipart upload
Aborts a multipart upload.
Specify the uploadId
subresource and the upload ID to abort a multi-part upload:
Syntax
DELETE /BUCKET/OBJECT?uploadId=UPLOAD_ID HTTP/1.1
2.5.18. S3 Hadoop interoperability
For data analytics applications that require Hadoop Distributed File System (HDFS) access, the Ceph Object Gateway can be accessed using the Apache S3A connector for Hadoop. The S3A connector is an open source tool that presents S3-compatible object storage as an HDFS file system, providing HDFS read and write semantics to applications while the data is stored in the Ceph Object Gateway.
Ceph Object Gateway is fully compatible with the S3A connector that ships with Hadoop 2.7.3.
2.5.19. Additional Resources
- See the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for details on multi-tenancy.
2.6. Additional Resources
- See Appendix A for Amazon S3 common request headers.
- See Appendix B for Amazon S3 common response status codes.
- See Appendix C for unsupported header fields.