Chapter 4. Basic configuration
As a storage administrator, learning the basics of configuring the Ceph Object Gateway is important. You can learn about the defaults and the embedded web server called Beast. For troubleshooting issues with the Ceph Object Gateway, you can adjust the logging and debugging output generated by the Ceph Object Gateway. Also, you can provide a High-Availability proxy for storage cluster access using the Ceph Object Gateway.
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- Installation of the Ceph Object Gateway software package.
4.1. Add a wildcard to the DNS
You can add a wildcard, such as *.hostname, to the DNS record of the DNS server.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Ceph Object Gateway installed.
- Root-level access to the admin node.
Procedure
To use Ceph with S3-style subdomains, add a wildcard to the DNS record of the DNS server that the ceph-radosgw daemon uses to resolve domain names:
Syntax
bucket-name.domain-name.com
For dnsmasq, add the following address setting with a dot (.) prepended to the host name:
Syntax
address=/.HOSTNAME_OR_FQDN/HOST_IP_ADDRESS
Example
address=/.gateway-host01/192.168.122.75
For bind, add a wildcard to the DNS record:
Example
$TTL    604800
@       IN      SOA     gateway-host01. root.gateway-host01. (
                              2         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      gateway-host01.
@       IN      A       192.168.122.113
*       IN      CNAME   @
Restart the DNS server and ping the server with a subdomain to ensure that the ceph-radosgw daemon can process the subdomain requests:
Syntax
ping mybucket.HOSTNAME
Example
[root@host01 ~]# ping mybucket.gateway-host01
If the DNS server is on the local machine, you might need to modify /etc/resolv.conf by adding a nameserver entry for the local machine.
Add the host name in the Ceph Object Gateway zone group:
Get the zone group:
Syntax
radosgw-admin zonegroup get --rgw-zonegroup=ZONEGROUP_NAME > zonegroup.json
Example
[ceph: root@host01 /]# radosgw-admin zonegroup get --rgw-zonegroup=us > zonegroup.json
Take a back-up of the JSON file:
Example
[ceph: root@host01 /]# cp zonegroup.json zonegroup.backup.json
View the zonegroup.json file:
Example
[ceph: root@host01 /]# cat zonegroup.json
{
    "id": "d523b624-2fa5-4412-92d5-a739245f0451",
    "name": "asia",
    "api_name": "asia",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "d2a3b90f-f4f3-4d38-ac1f-6463a2b93c32",
    "zones": [
        {
            "id": "d2a3b90f-f4f3-4d38-ac1f-6463a2b93c32",
            "name": "india",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": [],
            "storage_classes": [
                "STANDARD"
            ]
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "d7e2ad25-1630-4aee-9627-84f24e13017f",
    "sync_policy": {
        "groups": []
    }
}
Update the zonegroup.json file with the new host names:
Example
"hostnames": ["host01", "host02","host03"],
Set the zone group back in the Ceph Object Gateway:
Syntax
radosgw-admin zonegroup set --rgw-zonegroup=ZONEGROUP_NAME --infile=zonegroup.json
Example
[ceph: root@host01 /]# radosgw-admin zonegroup set --rgw-zonegroup=us --infile=zonegroup.json
Update the period:
Example
[ceph: root@host01 /]# radosgw-admin period update --commit
- Restart the Ceph Object Gateway so that the DNS setting takes effect.
Additional Resources
- See The Ceph configuration database section in the Red Hat Ceph Storage Configuration Guide for more details.
4.2. The Beast front-end web server
The Ceph Object Gateway provides Beast, a C/C++ embedded front-end web server. Beast uses the `Boost.Beast` C++ library to parse HTTP, and `Boost.Asio` for asynchronous network I/O.
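For orientation, the following is a minimal sketch of enabling Beast on a plain HTTP endpoint through the Ceph configuration database; the address and port are placeholder values, and the endpoint option is described in the next section:
Example
[ceph: root@host01 /]# ceph config set client.rgw rgw_frontends "beast endpoint=192.168.0.100:8080"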
4.3. Beast configuration options
The following Beast configuration options can be passed to the embedded web server in the Ceph configuration file for the RADOS Gateway. Each option has a default value. If a value is not specified, the default value is empty.
Option | Description | Default |
---|---|---|
endpoint and ssl_endpoint | Sets the listening address in the form address[:port]. | EMPTY |
ssl_certificate | Path to the SSL certificate file used for SSL-enabled endpoints. | EMPTY |
ssl_private_key | Optional path to the private key file used for SSL-enabled endpoints. If one is not given, the file specified by ssl_certificate is used as the private key. | EMPTY |
tcp_nodelay | Performance optimization in some environments. | EMPTY |
Example /etc/ceph/ceph.conf file with Beast options using SSL:
...
[client.rgw.node1]
rgw frontends = beast ssl_endpoint=192.168.0.100:443 ssl_certificate=<path to SSL certificate>
By default, the Beast front end writes an access log line recording all requests processed by the server to the RADOS Gateway log file.
Additional Resources
- See Using the Beast front end for more information.
4.4. Configuring SSL for Beast
You can configure the Beast front-end web server to use the OpenSSL library to provide Transport Layer Security (TLS). To use Secure Socket Layer (SSL) with Beast, you need to obtain a certificate from a Certificate Authority (CA) that matches the hostname of the Ceph Object Gateway node. Beast also requires the secret key, server certificate, and any other CA in a single .pem file.
Prevent unauthorized access to the .pem file, because it contains the secret key hash.
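As a precaution, one minimal sketch of restricting access is to limit the file permissions to the owner; the path below is a placeholder for wherever the .pem file is stored on the gateway node:
Example
[root@host01 ~]# chmod 600 /etc/ceph/private/rgw.pem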
Red Hat recommends obtaining a certificate from a CA with the Subject Alternative Name (SAN) field, and a wildcard for use with S3-style subdomains.
Red Hat recommends only using SSL with the Beast front-end web server for small to medium-sized test environments. For production environments, you must use HAProxy and keepalived to terminate the SSL connection at the HAProxy.
If the Ceph Object Gateway acts as a client and a custom certificate is used on the server, you can inject a custom CA by importing it on the node and then mapping the /etc/pki directory into the container with the extra_container_args parameter in the Ceph Object Gateway specification file.
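A minimal sketch of such a mapping in a Ceph Object Gateway specification file, modeled on the extra_container_args examples in the D3N cache section later in this chapter; the service ID and host name are placeholders:
Example
service_type: rgw
service_id: foo
placement:
  hosts:
    - host01
extra_container_args:
  - "-v"
  - "/etc/pki:/etc/pki"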
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- Installation of the Ceph Object Gateway software package.
- Installation of the OpenSSL software package.
- Root-level access to the Ceph Object Gateway node.
Procedure
Create a new file named rgw.yml in the current directory:
Example
[ceph: root@host01 /]# touch rgw.yml
Open the rgw.yml file for editing, and customize it for the environment:
Syntax
service_type: rgw
service_id: SERVICE_ID
service_name: SERVICE_NAME
placement:
  hosts:
    - HOST_NAME
spec:
  ssl: true
  rgw_frontend_ssl_certificate: CERT_HASH
Example
service_type: rgw
service_id: foo
service_name: rgw.foo
placement:
  hosts:
    - host01
spec:
  ssl: true
  rgw_frontend_ssl_certificate: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpAIBAAKCAQEA+Cf4l9OagD6x67HhdCy4Asqw89Zz9ZuGbH50/7ltIMQpJJU0
    gu9ObNtIoC0zabJ7n1jujueYgIpOqGnhRSvsGJiEkgN81NLQ9rqAVaGpadjrNLcM
    bpgqJCZj0vzzmtFBCtenpb5l/EccMFcAydGtGeLP33SaWiZ4Rne56GBInk6SATI/
    JSKweGD1y5GiAWipBR4C74HiAW9q6hCOuSdp/2WQxWT3T1j2sjlqxkHdtInUtwOm
    j5Ism276IndeQ9hR3reFR8PJnKIPx73oTBQ7p9CMR1J4ucq9Ny0J12wQYT00fmJp
    -----END RSA PRIVATE KEY-----
    -----BEGIN CERTIFICATE-----
    MIIEBTCCAu2gAwIBAgIUGfYFsj8HyA9Zv2l600hxzT8+gG4wDQYJKoZIhvcNAQEL
    BQAwgYkxCzAJBgNVBAYTAklOMQwwCgYDVQQIDANLQVIxDDAKBgNVBAcMA0JMUjEM
    MAoGA1UECgwDUkhUMQswCQYDVQQLDAJCVTEkMCIGA1UEAwwbY2VwaC1zc2wtcmhj
    czUtOGRjeHY2LW5vZGU1MR0wGwYJKoZIhvcNAQkBFg5hYmNAcmVkaGF0LmNvbTAe
    -----END CERTIFICATE-----
Deploy the Ceph Object Gateway using the service specification file:
Example
[ceph: root@host01 /]# ceph orch apply -i rgw.yml
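After the deployment completes, one quick way to confirm that the gateway answers on the SSL endpoint is to query it with curl; the host name and port below are placeholders, and -k skips certificate validation, which is only appropriate for test certificates:
Example
[root@host01 ~]# curl -k https://host01.example.com:443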
4.5. D3N data cache
Datacenter-Data-Delivery Network (D3N) uses high-speed storage, such as NVMe, to cache datasets on the access side. Such caching allows big data jobs to use the compute and fast-storage resources available on each Rados Gateway node at the edge. The Rados Gateways act as cache servers for the back-end object store (OSDs), storing data locally for reuse.
Each time the Rados Gateway is restarted, the content of the cache directory is purged.
4.5.1. Adding D3N cache directory
To enable the D3N cache on RGW, you also need to include the D3N cache directory in the podman unit.run.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Ceph Object Gateway installed.
- Root-level access to the admin node.
- A fast NVMe drive in each RGW node to serve as the local cache storage.
Procedure
Create a mount point for the NVMe drive.
Syntax
mkfs.ext4 nvme-drive-path
Example
[ceph: root@host01 /]# mkfs.ext4 /dev/nvme0n1
[ceph: root@host01 /]# mount /dev/nvme0n1 /mnt/nvme0n1/
Create a cache directory path.
Syntax
mkdir <nvme-mount-path>/cache-directory-name
Example
[ceph: root@host01 /]# mkdir /mnt/nvme0n1/rgw_datacache
Provide a+rwx permission to nvme-mount-path and rgw_d3n_l1_datacache_persistent_path.
Syntax
chmod a+rwx nvme-mount-path ; chmod a+rwx rgw_d3n_l1_datacache_persistent_path
Example
[ceph: root@host01 /]# chmod a+rwx /mnt/nvme0n1 ; chmod a+rwx /mnt/nvme0n1/rgw_datacache/
Create or modify an RGW specification file with extra_container_args to add rgw_d3n_l1_datacache_persistent_path into the podman unit.run.
Syntax
extra_container_args:
  - "-v"
  - "rgw_d3n_l1_datacache_persistent_path:rgw_d3n_l1_datacache_persistent_path"
Example
[ceph: root@host01 /]# cat rgw-spec.yml
service_type: rgw
service_id: rgw.test
placement:
  hosts:
    - host1
    - host2
extra_container_args:
  - "-v"
  - "/mnt/nvme0n1/rgw_datacache/:/mnt/nvme0n1/rgw_datacache/"
Note: If there are multiple instances of RGW on a single host, then a separate rgw_d3n_l1_datacache_persistent_path has to be created for each instance, and each path must be added in extra_container_args.
Example: For two instances of RGW on each host, create two separate cache directories under rgw_d3n_l1_datacache_persistent_path: /mnt/nvme0n1/rgw_datacache/rgw1 and /mnt/nvme0n1/rgw_datacache/rgw2
Example for extra_container_args in the RGW specification file:
extra_container_args:
  - "-v"
  - "/mnt/nvme0n1/rgw_datacache/rgw1/:/mnt/nvme0n1/rgw_datacache/rgw1/"
  - "-v"
  - "/mnt/nvme0n1/rgw_datacache/rgw2/:/mnt/nvme0n1/rgw_datacache/rgw2/"
Example for rgw-spec.yml:
[ceph: root@host01 /]# cat rgw-spec.yml
service_type: rgw
service_id: rgw.test
placement:
  hosts:
    - host1
    - host2
  count_per_host: 2
extra_container_args:
  - "-v"
  - "/mnt/nvme0n1/rgw_datacache/rgw1/:/mnt/nvme0n1/rgw_datacache/rgw1/"
  - "-v"
  - "/mnt/nvme0n1/rgw_datacache/rgw2/:/mnt/nvme0n1/rgw_datacache/rgw2/"
Redeploy the RGW service:
Example
[ceph: root@host01 /]# ceph orch apply -i rgw-spec.yml
4.5.2. Configuring D3N on the Rados Gateway
You can configure the D3N data cache on an existing RGW to improve the performance of big-data jobs running in Red Hat Ceph Storage clusters.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Ceph Object Gateway installed.
- Root-level access to the admin node.
- A fast NVMe to serve as the cache storage.
Adding the required D3N-related configuration
To enable D3N on an existing RGW, set the following configuration for each Rados Gateway client (a combined example follows this list):
Syntax
ceph config set <client.rgw> <CONF-OPTION> <VALUE>
- rgw_d3n_l1_local_datacache_enabled=true
- rgw_d3n_l1_datacache_persistent_path=path to the cache directory
  Example
  rgw_d3n_l1_datacache_persistent_path=/mnt/nvme/rgw_datacache/
- rgw_d3n_l1_datacache_size=max_size_of_cache_in_bytes
  Example
  rgw_d3n_l1_datacache_size=10737418240
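Taken together, the settings above can be applied with the ceph config set syntax shown earlier in this section; the cache path and size reuse the example values:
Example
[ceph: root@host01 /]# ceph config set client.rgw rgw_d3n_l1_local_datacache_enabled true
[ceph: root@host01 /]# ceph config set client.rgw rgw_d3n_l1_datacache_persistent_path /mnt/nvme/rgw_datacache/
[ceph: root@host01 /]# ceph config set client.rgw rgw_d3n_l1_datacache_size 10737418240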
Example procedure
Create a test object:
Note: The test object needs to be larger than 4 MB to be cached.
Example
[ceph: root@host01 /]# fallocate -l 1G ./1G.dat
[ceph: root@host01 /]# s3cmd mb s3://bkt
[ceph: root@host01 /]# s3cmd put ./1G.dat s3://bkt
Perform a GET of the object:
Example
[ceph: root@host01 /]# s3cmd get s3://bkt/1G.dat /dev/shm/1G_get.dat
download: 's3://bkt/1G.dat' -> './1G_get.dat' [1 of 1]
1073741824 of 1073741824 100% in 13s 73.94 MB/s done
Verify cache creation. The cache is created with a name consisting of the object key name within the configured rgw_d3n_l1_datacache_persistent_path.
Example
[ceph: root@host01 /]# ls -lh /mnt/nvme/rgw_datacache
-rw-r--r--. 1 ceph ceph 1.0M Jun 2 06:18 cc7f967c-0021-43b2-9fdf-23858e868663.615391.1_shadow.ZCiCtMWeu_19wb100JIEZ-o4tv2IyA_1
Once the cache is created for an object, the next GET operation for that object is served from the cache, resulting in faster access.
Example
[ceph: root@host01 /]# s3cmd get s3://bkt/1G.dat /dev/shm/1G_get.dat
download: 's3://bkt/1G.dat' -> './1G_get.dat' [1 of 1]
1073741824 of 1073741824 100% in 6s 155.07 MB/s done
In the above example, to demonstrate the cache acceleration, we are writing to a RAM drive (/dev/shm).
Additional Resources
- See the Ceph subsystems default logging level values section in the Red Hat Ceph Storage Troubleshooting Guide for additional details.
- See the Understanding Ceph logs section in the Red Hat Ceph Storage Troubleshooting Guide for additional details.
4.6. Adjusting logging and debugging output
Once you finish the setup procedure, check your logging output to ensure it meets your needs. By default, the Ceph daemons log to journald, and you can view the logs using the journalctl command. Alternatively, you can also have the Ceph daemons log to files, which are located under the /var/log/ceph/CEPH_CLUSTER_ID/ directory.
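For example, on a cephadm deployment you can follow the gateway logs in journald or list the per-cluster log files; the unit name below assumes the usual ceph-CLUSTER_ID@DAEMON_NAME naming used by cephadm, and both values are placeholders:
Example
[root@host01 ~]# journalctl -u ceph-CEPH_CLUSTER_ID@rgw.DAEMON_NAME.service
[root@host01 ~]# ls /var/log/ceph/CEPH_CLUSTER_ID/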
Verbose logging can generate over 1 GB of data per hour. This type of logging can potentially fill up the operating system’s disk, causing the operating system to stop functioning.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Installation of the Ceph Object Gateway software.
Procedure
Set the following parameter to increase the Ceph Object Gateway logging output:
Syntax
ceph config set client.rgw debug_rgw VALUE
Example
[ceph: root@host01 /]# ceph config set client.rgw debug_rgw 20
You can also modify these settings at runtime:
Syntax
ceph --admin-daemon /var/run/ceph/ceph-client.rgw.NAME.asok config set debug_rgw VALUE
Example
[ceph: root@host01 /]# ceph --admin-daemon /var/run/ceph/ceph-client.rgw.rgw.asok config set debug_rgw 20
Optionally, you can configure the Ceph daemons to log their output to files. Set the log_to_file and mon_cluster_log_to_file options to true:
Example
[ceph: root@host01 /]# ceph config set global log_to_file true
[ceph: root@host01 /]# ceph config set global mon_cluster_log_to_file true
Additional Resources
- See the Ceph debugging and logging configuration section of the Red Hat Ceph Storage Configuration Guide for more details.
4.7. Static web hosting
As a storage administrator, you can configure the Ceph Object Gateway to host static websites in S3 buckets. Traditional website hosting involves configuring a web server for each website, which can use resources inefficiently when content does not change dynamically, for example, sites that do not use server-side services like PHP, servlets, databases, nodejs, and the like. Hosting static sites in S3 buckets is substantially more economical than setting up virtual machines with web servers for each site.
Prerequisites
- A healthy, running Red Hat Ceph Storage cluster.
4.7.1. Static web hosting assumptions
Static web hosting requires at least one running Red Hat Ceph Storage cluster, and at least two Ceph Object Gateway instances for the static web sites. Red Hat assumes that each zone will have multiple gateway instances using a load balancer, such as HAProxy and keepalived.
Red Hat DOES NOT support using a Ceph Object Gateway instance to deploy both standard S3/Swift APIs and static web hosting simultaneously.
Additional Resources
- See the High availability service section in the Red Hat Ceph Storage Object Gateway Guide for additional details on using high availability.
4.7.2. Static web hosting requirements
Static web hosting functionality uses its own API, so configuring a gateway to use static web sites in S3 buckets requires the following:
- S3 static web hosting uses Ceph Object Gateway instances that are separate and distinct from instances used for standard S3/Swift API use cases.
- Gateway instances hosting S3 static web sites should have separate, non-overlapping domain names from the standard S3/Swift API gateway instances.
- Gateway instances hosting S3 static web sites should use separate public-facing IP addresses from the standard S3/Swift API gateway instances.
- Gateway instances hosting S3 static web sites should load balance, and if necessary terminate SSL, using HAProxy/keepalived.
4.7.3. Static web hosting gateway setup
To enable a Ceph Object Gateway for static web hosting, set the following options:
Syntax
ceph config set client.rgw OPTION VALUE
Example
[ceph: root@host01 /]# ceph config set client.rgw rgw_enable_static_website true
[ceph: root@host01 /]# ceph config set client.rgw rgw_enable_apis s3,s3website
[ceph: root@host01 /]# ceph config set client.rgw rgw_dns_name objects-zonegroup.example.com
[ceph: root@host01 /]# ceph config set client.rgw rgw_dns_s3website_name objects-website-zonegroup.example.com
[ceph: root@host01 /]# ceph config set client.rgw rgw_resolve_cname true
The rgw_enable_static_website setting MUST be true. The rgw_enable_apis setting MUST enable the s3website API. The rgw_dns_name and rgw_dns_s3website_name settings must provide their fully qualified domains. If the site uses canonical name extensions, then set the rgw_resolve_cname option to true.
The FQDNs of rgw_dns_name and rgw_dns_s3website_name MUST NOT overlap.
4.7.4. Static web hosting DNS configuration
The following is an example of assumed DNS settings, where the first two lines specify the domains of the gateway instance using a standard S3 interface and point to the IPv4 and IPv6 addresses. The third line provides a wildcard CNAME setting for S3 buckets using canonical name extensions. The fourth and fifth lines specify the domains for the gateway instance using the S3 website interface and point to their IPv4 and IPv6 addresses.
objects-zonegroup.domain.com. IN    A 192.0.2.10
objects-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:10
*.objects-zonegroup.domain.com. IN CNAME objects-zonegroup.domain.com.
objects-website-zonegroup.domain.com. IN    A 192.0.2.20
objects-website-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:20
The IP addresses in the first two lines differ from the IP addresses in the fourth and fifth lines.
If using Ceph Object Gateway in a multi-site configuration, consider using a routing solution to route traffic to the gateway closest to the client.
The Amazon Web Service (AWS) requires static web host buckets to match the host name. Ceph provides a few different ways to configure the DNS, and HTTPS will work if the proxy has a matching certificate.
Hostname to a Bucket on a Subdomain
To use AWS-style S3 subdomains, use a wildcard in the DNS entry which can redirect requests to any bucket. A DNS entry might look like the following:
*.objects-website-zonegroup.domain.com. IN CNAME objects-website-zonegroup.domain.com.
Access the bucket, where the bucket name is bucket1, in the following manner:
http://bucket1.objects-website-zonegroup.domain.com
Hostname to Non-Matching Bucket
Ceph supports mapping domain names to buckets without including the bucket name in the request, which is unique to Ceph Object Gateway. To use a domain name to access a bucket, map the domain name to the bucket name. A DNS entry might look like the following:
www.example.com. IN CNAME bucket2.objects-website-zonegroup.domain.com.
Where the bucket name is bucket2.
Access the bucket in the following manner:
http://www.example.com
Hostname to Long Bucket with CNAME
AWS typically requires the bucket name to match the domain name. To configure the DNS for static web hosting using CNAME, the DNS entry might look like the following:
www.example.com. IN CNAME www.example.com.objects-website-zonegroup.domain.com.
Access the bucket in the following manner:
http://www.example.com
Hostname to Long Bucket without CNAME
If the DNS name contains other non-CNAME records, such as SOA, NS, MX, or TXT, the DNS record must map the domain name directly to the IP address. For example:
www.example.com. IN    A 192.0.2.20
www.example.com. IN AAAA 2001:DB8::192:0:2:20
Access the bucket in the following manner:
http://www.example.com
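With any of the DNS patterns above in place, a quick way to check that name resolution and the website endpoint both work is to request the site's index page from a client; the domain is the placeholder used in these examples:
Example
[root@client ~]# curl http://www.example.com/index.html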
4.7.5. Creating a static web hosting site
To create a static website, perform the following steps:
- Create an S3 bucket. The bucket name might be the same as the website's domain name. For example, mysite.com may have a bucket name of mysite.com. This is required for AWS, but it is NOT required for Ceph. See the Static web hosting DNS configuration section in the Red Hat Ceph Storage Object Gateway Guide for details.
- Upload the static website content to the bucket. Contents may include HTML, CSS, client-side JavaScript, images, audio/video content, and other downloadable files. A website MUST have an index.html file and might have an error.html file.
file. - Verify the website’s contents. At this point, only the creator of the bucket has access to the contents.
- Set permissions on the files so that they are publicly readable.
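As an illustration, the steps above might look like the following with s3cmd, assuming a bucket named mysite.com and locally prepared index.html and error.html files; the --acl-public option makes the uploaded objects publicly readable:
Example
[root@client ~]# s3cmd mb s3://mysite.com
[root@client ~]# s3cmd put index.html error.html s3://mysite.com/ --acl-public
[root@client ~]# s3cmd ls s3://mysite.com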
4.8. High availability for the Ceph Object Gateway
As a storage administrator, you can assign many instances of the Ceph Object Gateway to a single zone. This allows you to scale out as the load increases, that is, within the same zone group and zone; however, you do not need a federated architecture to use a highly available proxy. Since each Ceph Object Gateway daemon has its own IP address, you can use the ingress service to balance the load across many Ceph Object Gateway daemons or nodes. The ingress service manages HAProxy and keepalived daemons for the Ceph Object Gateway environment. You can also terminate HTTPS traffic at the HAProxy server, and use HTTP between the HAProxy server and the Beast front-end web server instances for the Ceph Object Gateway.
Prerequisites
- At least two Ceph Object Gateway daemons running on different hosts.
- Capacity for at least two instances of the ingress service running on different hosts.
4.8.1. High availability service
The ingress service provides a highly available endpoint for the Ceph Object Gateway. The ingress service can be deployed to any number of hosts as needed. Red Hat recommends having at least two supported Red Hat Enterprise Linux servers, each server configured with the ingress service. You can run a high availability (HA) service with a minimum set of configuration options. The Ceph orchestrator deploys the ingress service, which manages the haproxy and keepalived daemons, by providing load balancing with a floating virtual IP address. The active haproxy distributes all Ceph Object Gateway requests to all the available Ceph Object Gateway daemons.
A virtual IP address is automatically configured on one of the ingress hosts at a time, known as the primary host. The Ceph orchestrator selects the first network interface based on existing IP addresses that are configured as part of the same subnet. In cases where the virtual IP address does not belong to the same subnet, you can define a list of subnets for the Ceph orchestrator to match with existing IP addresses. If the keepalived daemon and the active haproxy are not responding on the primary host, then the virtual IP address moves to a backup host. This backup host becomes the new primary host.
Currently, you cannot configure a virtual IP address on a network interface that does not have a configured IP address.
To use the secure socket layer (SSL), SSL must be terminated by the ingress service and not at the Ceph Object Gateway.
4.8.2. Configuring high availability for the Ceph Object Gateway
To configure high availability (HA) for the Ceph Object Gateway, you write a YAML configuration file, and the Ceph orchestrator does the installation, configuration, and management of the ingress service. The ingress service uses the haproxy and keepalived daemons to provide high availability for the Ceph Object Gateway.
With the Red Hat Ceph Storage 8.0 release, you can deploy an ingress service with the Ceph Object Gateway as the backend by setting the use_tcp_mode_over_rgw option to true in the spec section of the ingress specification.
Prerequisites
- A minimum of two hosts running Red Hat Enterprise Linux 9 or higher for installing the ingress service on.
- A healthy running Red Hat Ceph Storage cluster.
- A minimum of two Ceph Object Gateway daemons running on different hosts.
- Root-level access to the host running the ingress service.
- If using a firewall, then open port 80 for HTTP and port 443 for HTTPS traffic.
Procedure
Create a new ingress.yaml file:
Example
[root@host01 ~]# touch ingress.yaml
Open the ingress.yaml file for editing. Add the following options, and add values applicable to the environment:
Syntax
service_type: ingress                           1
service_id: SERVICE_ID                          2
placement:                                      3
  hosts:
    - HOST1
    - HOST2
    - HOST3
spec:
  backend_service: SERVICE_ID
  virtual_ip: IP_ADDRESS/CIDR                   4
  frontend_port: INTEGER                        5
  monitor_port: INTEGER                         6
  virtual_interface_networks:                   7
    - IP_ADDRESS/CIDR
  ssl_cert: |                                   8
1. Must be set to ingress.
2. Must match the existing Ceph Object Gateway service name.
3. Where to deploy the haproxy and keepalived containers.
4. The virtual IP address where the ingress service is available.
5. The port to access the ingress service.
6. The port to access the haproxy load balancer status.
7. Optional list of available subnets.
8. Optional SSL certificate and private key.
Example of providing an SSL cert
service_type: ingress
service_id: rgw.foo
placement:
  hosts:
    - host01.example.com
    - host02.example.com
    - host03.example.com
spec:
  backend_service: rgw.foo
  virtual_ip: 192.168.1.2/24
  frontend_port: 8080
  monitor_port: 1967
  virtual_interface_networks:
    - 10.10.0.0/16
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    MIIEpAIBAAKCAQEA+Cf4l9OagD6x67HhdCy4Asqw89Zz9ZuGbH50/7ltIMQpJJU0
    gu9ObNtIoC0zabJ7n1jujueYgIpOqGnhRSvsGJiEkgN81NLQ9rqAVaGpadjrNLcM
    bpgqJCZj0vzzmtFBCtenpb5l/EccMFcAydGtGeLP33SaWiZ4Rne56GBInk6SATI/
    JSKweGD1y5GiAWipBR4C74HiAW9q6hCOuSdp/2WQxWT3T1j2sjlqxkHdtInUtwOm
    j5Ism276IndeQ9hR3reFR8PJnKIPx73oTBQ7p9CMR1J4ucq9Ny0J12wQYT00fmJp
    -----END CERTIFICATE-----
    -----BEGIN PRIVATE KEY-----
    MIIEBTCCAu2gAwIBAgIUGfYFsj8HyA9Zv2l600hxzT8+gG4wDQYJKoZIhvcNAQEL
    BQAwgYkxCzAJBgNVBAYTAklOMQwwCgYDVQQIDANLQVIxDDAKBgNVBAcMA0JMUjEM
    MAoGA1UECgwDUkhUMQswCQYDVQQLDAJCVTEkMCIGA1UEAwwbY2VwaC1zc2wtcmhj
    czUtOGRjeHY2LW5vZGU1MR0wGwYJKoZIhvcNAQkBFg5hYmNAcmVkaGF0LmNvbTAe
    -----END PRIVATE KEY-----
Example of not providing an SSL cert
service_type: ingress
service_id: rgw.ssl                     # adjust to match your existing RGW service
placement:
  hosts:
    - hostname1
    - hostname2
spec:
  backend_service: rgw.rgw.ssl.ceph13   # adjust to match your existing RGW service
  virtual_ip: IP_ADDRESS/CIDR           # ex: 192.168.20.1/24
  frontend_port: INTEGER                # ex: 443
  monitor_port: INTEGER                 # ex: 1969
  use_tcp_mode_over_rgw: True
Launch the Cephadm shell:
Example
[root@host01 ~]# cephadm shell --mount ingress.yaml:/var/lib/ceph/radosgw/ingress.yaml
Configure the latest haproxy and keepalived images:
Syntax
ceph config set mgr mgr/cephadm/container_image_haproxy HAPROXY_IMAGE_ID
ceph config set mgr mgr/cephadm/container_image_keepalived KEEPALIVED_IMAGE_ID
Red Hat Enterprise Linux 9
[ceph: root@host01 /]# ceph config set mgr mgr/cephadm/container_image_haproxy registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest
[ceph: root@host01 /]# ceph config set mgr mgr/cephadm/container_image_keepalived registry.redhat.io/rhceph/keepalived-rhel9:latest
Install and configure the new ingress service using the Ceph orchestrator:
Example
[ceph: root@host01 /]# ceph orch apply -i /var/lib/ceph/radosgw/ingress.yaml
After the Ceph orchestrator completes, verify the HA configuration.
On the host running the ingress service, check that the virtual IP address appears:
Example
[root@host01 ~]# ip addr show
Try reaching the Ceph Object Gateway from a Ceph client:
Syntax
wget HOST_NAME
Example
[root@client ~]# wget host01.example.com
If this returns an index.html with content similar to the example below, then the HA configuration for the Ceph Object Gateway is working properly.
Example
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Owner>
        <ID>anonymous</ID>
        <DisplayName></DisplayName>
    </Owner>
    <Buckets>
    </Buckets>
</ListAllMyBucketsResult>
Additional resources
- See the Performing a Standard RHEL Installation Guide for more details.
- See the High availability service section in the Red Hat Ceph Storage Object Gateway Guide for more details.
4.9. Configuring NFS with Ceph Object Gateway
NFS with the Ceph Object Storage backend is not a comprehensive NFS service. Its primary purpose is to assist in the seamless migration of legacy applications from file storage to Ceph object storage by ingesting data through NFS file systems. The data is subsequently accessible through the S3 endpoint as an S3 bucket. For a complete NFS solution with features such as high availability and transparent failover, use NFS with a CephFS backend.
The NFS service is deployed with the Ceph Object Storage backend using Cephadm. The configuration for NFS is stored in the nfs-ganesha pool, and exports are managed through command-line interface (CLI) commands and through the Ceph dashboard. See Deploying NFS service with Ceph Object Storage backend, Exporting the namespace to NFS-Ganesha, and Managing NFS Ganesha exports for more information.
Ceph Object Gateway namespaces can be exported over the file-based NFSv4 protocols, alongside traditional HTTP access protocols (S3 and Swift). In particular, the Ceph Object Gateway can now be configured to provide file-based access when embedded in the NFS-Ganesha NFS server.
Only the NFSv4 protocol is supported when using a Cephadm-based or Rook-based deployment.
Namespace Conventions
NFS conforms to Amazon Web Services (AWS) hierarchical namespace conventions, which map UNIX-style path names onto S3 buckets and objects.
The top level of the attached namespace consists of S3 buckets, represented as NFS directories. Files and directories, subordinate to buckets, are each represented as objects, following S3 prefix and delimiter conventions. / is the only supported path delimiter.
For example, if an NFS client has mounted an RGW namespace at /nfs, then a file /nfs/mybucket/www/index.html in the NFS namespace corresponds to an RGW object www/index.html in a bucket/container mybucket.
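A brief sketch of that mapping in practice, assuming an NFS-Ganesha export mounted at /nfs with the -osync option described later in this section, and a bucket mybucket that already contains the object www/index.html; the server name and pseudo-path are placeholders:
Example
[root@client ~]# mount -t nfs NFS_GANESHA_HOST:/PSEUDO_PATH -osync /nfs
[root@client ~]# ls /nfs/mybucket/www/
index.html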
Limitations on supported operations
The Ceph Object Storage NFS interface supports most operations on files and directories, with the following restrictions:
- Links, including symlinks, are not supported.
- NFS ACLs are not supported.
  - Unix user and group ownership and permissions are supported.
- Directories may not be moved/renamed.
  - Files may be moved between directories.
- Only full, sequential write I/O is supported.
  - Write operations are constrained to be uploads.
  - Many typical I/O operations, such as editing files in place, will fail as they perform non-sequential stores.
  - Some file utilities writing sequentially, for example, some versions of GNU tar, may fail due to infrequent non-sequential stores.
  - When mounting via NFS, sequential application I/O can generally be constrained to be written sequentially to the NFS server via a synchronous mount option, for example, -osync in Linux.
  - NFS clients which cannot mount synchronously, for example, MS Windows, will not be able to upload files.
4.9.1. Exporting the namespace to NFS-Ganesha
To configure new NFS Ganesha exports for use with the Ceph Object Gateway, you have to use the Red Hat Ceph Storage Dashboard. See the Managing NFS Ganesha exports on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details.
For existing NFS environments using the Ceph Object Gateway, upgrading from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5 is not supported at this time.
Red Hat supports only NFS version 4 exports using the Ceph Object Gateway.
You can create user-level NFS Ganesha exports by using the command-line interface (CLI) only.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- A user created. For more information, see Create a user.
Procedure
Log in to the Cephadm shell.
Example
[root@host01 ~]# cephadm shell
Create the user-level export in the root directory.
Syntax
ceph nfs export create rgw --cluster-id NFS_CLUSTER_NAME --pseudo-path PATH_FROM_ROOT --user-id USER_ID
Example
[ceph:root@host01 /]# ceph nfs export create rgw --cluster-id cluster1 --pseudo-path root/testnfs1/ --user-id nfsuser
Mount the NFS.
Syntax
mount -t nfs IP_ADDRESS:PATH_FROM_ROOT -osync MOUNT_POINT
Example
[ceph:root@host01 /]# mount -t nfs 10.0.209.0:/root/testnfs1 -osync /mnt/mount1
For large uploads greater than 200 GB, mounting with -osync might affect the input/output operations. Use S3 with multipart upload for such objects.
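For reference, a sketch of such an upload with s3cmd, which performs multipart uploads for large files; the bucket name, file name, and chunk size are placeholders:
Example
[root@client ~]# s3cmd put ./largefile.dat s3://bkt/ --multipart-chunk-size-mb=128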
If you run setattr on a path representing a bucket, the attributes are silently not set.