Chapter 2. Configure Ceph Object Gateway
Once you have installed the Ceph Object Gateway packages, the next step is to configure your Ceph Object Gateway. There are two approaches:
- Simple: A simple Ceph Object Gateway configuration implies that you are running a Ceph Object Storage service in a single data center, so you can configure the Ceph Object Gateway without regard to regions and zones.
- Federated: A federated Ceph Object Gateway configuration implies that you are running a Ceph Object Storage service in a geographically distributed manner for fault tolerance and failover. This involves configuring your Ceph Object Gateway instances with regions and zones.
In this guide we shall proceed with a simple Ceph Object Gateway configuration.
The Ceph Object Gateway is a client of the Ceph Storage Cluster. As a Ceph Storage Cluster client, it requires:
- A name for the gateway instance. We use gateway in this guide.
- A storage cluster user name with appropriate permissions in a keyring.
- Pools to store its data.
- A data directory for the gateway instance.
- An instance entry in the Ceph Configuration file.
- A configuration file for the web server to interact with FastCGI.
The configuration steps are as follows:
2.1. Create a User and Keyring
Each instance must have a user name and key to communicate with a Ceph Storage Cluster. In the following steps, we use an admin node to create a keyring. Then, we create a client user name and key. Next, we add the key to the Ceph Storage Cluster. Finally, we distribute the keyring to the node containing the gateway instance, i.e., the gateway host.
Monitor Key CAPS
When you provide CAPS to the key, you MUST provide read capability. However, you have the option of providing write capability for the monitor. This is an important choice. If you provide write capability to the key, the Ceph Object Gateway will have the ability to create pools automatically; however, it will create pools with either the default number of placement groups (not ideal) or the number of placement groups you specified in your Ceph configuration file. If you allow the Ceph Object Gateway to create pools automatically, ensure that you have reasonable defaults for the number of placement groups first.
Execute the following steps on the admin node
of your cluster:
Create a keyring for the gateway:
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring
Generate a Ceph Object Gateway user name and key for each instance. For exemplary purposes, we will use the name gateway after client.radosgw:
sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key
Add capabilities to the key:
sudo ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
Once you have created a keyring and key to enable the Ceph Object Gateway with access to the Ceph Storage Cluster, add the key to your Ceph Storage Cluster. For example:
sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring
Distribute the keyring to the gateway host:
sudo scp /etc/ceph/ceph.client.radosgw.keyring ceph@{hostname}:/home/ceph
ssh {hostname}
sudo mv ceph.client.radosgw.keyring /etc/ceph/ceph.client.radosgw.keyring
Note: The fifth step is optional if the admin node is the gateway host.
2.2. Create Pools
Ceph Object Gateways require Ceph Storage Cluster pools to store specific gateway data. If the user you created in the preceding section has permissions, the gateway will create the pools automatically. However, you should ensure that you have set an appropriate default number of placement groups per pool in your Ceph configuration file, or create the pools manually with an appropriate number of placement groups. See Ceph Placement Groups (PGs) per Pool Calculator for details.
The Ceph Object Gateway requires multiple pools, and they can use the same CRUSH ruleset. The mon_pg_warn_max_per_osd setting warns you if you assign too many placement groups to a pool (300 by default). You may adjust the value to suit your needs and the capabilities of your hardware, where n is the maximum number of PGs per OSD:
mon_pg_warn_max_per_osd = n
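When sizing pg-num for a pool, a commonly cited rule of thumb (an assumption here, not part of this guide; consult the PG calculator for authoritative values) is (number of OSDs × 100) / replica count, rounded up to the next power of two. A small shell sketch of that arithmetic:

```shell
# Rule-of-thumb PG count (assumed heuristic; verify with the PG calculator):
# (number of OSDs * 100) / pool replica count, rounded up to a power of two.
osds=40        # hypothetical cluster size
replicas=3     # hypothetical pool replication factor
raw=$(( osds * 100 / replicas ))
pgs=1
while [ "$pgs" -lt "$raw" ]; do pgs=$(( pgs * 2 )); done
echo "$pgs"    # prints 2048 for 40 OSDs with 3x replication
```

Keep the resulting total PGs per OSD below the mon_pg_warn_max_per_osd threshold across all pools.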
When configuring a gateway with the default region and zone, the naming convention for pools typically omits region and zone naming, but you can use any naming convention you prefer. For example:
- .rgw
- .rgw.root
- .rgw.control
- .rgw.gc
- .rgw.buckets
- .rgw.buckets.index
- .log
- .intent-log
- .usage
- .users
- .users.email
- .users.swift
- .users.uid
As noted above, if write permission is given, the Ceph Object Gateway will create pools automatically. To create a pool manually, execute the following:
ceph osd pool create {poolname} {pg-num} {pgp-num}
When adding a large number of pools, it may take some time for your cluster to return to an active + clean state.
When you have completed this step, execute the following to ensure that you have created all of the foregoing pools:
rados lspools
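If you want to compare the listing against the expected pool names programmatically, a sketch follows. The sample listing below is illustrative only; on a live cluster, capture the real output with pools=$(rados lspools):

```shell
# Illustrative check: compare an expected pool list against `rados lspools` output.
# The sample listing stands in for real output; on a cluster, use:
#   pools=$(rados lspools)
pools=".rgw
.rgw.root
.rgw.control
.rgw.gc"
for p in .rgw .rgw.root .rgw.control .rgw.gc .rgw.buckets; do
  if printf '%s\n' "$pools" | grep -Fqx "$p"; then
    echo "ok: $p"
  else
    echo "missing: $p"
  fi
done
```

Any pool reported missing can then be created manually with ceph osd pool create.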
2.3. Add a Gateway Configuration to Ceph
Add the Ceph Object Gateway configuration to your Ceph Configuration file on the admin node. The Ceph Object Gateway configuration requires you to identify the Ceph Object Gateway instance. Then, you must specify the host name where you installed the Ceph Object Gateway daemon, a keyring (for use with cephx), the socket path for FastCGI, and a log file.
For RHEL 6, append the following configuration to /etc/ceph/ceph.conf on your admin node:
For RHEL 7, append the following configuration to /etc/ceph/ceph.conf on your admin node:
Here, {hostname} is the short hostname (output of the command hostname -s) of the node that is going to provide the gateway service, i.e., the gateway host.
The [client.radosgw.gateway] section identifies this portion of the Ceph configuration file as configuring a Ceph Storage Cluster client where the client type is a Ceph Object Gateway (i.e., radosgw).
The last line in the configuration, i.e., rgw print continue = false, is added to avoid issues with PUT operations.
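The exact snippet is not reproduced above; as a rough illustration, the section typically takes a shape like the following. Treat the host name, socket path, and log file location as assumptions drawn from the defaults used elsewhere in this guide, and note that the socket-related lines differ between RHEL 6 and RHEL 7:

```ini
; Illustrative sketch only; substitute your actual gateway host name.
[client.radosgw.gateway]
host = {hostname}
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw print continue = false
```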
Once you finish the setup procedure, if you encounter issues with your configuration, you can add debugging to the [global]
section of your Ceph configuration file and restart the gateway to help troubleshoot any configuration issues. For example:
[global]
#append the following in the global section.
debug ms = 1
debug rgw = 20
2.4. Distribute updated Ceph configuration file
The updated Ceph configuration file needs to be distributed to all Ceph cluster nodes from the admin node
.
It involves the following steps:
Pull the updated ceph.conf from /etc/ceph/ to the root directory of the cluster on the admin node (e.g., the ceph-config directory). The contents of ceph.conf in ceph-config will get overwritten. To do so, execute the following:
ceph-deploy --overwrite-conf config pull {hostname}
Here, {hostname} is the short hostname of the Ceph admin node.
Push the updated ceph.conf file from the admin node to all other nodes in the cluster, including the gateway host:
ceph-deploy --overwrite-conf config push [HOST][HOST...]
Give the hostnames of the other Ceph nodes in place of [HOST][HOST...].
2.5. Copy ceph.client.admin.keyring from admin node to gateway host
As the gateway host can be a different node that is not part of the cluster, the ceph.client.admin.keyring needs to be copied from the admin node to the gateway host. To do so, execute the following on the admin node:
sudo scp /etc/ceph/ceph.client.admin.keyring ceph@{hostname}:/home/ceph
ssh {hostname}
sudo mv ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
The above step need not be executed if the admin node is the gateway host.
2.6. Create Data Directory
Deployment scripts may not create the default Ceph Object Gateway data directory. Create data directories for each instance of a radosgw
daemon (if you haven’t done so already). The host
variables in the Ceph configuration file determine which host runs each instance of a radosgw
daemon. The typical form specifies the radosgw
daemon, the cluster name and the daemon ID.
To create the directory on the gateway host
, execute the following:
sudo mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.gateway
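The directory name above follows the {cluster-name}-{daemon-id} form described earlier; a quick sketch of how it is assembled, using the default cluster name and the daemon ID from this guide:

```shell
# The data directory is named {cluster}-{daemon id}; with the default
# cluster name "ceph" and the daemon ID "radosgw.gateway" used in this guide:
cluster=ceph
daemon_id=radosgw.gateway
datadir="/var/lib/ceph/radosgw/${cluster}-${daemon_id}"
echo "$datadir"    # prints /var/lib/ceph/radosgw/ceph-radosgw.gateway
```

A cluster with a non-default name would therefore use a different directory, e.g. {cluster}-radosgw.gateway.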
2.7. Adjust Socket Directory Permissions
The radosgw
daemon runs as the unprivileged apache
UID, and this UID must have write access to the location where it will write its socket file.
To grant permissions to the default socket location, execute the following on the gateway host
:
sudo chown apache:apache /var/run/ceph
2.8. Start radosgw service
The Ceph Object gateway daemon needs to be started. To do so, execute the following on the gateway host
:
On RHEL 7:
sudo systemctl start ceph-radosgw
sudo chkconfig ceph-radosgw on
On RHEL 6:
sudo service ceph-radosgw start
sudo chkconfig ceph-radosgw on
2.9. Change Log File Owner
The radosgw
daemon runs as the unprivileged apache
UID, but the root
user owns the log file by default. You must change it to the apache
user so that Apache can populate the log file.
sudo chown apache:apache /var/log/radosgw/client.radosgw.gateway.log
2.10. Create a Gateway Configuration file
On the host where you installed the Ceph Object Gateway, i.e., the gateway host, create an rgw.conf file. Place the file in the /etc/httpd/conf.d directory. It is an httpd configuration file which is needed for the radosgw service. This file must be readable by the web server.
Execute the following steps:
Create the file:
sudo vi /etc/httpd/conf.d/rgw.conf
For RHEL 6, add the following contents to the file:
For RHEL 7, add the following contents to the file:
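The exact contents are not reproduced above. As a rough illustration only, a FastCGI-based rgw.conf for Apache commonly takes a shape like the following; the server name, socket path, and script location are assumptions, and the access-control directives differ between Apache 2.2 on RHEL 6 and Apache 2.4 on RHEL 7:

```apacheconf
# Illustrative sketch only; adjust ServerName, paths, and auth directives.
FastCgiExternalServer /var/www/html/s3gw.fcgi -socket /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
<VirtualHost *:80>
    ServerName {fqdn}
    DocumentRoot /var/www/html
    RewriteEngine On
    RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP_AUTHORIZATION},L]
    <Directory /var/www/html>
        Options +ExecCGI
        AllowOverride All
        SetHandler fastcgi-script
        # RHEL 6 (Apache 2.2): Order allow,deny  /  Allow from all
        # RHEL 7 (Apache 2.4): Require all granted
    </Directory>
    AllowEncodedSlashes On
    ErrorLog /var/log/httpd/rgw_error.log
    CustomLog /var/log/httpd/rgw_access.log combined
    ServerSignature Off
</VirtualHost>
```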
2.11. Restart httpd service
The httpd service needs to be restarted to accept the new configuration.
On RHEL 7, execute:
sudo systemctl restart httpd
sudo systemctl enable httpd
On RHEL 6, execute:
sudo service httpd restart
sudo chkconfig httpd on