Chapter 4. Client Installation
Red Hat Ceph Storage supports the following types of Ceph clients:
- Ceph CLI: The Ceph command-line interface (CLI) enables administrators to execute Ceph administrative commands. See Section 4.2, “Ceph Command-line Interface Installation” for information on installing the Ceph CLI.
- Block Device: The Ceph Block Device is a thin-provisioned, resizable block device. See Section 4.3, “Ceph Block Device Installation” for information on installing Ceph Block Devices.
- Object Gateway: The Ceph Object Gateway provides its own user management and Swift- and S3-compliant APIs. See Section 4.4, “Ceph Object Gateway Installation” for information on installing Ceph Object Gateways.
In addition, the ceph-ansible utility provides the ceph-client role that copies the Ceph configuration file and the administration keyring to nodes. See Section 4.1, “Installing the ceph-client role” for details.
To use Ceph clients, you must have a Ceph storage cluster running, preferably in the active + clean state.
In addition, before installing the Ceph clients, ensure that you have performed the tasks listed in Figure 2.1, “Prerequisite Workflow”.
4.1. Installing the ceph-client role
The ceph-client role copies the Ceph configuration file and administration keyring to a node. In addition, you can use this role to create custom pools and clients.
To deploy the ceph-client role by using Ansible, see the Red Hat Ceph Storage 2 Installation Guide for Red Hat Enterprise Linux.
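Typically, the client nodes are listed in a [clients] group in the Ansible inventory file, for example /etc/ansible/hosts, on the administration node before the playbook is run. The following is a minimal sketch; the host name client-node1 is an example, not a value from your cluster:
[clients]
client-node1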
4.2. Ceph Command-line Interface Installation
The Ceph command-line interface (CLI) is provided by the ceph-common package and includes the following utilities:
- ceph
- ceph-authtool
- ceph-dencoder
- rados
To install the Ceph CLI:
- On the client node, enable the Tools repository.
- On the client node, install the ceph-common package:
$ sudo apt-get install ceph-common
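Optionally, verify the installation by checking the installed client version:
$ ceph --version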
- From the initial Monitor node, copy the Ceph configuration file, in this case ceph.conf, and the administration keyring to the client node:
Syntax
# scp /etc/ceph/<cluster_name>.conf <user_name>@<client_host_name>:/etc/ceph/
# scp /etc/ceph/<cluster_name>.client.admin.keyring <user_name>@<client_host_name>:/etc/ceph/
Example
# scp /etc/ceph/ceph.conf root@node1:/etc/ceph/
# scp /etc/ceph/ceph.client.admin.keyring root@node1:/etc/ceph/
Replace <client_host_name> with the host name of the client node.
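Optionally, verify that the client node can reach the storage cluster by checking the cluster status from the client node:
$ sudo ceph -s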
4.3. Ceph Block Device Installation
The following procedure shows how to install and mount a thin-provisioned, resizable Ceph Block Device.
Ceph Block Devices must be deployed on separate nodes from the Ceph Monitor and OSD nodes. Running kernel clients and kernel server daemons on the same node can lead to kernel deadlocks.
Before you start
- Ensure that you have performed the tasks listed in Section 4.2, “Ceph Command-line Interface Installation”.
- If you use Ceph Block Devices as a back end for virtual machines (VMs) that use QEMU, increase the default file descriptor limit. See the Ceph - VM hangs when transferring large amounts of data to RBD disk Knowledgebase article for details.
Installing Ceph Block Devices by Using the Command Line
- Create a Ceph Block Device user named client.rbd with full permissions to files on OSD nodes (osd 'allow rwx') and output the result to a keyring file:
ceph auth get-or-create client.rbd mon 'allow r' osd 'allow rwx pool=<pool_name>' \
-o /etc/ceph/rbd.keyring
Replace <pool_name> with the name of the pool that you want to allow client.rbd to have access to, for example rbd:
$ sudo ceph auth get-or-create \
client.rbd mon 'allow r' osd 'allow rwx pool=rbd' \
-o /etc/ceph/rbd.keyring
See the User Management section in the Red Hat Ceph Storage Administration Guide for more information about creating users.
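If the pool that you want client.rbd to access does not exist yet, create it first. The following is a minimal sketch; the pool name rbd and the placement group count of 128 are example values that you should adjust for your cluster:
$ sudo ceph osd pool create rbd 128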
- Create a block device image:
rbd create <image_name> --size <image_size> --pool <pool_name> \
--name client.rbd --keyring /etc/ceph/rbd.keyring
Specify <image_name>, <image_size>, and <pool_name>, for example:
$ rbd create image1 --size 4096 --pool rbd \
--name client.rbd --keyring /etc/ceph/rbd.keyring
Warning: The default Ceph configuration includes the following Ceph Block Device features:
- layering
- exclusive-lock
- object-map
- deep-flatten
- fast-diff
If you use the kernel RBD (krbd) client, you will not be able to map the block device image because the current kernel version included in Red Hat Enterprise Linux 7.3 does not support object-map, deep-flatten, and fast-diff.
To work around this problem, disable the unsupported features. Use one of the following options to do so:
- Disable the unsupported features dynamically:
rbd feature disable <image_name> <feature_name>
For example:
# rbd feature disable image1 object-map deep-flatten fast-diff
- Use the --image-feature layering option with the rbd create command to enable only layering on newly created block device images, as shown in the sketch after this note.
- Disable the features by default in the Ceph configuration file:
rbd_default_features = 1
This is a known issue; for details, see the Red Hat Ceph Storage 2.2 Release Notes.
All of these features work for users who use the user-space RBD client to access the block device images.
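For example, to create a new image with only the layering feature enabled (the image name image2 is an example):
$ rbd create image2 --size 4096 --pool rbd --image-feature layering \
--name client.rbd --keyring /etc/ceph/rbd.keyring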
- Map the newly created image to the block device:
rbd map <image_name> --pool <pool_name> \
--name client.rbd --keyring /etc/ceph/rbd.keyring
For example:
$ sudo rbd map image1 --pool rbd --name client.rbd \
--keyring /etc/ceph/rbd.keyring
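Optionally, verify the mapping and note the assigned device name, such as /dev/rbd0:
$ sudo rbd showmapped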
- Use the block device by creating a file system:
mkfs.ext4 -m5 /dev/rbd/<pool_name>/<image_name>
Specify the pool name and the image name, for example:
$ sudo mkfs.ext4 -m5 /dev/rbd/rbd/image1
This can take a few moments.
- Mount the newly created file system:
mkdir <mount_directory>
mount /dev/rbd/<pool_name>/<image_name> <mount_directory>
For example:
$ sudo mkdir /mnt/ceph-block-device
$ sudo mount /dev/rbd/rbd/image1 /mnt/ceph-block-device
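Optionally, verify that the file system is mounted and check its size and usage:
$ df -h /mnt/ceph-block-device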
For additional details, see the Red Hat Ceph Storage Block Device Guide.
4.4. Ceph Object Gateway Installation
The Ceph Object Gateway, also known as the RADOS Gateway, is an object storage interface built on top of the librados API to provide applications with a RESTful gateway to Ceph storage clusters.
For more information about the Ceph object gateway, see the Object Gateway Guide for Ubuntu.
There are two ways to install the Ceph object gateway:
- Using the Ansible automation application, see Section 4.4.1, “Installing Ceph Object Gateway by using Ansible” for details
- Using the command-line interface, see Section 4.4.2, “Installing Ceph Object Gateway Manually” for details
4.4.1. Installing Ceph Object Gateway by using Ansible
To deploy the Ceph Object Gateway using Ansible, see the Red Hat Ceph Storage 2 Installation Guide for Red Hat Enterprise Linux.
After the installation of a multi-site cluster is complete, proceed to the Multi-site chapter in the Object Gateway Guide for Ubuntu for details on configuring the cluster for multi-site operation.
4.4.2. Installing Ceph Object Gateway Manually
- Enable the Red Hat Ceph Storage 2 Tools repository. For ISO-based installations, see the ISO installation section.
- On the Object Gateway node, install the radosgw package:
$ sudo apt-get install radosgw
- On the initial Monitor node, perform the following steps:
Update the Ceph configuration file as follows:
[client.rgw.<obj_gw_hostname>]
host = <obj_gw_hostname>
rgw frontends = "civetweb port=80"
rgw dns name = <obj_gw_hostname>.example.com
Where <obj_gw_hostname> is the short host name of the gateway node. To view the short host name, use the hostname -s command.
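For example, for a gateway node with the short host name node1 and the DNS domain example.com (substitute the values for your environment):
[client.rgw.node1]
host = node1
rgw frontends = "civetweb port=80"
rgw dns name = node1.example.com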
Copy the updated configuration file to the new Object Gateway node and all other nodes in the Ceph storage cluster:
Syntax
$ sudo scp /etc/ceph/<cluster_name>.conf <user_name>@<target_host_name>:/etc/ceph
Example
$ sudo scp /etc/ceph/ceph.conf root@node1:/etc/ceph/
Copy the <cluster_name>.client.admin.keyring file to the new Object Gateway node:
Syntax
$ sudo scp /etc/ceph/<cluster_name>.client.admin.keyring <user_name>@<target_host_name>:/etc/ceph/
Example
$ sudo scp /etc/ceph/ceph.client.admin.keyring root@node1:/etc/ceph/
- On the Object Gateway node, create the data directory:
Syntax
$ sudo mkdir -p /var/lib/ceph/radosgw/<cluster_name>-rgw.`hostname -s`
Example
$ sudo mkdir -p /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`
- On the Object Gateway node, add a user and keyring to bootstrap the object gateway:
Syntax
$ sudo ceph auth get-or-create client.rgw.`hostname -s` osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/<cluster_name>-rgw.`hostname -s`/keyring
Example
$ sudo ceph auth get-or-create client.rgw.`hostname -s` osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`/keyring
Important: When you provide capabilities to the gateway key, you must provide the read capability. However, providing the Monitor write capability is optional; if you provide it, the Ceph Object Gateway can create pools automatically.
In such a case, ensure that you specify a reasonable number of placement groups in a pool. Otherwise, the gateway uses the default number, which might not be suitable for your needs. See the Ceph Placement Groups (PGs) per Pool Calculator for details.
- On the Object Gateway node, create the done file:
Syntax
$ sudo touch /var/lib/ceph/radosgw/<cluster_name>-rgw.`hostname -s`/done
Example
$ sudo touch /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`/done
- On the Object Gateway node, change the owner and group permissions:
$ sudo chown -R ceph:ceph /var/lib/ceph/radosgw
$ sudo chown -R ceph:ceph /var/log/ceph
$ sudo chown -R ceph:ceph /var/run/ceph
$ sudo chown -R ceph:ceph /etc/ceph
- For storage clusters with custom names, as root, add the following line:
Syntax
$ sudo echo "CLUSTER=<custom_cluster_name>" >> /etc/default/ceph
Example
$ sudo echo "CLUSTER=test123" >> /etc/default/ceph
- On the Object Gateway node, open TCP port 80:
$ sudo iptables -I INPUT 1 -i <network_interface> -p tcp -s <ip_address>/<netmask> --dport 80 -j ACCEPT
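For example, to accept traffic on the eth0 interface from the 192.168.0.0/24 subnet (substitute the interface and subnet for your environment):
$ sudo iptables -I INPUT 1 -i eth0 -p tcp -s 192.168.0.0/24 --dport 80 -j ACCEPT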
- On the Object Gateway node, start and enable the ceph-radosgw process:
Syntax
$ sudo systemctl enable ceph-radosgw.target
$ sudo systemctl enable ceph-radosgw@rgw.<rgw_hostname>
$ sudo systemctl start ceph-radosgw@rgw.<rgw_hostname>
Example
$ sudo systemctl enable ceph-radosgw.target
$ sudo systemctl enable ceph-radosgw@rgw.node1
$ sudo systemctl start ceph-radosgw@rgw.node1
Once installed, the Ceph Object Gateway automatically creates pools if the write capability is set on the Monitor. See the Pools chapter in the Storage Strategies Guide for information on creating pools manually.
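Optionally, verify that the gateway process is running and responding on port 80; replace node1 with the short host name of the gateway node:
$ sudo systemctl status ceph-radosgw@rgw.node1
$ curl http://node1:80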