Chapter 3. Storage Cluster Installation
Production Ceph storage clusters start with a minimum of three monitor hosts and three OSD nodes containing multiple OSDs.
You can install a Red Hat Ceph Storage cluster by using:

- the Ansible automation application
- the command-line interface
3.1. Installing Red Hat Ceph Storage using Ansible
Previously, Red Hat did not provide the ceph-ansible package for Ubuntu. In Red Hat Ceph Storage version 2 and later, you can use the Ansible automation application to deploy a Ceph cluster from an Ubuntu node. Execute the procedures in Figure 2.1, “Prerequisite Workflow” first.
To add more Monitors or OSDs to an existing storage cluster, see the Red Hat Ceph Storage Administration Guide for details.
Before you start

Install Python on all nodes:

# apt install python
3.1.1. Installing ceph-ansible
Install the ceph-ansible package:

$ sudo apt-get install ceph-ansible

As root, add the Ceph hosts to the /etc/ansible/hosts file. Remember to comment out the example hosts. If the Ceph hosts have sequential naming, consider using a range:

Add Monitor nodes under the [mons] section:

[mons]
<monitor-host-name>
<monitor-host-name>
<monitor-host-name>

Add OSD nodes under the [osds] section:

[osds]
<osd-host-name[1:10]>

Optionally, use the devices parameter to specify the devices that the OSD nodes will use. Use a comma-separated list to list multiple devices:

[osds]
<ceph-host-name> devices="[ '<device_1>', '<device_2>' ]"

For example:

[osds]
ceph-osd-01 devices="[ '/dev/sdb', '/dev/sdc' ]"
ceph-osd-02 devices="[ '/dev/sdb', '/dev/sdc', '/dev/sdd' ]"

When specifying no devices, set the osd_auto_discovery option to true in the osds.yml file. See Section 3.1.4, “Configuring Ceph OSD Settings” for more details.

Using the devices parameter is useful when OSDs use devices with different names or when one of the devices failed on one of the OSDs. See Section A.1, “Ansible Stops Installation Because It Detects Less Devices Than It Expected” for more details.

As the Ansible user, ensure that Ansible can reach the Ceph hosts:

$ ansible all -m ping

Note: See Section 2.7, “Creating an Ansible User with Sudo Access” for more details on creating an Ansible user.
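Putting the inventory pieces above together, a complete /etc/ansible/hosts file might look like the following sketch; all host names are hypothetical:

```ini
# /etc/ansible/hosts -- illustrative inventory; host names are hypothetical
[mons]
ceph-mon-01
ceph-mon-02
ceph-mon-03

[osds]
# The range expands to ceph-osd-01 through ceph-osd-03
ceph-osd-[01:03]
```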
3.1.2. Configuring Ceph Global Settings
Create a directory under the home directory so Ansible can write the keys:

# cd ~
# mkdir ceph-ansible-keys

As root, create a symbolic link to the Ansible group_vars directory in the /etc/ansible/ directory:

# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars

As root, create an all.yml file from the all.yml.sample file and open it for editing:

# cd /etc/ansible/group_vars
# cp all.yml.sample all.yml
# vim all.yml

Uncomment the fetch_directory setting under the GENERAL section. Then, point it to the directory you created in step 1:

fetch_directory: ~/ceph-ansible-keys

Uncomment the ceph_repository_type setting and set it to either cdn or iso, depending on the installation method. There are two approaches:

If the Ceph hosts have connectivity to the Red Hat Content Delivery Network (CDN), uncomment and set the following:

ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: cdn
ceph_rhcs_version: 2

If the Ceph nodes cannot connect to the Red Hat Content Delivery Network (CDN), uncomment the ceph_repository_type setting and set it to iso. This approach is most frequently used in high-security environments:

ceph_repository_type: iso

Then, uncomment the ceph_rhcs_iso_path setting and specify the path to the ISO image:

ceph_rhcs_iso_path: <path>

Example

ceph_rhcs_iso_path: /path/to/ISO_file.iso

For Red Hat Ceph Storage 2.5 and later versions, uncomment and set ceph_rhcs_cdn_debian_repo and ceph_rhcs_cdn_debian_repo_version so that Ansible can automatically enable and access the Ubuntu online repositories:

ceph_rhcs_cdn_debian_repo: <repo-path>
ceph_rhcs_cdn_debian_repo_version: <repo-version>

Example

ceph_rhcs_cdn_debian_repo: https://<login>:<pwd>@rhcs.download.redhat.com
ceph_rhcs_cdn_debian_repo_version: /2-release/

Where <login> is the RHN user login and <pwd> is the RHN user's password.

Set the generate_fsid setting to false:

generate_fsid: false

Note: With generate_fsid set to false, you must specify a value for the File System Identifier (FSID) manually. For example, you can generate a Universally Unique Identifier (UUID) with the uuidgen command-line utility. Once you have generated a UUID, uncomment the fsid setting and specify the generated UUID:

fsid: <generated_uuid>

With generate_fsid set to true, the UUID is generated automatically, and you do not need to specify it in the fsid setting.

To enable authentication, uncomment the cephx setting under the Ceph Configuration section. Red Hat recommends running Ceph with authentication enabled:

cephx: true

Uncomment the monitor_interface setting and specify the network interface:

monitor_interface:

Example

monitor_interface: eth0

Note: The monitor_interface setting uses the IPv4 address. To use an IPv6 address, use the monitor_address setting instead.

If not using IPv6, skip this step. Uncomment and set the ip_version option:

ip_version: ipv6

Set the journal size:

journal_size: <size_in_MB>

If not filled, the default journal size is 5 GB. See Journal Settings for additional details.

Set the public network. Optionally, set the cluster network, too:

public_network: <public_network>
cluster_network: <cluster_network>

See Section 2.4, “Configuring Network” and the Network Configuration Reference for additional details.

If not using IPv6, skip this step. Uncomment and set the radosgw_civetweb_bind_ip option:

radosgw_civetweb_bind_ip: "[{{ ansible_default_ipv6.address }}]"
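Taken together, the edits described above yield an all.yml along these lines. This is a sketch for a CDN-based install; every value shown is illustrative, not a recommendation:

```yaml
# all.yml -- illustrative excerpt (CDN install); values are examples only
fetch_directory: ~/ceph-ansible-keys

ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: cdn
ceph_rhcs_version: 2

generate_fsid: false
fsid: a7f64266-0894-4f1e-a635-d0aeaca0e993  # output of uuidgen

cephx: true
monitor_interface: eth0
journal_size: 5120                          # in MB
public_network: 192.168.0.0/24
```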
3.1.3. Configuring Monitor Settings
Ansible will create Ceph Monitors without any additional configuration steps. However, you may override default settings for authentication, and for use with OpenStack. By default, the Calamari API is disabled.
To configure monitors, perform the following:
Navigate to the /etc/ansible/group_vars/ directory:

# cd /etc/ansible/group_vars/

As root, create a mons.yml file from the mons.yml.sample file and open it for editing:

# cp mons.yml.sample mons.yml
# vim mons.yml

To enable the Calamari API, uncomment the calamari setting and set it to true:

calamari: true

To configure other settings, uncomment them and set appropriate values.
3.1.4. Configuring Ceph OSD Settings
To configure OSDs:
Navigate to the /etc/ansible/group_vars/ directory:

# cd /etc/ansible/group_vars/

As root, create a new osds.yml file from the osds.yml.sample file and open it for editing:

# cp osds.yml.sample osds.yml
# vim osds.yml

Uncomment and set the settings that are relevant for your use case. See Table 3.1, “What settings are needed for my use case?” for details.

Once you are done editing the file, save your changes and close the file.
| I want: | Relevant Options | Comments |
|---|---|---|
| to have the Ceph journal and OSD data co-located on the same device and to specify OSD disks on my own. | journal_collocation, devices | The journal_collocation option must be set to true. |
| to have the Ceph journal and OSD data co-located on the same device and to let Ansible detect the OSD disks automatically. | journal_collocation, osd_auto_discovery | The journal_collocation and osd_auto_discovery options must be set to true. |
| to use one or more dedicated devices to store the Ceph journal. | raw_multi_journal, raw_journal_devices | The raw_multi_journal option must be set to true. |
| to use directories instead of disks. | osd_directory, osd_directories | The osd_directory option must be set to true. |
| to have the Ceph journal and OSD data co-located on the same device and encrypt OSD data. | dmcrypt_journal_collocation, devices | The dmcrypt_journal_collocation option must be set to true. Note that all OSDs will be encrypted. For details, see the Encryption chapter in the Red Hat Ceph Storage 2 Architecture Guide. |
| to use one or more dedicated devices to store the Ceph journal and encrypt OSD data. | dmcrypt_dedicated_journal, raw_journal_devices | The dmcrypt_dedicated_journal option must be set to true. Note that all OSDs will be encrypted. For details, see the Encryption chapter in the Red Hat Ceph Storage 2 Architecture Guide. |
| to use the BlueStore back end instead of the FileStore back end. | bluestore, devices | The bluestore option must be set to true. For details on BlueStore, see the OSD BlueStore (Technology Preview) chapter in the Administration Guide for Red Hat Ceph Storage. |
For additional settings, see the osds.yml.sample file located in /usr/share/ceph-ansible/group_vars/.
Some OSD options will conflict with each other. Avoid enabling these sets of options together:
- journal_collocation and raw_multi_journal
- journal_collocation and osd_directory
- raw_multi_journal and osd_directory
In addition, you must specify one of these options.
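For example, a journal-collocated layout with explicitly listed devices could be expressed in osds.yml as the following sketch; the device paths are hypothetical:

```yaml
# osds.yml -- illustrative excerpt: collocated journal and data
journal_collocation: true
devices:
  - /dev/sdb
  - /dev/sdc
```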
3.1.5. Overriding Ceph Default Settings
Unless otherwise specified in the Ansible configuration files, Ceph uses its default settings.
Because Ansible manages the Ceph configuration file, edit the /etc/ansible/group_vars/all.yml file to change the Ceph configuration. Use the ceph_conf_overrides setting to override the default Ceph configuration.
Ansible supports the same sections as the Ceph configuration file: [global], [mon], [osd], [mds], [rgw], and so on. You can also override particular instances, such as a particular Ceph Object Gateway instance. For example:
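A sketch of such an override, assuming a hypothetical Ceph Object Gateway instance named rgw.node1; the values are illustrative:

```yaml
# all.yml -- illustrative ceph_conf_overrides; instance name and values are assumptions
ceph_conf_overrides:
  global:
    osd_pool_default_pg_num: 128
  client.rgw.node1:
    rgw_dns_name: node1.example.com
```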
Ansible does not include braces when referring to a particular section of the Ceph configuration file. Section and setting names are terminated with a colon.
Do not set the cluster network with the cluster_network parameter in the CONFIG OVERRIDE section, because this can result in two conflicting cluster networks being set in the Ceph configuration file.
To set the cluster network, use the cluster_network parameter in the CEPH CONFIGURATION section. For details, see Configuring Ceph Global Settings.
3.1.6. Deploying a Ceph Cluster
Navigate to the Ansible configuration directory:

# cd /usr/share/ceph-ansible

As root, create a site.yml file from the site.yml.sample file:

# cp site.yml.sample site.yml

As the Ansible user, run the Ansible playbook from within the directory where the playbook exists:

$ ansible-playbook site.yml [-u <user_name>]

Once the playbook runs, it creates a running Ceph cluster.

Note: During the deployment of a Red Hat Ceph Storage cluster with Ansible, the installation, configuration, and enabling of NTP is done automatically on each node in the storage cluster.

As root, on the Ceph Monitor nodes, create a Calamari user:

Syntax

# calamari-ctl add_user --password <password> --email <email_address> <user_name>

Example

# calamari-ctl add_user --password abc123 --email user1@example.com user1
3.2. Installing Red Hat Ceph Storage by using the Command-line Interface
All Ceph clusters require at least one monitor, and at least as many OSDs as copies of an object stored on the cluster. Red Hat recommends using three monitors for production environments and a minimum of three Object Storage Devices (OSDs).
Bootstrapping the initial monitor is the first step in deploying a Ceph storage cluster. Ceph monitor deployment also sets important criteria for the entire cluster, such as:
- The number of replicas for pools
- The number of placement groups per OSD
- The heartbeat intervals
- Any authentication requirement
Most of these values are set by default, so it is useful to know about them when setting up the cluster for production.
Installing a Ceph storage cluster by using the command line interface involves these steps:
- Bootstrapping the initial Monitor node
- Adding an Object Storage Device (OSD) node
Red Hat does not support or test upgrading manually deployed clusters. Currently, the only supported way to upgrade to a minor version of Red Hat Ceph Storage 2 is to use the Ansible automation application. Therefore, Red Hat recommends using Ansible to deploy a new cluster with Red Hat Ceph Storage 2. See Section 3.1, “Installing Red Hat Ceph Storage using Ansible” for details.
You can use command-line utilities, such as apt-get, to upgrade manually deployed clusters, but Red Hat does not support or test this.
3.2.1. Monitor Bootstrapping
Bootstrapping a Monitor, and by extension a Ceph storage cluster, requires the following data:

- Unique Identifier
  The File System Identifier (fsid) is a unique identifier for the cluster. The fsid was originally used when the Ceph storage cluster was principally used for the Ceph file system. Ceph now supports native interfaces, block devices, and object storage gateway interfaces too, so fsid is a bit of a misnomer.

- Cluster Name
  Ceph clusters have a cluster name, which is a simple string without spaces. The default cluster name is ceph, but you can specify a different cluster name. Overriding the default cluster name is especially useful when you work with multiple clusters. When you run multiple clusters in a multi-site architecture, the cluster name, for example, us-west or us-east, identifies the cluster for the current command-line session.
  Note: To identify the cluster name on the command-line interface, specify the Ceph configuration file with the cluster name, for example, ceph.conf, us-west.conf, us-east.conf, and so on. Example:

  # ceph --cluster us-west ...

- Monitor Name
  Each Monitor instance within a cluster has a unique name. In common practice, the Ceph Monitor name is the node name. Red Hat recommends one Ceph Monitor per node, and no co-locating of the Ceph OSD daemons with the Ceph Monitor daemon. To retrieve the short node name, use the hostname -s command.

- Monitor Map
  Bootstrapping the initial Monitor requires you to generate a Monitor map. The Monitor map requires:
  - The File System Identifier (fsid)
  - The cluster name, or the default cluster name of ceph is used
  - At least one host name and its IP address

- Monitor Keyring
  Monitors communicate with each other by using a secret key. You must generate a keyring with a Monitor secret key and provide it when bootstrapping the initial Monitor.

- Administrator Keyring
  To use the ceph command-line interface utilities, create the client.admin user and generate its keyring. Also, you must add the client.admin user to the Monitor keyring.

The foregoing requirements do not imply the creation of a Ceph configuration file. However, as a best practice, Red Hat recommends creating a Ceph configuration file and populating it with the fsid, the mon initial members and the mon host settings at a minimum.

You can get and set all of the Monitor settings at runtime as well. However, the Ceph configuration file might contain only those settings that override the default values. When you add settings to a Ceph configuration file, these settings override the default settings. Maintaining those settings in a Ceph configuration file makes it easier to maintain the cluster.
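A minimal configuration file along these lines would look like the following sketch; the fsid and address are illustrative:

```ini
# /etc/ceph/ceph.conf -- minimal sketch; fsid, host name, and address are illustrative
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = node1
mon host = 192.168.0.120
```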
To bootstrap the initial Monitor, perform the following steps:
- Enable the Red Hat Ceph Storage 2 Monitor repository. For ISO-based installations, see the ISO installation section.
On your initial Monitor node, install the ceph-mon package as root:

$ sudo apt-get install ceph-mon

As root, create a Ceph configuration file in the /etc/ceph/ directory. By default, Ceph uses ceph.conf, where ceph reflects the cluster name:

Syntax

# touch /etc/ceph/<cluster_name>.conf

Example

# touch /etc/ceph/ceph.conf

As root, generate the unique identifier for your cluster and add the unique identifier to the [global] section of the Ceph configuration file:

Syntax

# echo "[global]" > /etc/ceph/<cluster_name>.conf
# echo "fsid = `uuidgen`" >> /etc/ceph/<cluster_name>.conf

Example

# echo "[global]" > /etc/ceph/ceph.conf
# echo "fsid = `uuidgen`" >> /etc/ceph/ceph.conf

View the current Ceph configuration file:

$ cat /etc/ceph/ceph.conf
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993

As root, add the initial Monitor to the Ceph configuration file:

Syntax

# echo "mon initial members = <monitor_host_name>[,<monitor_host_name>]" >> /etc/ceph/<cluster_name>.conf

Example

# echo "mon initial members = node1" >> /etc/ceph/ceph.conf

As root, add the IP address of the initial Monitor to the Ceph configuration file:

Syntax

# echo "mon host = <ip-address>[,<ip-address>]" >> /etc/ceph/<cluster_name>.conf

Example

# echo "mon host = 192.168.0.120" >> /etc/ceph/ceph.conf

Note: To use IPv6 addresses, you must set the ms bind ipv6 option to true. See the Red Hat Ceph Storage Configuration Guide for more details.

As root, create the keyring for the cluster and generate the Monitor secret key:

Syntax

# ceph-authtool --create-keyring /tmp/<cluster_name>.mon.keyring --gen-key -n mon. --cap mon '<capabilities>'

Example

# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /tmp/ceph.mon.keyring

As root, generate an administrator keyring, generate a <cluster_name>.client.admin.keyring user and add the user to the keyring:

Syntax

# ceph-authtool --create-keyring /etc/ceph/<cluster_name>.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon '<capabilities>' --cap osd '<capabilities>' --cap mds '<capabilities>'

Example

# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
creating /etc/ceph/ceph.client.admin.keyring

As root, add the <cluster_name>.client.admin.keyring key to the <cluster_name>.mon.keyring:

Syntax

# ceph-authtool /tmp/<cluster_name>.mon.keyring --import-keyring /etc/ceph/<cluster_name>.client.admin.keyring

Example

# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring

Generate the Monitor map. Specify the node name, IP address, and the fsid of the initial Monitor, and save it as /tmp/monmap:

Syntax

$ monmaptool --create --add <monitor_host_name> <ip-address> --fsid <uuid> /tmp/monmap

Example

$ monmaptool --create --add node1 192.168.0.120 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
monmaptool: monmap file /tmp/monmap
monmaptool: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)

As root on the initial Monitor node, create a default data directory:

Syntax

# mkdir /var/lib/ceph/mon/<cluster_name>-<monitor_host_name>

Example

# mkdir /var/lib/ceph/mon/ceph-node1

As root, populate the initial Monitor daemon with the Monitor map and keyring:

Syntax

# ceph-mon [--cluster <cluster_name>] --mkfs -i <monitor_host_name> --monmap /tmp/monmap --keyring /tmp/<cluster_name>.mon.keyring

Example

# ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ceph-mon: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1

View the current Ceph configuration file:

# cat /etc/ceph/ceph.conf
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon_initial_members = node1
mon_host = 192.168.0.120

For more details on the various Ceph configuration settings, see the Red Hat Ceph Storage Configuration Guide. The following example of a Ceph configuration file lists some of the most common configuration settings:
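A sketch of such a file with commonly tuned settings; all values shown here are illustrative, not recommendations:

```ini
# ceph.conf -- illustrative sketch of commonly tuned settings
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = node1
mon host = 192.168.0.120
public network = 192.168.0.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 128
osd pool default pgp num = 128
```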
As root, create the done file:

Syntax

# touch /var/lib/ceph/mon/<cluster_name>-<monitor_host_name>/done

Example

# touch /var/lib/ceph/mon/ceph-node1/done

As root, update the owner and group permissions on the newly created directory and files:

Syntax

# chown -R <owner>:<group> <path_to_directory>

Note: If the Ceph Monitor node is co-located with an OpenStack Controller node, then the Glance and Cinder keyring files must be owned by glance and cinder respectively. For example:

# ls -l /etc/ceph/
...
-rw-------. 1 glance glance 64 <date> ceph.client.glance.keyring
-rw-------. 1 cinder cinder 64 <date> ceph.client.cinder.keyring
...

For storage clusters with custom names, as root, add the following line:

Syntax

$ sudo echo "CLUSTER=<custom_cluster_name>" >> /etc/default/ceph

Example

$ sudo echo "CLUSTER=test123" >> /etc/default/ceph

As root, start and enable the ceph-mon process on the initial Monitor node:

Syntax

$ sudo systemctl enable ceph-mon.target
$ sudo systemctl enable ceph-mon@<monitor_host_name>
$ sudo systemctl start ceph-mon@<monitor_host_name>

Example

$ sudo systemctl enable ceph-mon.target
$ sudo systemctl enable ceph-mon@node1
$ sudo systemctl start ceph-mon@node1

Verify that Ceph created the default pools:

$ ceph osd lspools
0 rbd,

Verify that the Monitor is running. The Monitor is up and running, but the cluster health will be in a HEALTH_ERR state. This error indicates that placement groups are stuck and inactive. Once OSDs are added to the cluster and active, the placement group health errors will disappear.
To add more Red Hat Ceph Storage Monitors to the storage cluster, see the Red Hat Ceph Storage Administration Guide.
3.2.2. OSD Bootstrapping
Once you have your initial monitor running, you can start adding the Object Storage Devices (OSDs). Your cluster cannot reach an active + clean state until you have enough OSDs to handle the number of copies of an object.
The default number of copies for an object is three, so you will need a minimum of three OSD nodes. However, if you want only two copies of an object, and therefore only two OSD nodes, update the osd pool default size and osd pool default min size settings in the Ceph configuration file.
For more details, see the OSD Configuration Reference section in the Red Hat Ceph Storage Configuration Guide.
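For the two-copy configuration described above, the relevant ceph.conf fragment would be a sketch like the following:

```ini
# ceph.conf -- sketch: keep two copies of each object
[global]
osd pool default size = 2       # two replicas instead of the default three
osd pool default min size = 1   # assumption: allow I/O with one healthy replica
```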
After bootstrapping the initial monitor, the cluster has a default CRUSH map. However, the CRUSH map does not have any Ceph OSD daemons mapped to a Ceph node.
To add an OSD to the cluster and update the default CRUSH map, execute the following on each OSD node:
- Enable the Red Hat Ceph Storage 2 OSD repository. For ISO-based installations, see the ISO installation section.
As
root, install theceph-osdpackage on the Ceph OSD node:sudo apt-get install ceph-osd
$ sudo apt-get install ceph-osdCopy to Clipboard Copied! Toggle word wrap Toggle overflow Copy the Ceph configuration file and administration keyring file from the initial Monitor node to the OSD node:
Syntax
# scp <user_name>@<monitor_host_name>:<path_on_remote_system> <path_to_local_file>

Example

# scp root@node1:/etc/ceph/ceph.conf /etc/ceph
# scp root@node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph

Generate the Universally Unique Identifier (UUID) for the OSD:
$ uuidgen
b367c360-b364-4b1d-8fc6-09408a9cda7a

As root, create the OSD instance:

Syntax
# ceph osd create <uuid> [<osd_id>]

Example

# ceph osd create b367c360-b364-4b1d-8fc6-09408a9cda7a 0

Note
This command outputs the OSD number identifier needed for subsequent steps.
As root, create the default directory for the new OSD:

Syntax
# mkdir /var/lib/ceph/osd/<cluster_name>-<osd_id>

Example

# mkdir /var/lib/ceph/osd/ceph-0

As root, prepare the drive for use as an OSD, and mount it to the directory you just created. Create a partition for the Ceph data and journal. The journal and the data partitions can be located on the same disk. This example uses a 15 GB disk:

Syntax
# parted <path_to_disk> mklabel gpt
# parted <path_to_disk> mkpart primary 1 10000
# mkfs -t <fstype> <path_to_partition>
# mount -o noatime <path_to_partition> /var/lib/ceph/osd/<cluster_name>-<osd_id>
# echo "<path_to_partition> /var/lib/ceph/osd/<cluster_name>-<osd_id> xfs defaults,noatime 1 2" >> /etc/fstab

Example
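Because the mount point path recurs in the mount command and the fstab entry, it helps to build both from variables. A minimal sketch, where the device, cluster name, and OSD id are hypothetical placeholders to substitute on a real node:

```shell
# Hypothetical placeholder values; substitute your own on a real OSD node.
PART=/dev/sdb1
CLUSTER=ceph
OSD_ID=0

# Derive the mount point once and reuse it for both mount and fstab.
MOUNTPOINT="/var/lib/ceph/osd/${CLUSTER}-${OSD_ID}"
FSTAB_LINE="${PART} ${MOUNTPOINT} xfs defaults,noatime 1 2"

# On a real node, as root: mount -o noatime "$PART" "$MOUNTPOINT"
# and append the entry:   echo "$FSTAB_LINE" >> /etc/fstab
echo "${FSTAB_LINE}"
```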
As root, initialize the OSD data directory:

Syntax
# ceph-osd -i <osd_id> --mkfs --mkkey --osd-uuid <uuid>

Example

# ceph-osd -i 0 --mkfs --mkkey --osd-uuid b367c360-b364-4b1d-8fc6-09408a9cda7a
...
auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
...
created new key in keyring /var/lib/ceph/osd/ceph-0/keyring

Note
The directory must be empty before you run ceph-osd with the --mkkey option. If you have a custom cluster name, the ceph-osd utility requires the --cluster option.

As root, register the OSD authentication key. If your cluster name differs from ceph, insert your cluster name instead:

Syntax
# ceph auth add osd.<osd_id> osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/<cluster_name>-<osd_id>/keyring

Example

# ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
added key for osd.0

As root, add the OSD node to the CRUSH map:

Syntax
# ceph [--cluster <cluster_name>] osd crush add-bucket <host_name> host

Example

# ceph osd crush add-bucket node2 host

As root, place the OSD node under the default CRUSH tree:

Syntax
# ceph [--cluster <cluster_name>] osd crush move <host_name> root=default

Example

# ceph osd crush move node2 root=default

As root, add the OSD disk to the CRUSH map:

Syntax
# ceph [--cluster <cluster_name>] osd crush add osd.<osd_id> <weight> [<bucket_type>=<bucket-name> ...]

Example

# ceph osd crush add osd.0 1.0 host=node2
add item id 0 name 'osd.0' weight 1 at location {host=node2} to crush map

Note
You can also decompile the CRUSH map and add the OSD to the device list. Add the OSD node as a bucket, then add the device as an item in the OSD node, assign the OSD a weight, recompile the CRUSH map, and set the CRUSH map. For more details, see the Red Hat Ceph Storage Storage Strategies Guide.
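For orientation, the decompiled CRUSH map entries that this manual workflow edits might look like the following sketch after adding osd.0 under host node2. The bucket ids, weights, and the straw algorithm shown here are illustrative assumptions and vary by release and configuration:

```
# devices
device 0 osd.0

# buckets
host node2 {
    id -2        # illustrative bucket id
    alg straw
    hash 0       # rjenkins1
    item osd.0 weight 1.000
}

root default {
    id -1        # illustrative bucket id
    alg straw
    hash 0       # rjenkins1
    item node2 weight 1.000
}
```

The map is typically extracted with ceph osd getcrushmap, converted to and from this text form with crushtool -d and crushtool -c, and re-injected with ceph osd setcrushmap.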
As root, update the owner and group permissions on the newly created directory and files:

Syntax
# chown -R <owner>:<group> <path_to_directory>

Example

# chown -R ceph:ceph /var/lib/ceph/osd
# chown -R ceph:ceph /var/log/ceph
# chown -R ceph:ceph /var/run/ceph
# chown -R ceph:ceph /etc/ceph

For storage clusters with custom names, as root, add the following line to the /etc/default/ceph file:

Syntax
$ echo "CLUSTER=<custom_cluster_name>" | sudo tee -a /etc/default/ceph

Example

$ echo "CLUSTER=test123" | sudo tee -a /etc/default/ceph

Note
The tee -a form is used because the shell performs a >> redirection before sudo runs, so sudo echo "CLUSTER=..." >> /etc/default/ceph fails for non-root users.

The OSD node is in your Ceph storage cluster configuration. However, the OSD daemon is down and in. The new OSD must be up before it can begin receiving data. As root, enable and start the OSD process:

Syntax
$ sudo systemctl enable ceph-osd.target
$ sudo systemctl enable ceph-osd@<osd_id>
$ sudo systemctl start ceph-osd@<osd_id>

Example

$ sudo systemctl enable ceph-osd.target
$ sudo systemctl enable ceph-osd@0
$ sudo systemctl start ceph-osd@0

Once you start the OSD daemon, it is up and in.
Now you have the monitors and some OSDs up and running. You can watch the placement groups peer by executing the following command:
$ ceph -w
To view the OSD tree, execute the following command:
$ ceph osd tree
Example
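With the single node2 host and osd.0 added above, the ceph osd tree output typically resembles the following illustrative sketch; the ids, weights, and column layout vary by release:

```
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 1.00000 root default
-2 1.00000     host node2
 0 1.00000         osd.0       up  1.00000          1.00000
```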
To expand the storage capacity by adding new OSDs to the storage cluster, see the Red Hat Ceph Storage Administration Guide for more details.
3.2.3. Calamari Server Installation
The Calamari server provides a RESTful API for monitoring Ceph storage clusters.
To install calamari-server, perform the following steps on all Monitor nodes.
- As root, enable the Red Hat Ceph Storage 2 Monitor repository.

As root, install calamari-server:

$ sudo apt-get install calamari-server

As root, initialize the calamari-server:

Syntax
$ sudo calamari-ctl clear --yes-i-am-sure
$ sudo calamari-ctl initialize --admin-username <uid> --admin-password <pwd> --admin-email <email>

Example

$ sudo calamari-ctl clear --yes-i-am-sure
$ sudo calamari-ctl initialize --admin-username admin --admin-password admin --admin-email cephadm@example.com

Important
The calamari-ctl clear --yes-i-am-sure command is only necessary for removing the database of old Calamari server installations. Running this command on a new Calamari server results in an error.

Note
During initialization, the calamari-server will generate a self-signed certificate and a private key and place them in the /etc/calamari/ssl/certs/ and /etc/calamari/ssl/private directories, respectively. Use HTTPS when making requests. Otherwise, user names and passwords are transmitted in clear text.
The calamari-ctl initialize process generates a private key and a self-signed certificate, which means there is no need to purchase a certificate from a Certificate Authority (CA).
To verify access to the HTTPS API through a web browser, go to the following URL. Click through the untrusted certificate warnings, because the auto-generated certificate is self-signed:
https://<calamari_hostname>:8002/api/v2/cluster
To use a key and certificate from a CA:
- Purchase a certificate from a CA. During the process, you will generate a private key and a certificate signing request for the CA. Alternatively, you can use the self-signed certificate generated by Calamari.
- Save the private key associated with the certificate to a path, preferably under /etc/calamari/ssl/private/.
- Save the certificate to a path, preferably under /etc/calamari/ssl/certs/.
- Open the /etc/calamari/calamari.conf file.
- Under the [calamari_web] section, modify ssl_cert and ssl_key to point to the respective certificate and key paths, for example:

  [calamari_web]
  ...
  ssl_cert = /etc/calamari/ssl/certs/calamari-lite-bundled.crt
  ssl_key = /etc/calamari/ssl/private/calamari-lite.key

- As root, re-initialize Calamari:

  $ sudo calamari-ctl initialize