Chapter 3. Storage Cluster Installation
Production Ceph storage clusters start with a minimum of three monitor hosts and three OSD nodes containing multiple OSDs.

You can install a Red Hat Ceph Storage cluster by using either the Ansible automation application or the command-line interface, as described in the following sections.
3.1. Installing Red Hat Ceph Storage using Ansible
You can use the Ansible automation application to deploy Red Hat Ceph Storage. Execute the procedures in Figure 2.1, “Prerequisite Workflow” first.
To add more Monitors or OSDs to an existing storage cluster, see the Red Hat Ceph Storage Administration Guide for details.
3.1.1. Installing ceph-ansible
Install the ceph-ansible package:
# yum install ceph-ansible
As root, add the Ceph hosts to the /etc/ansible/hosts file. Remember to comment out the example hosts.
Add Monitor nodes under the [mons] section:
[mons]
<monitor-host-name>
<monitor-host-name>
<monitor-host-name>
Add OSD nodes under the [osds] section. If the OSD hosts have sequential naming, consider using a range:
[osds]
<osd-host-name[1:10]>
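For example, if the OSD hosts were named ceph-osd-01 through ceph-osd-10 (hypothetical names used only for illustration), the range entry would expand to all ten hosts:
[osds]
ceph-osd-[01:10]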
Optionally, use the devices parameter to specify the devices that the OSD nodes will use. Use a comma-separated list to list multiple devices:
[osds]
<ceph-host-name> devices="[ '<device_1>', '<device_2>' ]"
For example:
[osds]
ceph-osd-01 devices="[ '/dev/sdb', '/dev/sdc' ]"
ceph-osd-02 devices="[ '/dev/sdb', '/dev/sdc', '/dev/sdd' ]"
If you do not specify any devices, set the osd_auto_discovery option to true in the osds.yml file. See Section 3.1.4, “Configuring Ceph OSD Settings” for more details.
Using the devices parameter is useful when OSDs use devices with different names or when one of the devices failed on one of the OSDs. See Section A.1, “Ansible Stops Installation Because It Detects Less Devices Than It Expected” for more details.
As the Ansible user, ensure that Ansible can reach the Ceph hosts:
$ ansible all -m ping
Note: See Section 2.8, “Creating an Ansible User with Sudo Access” for more details on creating an Ansible user.
3.1.2. Configuring Ceph Global Settings
Create a directory under the home directory so Ansible can write the keys:
# cd ~
# mkdir ceph-ansible-keys
As root, create a symbolic link to the Ansible group_vars directory in the /etc/ansible/ directory:
# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars
As root, create an all.yml file from the all.yml.sample file and open it for editing:
# cd /etc/ansible/group_vars
# cp all.yml.sample all.yml
# vim all.yml
Uncomment the fetch_directory setting under the GENERAL section. Then, point it to the directory you created in step 1:
fetch_directory: ~/ceph-ansible-keys
Uncomment the ceph_repository_type setting and set it to either cdn or iso:
ceph_repository_type: cdn
Select the installation method. There are two approaches:
If Ceph hosts have connectivity to the Red Hat Content Delivery Network (CDN), uncomment and set the following:
ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: cdn
ceph_rhcs_version: 2
If Ceph nodes cannot connect to the Red Hat Content Delivery Network (CDN), uncomment the ceph_repository_type setting and set it to iso. This approach is most frequently used in high security environments.
ceph_repository_type: iso
Then, uncomment the ceph_rhcs_iso_path setting and specify the path to the ISO image:
ceph_rhcs_iso_path: <path>
Example
ceph_rhcs_iso_path: /path/to/ISO_file.iso
Set the generate_fsid setting to false:
generate_fsid: false
Note: With generate_fsid set to false, you must specify a value for the File System Identifier (FSID) manually. For example, you can generate a Universally Unique Identifier (UUID) with the uuidgen command-line utility. Once you have generated a UUID, uncomment the fsid setting and specify the generated UUID:
fsid: <generated_uuid>
With generate_fsid set to true, the UUID is generated automatically, which removes the need to specify the UUID in the fsid setting.
To enable authentication, uncomment the cephx setting under the Ceph Configuration section. Red Hat recommends running Ceph with authentication enabled:
cephx: true
Uncomment the monitor_interface setting and specify the network interface:
monitor_interface:
Example
monitor_interface: eth0
Note: The monitor_interface setting uses the IPv4 address. To use an IPv6 address, use the monitor_address setting instead.
If you use IPv6 addressing, uncomment and set the ip_version option; otherwise, skip this step:
ip_version: ipv6
Set the journal size:
journal_size: <size_in_MB>
If not set, the default journal size is 5 GB. See Journal Settings for additional details.
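For example, to set a 10 GB journal (an illustrative value; size the journal for your own workload):
journal_size: 10240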
Set the public network. Optionally, set the cluster network, too:
public_network: <public_network>
cluster_network: <cluster_network>
See Section 2.5, “Configuring Network” and Network Configuration Reference for additional details.
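For example, with illustrative subnets (replace them with your own networks):
public_network: 192.168.0.0/24
cluster_network: 192.168.1.0/24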
If you use IPv6 addressing, uncomment and set the radosgw_civetweb_bind_ip option; otherwise, skip this step:
radosgw_civetweb_bind_ip: "[{{ ansible_default_ipv6.address }}]"
3.1.3. Configuring Monitor Settings
Ansible will create Ceph Monitors without any additional configuration steps. However, you may override default settings for authentication, and for use with OpenStack. By default, the Calamari API is disabled.
To configure monitors, perform the following:
Navigate to the /etc/ansible/group_vars/ directory:
# cd /etc/ansible/group_vars/
As root, create a mons.yml file from the mons.yml.sample file and open it for editing:
# cp mons.yml.sample mons.yml
# vim mons.yml
To enable the Calamari API, uncomment the calamari setting and set it to true:
calamari: true
- To configure other settings, uncomment them and set appropriate values.
3.1.4. Configuring Ceph OSD Settings
To configure OSDs:
Navigate to the /etc/ansible/group_vars/ directory:
# cd /etc/ansible/group_vars/
As root, create a new osds.yml file from the osds.yml.sample file and open it for editing:
# cp osds.yml.sample osds.yml
# vim osds.yml
- Uncomment and set settings that are relevant for your use case. See Table 3.1, “What settings are needed for my use case?” for details.
- Once you are done editing the file, save your changes and close the file.
Table 3.1. What settings are needed for my use case?

| I want: | Relevant Options | Comments |
|---|---|---|
| to have the Ceph journal and OSD data co-located on the same device and to specify OSD disks on my own. | journal_collocation, devices | The devices setting lists the disks that the OSDs will use. |
| to have the Ceph journal and OSD data co-located on the same device and Ansible to detect and configure all available devices automatically. | journal_collocation, osd_auto_discovery | |
| to use one or more dedicated devices to store the Ceph journal. | raw_multi_journal, devices, raw_journal_devices | The raw_journal_devices setting lists the devices that store the journals; use one entry for each OSD listed in devices. |
| to use directories instead of disks. | osd_directory, osd_directories | The osd_directories setting lists the directories that the OSDs will use. |
| to have the Ceph journal and OSD data co-located on the same device and encrypt OSD data. | dmcrypt_journal_collocation, devices | The devices setting lists the disks that the OSDs will use. Note that all OSDs will be encrypted. For details, see the Encryption chapter in the Red Hat Ceph Storage 2 Architecture Guide. |
| to use one or more dedicated devices to store the Ceph journal and encrypt OSD data. | dmcrypt_dedicated_journal, devices, raw_journal_devices | The raw_journal_devices setting lists the devices that store the journals. Note that all OSDs will be encrypted. For details, see the Encryption chapter in the Red Hat Ceph Storage 2 Architecture Guide. |
| to use the BlueStore back end instead of the FileStore back end. | bluestore, devices | The devices setting lists the disks that the OSDs will use. For details on BlueStore, see the OSD BlueStore (Technology Preview) chapter in the Administration Guide for Red Hat Ceph Storage. |
For additional settings, see the osds.yml.sample file located in /usr/share/ceph-ansible/group_vars/.
Some OSD options conflict with each other. Avoid enabling these sets of options together:
- journal_collocation and raw_multi_journal
- journal_collocation and osd_directory
- raw_multi_journal and osd_directory
In addition, specifying one of these options is required.
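As an illustration of the first use case in Table 3.1, a minimal osds.yml with a co-located journal and explicitly listed disks might look like the following sketch. The device paths are assumptions; use the raw, unpartitioned devices present on your own OSD nodes:
# Co-locate the journal and OSD data on the same devices.
journal_collocation: true
# Raw devices the OSDs will use (example paths).
devices:
  - /dev/sdb
  - /dev/sdc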
3.1.5. Overriding Ceph Default Settings
Unless otherwise specified in the Ansible configuration files, Ceph uses its default settings.
Because Ansible manages the Ceph configuration file, edit the /etc/ansible/group_vars/all.yml file to change the Ceph configuration. Use the ceph_conf_overrides setting to override the default Ceph configuration.
Ansible supports the same sections as the Ceph configuration file: [global], [mon], [osd], [mds], [rgw], and so on. You can also override particular instances, such as a particular Ceph Object Gateway instance. For example:
###################
# CONFIG OVERRIDE #
###################
ceph_conf_overrides:
   client.rgw.rgw1:
      log_file: /var/log/ceph/ceph-rgw-rgw1.log
Ansible does not include braces when referring to a particular section of the Ceph configuration file. Section and setting names are terminated with a colon.
Do not set the cluster network with the cluster_network parameter in the CONFIG OVERRIDE section, because this can cause two conflicting cluster networks to be set in the Ceph configuration file. To set the cluster network, use the cluster_network parameter in the CEPH CONFIGURATION section. For details, see Configuring Ceph Global Settings.
3.1.6. Deploying a Ceph Cluster
Navigate to the Ansible configuration directory:
# cd /usr/share/ceph-ansible
As root, create a site.yml file from the site.yml.sample file:
# cp site.yml.sample site.yml
As the Ansible user, run the Ansible playbook from within the directory where the playbook exists:
$ ansible-playbook site.yml [-u <user_name>]
Once the playbook runs, it creates a running Ceph cluster.
Note: During the deployment of a Red Hat Ceph Storage cluster with Ansible, NTP is installed, configured, and enabled automatically on each node in the storage cluster.
As root, on the Ceph Monitor nodes, create a Calamari user:
Syntax
# calamari-ctl add_user --password <password> --email <email_address> <user_name>
Example
# calamari-ctl add_user --password abc123 --email user1@example.com user1
3.1.7. Taking over an Existing Cluster
You can configure Ansible to use a cluster deployed without Ansible. For example, if you upgraded Red Hat Ceph Storage 1.3 clusters to version 2 manually, configure them to use Ansible by following this procedure:
- After manually upgrading from version 1.3 to version 2, install and configure Ansible on the administration node. This is the node where the master Ceph configuration file is maintained. See Section 3.1.1, “Installing ceph-ansible” for details.
- Ensure that the Ansible administration node has passwordless ssh access to all Ceph nodes in the cluster. See Section 2.9, “Enabling Password-less SSH (Ansible Deployment Only)” for more details.
As root, create a symbolic link to the Ansible group_vars directory in the /etc/ansible/ directory:
# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars
As root, create an all.yml file from the all.yml.sample file and open it for editing:
# cd /etc/ansible/group_vars
# cp all.yml.sample all.yml
# vim all.yml
- Set the generate_fsid setting to false in group_vars/all.yml.
- Get the current cluster fsid by executing ceph fsid.
- Set the retrieved fsid in group_vars/all.yml.
- Modify the Ansible inventory in /etc/ansible/hosts to include the Ceph hosts. Add monitors under a [mons] section, OSDs under an [osds] section, and gateways under an [rgws] section to identify their roles to Ansible.
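For example, with hypothetical host names (use your own), the /etc/ansible/hosts inventory might look like:
[mons]
mon-node1
mon-node2
mon-node3

[osds]
osd-node[1:3]

[rgws]
rgw-node1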
Make sure ceph_conf_overrides is updated with the original ceph.conf options used for the [global], [osd], [mon], and [client] sections in the all.yml file.
Options like osd journal, public_network, and cluster_network should not be added to ceph_conf_overrides because they are already part of all.yml. Only the options that are not part of all.yml and are in the original ceph.conf should be added to ceph_conf_overrides.
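For illustration only, if the original ceph.conf contained custom [global] and [osd] options that all.yml does not cover, the override might look like the following sketch. The option values here are assumptions, not recommendations; copy the actual values from your original ceph.conf:
ceph_conf_overrides:
   global:
      mon_osd_full_ratio: 0.92
   osd:
      osd_max_backfills: 1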
From the /usr/share/ceph-ansible/ directory, run the playbook:
# cd /usr/share/ceph-ansible/
# cp infrastructure-playbooks/take-over-existing-cluster.yml .
$ ansible-playbook take-over-existing-cluster.yml -u <username>
3.1.8. Purging a Ceph Cluster
If you deployed a Ceph cluster using Ansible and you want to purge the cluster, use the purge-cluster.yml Ansible playbook located in the infrastructure-playbooks directory.
Purging a Ceph cluster permanently removes the data stored on the cluster’s OSDs.
Before purging the Ceph cluster…
Check the osd_auto_discovery option in the osds.yml file. If this option is set to true, the purge will fail. To prevent the failure, do the following steps before running the purge:
- Declare the OSD devices in the osds.yml file. See Section 3.1.4, “Configuring Ceph OSD Settings” for more details.
- Comment out the osd_auto_discovery option in the osds.yml file.
To purge the Ceph cluster…
As root, navigate to the /usr/share/ceph-ansible/ directory:
# cd /usr/share/ceph-ansible
As root, copy the purge-cluster.yml Ansible playbook to the current directory:
# cp infrastructure-playbooks/purge-cluster.yml .
Run the purge-cluster.yml Ansible playbook:
$ ansible-playbook purge-cluster.yml
3.2. Installing Red Hat Ceph Storage by using the Command-line Interface
All Ceph clusters require at least one monitor, and at least as many OSDs as copies of an object stored on the cluster. Red Hat recommends using three monitors for production environments and a minimum of three Object Storage Devices (OSDs).
Bootstrapping the initial monitor is the first step in deploying a Ceph storage cluster. Ceph monitor deployment also sets important criteria for the entire cluster, such as:
- The number of replicas for pools
- The number of placement groups per OSD
- The heartbeat intervals
- Any authentication requirement
Most of these values are set by default, so it is useful to know about them when setting up the cluster for production.
Installing a Ceph storage cluster by using the command line interface involves these steps:
- Bootstrapping the initial Monitor node
- Adding an Object Storage Device (OSD) node
Red Hat does not support or test upgrading manually deployed clusters. Currently, the only supported way to upgrade to a minor version of Red Hat Ceph Storage 2 is to use the Ansible automation application. Therefore, Red Hat recommends using Ansible to deploy a new cluster with Red Hat Ceph Storage 2. See Section 3.1, “Installing Red Hat Ceph Storage using Ansible” for details.
You can use command-line utilities, such as Yum, to upgrade manually deployed clusters, but Red Hat does not support or test this.
3.2.1. Monitor Bootstrapping
Bootstrapping a Monitor, and by extension a Ceph storage cluster, requires the following data:
- Unique Identifier
The File System Identifier (fsid) is a unique identifier for the cluster. The fsid was originally used when the Ceph storage cluster was principally used for the Ceph file system. Ceph now supports native interfaces, block devices, and object storage gateway interfaces too, so fsid is a bit of a misnomer.
is a bit of a misnomer. - Cluster Name
Ceph clusters have a cluster name, which is a simple string without spaces. The default cluster name is ceph, but you can specify a different cluster name. Overriding the default cluster name is especially useful when you work with multiple clusters.
When you run multiple clusters in a multi-site architecture, the cluster name, for example us-west or us-east, identifies the cluster for the current command-line session.
Note: To identify the cluster name on the command-line interface, specify the Ceph configuration file with the cluster name, for example, ceph.conf, us-west.conf, us-east.conf, and so on.
Example:
# ceph --cluster us-west ...
- Monitor Name
Each Monitor instance within a cluster has a unique name. In common practice, the Ceph Monitor name is the node name. Red Hat recommends one Ceph Monitor per node, and not co-locating the Ceph OSD daemons with the Ceph Monitor daemon. To retrieve the short node name, use the hostname -s command.
Bootstrapping the initial Monitor requires you to generate a Monitor map. The Monitor map requires:
- The File System Identifier (fsid)
- The cluster name (or the default cluster name of ceph is used)
- At least one host name and its IP address.
- Monitor Keyring
- Monitors communicate with each other by using a secret key. You must generate a keyring with a Monitor secret key and provide it when bootstrapping the initial Monitor.
- Administrator Keyring
To use the ceph command-line interface utilities, create the client.admin user and generate its keyring. Also, you must add the client.admin user to the Monitor keyring.
The foregoing requirements do not imply the creation of a Ceph configuration file. However, as a best practice, Red Hat recommends creating a Ceph configuration file and populating it with the fsid, the mon initial members, and the mon host settings at a minimum.
You can get and set all of the Monitor settings at runtime as well. However, the Ceph configuration file might contain only those settings that override the default values. When you add settings to a Ceph configuration file, these settings override the default settings. Maintaining those settings in a Ceph configuration file makes it easier to maintain the cluster.
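Putting the three recommended settings together, a minimal Ceph configuration file looks like the following sketch; the fsid, host name, and IP address shown are the example values used throughout this procedure:
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = node1
mon host = 192.168.0.120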
To bootstrap the initial Monitor, perform the following steps:
- Enable the Red Hat Ceph Storage 2 Monitor repository. For ISO-based installations, see the ISO installation section.
On your initial Monitor node, install the ceph-mon package as root:
# yum install ceph-mon
As root, create a Ceph configuration file in the /etc/ceph/ directory. By default, Ceph uses ceph.conf, where ceph reflects the cluster name:
Syntax
# touch /etc/ceph/<cluster_name>.conf
Example
# touch /etc/ceph/ceph.conf
As root, generate the unique identifier for your cluster and add the unique identifier to the [global] section of the Ceph configuration file:
Syntax
# echo "[global]" > /etc/ceph/<cluster_name>.conf
# echo "fsid = `uuidgen`" >> /etc/ceph/<cluster_name>.conf
Example
# echo "[global]" > /etc/ceph/ceph.conf
# echo "fsid = `uuidgen`" >> /etc/ceph/ceph.conf
View the current Ceph configuration file:
$ cat /etc/ceph/ceph.conf
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
As root, add the initial Monitor to the Ceph configuration file:
Syntax
# echo "mon initial members = <monitor_host_name>[,<monitor_host_name>]" >> /etc/ceph/<cluster_name>.conf
Example
# echo "mon initial members = node1" >> /etc/ceph/ceph.conf
As root, add the IP address of the initial Monitor to the Ceph configuration file:
Syntax
# echo "mon host = <ip-address>[,<ip-address>]" >> /etc/ceph/<cluster_name>.conf
Example
# echo "mon host = 192.168.0.120" >> /etc/ceph/ceph.conf
Note: To use IPv6 addresses, you must set the ms bind ipv6 option to true. See the Red Hat Ceph Storage Configuration Guide for more details.
As root, create the keyring for the cluster and generate the Monitor secret key:
Syntax
# ceph-authtool --create-keyring /tmp/<cluster_name>.mon.keyring --gen-key -n mon. --cap mon '<capabilities>'
Example
# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /tmp/ceph.mon.keyring
As root, generate an administrator keyring, generate a <cluster_name>.client.admin.keyring user, and add the user to the keyring:
Syntax
# ceph-authtool --create-keyring /etc/ceph/<cluster_name>.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon '<capabilities>' --cap osd '<capabilities>' --cap mds '<capabilities>'
Example
# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
creating /etc/ceph/ceph.client.admin.keyring
As root, add the <cluster_name>.client.admin.keyring key to the <cluster_name>.mon.keyring:
Syntax
# ceph-authtool /tmp/<cluster_name>.mon.keyring --import-keyring /etc/ceph/<cluster_name>.client.admin.keyring
Example
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring
Generate the Monitor map. Specify the node name, IP address, and fsid of the initial Monitor, and save the map as /tmp/monmap:
Syntax
$ monmaptool --create --add <monitor_host_name> <ip-address> --fsid <uuid> /tmp/monmap
Example
$ monmaptool --create --add node1 192.168.0.120 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
monmaptool: monmap file /tmp/monmap
monmaptool: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
As root on the initial Monitor node, create a default data directory:
Syntax
# mkdir /var/lib/ceph/mon/<cluster_name>-<monitor_host_name>
Example
# mkdir /var/lib/ceph/mon/ceph-node1
As root, populate the initial Monitor daemon with the Monitor map and keyring:
Syntax
# ceph-mon [--cluster <cluster_name>] --mkfs -i <monitor_host_name> --monmap /tmp/monmap --keyring /tmp/<cluster_name>.mon.keyring
Example
# ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ceph-mon: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1
View the current Ceph configuration file:
# cat /etc/ceph/ceph.conf
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon_initial_members = node1
mon_host = 192.168.0.120
For more details on the various Ceph configuration settings, see the Red Hat Ceph Storage Configuration Guide. The following example of a Ceph configuration file lists some of the most common configuration settings:
Example
[global]
fsid = <cluster-id>
mon initial members = <monitor_host_name>[, <monitor_host_name>]
mon host = <ip-address>[, <ip-address>]
public network = <network>[, <network>]
cluster network = <network>[, <network>]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = <n>
filestore xattr use omap = true
osd pool default size = <n>      # Write an object n times.
osd pool default min size = <n>  # Allow writing n copies in a degraded state.
osd pool default pg num = <n>
osd pool default pgp num = <n>
osd crush chooseleaf type = <n>
As root, create the done file:
Syntax
# touch /var/lib/ceph/mon/<cluster_name>-<monitor_host_name>/done
Example
# touch /var/lib/ceph/mon/ceph-node1/done
As root, update the owner and group permissions on the newly created directory and files:
Syntax
# chown -R <owner>:<group> <path_to_directory>
Example
# chown -R ceph:ceph /var/lib/ceph/mon
# chown -R ceph:ceph /var/log/ceph
# chown -R ceph:ceph /var/run/ceph
# chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
# chown ceph:ceph /etc/ceph/ceph.conf
# chown ceph:ceph /etc/ceph/rbdmap
Note: If the Ceph Monitor node is co-located with an OpenStack Controller node, then the Glance and Cinder keyring files must be owned by glance and cinder respectively. For example:
# ls -l /etc/ceph/
...
-rw-------. 1 glance glance 64 <date> ceph.client.glance.keyring
-rw-------. 1 cinder cinder 64 <date> ceph.client.cinder.keyring
...
For storage clusters with custom names, as root, add the following line to the /etc/sysconfig/ceph file:
Syntax
# echo "CLUSTER=<custom_cluster_name>" >> /etc/sysconfig/ceph
Example
# echo "CLUSTER=test123" >> /etc/sysconfig/ceph
As root, start and enable the ceph-mon process on the initial Monitor node:
Syntax
# systemctl enable ceph-mon.target
# systemctl enable ceph-mon@<monitor_host_name>
# systemctl start ceph-mon@<monitor_host_name>
Example
# systemctl enable ceph-mon.target
# systemctl enable ceph-mon@node1
# systemctl start ceph-mon@node1
Verify that Ceph created the default pools:
$ ceph osd lspools
0 rbd,
Verify that the Monitor is running. The status output will look similar to the following example. The Monitor is up and running, but the cluster health will be in a HEALTH_ERR state. This error indicates that placement groups are stuck and inactive. Once OSDs are added to the cluster and become active, the placement group health errors will disappear.
Example
$ ceph -s
    cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
     health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
     monmap e1: 1 mons at {node1=192.168.0.120:6789/0}, election epoch 1, quorum 0 node1
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 192 creating
To add more Red Hat Ceph Storage Monitors to the storage cluster, see the Red Hat Ceph Storage Administration Guide.
3.2.2. OSD Bootstrapping
Once you have your initial Monitor running, you can start adding the Object Storage Devices (OSDs). Your cluster cannot reach an active + clean state until you have enough OSDs to handle the number of copies of an object.
The default number of copies for an object is three, so you need a minimum of three OSD nodes. However, if you want only two copies of an object, and therefore only two OSD nodes, update the osd pool default size and osd pool default min size settings in the Ceph configuration file.
For more details, see the OSD Configuration Reference section in the Red Hat Ceph Storage Configuration Guide.
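For example, to keep two copies of each object and allow writes with a single copy while the cluster is degraded (illustrative values; choose values appropriate for your environment), add the following to the [global] section of the Ceph configuration file:
osd pool default size = 2
osd pool default min size = 1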
After bootstrapping the initial monitor, the cluster has a default CRUSH map. However, the CRUSH map does not have any Ceph OSD daemons mapped to a Ceph node.
To add an OSD to the cluster and update the default CRUSH map, execute the following on each OSD node:
- Enable the Red Hat Ceph Storage 2 OSD repository. For ISO-based installations, see the ISO installation section.
As root, install the ceph-osd package on the Ceph OSD node:
# yum install ceph-osd
Copy the Ceph configuration file and administration keyring file from the initial Monitor node to the OSD node:
Syntax
# scp <user_name>@<monitor_host_name>:<path_on_remote_system> <path_to_local_file>
Example
# scp root@node1:/etc/ceph/ceph.conf /etc/ceph
# scp root@node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph
Generate the Universally Unique Identifier (UUID) for the OSD:
$ uuidgen
b367c360-b364-4b1d-8fc6-09408a9cda7a
As root, create the OSD instance:
Syntax
# ceph osd create <uuid> [<osd_id>]
Example
# ceph osd create b367c360-b364-4b1d-8fc6-09408a9cda7a
0
Note: This command outputs the OSD number identifier needed for subsequent steps.
As root, create the default directory for the new OSD:
Syntax
# mkdir /var/lib/ceph/osd/<cluster_name>-<osd_id>
Example
# mkdir /var/lib/ceph/osd/ceph-0
As root, prepare the drive for use as an OSD, and mount it to the directory you just created. Create partitions for the Ceph data and journal. The journal and the data partitions can be located on the same disk. This example uses a 15 GB disk:
Syntax
# parted <path_to_disk> mklabel gpt
# parted <path_to_disk> mkpart primary 1 10000
# mkfs -t <fstype> <path_to_partition>
# mount -o noatime <path_to_partition> /var/lib/ceph/osd/<cluster_name>-<osd_id>
# echo "<path_to_partition> /var/lib/ceph/osd/<cluster_name>-<osd_id> xfs defaults,noatime 1 2" >> /etc/fstab
Example
# parted /dev/sdb mklabel gpt
# parted /dev/sdb mkpart primary 1 10000
# parted /dev/sdb mkpart primary 10001 15000
# mkfs -t xfs /dev/sdb1
# mount -o noatime /dev/sdb1 /var/lib/ceph/osd/ceph-0
# echo "/dev/sdb1 /var/lib/ceph/osd/ceph-0 xfs defaults,noatime 1 2" >> /etc/fstab
As root, initialize the OSD data directory:
Syntax
# ceph-osd -i <osd_id> --mkfs --mkkey --osd-uuid <uuid>
Example
# ceph-osd -i 0 --mkfs --mkkey --osd-uuid b367c360-b364-4b1d-8fc6-09408a9cda7a
... auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
...
created new key in keyring /var/lib/ceph/osd/ceph-0/keyring
Note: The directory must be empty before you run ceph-osd with the --mkkey option. If you have a custom cluster name, the ceph-osd utility requires the --cluster option.
As root, register the OSD authentication key. If your cluster name differs from ceph, insert your cluster name instead:
Syntax
# ceph auth add osd.<osd_id> osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/<cluster_name>-<osd_id>/keyring
Example
# ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
added key for osd.0
As root, add the OSD node to the CRUSH map:
Syntax
# ceph [--cluster <cluster_name>] osd crush add-bucket <host_name> host
Example
# ceph osd crush add-bucket node2 host
As root, place the OSD node under the default CRUSH tree:
Syntax
# ceph [--cluster <cluster_name>] osd crush move <host_name> root=default
Example
# ceph osd crush move node2 root=default
As root, add the OSD disk to the CRUSH map:
Syntax
# ceph [--cluster <cluster_name>] osd crush add osd.<osd_id> <weight> [<bucket_type>=<bucket-name> ...]
Example
# ceph osd crush add osd.0 1.0 host=node2
add item id 0 name 'osd.0' weight 1 at location {host=node2} to crush map
Note: You can also decompile the CRUSH map and add the OSD to the device list. Add the OSD node as a bucket, then add the device as an item in the OSD node, assign the OSD a weight, recompile the CRUSH map, and set the CRUSH map. For more details, see the Red Hat Ceph Storage Storage Strategies Guide.
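A minimal sketch of that decompile-and-recompile workflow follows; the file names are arbitrary, and the edit step is where you add the host bucket, device item, and weight by hand:
# ceph osd getcrushmap -o crushmap.bin
# crushtool -d crushmap.bin -o crushmap.txt
# vim crushmap.txt            # add the host bucket, device, and weight
# crushtool -c crushmap.txt -o crushmap.new
# ceph osd setcrushmap -i crushmap.new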
As root, update the owner and group permissions on the newly created directory and files:
Syntax
# chown -R <owner>:<group> <path_to_directory>
Example
# chown -R ceph:ceph /var/lib/ceph/osd
# chown -R ceph:ceph /var/log/ceph
# chown -R ceph:ceph /var/run/ceph
# chown -R ceph:ceph /etc/ceph
For storage clusters with custom names, as root, add the following line to the /etc/sysconfig/ceph file:
Syntax
# echo "CLUSTER=<custom_cluster_name>" >> /etc/sysconfig/ceph
Example
# echo "CLUSTER=test123" >> /etc/sysconfig/ceph
The OSD node is in your Ceph storage cluster configuration. However, the OSD daemon is down and in. The new OSD must be up before it can begin receiving data. As root, enable and start the OSD process:
Syntax
# systemctl enable ceph-osd.target
# systemctl enable ceph-osd@<osd_id>
# systemctl start ceph-osd@<osd_id>
Example
# systemctl enable ceph-osd.target
# systemctl enable ceph-osd@0
# systemctl start ceph-osd@0
Once you start the OSD daemon, it is up and in.
Now you have the monitors and some OSDs up and running. You can watch the placement groups peer by executing the following command:
$ ceph -w
To view the OSD tree, execute the following command:
$ ceph osd tree
Example
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 2       root default
-2 2           host node2
 0 1               osd.0   up      1        1
-3 1           host node3
 1 1               osd.1   up      1        1
To expand the storage capacity by adding new OSDs to the storage cluster, see the Red Hat Ceph Storage Administration Guide for more details.
3.2.3. Calamari Server Installation
The Calamari server provides a RESTful API for monitoring Ceph storage clusters.
To install calamari-server, perform the following steps on all Monitor nodes.
- As root, enable the Red Hat Ceph Storage 2 Monitor repository.
As root, install calamari-server:
# yum install calamari-server
As root, initialize the calamari-server:
Syntax
# calamari-ctl clear --yes-i-am-sure
# calamari-ctl initialize --admin-username <uid> --admin-password <pwd> --admin-email <email>
Example
# calamari-ctl clear --yes-i-am-sure
# calamari-ctl initialize --admin-username admin --admin-password admin --admin-email cephadm@example.com
Important: The calamari-ctl clear --yes-i-am-sure command is only necessary for removing the database of old Calamari server installations. Running this command on a new Calamari server results in an error.
Note: During initialization, the calamari-server generates a self-signed certificate and a private key and places them in the /etc/calamari/ssl/certs/ and /etc/calamari/ssl/private directories respectively. Use HTTPS when making requests. Otherwise, user names and passwords are transmitted in clear text.
The calamari-ctl initialize process generates a private key and a self-signed certificate, which means there is no need to purchase a certificate from a Certificate Authority (CA).
To verify access to the HTTPS API through a web browser, go to the following URL. Click through the untrusted certificate warnings, because the auto-generated certificate is self-signed:
https://<calamari_hostname>:8002/api/v2/cluster
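You can also query the same endpoint from the command line with curl. The example below assumes the self-signed certificate (the -k option skips verification) and assumes that the API accepts HTTP basic authentication with the Calamari administrator credentials; adjust the host name and credentials for your environment:
$ curl -k -u admin:admin https://<calamari_hostname>:8002/api/v2/cluster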
To use a key and certificate from a CA:
- Purchase a certificate from a CA. During the process, you will generate a private key and a certificate. Alternatively, you can use the self-signed certificate generated by Calamari.
- Save the private key associated with the certificate to a path, preferably under /etc/calamari/ssl/private/.
- Save the certificate to a path, preferably under /etc/calamari/ssl/certs/.
- Open the /etc/calamari/calamari.conf file.
Under the [calamari_web] section, modify ssl_cert and ssl_key to point to the respective certificate and key paths, for example:
[calamari_web]
...
ssl_cert = /etc/calamari/ssl/certs/calamari-lite-bundled.crt
ssl_key = /etc/calamari/ssl/private/calamari-lite.key
As root, re-initialize Calamari:
# calamari-ctl initialize