Appendix B. Manually Installing Red Hat Ceph Storage
Red Hat does not support or test upgrading manually deployed clusters. Therefore, Red Hat recommends using Ansible to deploy a new cluster with Red Hat Ceph Storage 3. See Chapter 3, Deploying Red Hat Ceph Storage for details.

You can use command-line utilities, such as Yum, to install manually deployed clusters.

All Ceph clusters require at least one monitor, and at least as many OSDs as there are copies of an object stored on the cluster. Red Hat recommends using three monitors for production environments and a minimum of three Object Storage Devices (OSDs).

Installing a Ceph storage cluster by using the command-line interface involves these steps: configuring the Network Time Protocol, bootstrapping the Monitor, installing Ceph Manager, and bootstrapping the OSDs.
B.1. Prerequisites
Configuring the Network Time Protocol for Red Hat Ceph Storage
All Ceph Monitor and OSD nodes require the Network Time Protocol (NTP) to be configured. Ensure that Ceph nodes are NTP peers. NTP helps preempt issues that arise from clock drift.
When using Ansible to deploy a Red Hat Ceph Storage cluster, Ansible automatically installs, configures, and enables NTP.
Prerequisites
- Network access to a valid time source.
Procedure: Configuring the Network Time Protocol for RHCS

Perform the following steps on all RHCS nodes in the storage cluster as the root user.

Install the ntp package:

# yum install ntp

Start and enable the NTP service to be persistent across a reboot:

# systemctl start ntpd
# systemctl enable ntpd

Ensure that NTP is synchronizing clocks properly:

$ ntpq -p
Additional Resources
- The Configuring NTP Using ntpd chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7.
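The ntpq -p check can also be scripted. The following is a minimal sketch, not part of the official procedure, that flags peers whose clock offset exceeds a threshold; it assumes the standard ntpq -p column layout (offset in milliseconds in the ninth column), and the sample output is illustrative:

```python
# Sketch: flag NTP peers whose clock offset exceeds a threshold.
# Assumes the standard `ntpq -p` billboard layout; sample data is
# illustrative, not from a real host.

def peers_over_offset(ntpq_output, max_offset_ms=50.0):
    """Return peer names whose absolute offset exceeds max_offset_ms."""
    flagged = []
    for line in ntpq_output.splitlines()[2:]:  # skip the two header lines
        fields = line.split()
        if len(fields) < 10:
            continue
        peer = fields[0].lstrip('*+-#')  # strip tally code prefix
        offset_ms = float(fields[8])
        if abs(offset_ms) > max_offset_ms:
            flagged.append(peer)
    return flagged

sample = """\
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*clock.example.  .GPS.            1 u   32   64  377    0.421    3.112   0.104
+drift.example.  192.0.2.10       2 u   12   64  377   12.650  120.447   0.893
"""

print(peers_over_offset(sample))
```

A peer drifting by more than the threshold (here, the hypothetical drift.example. at roughly 120 ms) is reported so it can be investigated before it affects the cluster.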
Monitor Bootstrapping
Bootstrapping a Monitor, and by extension a Ceph storage cluster, requires the following data:

- Unique Identifier: The File System Identifier (fsid) is a unique identifier for the cluster. The fsid was originally used when the Ceph storage cluster was principally used for the Ceph file system. Ceph now also supports native interfaces, block devices, and object storage gateway interfaces, so fsid is a bit of a misnomer.

- Cluster Name: Ceph clusters have a cluster name, which is a simple string without spaces. The default cluster name is ceph, but you can specify a different cluster name. Overriding the default cluster name is especially useful when you work with multiple clusters. When you run multiple clusters in a multi-site architecture, the cluster name, for example, us-west or us-east, identifies the cluster for the current command-line session.

  Note: To identify the cluster on the command-line interface, specify the cluster name, which makes Ceph use the configuration file named after it, for example, ceph.conf, us-west.conf, us-east.conf, and so on. Example:

  # ceph --cluster us-west ...

- Monitor Name: Each Monitor instance within a cluster has a unique name. In common practice, the Ceph Monitor name is the node name. Red Hat recommends one Ceph Monitor per node, and not co-locating the Ceph OSD daemons with the Ceph Monitor daemon. To retrieve the short node name, use the hostname -s command.

- Monitor Map: Bootstrapping the initial Monitor requires you to generate a Monitor map. The Monitor map requires:
  - The File System Identifier (fsid)
  - The cluster name, or the default cluster name of ceph
  - At least one host name and its IP address

- Monitor Keyring: Monitors communicate with each other by using a secret key. You must generate a keyring with a Monitor secret key and provide it when bootstrapping the initial Monitor.

- Administrator Keyring: To use the ceph command-line interface utilities, create the client.admin user and generate its keyring. Also, you must add the client.admin user to the Monitor keyring.
The foregoing requirements do not imply the creation of a Ceph configuration file. However, as a best practice, Red Hat recommends creating a Ceph configuration file and populating it with at least the fsid, mon initial members, and mon host settings.

You can get and set all of the Monitor settings at runtime as well. However, the Ceph configuration file might contain only those settings which override the default values. When you add settings to a Ceph configuration file, these settings override the default settings. Maintaining those settings in a Ceph configuration file makes it easier to maintain the cluster.
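The minimal [global] section described above can also be assembled programmatically. The following is a small sketch of what the echo(1) steps in this procedure build by hand; the host name and IP address are illustrative placeholders:

```python
# Sketch: assemble the minimal [global] section that the bootstrap
# procedure builds with echo(1). Host name and IP are placeholders.
import uuid

def minimal_ceph_conf(mon_name, mon_ip, fsid=None):
    """Return minimal ceph.conf contents: fsid, mon initial members, mon host."""
    fsid = fsid or str(uuid.uuid4())  # same role as `uuidgen` in the procedure
    lines = [
        "[global]",
        "fsid = {}".format(fsid),
        "mon initial members = {}".format(mon_name),
        "mon host = {}".format(mon_ip),
    ]
    return "\n".join(lines) + "\n"

print(minimal_ceph_conf("node1", "192.168.0.120"))
```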
To bootstrap the initial Monitor, perform the following steps:
Enable the Red Hat Ceph Storage 3 Monitor repository:

[root@monitor ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-mon-els-rpms

On your initial Monitor node, install the ceph-mon package as root:

# yum install ceph-mon
As root, create a Ceph configuration file in the /etc/ceph/ directory. By default, Ceph uses ceph.conf, where ceph reflects the cluster name:

Syntax

# touch /etc/ceph/<cluster_name>.conf

Example

# touch /etc/ceph/ceph.conf
As root, generate the unique identifier for your cluster and add the unique identifier to the [global] section of the Ceph configuration file:

Syntax

# echo "[global]" > /etc/ceph/<cluster_name>.conf
# echo "fsid = `uuidgen`" >> /etc/ceph/<cluster_name>.conf

Example

# echo "[global]" > /etc/ceph/ceph.conf
# echo "fsid = `uuidgen`" >> /etc/ceph/ceph.conf

View the current Ceph configuration file:

$ cat /etc/ceph/ceph.conf
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
As root, add the initial Monitor to the Ceph configuration file:

Syntax

# echo "mon initial members = <monitor_host_name>[,<monitor_host_name>]" >> /etc/ceph/<cluster_name>.conf

Example

# echo "mon initial members = node1" >> /etc/ceph/ceph.conf

As root, add the IP address of the initial Monitor to the Ceph configuration file:

Syntax

# echo "mon host = <ip-address>[,<ip-address>]" >> /etc/ceph/<cluster_name>.conf

Example

# echo "mon host = 192.168.0.120" >> /etc/ceph/ceph.conf
Note: To use IPv6 addresses, set the ms bind ipv6 option to true. For details, see the Bind section in the Configuration Guide for Red Hat Ceph Storage 3.

As root, create the keyring for the cluster and generate the Monitor secret key:

Syntax

# ceph-authtool --create-keyring /tmp/<cluster_name>.mon.keyring --gen-key -n mon. --cap mon '<capabilities>'

Example

# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /tmp/ceph.mon.keyring
As root, generate an administrator keyring, generate a client.admin user, and add the user to the keyring:

Syntax

# ceph-authtool --create-keyring /etc/ceph/<cluster_name>.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon '<capabilities>' --cap osd '<capabilities>' --cap mds '<capabilities>'

Example

# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
creating /etc/ceph/ceph.client.admin.keyring

As root, add the <cluster_name>.client.admin.keyring key to the <cluster_name>.mon.keyring:

Syntax

# ceph-authtool /tmp/<cluster_name>.mon.keyring --import-keyring /etc/ceph/<cluster_name>.client.admin.keyring

Example

# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring
Generate the Monitor map by using the node name, IP address, and fsid of the initial Monitor, and save it as /tmp/monmap:

Syntax

$ monmaptool --create --add <monitor_host_name> <ip-address> --fsid <uuid> /tmp/monmap

Example

$ monmaptool --create --add node1 192.168.0.120 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
monmaptool: monmap file /tmp/monmap
monmaptool: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
As root on the initial Monitor node, create a default data directory:

Syntax

# mkdir /var/lib/ceph/mon/<cluster_name>-<monitor_host_name>

Example

# mkdir /var/lib/ceph/mon/ceph-node1

As root, populate the initial Monitor daemon with the Monitor map and keyring:

Syntax

# ceph-mon [--cluster <cluster_name>] --mkfs -i <monitor_host_name> --monmap /tmp/monmap --keyring /tmp/<cluster_name>.mon.keyring

Example

# ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ceph-mon: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1
View the current Ceph configuration file:

# cat /etc/ceph/ceph.conf
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon_initial_members = node1
mon_host = 192.168.0.120

For more details on the various Ceph configuration settings, see the Configuration Guide for Red Hat Ceph Storage 3.

As root, create the done file:

Syntax

# touch /var/lib/ceph/mon/<cluster_name>-<monitor_host_name>/done

Example

# touch /var/lib/ceph/mon/ceph-node1/done
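Beyond the minimal bootstrap settings, a Ceph configuration file often carries a few more common [global] options. The fragment below is illustrative only, with placeholder values; consult the Configuration Guide for Red Hat Ceph Storage 3 for the authoritative option names and defaults:

```ini
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = node1
mon host = 192.168.0.120
public network = 192.168.0.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd pool default size = 3
osd pool default min size = 2
```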
As root, update the owner and group permissions on the newly created directory and files:

Syntax

# chown -R <owner>:<group> <path_to_directory>

Example

# chown -R ceph:ceph /var/lib/ceph/mon
# chown -R ceph:ceph /var/log/ceph
# chown -R ceph:ceph /var/run/ceph
# chown -R ceph:ceph /etc/ceph

Note: If the Ceph Monitor node is co-located with an OpenStack Controller node, then the Glance and Cinder keyring files must be owned by glance and cinder respectively. For example:

# ls -l /etc/ceph/
...
-rw-------. 1 glance glance 64 <date> ceph.client.glance.keyring
-rw-------. 1 cinder cinder 64 <date> ceph.client.cinder.keyring
...
For storage clusters with custom names, as root, add the following line:

Syntax

# echo "CLUSTER=<custom_cluster_name>" >> /etc/sysconfig/ceph

Example

# echo "CLUSTER=test123" >> /etc/sysconfig/ceph
As root, start and enable the ceph-mon process on the initial Monitor node:

Syntax

# systemctl enable ceph-mon.target
# systemctl enable ceph-mon@<monitor_host_name>
# systemctl start ceph-mon@<monitor_host_name>

Example

# systemctl enable ceph-mon.target
# systemctl enable ceph-mon@node1
# systemctl start ceph-mon@node1

As root, verify that the monitor daemon is running:

Syntax

# systemctl status ceph-mon@<monitor_host_name>
To add more Red Hat Ceph Storage Monitors to the storage cluster, see the Adding a Monitor section in the Administration Guide for Red Hat Ceph Storage 3.
B.2. Manually installing Ceph Manager
Usually, the Ansible automation utility installs the Ceph Manager daemon (ceph-mgr) when you deploy the Red Hat Ceph Storage cluster. However, if you do not use Ansible to manage Red Hat Ceph Storage, you can install Ceph Manager manually. Red Hat recommends colocating the Ceph Manager and Ceph Monitor daemons on the same node.
Prerequisites

- A working Red Hat Ceph Storage cluster
- root or sudo access
- The rhel-7-server-rhceph-3-mon-els-rpms repository enabled
- Ports 6800-7300 open on the public network if a firewall is used

Procedure

Use the following commands on the node where ceph-mgr will be deployed, as the root user or with the sudo utility.
Install the ceph-mgr package:

[root@node1 ~]# yum install ceph-mgr

Create the /var/lib/ceph/mgr/ceph-hostname/ directory:

mkdir /var/lib/ceph/mgr/ceph-hostname

Replace hostname with the host name of the node where the ceph-mgr daemon will be deployed, for example:

[root@node1 ~]# mkdir /var/lib/ceph/mgr/ceph-node1

In the newly created directory, create an authentication key for the ceph-mgr daemon:

[root@node1 ~]# ceph auth get-or-create mgr.`hostname -s` mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-node1/keyring

Change the owner and group of the /var/lib/ceph/mgr/ directory to ceph:ceph:

[root@node1 ~]# chown -R ceph:ceph /var/lib/ceph/mgr

Enable the ceph-mgr target:

[root@node1 ~]# systemctl enable ceph-mgr.target

Enable and start the ceph-mgr instance:

systemctl enable ceph-mgr@hostname
systemctl start ceph-mgr@hostname

Replace hostname with the host name of the node where the ceph-mgr daemon will be deployed, for example:

[root@node1 ~]# systemctl enable ceph-mgr@node1
[root@node1 ~]# systemctl start ceph-mgr@node1

Verify that the ceph-mgr daemon started successfully:

ceph -s

The output will include a line similar to the following one under the services: section:

mgr: node1(active)

Install more ceph-mgr daemons to serve as standby daemons that become active if the currently active daemon fails.
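Checks against the mgr: status line can also be scripted. The following is a minimal sketch, assuming the status line format shown above (the exact format can vary between Ceph versions, so treat the parsing as illustrative):

```python
# Sketch: parse a `ceph -s` mgr status line such as
#   mgr: node1(active), standbys: node2, node3
# The format assumption is illustrative; it varies between Ceph versions.
import re

def parse_mgr_line(line):
    """Return (active_mgr, [standby_mgrs]) from a mgr status line."""
    m = re.match(r"mgr:\s*(\S+)\(active\)(?:,\s*standbys:\s*(.*))?", line.strip())
    if not m:
        return None, []
    standbys = [s.strip() for s in m.group(2).split(",")] if m.group(2) else []
    return m.group(1), standbys

print(parse_mgr_line("mgr: node1(active), standbys: node2, node3"))
```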
OSD Bootstrapping
Once you have your initial monitor running, you can start adding the Object Storage Devices (OSDs). Your cluster cannot reach an active + clean state until you have enough OSDs to handle the number of copies of an object.

The default number of copies for an object is three, so you need a minimum of three OSD nodes. However, if you want only two copies of an object, and therefore only two OSD nodes, update the osd pool default size and osd pool default min size settings in the Ceph configuration file.

For more details, see the OSD Configuration Reference section in the Configuration Guide for Red Hat Ceph Storage 3.
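For example, to keep two copies of each object, the two settings named above could be placed in the [global] section of the Ceph configuration file. The values shown are illustrative, not a recommendation; see the OSD Configuration Reference for the authoritative defaults:

```ini
[global]
osd pool default size = 2
osd pool default min size = 1
```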
After bootstrapping the initial monitor, the cluster has a default CRUSH map. However, the CRUSH map does not have any Ceph OSD daemons mapped to a Ceph node.

To add an OSD to the cluster and update the default CRUSH map, execute the following on each OSD node:
Enable the Red Hat Ceph Storage 3 OSD repository:

[root@osd ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-osd-els-rpms

As root, install the ceph-osd package on the Ceph OSD node:

# yum install ceph-osd
Copy the Ceph configuration file and administration keyring file from the initial Monitor node to the OSD node:

Syntax

# scp <user_name>@<monitor_host_name>:<path_on_remote_system> <path_to_local_file>

Example

# scp root@node1:/etc/ceph/ceph.conf /etc/ceph
# scp root@node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph

Generate the Universally Unique Identifier (UUID) for the OSD:

$ uuidgen
b367c360-b364-4b1d-8fc6-09408a9cda7a
As root, create the OSD instance:

Syntax

# ceph osd create <uuid> [<osd_id>]

Example

# ceph osd create b367c360-b364-4b1d-8fc6-09408a9cda7a
0

Note: This command outputs the OSD number identifier needed for subsequent steps.

As root, create the default directory for the new OSD:

Syntax

# mkdir /var/lib/ceph/osd/<cluster_name>-<osd_id>

Example

# mkdir /var/lib/ceph/osd/ceph-0
As root, prepare the drive for use as an OSD, and mount it to the directory you just created. Create a partition for the Ceph data and journal. The journal and the data partitions can be located on the same disk. This example uses a 15 GB disk:

Syntax

# parted <path_to_disk> mklabel gpt
# parted <path_to_disk> mkpart primary 1 10000
# mkfs -t <fstype> <path_to_partition>
# mount -o noatime <path_to_partition> /var/lib/ceph/osd/<cluster_name>-<osd_id>
# echo "<path_to_partition> /var/lib/ceph/osd/<cluster_name>-<osd_id> xfs defaults,noatime 1 2" >> /etc/fstab

Example (assuming the data disk is /dev/sdb and an XFS file system):

# parted /dev/sdb mklabel gpt
# parted /dev/sdb mkpart primary 1 10000
# mkfs -t xfs /dev/sdb1
# mount -o noatime /dev/sdb1 /var/lib/ceph/osd/ceph-0
# echo "/dev/sdb1 /var/lib/ceph/osd/ceph-0 xfs defaults,noatime 1 2" >> /etc/fstab
As root, initialize the OSD data directory:

Syntax

# ceph-osd -i <osd_id> --mkfs --mkkey --osd-uuid <uuid>

Example

# ceph-osd -i 0 --mkfs --mkkey --osd-uuid b367c360-b364-4b1d-8fc6-09408a9cda7a
...
auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
...
created new key in keyring /var/lib/ceph/osd/ceph-0/keyring

Note: The directory must be empty before you run ceph-osd with the --mkkey option. If you have a custom cluster name, the ceph-osd utility requires the --cluster option.
As root, register the OSD authentication key. If your cluster name differs from ceph, insert your cluster name instead:

Syntax

# ceph auth add osd.<osd_id> osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/<cluster_name>-<osd_id>/keyring

Example

# ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
added key for osd.0
As root, add the OSD node to the CRUSH map:

Syntax

# ceph [--cluster <cluster_name>] osd crush add-bucket <host_name> host

Example

# ceph osd crush add-bucket node2 host
As root, place the OSD node under the default CRUSH tree:

Syntax

# ceph [--cluster <cluster_name>] osd crush move <host_name> root=default

Example

# ceph osd crush move node2 root=default
As root, add the OSD disk to the CRUSH map:

Syntax

# ceph [--cluster <cluster_name>] osd crush add osd.<osd_id> <weight> [<bucket_type>=<bucket-name> ...]

Example

# ceph osd crush add osd.0 1.0 host=node2
add item id 0 name 'osd.0' weight 1 at location {host=node2} to crush map

Note: You can also decompile the CRUSH map and add the OSD to the device list. Add the OSD node as a bucket, then add the device as an item in the OSD node, assign the OSD a weight, recompile the CRUSH map, and set the CRUSH map. For more details, see the Editing a CRUSH map section in the Storage Strategies Guide for Red Hat Ceph Storage 3.
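CRUSH weights are conventionally sized in proportion to capacity, with a weight of 1.0 corresponding to roughly 1 TB of storage. The following helper is an illustrative sketch of that convention, not part of the documented procedure:

```python
# Sketch: derive a CRUSH weight from raw disk capacity using the
# conventional 1.0-per-terabyte rule. Illustrative helper only.
def crush_weight(size_bytes):
    """Return a CRUSH weight where 1.0 represents 1 TB (10**12 bytes)."""
    return round(size_bytes / 10**12, 4)

# A 4 TB data disk under this convention:
print(crush_weight(4 * 10**12))
```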
As root, update the owner and group permissions on the newly created directory and files:

Syntax

# chown -R <owner>:<group> <path_to_directory>

Example

# chown -R ceph:ceph /var/lib/ceph/osd
# chown -R ceph:ceph /var/log/ceph
# chown -R ceph:ceph /var/run/ceph
# chown -R ceph:ceph /etc/ceph
For storage clusters with custom names, as root, add the following line to the /etc/sysconfig/ceph file:

Syntax

# echo "CLUSTER=<custom_cluster_name>" >> /etc/sysconfig/ceph

Example

# echo "CLUSTER=test123" >> /etc/sysconfig/ceph
The OSD node is now in your Ceph storage cluster configuration. However, the OSD daemon is down and in. The new OSD must be up before it can begin receiving data. As root, enable and start the OSD process:

Syntax

# systemctl enable ceph-osd.target
# systemctl enable ceph-osd@<osd_id>
# systemctl start ceph-osd@<osd_id>

Example

# systemctl enable ceph-osd.target
# systemctl enable ceph-osd@0
# systemctl start ceph-osd@0

Once you start the OSD daemon, it is up and in.
Now you have the monitors and some OSDs up and running. You can watch the placement groups peer by executing the following command:

$ ceph -w

To view the OSD tree, execute the following command:

$ ceph osd tree

To expand the storage capacity by adding new OSDs to the storage cluster, see the Adding an OSD section in the Administration Guide for Red Hat Ceph Storage 3.