Appendix B. Using the command-line interface to install the Ceph software
As a storage administrator, you can choose to manually install various components of the Red Hat Ceph Storage software.
B.1. Installing the Ceph Command Line Interface
				The Ceph command-line interface (CLI) enables administrators to execute Ceph administrative commands. The CLI is provided by the ceph-common package and includes the following utilities:
			
- ceph
- ceph-authtool
- ceph-dencoder
- rados
Prerequisites
- A running Ceph storage cluster, preferably in the active + clean state.
Procedure
- On the client node, enable the Red Hat Ceph Storage 4 Tools repository:

  [root@gateway ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
- On the client node, install the ceph-common package:

  # yum install ceph-common
- From the initial monitor node, copy the Ceph configuration file, in this case ceph.conf, and the administration keyring to the client node:

  Syntax

  # scp /etc/ceph/ceph.conf <user_name>@<client_host_name>:/etc/ceph/
  # scp /etc/ceph/ceph.client.admin.keyring <user_name>@<client_host_name>:/etc/ceph/

  Example

  # scp /etc/ceph/ceph.conf root@node1:/etc/ceph/
  # scp /etc/ceph/ceph.client.admin.keyring root@node1:/etc/ceph/

  Replace <client_host_name> with the host name of the client node.
B.2. Manually Installing Red Hat Ceph Storage
Red Hat does not support or test upgrading manually deployed clusters. Therefore, Red Hat recommends using Ansible to deploy a new cluster with Red Hat Ceph Storage 4. See Chapter 5, Installing Red Hat Ceph Storage using Ansible for details.
You can use command-line utilities, such as Yum, to upgrade manually deployed clusters, but Red Hat does not support or test this approach.
All Ceph clusters require at least one monitor, and at least as many OSDs as copies of an object stored on the cluster. Red Hat recommends using three monitors and a minimum of three Object Storage Devices (OSDs) for production environments.
Bootstrapping the initial monitor is the first step in deploying a Ceph storage cluster. Ceph monitor deployment also sets important criteria for the entire cluster, such as:
- The number of replicas for pools
- The number of placement groups per OSD
- The heartbeat intervals
- Any authentication requirement
Most of these values are set by default, so it is useful to know about them when setting up the cluster for production.
Installing a Ceph storage cluster by using the command line interface involves these steps:
- Bootstrapping the initial Monitor node
- Adding an Object Storage Device (OSD) node
Monitor Bootstrapping
Bootstrapping a Monitor, and by extension a Ceph storage cluster, requires the following data:
- Unique Identifier
- The File System Identifier (fsid) is a unique identifier for the cluster. The fsid was originally used when the Ceph storage cluster was principally used for the Ceph file system. Ceph now supports native interfaces, block devices, and object storage gateway interfaces too, so fsid is a bit of a misnomer.
- Monitor Name
- Each Monitor instance within a cluster has a unique name. In common practice, the Ceph Monitor name is the node name. Red Hat recommends one Ceph Monitor per node, and recommends against co-locating the Ceph OSD daemons with the Ceph Monitor daemon. To retrieve the short node name, use the hostname -s command.
- Monitor Map
- Bootstrapping the initial Monitor requires you to generate a Monitor map. The Monitor map requires:
  - The File System Identifier (fsid)
  - The cluster name, or the default cluster name of ceph
  - At least one host name and its IP address
- Monitor Keyring
- Monitors communicate with each other by using a secret key. You must generate a keyring with a Monitor secret key and provide it when bootstrapping the initial Monitor.
- Administrator Keyring
- To use the ceph command-line interface utilities, create the client.admin user and generate its keyring. Also, you must add the client.admin user to the Monitor keyring.
				The foregoing requirements do not imply the creation of a Ceph configuration file. However, as a best practice, Red Hat recommends creating a Ceph configuration file and populating it with the fsid, the mon initial members and the mon host settings at a minimum.
			
You can also get and set all of the Monitor settings at runtime. However, the Ceph configuration file might contain only those settings that override the default values. When you add settings to a Ceph configuration file, these settings override the defaults. Maintaining those settings in a Ceph configuration file makes it easier to maintain the cluster.
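As a concrete sketch of the recommended minimal file, the following commands assemble one in a temporary location and check its contents. The node name node1 and address 192.168.0.120 are placeholder values; a real deployment writes /etc/ceph/ceph.conf as root, as shown in the procedure that follows.

```shell
# Sketch: build a minimal ceph.conf with the recommended settings.
# node1 and 192.168.0.120 are placeholders; adjust for your cluster.
conf=$(mktemp)
fsid=$(cat /proc/sys/kernel/random/uuid)   # same effect as `uuidgen`
cat > "$conf" <<EOF
[global]
fsid = $fsid
mon initial members = node1
mon host = 192.168.0.120
EOF
# Confirm the three recommended settings are present.
grep -c -E '^(fsid|mon initial members|mon host)' "$conf"
```

The same file can later be extended with pool, journal, or network settings as needed.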
To bootstrap the initial Monitor, perform the following steps:
- Enable the Red Hat Ceph Storage 4 Monitor repository:

  [root@monitor ~]# subscription-manager repos --enable=rhceph-4-mon-for-rhel-8-x86_64-rpms
- On your initial Monitor node, install the ceph-mon package as root:

  # yum install ceph-mon
- As root, create a Ceph configuration file in the /etc/ceph/ directory:

  # touch /etc/ceph/ceph.conf
- As root, generate the unique identifier for your cluster and add the unique identifier to the [global] section of the Ceph configuration file:

  # echo "[global]" > /etc/ceph/ceph.conf
  # echo "fsid = `uuidgen`" >> /etc/ceph/ceph.conf
- View the current Ceph configuration file:

  $ cat /etc/ceph/ceph.conf
  [global]
  fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
- As root, add the initial Monitor to the Ceph configuration file:

  Syntax

  # echo "mon initial members = <monitor_host_name>[,<monitor_host_name>]" >> /etc/ceph/ceph.conf

  Example

  # echo "mon initial members = node1" >> /etc/ceph/ceph.conf
- As root, add the IP address of the initial Monitor to the Ceph configuration file:

  Syntax

  # echo "mon host = <ip-address>[,<ip-address>]" >> /etc/ceph/ceph.conf

  Example

  # echo "mon host = 192.168.0.120" >> /etc/ceph/ceph.conf

  Note: To use IPv6 addresses, set the ms bind ipv6 option to true. For details, see the Bind section in the Configuration Guide for Red Hat Ceph Storage 4.
- As root, create the keyring for the cluster and generate the Monitor secret key:

  # ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
  creating /tmp/ceph.mon.keyring
- As root, generate an administrator keyring, generate the client.admin user, and add the user to the keyring:

  Syntax

  # ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon '<capabilities>' --cap osd '<capabilities>' --cap mds '<capabilities>'

  Example

  # ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
  creating /etc/ceph/ceph.client.admin.keyring
- As root, add the ceph.client.admin.keyring key to the ceph.mon.keyring:

  # ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
  importing contents of /etc/ceph/ceph.client.admin.keyring into /tmp/ceph.mon.keyring
- Generate the Monitor map. Specify the node name, IP address, and fsid of the initial Monitor, and save the map as /tmp/monmap:

  Syntax

  $ monmaptool --create --add <monitor_host_name> <ip-address> --fsid <uuid> /tmp/monmap

  Example

  $ monmaptool --create --add node1 192.168.0.120 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
  monmaptool: monmap file /tmp/monmap
  monmaptool: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
  monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
- As root on the initial Monitor node, create a default data directory:

  Syntax

  # mkdir /var/lib/ceph/mon/ceph-<monitor_host_name>

  Example

  # mkdir /var/lib/ceph/mon/ceph-node1
- As root, populate the initial Monitor daemon with the Monitor map and keyring:

  Syntax

  # ceph-mon --mkfs -i <monitor_host_name> --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

  Example

  # ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
  ceph-mon: set fsid to a7f64266-0894-4f1e-a635-d0aeaca0e993
  ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1
- View the current Ceph configuration file:

  # cat /etc/ceph/ceph.conf
  [global]
  fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
  mon_initial_members = node1
  mon_host = 192.168.0.120

  For more details on the various Ceph configuration settings, see the Configuration Guide for Red Hat Ceph Storage 4.
- As root, create the done file:

  Syntax

  # touch /var/lib/ceph/mon/ceph-<monitor_host_name>/done

  Example

  # touch /var/lib/ceph/mon/ceph-node1/done
- As root, update the owner and group permissions on the newly created directory and files:

  Syntax

  # chown -R <owner>:<group> <path_to_directory>

  Note: If the Ceph Monitor node is co-located with an OpenStack Controller node, then the Glance and Cinder keyring files must be owned by glance and cinder respectively. For example:

  # ls -l /etc/ceph/
  ...
  -rw-------. 1 glance glance 64 <date> ceph.client.glance.keyring
  -rw-------. 1 cinder cinder 64 <date> ceph.client.cinder.keyring
  ...
- As root, start and enable the ceph-mon process on the initial Monitor node:

  Syntax

  # systemctl enable ceph-mon.target
  # systemctl enable ceph-mon@<monitor_host_name>
  # systemctl start ceph-mon@<monitor_host_name>

  Example

  # systemctl enable ceph-mon.target
  # systemctl enable ceph-mon@node1
  # systemctl start ceph-mon@node1
- As root, verify the Monitor daemon is running:

  Syntax

  # systemctl status ceph-mon@<monitor_host_name>
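The keyring, monmap, and daemon initialization steps above can be condensed into the following dry-run sketch. The run helper only prints each command rather than executing it, because the real commands need root and a prepared node; node1, the IP address, and the fsid are the placeholder values from the examples.

```shell
# Dry run: print the Monitor bootstrap command sequence without executing it.
run() { echo "+ $*"; }   # drop this wrapper to run the commands for real
host=node1
ip=192.168.0.120
fsid=a7f64266-0894-4f1e-a635-d0aeaca0e993
run ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
run ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
run monmaptool --create --add "$host" "$ip" --fsid "$fsid" /tmp/monmap
run ceph-mon --mkfs -i "$host" --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
run touch /var/lib/ceph/mon/ceph-"$host"/done
run systemctl enable ceph-mon@"$host"
run systemctl start ceph-mon@"$host"
```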
To add more Red Hat Ceph Storage Monitors to the storage cluster, see the Adding a Monitor section in the Administration Guide for Red Hat Ceph Storage 4.
OSD Bootstrapping
				Once you have your initial monitor running, you can start adding the Object Storage Devices (OSDs). Your cluster cannot reach an active + clean state until you have enough OSDs to handle the number of copies of an object.
			
The default number of copies for an object is three, so you need a minimum of three OSD nodes. However, if you want only two copies of an object, and therefore only two OSD nodes, update the osd pool default size and osd pool default min size settings in the Ceph configuration file.
			
For more details, see the OSD Configuration Reference section in the Configuration Guide for Red Hat Ceph Storage 4.
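For example, a two-copy cluster might carry a fragment like the following in the [global] section of ceph.conf (the values shown are illustrative, not defaults):

```ini
[global]
# Illustrative two-copy settings; the default number of copies is three.
osd pool default size = 2
osd pool default min size = 1
```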
After bootstrapping the initial monitor, the cluster has a default CRUSH map. However, the CRUSH map does not have any Ceph OSD daemons mapped to a Ceph node.
To add an OSD to the cluster and update the default CRUSH map, execute the following on each OSD node:
- Enable the Red Hat Ceph Storage 4 OSD repository:

  [root@osd ~]# subscription-manager repos --enable=rhceph-4-osd-for-rhel-8-x86_64-rpms
- As root, install the ceph-osd package on the Ceph OSD node:

  # yum install ceph-osd
- Copy the Ceph configuration file and administration keyring file from the initial Monitor node to the OSD node:

  Syntax

  # scp <user_name>@<monitor_host_name>:<path_on_remote_system> <path_to_local_file>

  Example

  # scp root@node1:/etc/ceph/ceph.conf /etc/ceph
  # scp root@node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph
- Generate the Universally Unique Identifier (UUID) for the OSD:

  $ uuidgen
  b367c360-b364-4b1d-8fc6-09408a9cda7a
- As root, create the OSD instance:

  Syntax

  # ceph osd create <uuid> [<osd_id>]

  Example

  # ceph osd create b367c360-b364-4b1d-8fc6-09408a9cda7a
  0

  Note: This command outputs the OSD number identifier needed for subsequent steps.
- As root, create the default directory for the new OSD:

  Syntax

  # mkdir /var/lib/ceph/osd/ceph-<osd_id>

  Example

  # mkdir /var/lib/ceph/osd/ceph-0
- As root, prepare the drive for use as an OSD, and mount it to the directory you just created. Create a partition for the Ceph data and journal. The journal and the data partitions can be located on the same disk. This example uses a 15 GB disk:

  Syntax

  # parted <path_to_disk> mklabel gpt
  # parted <path_to_disk> mkpart primary 1 10000
  # mkfs -t <fstype> <path_to_partition>
  # mount -o noatime <path_to_partition> /var/lib/ceph/osd/ceph-<osd_id>
  # echo "<path_to_partition> /var/lib/ceph/osd/ceph-<osd_id> xfs defaults,noatime 1 2" >> /etc/fstab
- As root, initialize the OSD data directory:

  Syntax

  # ceph-osd -i <osd_id> --mkfs --mkkey --osd-uuid <uuid>

  Example

  # ceph-osd -i 0 --mkfs --mkkey --osd-uuid b367c360-b364-4b1d-8fc6-09408a9cda7a
  ...
  auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
  ...
  created new key in keyring /var/lib/ceph/osd/ceph-0/keyring
- As root, register the OSD authentication key:

  Syntax

  # ceph auth add osd.<osd_id> osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-<osd_id>/keyring

  Example

  # ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
  added key for osd.0
- As root, add the OSD node to the CRUSH map:

  Syntax

  # ceph osd crush add-bucket <host_name> host

  Example

  # ceph osd crush add-bucket node2 host
- As root, place the OSD node under the default CRUSH tree:

  Syntax

  # ceph osd crush move <host_name> root=default

  Example

  # ceph osd crush move node2 root=default
- As root, add the OSD disk to the CRUSH map:

  Syntax

  # ceph osd crush add osd.<osd_id> <weight> [<bucket_type>=<bucket-name> ...]

  Example

  # ceph osd crush add osd.0 1.0 host=node2
  add item id 0 name 'osd.0' weight 1 at location {host=node2} to crush map

  Note: You can also decompile the CRUSH map and add the OSD to the device list. Add the OSD node as a bucket, then add the device as an item in the OSD node, assign the OSD a weight, recompile the CRUSH map, and set the CRUSH map. For more details, see the Editing a CRUSH map section in the Storage Strategies Guide for Red Hat Ceph Storage 4.
- As root, update the owner and group permissions on the newly created directory and files:

  Syntax

  # chown -R <owner>:<group> <path_to_directory>

  Example

  # chown -R ceph:ceph /var/lib/ceph/osd
  # chown -R ceph:ceph /var/log/ceph
  # chown -R ceph:ceph /var/run/ceph
  # chown -R ceph:ceph /etc/ceph
- The OSD node is in your Ceph storage cluster configuration. However, the OSD daemon is down and in. The new OSD must be up before it can begin receiving data. As root, enable and start the OSD process:

  Syntax

  # systemctl enable ceph-osd.target
  # systemctl enable ceph-osd@<osd_id>
  # systemctl start ceph-osd@<osd_id>

  Example

  # systemctl enable ceph-osd.target
  # systemctl enable ceph-osd@0
  # systemctl start ceph-osd@0

  Once you start the OSD daemon, it is up and in.
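The OSD number returned by ceph osd create threads through every step above. The following illustrative helper (osd_names, not part of Ceph) makes that naming convention explicit:

```shell
# Illustrative helper: given an OSD id (as returned by `ceph osd create`),
# print the data directory, auth entity, and systemd unit derived from it.
osd_names() {
    local id=$1
    printf 'data_dir=/var/lib/ceph/osd/ceph-%s auth=osd.%s unit=ceph-osd@%s\n' \
        "$id" "$id" "$id"
}
osd_names 0   # → data_dir=/var/lib/ceph/osd/ceph-0 auth=osd.0 unit=ceph-osd@0
```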
Now you have the monitors and some OSDs up and running. You can watch the placement groups peer by executing the following command:

$ ceph -w

To view the OSD tree, execute the following command:

$ ceph osd tree
To expand the storage capacity by adding new OSDs to the storage cluster, see the Adding an OSD section in the Administration Guide for Red Hat Ceph Storage 4.
B.3. Manually installing Ceph Manager
Usually, the Ansible automation utility installs the Ceph Manager daemon (ceph-mgr) when you deploy the Red Hat Ceph Storage cluster. However, if you do not use Ansible to manage Red Hat Ceph Storage, you can install Ceph Manager manually. Red Hat recommends colocating the Ceph Manager and Ceph Monitor daemons on the same node.
			
Prerequisites
- A working Red Hat Ceph Storage cluster
- root or sudo access
- The rhceph-4-mon-for-rhel-8-x86_64-rpms repository enabled
- Ports 6800-7300 open on the public network, if a firewall is used
Procedure
Run the following commands on the node where ceph-mgr will be deployed, as the root user or with the sudo utility.
				
- Install the ceph-mgr package:

  [root@node1 ~]# yum install ceph-mgr
- Create the /var/lib/ceph/mgr/ceph-hostname/ directory:

  # mkdir /var/lib/ceph/mgr/ceph-hostname

  Replace hostname with the host name of the node where the ceph-mgr daemon will be deployed, for example:

  [root@node1 ~]# mkdir /var/lib/ceph/mgr/ceph-node1
- In the newly created directory, create an authentication key for the ceph-mgr daemon:

  [root@node1 ~]# ceph auth get-or-create mgr.`hostname -s` mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-node1/keyring
- Change the owner and group of the /var/lib/ceph/mgr/ directory to ceph:ceph:

  [root@node1 ~]# chown -R ceph:ceph /var/lib/ceph/mgr
- Enable the ceph-mgr target:

  [root@node1 ~]# systemctl enable ceph-mgr.target
- Enable and start the ceph-mgr instance:

  # systemctl enable ceph-mgr@hostname
  # systemctl start ceph-mgr@hostname

  Replace hostname with the host name of the node where ceph-mgr will be deployed, for example:

  [root@node1 ~]# systemctl enable ceph-mgr@node1
  [root@node1 ~]# systemctl start ceph-mgr@node1
- Verify that the ceph-mgr daemon started successfully:

  # ceph -s

  The output includes a line similar to the following one under the services: section:

  mgr: node1(active)
- Install more ceph-mgr daemons to serve as standby daemons that become active if the current active daemon fails.
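The verification step can also be scripted by extracting the active manager name from the status output. The sample_status text below is a stand-in for real ceph -s output, which requires a live cluster:

```shell
# Illustrative: parse the active mgr name out of (sample) `ceph -s` output.
sample_status='  services:
    mon: 1 daemons, quorum node1
    mgr: node1(active)'
echo "$sample_status" | sed -n 's/.*mgr: \([^(]*\)(active).*/\1/p'   # → node1
```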
B.4. Manually Installing Ceph Block Device
The following procedure shows how to install and mount a thin-provisioned, resizable Ceph Block Device.
Ceph Block Devices must be deployed on separate nodes from the Ceph Monitor and OSD nodes. Running kernel clients and kernel server daemons on the same node can lead to kernel deadlocks.
Prerequisites
- Perform the tasks listed in Section B.1, “Installing the Ceph Command Line Interface”.
- If you use Ceph Block Devices as a back end for virtual machines (VMs) that use QEMU, increase the default file descriptor limit. See the Ceph - VM hangs when transferring large amounts of data to RBD disk Knowledgebase article for details.
Procedure
- Create a Ceph Block Device user named client.rbd with full permissions to files on OSD nodes (osd 'allow rwx') and output the result to a keyring file:

  Syntax

  # ceph auth get-or-create client.rbd mon 'profile rbd' osd 'profile rbd pool=<pool_name>' \
    -o /etc/ceph/rbd.keyring

  Replace <pool_name> with the name of the pool that you want to allow client.rbd to have access to, for example rbd:

  # ceph auth get-or-create \
    client.rbd mon 'allow r' osd 'allow rwx pool=rbd' \
    -o /etc/ceph/rbd.keyring

  See the User Management section in the Red Hat Ceph Storage 4 Administration Guide for more information about creating users.
- Create a block device image:

  Syntax

  # rbd create <image_name> --size <image_size> --pool <pool_name> \
    --name client.rbd --keyring /etc/ceph/rbd.keyring

  Specify <image_name>, <image_size>, and <pool_name>, for example:

  $ rbd create image1 --size 4G --pool rbd \
    --name client.rbd --keyring /etc/ceph/rbd.keyring

  Warning: The default Ceph configuration includes the following Ceph Block Device features:
  - layering
  - exclusive-lock
  - object-map
  - deep-flatten
  - fast-diff

  If you use the kernel RBD (krbd) client, you might not be able to map the block device image. To work around this problem, disable the unsupported features. Use one of the following options to do so:

  - Disable the unsupported features dynamically:

    # rbd feature disable <image_name> <feature_name>

    For example:

    # rbd feature disable image1 object-map deep-flatten fast-diff

  - Use the --image-feature layering option with the rbd create command to enable only layering on newly created block device images.

  - Disable the features by default in the Ceph configuration file:

    rbd_default_features = 1

  This is a known issue; for details, see the Known Issues chapter in the Release Notes for Red Hat Ceph Storage 4. All these features work for users that use the user-space RBD client to access the block device images.
- Map the newly created image to the block device:

  Syntax

  # rbd map <image_name> --pool <pool_name> \
    --name client.rbd --keyring /etc/ceph/rbd.keyring

  For example:

  # rbd map image1 --pool rbd --name client.rbd \
    --keyring /etc/ceph/rbd.keyring
- Use the block device by creating a file system:

  Syntax

  # mkfs.ext4 /dev/rbd/<pool_name>/<image_name>

  Specify the pool name and the image name, for example:

  # mkfs.ext4 /dev/rbd/rbd/image1

  This action can take a few moments.
- Mount the newly created file system:

  Syntax

  # mkdir <mount_directory>
  # mount /dev/rbd/<pool_name>/<image_name> <mount_directory>

  For example:

  # mkdir /mnt/ceph-block-device
  # mount /dev/rbd/rbd/image1 /mnt/ceph-block-device
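The mkfs and mount steps above rely on the kernel RBD client exposing mapped images under a predictable path, /dev/rbd/<pool_name>/<image_name>. This small illustrative helper (rbd_dev, not part of Ceph) builds that path:

```shell
# Illustrative helper: build the /dev/rbd path for a mapped image.
rbd_dev() { echo "/dev/rbd/$1/$2"; }
rbd_dev rbd image1   # → /dev/rbd/rbd/image1
```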
Additional Resources
- The Block Device Guide for Red Hat Ceph Storage 4.
B.5. Manually Installing Ceph Object Gateway
The Ceph Object Gateway, also known as the RADOS gateway, is an object storage interface built on top of the librados API to provide applications with a RESTful gateway to Ceph storage clusters.
			
Prerequisites
- A running Ceph storage cluster, preferably in the active + clean state.
- Perform the tasks listed in Chapter 3, Requirements for Installing Red Hat Ceph Storage.
Procedure
- Enable the Red Hat Ceph Storage 4 Tools repository:

  [root@gateway ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
- On the Object Gateway node, install the ceph-radosgw package:

  # yum install ceph-radosgw
- On the initial Monitor node, perform the following steps.

  Update the Ceph configuration file as follows:

  [client.rgw.<obj_gw_hostname>]
  host = <obj_gw_hostname>
  rgw frontends = "civetweb port=80"
  rgw dns name = <obj_gw_hostname>.example.com

  Where <obj_gw_hostname> is the short host name of the gateway node. To view the short host name, use the hostname -s command.
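Filled in for a hypothetical gateway node whose short host name is gateway1, the section above would read:

```ini
; Hypothetical example: the host name gateway1 stands in for the
; output of `hostname -s` on your gateway node.
[client.rgw.gateway1]
host = gateway1
rgw frontends = "civetweb port=80"
rgw dns name = gateway1.example.com
```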
- Copy the updated configuration file to the new Object Gateway node and to all other nodes in the Ceph storage cluster:

  Syntax

  # scp /etc/ceph/ceph.conf <user_name>@<target_host_name>:/etc/ceph/

  Example

  # scp /etc/ceph/ceph.conf root@node1:/etc/ceph/
- Copy the ceph.client.admin.keyring file to the new Object Gateway node:

  Syntax

  # scp /etc/ceph/ceph.client.admin.keyring <user_name>@<target_host_name>:/etc/ceph/

  Example

  # scp /etc/ceph/ceph.client.admin.keyring root@node1:/etc/ceph/
- On the Object Gateway node, create the data directory:

  # mkdir -p /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`
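The short host name ties several of the following steps together: it names the data directory, the Cephx user, and the systemd unit instance. The sketch below only derives those names the way this procedure does; gateway1 in the comments is a hypothetical host name standing in for your node:

```shell
# Derive the per-instance names used throughout this procedure from
# the short host name. On a node named gateway1, this yields the
# data directory /var/lib/ceph/radosgw/ceph-rgw.gateway1 and the
# systemd unit ceph-radosgw@rgw.gateway1.
host="$(hostname -s)"
instance="rgw.${host}"
data_dir="/var/lib/ceph/radosgw/ceph-${instance}"
unit="ceph-radosgw@${instance}"

echo "keyring: ${data_dir}/keyring"
echo "unit:    ${unit}"
```

If the directory name, the client.rgw.* user, and the unit instance do not all use the same short host name, the gateway cannot find its keyring at start-up.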
- On the Object Gateway node, add a user and keyring to bootstrap the Object Gateway:

  # ceph auth get-or-create client.rgw.`hostname -s` osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`/keyring

  Important

  When you provide capabilities to the gateway key, you must provide the read capability. However, providing the Monitor write capability is optional; if you provide it, the Ceph Object Gateway can create pools automatically.

  In such a case, ensure that you specify a reasonable number of placement groups in each pool. Otherwise, the gateway uses the default number, which is most likely not suitable for your needs. See Ceph Placement Groups (PGs) per Pool Calculator for details.
- On the Object Gateway node, create the done file:

  # touch /var/lib/ceph/radosgw/ceph-rgw.`hostname -s`/done
- On the Object Gateway node, change the owner and group permissions:

  # chown -R ceph:ceph /var/lib/ceph/radosgw
  # chown -R ceph:ceph /var/log/ceph
  # chown -R ceph:ceph /var/run/ceph
  # chown -R ceph:ceph /etc/ceph
- On the Object Gateway node, open TCP port 80, the port set in the rgw frontends line of the Ceph configuration file:

  # firewall-cmd --zone=public --add-port=80/tcp
  # firewall-cmd --zone=public --add-port=80/tcp --permanent
- On the Object Gateway node, start and enable the ceph-radosgw process:

  Syntax

  # systemctl enable ceph-radosgw.target
  # systemctl enable ceph-radosgw@rgw.<rgw_hostname>
  # systemctl start ceph-radosgw@rgw.<rgw_hostname>

  Example

  # systemctl enable ceph-radosgw.target
  # systemctl enable ceph-radosgw@rgw.node1
  # systemctl start ceph-radosgw@rgw.node1
Once installed, the Ceph Object Gateway automatically creates pools if the write capability is set on the Monitor. See the Pools chapter in the Storage Strategies Guide for details on creating pools manually.
Additional Resources
- The Red Hat Ceph Storage 4 Object Gateway Configuration and Administration Guide