Chapter 1. Pre-Installation Requirements
If you are installing Red Hat Ceph Storage v1.2.3 for the first time, you should review the pre-installation requirements first. Depending on your Linux distribution, you may need to adjust default settings and install required software before setting up a local repository and installing Calamari and Ceph.
1.1. Operating System
Red Hat Ceph Storage v1.2.3 and beyond requires a homogeneous operating system distribution and version (e.g., CentOS 6 on x86_64 architecture) for all Ceph nodes, including the Calamari cluster. We do not support clusters with heterogeneous operating systems and versions.
1.2. DNS Name Resolution
Ceph nodes must be able to resolve short host names, not just fully qualified domain names. Set up a default search domain to resolve short host names. To retrieve a Ceph node’s short host name, execute:
hostname -s
Each Ceph node MUST be able to ping every other Ceph node in the cluster by its short host name.
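For example, from any Ceph node you should be able to reach a peer by its short name alone (the host name here is a placeholder for one of your own nodes):
ping -c 1 node2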
1.3. NICs
All Ceph clusters require a public network. You MUST have a network interface card configured to a public network where Ceph clients can reach Ceph Monitors and Ceph OSDs. You SHOULD have a network interface card for a cluster network so that Ceph can conduct heart-beating, peering, replication and recovery on a network separate from the public network.
We DO NOT RECOMMEND using a single NIC for both the public and the cluster network.
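For reference, the public and cluster networks are ultimately declared in the Ceph configuration file. A minimal sketch, using placeholder subnets you would replace with your own:
[global]
public network = 192.168.0.0/24
cluster network = 192.168.1.0/24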
1.4. Network
Ensure that you configure your network interfaces and make them persistent so that the settings are identical on reboot. For example:
- BOOTPROTO will usually be none for static IP addresses.
- IPV6{opt} settings MUST be set to yes, except for FAILURE_FATAL, if you intend to use IPv6. You must also set your Ceph configuration file to tell Ceph to use IPv6 if you intend to use it. Otherwise, Ceph will use IPv4.
- ONBOOT MUST be set to yes. If it is set to no, Ceph may fail to peer on reboot.
Navigate to /etc/sysconfig/network-scripts and ensure that the ifcfg-<iface> settings for your public and cluster interfaces (assuming you will use a cluster network too [RECOMMENDED]) are properly configured, as in the example below.
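The following is a minimal sketch of a static ifcfg-eth0 file on CentOS 6; the interface name, addresses and netmask are placeholders to replace with your own values:
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.168.0.2
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
ONBOOT=yes
IPV6INIT=yes
If you intend to use IPv6, you can also tell Ceph to bind to IPv6 by adding ms bind ipv6 = true under the [global] section of your Ceph configuration file.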
For details on configuring network interface scripts for CentOS 6, see Ethernet Interfaces.
1.5. Firewall for CentOS 6
The default firewall configuration for CentOS is fairly strict. You MUST adjust your firewall settings on the Calamari node to allow inbound requests on port 80 so that clients in your network can access the Calamari web user interface.
Calamari also communicates with Ceph nodes via ports 2003, 4505 and 4506. You MUST open ports 80, 2003, and 4505-4506 on your Calamari node.
sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 80 -j ACCEPT
sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 2003 -j ACCEPT
sudo iptables -I INPUT 1 -i <iface> -m multiport -p tcp -s <ip-address>/<netmask> --dports 4505:4506 -j ACCEPT
You MUST open port 6789 on your public network on ALL Ceph monitor nodes.
sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 6789 -j ACCEPT
Finally, you MUST also open ports for OSD traffic (e.g., 6800-7100). Each OSD on each Ceph node needs three ports: one for talking to clients and monitors (public network); one for sending data to other OSDs (cluster network, if available; otherwise, public network); and one for heartbeating (cluster network, if available; otherwise, public network). For example, if you have 4 OSDs, open 4 x 3 = 12 ports.
sudo iptables -I INPUT 1 -i <iface> -m multiport -p tcp -s <ip-address>/<netmask> --dports 6800:6811 -j ACCEPT
Once you have finished configuring iptables, ensure that you make the changes persistent on each node so that they will be in effect when your nodes reboot. For example:
/sbin/service iptables save
1.6. NTP
You MUST install Network Time Protocol (NTP) on all Ceph monitor hosts and ensure that monitor hosts are NTP peers. You SHOULD consider installing NTP on Ceph OSD nodes, but it is not required. NTP helps preempt issues that arise from clock drift.
Install NTP
sudo yum install ntp
Make sure NTP starts on reboot.
sudo chkconfig ntpd on
Start the NTP service and ensure it’s running.
sudo /etc/init.d/ntpd start
sudo /etc/init.d/ntpd status
Ensure that NTP is synchronizing Ceph monitor node clocks properly.
ntpq -p
For additional details on NTP for CentOS 6, see Network Time Protocol Setup.
1.7. Install SSH Server
For ALL Ceph Nodes perform the following steps:
Install an SSH server (if necessary) on each Ceph Node:
sudo yum install openssh-server
Ensure the SSH server is running on ALL Ceph Nodes.
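For example, one way to verify that the daemon is running on CentOS 6:
sudo service sshd status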
For additional details on OpenSSH for CentOS 6, see OpenSSH.
1.8. Create a Ceph User
The ceph-deploy utility must log in to a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.
ceph-deploy supports a --username option so you can specify any user that has password-less sudo (including root, although this is NOT recommended). To use ceph-deploy --username <username>, the user you specify must have password-less SSH access to the Ceph node, because ceph-deploy will not prompt you for a password.
We recommend creating a Ceph user on ALL Ceph nodes in the cluster. A uniform user name across the cluster may improve ease of use (not required), but you should avoid obvious user names, because hackers typically use them with brute force hacks (e.g., root, admin, <productname>). The following procedure, substituting <username> for the user name you define, describes how to create a user with passwordless sudo on a node called ceph-server.
Create a user on each Ceph Node:
ssh user@ceph-server
sudo useradd -d /home/<username> -m <username>
sudo passwd <username>
For the user you added to each Ceph node, ensure that the user has sudo privileges:
echo "<username> ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/<username>
sudo chmod 0440 /etc/sudoers.d/<username>
1.9. Enable Password-less SSH
Since ceph-deploy will not prompt for a password, you must generate SSH keys on the admin node and distribute the public key to each Ceph node. ceph-deploy will attempt to generate the SSH keys for initial monitors.
Generate the SSH keys, but do not use sudo or the root user. Leave the passphrase empty:
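For example, the following generates a key pair in the default location; press Enter at each prompt to accept the defaults and an empty passphrase:
ssh-keygen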
Copy the key to each Ceph Node, replacing <username> with the user name you created in Create a Ceph User:
ssh-copy-id <username>@node1
ssh-copy-id <username>@node2
ssh-copy-id <username>@node3
(Recommended) Modify the ~/.ssh/config file of your ceph-deploy admin node so that ceph-deploy can log in to Ceph nodes as the user you created without requiring you to specify --username <username> each time you execute ceph-deploy. This has the added benefit of streamlining ssh and scp usage. Replace <username> with the user name you created:
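A minimal sketch, assuming three Ceph nodes named node1, node2 and node3 (adjust the host names and user name to match your cluster):
Host node1
   Hostname node1
   User <username>
Host node2
   Hostname node2
   User <username>
Host node3
   Hostname node3
   User <username>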
1.10. Disable RAID
If you have RAID (not recommended), configure your RAID controllers to RAID 0 (JBOD).
1.11. Adjust PID Count
Hosts with high numbers of OSDs (e.g., > 20) may spawn a lot of threads, especially during recovery and re-balancing. Many Linux kernels default to a relatively small maximum number of threads (e.g., 32768). Check your default settings to see if they are suitable.
cat /proc/sys/kernel/pid_max
Consider setting kernel.pid_max to a higher number of threads. The theoretical maximum is 4,194,303 threads. For example, you could add the following to the /etc/sysctl.conf file to set it to the maximum:
kernel.pid_max = 4194303
To see the changes you made without a reboot, execute:
sudo sysctl -p
To verify the changes, execute:
sudo sysctl -a | grep kernel.pid_max
1.12. Hard Drive Prep on CentOS 6
Ceph aims for data safety, which means that when the Ceph Client receives notice that data was written to a storage drive, that data was actually written to the storage drive (i.e., it is not merely sitting in a journal or drive cache waiting to be written). On CentOS 6, disable the write cache if the journal is on a raw drive.
Use hdparm to disable write caching on OSD storage drives:
sudo hdparm -W 0 /<path-to>/<disk> 0
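To check the drive's current write-cache setting before and after the change (the device path is a placeholder for your OSD drive):
sudo hdparm -W /<path-to>/<disk>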
1.13. TTY
You may receive an error while trying to execute ceph-deploy commands. If requiretty is set by default on your Ceph hosts, disable it by executing sudo visudo and locating the Defaults requiretty setting. Change it to Defaults:ceph !requiretty, where ceph is the user name from the Create a Ceph User step, to ensure that ceph-deploy can connect using the ceph user and execute commands as root.
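For example, assuming the user name is ceph, the relevant sudoers line would change as follows:
# before
Defaults requiretty
# after
Defaults:ceph !requiretty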
1.14. SELinux
SELinux is set to Enforcing by default. For Ceph Storage v1.2.3, set SELinux to Permissive or disable it entirely and ensure that your installation and cluster are working properly. To set SELinux to Permissive, execute the following:
sudo setenforce 0
To configure SELinux persistently, modify the configuration file at /etc/selinux/config.
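For example, to make the Permissive setting persistent, the relevant line in /etc/selinux/config would read:
SELINUX=permissive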
1.15. Disable EPEL on Ceph Cluster Nodes
Some Ceph package dependencies require versions that differ from the package versions from EPEL. Disable EPEL to ensure that you install the packages required for use with Ceph.
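One way to disable it, assuming EPEL is defined in /etc/yum.repos.d/epel.repo and the yum-utils package (which provides yum-config-manager) is installed:
sudo yum-config-manager --disable epel
Alternatively, set enabled=0 in the [epel] section of /etc/yum.repos.d/epel.repo.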