Chapter 1. Pre-Installation Requirements
If you are installing Red Hat Ceph Storage v1.2.3 for the first time, you should review the pre-installation requirements first. Depending on your Linux distribution, you may need to adjust default settings and install required software before setting up a local repository and installing Calamari and Ceph.
1.1. Operating System
Red Hat Ceph Storage v1.2.3 and beyond requires a homogeneous operating system distribution and version (e.g., CentOS 6 on x86_64 architecture) for all Ceph nodes, including the Calamari cluster. We do not support clusters with heterogeneous operating systems and versions.
1.2. DNS Name Resolution
Ceph nodes must be able to resolve short host names, not just fully qualified domain names. Set up a default search domain to resolve short host names. To retrieve a Ceph node’s short host name, execute:
hostname -s
Each Ceph node MUST be able to ping every other Ceph node in the cluster by its short host name.
1.3. NICs
All Ceph clusters require a public network. You MUST have a network interface card configured for a public network where Ceph clients can reach Ceph Monitors and Ceph OSDs. You SHOULD have a network interface card for a cluster network so that Ceph can conduct heartbeating, peering, replication and recovery on a network separate from the public network.
We DO NOT RECOMMEND using a single NIC for both a public and private network.
1.4. Network
Ensure that you configure your network interfaces and make them persistent so that the settings are identical on reboot. For example:
- BOOTPROTO will usually be none for static IP addresses.
- IPV6{opt} settings MUST be set to yes, except for FAILURE_FATAL, if you intend to use IPv6. You must also set your Ceph configuration file to tell Ceph to use IPv6 if you intend to use it. Otherwise, Ceph will use IPv4.
- ONBOOT MUST be set to yes. If it is set to no, Ceph may fail to peer on reboot.
Navigate to /etc/sysconfig/network-scripts and ensure that the ifcfg-<iface> settings for your public and cluster interfaces (assuming you will use a cluster network too [RECOMMENDED]) are properly configured.
For details on configuring network interface scripts for CentOS 6, see Ethernet Interfaces.
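As a minimal sketch, an ifcfg file for a statically addressed public interface might look like the following. The device name and all addresses are hypothetical; substitute your own values, and adjust the IPV6 settings as described above if you intend to use IPv6:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (example values only)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.0.10
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
```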
1.5. Firewall for CentOS 6
The default firewall configuration for CentOS is fairly strict. You MUST adjust your firewall settings on the Calamari node to allow inbound requests on port 80 so that clients in your network can access the Calamari web user interface.
Calamari also communicates with Ceph nodes via ports 2003, 4505 and 4506. You MUST open ports 80, 2003, and 4505-4506 on your Calamari node.
sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 80 -j ACCEPT
sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 2003 -j ACCEPT
sudo iptables -I INPUT 1 -i <iface> -m multiport -p tcp -s <ip-address>/<netmask> --dports 4505:4506 -j ACCEPT
You MUST open port 6789 on your public network on ALL Ceph monitor nodes.
sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 6789 -j ACCEPT
Finally, you MUST also open ports for OSD traffic (e.g., 6800-7100). Each OSD on each Ceph node needs three ports: one for talking to clients and monitors (public network); one for sending data to other OSDs (cluster network, if available; otherwise, public network); and one for heartbeating (cluster network, if available; otherwise, public network). For example, if you have 4 OSDs, open 4 x 3 = 12 ports.
sudo iptables -I INPUT 1 -i <iface> -m multiport -p tcp -s <ip-address>/<netmask> --dports 6800:6811 -j ACCEPT
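The port range in the command above can be derived from the OSD count. A small sketch, assuming 3 ports per OSD starting at port 6800 as described above (the osds value is a placeholder; substitute your own count):

```shell
# Compute the --dports range for N OSDs, assuming 3 ports per OSD
# and a starting port of 6800 (per the example above).
osds=4                              # hypothetical OSD count; substitute your own
first=6800
last=$(( first + 3 * osds - 1 ))
echo "--dports ${first}:${last}"    # → --dports 6800:6811
```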
Once you have finished configuring iptables, ensure that you make the changes persistent on each node so that they will be in effect when your nodes reboot. For example:
/sbin/service iptables save
1.6. NTP
You MUST install Network Time Protocol (NTP) on all Ceph monitor hosts and ensure that monitor hosts are NTP peers. You SHOULD consider installing NTP on Ceph OSD nodes, but it is not required. NTP helps preempt issues that arise from clock drift.
Install NTP
sudo yum install ntp
Make sure NTP starts on reboot:
sudo chkconfig ntpd on
Start the NTP service and ensure it’s running:
sudo /etc/init.d/ntpd start
sudo /etc/init.d/ntpd status
Ensure that NTP is synchronizing Ceph monitor node clocks properly:
ntpq -p
For additional details on NTP for CentOS 6, see Network Time Protocol Setup.
1.7. Install SSH Server
For ALL Ceph Nodes perform the following steps:
Install an SSH server (if necessary) on each Ceph Node:
sudo yum install openssh-server
Ensure the SSH server is running on ALL Ceph Nodes.
For additional details on OpenSSH for CentOS 6, see OpenSSH.
1.8. Create a Ceph User
The ceph-deploy utility must log in to a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.
ceph-deploy supports a --username option so you can specify any user that has password-less sudo (including root, although this is NOT recommended). To use ceph-deploy --username <username>, the user you specify must have password-less SSH access to the Ceph node, because ceph-deploy will not prompt you for a password.
We recommend creating a Ceph user on ALL Ceph nodes in the cluster. A uniform user name across the cluster may improve ease of use (not required), but you should avoid obvious user names, because attackers typically target them in brute-force attacks (e.g., root, admin, <productname>). The following procedure, substituting <username> for the user name you define, describes how to create a user with passwordless sudo on a node called ceph-server.
Create a user on each Ceph Node:
ssh user@ceph-server
sudo useradd -d /home/<username> -m <username>
sudo passwd <username>
For the user you added to each Ceph node, ensure that the user has sudo privileges:
echo "<username> ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/<username>
sudo chmod 0440 /etc/sudoers.d/<username>
1.9. Enable Password-less SSH
Since ceph-deploy will not prompt for a password, you must generate SSH keys on the admin node and distribute the public key to each Ceph node. ceph-deploy will attempt to generate the SSH keys for initial monitors.
Generate the SSH keys, but do not use sudo or the root user. Leave the passphrase empty:
ssh-keygen
Generating public/private key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
Copy the key to each Ceph Node, replacing <username> with the user name you created in Create a Ceph User:
ssh-copy-id <username>@node1
ssh-copy-id <username>@node2
ssh-copy-id <username>@node3
(Recommended) Modify the ~/.ssh/config file of your ceph-deploy admin node so that ceph-deploy can log in to Ceph nodes as the user you created without requiring you to specify --username <username> each time you execute ceph-deploy. This has the added benefit of streamlining ssh and scp usage. Replace <username> with the user name you created:
Host node1
    Hostname node1
    User <username>
Host node2
    Hostname node2
    User <username>
Host node3
    Hostname node3
    User <username>
1.10. Disable RAID
If you have RAID (not recommended), configure your RAID controllers to RAID 0 (JBOD).
1.11. Adjust PID Count
Hosts with high numbers of OSDs (e.g., > 20) may spawn a lot of threads, especially during recovery and re-balancing. Many Linux kernels default to a relatively small maximum number of threads (e.g., 32768). Check your default settings to see if they are suitable.
cat /proc/sys/kernel/pid_max
Consider setting kernel.pid_max to a higher number of threads. The theoretical maximum is 4,194,303 threads. For example, you could add the following to the /etc/sysctl.conf file to set it to the maximum:
kernel.pid_max = 4194303
To see the changes you made without a reboot, execute:
sudo sysctl -p
To verify the changes, execute:
sudo sysctl -a | grep kernel.pid_max
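For example, a quick sanity check that compares the running value against the theoretical maximum (a sketch using only standard tools; no sudo is needed to read the value):

```shell
# Read the current limit and compare it to the theoretical maximum of 4194303.
current=$(cat /proc/sys/kernel/pid_max)
max=4194303
if [ "$current" -lt "$max" ]; then
    echo "kernel.pid_max is ${current}; consider raising it toward ${max}"
else
    echo "kernel.pid_max is already ${current}"
fi
```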
1.12. Hard Drive Prep on CentOS 6
Ceph aims for data safety, which means that when the Ceph Client receives notice that data was written to a storage drive, that data was actually written to the storage drive (i.e., it is not sitting in a journal or drive cache waiting to be written). On CentOS 6, disable the write cache if the journal is on a raw drive.
Use hdparm to disable write caching on OSD storage drives:
sudo hdparm -W 0 /<path-to>/<disk>
1.13. TTY
You may receive an error while trying to execute ceph-deploy commands. If requiretty is set by default on your Ceph hosts, disable it by executing sudo visudo and locating the Defaults requiretty setting. Change it to Defaults:ceph !requiretty, where ceph is the user name from Create a Ceph User, to ensure that ceph-deploy can connect with the ceph user and execute commands as root.
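For illustration, the relevant lines in the sudoers file would change roughly as follows (assuming the user is named ceph, as above):

```
# Before (the distribution default):
Defaults    requiretty

# After: exempt only the ceph user from the TTY requirement
Defaults:ceph    !requiretty
```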
1.14. SELinux
SELinux is set to Enforcing by default. For Ceph Storage v1.2.3, set SELinux to Permissive or disable it entirely, and ensure that your installation and cluster are working properly. To set SELinux to Permissive, execute the following:
sudo setenforce 0
To configure SELinux persistently, modify the configuration file at /etc/selinux/config.
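The persistent equivalent of setenforce 0 is a one-line change in that file, which takes effect on the next reboot. A minimal sketch:

```
# /etc/selinux/config
# SELINUX= can be enforcing, permissive, or disabled.
SELINUX=permissive
SELINUXTYPE=targeted
```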
1.15. Disable EPEL on Ceph Cluster Nodes
Some Ceph package dependencies require versions that differ from the package versions from EPEL. Disable EPEL to ensure that you install the packages required for use with Ceph.
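One way to disable EPEL, assuming it was installed via the epel-release package, is to set enabled=0 in its repository definition. A sketch of the relevant portion of /etc/yum.repos.d/epel.repo (other fields in your file will differ):

```
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
enabled=0
gpgcheck=1
```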