Chapter 1. Pre-Installation Requirements


If you are installing Red Hat Ceph Storage v1.2.3 for the first time, you should review the pre-installation requirements first. Depending on your Linux distribution, you may need to adjust default settings and install required software before setting up a local repository and installing Calamari and Ceph.

1.1. Operating System

Red Hat Ceph Storage v1.2.3 and beyond requires a homogeneous operating system distribution and version (e.g., CentOS 6 on the x86_64 architecture) for all Ceph nodes, including the Calamari cluster. Clusters with heterogeneous operating systems or versions are not supported.
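
To confirm that every node runs the same distribution release and architecture, you can, for example, compare the output of the following commands on each node (a simple manual spot check; any inventory or configuration management tooling you already use works just as well):

cat /etc/redhat-release
uname -m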

1.2. DNS Name Resolution

Ceph nodes must be able to resolve short host names, not just fully qualified domain names. Set up a default search domain to resolve short host names. To retrieve a Ceph node’s short host name, execute:

hostname -s

Each Ceph node MUST be able to ping every other Ceph node in the cluster by its short host name.
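
For example, assuming a hypothetical search domain example.com and DNS server 192.0.2.53 (both placeholders), each node's /etc/resolv.conf would contain a search entry so that a short name such as node2 resolves to node2.example.com:

# /etc/resolv.conf (example values)
search example.com
nameserver 192.0.2.53

With the search domain in place, verify short-name reachability from each node:

ping -c 1 node2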

1.3. NICs

All Ceph clusters require a public network. You MUST have a network interface card configured to a public network where Ceph clients can reach Ceph Monitors and Ceph OSDs. You SHOULD have a network interface card for a cluster network so that Ceph can conduct heartbeating, peering, replication, and recovery on a network separate from the public network.

We DO NOT RECOMMEND using a single NIC for both the public and cluster networks.

1.4. Network

Ensure that you configure your network interfaces and make them persistent so that the settings are identical on reboot. For example:

  • BOOTPROTO will usually be none for static IP addresses.
  • If you intend to use IPv6, the IPV6-related settings (e.g., IPV6INIT) MUST be set to yes, with the exception of IPV6_FAILURE_FATAL. You must also set your Ceph configuration file to tell Ceph to use IPv6; otherwise, Ceph will use IPv4.
  • ONBOOT MUST be set to yes. If it is set to no, Ceph may fail to peer on reboot.

Navigate to /etc/sysconfig/network-scripts and ensure that the ifcfg-<iface> settings for your public and cluster interfaces (assuming you will use a cluster network too [RECOMMENDED]) are properly configured.
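
As a rough sketch (the interface name em1, addresses, and prefix are placeholders; adapt them to your environment), a public-network interface configured for a static IPv4 address might look like this:

# /etc/sysconfig/network-scripts/ifcfg-em1 (example)
DEVICE=em1
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.0.2.101
PREFIX=24
GATEWAY=192.0.2.1

If you intend to use IPv6, you would additionally enable the IPV6 settings in the interface file and tell Ceph to bind to IPv6 in your Ceph configuration file, for example with ms bind ipv6 = true in the [global] section (verify option names against the documentation for your Ceph release).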

For details on configuring network interface scripts for CentOS 6, see Ethernet Interfaces.

1.5. Firewall for CentOS 6

The default firewall configuration for CentOS is fairly strict. You MUST adjust your firewall settings on the Calamari node to allow inbound requests on port 80 so that clients in your network can access the Calamari web user interface.

Calamari also communicates with Ceph nodes via ports 2003, 4505 and 4506. You MUST open ports 80, 2003, and 4505-4506 on your Calamari node.

sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 80 -j ACCEPT
sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 2003 -j ACCEPT
sudo iptables -I INPUT 1 -i <iface> -m multiport -p tcp -s <ip-address>/<netmask> --dports 4505:4506 -j ACCEPT

You MUST open port 6789 on your public network on ALL Ceph monitor nodes.

sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 6789 -j ACCEPT

Finally, you MUST also open ports for OSD traffic (e.g., 6800-7100). Each OSD on each Ceph node needs three ports: one for talking to clients and monitors (public network); one for sending data to other OSDs (cluster network, if available; otherwise, public network); and one for heartbeating (cluster network, if available; otherwise, public network). For example, if you have 4 OSDs, open 4 x 3 = 12 ports.

sudo iptables -I INPUT 1 -i <iface> -m multiport -p tcp -s <ip-address>/<netmask> --dports 6800:6811 -j ACCEPT

Once you have finished configuring iptables, ensure that you make the changes persistent on each node so that they will be in effect when your nodes reboot. For example:

/sbin/service iptables save

1.6. NTP

You MUST install Network Time Protocol (NTP) on all Ceph monitor hosts and ensure that monitor hosts are NTP peers. You SHOULD consider installing NTP on Ceph OSD nodes, but it is not required. NTP helps preempt issues that arise from clock drift.

  1. Install NTP

    sudo yum install ntp
  2. Make sure NTP starts on reboot.

    sudo chkconfig ntpd on
  3. Start the NTP service and ensure it’s running.

    sudo /etc/init.d/ntpd start
    sudo /etc/init.d/ntpd status
  4. Ensure that NTP is synchronizing Ceph monitor node clocks properly.

    ntpq -p
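
The steps above install and start NTP. To make the monitor hosts NTP peers of one another, as noted in the introduction, one approach is to add peer entries for the other monitor hosts to /etc/ntp.conf on each monitor and then restart ntpd. A minimal sketch as seen from one monitor, assuming the other monitors are named mon2 and mon3 (the host names are placeholders):

# Append to /etc/ntp.conf on this monitor host
peer mon2
peer mon3

sudo /etc/init.d/ntpd restart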

For additional details on NTP for CentOS 6, see Network Time Protocol Setup.

1.7. Install SSH Server

Perform the following steps on ALL Ceph nodes:

  1. Install an SSH server (if necessary) on each Ceph Node:

    sudo yum install openssh-server
  2. Ensure the SSH server is running on ALL Ceph Nodes.
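
    For example, on CentOS 6 you can check that sshd is running and enable it at boot with the standard service tools:

    sudo service sshd status
    sudo chkconfig sshd on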

For additional details on OpenSSH for CentOS 6, see OpenSSH.

1.8. Create a Ceph User

The ceph-deploy utility must log in to a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.

ceph-deploy supports a --username option so you can specify any user that has password-less sudo (including root, although this is NOT recommended). To use ceph-deploy --username <username>, the user you specify must have password-less SSH access to the Ceph node, because ceph-deploy will not prompt you for a password.

We recommend creating a Ceph user on ALL Ceph nodes in the cluster. A uniform user name across the cluster may improve ease of use (not required), but you should avoid obvious user names, because attackers typically target them with brute-force attacks (e.g., root, admin, <productname>). The following procedure, substituting <username> for the user name you define, describes how to create a user with passwordless sudo on a node called ceph-server.

  1. Create a user on each Ceph node:

    ssh user@ceph-server
    sudo useradd -d /home/<username> -m <username>
    sudo passwd <username>
  2. For the user you added to each Ceph node, ensure that the user has passwordless sudo privileges:

    echo "<username> ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/<username>
    sudo chmod 0440 /etc/sudoers.d/<username>

1.9. Enable Password-less SSH

Since ceph-deploy will not prompt for a password, you must generate SSH keys on the admin node and distribute the public key to each Ceph node. ceph-deploy will attempt to generate the SSH keys for initial monitors.

  1. Generate the SSH keys, but do not use sudo or the root user. Leave the passphrase empty:

    ssh-keygen
    
    Generating public/private key pair.
    Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /ceph-admin/.ssh/id_rsa.
    Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
  2. Copy the key to each Ceph node, replacing <username> with the user name you created in Create a Ceph User:

    ssh-copy-id <username>@node1
    ssh-copy-id <username>@node2
    ssh-copy-id <username>@node3
  3. (Recommended) Modify the ~/.ssh/config file of your ceph-deploy admin node so that ceph-deploy can log in to Ceph nodes as the user you created without requiring you to specify --username <username> each time you execute ceph-deploy. This has the added benefit of streamlining ssh and scp usage. Replace <username> with the user name you created:

    Host node1
       Hostname node1
       User <username>
    Host node2
       Hostname node2
       User <username>
    Host node3
       Hostname node3
       User <username>

1.10. Disable RAID

If you have a RAID controller (not recommended), configure it so that each drive is presented to the operating system individually, for example as a single-drive RAID 0 volume or in JBOD mode.

1.11. Adjust PID Count

Hosts with high numbers of OSDs (e.g., > 20) may spawn a lot of threads, especially during recovery and re-balancing. Many Linux kernels default to a relatively small maximum number of threads (e.g., 32768). Check your default settings to see if they are suitable.

cat /proc/sys/kernel/pid_max

Consider setting kernel.pid_max to a higher number of threads. The theoretical maximum is 4,194,303 threads. For example, you could add the following to the /etc/sysctl.conf file to set it to the maximum:

kernel.pid_max = 4194303

To see the changes you made without a reboot, execute:

sudo sysctl -p

To verify the changes, execute:

sudo sysctl -a | grep kernel.pid_max

1.12. Hard Drive Prep on CentOS 6

Ceph aims for data safety, which means that when a Ceph client receives notice that data was written to a storage drive, that data has actually been written to the storage drive (i.e., it is not sitting in a journal or drive cache, still waiting to be written). On CentOS 6, disable the write cache if the journal is on a raw drive.

Use hdparm to disable write caching on OSD storage drives:

sudo hdparm -W 0 /<path-to>/<disk>
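
To confirm the change, you can query the drive's current write-caching state (when -W is given without a value, hdparm reports the setting rather than changing it):

sudo hdparm -W /<path-to>/<disk>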

1.13. TTY

You may receive an error while trying to execute ceph-deploy commands. If requiretty is set by default on your Ceph hosts, disable it by executing sudo visudo and locating the Defaults requiretty setting. Change it to Defaults:ceph !requiretty, where ceph is the user name from Create a Ceph User, to ensure that ceph-deploy can connect using the ceph user and execute commands as root.
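
After the edit, the relevant line in /etc/sudoers would read as follows (a sketch assuming you named the user ceph; always edit the file through sudo visudo):

Defaults:ceph !requiretty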

1.14. SELinux

SELinux is set to Enforcing by default. For Red Hat Ceph Storage v1.2.3, set SELinux to Permissive or disable it entirely, and ensure that your installation and cluster are working properly. To set SELinux to Permissive, execute the following:

sudo setenforce 0

To configure SELinux persistently, modify the configuration file at /etc/selinux/config.
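
For example, to keep SELinux in permissive mode across reboots, set the SELINUX line in /etc/selinux/config as follows:

SELINUX=permissive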

1.15. Disable EPEL on Ceph Cluster Nodes

Some Ceph package dependencies require versions that differ from the package versions from EPEL. Disable EPEL to ensure that you install the packages required for use with Ceph.
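
For example, if the EPEL repository is enabled on a node, you can disable it with yum-config-manager from the yum-utils package, or by setting enabled=0 in the repository's file under /etc/yum.repos.d. The repository id epel below is an assumption; check yum repolist all for the actual id on your systems:

sudo yum install yum-utils
sudo yum-config-manager --disable epel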
