Chapter 2. Requirements for Installing Red Hat Ceph Storage
Figure 2.1. Prerequisite Workflow

Before installing Red Hat Ceph Storage (RHCS), review the following requirements and prepare each Monitor, OSD, Metadata Server, and client node accordingly.
2.1. Prerequisites
- Verify the hardware meets the minimum requirements. For details, see the Hardware Guide for Red Hat Ceph Storage 3.
2.2. Requirements Checklist for Installing Red Hat Ceph Storage
Task | Required | Section | Recommendation |
---|---|---|---|
Verifying the operating system version | Yes | Section 2.3, “Operating system requirements for Red Hat Ceph Storage” | |
Registering Ceph nodes | Yes | Section 2.4, “Registering Red Hat Ceph Storage Nodes to the CDN and Attaching Subscriptions” | |
Enabling Ceph software repositories | Yes | Section 2.5, “Enabling the Red Hat Ceph Storage Repositories” | |
Using a RAID controller with OSD nodes | No | Section 2.6, “Considerations for Using a RAID Controller with OSD Nodes (optional)” | Enabling write-back caches on a RAID controller might result in increased small I/O write throughput for OSD nodes. |
Configuring the network | Yes | Section 2.8, “Verifying the Network Configuration for Red Hat Ceph Storage” | At minimum, a public network is required. However, a private network for cluster communication is recommended. |
Configuring a firewall | No | Section 2.9, “Configuring a firewall for Red Hat Ceph Storage” | A firewall can increase the level of trust for a network. |
Creating an Ansible user | Yes | Section 2.10, “Creating an Ansible user with sudo access” | Creating the Ansible user is required on all Ceph nodes. |
Enabling password-less SSH | Yes | Section 2.11, “Enabling Password-less SSH for Ansible” | Required for Ansible. |
By default, ceph-ansible installs NTP as a requirement. If NTP is customized, refer to Configuring the Network Time Protocol for Red Hat Ceph Storage in Manually Installing Red Hat Ceph Storage to understand how NTP must be configured to function properly with Ceph.
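After deployment, you can quickly confirm that time synchronization is working on a node. This is a sanity-check sketch that assumes the default ntpd service installed by ceph-ansible; if your nodes run chronyd instead, use the equivalent chronyc sources command:
# systemctl status ntpd
# ntpq -p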
2.3. Operating system requirements for Red Hat Ceph Storage
Red Hat Ceph Storage 3 requires Red Hat Enterprise Linux 7, update 5 or later. Use the same version and architecture across all nodes in the cluster.
Red Hat Ceph Storage 3 is not supported on Red Hat Enterprise Linux 8.
Red Hat does not support clusters with heterogeneous operating systems or versions.
Additional Resources
- The Installation Guide for Red Hat Enterprise Linux 7.
- The System Administrator’s Guide for Red Hat Enterprise Linux 7.
2.4. Registering Red Hat Ceph Storage Nodes to the CDN and Attaching Subscriptions
Register each Red Hat Ceph Storage (RHCS) node to the Content Delivery Network (CDN) and attach the appropriate subscription so that the node has access to software repositories. Each RHCS node must be able to access the full Red Hat Enterprise Linux 7 base content and the extras repository content.
For RHCS nodes that cannot access the Internet during the installation, provide the software content by using the Red Hat Satellite server. Alternatively, mount a local Red Hat Enterprise Linux 7 Server ISO image and point the RHCS nodes to the ISO image. For additional details, contact Red Hat Support.
For more information on registering Ceph nodes with the Red Hat Satellite server, see the How to Register Ceph with Satellite 6 and How to Register Ceph with Satellite 5 articles on the Red Hat Customer Portal.
Prerequisites
- A valid Red Hat subscription
- RHCS nodes must be able to connect to the Internet.
Procedure
Perform the following steps on all nodes in the storage cluster as the root user.
Register the node. When prompted, enter your Red Hat Customer Portal credentials:
# subscription-manager register
Pull the latest subscription data from the CDN:
# subscription-manager refresh
List all available subscriptions for Red Hat Ceph Storage:
# subscription-manager list --available --all --matches="*Ceph*"
Identify the appropriate subscription and retrieve its Pool ID.
Attach the subscription:
# subscription-manager attach --pool=$POOL_ID
Replace $POOL_ID with the Pool ID identified in the previous step.
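If you prefer to capture the Pool ID without copying it by hand, one possible approach is to filter the subscription listing with awk. This is a convenience sketch, not part of the documented procedure; it assumes the listing prints a Pool ID: line, so verify it against your own output:
# POOL_ID=$(subscription-manager list --available --matches="*Ceph*" | awk '/Pool ID/ {print $3; exit}')
# subscription-manager attach --pool=$POOL_ID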
Disable the default software repositories. Then, enable the Red Hat Enterprise Linux 7 Server, Red Hat Enterprise Linux 7 Server Extras, and RHCS repositories:
# subscription-manager repos --disable=*
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms
# subscription-manager repos --enable=rhel-7-server-rhceph-3-mon-els-rpms
# subscription-manager repos --enable=rhel-7-server-rhceph-3-osd-els-rpms
# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-els-rpms
Update the system to receive the latest packages:
# yum update
Additional Resources
- See the Registering a System and Managing Subscriptions chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7.
- Section 2.5, “Enabling the Red Hat Ceph Storage Repositories”
2.5. Enabling the Red Hat Ceph Storage Repositories
Before you can install Red Hat Ceph Storage, you must choose an installation method. Red Hat Ceph Storage supports two installation methods:
Content Delivery Network (CDN)
For Ceph Storage clusters with Ceph nodes that can connect directly to the internet, use Red Hat Subscription Manager to enable the required Ceph repository.
Local Repository
For Ceph Storage clusters where security measures preclude nodes from accessing the internet, install Red Hat Ceph Storage 3.3 from a single software build delivered as an ISO image, which allows you to set up local repositories.
Prerequisites
- Valid customer subscription.
- For CDN installations, RHCS nodes must be able to connect to the internet.
- For CDN installations, register the cluster nodes with the CDN.
Disable the EPEL software repository:
[root@monitor ~]# yum install yum-utils vim -y
[root@monitor ~]# yum-config-manager --disable epel
Procedure
For CDN installations:
On the Ansible administration node, enable the Red Hat Ceph Storage 3 Tools repository and Ansible repository:
[root@admin ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-els-rpms --enable=rhel-7-server-ansible-2.6-rpms
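To confirm that the expected repositories are now enabled on the node, you can list them; for example:
[root@admin ~]# subscription-manager repos --list-enabled
[root@admin ~]# yum repolist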
For ISO installations:
- Log in to the Red Hat Customer Portal.
- Click Downloads to visit the Software & Download center.
- In the Red Hat Ceph Storage area, click Download Software to download the latest version of the software.
Additional Resources
- The Registering and Managing Subscriptions chapter in the System Administrator’s Guide for Red Hat Enterprise Linux.
2.6. Considerations for Using a RAID Controller with OSD Nodes (optional)
If an OSD node has a RAID controller with 1-2 GB of cache installed, enabling the write-back cache might result in increased small I/O write throughput. However, the cache must be non-volatile.
Modern RAID controllers usually have super capacitors that provide enough power to drain volatile memory to non-volatile NAND memory during a power loss event. It is important to understand how a particular controller and its firmware behave after power is restored.
Some RAID controllers require manual intervention. Hard drives typically advertise to the operating system whether their disk caches should be enabled or disabled by default. However, certain RAID controllers and some firmware do not provide such information. Verify that disk-level caches are disabled to avoid file system corruption.
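For SATA drives that are exposed directly to the operating system, one way to inspect and disable the disk-level write cache is with hdparm. This is a sketch only; drives hidden behind some RAID controllers are not visible this way, and the controller's own management tool must be used instead. The first command reports the current write-caching setting, the second disables it:
# hdparm -W /dev/sda
# hdparm -W0 /dev/sda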
Create a single RAID 0 volume for each Ceph OSD data drive, with the write-back cache enabled.
If Serial Attached SCSI (SAS) or SATA connected Solid-state Drive (SSD) disks are also present on the RAID controller, then investigate whether the controller and firmware support pass-through mode. Enabling pass-through mode helps avoid caching logic, and generally results in much lower latency for fast media.
2.7. Considerations for Using NVMe with Object Gateway (optional)
If you plan to use the Object Gateway feature of Red Hat Ceph Storage and your OSD nodes have NVMe-based SSDs or SATA SSDs, consider following the procedures in Ceph Object Gateway for Production to use NVMe with LVM optimally. These procedures explain how to use specially designed Ansible playbooks that place journals and bucket indexes together on SSDs, which can increase performance compared to having all journals on one device. Use the information on using NVMe with LVM optimally in combination with this Installation Guide.
2.8. Verifying the Network Configuration for Red Hat Ceph Storage
All Red Hat Ceph Storage (RHCS) nodes require a public network. You must have a network interface card configured to a public network where Ceph clients can reach Ceph monitors and Ceph OSD nodes.
You might have a network interface card for a cluster network so that Ceph can conduct heart-beating, peering, replication, and recovery on a network separate from the public network.
Configure the network interface settings and make sure the changes are persistent.
Red Hat does not recommend using a single network interface card for both a public and private network.
Prerequisites
- Network interface card connected to the network.
Procedure
Perform the following steps on all RHCS nodes in the storage cluster as the root user.
Verify that the following settings are in the /etc/sysconfig/network-scripts/ifcfg-* file corresponding to the public-facing network interface card:
- The BOOTPROTO parameter is set to none for static IP addresses.
- The ONBOOT parameter is set to yes. If it is set to no, the Ceph storage cluster might fail to peer on reboot.
- If you intend to use IPv6 addressing, the IPv6 parameters, such as IPV6INIT, are set to yes, except for the IPV6_FAILURE_FATAL parameter. Also, edit the Ceph configuration file, /etc/ceph/ceph.conf, to instruct Ceph to use IPv6; otherwise, Ceph uses IPv4.
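For reference, a minimal ifcfg file for a statically addressed public-facing interface might look like the following sketch. The device name, addresses, and gateway are illustrative placeholders:
DEVICE=enp0s3
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.0.11
PREFIX=24
GATEWAY=192.168.0.1
IPV6INIT=yes
For IPv6 deployments, the ceph.conf change typically amounts to setting ms_bind_ipv6 = true in the [global] section; verify the exact parameter against the Configuration Guide for your release.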
Additional Resources
- For details on configuring network interface scripts for Red Hat Enterprise Linux 7, see the Configuring a Network Interface Using ifcfg Files chapter in the Networking Guide for Red Hat Enterprise Linux 7.
- For more information on network configuration see the Network Configuration Reference chapter in the Configuration Guide for Red Hat Ceph Storage 3.
2.9. Configuring a firewall for Red Hat Ceph Storage
Red Hat Ceph Storage (RHCS) uses the firewalld service.
The Monitor daemons use port 6789 for communication within the Ceph storage cluster.
On each Ceph OSD node, the OSD daemons use several ports in the range 6800-7300:
- One for communicating with clients and monitors over the public network
- One for sending data to other OSDs over a cluster network, if available; otherwise, over the public network
- One for exchanging heartbeat packets over a cluster network, if available; otherwise, over the public network
The Ceph Manager (ceph-mgr) daemons use ports in the range 6800-7300. Consider colocating the ceph-mgr daemons with Ceph Monitors on the same nodes.
The Ceph Metadata Server nodes (ceph-mds) use ports in the range 6800-7300.
The Ceph Object Gateway nodes are configured by Ansible to use port 8080 by default. However, you can change the default port, for example, to port 80.
To use the SSL/TLS service, open port 443.
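Before writing firewall rules, you can cross-check which ports the Ceph daemons have actually bound on a running node; a quick sketch:
# ss -tlnp | grep ceph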
Prerequisite
- Network hardware is connected.
Procedure
Run the following commands as the root user.
On all RHCS nodes, start the firewalld service, enable it to run on boot, and ensure that it is running:
# systemctl enable firewalld
# systemctl start firewalld
# systemctl status firewalld
On all Monitor nodes, open port 6789 on the public network:
[root@monitor ~]# firewall-cmd --zone=public --add-port=6789/tcp
[root@monitor ~]# firewall-cmd --zone=public --add-port=6789/tcp --permanent
To limit access based on the source address:
firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="6789" accept"

firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="6789" accept" --permanent
Replace:
- IP_address with the network address of the Monitor node.
- netmask_prefix with the netmask in CIDR notation.
Example
[root@monitor ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.11/24" port protocol="tcp" \
port="6789" accept"

[root@monitor ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.11/24" port protocol="tcp" \
port="6789" accept" --permanent
On all OSD nodes, open ports 6800-7300 on the public network:
[root@osd ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp
[root@osd ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
If you have a separate cluster network, repeat the commands with the appropriate zone.
On all Ceph Manager (ceph-mgr) nodes (usually the same nodes as the Monitors), open ports 6800-7300 on the public network:
[root@monitor ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp
[root@monitor ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
If you have a separate cluster network, repeat the commands with the appropriate zone.
On all Ceph Metadata Server (ceph-mds) nodes, open port 6800 on the public network:
[root@monitor ~]# firewall-cmd --zone=public --add-port=6800/tcp
[root@monitor ~]# firewall-cmd --zone=public --add-port=6800/tcp --permanent
If you have a separate cluster network, repeat the commands with the appropriate zone.
On all Ceph Object Gateway nodes, open the relevant port or ports on the public network.
To open the default Ansible configured port of 8080:
[root@gateway ~]# firewall-cmd --zone=public --add-port=8080/tcp
[root@gateway ~]# firewall-cmd --zone=public --add-port=8080/tcp --permanent
To limit access based on the source address:
firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="8080" accept"

firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="8080" accept" --permanent
Replace:
- IP_address with the network address of the object gateway node.
- netmask_prefix with the netmask in CIDR notation.
Example
[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="8080" accept"

[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="8080" accept" --permanent
Optional. If you installed Ceph Object Gateway using Ansible and changed the default port that Ansible configures Ceph Object Gateway to use from 8080, for example, to port 80, open this port:
[root@gateway ~]# firewall-cmd --zone=public --add-port=80/tcp
[root@gateway ~]# firewall-cmd --zone=public --add-port=80/tcp --permanent
To limit access based on the source address, run the following commands:
firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="80" accept"

firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="80" accept" --permanent
Replace:
- IP_address with the network address of the object gateway node.
- netmask_prefix with the netmask in CIDR notation.
Example
[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="80" accept"

[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="80" accept" --permanent
Optional. To use SSL/TLS, open port 443:
[root@gateway ~]# firewall-cmd --zone=public --add-port=443/tcp
[root@gateway ~]# firewall-cmd --zone=public --add-port=443/tcp --permanent
To limit access based on the source address, run the following commands:
firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="443" accept"

firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="443" accept" --permanent
Replace:
- IP_address with the network address of the object gateway node.
- netmask_prefix with the netmask in CIDR notation.
Example
[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="443" accept"

[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="443" accept" --permanent
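After opening the ports, you can verify the runtime and permanent firewall configuration on each node, for example:
# firewall-cmd --zone=public --list-all
# firewall-cmd --zone=public --list-all --permanent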
Additional Resources
- For more information about the public and cluster networks, see Verifying the Network Configuration for Red Hat Ceph Storage.
- For additional details on firewalld, see the Using Firewalls chapter in the Security Guide for Red Hat Enterprise Linux 7.
2.10. Creating an Ansible user with sudo access
Ansible must be able to log in to all the Red Hat Ceph Storage (RHCS) nodes as a user that has root privileges to install software and create configuration files without prompting for a password. You must create an Ansible user with password-less root access on all nodes in the storage cluster when deploying and configuring a Red Hat Ceph Storage cluster with Ansible.
Prerequisite
- Having root or sudo access to all nodes in the storage cluster.
Procedure
Log in to a Ceph node as the root user:
ssh root@$HOST_NAME
Replace $HOST_NAME with the host name of the Ceph node.
Example
# ssh root@mon01
Enter the root password when prompted.
Create a new Ansible user:
adduser $USER_NAME
Replace $USER_NAME with the new user name for the Ansible user.
Example
# adduser admin
Important: Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks.
Set a new password for this user:
# passwd $USER_NAME
Replace $USER_NAME with the new user name for the Ansible user.
Example
# passwd admin
Enter the new password twice when prompted.
Configure sudo access for the newly created user:
cat << EOF >/etc/sudoers.d/$USER_NAME
$USER_NAME ALL = (root) NOPASSWD:ALL
EOF
Replace $USER_NAME with the new user name for the Ansible user.
Example
# cat << EOF >/etc/sudoers.d/admin
admin ALL = (root) NOPASSWD:ALL
EOF
Assign the correct file permissions to the new file:
chmod 0440 /etc/sudoers.d/$USER_NAME
Replace $USER_NAME with the new user name for the Ansible user.
Example
# chmod 0440 /etc/sudoers.d/admin
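To confirm that password-less sudo works for the new user, you can switch to the user and run a command through sudo in non-interactive mode; this sketch assumes the user is named admin:
# su - admin -c 'sudo -n whoami'
The command prints root when the sudoers entry is in effect; with -n, sudo fails instead of prompting for a password if it is not.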
Additional Resources
- The Adding a New User section in the System Administrator’s Guide for Red Hat Enterprise Linux 7.
2.11. Enabling Password-less SSH for Ansible
Generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.
Prerequisites
- Section 2.10, “Creating an Ansible user with sudo access”
Procedure
Perform the following steps from the Ansible administration node as the Ansible user.
Generate the SSH key pair, accept the default file name and leave the passphrase empty:
[user@admin ~]$ ssh-keygen
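If you script this step, ssh-keygen can also run non-interactively; a sketch that writes the default RSA key pair with an empty passphrase:
[user@admin ~]$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa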
Copy the public key to all nodes in the storage cluster:
ssh-copy-id $USER_NAME@$HOST_NAME
Replace:
- $USER_NAME with the new user name for the Ansible user.
- $HOST_NAME with the host name of the Ceph node.
Example
[user@admin ~]$ ssh-copy-id admin@ceph-mon01
Create and edit the ~/.ssh/config file.
Important: By creating and editing the ~/.ssh/config file, you do not have to specify the -u $USER_NAME option each time you execute the ansible-playbook command.
Create the SSH config file:
[user@admin ~]$ touch ~/.ssh/config
Open the config file for editing. Set the Hostname and User options for each node in the storage cluster:
Host node1
   Hostname $HOST_NAME
   User $USER_NAME
Host node2
   Hostname $HOST_NAME
   User $USER_NAME
...
Replace:
- $HOST_NAME with the host name of the Ceph node.
- $USER_NAME with the new user name for the Ansible user.
Example
Host node1
   Hostname monitor
   User admin
Host node2
   Hostname osd
   User admin
Host node3
   Hostname gateway
   User admin
Set the correct file permissions for the ~/.ssh/config file:
[admin@admin ~]$ chmod 600 ~/.ssh/config
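To verify that key-based login works before running any playbooks, you can force SSH batch mode, which fails instead of falling back to a password prompt. This sketch assumes the node1 alias from the example above:
[user@admin ~]$ ssh -o BatchMode=yes node1 true && echo 'key-based login OK'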
Additional Resources
- The ssh_config(5) manual page
- The OpenSSH chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7