Chapter 5. Infrastructure Security
The scope of this guide is Red Hat Ceph Storage. However, a proper Red Hat Ceph Storage security plan requires consideration of the following prerequisites.
Prerequisites
- Review the Using SELinux Guide within the Product Documentation for Red Hat Enterprise Linux for your OS version, on the Red Hat Customer Portal.
- Review the Security Hardening Guide within the Product Documentation for Red Hat Enterprise Linux for your OS version, on the Red Hat Customer Portal.
5.1. Administration
Administering a Red Hat Ceph Storage cluster involves using command-line tools. The CLI tools require an administrator key for administrator access privileges to the cluster. By default, Ceph stores the administrator key in the /etc/ceph directory. The default file name is ceph.client.admin.keyring. Take steps to secure the keyring so that only a user with administrative privileges to the cluster may access the keyring.
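As a minimal hardening sketch, assuming the default keyring location and that administrative commands run as root, restrict the keyring's ownership and permissions so that only root can read it:
# chown root:root /etc/ceph/ceph.client.admin.keyring
# chmod 600 /etc/ceph/ceph.client.admin.keyring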
5.2. Network Communication
Red Hat Ceph Storage provides two networks:
- A public network.
- A cluster network.
All Ceph daemons and Ceph clients require access to the public network, which is part of the storage access security zone. By contrast, only the OSD daemons require access to the cluster network, which is part of the Ceph cluster security zone.

The Ceph configuration contains the public_network and cluster_network settings. For hardening purposes, specify the IP address and the netmask using CIDR notation. Specify multiple comma-delimited IP address and netmask entries if the cluster will have multiple subnets.
public_network = <public-network/netmask>[,<public-network/netmask>]
cluster_network = <cluster-network/netmask>[,<cluster-network/netmask>]
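For example, assuming hypothetical subnets of 192.168.0.0/24 for storage access and 192.168.1.0/24 for cluster traffic, the settings might look like this:
public_network = 192.168.0.0/24
cluster_network = 192.168.1.0/24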
See the Ceph network configuration section of the Red Hat Ceph Storage Configuration Guide for details.
5.3. Hardening the Network Service
System administrators deploy Red Hat Ceph Storage clusters on Red Hat Enterprise Linux 8 Server. SELinux is on by default and the firewall blocks all inbound traffic except for the SSH service on port 22; however, you must verify that this is the case so that no unauthorized ports are open and no unnecessary services are enabled.
On each server node, execute the following:
Start the firewalld service, enable it to run on boot, and ensure that it is running:
# systemctl enable firewalld
# systemctl start firewalld
# systemctl status firewalld
Take an inventory of all open ports.
# firewall-cmd --list-all
On a new installation, the sources: section should be blank, indicating that no ports have been opened specifically. The services section should list ssh and dhcpv6-client, indicating that the SSH service (and port 22) and the DHCPv6 client are enabled.
sources:
services: ssh dhcpv6-client
Ensure SELinux is running and in Enforcing mode:
# getenforce
Enforcing
If SELinux is in Permissive mode, set it to Enforcing:
# setenforce 1
If SELinux is not running, enable it. See the Using SELinux Guide within the Product Documentation for Red Hat Enterprise Linux for your OS version, on the Red Hat Customer Portal.
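As a sketch, assuming the standard /etc/selinux/config file, you can enable SELinux persistently by setting the mode and rebooting; note that a system where SELinux was previously disabled may also require a filesystem relabel:
# sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config
# reboot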
Each Ceph daemon uses one or more ports to communicate with other daemons in the Red Hat Ceph Storage cluster. In some cases, you may change the default port settings. Administrators typically only change the default port with the Ceph Object Gateway, or ceph-radosgw, daemon.
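For example, a sketch of changing the default Ceph Object Gateway port, assuming the gateway uses the beast frontend and that the 8080 value is illustrative:
# ceph config set client.rgw rgw_frontends "beast port=8080"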
TCP/UDP Port | Daemon | Configuration Option |
---|---|---|
8080 | ceph-radosgw | rgw_frontends |
6800-7300 | ceph-osd | ms_bind_port_min to ms_bind_port_max |
6800-7300 | ceph-mgr | ms_bind_port_min to ms_bind_port_max |
6800-7300 | ceph-mds | ms_bind_port_min to ms_bind_port_max |
3300 | ceph-mon | N/A |
6789 | ceph-mon | N/A |
The Ceph Storage Cluster daemons include ceph-mon, ceph-mgr, and ceph-osd. These daemons and their hosts comprise the Ceph cluster security zone, which should use its own subnet for hardening purposes.
The Ceph clients include ceph-radosgw, ceph-mds, ceph-fuse, libcephfs, rbd, librbd, and librados. These daemons and their hosts comprise the storage access security zone, which should use its own subnet for hardening purposes.
On the Ceph Storage Cluster zone’s hosts, consider enabling only hosts running Ceph clients to connect to the Ceph Storage Cluster daemons. For example:
# firewall-cmd --zone=<zone-name> --add-rich-rule="rule family="ipv4" \
source address="<ip-address>/<netmask>" port protocol="tcp" \
port="<port-number>" accept"
Replace <zone-name> with the zone name, <ip-address> with the IP address, <netmask> with the subnet mask in CIDR notation, and <port-number> with the port number or range. Repeat the process with the --permanent flag so that the changes persist after reboot. For example:
# firewall-cmd --zone=<zone-name> --add-rich-rule="rule family="ipv4" \
source address="<ip-address>/<netmask>" port protocol="tcp" \
port="<port-number>" accept" --permanent
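For example, assuming a hypothetical public zone and an illustrative client subnet of 192.168.0.0/24 that needs to reach the Ceph Monitor port 6789, the rule might look like this:
# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.0/24" port protocol="tcp" \
port="6789" accept" --permanent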
5.4. Reporting
Red Hat Ceph Storage provides basic system monitoring and reporting with the ceph-mgr daemon plug-ins, namely, the RESTful API, the dashboard, and other plug-ins such as Prometheus and Zabbix. Ceph collects this information using collectd and sockets to retrieve settings, configuration details, and statistical information.
In addition to default system behavior, system administrators may configure collectd to report on security matters, such as configuring the IP-Tables or ConnTrack plug-ins to track open ports and connections respectively.
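As a minimal sketch of such a configuration, assuming collectd is installed and using an illustrative file name of /etc/collectd.d/security.conf:
# Load the conntrack plug-in to track connection-table usage
LoadPlugin conntrack
# Load the iptables plug-in to count traffic matching firewall rules
LoadPlugin iptables
<Plugin iptables>
  # Count packets traversing the INPUT chain of the filter table
  Chain "filter" "INPUT"
</Plugin>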
System administrators may also retrieve configuration settings at runtime. See Viewing the Ceph configuration at runtime.
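For example, to view the running configuration of a daemon, a sketch assuming a monitor daemon named mon.host01 and run from that daemon's host:
# ceph daemon mon.host01 config show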
5.5. Auditing Administrator Actions
An important aspect of system security is to periodically audit administrator actions on the cluster. Red Hat Ceph Storage stores a history of administrator actions in the /var/log/ceph/CLUSTER_FSID/ceph.audit.log file. Run the following command on the monitor host.
Example
[root@host04 ~]# cat /var/log/ceph/6c58dfb8-4342-11ee-a953-fa163e843234/ceph.audit.log
Each entry will contain:
- Timestamp: Indicates when the command was executed.
- Monitor Address: Identifies the monitor modified.
- Client Node: Identifies the client node initiating the change.
- Entity: Identifies the user making the change.
- Command: Identifies the command executed.
The following is example output from the Ceph audit log:
2023-09-01T10:20:21.445990+0000 mon.host01 (mon.0) 122301 : audit [DBG] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2023-09-01T10:20:21.446972+0000 mon.host01 (mon.0) 122302 : audit [INF] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2023-09-01T10:20:21.453790+0000 mon.host01 (mon.0) 122303 : audit [INF] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea'
2023-09-01T10:20:21.457119+0000 mon.host01 (mon.0) 122304 : audit [DBG] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2023-09-01T10:20:30.671816+0000 mon.host01 (mon.0) 122305 : audit [DBG] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
In distributed systems such as Ceph, actions may begin on one instance and get propagated to other nodes in the cluster. When the action begins, the log indicates dispatch. When the action ends, the log indicates finished.
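For example, to trace a single administrator's actions from dispatch to finished, you might filter the audit log by the entity field, using the FSID and entity from the example above:
# grep "entity='mgr.host01.mcadea'" /var/log/ceph/6c58dfb8-4342-11ee-a953-fa163e843234/ceph.audit.log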