Chapter 4. Server and rack solutions
Hardware vendors have responded to the enthusiasm around Ceph by providing both optimized server-level and rack-level solution SKUs. Validated through joint testing with Red Hat, these solutions offer predictable price-to-performance ratios for Ceph deployments, with a convenient modular approach to expand Ceph storage for specific workloads.
Typical rack-level solutions include:
- Network switching: Redundant network switching interconnects the cluster and provides access to clients.
- Ceph MON nodes: The Ceph monitor is a datastore for the health of the entire cluster and contains the cluster log. A minimum of three monitor nodes is strongly recommended for cluster quorum in production.
- Ceph OSD hosts: Ceph OSD hosts house the storage capacity for the cluster, with one or more OSDs running per individual storage device. OSD hosts are selected and configured differently depending on both workload optimization and the data devices installed: HDDs, SSDs, or NVMe SSDs.
- Red Hat Ceph Storage: Many vendors provide a capacity-based subscription for Red Hat Ceph Storage bundled with both server and rack-level solution SKUs.
Red Hat recommends reviewing the Red Hat Ceph Storage: Supported Configurations article before committing to any server or rack solution. Contact Red Hat support for additional assistance.
IOPS-optimized solutions
With the growing use of flash storage, organizations increasingly host IOPS-intensive workloads on Ceph storage clusters, emulating high-performance public cloud solutions with private cloud storage. These workloads commonly involve structured data from MySQL-, MariaDB-, or PostgreSQL-based applications.
Typical servers include the following elements:
- CPU: 10 cores per NVMe SSD, assuming a 2 GHz CPU.
- RAM: 16 GB baseline, plus 5 GB per OSD.
- Networking: 10 Gigabit Ethernet (GbE) per 2 OSDs.
- OSD media: High-performance, high-endurance enterprise NVMe SSDs.
- OSDs: Two per NVMe SSD.
- BlueStore WAL/DB: High-performance, high-endurance enterprise NVMe SSD, co-located with OSDs.
- Controller: Native PCIe bus.
For non-NVMe SSDs, use two CPU cores per SSD OSD. The sizing sketch below applies these ratios.
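To make the ratios concrete, the following Python sketch estimates per-host resources from the NVMe drive count. It is a minimal illustration of the guidelines above; the function name and output format are assumptions for this example, not part of any vendor SKU.

```python
# A minimal sizing sketch applying the ratios above to an all-NVMe host.
# The function name and structure are illustrative only.

def size_iops_node(nvme_drives: int) -> dict:
    """Rough per-host resource estimate for an IOPS-optimized Ceph OSD host."""
    osds = nvme_drives * 2              # two OSDs per NVMe SSD
    cores = nvme_drives * 10            # 10 cores (2 GHz) per NVMe SSD
    ram_gb = 16 + 5 * osds              # 16 GB baseline plus 5 GB per OSD
    nics_10gbe = -(-osds // 2)          # one 10 GbE link per 2 OSDs (ceiling)
    return {"osds": osds, "cpu_cores": cores,
            "ram_gb": ram_gb, "nics_10gbe": nics_10gbe}

# Example: a host with four NVMe SSDs.
print(size_iops_node(4))
# {'osds': 8, 'cpu_cores': 40, 'ram_gb': 56, 'nics_10gbe': 4}
```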
| Vendor | Small (250TB) | Medium (1PB) | Large (2PB+) |
|---|---|---|---|
| Supermicro [a] | SYS-5038MR-OSD006P | N/A | N/A |

[a] See Supermicro® Total Solution for Ceph for details.
Throughput-optimized solutions
Throughput-optimized Ceph solutions are usually centered around semi-structured or unstructured data. Large-block sequential I/O is typical.
Typical server elements include:
- CPU: 0.5 cores per HDD, assuming a 2 GHz CPU.
- RAM: 16 GB baseline, plus 5 GB per OSD.
- Networking: 10 GbE per 12 OSDs (each for the client-facing and cluster-facing networks); see the link-count sketch after this list.
- OSD media: 7,200 RPM enterprise HDDs.
- OSDs: One per HDD.
- BlueStore WAL/DB: High-performance, high-endurance enterprise NVMe SSD, co-located with OSDs.
- Host bus adapter (HBA): Just a bunch of disks (JBOD).
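The 10 GbE-per-12-OSDs guideline follows from the sequential throughput of the underlying disks. The sketch below checks that arithmetic, assuming roughly 100 MB/s sustained sequential throughput per 7,200 RPM HDD; that figure is an approximation for illustration, not a vendor specification.

```python
# A rough sanity check of the "10 GbE per 12 OSDs" guideline, assuming
# ~100 MB/s sustained sequential throughput per 7,200 RPM HDD.

HDD_MBPS = 100        # assumed per-HDD sequential throughput, MB/s
LINK_MBPS = 1250      # 10 GbE line rate: 10,000 Mb/s / 8 bits per byte

def links_per_network(hdd_count: int) -> int:
    """10 GbE links needed per network to carry aggregate HDD throughput."""
    aggregate_mbps = hdd_count * HDD_MBPS
    return -(-aggregate_mbps // LINK_MBPS)   # ceiling division

# Twelve HDDs stream roughly 1,200 MB/s, which just fits one 10 GbE link,
# hence one link per 12 OSDs on each network.
print(links_per_network(12))  # 1
print(links_per_network(36))  # 3
```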
Several vendors provide pre-configured server and rack-level solutions for throughput-optimized Ceph workloads. Red Hat has conducted extensive testing and evaluation of servers from Supermicro and Quanta Cloud Technologies (QCT).
Rack-level SKUs:

| Vendor | Small (250TB) | Medium (1PB) | Large (2PB+) |
|---|---|---|---|
| Supermicro [a] | SRS-42E112-Ceph-03 | SRS-42E136-Ceph-03 | SRS-42E136-Ceph-03 |

Server-level SKUs:

| Vendor | Small (250TB) | Medium (1PB) | Large (2PB+) |
|---|---|---|---|
| Supermicro [a] | SSG-6028R-OSD072P | SSG-6048R-OSD216P | SSG-6048R-OSD216P |
| QCT [b] | QxStor RCT-200 | QxStor RCT-400 | QxStor RCT-400 |

[a] See Supermicro® Total Solution for Ceph for details.
[b] See QCT: QxStor Red Hat Ceph Storage Edition for details.
| Vendor | Small (250TB) | Medium (1PB) | Large (2PB+) |
|---|---|---|---|
| Dell | PowerEdge R730XD [a] | DSS 7000 [b], twin node | DSS 7000, twin node |
| Cisco | UCS C240 M4 | UCS C3260 [c] | UCS C3260 [d] |
| Lenovo | System x3650 M5 | System x3650 M5 | N/A |

[b] See Dell EMC DSS 7000 Performance & Sizing Guide for Red Hat Ceph Storage for details.
[c] See Red Hat Ceph Storage hardware reference architecture for details.
Cost and capacity-optimized solutions
Cost- and capacity-optimized solutions typically focus on higher capacity, or longer archival scenarios. Data can be either semi-structured or unstructured. Workloads include media archives, big data analytics archives, and machine image backups. Large-block sequential I/O is typical.
Solutions typically include the following elements (a usable-capacity estimate follows the list):
- CPU: 0.5 cores per HDD, assuming a 2 GHz CPU.
- RAM: 16 GB baseline, plus 5 GB per OSD.
- Networking: 10 GbE per 12 OSDs (each for client- and cluster-facing networks).
- OSD media: 7,200 RPM enterprise HDDs.
- OSDs: One per HDD.
- BlueStore WAL/DB: Co-located on the HDD.
- HBA: JBOD.
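When sizing a capacity-focused cluster, it helps to distinguish raw from usable capacity. The sketch below is a minimal estimate assuming 3x replication (the Ceph default for replicated pools); it ignores BlueStore metadata overhead and the free-space headroom a production cluster should keep, and the host and drive counts are purely illustrative.

```python
# A minimal usable-capacity sketch, assuming 3x replication (the Ceph
# default for replicated pools). Ignores BlueStore overhead and the
# free-space headroom a production cluster should maintain.

def usable_tb(hosts: int, hdds_per_host: int, hdd_tb: float,
              replicas: int = 3) -> float:
    """Approximate usable capacity in TB after replication."""
    raw_tb = hosts * hdds_per_host * hdd_tb
    return raw_tb / replicas

# Example: 7 hosts, each with 36 x 8 TB HDDs, is ~2 PB raw but
# only ~672 TB usable at the default 3x replication.
print(usable_tb(7, 36, 8.0))  # 672.0
```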
Supermicro and QCT provide pre-configured server and rack-level solution SKUs for cost- and capacity-focused Ceph workloads.
Rack-level SKUs:

| Vendor | Small (250TB) | Medium (1PB) | Large (2PB+) |
|---|---|---|---|
| Supermicro [a] | N/A | SRS-42E136-Ceph-03 | SRS-42E172-Ceph-03 |

Server-level SKUs:

| Vendor | Small (250TB) | Medium (1PB) | Large (2PB+) |
|---|---|---|---|
| Supermicro [a] | N/A | SSG-6048R-OSD216P | SSG-6048R-OSD360P |
| QCT [b] | N/A | QxStor RCC-400 | QxStor RCC-400 |

[a] See Supermicro® Total Solution for Ceph for details.
[b] See QCT: QxStor Red Hat Ceph Storage Edition for details.
| Vendor | Small (250TB) | Medium (1PB) | Large (2PB+) |
|---|---|---|---|
| Dell | N/A | DSS 7000, twin node | DSS 7000, twin node |
| Cisco | N/A | UCS C3260 | UCS C3260 |
| Lenovo | N/A | System x3650 M5 | N/A |
Additional Resources
- Red Hat Ceph Storage on Samsung NVMe SSDs
- Red Hat Ceph Storage on the InfiniFlash All-Flash Storage System from SanDisk
- Deploying MySQL Databases on Red Hat Ceph Storage
- Intel® Data Center Blocks for Cloud – Red Hat OpenStack Platform with Red Hat Ceph Storage
- Red Hat Ceph Storage on QCT Servers
- Red Hat Ceph Storage on Servers with Intel Processors and SSDs