
Chapter 3. Selecting OSD Hardware


Red Hat reference architectures generally involve a 3x3 matrix reflecting small, medium or large clusters optimized for one of IOPS, throughput or capacity. Each scenario balances tradeoffs between the following factors (a rough sizing sketch follows the list):

  • Server density vs. OSD ratio.
  • Network bandwidth vs. server density.
  • CPU vs. OSD ratio.
  • RAM vs. OSD ratio.
  • SSD write journal (if applicable) vs. OSD ratio.
  • Host bus adapter/controller tradeoffs.
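The tradeoffs above translate into a per-node sizing exercise. The following Python sketch, which is not taken from the referenced documents, estimates CPU, RAM, journal SSD and network requirements for a single OSD node. Every ratio it uses (cores per OSD, RAM per OSD, journal fan-out, per-drive throughput) is an illustrative assumption; substitute the values from the reference architecture you are following.

# Minimal per-node sizing sketch for the tradeoffs listed above.
# All ratios below are illustrative placeholders, not Red Hat guidance;
# replace them with the values from your chosen reference architecture.

def size_osd_node(osd_count, osd_size_tb,
                  cores_per_osd=1.0,       # assumed CPU-to-OSD ratio
                  ram_gb_per_osd=2.0,      # assumed RAM-to-OSD ratio
                  baseline_ram_gb=16,      # assumed OS/daemon overhead
                  osds_per_journal_ssd=4,  # assumed SSD write-journal fan-out
                  mb_s_per_osd=110):       # assumed per-HDD sequential throughput
    """Return a rough per-node resource estimate for a throughput-oriented node."""
    cpu_cores = osd_count * cores_per_osd
    ram_gb = baseline_ram_gb + osd_count * ram_gb_per_osd
    journal_ssds = -(-osd_count // osds_per_journal_ssd)  # ceiling division
    # Client-facing bandwidth should roughly match aggregate disk throughput;
    # replication traffic adds to this on the cluster-side network.
    network_gbit = osd_count * mb_s_per_osd * 8 / 1000
    return {
        "raw_capacity_tb": osd_count * osd_size_tb,
        "cpu_cores": cpu_cores,
        "ram_gb": ram_gb,
        "journal_ssds": journal_ssds,
        "min_network_gbit": round(network_gbit, 1),
    }

if __name__ == "__main__":
    # Example: a 12-bay server populated with 8 TB drives.
    print(size_osd_node(osd_count=12, osd_size_tb=8))

With these assumed ratios, a 12-bay node with 8 TB drives works out to roughly 12 cores, 40 GB of RAM, 3 journal SSDs and about 10 Gbit/s of client-facing bandwidth. Treat such figures only as a starting point for the vendor-specific guides below.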
Note

The following documents do not constitute a recommendation for any particular hardware vendor. These documents reflect Ceph storage clusters that have been deployed, configured and tested. The intent of providing these documents is to illustrate specific hardware selection, as well as Ceph configuration and performance characteristics for real world use cases.

3.1. Intel Hardware Guide

Based on extensive testing by Red Hat and Intel with a variety of hardware providers, the following document provides general performance, capacity, and sizing guidance for servers based on Intel® Xeon® processors, optionally equipped with Intel® Solid State Drive Data Center (Intel® SSD DC) Series.

Red Hat Ceph Storage on servers with Intel® processors and SSDs

3.2. Supermicro Server Family Guide

To address the need for performance, capacity, and sizing guidance, Red Hat and Supermicro have performed extensive testing to characterize optimized configurations for deploying Red Hat Ceph Storage on a range of Supermicro storage servers. For details, see the following document:

Red Hat Ceph Storage clusters on Supermicro storage servers

Note

This document was published in August 2015. It does not cover testing of features introduced in Ceph 2.0 and later.

3.3. Quanta/QCT Server Family Guide

Use of standard hardware components helps ensure low costs, while QCT’s innovative development model enables organizations to iterate more rapidly on a family of server designs optimized for different types of Ceph workloads. Red Hat Ceph Storage on QCT servers lets organizations scale out to thousands of nodes, with the ability to scale storage performance and capacity independently, depending on the needs of the application and the chosen storage server platform.

To address the need for performance, capacity, and sizing guidance, Red Hat and QCT (Quanta Cloud Technology) have performed extensive testing to characterize optimized configurations for deploying Red Hat Ceph Storage on a range of QCT servers.

For details, see the following document:

Red Hat Ceph Storage on QCT Servers

3.4. Cisco C3160 Guide

This document provides an overview of the use of a Cisco UCS C3160 server with Ceph in a scaling multinode setup with petabytes (PB) of storage. It demonstrates the suitability of the Cisco UCS C3160 in object and block storage environments, and the server’s dense storage capabilities, performance, and scalability as you add more nodes.

Cisco UCS C3160 High-Density Rack Server with Red Hat Ceph Storage

3.5. Samsung Sierra Flash Arrays Guide

To address the need of Ceph users to effectively deploy all-flash Ceph clusters optimized for performance, Samsung Semiconductor Inc. and Red Hat have performed extensive testing to characterize optimized configurations for deploying Red Hat Ceph Storage on Samsung NVMe SSDs deployed in a Samsung NVMe Reference Architecture. For details, see:

Red Hat Ceph Storage on Samsung NVMe SSDs
