Chapter 3. Selecting OSD Hardware


Red Hat reference architectures generally involve a 3x3 matrix of configurations: small, medium, or large clusters, each optimized for one of IOPS, throughput, or capacity. Each scenario balances tradeoffs between the following factors (a sizing sketch follows the list):

  • Server density vs. OSD ratio.
  • Network bandwidth vs. server density.
  • CPU vs. OSD ratio.
  • RAM vs. OSD ratio.
  • SSD write journal (if applicable) vs. OSD ratio.
  • Host bus adapter/controller tradeoffs.
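
To make these ratios concrete, the following back-of-envelope Python sketch estimates per-host resources for a given OSD density. The constants and the helper size_osd_host are illustrative assumptions (roughly 1 GHz of CPU per OSD, 1 GB of RAM per TB of raw capacity, and a 4:1 OSD-to-journal-SSD ratio), not figures taken from the reference architectures cited in this chapter.

    # Illustrative OSD host sizing sketch. All ratios are assumed rules
    # of thumb for demonstration; consult the vendor guides in this
    # chapter for tested configurations.
    CPU_GHZ_PER_OSD = 1.0      # assumption: ~1 GHz of CPU per OSD daemon
    RAM_GB_PER_TB = 1.0        # assumption: ~1 GB RAM per TB of raw capacity
    OSDS_PER_JOURNAL_SSD = 4   # assumption: one journal SSD serves ~4 OSDs

    def size_osd_host(osds_per_host: int, tb_per_osd: float) -> dict:
        """Estimate per-host resource needs for a given OSD density."""
        raw_tb = osds_per_host * tb_per_osd
        return {
            "raw_capacity_tb": raw_tb,
            "cpu_ghz": osds_per_host * CPU_GHZ_PER_OSD,
            "ram_gb": raw_tb * RAM_GB_PER_TB,
            # Ceiling division: partial groups still need a whole SSD.
            "journal_ssds": -(-osds_per_host // OSDS_PER_JOURNAL_SSD),
        }

    # Example: a throughput-oriented host with 12 x 4 TB OSDs.
    print(size_osd_host(osds_per_host=12, tb_per_osd=4.0))

Raising the OSD count per host scales the CPU, RAM, and journal requirements linearly; this is the density tradeoff the list above captures.
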
Note

The following documents do not constitute a recommendation for any particular hardware vendor; they reflect Ceph storage clusters that have been deployed, configured, and tested. The intent is to illustrate specific hardware selections, along with Ceph configuration and performance characteristics, for real-world use cases.

3.1. Intel Hardware Guide

Based on extensive testing by Red Hat and Intel with a variety of hardware providers, the following document provides general performance, capacity, and sizing guidance for servers based on Intel® Xeon® processors, optionally equipped with Intel® Solid State Drive Data Center (Intel® SSD DC) Series drives.

Red Hat Ceph Storage on servers with Intel® processors and SSDs.

3.2. Supermicro Server Family Guide

To address the need for performance, capacity, and sizing guidance, Red Hat and Supermicro have performed extensive testing to characterize optimized configurations for deploying Red Hat Ceph Storage on a range of Supermicro storage servers. For details, see the following document:

Red Hat Ceph Storage clusters on Supermicro storage servers

Note

This document was published in August 2015. It does not cover testing for features introduced in Red Hat Ceph Storage 2.0 and later.

3.3. Quanta/QCT Server Family Guide

Use of standard hardware components helps ensure low costs, while QCT’s innovative development model enables organizations to iterate more rapidly on a family of server designs optimized for different types of Ceph workloads. Red Hat Ceph Storage on QCT servers lets organizations scale out to thousands of nodes, with the ability to scale storage performance and capacity independently, depending on the needs of the application and the chosen storage server platform.

To address the need for performance, capacity, and sizing guidance, Red Hat and QCT (Quanta Cloud Technology) have performed extensive testing to characterize optimized configurations for deploying Red Hat Ceph Storage on a range of QCT servers.

For details, see the following document:

Red Hat Ceph Storage on QCT Servers

3.4. Cisco C3160 Guide

This document provides an overview of using a Cisco UCS C3160 server with Ceph in a scalable, multi-node setup with petabytes (PB) of storage. It demonstrates the suitability of the Cisco UCS C3160 for object and block storage environments, and shows the server's dense storage capability, performance, and scalability as more nodes are added.

Cisco UCS C3160 High-Density Rack Server with Red Hat Ceph Storage

3.5. Samsung Sierra Flash Arrays Guide

To help Ceph users effectively deploy all-flash Ceph clusters optimized for performance, Samsung Semiconductor Inc. and Red Hat have performed extensive testing to characterize optimized configurations for deploying Red Hat Ceph Storage on Samsung NVMe SSDs in the Samsung NVMe Reference Architecture. For details, see:

Red Hat Ceph Storage on Samsung NVMe SSDs
