Chapter 4. Minimum Recommendations


Ceph can run on inexpensive commodity hardware. Small production clusters and development clusters can run successfully with modest hardware.

Process     Criteria          Minimum Recommended
------------------------------------------------------------------------
calamari    Processor         1x AMD64 and Intel 64 quad-core
            RAM               4 GB minimum per instance
            Disk Space        10 GB per instance
            Network           2x 1GB Ethernet NICs
------------------------------------------------------------------------
ceph-osd    Processor         1x AMD64 and Intel 64
            RAM               2 GB of RAM per daemon
            Volume Storage    1x storage drive per daemon
            Journal           1x SSD partition per daemon (optional)
            Network           2x 1GB Ethernet NICs
------------------------------------------------------------------------
ceph-mon    Processor         1x AMD64 and Intel 64
            RAM               1 GB per daemon
            Disk Space        10 GB per daemon
            Network           2x 1GB Ethernet NICs

Tip

If you are running an OSD with a single disk, create a partition for your volume storage that is separate from the partition containing the OS. Generally, Red Hat recommends separate disks for the OS and the volume storage.
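As a quick back-of-the-envelope illustration of the per-daemon figures in the table above, the following Python sketch totals the minimum RAM, drives, and journal partitions for a single OSD node. It is not part of the original recommendations: the constants simply restate the table values, and the number of OSD daemons per host is an assumed example you would replace with your own layout.

# Illustrative sizing helper based on the minimum recommendations table above.
# The constants restate the table's per-daemon values; the daemon count passed
# in is an assumed example layout, not a Red Hat recommendation.

OSD_RAM_GB_PER_DAEMON = 2         # ceph-osd: 2 GB of RAM per daemon
OSD_DRIVES_PER_DAEMON = 1         # ceph-osd: 1x storage drive per daemon
OSD_JOURNAL_PARTS_PER_DAEMON = 1  # ceph-osd: 1x SSD journal partition per daemon (optional)

def osd_node_minimums(osd_daemons: int) -> dict:
    """Return the table-derived minimums for a host running `osd_daemons` OSD daemons."""
    return {
        "ram_gb": osd_daemons * OSD_RAM_GB_PER_DAEMON,
        "storage_drives": osd_daemons * OSD_DRIVES_PER_DAEMON,
        "ssd_journal_partitions": osd_daemons * OSD_JOURNAL_PARTS_PER_DAEMON,
        "network": "2x 1GB Ethernet NICs",
    }

if __name__ == "__main__":
    # Example: a host with 12 OSD daemons (assumed layout).
    print(osd_node_minimums(12))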

4.1. Production Clusters

Production clusters for petabyte scale data storage may also use commodity hardware, but should have considerably more memory, processing power and data storage to account for heavy traffic loads.

4.1.1. Calamari

The administration server hardware requirements vary with the size of your cluster. A minimum recommended hardware configuration for a Calamari server includes at least 4 GB of RAM, a dual-core CPU on the x86_64 architecture, and enough network throughput to handle communication with Ceph hosts. The hardware requirements scale linearly with the number of Ceph servers, so if you intend to run a fairly large cluster, ensure that you have enough RAM, processing power, and network throughput for your administration node.

4.1.2. Monitors

The Ceph monitor is a data store for the health of the entire cluster, and contains the cluster log. Red Hat strongly recommends using at least three monitors for a cluster quorum in production. Monitor nodes typically have fairly modest CPU and memory requirements. A 1 rack unit (1U) server with a low-cost CPU (such as a processor with 6 cores @ 1.7 GHz), 16 GB of RAM, and Gigabit Ethernet (GbE) networking should suffice in most cases. Because logs are stored on the local disks of the monitor node, it is important to make sure that sufficient disk space is provisioned. The monitor store should be placed on a solid-state drive (SSD), because the leveldb store can become I/O bound.
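If you want to verify that an existing monitor's store actually sits on non-rotational media, a small check along the following lines can help. This sketch is not from the original document: it assumes the default monitor store location under /var/lib/ceph/mon and relies on the standard findmnt and lsblk utilities.

#!/usr/bin/env python3
# Illustrative check: report whether the filesystem holding the Ceph monitor
# store is backed by rotational media or an SSD. Assumes the default store
# location under /var/lib/ceph/mon (adjust MON_STORE for your deployment).

import subprocess

MON_STORE = "/var/lib/ceph/mon"

# Resolve the block device backing the filesystem that contains MON_STORE.
device = subprocess.check_output(
    ["findmnt", "--noheadings", "--output", "SOURCE", "--target", MON_STORE],
    text=True,
).strip()

# lsblk reports ROTA=1 for rotational (spinning) media and ROTA=0 for SSDs.
rota = subprocess.check_output(
    ["lsblk", "--noheadings", "--nodeps", "--output", "ROTA", device],
    text=True,
).strip()

kind = "rotational disk" if rota == "1" else "SSD or other non-rotational device"
print(f"{MON_STORE} is backed by {device} ({kind})")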

4.1.3. OSDs

Ensure that your network interfaces, controllers, and drive throughput do not create bottlenecks; for example, fast drives paired with a network too slow to accommodate them. SSDs are typically used for journals and fast pools. Where the use of SSDs is write intensive (for example, journals), make sure you select high-performance SSDs.
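To make the bottleneck reasoning concrete, the following sketch compares the aggregate sequential throughput of a node's data drives against its network bandwidth. It is purely illustrative and not part of the original recommendations: the drive count and per-drive throughput are assumed example numbers, and the table's "1GB Ethernet" NICs are interpreted as Gigabit Ethernet links; replace all of these with measurements from your own hardware.

# Illustrative bottleneck check for an OSD node. The drive numbers below are
# assumed examples, not recommendations from this document.

DRIVES = 12        # assumed number of data drives in the node
DRIVE_MBPS = 110   # assumed sustained throughput per drive, in MB/s
NIC_GBITS = 2 * 1  # 2x 1GbE NICs, per the minimum recommendations table

drive_throughput_mbps = DRIVES * DRIVE_MBPS       # MB/s the drives can deliver
network_throughput_mbps = NIC_GBITS * 1000 / 8    # rough MB/s the network can carry

print(f"Aggregate drive throughput:  ~{drive_throughput_mbps} MB/s")
print(f"Aggregate network bandwidth: ~{network_throughput_mbps:.0f} MB/s")

if drive_throughput_mbps > network_throughput_mbps:
    print("The network is the likely bottleneck; consider faster or bonded links.")
else:
    print("The drives are the likely bottleneck at this link speed.")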

When using Ceph as a backend for OpenStack volumes and images, we recommend a host bus adapter with SAS drives (10k-15k RPM) and enterprise-grade SSDs on the same controller for journals.
