Chapter 4. Minimum Recommendations


Ceph can run on inexpensive commodity hardware. Small production clusters and development clusters can run successfully with modest hardware.

Process     Criteria          Minimum Recommended
---------   ---------------   --------------------------------------
calamari    Processor         1x AMD64 and Intel 64 quad-core
            RAM               4 GB minimum per instance
            Disk Space        10 GB per instance
            Network           2x 1GB Ethernet NICs
ceph-osd    Processor         1x AMD64 and Intel 64
            RAM               2 GB of RAM per daemon
            Volume Storage    1x storage drive per daemon
            Journal           1x SSD partition per daemon (optional)
            Network           2x 1GB Ethernet NICs
ceph-mon    Processor         1x AMD64 and Intel 64
            RAM               1 GB per daemon
            Disk Space        10 GB per daemon
            Network           2x 1GB Ethernet NICs

Tip

If you are running an OSD with a single disk, create a partition for volume storage that is separate from the partition containing the operating system. Generally, Red Hat recommends separate disks for the operating system and the volume storage.
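As a rough planning aid, the per-daemon minimums listed above can be turned into a simple sizing calculation. The helper below is an illustrative sketch (the function name is hypothetical, not part of any Ceph tooling), assuming the documented minimums of 2 GB of RAM per OSD daemon and 1 GB per monitor daemon:

```python
# Hypothetical sizing helper based on the per-daemon minimums in the
# table above; these figures are the documented minimums, not tuning advice.
OSD_RAM_GB = 2      # 2 GB of RAM per ceph-osd daemon
MON_RAM_GB = 1      # 1 GB of RAM per ceph-mon daemon
MON_DISK_GB = 10    # 10 GB of disk space per ceph-mon daemon

def minimum_ram_gb(osd_daemons, mon_daemons=0):
    """Minimum RAM for a host running the given daemon counts."""
    return osd_daemons * OSD_RAM_GB + mon_daemons * MON_RAM_GB

# A host running 12 OSD daemons and 1 monitor needs at least 25 GB of RAM.
print(minimum_ram_gb(12, 1))  # 25
```

Production hosts should be provisioned well above these minimums; see the production cluster guidance below.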

4.1. Production Clusters

Production clusters for petabyte scale data storage may also use commodity hardware, but should have considerably more memory, processing power and data storage to account for heavy traffic loads.

4.1.1. Calamari

The administration server hardware requirements vary with the size of your cluster. A minimum recommended hardware configuration for a Calamari server includes at least 4 GB of RAM, a dual-core CPU on x86_64 architecture, and enough network throughput to handle communication with Ceph hosts. The hardware requirements scale linearly with the number of Ceph servers, so if you intend to run a fairly large cluster, ensure that you have enough RAM, processing power and network throughput for your administration node.

4.1.2. Monitors

The Ceph monitor is a data store for the health of the entire cluster, and contains the cluster log. Red Hat strongly recommends using at least three monitors to form a cluster quorum in production. Monitor nodes typically have fairly modest CPU and memory requirements. A 1 rack unit (1U) server with a low-cost CPU (such as a processor with 6 cores @ 1.7 GHz), 16 GB of RAM, and Gigabit Ethernet (GbE) networking should suffice in most cases. Because logs are stored on local disk(s) on the monitor node, it is important to make sure that sufficient disk space is provisioned. The monitor store should be placed on a solid-state drive (SSD), because the leveldb store can become I/O bound.
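The reason at least three monitors are recommended is that a quorum requires a strict majority of the configured monitors: three monitors tolerate one failure, while two tolerate none and so add no resilience over one. This is plain arithmetic, not Ceph code, but it makes the recommendation concrete:

```python
def monitors_for_quorum(total_monitors):
    """A quorum is a strict majority of the configured monitors."""
    return total_monitors // 2 + 1

def tolerated_failures(total_monitors):
    """Number of monitors that can fail while a quorum still exists."""
    return total_monitors - monitors_for_quorum(total_monitors)

for n in (1, 2, 3, 5):
    print(n, monitors_for_quorum(n), tolerated_failures(n))
# Three monitors need 2 for quorum and tolerate 1 failure;
# five monitors tolerate 2 failures. Even counts waste a monitor.
```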

4.1.3. OSDs

Ensure that your network interfaces, controllers, and drive throughput do not create bottlenecks; for example, avoid pairing fast drives with a network too slow to accommodate their combined throughput. SSDs are typically used for journals and fast pools. Where the use of SSDs is write intensive (for example, journals), make sure you select high-performance SSDs.
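The bottleneck check can be made concrete by comparing the aggregate sequential throughput of a host's drives against its network bandwidth. The figures in the example are illustrative assumptions for the sketch, not measured values:

```python
# Illustrative bottleneck check: aggregate drive throughput vs. network.
# The drive and NIC figures below are assumptions, not measurements.
def network_limited(drive_count, drive_mb_s, nic_count, nic_gbit_s):
    """True if the NICs cannot absorb the drives' combined throughput."""
    drive_total_mb_s = drive_count * drive_mb_s
    # 1 Gbit/s is roughly 125 MB/s of raw bandwidth, before protocol overhead.
    net_total_mb_s = nic_count * nic_gbit_s * 125
    return drive_total_mb_s > net_total_mb_s

# Twelve HDDs at ~150 MB/s (1800 MB/s total) saturate 2x 1 GbE (250 MB/s):
print(network_limited(12, 150, 2, 1))   # True
# The same drives behind 2x 10 GbE (2500 MB/s) are not network limited:
print(network_limited(12, 150, 2, 10))  # False
```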

When using Ceph as a backend for OpenStack volumes and images, we recommend a host bus adapter with SAS drives (10-15k RPM) and enterprise-grade SSDs on the same controller for journals.

© 2024 Red Hat, Inc.