Chapter 1. Red Hat Ceph Storage
Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines an enterprise-hardened version of the Ceph storage system, with a Ceph management platform, deployment utilities, and support services.
Red Hat Ceph Storage is designed for cloud infrastructure and web-scale object storage. Red Hat Ceph Storage clusters consist of the following types of nodes:
Ceph Monitor
Each Ceph Monitor node runs the ceph-mon daemon, which maintains a master copy of the storage cluster map. The storage cluster map includes the storage cluster topology. A client connecting to the Ceph storage cluster retrieves the current copy of the storage cluster map from the Ceph Monitor, which enables the client to read from and write data to the storage cluster.
The storage cluster can run with only one Ceph Monitor; however, to ensure high availability in a production storage cluster, Red Hat only supports deployments with at least three Ceph Monitor nodes. Red Hat recommends deploying a total of five Ceph Monitors for storage clusters exceeding 750 Ceph OSDs.
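To illustrate how a client bootstraps through the Monitors, the following minimal Python sketch uses the python3-rados bindings to connect with the settings in /etc/ceph/ceph.conf and ask the Monitors which of them are currently in quorum. The configuration path and client name are deployment-specific assumptions, not fixed values.

```python
import json
import rados

# Connect to the storage cluster. The conffile path and client name
# are assumptions; adjust them to match your deployment.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
cluster.connect()

# Ask the Ceph Monitors for the current quorum status.
ret, outbuf, outs = cluster.mon_command(
    json.dumps({'prefix': 'quorum_status', 'format': 'json'}), b'')
quorum = json.loads(outbuf)
print('Monitors in quorum:', quorum['quorum_names'])

cluster.shutdown()
```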
Ceph Manager
The Ceph Manager daemon, ceph-mgr, co-exists with the Ceph Monitor daemons running on Ceph Monitor nodes to provide additional services. The Ceph Manager provides an interface for other monitoring and management systems using Ceph Manager modules. Running the Ceph Manager daemons is a requirement for normal storage cluster operations.
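As a rough sketch of how those modules surface, the snippet below queries the cluster for the Ceph Manager modules that are enabled and for the service endpoints, such as a dashboard URL, that enabled modules expose. It assumes the same python3-rados bindings and configuration path as the previous example.

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed config path
cluster.connect()

# List the Ceph Manager modules that are currently enabled.
ret, outbuf, outs = cluster.mon_command(
    json.dumps({'prefix': 'mgr module ls', 'format': 'json'}), b'')
print('mgr modules:', json.loads(outbuf))

# List the endpoints (for example, the dashboard) exposed by enabled modules.
ret, outbuf, outs = cluster.mon_command(
    json.dumps({'prefix': 'mgr services', 'format': 'json'}), b'')
print('mgr services:', json.loads(outbuf))

cluster.shutdown()
```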
Ceph OSD
Each Ceph Object Storage Device (OSD) node runs the ceph-osd daemon, which interacts with logical disks attached to the node. The storage cluster stores data on these Ceph OSD nodes.
Ceph can run with very few OSD nodes, three by default, but production storage clusters realize better performance beginning at modest scales, for example 50 Ceph OSDs in a storage cluster. Ideally, a Ceph storage cluster has multiple OSD nodes, allowing failure domains to be isolated by configuring the CRUSH map accordingly.
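The following sketch, again assuming the python3-rados bindings and the default configuration path, reports overall cluster capacity and walks the CRUSH hierarchy of hosts, racks, and other buckets that defines those failure domains.

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed config path
cluster.connect()

# Overall capacity as reported by the Ceph OSDs.
stats = cluster.get_cluster_stats()
print('kB used:', stats['kb_used'], 'kB available:', stats['kb_avail'])

# Print the CRUSH hierarchy (root, racks, hosts, OSDs) that shapes
# failure domains for data placement.
ret, outbuf, outs = cluster.mon_command(
    json.dumps({'prefix': 'osd tree', 'format': 'json'}), b'')
for node in json.loads(outbuf)['nodes']:
    print(node['type'], node['name'])

cluster.shutdown()
```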
Ceph MDS
Each Ceph Metadata Server (MDS) node runs the ceph-mds daemon, which manages metadata related to files stored on the Ceph File System (CephFS). The Ceph MDS daemon also coordinates access to the shared storage cluster.
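As a minimal sketch of a CephFS client, the snippet below uses the python3-cephfs bindings to mount the file system and create a directory and file; every metadata operation here, such as mkdir and open, is served by the Ceph MDS. It assumes a CephFS file system already exists and that the default configuration path and client credentials are available.

```python
import cephfs

# Assumed configuration path; the client must have CephFS capabilities.
fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()

fs.mkdir('/demo', 0o755)                      # metadata handled by the MDS
fd = fs.open('/demo/hello.txt', 'w', 0o644)   # file creation via the MDS
fs.write(fd, b'hello from CephFS\n', 0)       # file data goes to the OSDs
fs.close(fd)

fs.unmount()
fs.shutdown()
```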
Ceph Object Gateway
Each Ceph Object Gateway node runs the ceph-radosgw daemon, which provides an object storage interface built on top of librados, giving applications a RESTful access point to the Ceph storage cluster. The Ceph Object Gateway supports two interfaces:
S3
Provides object storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API.
Swift
Provides object storage functionality with an interface that is compatible with a large subset of the OpenStack Swift API.
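Because the S3 interface is compatible with standard S3 clients, existing S3 tooling works against the gateway unchanged. The following Python sketch uses boto3 against a hypothetical gateway endpoint; the host name, port, bucket name, and credentials are placeholders for values from your own Ceph Object Gateway deployment and S3 user.

```python
import boto3

# Placeholder endpoint and credentials; substitute the gateway URL and
# the access keys of an S3 user created on your deployment.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

s3.create_bucket(Bucket='demo-bucket')
s3.put_object(Bucket='demo-bucket', Key='hello.txt',
              Body=b'stored through the S3-compatible interface')
print([b['Name'] for b in s3.list_buckets()['Buckets']])
```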
Additional Resources
- For details on the Ceph architecture, see the Red Hat Ceph Storage Architecture Guide.
- For the minimum hardware recommendations, see the Red Hat Ceph Storage Hardware Selection Guide.