
Chapter 11. Add a Monitor


Ceph monitors are lightweight processes that maintain a master copy of the cluster map. All Ceph clients contact a Ceph monitor and retrieve the current copy of the cluster map, which enables them to bind to pools and read and write data.

When you have a cluster up and running, you may add or remove monitors from the cluster at runtime. You can run a cluster with 1 monitor. We recommend at least 3 monitors for a production cluster. Ceph monitors use a variation of the Paxos protocol to establish consensus about maps and other critical information across the cluster. Due to the nature of Paxos, Ceph requires a majority of monitors running to establish a quorum (thus establishing consensus).

We recommend deploying an odd number of monitors, but it is not mandatory. An odd number of monitors is more resilient to failures than an even number. To maintain a quorum with 2 monitors, Ceph cannot tolerate any failures; with 3 monitors, one failure; with 4 monitors, one failure; with 5 monitors, two failures. This is why an odd number is advisable. In summary, Ceph needs a majority of monitors to be running and able to communicate with each other, but that majority can be achieved with a single monitor, or with 2 out of 2 monitors, 2 out of 3, 3 out of 4, and so on.

For an initial deployment of a multi-node Ceph cluster, we recommend deploying three monitors, increasing the count by two at a time if a valid need for more than three monitors exists.

Since monitors are lightweight, it is possible to run them on the same host as an OSD; however, we recommend running them on separate hosts, because fsync issues with the kernel may impair performance.

Note

A majority of monitors in your cluster must be able to reach each other in order to establish a quorum.

11.1. Host Configuration

When adding Ceph monitors to your cluster, deploy them on separate hosts. Running multiple Ceph monitors on the same host provides no additional high availability assurance if that host fails. Ideally, the host hardware should be uniform throughout your monitor cluster.

For details on the minimum recommendations for Ceph monitor hardware, see Hardware Recommendations.

For installation, see the Red Hat Ceph Installation Guide and be sure to address the pre-installation requirements.

Add your monitor host to a rack in your cluster, connect it to the network and ensure that it has network connectivity.
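For example, a quick sanity check is to verify that the new host can reach an existing monitor host over the network (node1 here is a hypothetical existing host; substitute one of your own):

ping -c 3 node1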

Important

You must install NTP, and you must open port 6789.
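For example, on a Red Hat Enterprise Linux 7 host using firewalld, commands along the following lines install and enable NTP and open the monitor port; adjust the package manager and firewall tooling to match your environment:

sudo yum install ntp
sudo systemctl enable ntpd
sudo systemctl start ntpd
sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
sudo firewall-cmd --reload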

11.2. ceph-deploy

From your admin node, in the directory where you keep your Ceph cluster configuration, install Red Hat Ceph Storage on the new monitor nodes.

ceph-deploy install --release <version> <ceph-node> [<ceph-node>]

For example, to install Ceph on two new monitor hosts node5 and node6, you would execute the following:

ceph-deploy install --release ice1.2.3 node5 node6

Once you have installed Ceph, you can create new monitors.

ceph-deploy mon create <ceph-node> [<ceph-node>]

For example, to add Ceph monitors on monitor hosts node5 and node6, you would execute the following:

ceph-deploy mon create node5 node6

Check to see that your monitors have joined the quorum.

ceph quorum_status --format json-pretty
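The monitors currently in quorum are listed under quorum_names in the JSON output. As a quicker check, ceph mon stat prints a one-line summary that includes the quorum membership:

ceph mon stat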

To ensure the cluster identifies the monitors on start/restart, add the monitor hostname and IP address to your Ceph configuration file.

[mon.node5]
host = node5
addr = 10.0.0.1:6789

[mon.node6]
host = node6
addr = 10.0.0.2:6789

Then, push a new copy of the Ceph configuration file to your Ceph nodes.
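For example, from the admin node, ceph-deploy can push the updated file to each node; the --overwrite-conf flag is required to replace a configuration file that already exists on the target hosts:

ceph-deploy --overwrite-conf config push <ceph-node> [<ceph-node> ...]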

Finally, connect your monitors to Calamari.

ceph-deploy calamari connect <ceph-node> [<ceph-node> ...]

For example, using the example hosts node5 and node6 from above, you would execute:

ceph-deploy calamari connect node5 node6

11.3. Manual

This section is intended for users who wish to use a third party deployment tool (e.g., Puppet, Chef, Juju).

This procedure creates a ceph-mon data directory, retrieves the monitor map and monitor keyring, and adds a ceph-mon daemon to your cluster. If this results in only two monitor daemons, you may add more monitors by repeating this procedure until you have a sufficient number of ceph-mon daemons to achieve a quorum.

At this point you should define your monitor’s id. Traditionally, monitors have been named with single letters (a, b, c, …​), but you are free to define the id as you see fit. For the purpose of this document, take into account that {mon-id} should be the id you chose, without the mon. prefix (i.e., {mon-id} should be the a in mon.a).

  1. Create the default directory on the machine that will host your new monitor.

    ssh {new-mon-host}
    sudo mkdir /var/lib/ceph/mon/ceph-{mon-id}
  2. Create a temporary directory {tmp} to keep the files needed during this process. This directory should be different from the monitor’s default directory created in the previous step, and can be removed after all the steps are executed.

    mkdir {tmp}
  3. Retrieve the keyring for your monitors, where {tmp} is the path to the retrieved keyring, and {key-filename} is the name of the file containing the retrieved monitor key.

    ceph auth get mon. -o {tmp}/{key-filename}
  4. Retrieve the monitor map, where {tmp} is the path to the retrieved monitor map, and {map-filename} is the name of the file containing the retrieved monitor map.

    ceph mon getmap -o {tmp}/{map-filename}
  5. Prepare the monitor’s data directory created in the first step. You must specify the path to the monitor map so that you can retrieve the information about a quorum of monitors and their fsid. You must also specify a path to the monitor keyring:

    sudo ceph-mon -i {mon-id} --mkfs --monmap {tmp}/{map-filename} --keyring {tmp}/{key-filename}
  6. Add the new monitor to the list of monitors for your cluster (runtime). This enables other nodes to use this monitor during their initial startup.

    ceph mon add <mon-id> <ip>[:<port>]
  7. Start the new monitor and it will automatically join the cluster. The daemon needs to know which address to bind to, either via --public-addr {ip:port} or by setting mon addr in the appropriate section of ceph.conf. For example:

    ceph-mon -i {mon-id} --public-addr {ip:port}
  8. To ensure the cluster identifies the monitor on start/restart, add the monitor hostname and IP address to your Ceph configuration file.

    [mon.{mon-id}]
    host = {mon-id}
    addr = {ip:port}

    Then, push a new copy of the Ceph configuration file to your Ceph nodes (see the example after this procedure).

  9. Finally, from the admin node in the directory where you keep your cluster’s Ceph configuration, connect your monitor to Calamari.
    ceph-deploy calamari connect <ceph-node> [<ceph-node> ...]
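As noted in step 8, you also need to distribute the updated Ceph configuration file to your nodes. If you are not using ceph-deploy, any file distribution mechanism works; for example, a minimal sketch using scp, assuming passwordless SSH and a hypothetical host {ceph-node}:

scp /etc/ceph/ceph.conf {ceph-node}:/etc/ceph/ceph.conf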