
Chapter 15. Removing OSDs (Manual)


When you want to reduce the size of a cluster or replace hardware, you may remove an OSD at runtime. With Ceph, an OSD is generally one ceph-osd daemon for one storage drive within a host machine. If your host has multiple storage drives, you may need to remove one ceph-osd daemon for each drive. Generally, it’s a good idea to check the capacity of your cluster to see if you are reaching the upper end of its capacity. Ensure that when you remove an OSD, your cluster is not at its near full ratio.
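
For example, you can check current utilization with ceph df, which reports the cluster’s total, used, and available capacity as well as per-pool usage:

ceph df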

Warning

Do not let your cluster reach its full ratio when removing an OSD. Removing OSDs could cause the cluster to reach or exceed its full ratio.

15.1. Take the OSD out of the Cluster

Before you remove an OSD, it is usually up and in. You need to take it out of the cluster so that Ceph can begin rebalancing and copying its data to other OSDs.

ceph osd out {osd-num}
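
For example, to take the OSD with ID 1 out of the cluster:

ceph osd out 1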

15.2. Observe the Data Migration

Once you have taken your OSD out of the cluster, Ceph will begin rebalancing the cluster by migrating placement groups out of the OSD you removed. You can observe this process with the ceph CLI tool.

ceph -w

You should see the placement group states change from active+clean to active, some degraded objects, and finally active+clean when migration completes. (Control-c to exit.)
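
If you prefer a point-in-time summary instead of the streaming output, ceph -s (also available as ceph status) reports the current cluster health and placement group states:

ceph -s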

15.3. Stopping the OSD

After you take an OSD out of the cluster, it may still be running. That is, the OSD may be up and out. You must stop your OSD before you remove it from the configuration.

ssh {osd-host}
sudo /etc/init.d/ceph stop osd.{osd-num}

Once you stop your OSD, it is down.
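
If the OSD host manages its Ceph daemons with systemd rather than the sysvinit script shown above (an assumption about your deployment), the equivalent is to stop the per-OSD unit:

sudo systemctl stop ceph-osd@{osd-num}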

15.4. Removing the OSD

This procedure removes an OSD from a cluster map, removes its authentication key, removes the OSD from the OSD map, and removes the OSD from the ceph.conf file. If your host has multiple drives, you may need to remove an OSD for each drive by repeating this procedure. A consolidated command example follows the steps below.

  1. Remove the OSD from the CRUSH map so that it no longer receives data. You may also decompile the CRUSH map, remove the OSD from the device list, remove the device as an item in the host bucket or remove the host bucket (if it’s in the CRUSH map and you intend to remove the host), recompile the map and set it. See the Storage Strategies guide for details.

    ceph osd crush remove {name}
  2. Remove the OSD authentication key.

    ceph auth del osd.{osd-num}

    The ceph portion of the ceph-{osd-num} keyring path is the cluster name ($cluster-$id). If your cluster name differs from ceph, use your cluster name instead.

  3. Remove the OSD.

    ceph osd rm {osd-num}
    #for example
    ceph osd rm 1
  4. Navigate to the host where you keep the master copy of the cluster’s ceph.conf file.

    ssh {admin-host}
    cd /etc/ceph
    vim ceph.conf
  5. Remove the OSD entry from your ceph.conf file (if it exists):

    [osd.1]
    host = {hostname}
  6. From the host where you keep the master copy of the cluster’s ceph.conf file, copy the updated ceph.conf file to the /etc/ceph directory of other hosts in your cluster.
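
A minimal end-to-end sketch of steps 1 through 3, assuming the OSD being removed is osd.1 and the cluster uses the default name ceph:

ceph osd crush remove osd.1
ceph auth del osd.1
ceph osd rm 1

After these commands, update the ceph.conf file on the admin host and distribute it to the other hosts as described in steps 4 through 6.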