Chapter 15. Removing OSDs (Manual)
When you want to reduce the size of a cluster or replace hardware, you may remove an OSD at runtime. With Ceph, an OSD is generally one ceph-osd daemon for one storage drive within a host machine. If your host has multiple storage drives, you may need to remove one ceph-osd daemon per drive. Generally, it's a good idea to check your cluster's capacity to see whether you are reaching its upper limit. Ensure that when you remove an OSD, your cluster is not at its near full ratio.
Do not let your cluster reach its full ratio when removing an OSD. Removing OSDs could cause the cluster to reach or exceed its full ratio.
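Before starting, you can check how close the cluster is to its full ratios. The ceph df command reports overall usage; depending on your Ceph release, the full and near full ratios appear in the output of ceph osd dump or ceph pg dump:

ceph df
ceph osd dump | grep -i ratio

If the raw usage reported by ceph df is approaching the near full ratio, add capacity before removing OSDs.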
15.1. Take the OSD out of the Cluster
Before you remove an OSD, it is usually up and in. You need to take it out of the cluster so that Ceph can begin rebalancing and copying its data to other OSDs.
ceph osd out {osd-num}
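For example, to take the OSD with ID 1 out of the cluster:

ceph osd out 1

Note that taking an OSD out does not stop the daemon; the OSD remains up but no longer receives data, which lets Ceph migrate its placement groups elsewhere.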
15.2. Observe the Data Migration
Once you have taken your OSD out of the cluster, Ceph will begin rebalancing the cluster by migrating placement groups off the OSD you took out. You can observe this process with the ceph CLI tool.
ceph -w
You should see the placement group states change from active+clean to active with some degraded objects while data migrates, and finally back to active+clean when migration completes. (Press Control-C to exit.)
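If you prefer a point-in-time view instead of a running stream, the ceph -s (cluster status) and ceph health commands summarize the placement group states and any degraded object counts:

ceph -s
ceph health

Wait until the cluster reports HEALTH_OK, or at least until no placement groups remain degraded, before proceeding to stop the OSD.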
15.3. Stopping the OSD
After you take an OSD out of the cluster, it may still be running. That is, the OSD may be up and out. You must stop your OSD before you remove it from the configuration.
ssh {osd-host}
sudo /etc/init.d/ceph stop osd.{osd-num}
Once you stop your OSD, it is down.
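The /etc/init.d/ceph script applies to sysvinit-managed hosts. On hosts that manage Ceph daemons with systemd, the equivalent is typically the ceph-osd@ unit, for example:

sudo systemctl stop ceph-osd@{osd-num}

Either way, you can confirm the result with ceph osd tree, which lists each OSD's up/down state.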
15.4. Removing the OSD
This procedure removes an OSD from a cluster map, removes its authentication key, removes the OSD from the OSD map, and removes the OSD from the ceph.conf file. If your host has multiple drives, you may need to remove an OSD for each drive by repeating this procedure.
Remove the OSD from the CRUSH map so that it no longer receives data. Alternatively, you can decompile the CRUSH map, remove the OSD from the device list, remove the device as an item in the host bucket (or remove the host bucket entirely, if it is in the CRUSH map and you intend to remove the host), then recompile the map and set it. See the Storage Strategies guide for details.
ceph osd crush remove {name}

Remove the OSD authentication key.

ceph auth del osd.{osd-num}

The value of ceph for ceph-{osd-num} in the path is the $cluster-$id. If your cluster name differs from ceph, use your cluster name instead.

Remove the OSD.

ceph osd rm {osd-num}
# for example
ceph osd rm 1

Navigate to the host where you keep the master copy of the cluster's ceph.conf file.

ssh {admin-host}
cd /etc/ceph
vim ceph.conf

Remove the OSD entry from your ceph.conf file (if it exists).

[osd.1]
host = {hostname}

From the host where you keep the master copy of the cluster's ceph.conf file, copy the updated ceph.conf file to the /etc/ceph directory of other hosts in your cluster.
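Putting the sections of this chapter together, removing osd.1 end to end might look like the following sketch; {osd-host} and {admin-host} are placeholders for your own hostnames, and the stop command assumes a sysvinit-managed host:

ceph osd out 1
ceph -w                                          # wait until placement groups return to active+clean
ssh {osd-host} sudo /etc/init.d/ceph stop osd.1
ceph osd crush remove osd.1
ceph auth del osd.1
ceph osd rm 1
# then remove any [osd.1] entry from ceph.conf on {admin-host}
# and copy the updated file to the other hosts

Run the steps in this order: taking the OSD out first lets Ceph migrate its data while the daemon is still running, and the CRUSH, auth, and OSD map removals only happen after the daemon is down.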