Chapter 15. Removing OSDs (Manual)
When you want to reduce the size of a cluster or replace hardware, you may remove an OSD at runtime. With Ceph, an OSD is generally one ceph-osd daemon for one storage drive within a host machine. If your host has multiple storage drives, you may need to remove one ceph-osd daemon for each drive. Generally, it's a good idea to check the capacity of your cluster to see if you are reaching the upper end of its capacity. Ensure that when you remove an OSD, your cluster is not at its near full ratio.
Do not let your cluster reach its full ratio when removing an OSD. Removing OSDs could cause the cluster to reach or exceed its full ratio.
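Before taking an OSD out, it helps to check how close the cluster is to its nearfull and full thresholds. The following is a hedged sketch: in practice the used and total figures would come from `ceph df`, but here they are hard-coded sample values, and the 0.85/0.95 thresholds are the Ceph defaults for the nearfull and full ratios.

```shell
# Capacity check sketch before removing an OSD.
# used_kb and total_kb are illustrative sample values; in a real cluster
# they would be read from `ceph df` output.
used_kb=7340032      # sample: space currently used
total_kb=10485760    # sample: total cluster capacity
nearfull_ratio=0.85  # Ceph default nearfull threshold
full_ratio=0.95      # Ceph default full threshold

ratio=$(awk -v u="$used_kb" -v t="$total_kb" 'BEGIN { printf "%.2f", u / t }')
echo "cluster usage ratio: $ratio"

if awk -v r="$ratio" -v n="$nearfull_ratio" 'BEGIN { exit !(r >= n) }'; then
  echo "WARNING: at or above the nearfull ratio; do not remove OSDs"
else
  echo "OK: below the nearfull ratio"
fi
```

Removing an OSD shrinks total capacity while used data stays the same, so recompute the ratio with the reduced total before proceeding.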
15.1. Take the OSD out of the Cluster
Before you remove an OSD, it is usually up and in. You need to take it out of the cluster so that Ceph can begin rebalancing and copying its data to other OSDs.
ceph osd out {osd-num}
15.2. Observe the Data Migration
Once you have taken your OSD out of the cluster, Ceph will begin rebalancing the cluster by migrating placement groups out of the OSD you removed. You can observe this process with the ceph CLI tool.
ceph -w
You should see the placement group states change from active+clean to active, some degraded objects, and finally active+clean when migration completes. (Press Control-C to exit.)
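One way to tell that migration has finished without watching the stream interactively is to check whether every placement group is active+clean. The sketch below parses a captured summary line in the style of `ceph pg stat`; the sample line and its positional layout are illustrative assumptions, not guaranteed output format.

```shell
# Sketch: decide whether rebalancing is done from a `ceph pg stat`-style
# summary line. The line below is a hard-coded illustrative sample.
pg_stat="v842: 320 pgs: 320 active+clean; 1024 MB data"

# Positional parse of the simplified sample layout:
# field 2 = total pgs, field 4 = pgs that are active+clean.
pgs_total=$(echo "$pg_stat" | awk '{ print $2 }')
pgs_clean=$(echo "$pg_stat" | awk '{ print $4 }')

if [ "$pgs_total" = "$pgs_clean" ]; then
  echo "migration complete: all $pgs_total pgs active+clean"
else
  echo "still rebalancing: $pgs_clean of $pgs_total pgs active+clean"
fi
```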
15.3. Stopping the OSD
After you take an OSD out of the cluster, it may still be running. That is, the OSD may be up and out. You must stop your OSD before you remove it from the configuration.
ssh {osd-host}
sudo /etc/init.d/ceph stop osd.{osd-num}
Once you stop your OSD, it is down.
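You can confirm the daemon now reports down by checking the OSD map with `ceph osd tree`. The sketch below works on a captured output line; the sample line and its column layout are illustrative assumptions.

```shell
# Sketch: confirm an OSD reports "down" after stopping it.
# In practice this line would come from `ceph osd tree`; here it is a
# hard-coded sample. Column 4 holds the up/down state in this sample layout.
osd_line="1    0.09769    osd.1    down    1.00000"

state=$(echo "$osd_line" | awk '{ print $4 }')
if [ "$state" = "down" ]; then
  echo "osd.1 is down; safe to continue with removal"
else
  echo "osd.1 is still $state; stop it before removing"
fi
```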
15.4. Removing the OSD
This procedure removes an OSD from a cluster map, removes its authentication key, removes the OSD from the OSD map, and removes the OSD from the ceph.conf file. If your host has multiple drives, you may need to remove an OSD for each drive by repeating this procedure.
Remove the OSD from the CRUSH map so that it no longer receives data. You may also decompile the CRUSH map, remove the OSD from the device list, remove the device as an item in the host bucket, or remove the host bucket (if it is in the CRUSH map and you intend to remove the host), then recompile the map and set it. See the Storage Strategies guide for details.
ceph osd crush remove {name}
Remove the OSD authentication key.
ceph auth del osd.{osd-num}
The value of ceph for ceph-{osd-num} in the path is the $cluster-$id. If your cluster name differs from ceph, use your cluster name instead.
Remove the OSD.
ceph osd rm {osd-num}
# for example: ceph osd rm 1
Navigate to the host where you keep the master copy of the cluster's ceph.conf file.
ssh {admin-host}
cd /etc/ceph
vim ceph.conf
Remove the OSD entry from your ceph.conf file (if it exists):
[osd.1]
host = {hostname}
From the host where you keep the master copy of the cluster's ceph.conf file, copy the updated ceph.conf file to the /etc/ceph directory of other hosts in your cluster.
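The removal commands above (excluding the ceph.conf edit) can be collected into a small script. This is a hedged dry-run sketch: osd_num=1 is an illustrative value, and the script only echoes each command so the sequence can be reviewed; removing the echo would actually run them.

```shell
# Dry-run sketch of the OSD removal sequence from this section.
# osd_num is illustrative; each command is printed rather than executed
# so the sequence can be reviewed first.
osd_num=1

for cmd in \
  "ceph osd crush remove osd.$osd_num" \
  "ceph auth del osd.$osd_num" \
  "ceph osd rm $osd_num"
do
  echo "$cmd"
done
```

Keeping the CRUSH removal first matters: once the OSD is out of the CRUSH map it no longer receives data, so the later map and key removals do not race with rebalancing decisions.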