Chapter 7. Ceph OSDs in CRUSH
Once you have a CRUSH hierarchy for your OSDs, you can add OSDs to the CRUSH hierarchy. You can also move or remove OSDs from an existing hierarchy. The Ceph CLI commands for managing OSDs in CRUSH take the following values:
id
- Description: The numeric ID of the OSD.
- Type: Integer
- Required: Yes
- Example: 0

name
- Description: The full name of the OSD.
- Type: String
- Required: Yes
- Example: osd.0

weight
- Description: The CRUSH weight for the OSD.
- Type: Double
- Required: Yes
- Example: 2.0

root
- Description: The name of the root bucket of the hierarchy/tree in which the OSD resides.
- Type: Key/value pair
- Required: Yes
- Example: root=default, root=replicated_ruleset, etc.
bucket-type
- Description: One or more name/value pairs, where the name is the bucket type and the value is the bucket's name. You may specify a CRUSH location for an OSD in the CRUSH hierarchy.
- Type: Key/value pairs
- Required: No
- Example: datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1
7.1. Adding an OSD to CRUSH
Adding an OSD to a CRUSH hierarchy is the final step before you start an OSD (rendering it up and in) and Ceph assigns placement groups to the OSD. You must prepare an OSD before you add it to the CRUSH hierarchy. Deployment tools such as ceph-deploy may perform this step for you. Refer to Adding/Removing OSDs for additional details.
The CRUSH hierarchy is notional, so the ceph osd crush add command allows you to add OSDs to the CRUSH hierarchy wherever you wish. However, the location you specify should reflect the OSD's actual location. If you specify at least one bucket, the command will place the OSD into the most specific bucket you specify, and it will move that bucket underneath any other buckets you specify.
To add an OSD to a CRUSH hierarchy, execute the following:
ceph osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...]
If you specify only the root bucket, the command will attach the OSD directly to the root, but CRUSH rules expect OSDs to be inside of hosts or chassis, and hosts/chassis should be inside of other buckets reflecting your cluster topology.
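To make the difference concrete, the following sketch contrasts the two forms. It assumes a running cluster with a prepared OSD; the OSD ID and bucket names are hypothetical:

```shell
# Attaches osd.1 directly under the root. This is legal, but most CRUSH
# rules expect OSDs to sit inside host or chassis buckets, so rules may
# fail to select an OSD placed here.
ceph osd crush add osd.1 1.0 root=default

# Preferred: place the OSD inside a host bucket that reflects the
# cluster topology (host name is hypothetical).
ceph osd crush add osd.1 1.0 root=default host=node1
```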
The following example adds osd.0 to the hierarchy:
ceph osd crush add osd.0 1.0 root=default datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1
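After adding the OSD, you can confirm that it landed under the expected buckets with the standard ceph osd tree status command:

```shell
# Prints the CRUSH hierarchy as a tree; check that osd.0 appears under
# host foo-bar-1 with the weight you specified.
ceph osd tree
```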
You may also use ceph osd crush set or ceph osd crush create-or-move to add an OSD to the CRUSH hierarchy.
7.2. Moving an OSD within a CRUSH Hierarchy
If your deployment tool (e.g., ceph-deploy) added your OSD to the CRUSH map at a sub-optimal CRUSH location, or if your cluster topology changes, you may move an OSD in the CRUSH hierarchy to reflect its actual location.
Moving an OSD in the CRUSH hierarchy means that Ceph will recompute which placement groups get assigned to the OSD, potentially resulting in significant redistribution of data.
To move an OSD within the CRUSH hierarchy, execute the following:
ceph osd crush set {id-or-name} {weight} root={root-name} [{bucket-type}={bucket-name} ...]
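As a worked sketch, suppose osd.0 was physically relocated to a different rack and host under the same root. The weight and bucket names below are hypothetical, and note that Ceph will rebalance data after the move:

```shell
# Move osd.0 to rack bar2 / host foo-bar-2, keeping its CRUSH weight of 2.0.
# Placement groups mapped to osd.0 may be remapped, triggering data movement.
ceph osd crush set osd.0 2.0 root=default datacenter=dc1 room=room1 row=foo rack=bar2 host=foo-bar-2
```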
You may also use ceph osd crush create-or-move to move an OSD within the CRUSH hierarchy.
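The create-or-move form is convenient when you are not sure whether the OSD already has a CRUSH entry. A minimal sketch (the host name is hypothetical; the weight argument is applied only if the entry is newly created, not when an existing entry is moved):

```shell
# Create a CRUSH entry for osd.0 at this location if none exists,
# otherwise move the existing entry there.
ceph osd crush create-or-move osd.0 1.0 root=default host=new-host
```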
7.3. Removing an OSD from a CRUSH Hierarchy
Removing an OSD from a CRUSH hierarchy is the first step when you want to remove an OSD from your cluster. When you remove the OSD from the CRUSH map, CRUSH recomputes which OSDs get the placement groups, and data rebalances accordingly. Refer to Adding/Removing OSDs for additional details.
To remove an OSD from the CRUSH map of a running cluster, execute the following:
ceph osd crush remove {name}
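For example, to remove osd.0 (assuming it is the OSD you are retiring) and confirm it is gone:

```shell
# Remove osd.0 from the CRUSH map; CRUSH remaps its placement groups
# to other OSDs and the cluster rebalances.
ceph osd crush remove osd.0

# Verify that osd.0 no longer appears in the hierarchy.
ceph osd tree
```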