Chapter 8. The ceph-volume utility
As a storage administrator, you can prepare, list, create, activate, deactivate, batch, trigger, zap, and migrate Ceph OSDs using the ceph-volume utility. The ceph-volume utility is a single-purpose command-line tool to deploy logical volumes as OSDs. It uses a plugin-type framework to deploy OSDs with different device technologies. The ceph-volume utility follows a workflow similar to that of the ceph-disk utility for deploying OSDs, with a predictable and robust way of preparing, activating, and starting OSDs. Currently, the ceph-volume utility supports only the lvm plugin, with the plan to support other technologies in the future.
The ceph-disk command is deprecated.
8.1. Ceph volume lvm plugin
By making use of LVM tags, the lvm subcommand is able to store, and later re-discover by querying, the devices associated with OSDs so that they can be activated. This includes support for LVM-based technologies like dm-cache as well.
When using ceph-volume, the use of dm-cache is transparent; dm-cache is treated like a logical volume. The performance gains and losses when using dm-cache depend on the specific workload. Generally, random and sequential reads see an increase in performance at smaller block sizes, while random and sequential writes see a decrease in performance at larger block sizes.
To use the LVM plugin, add lvm as a subcommand to the ceph-volume command within the cephadm shell:
[ceph: root@host01 /]# ceph-volume lvm
The following are the lvm subcommands:
- prepare - Format an LVM device and associate it with an OSD.
- activate - Discover and mount the LVM device associated with an OSD ID and start the Ceph OSD.
- list - List logical volumes and devices associated with Ceph.
- batch - Automatically size devices for multi-OSD provisioning with minimal interaction.
- deactivate - Deactivate OSDs.
- create - Create a new OSD from an LVM device.
- trigger - A systemd helper to activate an OSD.
- zap - Remove all data and file systems from a logical volume or partition.
- migrate - Migrate BlueFS data from one LVM device to another.
- new-wal - Allocate a new WAL volume for the OSD at the specified logical volume.
- new-db - Allocate a new DB volume for the OSD at the specified logical volume.
The create subcommand combines the prepare and activate subcommands into one subcommand.
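As a rough sketch of that equivalence (the volume group and logical volume names are the placeholders used elsewhere in this chapter):
Example
[ceph: root@host01 /]# ceph-volume lvm create --bluestore --data example_vg/data_lv
is approximately the same as running:
[ceph: root@host01 /]# ceph-volume lvm prepare --bluestore --data example_vg/data_lv
[ceph: root@host01 /]# ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
where OSD_ID and OSD_FSID are taken from the ceph-volume lvm list output for the newly prepared volume.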
Additional Resources
- See the create subcommand section for more details.
8.2. Why does ceph-volume replace ceph-disk?
Up to Red Hat Ceph Storage 4, the ceph-disk utility was used to prepare, activate, and create OSDs. Starting with Red Hat Ceph Storage 4, ceph-disk is replaced by the ceph-volume utility, which aims to be a single-purpose command-line tool to deploy logical volumes as OSDs, while maintaining a similar API to ceph-disk when preparing, activating, and creating OSDs.
How does ceph-volume work?
The ceph-volume utility is a modular tool that currently supports two ways of provisioning hardware devices: legacy ceph-disk devices and LVM (Logical Volume Manager) devices. The ceph-volume lvm command uses LVM tags to store information about devices specific to Ceph and their relationship with OSDs. It uses these tags to later re-discover and query devices associated with OSDs so that it can activate them. It supports technologies based on LVM and dm-cache as well.
The ceph-volume utility uses dm-cache transparently and treats it as a logical volume. The performance gains and losses when using dm-cache depend on the specific workload you are handling. Generally, the performance of random and sequential read operations increases at smaller block sizes, while the performance of random and sequential write operations decreases at larger block sizes. Using ceph-volume does not introduce any significant performance penalties.
The ceph-disk utility is deprecated.
The ceph-volume simple command can handle legacy ceph-disk devices, if these devices are still in use.
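For example, a hedged sketch of handling a legacy ceph-disk OSD with the simple subcommand; the device path is a placeholder, and you should verify the exact options against the ceph-volume simple help output for your release:
Example
[root@host01 ~]# ceph-volume simple scan /dev/sdb1
[root@host01 ~]# ceph-volume simple activate OSD_ID OSD_FSID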
How does ceph-disk work?
The ceph-disk utility was required to support many different types of init systems, such as upstart or sysvinit, while being able to discover devices. For this reason, ceph-disk concentrates only on GUID Partition Table (GPT) partitions, specifically on GPT GUIDs that label devices in a unique way to answer questions like the following:
- Is this device a journal?
- Is this device an encrypted data partition?
- Was the device left partially prepared?
To answer these questions, ceph-disk uses UDEV rules to match the GUIDs.
What are the disadvantages of using ceph-disk?
Using UDEV rules to call ceph-disk can lead to a back-and-forth between the ceph-disk systemd unit and the ceph-disk executable. The process is very unreliable and time consuming and can cause OSDs to not come up at all during the boot process of a node. Moreover, it is hard to debug, or even replicate, these problems given the asynchronous behavior of UDEV.
Because ceph-disk works with GPT partitions exclusively, it cannot support other technologies, such as Logical Volume Manager (LVM) volumes or similar device mapper devices.
To ensure the GPT partitions work correctly with the device discovery workflow, ceph-disk requires a large number of special flags to be used. In addition, these partitions require devices to be exclusively owned by Ceph.
8.3. Preparing Ceph OSDs using ceph-volume
The prepare subcommand prepares an OSD back-end object store and consumes logical volumes (LV) for both the OSD data and journal. It does not modify the logical volumes, except for adding some extra metadata tags using LVM. These tags make volumes easier to discover, and they also identify the volumes as part of the Ceph Storage Cluster and the roles of those volumes in the storage cluster.
The BlueStore OSD backend supports the following configurations:
- A block device, a block.wal device, and a block.db device
- A block device and a block.wal device
- A block device and a block.db device
- A single block device
The prepare subcommand accepts a whole device or partition, or a logical volume for block.
Prerequisites
- Root-level access to the OSD nodes.
- Optionally, create logical volumes. If you provide a path to a physical device, the subcommand turns the device into a logical volume. This approach is simpler, but you cannot configure or change the way the logical volume is created.
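If you choose to create the logical volumes yourself, the following is a minimal sketch using standard LVM commands; the device path, volume group name, and logical volume name are placeholders:
Example
[root@host01 ~]# pvcreate /dev/sdc
[root@host01 ~]# vgcreate example_vg /dev/sdc
[root@host01 ~]# lvcreate -l 100%FREE -n data_lv example_vg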
Procedure
Extract the Ceph keyring:
Syntax
ceph auth get client.ID -o ceph.client.ID.keyring
Example
[ceph: root@host01 /]# ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
Prepare the LVM volumes:
Syntax
ceph-volume lvm prepare --bluestore --data VOLUME_GROUP/LOGICAL_VOLUME
Example
[ceph: root@host01 /]# ceph-volume lvm prepare --bluestore --data example_vg/data_lv
Optionally, if you want to use a separate device for RocksDB, specify the --block.db and --block.wal options:
Syntax
ceph-volume lvm prepare --bluestore --block.db BLOCK_DB_DEVICE --block.wal BLOCK_WAL_DEVICE --data DATA_DEVICE
Example
[ceph: root@host01 /]# ceph-volume lvm prepare --bluestore --block.db /dev/sda --block.wal /dev/sdb --data /dev/sdc
Optionally, to encrypt data, use the --dmcrypt flag:
Syntax
ceph-volume lvm prepare --bluestore --dmcrypt --data VOLUME_GROUP/LOGICAL_VOLUME
Example
[ceph: root@host01 /]# ceph-volume lvm prepare --bluestore --dmcrypt --data example_vg/data_lv
Additional Resources
- See the Activating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details.
- See the Creating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details.
8.4. Listing devices using ceph-volume
You can use the ceph-volume lvm list subcommand to list logical volumes and devices associated with a Ceph cluster, as long as they contain enough metadata to allow for that discovery. The output is grouped by the OSD ID associated with the devices. For logical volumes, the devices key is populated with the physical devices associated with the logical volume.
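If you need to consume the listing from a script, the lvm list subcommand can also emit JSON output; this is a sketch that assumes the --format option available in recent ceph-volume releases:
Example
[ceph: root@host01 /]# ceph-volume lvm list --format json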
In some cases, the output of the ceph -s command shows the following error message:
1 devices have fault light turned on
In such cases, you can list the devices with the ceph device ls-lights command, which gives details about the lights on the devices. Based on this information, you can turn off the lights on the devices.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the Ceph OSD node.
Procedure
List the devices in the Ceph cluster:
Example
[ceph: root@host01 /]# ceph-volume lvm list

====== osd.6 =======

  [block]       /dev/ceph-83909f70-95e9-4273-880e-5851612cbe53/osd-block-7ce687d9-07e7-4f8f-a34e-d1b0efb89920

      block device              /dev/ceph-83909f70-95e9-4273-880e-5851612cbe53/osd-block-7ce687d9-07e7-4f8f-a34e-d1b0efb89920
      block uuid                4d7gzX-Nzxp-UUG0-bNxQ-Jacr-l0mP-IPD8cX
      cephx lockbox secret
      cluster fsid              1ca9f6a8-d036-11ec-8263-fa163ee967ad
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  7ce687d9-07e7-4f8f-a34e-d1b0efb89920
      osd id                    6
      osdspec affinity          all-available-devices
      type                      block
      vdo                       0
      devices                   /dev/vdc
Optional: List the devices in the storage cluster with the lights:
Example
[ceph: root@host01 /]# ceph device ls-lights
{
    "fault": [
        "SEAGATE_ST12000NM002G_ZL2KTGCK0000C149"
    ],
    "ident": []
}
Optional: Turn off the lights on the device:
Syntax
ceph device light off DEVICE_NAME FAULT/IDENT --force
Example
[ceph: root@host01 /]# ceph device light off SEAGATE_ST12000NM002G_ZL2KTGCK0000C149 fault --force
8.5. Activating Ceph OSDs using ceph-volume
The activation process enables a systemd unit at boot time, which allows the correct OSD identifier and its UUID to be enabled and mounted.
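On non-containerized deployments, this is a sketch of the unit naming, assuming the conventional ceph-volume@ systemd template; substitute the OSD ID and FSID of your OSD:
Example
[root@host01 ~]# systemctl enable ceph-volume@lvm-OSD_ID-OSD_FSID.service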
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the Ceph OSD node.
- Ceph OSDs prepared by the ceph-volume utility.
Procedure
Get the OSD ID and OSD FSID from an OSD node:
[ceph: root@host01 /]# ceph-volume lvm list
Activate the OSD:
Syntax
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[ceph: root@host01 /]# ceph-volume lvm activate --bluestore 10 7ce687d9-07e7-4f8f-a34e-d1b0efb89920
To activate all OSDs that are prepared for activation, use the --all option:
Example
[ceph: root@host01 /]# ceph-volume lvm activate --all
Optionally, you can use the trigger subcommand. This command is not meant to be used directly; it is used by systemd to proxy input to ceph-volume lvm activate. It parses the metadata coming from systemd and startup, detecting the UUID and ID associated with an OSD.
Syntax
ceph-volume lvm trigger SYSTEMD_DATA
Here, SYSTEMD_DATA is in the OSD_ID-OSD_FSID format.
Example
[ceph: root@host01 /]# ceph-volume lvm trigger 10-7ce687d9-07e7-4f8f-a34e-d1b0efb89920
Additional Resources
- See the Preparing Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details.
- See the Creating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details.
8.6. Deactivating Ceph OSDs using ceph-volume
You can deactivate Ceph OSDs using the ceph-volume lvm deactivate subcommand. This subcommand unmounts the OSD's tmpfs directory and closes any encrypted devices so that the logical volume is no longer in use; it does not remove the volume group or the logical volume.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the Ceph OSD node.
- The Ceph OSDs are activated using the ceph-volume utility.
Procedure
Get the OSD ID from the OSD node:
[ceph: root@host01 /]# ceph-volume lvm list
Deactivate the OSD:
Syntax
ceph-volume lvm deactivate OSD_ID
Example
[ceph: root@host01 /]# ceph-volume lvm deactivate 16
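A deactivated OSD can be brought back with the activate subcommand described earlier; in this sketch, OSD ID 16 matches the previous example and OSD_FSID is a placeholder taken from the ceph-volume lvm list output:
Example
[ceph: root@host01 /]# ceph-volume lvm activate --bluestore 16 OSD_FSID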
Additional Resources
- See the Activating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details.
- See the Preparing Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details.
- See the Creating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details.
8.7. Creating Ceph OSDs using ceph-volume
The create subcommand calls the prepare subcommand, and then calls the activate subcommand.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the Ceph OSD nodes.
If you prefer to have more control over the creation process, you can use the prepare and activate subcommands separately to create the OSD, instead of using create. You can use the two subcommands to gradually introduce new OSDs into a storage cluster, while avoiding having to rebalance large amounts of data. Both approaches work the same way, except that using the create subcommand causes the OSD to become up and in immediately after completion.
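A sketch of the two-step approach, reusing the placeholder volume group and logical volume names from the preceding sections; run the activate step later, when you are ready for the OSD to join the storage cluster:
Example
[ceph: root@host01 /]# ceph-volume lvm prepare --bluestore --data example_vg/data_lv
[ceph: root@host01 /]# ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
OSD_ID and OSD_FSID come from the ceph-volume lvm list output for the prepared volume.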
Procedure
To create a new OSD:
Syntax
ceph-volume lvm create --bluestore --data VOLUME_GROUP/LOGICAL_VOLUME
Example
[root@osd ~]# ceph-volume lvm create --bluestore --data example_vg/data_lv
Additional Resources
- See the Preparing Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details.
- See the Activating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details.
8.8. Migrating BlueFS data
You can migrate the BlueStore file system (BlueFS) data, that is, the RocksDB data, from the source volume to the target volume using the migrate LVM subcommand. The source volumes, except the main one, are removed on success.
LVM volumes are permitted for the target only, whether already attached or new.
The new volumes are attached to the OSD, replacing one of the source drives.
The following placement rules apply to the LVM volumes:
- If the source list has a DB or WAL volume, the target device replaces it.
- If the source list has only a slow volume, explicit allocation using the new-db or new-wal command is needed.
The new-db and new-wal commands attach the given logical volume to the given OSD as a DB or a WAL volume, respectively.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the Ceph OSD node.
- Ceph OSDs prepared by the ceph-volume utility.
- Volume groups and logical volumes are created.
Procedure
Log in to the cephadm shell:
Example
[root@host01 ~]# cephadm shell
Stop the OSD to which you have to add the DB or the WAL device:
Example
[ceph: root@host01 /]# ceph orch daemon stop osd.1
Mount the new devices to the container:
Example
[root@host01 ~]# cephadm shell --mount /var/lib/ceph/72436d46-ca06-11ec-9809-ac1f6b5635ee/osd.1:/var/lib/ceph/osd/ceph-1
Attach the given logical volume to the OSD as a DB or WAL device:
Note: This command fails if the OSD already has an attached DB.
Syntax
ceph-volume lvm new-db --osd-id OSD_ID --osd-fsid OSD_FSID --target VOLUME_GROUP_NAME/LOGICAL_VOLUME_NAME
Example
[ceph: root@host01 /]# ceph-volume lvm new-db --osd-id 1 --osd-fsid 7ce687d9-07e7-4f8f-a34e-d1b0efb89921 --target vgname/new_db
[ceph: root@host01 /]# ceph-volume lvm new-wal --osd-id 1 --osd-fsid 7ce687d9-07e7-4f8f-a34e-d1b0efb89921 --target vgname/new_wal
You can migrate BlueFS data in the following ways:
Move BlueFS data from the main device to the LV that is already attached as DB:
Syntax
ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from data --target VOLUME_GROUP_NAME/LOGICAL_VOLUME_NAME
Example
[ceph: root@host01 /]# ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from data --target vgname/db
Move BlueFS data from the shared main device to an LV that will be attached as a new DB:
Syntax
ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from data --target VOLUME_GROUP_NAME/LOGICAL_VOLUME_NAME
Example
[ceph: root@host01 /]# ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from data --target vgname/new_db
Move BlueFS data from the DB device to a new LV, and replace the DB device:
Syntax
ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from db --target VOLUME_GROUP_NAME/LOGICAL_VOLUME_NAME
Example
[ceph: root@host01 /]# ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from db --target vgname/new_db
Move BlueFS data from the main and DB devices to a new LV, and replace the DB device:
Syntax
ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from data db --target VOLUME_GROUP_NAME/LOGICAL_VOLUME_NAME
Example
[ceph: root@host01 /]# ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from data db --target vgname/new_db
Move BlueFS data from the main, DB, and WAL devices to a new LV, remove the WAL device, and replace the DB device:
Syntax
ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from data db wal --target VOLUME_GROUP_NAME/LOGICAL_VOLUME_NAME
Example
[ceph: root@host01 /]# ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from data db wal --target vgname/new_db
Move BlueFS data from the DB and WAL devices to the main device, and remove the WAL and DB devices:
Syntax
ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from db wal --target VOLUME_GROUP_NAME/LOGICAL_VOLUME_NAME
Example
[ceph: root@host01 /]# ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from db wal --target vgname/data
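After the migration completes, restart the OSD that you stopped at the beginning of this procedure; the daemon name below assumes the osd.1 example used above:
Example
[ceph: root@host01 /]# ceph orch daemon start osd.1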
8.9. Expanding BlueFS DB device
You can expand the storage of the BlueStore File System (BlueFS) data, that is, the RocksDB data, of ceph-volume created OSDs with the ceph-bluestore-tool utility.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Ceph OSDs are prepared by the ceph-volume utility.
- Volume groups and logical volumes are created.
Run these steps on the host where the OSD is deployed.
Procedure
Optional: Inside the cephadm shell, list the devices in the Red Hat Ceph Storage cluster:
Example
[ceph: root@host01 /]# ceph-volume lvm list

====== osd.3 =======

  [db]          /dev/db-test/db1

      block device              /dev/test/lv1
      block uuid                N5zoix-FePe-uExe-UngY-D9YG-BMs0-1tTDyB
      cephx lockbox secret
      cluster fsid              1a6112da-ed05-11ee-bacd-525400565cda
      cluster name              ceph
      crush device class
      db device                 /dev/db-test/db1
      db uuid                   1TUaDY-3mEt-fReP-cyB2-JyZ1-oUPa-hKPfo6
      encrypted                 0
      osd fsid                  94ff742c-7bfd-4fb5-8dc4-843d10ac6731
      osd id                    3
      osdspec affinity          None
      type                      db
      vdo                       0
      devices                   /dev/vdh

  [block]       /dev/test/lv1

      block device              /dev/test/lv1
      block uuid                N5zoix-FePe-uExe-UngY-D9YG-BMs0-1tTDyB
      cephx lockbox secret
      cluster fsid              1a6112da-ed05-11ee-bacd-525400565cda
      cluster name              ceph
      crush device class
      db device                 /dev/db-test/db1
      db uuid                   1TUaDY-3mEt-fReP-cyB2-JyZ1-oUPa-hKPfo6
      encrypted                 0
      osd fsid                  94ff742c-7bfd-4fb5-8dc4-843d10ac6731
      osd id                    3
      osdspec affinity          None
      type                      block
      vdo                       0
      devices                   /dev/vdg
Get the volume group information:
Example
[root@host01 ~]# vgs
  VG      #PV #LV #SN Attr   VSize    VFree
  db-test   1   1   0 wz--n- <200.00g <160.00g
  test      1   1   0 wz--n- <200.00g <170.00g
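If the volume group itself does not have enough free space, you can grow it first with a standard LVM command; this is a sketch, and /dev/vdi is a hypothetical spare device:
Example
[root@host01 ~]# vgextend db-test /dev/vdi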
Stop the Ceph OSD service:
Example
[root@host01 ~]# systemctl stop ceph-1a6112da-ed05-11ee-bacd-525400565cda@osd.3.service
Resize the logical volume, shrinking or expanding it as needed:
Example
[root@host01 ~]# lvresize -l 100%FREE /dev/db-test/db1
  Size of logical volume db-test/db1 changed from 40.00 GiB (10240 extents) to <160.00 GiB (40959 extents).
  Logical volume db-test/db1 successfully resized.
Launch the cephadm shell:
Syntax
cephadm shell -m /var/lib/ceph/CLUSTER_FSID/osd.OSD_ID:/var/lib/ceph/osd/ceph-OSD_ID:z
Example
[root@host01 ~]# cephadm shell -m /var/lib/ceph/1a6112da-ed05-11ee-bacd-525400565cda/osd.3:/var/lib/ceph/osd/ceph-3:z
The ceph-bluestore-tool needs to access the BlueStore data from within the cephadm shell container, so the data must be bind-mounted. Use the -m option to make the BlueStore data available.
Check the size of the RocksDB before expansion:
Syntax
ceph-bluestore-tool show-label --path OSD_DIRECTORY_PATH
Example
[ceph: root@host01 /]# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-3/
inferring bluefs devices from bluestore path
{
    "/var/lib/ceph/osd/ceph-3/block": {
        "osd_uuid": "94ff742c-7bfd-4fb5-8dc4-843d10ac6731",
        "size": 32212254720,
        "btime": "2024-04-03T08:34:12.742848+0000",
        "description": "main",
        "bfm_blocks": "7864320",
        "bfm_blocks_per_key": "128",
        "bfm_bytes_per_block": "4096",
        "bfm_size": "32212254720",
        "bluefs": "1",
        "ceph_fsid": "1a6112da-ed05-11ee-bacd-525400565cda",
        "ceph_version_when_created": "ceph version 19.0.0-2493-gd82c9aa1 (d82c9aa17f09785fe698d262f9601d87bb79f962) squid (dev)",
        "created_at": "2024-04-03T08:34:15.637253Z",
        "elastic_shared_blobs": "1",
        "kv_backend": "rocksdb",
        "magic": "ceph osd volume v026",
        "mkfs_done": "yes",
        "osd_key": "AQCEFA1m9xuwABAAwKEHkASVbgB1GVt5jYC2Sg==",
        "osdspec_affinity": "None",
        "ready": "ready",
        "require_osd_release": "19",
        "whoami": "3"
    },
    "/var/lib/ceph/osd/ceph-3/block.db": {
        "osd_uuid": "94ff742c-7bfd-4fb5-8dc4-843d10ac6731",
        "size": 40794497536,
        "btime": "2024-04-03T08:34:12.748816+0000",
        "description": "bluefs db"
    }
}
Expand the BlueStore device:
Syntax
ceph-bluestore-tool bluefs-bdev-expand --path OSD_DIRECTORY_PATH
Example
[ceph: root@host01 /]# ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-3/
inferring bluefs devices from bluestore path
1 : device size 0x27ffbfe000 : using 0x2300000(35 MiB)
2 : device size 0x780000000 : using 0x52000(328 KiB)
Expanding DB/WAL...
1 : expanding to 0x171794497536
1 : size label updated to 171794497536
Verify that block.db is expanded:
Syntax
ceph-bluestore-tool show-label --path OSD_DIRECTORY_PATH
Example
[ceph: root@host01 /]# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-3/
inferring bluefs devices from bluestore path
{
    "/var/lib/ceph/osd/ceph-3/block": {
        "osd_uuid": "94ff742c-7bfd-4fb5-8dc4-843d10ac6731",
        "size": 32212254720,
        "btime": "2024-04-03T08:34:12.742848+0000",
        "description": "main",
        "bfm_blocks": "7864320",
        "bfm_blocks_per_key": "128",
        "bfm_bytes_per_block": "4096",
        "bfm_size": "32212254720",
        "bluefs": "1",
        "ceph_fsid": "1a6112da-ed05-11ee-bacd-525400565cda",
        "ceph_version_when_created": "ceph version 19.0.0-2493-gd82c9aa1 (d82c9aa17f09785fe698d262f9601d87bb79f962) squid (dev)",
        "created_at": "2024-04-03T08:34:15.637253Z",
        "elastic_shared_blobs": "1",
        "kv_backend": "rocksdb",
        "magic": "ceph osd volume v026",
        "mkfs_done": "yes",
        "osd_key": "AQCEFA1m9xuwABAAwKEHkASVbgB1GVt5jYC2Sg==",
        "osdspec_affinity": "None",
        "ready": "ready",
        "require_osd_release": "19",
        "whoami": "3"
    },
    "/var/lib/ceph/osd/ceph-3/block.db": {
        "osd_uuid": "94ff742c-7bfd-4fb5-8dc4-843d10ac6731",
        "size": 171794497536,
        "btime": "2024-04-03T08:34:12.748816+0000",
        "description": "bluefs db"
    }
}
Exit the shell and restart the OSD:
Example
[root@host01 ~]# systemctl start ceph-1a6112da-ed05-11ee-bacd-525400565cda@osd.3.service
osd.3  host01  running (15s)  0s ago  13m  46.9M  4096M  19.0.0-2493-gd82c9aa1  3714003597ec  02150b3b6877
8.10. Using batch mode with ceph-volume
The batch subcommand automates the creation of multiple OSDs when single devices are provided.
The ceph-volume command decides the best method to use to create the OSDs, based on drive type. Ceph OSD optimization depends on the available devices:
- If all devices are traditional hard drives, batch creates one OSD per device.
- If all devices are solid state drives, batch creates two OSDs per device.
- If there is a mix of traditional hard drives and solid state drives, batch uses the traditional hard drives for data, and creates the largest possible journal (block.db) on the solid state drive.
The batch subcommand does not support the creation of a separate logical volume for the write-ahead log (block.wal) device.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the Ceph OSD nodes.
Procedure
To create OSDs on several drives:
Syntax
ceph-volume lvm batch --bluestore PATH_TO_DEVICE [PATH_TO_DEVICE]
Example
[ceph: root@host01 /]# ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/nvme0n1
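To preview what the batch subcommand would do before creating any OSDs, you can add the --report flag as a dry run; this sketch assumes the same devices as the previous example:
Example
[ceph: root@host01 /]# ceph-volume lvm batch --bluestore --report /dev/sda /dev/sdb /dev/nvme0n1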
Additional Resources
- See the Creating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details.
8.11. Zapping data using ceph-volume
The zap subcommand removes all data and file systems from a logical volume or partition.
You can use the zap subcommand to zap logical volumes, partitions, or raw devices that are used by Ceph OSDs, so that they can be reused. Any file systems present on the given logical volume or partition are removed, and all data is purged.
Optionally, you can use the --destroy flag for complete removal of a logical volume, partition, or the physical device.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the Ceph OSD node.
Procedure
Zap the logical volume:
Syntax
ceph-volume lvm zap VOLUME_GROUP_NAME/LOGICAL_VOLUME_NAME [--destroy]
Example
[ceph: root@host01 /]# ceph-volume lvm zap osd-vg/data-lv
Zap the partition:
Syntax
ceph-volume lvm zap DEVICE_PATH_PARTITION [--destroy]
Example
[ceph: root@host01 /]# ceph-volume lvm zap /dev/sdc1
Zap the raw device:
Syntax
ceph-volume lvm zap DEVICE_PATH --destroy
Example
[ceph: root@host01 /]# ceph-volume lvm zap /dev/sdc --destroy
Purge multiple devices with the OSD ID:
Syntax
ceph-volume lvm zap --destroy --osd-id OSD_ID
Example
[ceph: root@host01 /]# ceph-volume lvm zap --destroy --osd-id 16
Note: All the related devices are zapped.
Purge OSDs with the FSID:
Syntax
ceph-volume lvm zap --destroy --osd-fsid OSD_FSID
Example
[ceph: root@host01 /]# ceph-volume lvm zap --destroy --osd-fsid 65d7b6b1-e41a-4a3c-b363-83ade63cb32b
Note: All the related devices are zapped.