Chapter 2. Ceph block device commands
As a storage administrator, being familiar with Ceph’s block device commands can help you effectively manage the Red Hat Ceph Storage cluster. You can create and manage block device pools and images, and enable and disable the various features of Ceph block devices.
2.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
2.2. Displaying the command help
Display command and subcommand online help from the command-line interface.
The -h option still displays help for all available commands.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the client node.
Procedure
Use the rbd help command to display help for a particular rbd command and its subcommands:
Syntax
rbd help COMMAND SUBCOMMAND
To display help for the snap list command:
Example
[root@rbd-client ~]# rbd help snap list
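The command prints usage text similar to the following abbreviated, illustrative excerpt; the exact options vary by release:
usage: rbd snap list [--pool <pool>] [--image <image>] [--image-id <image-id>]
                     [--format <format>] [--pretty-format] [--all]
                     <image-spec>

dump list of image snapshots
...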
2.3. Creating a block device pool
Before using the block device client, ensure that a pool for rbd exists, is enabled, and is initialized.
You must create the pool before you can specify it as a source.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the client node.
Procedure
To create an rbd pool, execute the following:
Syntax
ceph osd pool create POOL_NAME PG_NUM
ceph osd pool application enable POOL_NAME rbd
rbd pool init -p POOL_NAME
Example
[root@rbd-client ~]# ceph osd pool create example 128
[root@rbd-client ~]# ceph osd pool application enable example rbd
[root@rbd-client ~]# rbd pool init -p example
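As an optional check, you can confirm that the pool exists and that the rbd application is enabled on it; this sketch assumes the example pool name used above:
[root@rbd-client ~]# ceph osd lspools
[root@rbd-client ~]# ceph osd pool application get example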
Additional Resources
- See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for additional details.
2.4. Creating a block device image
Before adding a block device to a node, create an image for it in the Ceph storage cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the client node.
Procedure
To create a block device image, execute the following command:
Syntax
rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME
Example
[root@rbd-client ~]# rbd create data --size 1024 --pool stack
This example creates a 1 GB image named data that stores information in a pool named stack.
Note: Ensure the pool exists before creating an image.
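The --size value is interpreted in megabytes by default. Recent rbd releases also accept explicit unit suffixes; for example, the following sketch creates a 10 GB image using the hypothetical image name data2:
[root@rbd-client ~]# rbd create data2 --size 10G --pool stack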
Additional Resources
- See the Creating a block device pool section in the Red Hat Ceph Storage Block Device Guide for additional details.
2.5. Listing the block device images
List the block device images.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the client node.
Procedure
To list block devices in the rbd pool, execute the following (rbd is the default pool name):
Example
[root@rbd-client ~]# rbd ls
To list block devices in a particular pool, execute the following, replacing POOL_NAME with the name of the pool:
Syntax
rbd ls POOL_NAME
Example
[root@rbd-client ~]# rbd ls swimmingpool
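The command prints one image name per line. With the images bar and foo used elsewhere in this chapter, the output would look similar to the following:
bar
foo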
2.6. Retrieving the block device image information
Retrieve information on the block device image.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the client node.
Procedure
To retrieve information from a particular image, execute the following, replacing IMAGE_NAME with the name of the image:
Syntax
rbd --image IMAGE_NAME info
Example
[root@rbd-client ~]# rbd --image foo info
To retrieve information from an image within a pool, execute the following, replacing IMAGE_NAME with the name of the image and POOL_NAME with the name of the pool:
Syntax
rbd --image IMAGE_NAME -p POOL_NAME info
Example
[root@rbd-client ~]# rbd --image bar -p swimmingpool info
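Typical output resembles the following; all values here are illustrative, and the exact fields vary by release and by the features enabled on the image:
rbd image 'bar':
	size 1 GiB in 256 objects
	order 22 (4 MiB objects)
	id: 5e2cba2f62e
	block_name_prefix: rbd_data.5e2cba2f62e
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten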
2.7. Resizing a block device image
Ceph block device images are thin-provisioned. They do not actually use any physical storage until you begin saving data to them. However, they do have a maximum capacity that you set with the --size option.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the client node.
Procedure
To increase or decrease the maximum size of a Ceph block device image:
Syntax
rbd resize --image IMAGE_NAME --size SIZE
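For example, to grow the image foo to 2 GB and then shrink it back to 1 GB; shrinking additionally requires the --allow-shrink option, because data beyond the new size is discarded:
[root@rbd-client ~]# rbd resize --image foo --size 2048
[root@rbd-client ~]# rbd resize --image foo --size 1024 --allow-shrink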
2.8. Removing a block device image
Remove a block device image.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the client node.
Procedure
To remove a block device, execute the following, replacing IMAGE_NAME with the name of the image you want to remove:
Syntax
rbd rm IMAGE_NAME
Example
[root@rbd-client ~]# rbd rm foo
To remove a block device from a pool, execute the following, replacing IMAGE_NAME with the name of the image to remove and POOL_NAME with the name of the pool:
Syntax
rbd rm IMAGE_NAME -p POOL_NAME
Example
[root@rbd-client ~]# rbd rm bar -p swimmingpool
2.9. Managing block device images using the trash command
RADOS Block Device (RBD) images can be moved to the trash using the rbd trash command.
This command provides a wide array of options, such as:
- Removing images from the trash.
- Listing images from the trash.
- Deferring deletion of images from the trash.
- Deleting images from the trash.
- Restoring images from the trash.
- Restoring images from the trash and renaming them.
- Purging expired images from the trash.
- Scheduling purge from the trash.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the client node.
Procedure
Move an image to the trash:
Syntax
rbd trash mv POOL_NAME/IMAGE_NAME
Example
[root@rbd-client ~]# rbd trash mv mypool/myimage
Once an image is in the trash, a unique image ID is assigned to it.
Note: You need this image ID to specify the image later if you need to use any of the trash options.
List the images in the trash:
Syntax
rbd trash ls POOL_NAME
Example
[root@rbd-client ~]# rbd trash ls mypool
1558a57fa43b rename_image
The unique IMAGE_ID 1558a57fa43b can be used for any of the trash options.
Move an image to the trash and defer the deletion of the image from the trash:
Syntax
rbd trash mv POOL_NAME/IMAGE_NAME --expires-at "EXPIRATION_TIME"
The EXPIRATION_TIME can be a number of seconds or hours, a date, a time in "HH:MM:SS" format, or "tomorrow".
Example
[root@rbd-client ~]# rbd trash mv mypool/myimage --expires-at "60 seconds"
In this example, myimage is moved to the trash. However, you cannot delete it from the trash until 60 seconds have passed.
Restore the image from the trash:
Syntax
rbd trash restore POOL_NAME/IMAGE_ID
Example
[root@rbd-client ~]# rbd trash restore mypool/14502ff9ee4d
Delete the image from the trash:
Syntax
rbd trash rm POOL_NAME/IMAGE_ID [--force]
Example
[root@rbd-client ~]# rbd trash rm mypool/14502ff9ee4d
Removing image: 100% complete...done.
If the image is deferred for deletion, you cannot delete it from the trash until the deferment time expires; attempting to do so produces the following error message:
Example
Deferment time has not expired, please use --force if you really want to remove the image
Removing image: 0% complete...failed.
2021-12-02 06:37:49.573 7fb5d237a500 -1 librbd::api::Trash: remove: error: deferment time has not expired.
Important: Once an image is deleted from the trash, it cannot be restored.
Rename the image and then restore it from the trash:
Syntax
rbd trash restore POOL_NAME/IMAGE_ID --image NEW_IMAGE_NAME
Example
[root@rbd-client ~]# rbd trash restore mypool/14502ff9ee4d --image test_image
Remove expired images from the trash:
Syntax
rbd trash purge POOL_NAME
Example
[root@rbd-client ~]# rbd trash purge mypool
In this example, all the expired images in the trash for mypool are removed.
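The scheduling option listed at the start of this section is handled by the rbd trash purge schedule subcommands, which are available in later Ceph releases. A minimal sketch, assuming the mypool pool and a 15-minute purge interval:
[root@rbd-client ~]# rbd trash purge schedule add --pool mypool 15m
[root@rbd-client ~]# rbd trash purge schedule ls --pool mypool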
2.10. Enabling and disabling image features
You can enable or disable image features, such as fast-diff, exclusive-lock, object-map, or journaling, on already existing images.
The deep-flatten feature can only be disabled on already existing images, not enabled. To use deep-flatten, enable it when creating images.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the client node.
Procedure
To enable a feature:
Syntax
rbd feature enable POOL_NAME/IMAGE_NAME FEATURE_NAME
To enable the exclusive-lock feature on the image1 image in the data pool:
Example
[root@rbd-client ~]# rbd feature enable data/image1 exclusive-lock
Important: If you enable the fast-diff and object-map features, then rebuild the object map:
Syntax
rbd object-map rebuild POOL_NAME/IMAGE_NAME
To disable a feature:
Syntax
rbd feature disable POOL_NAME/IMAGE_NAME FEATURE_NAME
To disable the fast-diff feature on the image2 image in the data pool:
Example
[root@rbd-client ~]# rbd feature disable data/image2 fast-diff
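To confirm which features are currently enabled on an image, inspect the features line in the rbd info output; a quick check against the image from the example above, with illustrative output:
[root@rbd-client ~]# rbd --image image1 -p data info | grep features
	features: layering, exclusive-lock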
2.11. Working with image metadata
Ceph supports adding custom image metadata as key-value pairs. The pairs do not have any strict format.
Also, by using metadata, you can set the RADOS Block Device (RBD) configuration parameters for particular images.
Use the rbd image-meta commands to work with metadata.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the client node.
Procedure
To set a new metadata key-value pair:
Syntax
rbd image-meta set POOL_NAME/IMAGE_NAME KEY VALUE
Example
[root@rbd-client ~]# rbd image-meta set data/dataset last_update 2016-06-06
This example sets the last_update key to the 2016-06-06 value on the dataset image in the data pool.
To remove a metadata key-value pair:
Syntax
rbd image-meta remove POOL_NAME/IMAGE_NAME KEY
Example
[root@rbd-client ~]# rbd image-meta remove data/dataset last_update
This example removes the last_update key-value pair from the dataset image in the data pool.
To view the value of a key:
Syntax
rbd image-meta get POOL_NAME/IMAGE_NAME KEY
Example
[root@rbd-client ~]# rbd image-meta get data/dataset last_update
This example views the value of the last_update key.
To show all metadata on an image:
Syntax
rbd image-meta list POOL_NAME/IMAGE_NAME
Example
[root@rbd-client ~]# rbd image-meta list data/dataset
This example lists the metadata set for the dataset image in the data pool.
To override the RBD image configuration settings set in the Ceph configuration file for a particular image:
Syntax
rbd config image set POOL_NAME/IMAGE_NAME PARAMETER VALUE
Example
[root@rbd-client ~]# rbd config image set data/dataset rbd_cache false
This example disables the RBD cache for the dataset image in the data pool.
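To review all configuration overrides currently in effect for an image, the rbd config image list subcommand can be used; a quick check, assuming the image from the example above:
[root@rbd-client ~]# rbd config image list data/dataset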
Additional Resources
- See the Block device general options section in the Red Hat Ceph Storage Block Device Guide for a list of possible configuration options.
2.12. Moving images between pools
You can move RADOS Block Device (RBD) images between different pools within the same cluster. You can migrate between two replicated pools, between two erasure-coded pools, or between a replicated pool and an erasure-coded pool.
During this process, the source image is copied to the target image with all snapshot history and, optionally, with a link to the source image’s parent to help preserve sparseness. The source image is read-only, while the target image is writable. The target image is linked to the source image while the migration is in progress.
You can safely run this process in the background while the new target image is in use. However, stop all clients that use the image before the preparation step, to ensure that clients using the image are updated to point to the new target image.
The krbd kernel module does not support live migration at this time.
Prerequisites
- Stop all clients that use the source image.
- Root-level access to the client node.
Procedure
Prepare for migration by creating the new target image that cross-links the source and target images:
Syntax
rbd migration prepare SOURCE_IMAGE TARGET_IMAGE
Replace:
- SOURCE_IMAGE with the name of the image to be moved. Use the POOL/IMAGE_NAME format.
- TARGET_IMAGE with the name of the new image. Use the POOL/IMAGE_NAME format.
Example
[root@rbd-client ~]# rbd migration prepare data/source stack/target
Verify the state of the new target image, which should be prepared:
Syntax
rbd status TARGET_IMAGE
Example
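Output similar to the following is expected; the image IDs shown here are illustrative:
[root@rbd-client ~]# rbd status stack/target
Watchers: none
Migration:
	source: data/source (5e2cba2f62e)
	destination: stack/target (5e2ed95ed806)
	state: prepared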
- Optionally, restart the clients using the new target image name.
Copy the source image to the target image:
Syntax
rbd migration execute TARGET_IMAGE
Example
[root@rbd-client ~]# rbd migration execute stack/target
Ensure that the migration is completed:
Example
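One way to check is to run rbd status again; when the copy has finished, the migration state changes to executed. The output here is illustrative:
[root@rbd-client ~]# rbd status stack/target
Watchers: none
Migration:
	source: data/source (5e2cba2f62e)
	destination: stack/target (5e2ed95ed806)
	state: executed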
Commit the migration. This removes the cross-link between the source and target images, and also removes the source image:
Syntax
rbd migration commit TARGET_IMAGE
Example
[root@rbd-client ~]# rbd migration commit stack/target
If the source image is a parent of one or more clones, use the --force option after ensuring that the clone images are not in use:
Example
[root@rbd-client ~]# rbd migration commit stack/target --force
- If you did not restart the clients after the preparation step, restart them using the new target image name.
2.13. The rbdmap service
The systemd unit file, rbdmap.service, is included with the ceph-common package. The rbdmap.service unit executes the rbdmap shell script.
This script automates mapping and unmapping RADOS Block Devices (RBD) for one or more RBD images. The script can be run manually at any time, but the typical use case is to automatically map and mount RBD images at boot time, and unmount and unmap them at shutdown. The script takes a single argument, which can be either map, for mounting, or unmap, for unmounting RBD images. The script parses a configuration file, /etc/ceph/rbdmap by default, which can be overridden with an environment variable called RBDMAPFILE. Each line of the configuration file corresponds to an RBD image.
The format of the configuration file is as follows:
IMAGE_SPEC RBD_OPTS
where IMAGE_SPEC specifies the POOL_NAME/IMAGE_NAME, or just the IMAGE_NAME, in which case the POOL_NAME defaults to rbd. RBD_OPTS is an optional list of options to be passed to the underlying rbd map command. These parameters and their values should be specified as a comma-separated string:
OPT1=VAL1,OPT2=VAL2,…,OPT_N=VAL_N
This causes the script to issue an rbd map command like the following:
rbd map POOLNAME/IMAGE_NAME --OPT1 VAL1 --OPT2 VAL2
For options and values that contain commas or equality signs, enclose them in single quotation marks to prevent them from being split.
When successful, the rbd map operation maps the image to a /dev/rbdX device, at which point a udev rule is triggered to create a friendly device name symlink, for example, /dev/rbd/POOL_NAME/IMAGE_NAME, pointing to the real mapped device. For mounting or unmounting to succeed, the friendly device name must have a corresponding entry in the /etc/fstab file. When writing /etc/fstab entries for RBD images, it is a good idea to specify the noauto or nofail mount option. This prevents the init system from trying to mount the device too early, before the device exists.
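For example, an /etc/fstab entry for the foo/bar1 image used in the next section might look like the following sketch; the mount point and file system type are assumptions for illustration:
/dev/rbd/foo/bar1 /mnt/bar1 ext4 noauto 0 0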
2.14. Configuring the rbdmap service
Configure the rbdmap service to automatically map and mount RADOS Block Devices (RBD) at boot time, and to unmap and unmount them at shutdown.
Prerequisites
- Root-level access to the node doing the mounting.
- Installation of the ceph-common package.
Procedure
- Open the /etc/ceph/rbdmap configuration file for editing. Add the RBD image or images to the configuration file:
Example
foo/bar1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
foo/bar2 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring,options='lock_on_read,queue_depth=1024'
- Save changes to the configuration file.
Enable the RBD mapping service:
Example
[root@client ~]# systemctl enable rbdmap.service
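Enabling the unit takes effect at the next boot. To map and mount the configured images immediately, you can also start the service and verify its state:
[root@client ~]# systemctl start rbdmap.service
[root@client ~]# systemctl status rbdmap.service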
2.15. Monitoring performance of Ceph Block Devices using the command-line interface
Starting with Red Hat Ceph Storage 4.1, a performance metrics gathering framework is integrated within the Ceph OSD and Manager components. This framework provides a built-in method to generate and process performance metrics upon which other Ceph Block Device performance monitoring solutions are built.
A new Ceph Manager module, rbd_support, aggregates the performance metrics when enabled. The rbd command has two new actions: iotop and iostat.
The initial use of these actions can take around 30 seconds to populate the data fields.
Prerequisites
- User-level access to a Ceph Monitor node.
Procedure
Enable the rbd_support Ceph Manager module:
Example
[user@mon ~]$ ceph mgr module enable rbd_support
To display an "iotop"-style view of images:
Example
[user@mon ~]$ rbd perf image iotop
Note: The write-ops, read-ops, write-bytes, read-bytes, write-latency, and read-latency columns can be sorted dynamically by using the right and left arrow keys.
To display an "iostat"-style view of images:
Example
[user@mon ~]$ rbd perf image iostat
Note: The output from this command can be in JSON or XML format, and can then be sorted using other command-line tools.
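Both actions accept an optional pool name to restrict the output, and the standard --format option selects machine-readable output. A sketch, assuming a pool named mypool:
[user@mon ~]$ rbd perf image iostat mypool --format json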
2.16. Additional Resources
- See Chapter 3, The rbd kernel module, for details on mapping and unmapping block devices.