Chapter 2. Ceph block device commands

As a storage administrator, being familiar with Ceph’s block device commands can help you effectively manage the Red Hat Ceph Storage cluster. You can create and manage block device pools and images, and enable and disable the various features of Ceph block devices.

2.1. Prerequisites

  • A running Red Hat Ceph Storage cluster.

2.2. Displaying the command help

Display command and subcommand online help from the command-line interface.

Note

The -h option still displays help for all available commands.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. Use the rbd help command to display help for a particular rbd command and its subcommand:

    Syntax

    rbd help COMMAND SUBCOMMAND

  2. To display help for the snap list command:

    Example

    [root@rbd-client ~]# rbd help snap list

2.3. Creating a block device pool

Before using the block device client, ensure that a pool for rbd exists, and that the pool is enabled for the rbd application and initialized.

Note

You must create a pool before you can specify it as a source.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To create an rbd pool, execute the following:

    Syntax

    ceph osd pool create POOL_NAME PG_NUM
    ceph osd pool application enable POOL_NAME rbd
    rbd pool init -p POOL_NAME

    Example

    [root@rbd-client ~]# ceph osd pool create example 128
    [root@rbd-client ~]# ceph osd pool application enable example rbd
    [root@rbd-client ~]# rbd pool init -p example
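
    To confirm that the pool exists and is tagged for the rbd application, you can list the pools in detail; this verification step is an optional addition:

    [root@rbd-client ~]# ceph osd pool ls detail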

Additional Resources

  • See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for additional details.

2.4. Creating a block device image

Before adding a block device to a node, create an image for it in the Ceph storage cluster.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To create a block device image, execute the following command:

    Syntax

    rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME

    Example

    [root@rbd-client ~]# rbd create data --size 1024 --pool stack

    This example creates a 1 GB image named data that stores information in a pool named stack.

    Note

    Ensure the pool exists before creating an image.
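
    The pool and image can also be specified together in POOL_NAME/IMAGE_NAME format. A minimal equivalent of the example above:

    [root@rbd-client ~]# rbd create stack/data --size 1024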

2.5. Listing the block device images

List the block device images.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To list block devices in the rbd pool, execute the following command (rbd is the default pool name):

    Example

    [root@rbd-client ~]# rbd ls

  2. To list block devices in a particular pool, execute the following, but replace POOL_NAME with the name of the pool:

    Syntax

    rbd ls POOL_NAME

    Example

    [root@rbd-client ~]# rbd ls swimmingpool
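
    To include each image’s size and format in the listing, the long form of the command can be used:

    [root@rbd-client ~]# rbd ls -l swimmingpool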

2.6. Retrieving the block device image information

Retrieve information on the block device image.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To retrieve information from a particular image, execute the following command, but replace IMAGE_NAME with the name of the image:

    Syntax

    rbd --image IMAGE_NAME info

    Example

    [root@rbd-client ~]# rbd --image foo info

  2. To retrieve information from an image within a pool, execute the following, but replace IMAGE_NAME with the name of the image and replace POOL_NAME with the name of the pool:

    Syntax

    rbd --image IMAGE_NAME -p POOL_NAME info

    Example

    [root@rbd-client ~]# rbd --image bar -p swimmingpool info
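
    Equivalently, the image can be addressed in POOL_NAME/IMAGE_NAME format:

    [root@rbd-client ~]# rbd info swimmingpool/bar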

2.7. Resizing a block device image

Ceph block device images are thin-provisioned. They do not actually use any physical storage until you begin saving data to them. However, they do have a maximum capacity that you set with the --size option.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To increase or decrease the maximum size of a Ceph block device image:

    Syntax

    rbd resize --image IMAGE_NAME --size SIZE
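
    Example

    [root@rbd-client ~]# rbd resize --image foo --size 2048

    This example increases the maximum size of the foo image to 2048 MB; the image name is illustrative. When decreasing the size of an image, add the --allow-shrink option as a safeguard against accidental data loss:

    [root@rbd-client ~]# rbd resize --image foo --size 1024 --allow-shrink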

2.8. Removing a block device image

Remove a block device image.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To remove a block device, execute the following, but replace IMAGE_NAME with the name of the image you want to remove:

    Syntax

    rbd rm IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd rm foo

  2. To remove a block device from a pool, execute the following, but replace IMAGE_NAME with the name of the image to remove and replace POOL_NAME with the name of the pool:

    Syntax

    rbd rm IMAGE_NAME -p POOL_NAME

    Example

    [root@rbd-client ~]# rbd rm bar -p swimmingpool

2.9. Managing block device images using the trash command

RADOS Block Device (RBD) images can be moved to the trash using the rbd trash command.

This command provides a wide array of options such as:

  • Removing images from the trash.
  • Listing images from the trash.
  • Deferring deletion of images from the trash.
  • Deleting images from the trash.
  • Restoring images from the trash.
  • Restoring images from the trash and renaming them.
  • Purging expired images from the trash.
  • Scheduling purges from the trash.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  • Move an image to the trash:

    Syntax

    rbd trash mv POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd trash mv mypool/myimage

    Once an image is in the trash, a unique image ID is assigned.

    Note

    You need this image ID to specify the image later when using any of the trash options.

  • List the images in the trash:

    Syntax

    rbd trash ls POOL_NAME

    Example

    [root@rbd-client ~]# rbd trash ls mypool
    
    1558a57fa43b rename_image

    The unique IMAGE_ID 1558a57fa43b can be used with any of the trash options.

  • Move an image to the trash and defer the deletion of the image from the trash:

    Syntax

    rbd trash mv POOL_NAME/IMAGE_NAME --expires-at "EXPIRATION_TIME"

    The EXPIRATION_TIME can be a number of seconds or hours, a date, a time in "HH:MM:SS" format, or a word such as "tomorrow".

    Example

    [root@rbd-client ~]# rbd trash mv mypool/myimage --expires-at "60 seconds"

    In this example, myimage is moved to the trash. However, it cannot be deleted from the trash until 60 seconds have passed.

  • Restore the image from the trash:

    Syntax

    rbd trash restore POOL_NAME/IMAGE_ID

    Example

    [root@rbd-client ~]# rbd trash restore mypool/14502ff9ee4d

  • Delete the image from the trash:

    Syntax

    rbd trash rm POOL_NAME/IMAGE_ID [--force]

    Example

    [root@rbd-client ~]# rbd trash rm mypool/14502ff9ee4d
    Removing image: 100% complete...done.

    If deletion of the image is deferred, you cannot delete it from the trash until the deferment period expires. Attempting to do so earlier produces the following error message:

    Example

    Deferment time has not expired, please use --force if you really want to remove the image
    Removing image: 0% complete...failed.
    2021-12-02 06:37:49.573 7fb5d237a500 -1 librbd::api::Trash: remove: error: deferment time has not expired.

    Important

    Once an image is deleted from the trash, it cannot be restored.

  • Rename the image and then restore it from the trash:

    Syntax

    rbd trash restore POOL_NAME/IMAGE_ID --image NEW_IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd trash restore mypool/14502ff9ee4d --image test_image

  • Remove expired images from the trash:

    Syntax

    rbd trash purge POOL_NAME

    Example

    [root@rbd-client ~]# rbd trash purge mypool

    In this example, all expired images in the trash for mypool are removed.

2.10. Enabling and disabling image features

You can enable or disable image features, such as fast-diff, exclusive-lock, object-map, or journaling, on already existing images.

Note

The deep flatten feature can only be disabled on already existing images, not enabled. To use deep flatten, enable it when creating images.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To enable a feature:

    Syntax

    rbd feature enable POOL_NAME/IMAGE_NAME FEATURE_NAME

    1. To enable the exclusive-lock feature on the image1 image in the data pool:

      Example

      [root@rbd-client ~]# rbd feature enable data/image1 exclusive-lock

      Important

      If you enable the fast-diff and object-map features, then rebuild the object map:

      Syntax

      rbd object-map rebuild POOL_NAME/IMAGE_NAME

  2. To disable a feature:

    Syntax

    rbd feature disable POOL_NAME/IMAGE_NAME FEATURE_NAME

    1. To disable the fast-diff feature on the image2 image in the data pool:

      Example

      [root@rbd-client ~]# rbd feature disable data/image2 fast-diff
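
      To verify which features are enabled on the image, check the features line in the rbd info output:

      [root@rbd-client ~]# rbd --image image2 -p data info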

2.11. Working with image metadata

Ceph supports adding custom image metadata as key-value pairs. The pairs do not have any strict format.

Also, by using metadata, you can set the RADOS Block Device (RBD) configuration parameters for particular images.

Use the rbd image-meta commands to work with metadata.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To set a new metadata key-value pair:

    Syntax

    rbd image-meta set POOL_NAME/IMAGE_NAME KEY VALUE

    Example

    [root@rbd-client ~]# rbd image-meta set data/dataset last_update 2016-06-06

    This example sets the last_update key to the 2016-06-06 value on the dataset image in the data pool.

  2. To remove a metadata key-value pair:

    Syntax

    rbd image-meta remove POOL_NAME/IMAGE_NAME KEY

    Example

    [root@rbd-client ~]# rbd image-meta remove data/dataset last_update

    This example removes the last_update key-value pair from the dataset image in the data pool.

  3. To view a value of a key:

    Syntax

    rbd image-meta get POOL_NAME/IMAGE_NAME KEY

    Example

    [root@rbd-client ~]# rbd image-meta get data/dataset last_update

    This example displays the value of the last_update key.

  4. To show all metadata on an image:

    Syntax

    rbd image-meta list POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd image-meta list data/dataset

    This example lists the metadata set for the dataset image in the data pool.

  5. To override the RBD image configuration settings set in the Ceph configuration file for a particular image:

    Syntax

    rbd config image set POOL_NAME/IMAGE_NAME PARAMETER VALUE

    Example

    [root@rbd-client ~]# rbd config image set data/dataset rbd_cache false

    This example disables the RBD cache for the dataset image in the data pool.
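
    To review the configuration overrides currently set on an image, the rbd config image list command can be used:

    [root@rbd-client ~]# rbd config image list data/dataset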

Additional Resources

  • See the Block device general options section in the Red Hat Ceph Storage Block Device Guide for a list of possible configuration options.

2.12. Moving images between pools

You can move RADOS Block Device (RBD) images between different pools within the same cluster. The migration can be between replicated pools, erasure-coded pools, or between replicated and erasure-coded pools.

During this process, the source image is copied to the target image with all snapshot history and, optionally, with a link to the source image’s parent to help preserve sparseness. The source image becomes read-only, while the target image is writable. The target image is linked to the source image while the migration is in progress.

You can safely run this process in the background while the new target image is in use. However, stop all clients using the source image before the preparation step to ensure that clients using the image are updated to point to the new target image.

Important

The krbd kernel module does not support live migration at this time.

Prerequisites

  • Stop all clients that use the source image.
  • Root-level access to the client node.

Procedure

  1. Prepare for migration by creating the new target image that cross-links the source and target images:

    Syntax

    rbd migration prepare SOURCE_IMAGE TARGET_IMAGE

    Replace:

    • SOURCE_IMAGE with the name of the image to be moved. Use the POOL/IMAGE_NAME format.
    • TARGET_IMAGE with the name of the new image. Use the POOL/IMAGE_NAME format.

    Example

    [root@rbd-client ~]# rbd migration prepare data/source stack/target

  2. Verify the state of the new target image, which should be in the prepared state:

    Syntax

    rbd status TARGET_IMAGE

    Example

    [root@rbd-client ~]# rbd status stack/target
    Watchers: none
    Migration:
                source: data/source (5e2cba2f62e)
                destination: stack/target (5e2ed95ed806)
                state: prepared

  3. Optionally, restart the clients using the new target image name.
  4. Copy the source image to the target image:

    Syntax

    rbd migration execute TARGET_IMAGE

    Example

    [root@rbd-client ~]# rbd migration execute stack/target

  5. Ensure that the migration is completed:

    Example

    [root@rbd-client ~]# rbd status stack/target
    Watchers:
        watcher=1.2.3.4:0/3695551461 client.123 cookie=123
    Migration:
                source: data/source (5e2cba2f62e)
                destination: stack/target (5e2ed95ed806)
                state: executed

  6. Commit the migration. This removes the cross-link between the source and target images, and also removes the source image:

    Syntax

    rbd migration commit TARGET_IMAGE

    Example

    [root@rbd-client ~]# rbd migration commit stack/target

    If the source image is a parent of one or more clones, use the --force option after ensuring that the clone images are not in use:

    Example

    [root@rbd-client ~]# rbd migration commit stack/target --force

  7. If you did not restart the clients after the preparation step, restart them using the new target image name.
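
    Note

    At any point before committing, a migration can be canceled with the rbd migration abort command, which cancels the migration and reverts the target image:

    [root@rbd-client ~]# rbd migration abort stack/target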

2.13. The rbdmap service

The systemd unit file, rbdmap.service, is included with the ceph-common package. The rbdmap.service unit executes the rbdmap shell script.

This script automates the mapping and unmapping of RADOS Block Devices (RBD) for one or more RBD images. The script can be run manually at any time, but the typical use case is to automatically map and mount RBD images at boot time, and unmap and unmount them at shutdown. The script takes a single argument, which can be either map, for mounting, or unmap, for unmounting, RBD images. The script parses a configuration file, /etc/ceph/rbdmap by default, but the location can be overridden with the RBDMAPFILE environment variable. Each line of the configuration file corresponds to an RBD image.

The format of the configuration file is as follows:

IMAGE_SPEC RBD_OPTS

Where IMAGE_SPEC specifies the POOL_NAME/IMAGE_NAME, or just the IMAGE_NAME, in which case the POOL_NAME defaults to rbd. RBD_OPTS is an optional list of options to be passed to the underlying rbd map command. These parameters and their values should be specified as a comma-separated string:

OPT1=VAL1,OPT2=VAL2,...,OPT_N=VAL_N

This will cause the script to issue an rbd map command like the following:

rbd map POOL_NAME/IMAGE_NAME --OPT1 VAL1 --OPT2 VAL2

Note

For options and values that contain commas or equality signs, enclose them in single quotation marks to prevent them from being interpreted as separators.

When successful, the rbd map operation maps the image to a /dev/rbdX device, at which point a udev rule is triggered to create a friendly device name symlink, for example, /dev/rbd/POOL_NAME/IMAGE_NAME, pointing to the real mapped device. For mounting or unmounting to succeed, the friendly device name must have a corresponding entry in the /etc/fstab file. When writing /etc/fstab entries for RBD images, it is a good idea to specify the noauto or nofail mount option. This prevents the init system from trying to mount the device too early, before the device exists.
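
For illustration, an /etc/fstab entry for the foo/bar1 image used in the next section, assuming a hypothetical /mnt/bar1 mount point and an XFS file system, might look like this:

/dev/rbd/foo/bar1    /mnt/bar1    xfs    noauto    0 0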

Additional Resources

  • See the rbd manpage for a full list of possible options.

2.14. Configuring the rbdmap service

Configure the rbdmap service to automatically map and mount RADOS Block Devices (RBD) at boot time, and to unmap and unmount them at shutdown.

Prerequisites

  • Root-level access to the node doing the mounting.
  • Installation of the ceph-common package.

Procedure

  1. Open the /etc/ceph/rbdmap configuration file for editing.
  2. Add the RBD image or images to the configuration file:

    Example

    foo/bar1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
    foo/bar2    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring,options='lock_on_read,queue_depth=1024'

  3. Save changes to the configuration file.
  4. Enable the RBD mapping service:

    Example

    [root@client ~]# systemctl enable rbdmap.service
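
    To map and mount the configured images immediately, without rebooting, you can also start the service:

    [root@client ~]# systemctl start rbdmap.service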

Additional Resources

  • See the The rbdmap service section of the Red Hat Ceph Storage Block Device Guide for more details on the RBD system service.

2.15. Monitoring performance of Ceph Block Devices using the command-line interface

Starting with Red Hat Ceph Storage 4.1, a performance metrics gathering framework is integrated within the Ceph OSD and Manager components. This framework provides a built-in method to generate and process performance metrics upon which other Ceph Block Device performance monitoring solutions are built.

A new Ceph Manager module, rbd_support, aggregates the performance metrics when enabled. The rbd command has two new actions: iotop and iostat.

Note

The initial use of these actions can take around 30 seconds to populate the data fields.

Prerequisites

  • User-level access to a Ceph Monitor node.

Procedure

  1. Enable the rbd_support Ceph Manager module:

    Example

    [user@mon ~]$ ceph mgr module enable rbd_support

  2. To display an iotop-style view of images:

    Example

    [user@mon ~]$ rbd perf image iotop

    Note

    The write-ops, read-ops, write-bytes, read-bytes, write-latency, and read-latency columns can be sorted dynamically by using the right and left arrow keys.

  3. To display an iostat-style view of images:

    Example

    [user@mon ~]$ rbd perf image iostat

    Note

    The output from this command can be in JSON or XML format, and can then be sorted using other command-line tools.
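
    For example, to emit the statistics in JSON format (assuming the global --format option applies to this command, as the note above suggests):

    [user@mon ~]$ rbd perf image iostat --format json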
