
Chapter 2. Ceph block devices


As a storage administrator, being familiar with Ceph’s block device commands can help you effectively manage the Red Hat Ceph Storage cluster. You can create and manage block devices pools and images, along with enabling and disabling the various features of Ceph block devices.

Prerequisites

  • A running Red Hat Ceph Storage cluster.

2.1. Displaying the command help

Display online help for commands and their subcommands from the command-line interface.

Note

The -h option still displays help for all available commands.
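
For example, running the rbd command with only the -h option prints a summary of every available command:

Example

[root@rbd-client ~]# rbd -h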

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. Use the rbd help command to display help for a particular rbd command and its subcommand:

    Syntax

    rbd help COMMAND SUBCOMMAND

  2. To display help for the snap list command:

    [root@rbd-client ~]# rbd help snap list

2.2. Creating a block device pool

Before using the block device client, ensure that a pool for rbd exists and that it is enabled and initialized.

Note

You must create a pool before you can specify it as a source.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To create an rbd pool, execute the following:

    Syntax

    ceph osd pool create POOL_NAME PG_NUM
    ceph osd pool application enable POOL_NAME rbd
    rbd pool init -p POOL_NAME

    Example

    [root@rbd-client ~]# ceph osd pool create pool1
    [root@rbd-client ~]# ceph osd pool application enable pool1 rbd
    [root@rbd-client ~]# rbd pool init -p pool1
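
    To verify that the pool was created and that the rbd application is enabled on it (an optional check that is not part of the original procedure), you can list the pools and query the pool's application:

    Example

    [root@rbd-client ~]# ceph osd lspools
    [root@rbd-client ~]# ceph osd pool application get pool1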

Additional Resources

  • See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for additional details.

2.3. Creating a block device image

Before adding a block device to a node, create an image for it in the Ceph storage cluster.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To create a block device image, execute the following command:

    Syntax

    rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME

    Example

    [root@rbd-client ~]# rbd create image1 --size 1024 --pool pool1

    This example creates a 1 GB image named image1 that stores information in a pool named pool1.

    Note

    Ensure the pool exists before creating an image.

2.4. Listing the block device images

List the block device images.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To list block devices in the rbd pool, execute the following command:

    Note

    rbd is the default pool name.

    Example

    [root@rbd-client ~]# rbd ls

  2. To list block devices in a specific pool:

    Syntax

    rbd ls POOL_NAME

    Example

    [root@rbd-client ~]# rbd ls pool1
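
    To also show the size and format of each image, a long listing can be used (a hedged addition that assumes the -l option of rbd ls):

    Example

    [root@rbd-client ~]# rbd ls -l pool1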

2.5. Retrieving the block device image information

Retrieve information on the block device image.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To retrieve information from a particular image in the default rbd pool, run the following command:

    Syntax

    rbd --image IMAGE_NAME info

    Example

    [root@rbd-client ~]# rbd --image image1 info

  2. To retrieve information from an image within a pool:

    Syntax

    rbd --image IMAGE_NAME -p POOL_NAME info

    Example

    [root@rbd-client ~]# rbd --image image1 -p pool1 info

2.6. Resizing a block device image

Ceph block device images are thin-provisioned. They do not actually use any physical storage until you begin saving data to them. However, they do have a maximum capacity that you set with the --size option.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To increase or decrease the maximum size of a Ceph block device image for the default rbd pool:

    Syntax

    rbd resize --image IMAGE_NAME --size SIZE

    Example

    [root@rbd-client ~]# rbd resize --image image1 --size 1024

  2. To increase or decrease the maximum size of a Ceph block device image for a specific pool:

    Syntax

    rbd resize --image POOL_NAME/IMAGE_NAME --size SIZE

    Example

    [root@rbd-client ~]# rbd resize --image pool1/image1 --size 1024
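
    When decreasing the size of an image, rbd requires explicit confirmation that shrinking the image is intended; a hedged example that assumes the --allow-shrink option:

    Example

    [root@rbd-client ~]# rbd resize --image pool1/image1 --size 512 --allow-shrink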

2.7. Removing a block device image

Remove a block device image.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To remove a block device from the default rbd pool:

    Syntax

    rbd rm IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd rm image1

  2. To remove a block device from a specific pool:

    Syntax

    rbd rm IMAGE_NAME -p POOL_NAME

    Example

    [root@rbd-client ~]# rbd rm image1 -p pool1

2.8. Moving a block device image to the trash

RADOS Block Device (RBD) images can be moved to the trash using the rbd trash command. This command provides more options than the rbd rm command.

Once an image is moved to the trash, it can be removed from the trash at a later time. This helps to avoid accidental deletion.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To move an image to the trash, execute the following:

    Syntax

    rbd trash mv [POOL_NAME/]IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd trash mv pool1/image1

    Once an image is in the trash, a unique image ID is assigned.

    Note

    You need this image ID to specify the image later if you need to use any of the trash options.

  2. Execute the rbd trash list POOL_NAME command to list the IDs of the images in the trash. This command also returns the image’s pre-deletion name. In addition, the optional --image-id argument can be used with the rbd info and rbd snap commands. Use --image-id with the rbd info command to see the properties of an image in the trash, and with rbd snap to remove the snapshots of an image in the trash.
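
    For example, listing the trash for the pool1 pool could return output similar to the following (illustrative output):

    Example

    [root@rbd-client ~]# rbd trash list pool1
    d35ed01706a0 image1
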
  3. To remove an image from the trash, execute the following:

    Syntax

    rbd trash rm [POOL_NAME/]IMAGE_ID

    Example

    [root@rbd-client ~]# rbd trash rm pool1/d35ed01706a0

    Important

    Once an image is removed from the trash, it cannot be restored.

  4. Execute the rbd trash restore command to restore the image:

    Syntax

    rbd trash restore [POOL_NAME/]IMAGE_ID

    Example

    [root@rbd-client ~]# rbd trash restore pool1/d35ed01706a0

  5. To remove all expired images from trash:

    Syntax

    rbd trash purge POOL_NAME

    Example

    [root@rbd-client ~]# rbd trash purge pool1
    Removing images: 100% complete...done.

2.9. Defining an automatic trash purge schedule

You can schedule periodic trash purge operations on a pool.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To add a trash purge schedule, execute:

    Syntax

    rbd trash purge schedule add --pool POOL_NAME INTERVAL

    Example

    [ceph: root@host01 /]# rbd trash purge schedule add --pool pool1 10m

  2. To list the trash purge schedule, execute:

    Syntax

    rbd trash purge schedule ls --pool POOL_NAME

    Example

    [ceph: root@host01 /]# rbd trash purge schedule ls --pool pool1
    every 10m

  3. To view the status of the trash purge schedule, execute:

    Example

    [ceph: root@host01 /]# rbd trash purge schedule status
    POOL   NAMESPACE   SCHEDULE  TIME
    pool1             2021-08-02 11:50:00

  4. To remove the trash purge schedule, execute:

    Syntax

    rbd trash purge schedule remove --pool POOL_NAME INTERVAL

    Example

    [ceph: root@host01 /]# rbd trash purge schedule remove --pool pool1 10m

2.10. Enabling and disabling image features

Block device image features, such as fast-diff, exclusive-lock, object-map, and deep-flatten, are enabled by default. You can enable or disable these image features on already existing images.

Note

The deep-flatten feature can only be disabled on already existing images, not enabled. To use deep-flatten, enable it when creating images.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. Retrieve information from a particular image in a pool:

    Syntax

    rbd --image POOL_NAME/IMAGE_NAME info

    Example

    [ceph: root@host01 /]# rbd --image pool1/image1 info

  2. Enable a feature:

    Syntax

    rbd feature enable POOL_NAME/IMAGE_NAME FEATURE_NAME

    1. To enable the exclusive-lock feature on the image1 image in the pool1 pool:

      Example

      [ceph: root@host01 /]# rbd feature enable pool1/image1 exclusive-lock

      Important

      If you enable the fast-diff and object-map features, then rebuild the object map:

      Syntax

      rbd object-map rebuild POOL_NAME/IMAGE_NAME
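
      For example, to rebuild the object map for the image1 image in the pool1 pool (an illustrative command following the syntax above):

      Example

      [ceph: root@host01 /]# rbd object-map rebuild pool1/image1
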
  3. Disable a feature:

    Syntax

    rbd feature disable POOL_NAME/IMAGE_NAME FEATURE_NAME

    1. To disable the fast-diff feature on the image1 image in the pool1 pool:

      Example

      [ceph: root@host01 /]# rbd feature disable pool1/image1 fast-diff

2.11. Working with image metadata

Ceph supports adding custom image metadata as key-value pairs. The pairs do not have any strict format.

Also, by using metadata, you can set the RADOS Block Device (RBD) configuration parameters for particular images.

Use the rbd image-meta commands to work with metadata.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. To set a new metadata key-value pair:

    Syntax

    rbd image-meta set POOL_NAME/IMAGE_NAME KEY VALUE

    Example

    [ceph: root@host01 /]# rbd image-meta set pool1/image1 last_update 2021-06-06

    This example sets the last_update key to the 2021-06-06 value on the image1 image in the pool1 pool.

  2. To view a value of a key:

    Syntax

    rbd image-meta get POOL_NAME/IMAGE_NAME KEY

    Example

    [ceph: root@host01 /]# rbd image-meta get pool1/image1 last_update

    This example views the value of the last_update key.

  3. To show all metadata on an image:

    Syntax

    rbd image-meta list POOL_NAME/IMAGE_NAME

    Example

    [ceph: root@host01 /]# rbd image-meta list pool1/image1

    This example lists the metadata set for the image1 image in the pool1 pool.

  4. To remove a metadata key-value pair:

    Syntax

    rbd image-meta remove POOL_NAME/IMAGE_NAME KEY

    Example

    [ceph: root@host01 /]# rbd image-meta remove pool1/image1 last_update

    This example removes the last_update key-value pair from the image1 image in the pool1 pool.

  5. To override the RBD image configuration settings set in the Ceph configuration file for a particular image:

    Syntax

    rbd config image set POOL_NAME/IMAGE_NAME PARAMETER VALUE

    Example

    [ceph: root@host01 /]# rbd config image set pool1/image1 rbd_cache false

    This example disables the RBD cache for the image1 image in the pool1 pool.
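
    To inspect or remove such an override later, the rbd config image get and rbd config image remove commands can be used; a hedged example that assumes these subcommands are available in your release:

    Syntax

    rbd config image get POOL_NAME/IMAGE_NAME PARAMETER
    rbd config image remove POOL_NAME/IMAGE_NAME PARAMETER

    Example

    [ceph: root@host01 /]# rbd config image get pool1/image1 rbd_cache
    [ceph: root@host01 /]# rbd config image remove pool1/image1 rbd_cache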

Additional Resources

  • See the Block device general options section in the Red Hat Ceph Storage Block Device Guide for a list of possible configuration options.

2.12. Moving images between pools

You can move RADOS Block Device (RBD) images between different pools within the same cluster.

During this process, the source image is copied to the target image with all snapshot history, and optionally with a link to the source image’s parent to help preserve sparseness. The source image is read-only, and the target image is writable. The target image is linked to the source image while the migration is in progress.

You can safely run this process in the background while the new target image is in use. However, stop all clients using the source image before the preparation step to ensure that clients using the image are updated to point to the new target image.

Important

The krbd kernel module does not support live migration at this time.

Prerequisites

  • Stop all clients that use the source image.
  • Root-level access to the client node.

Procedure

  1. Prepare for migration by creating the new target image that cross-links the source and target images:

    Syntax

    rbd migration prepare SOURCE_IMAGE TARGET_IMAGE

    Replace:

    • SOURCE_IMAGE with the name of the image to be moved. Use the POOL/IMAGE_NAME format.
    • TARGET_IMAGE with the name of the new image. Use the POOL/IMAGE_NAME format.

    Example

    [root@rbd-client ~]# rbd migration prepare pool1/image1 pool2/image2

  2. Verify that the new target image is in the prepared state:

    Syntax

    rbd status TARGET_IMAGE

    Example

    [root@rbd-client ~]# rbd status pool2/image2
    Watchers: none
    Migration:
                source: pool1/image1 (5e2cba2f62e)
                destination: pool2/image2 (5e2ed95ed806)
                state: prepared

  3. Optionally, restart the clients using the new target image name.
  4. Copy the source image to the target image:

    Syntax

    rbd migration execute TARGET_IMAGE

    Example

    [root@rbd-client ~]# rbd migration execute pool2/image2

  5. Ensure that the migration is completed:

    Example

    [root@rbd-client ~]# rbd status pool2/image2
    Watchers:
        watcher=1.2.3.4:0/3695551461 client.123 cookie=123
    Migration:
                source: pool1/image1 (5e2cba2f62e)
                destination: pool2/image2 (5e2ed95ed806)
                state: executed

  6. Commit the migration. This removes the cross-link between the source and target images, and also removes the source image:

    Syntax

    rbd migration commit TARGET_IMAGE

    Example

    [root@rbd-client ~]# rbd migration commit pool2/image2

    If the source image is a parent of one or more clones, use the --force option after ensuring that the clone images are not in use:

    Example

    [root@rbd-client ~]# rbd migration commit pool2/image2 --force

  7. If you did not restart the clients after the preparation step, restart them using the new target image name.

2.13. Migrating pools

You can migrate or copy RADOS Block Device (RBD) images.

During this process, the source image is exported and then imported.

Important

Use this migration process if the workload contains only RBD images. No rados cppool images can exist in the workload. If rados cppool images exist in the workload, see Migrating a pool in the Storage Strategies Guide.

Important

While running the export and import commands, ensure that there is no active I/O on the related RBD images. It is recommended to take production down for the duration of this pool migration.

Prerequisites

  • Stop all active I/O in the RBD images which are being exported and imported.
  • Root-level access to the client node.

Procedure

  • Migrate the volume.

    Syntax

    rbd export volumes/VOLUME_NAME - | rbd import --image-format 2 - volumes_new/VOLUME_NAME

    Example

    [root@rbd-client ~]# rbd export volumes/volume-3c4c63e3-3208-436f-9585-fee4e2a3de16 - | rbd import --image-format 2 - volumes_new/volume-3c4c63e3-3208-436f-9585-fee4e2a3de16

  • If you need to use a local drive for the export and import, you can split the command: first export the image to a local file, and then import the file into the new pool.

    Syntax

    rbd export volumes/VOLUME_NAME FILE_PATH
    rbd import --image-format 2 FILE_PATH volumes_new/VOLUME_NAME

    Example

    [root@rbd-client ~]# rbd export volumes/volume-3c4c63e3-3208-436f-9585-fee4e2a3de16 FILE_PATH
    [root@rbd-client ~]# rbd import --image-format 2 FILE_PATH volumes_new/volume-3c4c63e3-3208-436f-9585-fee4e2a3de16

2.14. The rbdmap service

The systemd unit file, rbdmap.service, is included with the ceph-common package. The rbdmap.service unit executes the rbdmap shell script.

This script automates the mapping and unmapping of RADOS Block Devices (RBD) for one or more RBD images. The script can be run manually at any time, but the typical use case is to automatically mount RBD images at boot time, and unmount them at shutdown. The script takes a single argument, which can be either map, for mounting, or unmap, for unmounting RBD images. The script parses a configuration file, /etc/ceph/rbdmap by default, but the path can be overridden with an environment variable called RBDMAPFILE. Each line of the configuration file corresponds to an RBD image.

The format of the configuration file is as follows:

IMAGE_SPEC RBD_OPTS

Where IMAGE_SPEC specifies the POOL_NAME/IMAGE_NAME, or just the IMAGE_NAME, in which case the POOL_NAME defaults to rbd. RBD_OPTS is an optional list of options to be passed to the underlying rbd map command. These parameters and their values should be specified as a comma-separated string:

OPT1=VAL1,OPT2=VAL2,...,OPT_N=VAL_N

This will cause the script to issue an rbd map command like the following:

Syntax

rbd map POOLNAME/IMAGE_NAME --OPT1 VAL1 --OPT2 VAL2
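
For example, a configuration line such as foo/bar1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring results in a command similar to the following (illustrative):

Example

rbd map foo/bar1 --id admin --keyring /etc/ceph/ceph.client.admin.keyring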

Note

For options and values that contain commas or equality signs, use single quotation marks to prevent them from being interpreted as separators.

When successful, the rbd map operation maps the image to a /dev/rbdX device, at which point a udev rule is triggered to create a friendly device name symlink, for example, /dev/rbd/POOL_NAME/IMAGE_NAME, pointing to the real mapped device. For mounting or unmounting to succeed, the friendly device name must have a corresponding entry in the /etc/fstab file. When writing /etc/fstab entries for RBD images, it is a good idea to specify the noauto or nofail mount option. This prevents the init system from trying to mount the device too early, before the device exists.
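
A hedged example of such an /etc/fstab entry, assuming a hypothetical image foo/bar1 that is mapped to /dev/rbd/foo/bar1 and mounted on /mnt/bar1 with an xfs file system:

Example

/dev/rbd/foo/bar1    /mnt/bar1    xfs    noauto    0 0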

Additional Resources

  • See the rbd manpage for a full list of possible options.

2.15. Configuring the rbdmap service

Configure the rbdmap service to automatically map and mount RADOS Block Devices (RBD) at boot time, and to unmap and unmount them at shutdown.

Prerequisites

  • Root-level access to the node doing the mounting.
  • Installation of the ceph-common package.

Procedure

  1. Open the /etc/ceph/rbdmap configuration file for editing.
  2. Add the RBD image or images to the configuration file:

    Example

    foo/bar1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
    foo/bar2    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring,options='lock_on_read,queue_depth=1024'

  3. Save changes to the configuration file.
  4. Enable the RBD mapping service:

    Example

    [root@client ~]# systemctl enable rbdmap.service
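
    Optionally, to map and mount the configured images immediately instead of waiting for the next boot (not part of the original procedure), start the service as well:

    Example

    [root@client ~]# systemctl start rbdmap.service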

Additional Resources

  • See The rbdmap service section of the Red Hat Ceph Storage Block Device Guide for more details on the RBD system service.

2.16. Persistent Write Log Cache

In a Red Hat Ceph Storage cluster, Persistent Write Log (PWL) cache provides a persistent, fault-tolerant write-back cache for librbd-based RBD clients.

PWL cache uses a log-ordered write-back design which maintains checkpoints internally so that writes that get flushed back to the cluster are always crash consistent. If the client cache is lost entirely, the disk image is still consistent but the data appears stale. You can use PWL cache with persistent memory (PMEM) or solid-state disks (SSD) as cache devices.

For PMEM, the cache mode is called replica write log (RWL), and for SSD, the cache mode is called SSD. Currently, PWL cache supports the RWL and SSD modes and is disabled by default.

Primary benefits of PWL cache are:

  • PWL cache can provide high performance when the cache is not full. The larger the cache, the longer the duration of high performance.
  • PWL cache provides persistence and is not much slower than RBD cache. RBD cache is faster but volatile and cannot guarantee data order and persistence.
  • In a steady state, where the cache is full, performance is affected by the number of I/Os in flight. For example, PWL cache can provide higher performance at low io_depth, but at high io_depth, such as when the number of I/Os is greater than 32, performance is often worse than without the cache.

Use cases for PMEM caching are:

  • Different from RBD cache, PWL cache is non-volatile and is used in scenarios where you need performance and do not want data loss.
  • RWL mode provides low latency. It has a stable low latency for burst I/Os and is suitable for scenarios with high requirements for stable low latency.
  • RWL mode also provides a continuous and stable performance improvement in scenarios with low I/O depth or a limited amount of in-flight I/O.

Use case for SSD caching is:

  • The advantages of SSD mode are similar to those of RWL mode. SSD hardware is relatively inexpensive and widely available, but its performance is slightly lower than PMEM.

2.17. Persistent write log cache limitations

When using Persistent Write Log (PWL) cache, there are several limitations that should be considered.

  • The underlying implementation of persistent memory (PMEM) and solid-state disks (SSD) is different, with PMEM having higher performance. At present, PMEM can provide "persist on write" and SSD is "persist on flush or checkpoint". In future releases, these two modes will be configurable.
  • When users switch frequently and open and close images repeatedly, Ceph shows poor performance. With PWL cache enabled, the performance is even worse. It is not recommended to set num_jobs in a Flexible I/O (fio) test; instead, set up multiple jobs that write to different images.

2.18. Enabling persistent write log cache

You can enable persistent write log cache (PWL) on a Red Hat Ceph Storage cluster by setting the Ceph RADOS block device (RBD) rbd_persistent_cache_mode and rbd_plugins options.

Important

The exclusive-lock feature must be enabled to enable persistent write log cache. The cache can be loaded only after the exclusive-lock is acquired. Exclusive-locks are enabled on newly created images by default unless overridden by the rbd_default_features configuration option or the --image-feature flag for the rbd create command. See the Enabling and disabling image features section for more details on the exclusive-lock feature.

Set the persistent write log cache options at the host level by using the ceph config set command. Set the persistent write log cache options at the pool or image level by using the rbd config pool set or the rbd config image set commands.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the monitor node.
  • The exclusive-lock feature is enabled.
  • Client-side disks are persistent memory (PMEM) or solid-state disks (SSD).
  • RBD cache is disabled.

Procedure

  1. Enable PWL cache:

    1. At the host level, use the ceph config set command:

      Syntax

      ceph config set client rbd_persistent_cache_mode CACHE_MODE
      ceph config set client rbd_plugins pwl_cache

      Replace CACHE_MODE with rwl or ssd.

      Example

      [ceph: root@host01 /]# ceph config set client rbd_persistent_cache_mode ssd
      [ceph: root@host01 /]# ceph config set client rbd_plugins pwl_cache

    2. At the pool level, use the rbd config pool set command:

      Syntax

      rbd config pool set POOL_NAME rbd_persistent_cache_mode CACHE_MODE
      rbd config pool set POOL_NAME rbd_plugins pwl_cache

      Replace CACHE_MODE with rwl or ssd.

      Example

      [ceph: root@host01 /]# rbd config pool set pool1 rbd_persistent_cache_mode ssd
      [ceph: root@host01 /]# rbd config pool set pool1 rbd_plugins pwl_cache

    3. At the image level, use the rbd config image set command:

      Syntax

      rbd config image set POOL_NAME/IMAGE_NAME rbd_persistent_cache_mode CACHE_MODE
      rbd config image set POOL_NAME/IMAGE_NAME rbd_plugins pwl_cache

      Replace CACHE_MODE with rwl or ssd.

      Example

      [ceph: root@host01 /]# rbd config image set pool1/image1 rbd_persistent_cache_mode ssd
      [ceph: root@host01 /]# rbd config image set pool1/image1 rbd_plugins pwl_cache

  2. Optional: Set the additional RBD options at the host, the pool, or the image level:

    Syntax

    rbd_persistent_cache_mode CACHE_MODE
    rbd_plugins pwl_cache
    rbd_persistent_cache_path /PATH_TO_CACHE_DIRECTORY
    rbd_persistent_cache_size PERSISTENT_CACHE_SIZE

    • rbd_persistent_cache_path - A file folder to cache data that must have direct access (DAX) enabled when using the rwl mode to avoid performance degradation.
    • rbd_persistent_cache_size - The cache size per image, with a minimum cache size of 1 GB. The larger the cache size, the better the performance.
  1. Setting additional RBD options for rwl mode:

    Example

    rbd_cache false
    rbd_persistent_cache_mode rwl
    rbd_plugins pwl_cache
    rbd_persistent_cache_path /mnt/pmem/cache/
    rbd_persistent_cache_size 1073741824

  2. Setting additional RBD options for ssd mode:

    Example

    rbd_cache false
    rbd_persistent_cache_mode ssd
    rbd_plugins pwl_cache
    rbd_persistent_cache_path /mnt/nvme/cache
    rbd_persistent_cache_size 1073741824

2.19. Checking persistent write log cache status

You can check the status of the Persistent Write Log (PWL) cache. The cache is used when an exclusive lock is acquired, and when the exclusive-lock is released, the persistent write log cache is closed. The cache status shows information about the cache size, location, type, and other cache-related information. Updates to the cache status are done when the cache is opened and closed.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the monitor node.
  • A running process with PWL cache enabled.

Procedure

  • View the PWL cache status:

    Syntax

    rbd status POOL_NAME/IMAGE_NAME

    Example

    [ceph: root@host01 /]# rbd status pool1/image1
    Watchers:
            watcher=10.10.0.102:0/1061883624 client.25496 cookie=140338056493088
    Persistent cache state:
            host: host02
            path: /mnt/nvme0/rbd-pwl.rbd.101e5824ad9a.pool
            size: 1 GiB
            mode: ssd
            stats_timestamp: Mon Apr 18 13:26:32 2022
            present: true   empty: false    clean: false
            allocated: 509 MiB
            cached: 501 MiB
            dirty: 338 MiB
            free: 515 MiB
            hits_full: 1450 / 61%
            hits_partial: 0 / 0%
            misses: 924
            hit_bytes: 192 MiB / 66%
            miss_bytes: 97 MiB

2.20. Flushing persistent write log cache

You can flush the cache file with the rbd command, specifying persistent-cache flush, the pool name, and the image name, before discarding the persistent write log (PWL) cache. The flush command explicitly writes cache files back to the OSDs. If there is a cache interruption or the application dies unexpectedly, all the entries in the cache must be flushed to the OSDs; you can manually flush the data and then invalidate the cache.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the monitor node.
  • PWL cache is enabled.

Procedure

  • Flush the PWL cache:

    Syntax

    rbd persistent-cache flush POOL_NAME/IMAGE_NAME

    Example

    [ceph: root@host01 /]# rbd persistent-cache flush pool1/image1

2.21. Discarding persistent write log cache

You might need to manually discard the Persistent Write Log (PWL) cache, for example, if the data in the cache has expired. You can discard a cache file for an image by using the rbd persistent-cache invalidate command. The command removes the cache metadata for the specified image, disables the cache feature, and deletes the local cache file, if it exists.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the monitor node.
  • PWL cache is enabled.

Procedure

  • Discard PWL cache:

    Syntax

    rbd persistent-cache invalidate POOL_NAME/IMAGE_NAME

    Example

    [ceph: root@host01 /]# rbd persistent-cache invalidate pool1/image1

2.22. Monitoring performance of Ceph Block Devices using the command-line interface

Starting with Red Hat Ceph Storage 4.1, a performance metrics gathering framework is integrated within the Ceph OSD and Manager components. This framework provides a built-in method to generate and process performance metrics upon which other Ceph Block Device performance monitoring solutions are built.

A new Ceph Manager module, rbd_support, aggregates the performance metrics when enabled. The rbd command has two new actions: iotop and iostat.

Note

The initial use of these actions can take around 30 seconds to populate the data fields.

Prerequisites

  • User-level access to a Ceph Monitor node.

Procedure

  1. Ensure the rbd_support Ceph Manager module is enabled:

    Example

    [ceph: root@host01 /]# ceph mgr module ls

    {
        "always_on_modules": [
            "balancer",
            "crash",
            "devicehealth",
            "orchestrator",
            "pg_autoscaler",
            "progress",
            "rbd_support",   <--
            "status",
            "telemetry",
            "volumes"
        ]
    }

  2. To display an "iotop"-style of images:

    Example

    [user@mon ~]$ rbd perf image iotop

    Note

    The write-ops, read-ops, write-bytes, read-bytes, write-latency, and read-latency columns can be sorted dynamically by using the right and left arrow keys.

  3. To display an "iostat"-style of images:

    Example

    [user@mon ~]$ rbd perf image iostat

    Note

    The output from this command can be in JSON or XML format, and then can be sorted using other command-line tools.
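
    For example, to emit the statistics in JSON format (a hedged example that assumes the standard rbd --format option applies to this command):

    Example

    [user@mon ~]$ rbd perf image iostat --format json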
