30.4. Administering VDO
30.4.1. Starting or Stopping VDO
To start a given VDO volume, or all VDO volumes, and the associated UDS index(es), storage management utilities should invoke one of these commands:
# vdo start --name=my_vdo
# vdo start --all
The VDO systemd unit is installed and enabled by default when the vdo package is installed. This unit automatically runs the vdo start --all command at system startup to bring up all activated VDO volumes. See Section 30.4.6, “Automatically Starting VDO Volumes at System Boot” for more information.
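You can verify that the unit is present and enabled with standard systemctl commands. For example (a sketch; the unit shipped by the vdo package is normally named vdo.service):
# systemctl status vdo.service
# systemctl is-enabled vdo.service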
To stop a given VDO volume, or all VDO volumes, and the associated UDS index(es), use one of these commands:
# vdo stop --name=my_vdo
# vdo stop --all
Stopping a VDO volume takes time based on the speed of your storage device and the amount of data that the volume needs to write:
- The volume always writes around 1GiB for every 1GiB of the UDS index.
- With a sparse UDS index, the volume additionally writes the amount of data equal to the block map cache size plus up to 8MiB per slab.
If restarted after an unclean shutdown, VDO will perform a rebuild to verify the consistency of its metadata and will repair it if necessary. Rebuilds are automatic and do not require user intervention. See Section 30.4.5, “Recovering a VDO Volume After an Unclean Shutdown” for more information on the rebuild process.
VDO might rebuild different writes depending on the write mode:
- In synchronous mode, all writes that were acknowledged by VDO prior to the shutdown will be rebuilt.
- In asynchronous mode, all writes that were acknowledged prior to the last acknowledged flush request will be rebuilt.
In either mode, some writes that were either unacknowledged or not followed by a flush may also be rebuilt.
For details on VDO write modes, see Section 30.4.2, “Selecting VDO Write Modes”.
30.4.2. Selecting VDO Write Modes
VDO supports three write modes, sync, async, and auto:
- When VDO is in sync mode, the layers above it assume that a write command writes data to persistent storage. As a result, it is not necessary for the file system or application, for example, to issue FLUSH or Force Unit Access (FUA) requests to cause the data to become persistent at critical points.
VDO must be set to sync mode only when the underlying storage guarantees that data is written to persistent storage when the write command completes. That is, the storage must either have no volatile write cache, or have a write through cache.
- When VDO is in async mode, the data is not guaranteed to be written to persistent storage when a write command is acknowledged. The file system or application must issue FLUSH or FUA requests to ensure data persistence at critical points in each transaction.
VDO must be set to async mode if the underlying storage does not guarantee that data is written to persistent storage when the write command completes; that is, when the storage has a volatile write back cache.
For information on how to find out if a device uses volatile cache or not, see the section called “Checking for a Volatile Cache”.
Warning
When VDO is running in async mode, it is not compliant with Atomicity, Consistency, Isolation, Durability (ACID). When there is an application or a file system that assumes ACID compliance on top of the VDO volume, async mode might cause unexpected data loss.
- The auto mode automatically selects sync or async based on the characteristics of each device. This is the default option.
For a more detailed theoretical overview of how write policies operate, see the section called “Overview of VDO Write Policies”.
To set a write policy, use the --writePolicy option. This can be specified either when creating a VDO volume as in Section 30.3.3, “Creating a VDO Volume” or when modifying an existing VDO volume with the changeWritePolicy subcommand:
# vdo changeWritePolicy --writePolicy=sync|async|auto --name=vdo_name
Important
Using the incorrect write policy might result in data loss on power failure.
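To check which write policy an existing volume is currently using, you can inspect the output of the vdo status subcommand. For example (a sketch; the exact field label may vary between VDO versions):
# vdo status --name=my_vdo | grep -i "write policy"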
Checking for a Volatile Cache
To see whether a device has a writeback cache, read the /sys/block/block_device/device/scsi_disk/identifier/cache_type sysfs file. For example:
- Device sda indicates that it has a writeback cache:
$ cat '/sys/block/sda/device/scsi_disk/7:0:0:0/cache_type'
write back
- Device sdb indicates that it does not have a writeback cache:
$ cat '/sys/block/sdb/device/scsi_disk/1:2:0:0/cache_type'
None
Additionally, in the kernel boot log, you can find whether the above mentioned devices have a write cache or not:
sd 7:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 1:2:0:0: [sdb] Write cache: disabled, read cache: disabled, supports DPO and FUA
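For example, you can search the kernel messages for the cache lines shown above with a command similar to the following (the exact wording of the log line depends on the device driver):
# dmesg | grep -i "write cache"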
See the Viewing and Managing Log Files chapter in the System Administrator's Guide for more information on reading the system log.
In these examples, use the following write policies for VDO:
- async mode for the sda device
- sync mode for the sdb device
Note
You should configure VDO to use the sync write policy if the cache_type value is none or write through.
30.4.3. Removing VDO Volumes
A VDO volume can be removed from the system by running:
# vdo remove --name=my_vdo
Prior to removing a VDO volume, unmount file systems and stop applications that are using the storage. The vdo remove command removes the VDO volume and its associated UDS index, as well as logical volumes where they reside.
30.4.3.1. Removing an Unsuccessfully Created Volume
If a failure occurs when the vdo utility is creating a VDO volume, the volume is left in an intermediate state. This might happen when, for example, the system crashes, power fails, or the administrator interrupts a running vdo create command.
To clean up from this situation, remove the unsuccessfully created volume with the --force option:
# vdo remove --force --name=my_vdo
The --force option is required because the administrator might have caused a conflict by changing the system configuration since the volume was unsuccessfully created. Without the --force option, the vdo remove command fails with the following message:
[...]
A previous operation failed.
Recovery from the failure either failed or was interrupted.
Add '--force' to 'remove' to perform the following cleanup.
Steps to clean up VDO my_vdo:
umount -f /dev/mapper/my_vdo
udevadm settle
dmsetup remove my_vdo
vdo: ERROR - VDO volume my_vdo previous operation (create) is incomplete
30.4.4. Configuring the UDS Index
VDO uses a high-performance deduplication index called UDS to detect duplicate blocks of data as they are being stored. The deduplication window is the number of previously written blocks which the index remembers. The size of the deduplication window is configurable. For a given window size, the index requires a specific amount of RAM and a specific amount of disk space. The size of the window is usually determined by specifying the size of the index memory using the --indexMem=size option. The amount of disk space to use is then determined automatically.
In general, Red Hat recommends using a sparse UDS index for all production use cases. This is an extremely efficient indexing data structure, requiring approximately one-tenth of a byte of DRAM per block in its deduplication window. On disk, it requires approximately 72 bytes of disk space per block. The minimum configuration of this index uses 256 MB of DRAM and approximately 25 GB of space on disk. To use this configuration, specify the --sparseIndex=enabled --indexMem=0.25 options to the vdo create command. This configuration results in a deduplication window of 2.5 TB (meaning it will remember a history of 2.5 TB). For most use cases, a deduplication window of 2.5 TB is appropriate for deduplicating storage pools that are up to 10 TB in size.
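For example, a volume using this minimum sparse index configuration could be created with a command similar to the following (a sketch; the device path and volume name are placeholders):
# vdo create --name=my_vdo --device=/dev/sdX --sparseIndex=enabled --indexMem=0.25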
The default configuration of the index, however, is to use a dense index. This index is considerably less efficient (by a factor of 10) in DRAM, but it has much lower (also by a factor of 10) minimum required disk space, making it more convenient for evaluation in constrained environments.
In general, a deduplication window which is one quarter of the physical size of a VDO volume is a recommended configuration. However, this is not an actual requirement. Even small deduplication windows (compared to the amount of physical storage) can find significant amounts of duplicate data in many use cases. Larger windows may also be used, but in most cases, there will be little additional benefit to doing so.
Speak with your Red Hat Technical Account Manager representative for additional guidelines on tuning this important system parameter.
30.4.5. Recovering a VDO Volume After an Unclean Shutdown
If a volume is restarted without having been shut down cleanly, VDO will need to rebuild a portion of its metadata to continue operating, which occurs automatically when the volume is started. (Also see Section 30.4.5.2, “Forcing a Rebuild” to invoke this process on a volume that was cleanly shut down.)
Data recovery depends on the write policy of the device:
- If VDO was running on synchronous storage and the write policy was set to sync, then all data written to the volume will be fully recovered.
- If the write policy was async, then some writes may not be recovered if they were not made durable by sending VDO a FLUSH command, or a write I/O tagged with the FUA flag (force unit access). This is accomplished from user mode by invoking a data integrity operation like fsync, fdatasync, sync, or umount (see the example after this list).
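For example, after an application finishes writing critical data to a file system on a volume running in async mode, an administrator can run the sync command to flush the data and the required flush requests down to VDO before relying on that data surviving an unclean shutdown:
# sync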
30.4.5.1. Online Recovery
In the majority of cases, most of the work of rebuilding an unclean VDO volume can be done after the VDO volume has come back online and while it is servicing read and write requests. Initially, the amount of space available for write requests may be limited. As more of the volume's metadata is recovered, more free space may become available. Furthermore, data written while the VDO is recovering may fail to deduplicate against data written before the crash if that data is in a portion of the volume which has not yet been recovered. Data may be compressed while the volume is being recovered. Previously compressed blocks may still be read or overwritten.
During an online recovery, a number of statistics will be unavailable: for example, blocks in use and blocks free. These statistics will become available once the rebuild is complete.
30.4.5.2. Forcing a Rebuild
VDO can recover from most hardware and software errors. If a VDO volume cannot be recovered successfully, it is placed in a read-only mode that persists across volume restarts. Once a volume is in read-only mode, there is no guarantee that data has not been lost or corrupted. In such cases, Red Hat recommends copying the data out of the read-only volume and possibly restoring the volume from backup. (The operating mode attribute of vdostats indicates whether a VDO volume is in read-only mode.)
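For example, you can check the attribute with a command similar to the following (a sketch; the exact field label may vary between VDO versions):
# vdostats --verbose my_vdo | grep -i "operating mode"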
If the risk of data corruption is acceptable, it is possible to force an offline rebuild of the VDO volume metadata so the volume can be brought back online and made available. Again, the integrity of the rebuilt data cannot be guaranteed.
To force a rebuild of a read-only VDO volume, first stop the volume if it is running:
# vdo stop --name=my_vdo
Then restart the volume using the --forceRebuild option:
# vdo start --name=my_vdo --forceRebuild
30.4.6. Automatically Starting VDO Volumes at System Boot
During system boot, the vdo systemd unit automatically starts all VDO devices that are configured as activated.
To prevent certain existing volumes from being started automatically, deactivate those volumes by running either of these commands:
- To deactivate a specific volume:
# vdo deactivate --name=my_vdo
- To deactivate all volumes:
# vdo deactivate --all
Conversely, to activate volumes, use one of these commands:
- To activate a specific volume:
# vdo activate --name=my_vdo
- To activate all volumes:
# vdo activate --all
You can also create a VDO volume that does not start automatically by adding the --activate=disabled option to the vdo create command.
For systems that place LVM volumes on top of VDO volumes as well as beneath them (for example, Figure 30.5, “Deduplicated Unified Storage”), it is vital to start services in the right order:
- The lower layer of LVM must be started first (in most systems, starting this layer is configured automatically when the LVM2 package is installed).
- The vdo systemd unit must then be started.
- Finally, additional scripts must be run in order to start LVM volumes or other services on top of the now running VDO volumes (see the sketch after this list).
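One way to express this ordering is a systemd drop-in for the unit that brings up the upper LVM layer. The following is a minimal sketch; the unit name my_vdo_lvm.service is hypothetical, while vdo.service is the unit installed by the vdo package:
# mkdir -p /etc/systemd/system/my_vdo_lvm.service.d
# cat > /etc/systemd/system/my_vdo_lvm.service.d/order.conf <<'EOF'
[Unit]
After=vdo.service
Requires=vdo.service
EOF
# systemctl daemon-reload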
30.4.7. Disabling and Re-enabling Deduplication
In some instances, it may be desirable to temporarily disable deduplication of data being written to a VDO volume while still retaining the ability to read from and write to the volume. While disabling deduplication will prevent subsequent writes from being deduplicated, data which was already deduplicated will remain so.
- To stop deduplication on a VDO volume, use the following command:
# vdo disableDeduplication --name=my_vdo
This stops the associated UDS index and informs the VDO volume that deduplication is no longer active.
- To restart deduplication on a VDO volume, use the following command:
# vdo enableDeduplication --name=my_vdo
This restarts the associated UDS index and informs the VDO volume that deduplication is active again.
You can also disable deduplication when creating a new VDO volume by adding the --deduplication=disabled option to the vdo create command.
30.4.8. Using Compression
30.4.8.1. Introduction
In addition to block-level deduplication, VDO also provides inline block-level compression using the HIOPS Compression™ technology. While deduplication is the optimal solution for virtual machine environments and backup applications, compression works very well with structured and unstructured file formats that do not typically exhibit block-level redundancy, such as log files and databases.
Compression operates on blocks that have not been identified as duplicates. When unique data is seen for the first time, it is compressed. Subsequent copies of data that have already been stored are deduplicated without requiring an additional compression step. The compression feature is based on a parallelized packaging algorithm that enables it to handle many compression operations at once. After first storing the block and responding to the requestor, a best-fit packing algorithm finds multiple blocks that, when compressed, can fit into a single physical block. After it is determined that a particular physical block is unlikely to hold additional compressed blocks, it is written to storage and the uncompressed blocks are freed and reused. By performing the compression and packaging operations after having already responded to the requestor, using compression imposes a minimal latency penalty.
30.4.8.2. Enabling and Disabling Compression
VDO volume compression is on by default.
When creating a volume, you can disable compression by adding the --compression=disabled option to the vdo create command.
Compression can be stopped on an existing VDO volume if necessary to maximize performance or to speed processing of data that is unlikely to compress.
- To stop compression on a VDO volume, use the following command:
# vdo disableCompression --name=my_vdo
- To start it again, use the following command:
# vdo enableCompression --name=my_vdo
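To see whether compression is contributing to space savings on a running volume, you can inspect its detailed statistics. For example (a sketch; the exact field names depend on the VDO version):
# vdostats --verbose my_vdo | grep -i compress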
30.4.9. Managing Free Space
Because VDO is a thinly provisioned block storage target, the amount of physical space VDO uses may differ from the size of the volume presented to users of the storage. Integrators and systems administrators can exploit this disparity to save on storage costs but must take care to avoid unexpectedly running out of storage space if the data written does not achieve the expected rate of deduplication.
Whenever the number of logical blocks (virtual storage) exceeds the number of physical blocks (actual storage), it becomes possible for file systems and applications to unexpectedly run out of space. For that reason, storage systems using VDO must provide storage administrators with a way of monitoring the size of the VDO's free pool. The size of this free pool may be determined by using the vdostats utility; see Section 30.7.2, “vdostats” for details. The default output of this utility lists information for all running VDO volumes in a format similar to the Linux df utility. For example:
Device               1K-blocks   Used        Available   Use%
/dev/mapper/my_vdo   211812352   105906176   105906176   50%
When the physical storage capacity of a VDO volume is almost full, VDO reports a warning in the system log, similar to the following:
Oct 2 17:13:39 system lvm[13863]: Monitoring VDO pool my_vdo.
Oct 2 17:27:39 system lvm[13863]: WARNING: VDO pool my_vdo is now 80.69% full.
Oct 2 17:28:19 system lvm[13863]: WARNING: VDO pool my_vdo is now 85.25% full.
Oct 2 17:29:39 system lvm[13863]: WARNING: VDO pool my_vdo is now 90.64% full.
Oct 2 17:30:29 system lvm[13863]: WARNING: VDO pool my_vdo is now 96.07% full.
If the size of VDO's free pool drops below a certain level, the storage administrator can take action by deleting data (which will reclaim space whenever the deleted data is not duplicated), adding physical storage, or even deleting LUNs.
Important
Monitor physical space on your VDO volumes to prevent out-of-space situations. Running out of physical blocks might result in losing recently written, unacknowledged data on the VDO volume.
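As a simple monitoring aid, a periodic job (for example, run from cron) can parse the default vdostats output shown above and warn when usage crosses a threshold. The following is a minimal sketch that assumes the df-style column layout shown earlier; adapt the threshold and the alerting mechanism to your environment:
#!/bin/sh
# Warn when any running VDO volume exceeds the physical usage threshold.
THRESHOLD=80
vdostats | awk -v t="$THRESHOLD" 'NR > 1 {
    use = $5; sub(/%/, "", use)
    if (use + 0 >= t) printf "WARNING: %s is %s%% full\n", $1, use
}'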
Reclaiming Space on File Systems
VDO cannot reclaim space unless file systems communicate that blocks are free using DISCARD, TRIM, or UNMAP commands. For file systems that do not use DISCARD, TRIM, or UNMAP, free space may be manually reclaimed by storing a file consisting of binary zeros and then deleting that file.
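For example, on a file system that does not support discard, the zero-fill approach described above might look like the following (a sketch; the mount point is hypothetical, and the dd command is expected to stop with a "no space left on device" error once the file system is full):
# dd if=/dev/zero of=/mnt/my_vdo_fs/zero.fill bs=1M
# rm /mnt/my_vdo_fs/zero.fill
# sync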
File systems may generally be configured to issue DISCARD requests in one of two ways (examples of both follow this list):
- Realtime discard (also online discard or inline discard)
When realtime discard is enabled, file systems send REQ_DISCARD requests to the block layer whenever a user deletes a file and frees space. VDO receives these requests and returns space to its free pool, assuming the block was not shared.
For file systems that support online discard, you can enable it by setting the discard option at mount time.
- Batch discard
Batch discard is a user-initiated operation that causes the file system to notify the block layer (VDO) of any unused blocks. This is accomplished by sending the file system an ioctl request called FITRIM.
You can use the fstrim utility (for example from cron) to send this ioctl to the file system.
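For example, the two approaches might look like the following for a file system mounted from a VDO volume (a sketch; the mount point is hypothetical):
# mount -o discard /dev/mapper/my_vdo /mnt/my_vdo_fs
# fstrim /mnt/my_vdo_fs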
For more information on the discard feature, see Section 2.4, “Discard Unused Blocks”.
Reclaiming Space Without a File System
It is also possible to manage free space when the storage is being used as a block storage target without a file system. For example, a single VDO volume can be carved up into multiple subvolumes by installing the Logical Volume Manager (LVM) on top of it. Before deprovisioning a volume, the blkdiscard command can be used in order to free the space previously used by that logical volume. LVM supports the REQ_DISCARD command and will forward the requests to VDO at the appropriate logical block addresses in order to free the space. If other volume managers are being used, they would also need to support REQ_DISCARD, or equivalently, UNMAP for SCSI devices or TRIM for ATA devices.
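For example, before removing a logical volume that was carved out of a VDO volume with LVM, you might discard its blocks and then remove it (a sketch; the volume group and logical volume names are placeholders):
# blkdiscard /dev/vg_on_vdo/scratch_lv
# lvremove /dev/vg_on_vdo/scratch_lv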
Reclaiming Space on Fibre Channel or Ethernet Network
VDO volumes (or portions of volumes) can also be provisioned to hosts on a Fibre Channel storage fabric or an Ethernet network using SCSI target frameworks such as LIO or SCST. SCSI initiators can use the UNMAP command to free space on thinly provisioned storage targets, but the SCSI target framework will need to be configured to advertise support for this command. This is typically done by enabling thin provisioning on these volumes. Support for UNMAP can be verified on Linux-based SCSI initiators by running the following command:
# sg_vpd --page=0xb0 /dev/device
In the output, verify that the "Maximum unmap LBA count" value is greater than zero.
30.4.10. Increasing Logical Volume Size
Management applications can increase the logical size of a VDO volume using the vdo growLogical subcommand. Once the volume has been grown, the management application should inform any devices or file systems on top of the VDO volume of its new size. The volume may be grown as follows:
# vdo growLogical --name=my_vdo --vdoLogicalSize=new_logical_size
The use of this command allows storage administrators to initially create VDO volumes which have a logical size small enough to be safe from running out of space. After some period of time, the actual rate of data reduction can be evaluated, and if sufficient, the logical size of the VDO volume can be grown to take advantage of the space savings.
30.4.11. Increasing Physical Volume Size
To increase the amount of physical storage available to a VDO volume:
- Increase the size of the underlying device. The exact procedure depends on the type of the device. For example, to resize an MBR partition, use the fdisk utility as described in Section 13.5, “Resizing a Partition with fdisk”.
- Use the growPhysical option to add the new physical storage space to the VDO volume:
# vdo growPhysical --name=my_vdo
It is not possible to shrink a VDO volume with this command.
30.4.12. Automating VDO with Ansible
You can use the Ansible tool to automate VDO deployment and administration. For details, see:
- Ansible documentation: https://docs.ansible.com/
- VDO Ansible module documentation: https://docs.ansible.com/ansible/latest/modules/vdo_module.html
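As a minimal illustration, the vdo module can also be invoked ad hoc from the command line. The following sketch assumes the module's name, device, and state parameters as described in the module documentation linked above; the device path is a placeholder:
# ansible localhost -b -m vdo -a "name=my_vdo device=/dev/sdX state=present"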