31.3. Data Efficiency Testing Procedures


Successful validation of VDO depends on following a well-structured test procedure. This section provides a series of steps to follow, along with the expected results, as examples of tests to consider when participating in an evaluation.

Test Environment

The test cases in the next section make the following assumptions about the test environment:
  • One or more Linux physical block devices are available.
  • The target block device (for example, /dev/sdb) is larger than 512 GB.
  • Flexible I/O Tester (fio) version 2.1.1 or later is installed.
  • VDO is installed.
The following information should be recorded at the start of each test in order to ensure that the test environment is fully understood:
  • The Linux build used, including the kernel build number.
  • A complete list of installed packages, as obtained from the rpm -qa command.
  • Complete system specifications:
    • CPU type and quantity (available in /proc/cpuinfo).
    • Installed memory and the amount available after the base OS is running (available in /proc/meminfo).
    • Type(s) of drive controller(s) used.
    • Type(s) and quantity of disk(s) used.
  • A complete list of running processes (from ps aux or a similar listing).
  • Name of the Physical Volume and the Volume Group created for use with VDO (pvs and vgs listings).
  • File system used when formatting the VDO volume (if any).
  • Permissions on the mounted directory.
  • Contents of /etc/vdoconf.yml.
  • Location of the VDO files.
You can capture much of the required information by running sosreport.
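Alternatively, the individual items can be gathered manually. A minimal collection sketch, run as root, might look like the following (the test-env directory and file names are arbitrary choices, not a convention from this guide):

    # mkdir -p test-env
    # uname -r > test-env/kernel.txt
    # rpm -qa > test-env/packages.txt
    # cat /proc/cpuinfo /proc/meminfo > test-env/hardware.txt
    # ps aux > test-env/processes.txt
    # pvs > test-env/pvs.txt; vgs > test-env/vgs.txt
    # cp /etc/vdoconf.yml test-env/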

Workloads

Effectively testing VDO requires the use of data sets that simulate real world workloads. The data sets should provide a balance between data that can be deduplicated and/or compressed and data that cannot in order to demonstrate performance under different conditions.
There are several tools that can synthetically generate data with repeatable characteristics. Two utilities in particular, VDbench and fio, are recommended for use during testing.
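fio itself can synthesize such data: it has long supported controlled compressibility, and newer releases also support controlled duplication. The invocation below is a sketch; the target path and percentages are placeholders, and --buffer_compress_percentage and --dedupe_percentage may not be present in every build that satisfies the version requirement above, so verify them with fio --help first:

    # fio --name=mixed-data --filename=/dev/mapper/vdo0 --rw=write \
          --bs=4k --size=10g --ioengine=libaio --direct=1 \
          --buffer_compress_percentage=50 --dedupe_percentage=30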
This guide uses fio. Understanding the arguments is critical to a successful evaluation:
Table 31.1. fio Options

--size
  Description: The quantity of data fio sends to the target per job (see --numjobs below).
  Value: 100 GB

--bs
  Description: The block size of each read/write request produced by fio. Red Hat recommends a 4 KB block size to match VDO's 4 KB default.
  Value: 4k

--numjobs
  Description: The number of jobs that fio creates to run the benchmark. Each job sends the amount of data specified by the --size parameter. The first job sends data to the device at the offset specified by the --offset parameter. Subsequent jobs overwrite the same region of the disk unless the --offset_increment parameter is provided, which offsets each job from where the previous job began by that value. To achieve peak performance on flash, at least two jobs are recommended; one job is typically enough to saturate rotational disk (HDD) throughput.
  Value: 1 (HDD), 2 (SSD)

--thread
  Description: Instructs fio to run jobs in threads rather than forked processes, which may provide better performance by limiting context switching.
  Value: <N/A>

--ioengine
  Description: Several I/O engines are available in Linux and can be tested using fio. Red Hat testing uses the asynchronous unbuffered engine (libaio); if you are interested in another engine, discuss it with your Red Hat Sales Engineer. The libaio engine is used to evaluate workloads in which one or more processes make random requests simultaneously. libaio allows multiple requests to be made asynchronously from a single thread before any data has been retrieved, which limits the number of context switches that would be required if the requests were provided by many threads via a synchronous engine.
  Value: libaio

--direct
  Description: When set, requests are submitted to the device, bypassing the Linux kernel's page cache. The libaio engine must be used with direct enabled (=1), or the kernel may resort to the sync API for all I/O requests.
  Value: 1 (libaio)

--iodepth
  Description: The number of I/O buffers in flight at any time. A high iodepth usually increases performance, particularly for random reads or writes, because high depths ensure that the controller always has requests to batch. However, setting iodepth too high (typically greater than 1K) may cause undesirable latency. Red Hat recommends an iodepth between 128 and 512, but the final value is a trade-off that depends on how your application tolerates latency.
  Value: 128 (minimum)

--iodepth_batch_submit
  Description: The number of I/Os to submit at once when the iodepth buffer pool begins to empty. This parameter limits task switching from I/O to buffer creation during the test.
  Value: 16

--iodepth_batch_complete
  Description: The number of I/Os to retrieve before submitting another batch. This parameter limits task switching from I/O to buffer creation during the test.
  Value: 16

--gtod_reduce
  Description: Disables the time-of-day calls used to calculate latency. Because measuring latency lowers throughput, leave this option enabled (=1) unless latency measurement is necessary.
  Value: 1
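Putting these options together, a representative invocation might look like the following. This is a sketch, not a prescribed benchmark: the job name, the random-write workload, and the target device are assumptions.

    # fio --name=vdo-eval --filename=/dev/mapper/vdo0 --rw=randwrite \
          --size=100g --bs=4k --numjobs=2 --offset_increment=100g \
          --thread --ioengine=libaio --direct=1 --iodepth=128 \
          --iodepth_batch_submit=16 --iodepth_batch_complete=16 \
          --gtod_reduce=1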

31.3.1. Configuring a VDO Test Volume

1. Create a VDO Volume with a Logical Size of 1 TB on a 512 GB Physical Volume

  1. Create a VDO volume.
    • To test the VDO async mode on top of synchronous storage, create an asynchronous volume using the --writePolicy=async option:
      # vdo create --name=vdo0 --device=/dev/sdb \
                   --vdoLogicalSize=1T --writePolicy=async --verbose
      
    • To test the VDO sync mode on top of synchronous storage, create a synchronous volume using the --writePolicy=sync option:
      # vdo create --name=vdo0 --device=/dev/sdb \
                   --vdoLogicalSize=1T --writePolicy=sync --verbose
      
  2. Format the new device with an XFS or ext4 file system.
    • For XFS:
      # mkfs.xfs -K /dev/mapper/vdo0
      
    • For ext4:
      # mkfs.ext4 -E nodiscard /dev/mapper/vdo0
      
  3. Mount the formatted device:
    # mkdir /mnt/VDOVolume
    # mount /dev/mapper/vdo0 /mnt/VDOVolume && \
      chmod a+rwx /mnt/VDOVolume
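
Before starting a test run, you can optionally confirm the volume's initial state; the vdostats utility shipped with VDO prints a df-like summary of physical usage and space savings:

    # vdostats --human-readable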
    

31.3.2. Testing VDO Efficiency

2. Test Reading and Writing to the VDO Volume

  1. Write 32 GB of random data to the VDO volume:
    $ dd if=/dev/urandom of=/mnt/VDOVolume/testfile bs=4096 count=8388608
    
  2. Read the data from the VDO volume and write it to another location not on the VDO volume:
    $ dd if=/mnt/VDOVolume/testfile of=/home/user/testfile bs=4096
    
  3. Compare the two files using diff, which should report that the files are the same:
    $ diff -s /mnt/VDOVolume/testfile /home/user/testfile
    
  4. Copy the file to a second location on the VDO volume:
    $ dd if=/home/user/testfile of=/mnt/VDOVolume/testfile2 bs=4096
    
  5. Compare the third file to the second file. This should report that the files are the same:
    $ diff -s /mnt/VDOVolume/testfile2 /home/user/testfile
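
If you prefer to script this phase, the same five steps can be condensed into a checksum-based variant (a sketch; sha256sum stands in for the diff comparisons, and the file names follow the steps above):

    $ dd if=/dev/urandom of=/mnt/VDOVolume/testfile bs=4096 count=8388608
    $ dd if=/mnt/VDOVolume/testfile of=/home/user/testfile bs=4096
    $ dd if=/home/user/testfile of=/mnt/VDOVolume/testfile2 bs=4096
    $ sha256sum /mnt/VDOVolume/testfile /home/user/testfile /mnt/VDOVolume/testfile2

All three checksums should be identical.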
    

3. Remove the VDO Volume

  1. Unmount the file system created on the VDO volume:
    # umount /mnt/VDOVolume
  2. Run the command to remove the VDO volume vdo0 from the system:
    # vdo remove --name=vdo0
  3. Verify that the volume has been removed. There should be no listing in vdo list for the VDO partition:
    # vdo list --all | grep vdo

4. Measure Deduplication

  1. Create and mount a VDO volume following Section 31.3.1, “Configuring a VDO Test Volume”.
  2. Create 10 directories on the VDO volume, named vdo01 through vdo10, to hold 10 copies of the test data set:
    $ mkdir /mnt/VDOVolume/vdo{01..10}
  3. Examine the amount of disk space used according to the file system:
    $ df -h /mnt/VDOVolume
    
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/vdo0      1.5T  198M  1.4T   1% /mnt/VDOVolume
    
    Consider tabulating the results in a table:

    Statistic                 Bare File System    After Seed    After 10 Copies
    File System Used Size     198 MB
    VDO Data Used
    VDO Logical Used
  4. Run the following command and record the values. "Data blocks used" is the number of blocks used by user data on the physical device running under VDO. "Logical blocks used" is the number of blocks used before optimization; it serves as the starting point for measurements:
    # vdostats --verbose | grep "blocks used"
    
    data blocks used                : 1090
    overhead blocks used            : 538846
    logical blocks used             : 6059434
    
  5. Create a 4 GB data source file in the top level of the VDO volume:
    $ dd if=/dev/urandom of=/mnt/VDOVolume/sourcefile bs=4096 count=1048576
    
    4294967296 bytes (4.3 GB) copied, 540.538 s, 7.9 MB/s
    
  6. Re-examine the amount of physical disk space in use. This should show an increase in the number of blocks used corresponding to the file just written:
    $ df -h /mnt/VDOVolume
    
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/vdo0      1.5T  4.2G  1.4T   1% /mnt/VDOVolume
    
    # vdostats --verbose | grep "blocks used"
    
    data blocks used                : 1050093 (increased by 4GB)
    overhead blocks used            : 538846 (Did not change)
    logical blocks used             : 7108036 (increased by 4GB)
    
  7. Copy the file to each of the 10 subdirectories:
    $ for i in {01..10}; do
      cp /mnt/VDOVolume/sourcefile /mnt/VDOVolume/vdo$i
      done
    
  8. Once again, check the amount of physical disk space used (data blocks used). This number should be similar to the result of step 6 above, with only a slight increase due to file system journaling and metadata:
    $ df -h /mnt/VDOVolume
    
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/vdo0      1.5T   45G  1.3T   4% /mnt/VDOVolume
    
    # vdostats --verbose | grep "blocks used"
    
    data blocks used                : 1050836 (increased by 3M)
    overhead blocks used            : 538846
    logical blocks used             : 17594127 (increased by 41G)
    
  9. Subtract the file system space usage found before writing the test data from the new value recorded now. The difference is the amount of space consumed by this test from the file system's perspective.
  10. Observe the space savings in your recorded statistics:
    Note: In the following table, values have been converted to MB and GB. vdostats "blocks" are 4,096 bytes each.

    Statistic                 Bare File System    After Seed    After 10 Copies
    File System Used Size     198 MB              4.2 GB        45 GB
    VDO Data Used             4 MB                4.1 GB        4.1 GB
    VDO Logical Used          23.6 GB*            27.8 GB       68.7 GB

    * File system overhead for 1.6 TB formatted drive
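To compute the percentage of space saved directly from vdostats, a short awk sketch over the output shown above can help. The field positions are an assumption about the vdostats --verbose format, and the ratio treats all logical blocks as user data, so the result is approximate:

    # vdostats --verbose | awk '
        /data blocks used/    {d=$NF}
        /logical blocks used/ {l=$NF}
        END {printf "space saved: %.1f%%\n", 100*(l-d)/l}'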

5. Measure Compression

  1. Create a VDO volume of at least 10 GB of physical and logical size. Add options to disable deduplication and enable compression:
    # vdo create --name=vdo0 --device=/dev/sdb \
                 --vdoLogicalSize=10G --verbose \
                 --deduplication=disabled --compression=enabled
    
  2. Inspect VDO statistics before transfer; make note of data blocks used and logical blocks used (both should be zero):
    # vdostats --verbose | grep "blocks used"
  3. Format the new device with an XFS or ext4 file system.
    • For XFS:
      # mkfs.xfs -K /dev/mapper/vdo0
      
    • For ext4:
      # mkfs.ext4 -E nodiscard /dev/mapper/vdo0
      
  4. Mount the formatted device:
    # mkdir /mnt/VDOVolume
    # mount /dev/mapper/vdo0 /mnt/VDOVolume && \
      chmod a+rwx /mnt/VDOVolume
    
  5. Synchronize the VDO volume to complete any unfinished compression:
    # sync && dmsetup message vdo0 0 sync-dedupe
  6. Inspect VDO statistics again. The difference between logical blocks used and data blocks used is the number of 4 KB blocks saved by compression for the file system alone; VDO optimizes file system overhead as well as actual user data:
    # vdostats --verbose | grep "blocks used"
  7. Copy the contents of /lib to the VDO volume and record the total size. The transfer statistics shown here are from rsync, which reports the total directly; a recursive cp works as well, but you would then measure the size separately:
    # rsync -av /lib /mnt/VDOVolume
    
    ...
    sent 152508960 bytes  received 60448 bytes  61027763.20 bytes/sec
    total size is 152293104  speedup is 1.00
    
  8. Synchronize Linux caches and the VDO volume:
    # sync && dmsetup message vdo0 0 sync-dedupe
  9. Inspect VDO statistics once again. Observe the logical and data blocks used:
    # vdostats --verbose | grep "blocks used"
    • logical blocks used - data blocks used represents the amount of space (in units of 4 KB blocks) saved by compression for the copy of your /lib files.
    • Multiply that difference by the 4,096-byte block size to express the saving in bytes: (logical blocks used - data blocks used) * 4096 = bytes saved by compression. Comparing this figure with the total size recorded in step 7 shows how much of the copy compression eliminated; a worked sketch follows this procedure.
  10. Remove the VDO volume:
    # umount /mnt/VDOVolume && vdo remove --name=vdo0
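
As a worked sketch of the arithmetic with hypothetical block counts (the two values below are placeholders, not measurements from this guide):

    # LOGICAL=37800    # hypothetical logical blocks used after the copy
    # DATA=25400       # hypothetical data blocks used after the copy
    # echo $(( (LOGICAL - DATA) * 4096 )) bytes saved
    50790400 bytes saved

Against the 152,293,104-byte total size reported in step 7, that would represent roughly a 33% saving.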

6. Test VDO Compression Efficiency

  1. Create and mount a VDO volume following Section 31.3.1, “Configuring a VDO Test Volume”.
  2. Repeat the experiments in the section called “4. Measure Deduplication” and the section called “5. Measure Compression” without removing the volume. Observe changes to space savings in vdostats.
  3. Experiment with your own datasets.

7. Understanding TRIM and DISCARD

Thin provisioning allows a logical or virtual storage space to be larger than the underlying physical storage. Applications such as file systems benefit from running on the larger virtual layer of storage, and data-efficiency techniques such as data deduplication reduce the number of physical data blocks needed to store all of the data. To benefit from these storage savings, the physical storage layer needs to know when application data has been deleted.
Traditional file systems did not have to inform the underlying storage when data was deleted. File systems that work with thin-provisioned storage send TRIM or DISCARD commands to inform the storage system when a logical block is no longer required. These commands can be sent whenever a block is deleted by using the discard mount option, or they can be sent in a controlled manner with utilities such as fstrim, which tell the file system to detect which logical blocks are unused and send that information to the storage system in the form of a TRIM or DISCARD command.
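Both paths use standard tools. For example, with the device and mount point names used throughout this section:

    # mount -o discard /dev/mapper/vdo0 /mnt/VDOVolume
    # fstrim -v /mnt/VDOVolume

The first command enables online discard, sending a DISCARD as each block is freed; the second performs a single batched pass and, with -v, reports how many bytes were trimmed.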

Important

For more information on how thin provisioning works, see Thinly-Provisioned Logical Volumes (Thin Volumes) in the Red Hat Enterprise Linux 7 Logical Volume Manager Administration Guide.
To see how this works:
  1. Create and mount a new VDO logical volume following Section 31.3.1, “Configuring a VDO Test Volume”.
  2. Trim the file system to remove any unneeded blocks (this may take a long time):
    # fstrim /mnt/VDOVolume
  3. Record the initial state in the table below by entering:
    $ df -m /mnt/VDOVolume
    to see how much capacity is used in the file system, and run vdostats to see how many physical and logical data blocks are being used. (A helper that gathers both values is sketched after the table.)
  4. Create a 1 GB file with non-duplicate data in the file system running on top of VDO:
    $ dd if=/dev/urandom of=/mnt/VDOVolume/file bs=1M count=1K
    
    and then collect the same data. The file system should show an additional 1 GB in use, and the data blocks used and logical blocks used should have increased similarly.
  5. Run fstrim /mnt/VDOVolume and confirm that this has no impact after creating the new file, since no blocks have been freed yet.
  6. Delete the 1 GB file:
    $ rm /mnt/VDOVolume/file
    Check and record the parameters. The file system is aware that a file has been deleted, but there has been no change to the number of physical or logical blocks because the file deletion has not been communicated to the underlying storage.
  7. Run fstrim /mnt/VDOVolume and record the same parameters. fstrim looks for free blocks in the file system and sends a TRIM command to the VDO volume for unused addresses, which releases the associated logical blocks, and VDO processes the TRIM to release the underlying physical blocks.
    Step                    File Space Used (MB)    Data Blocks Used    Logical Blocks Used
    Initial
    Add 1 GB File
    Run fstrim
    Delete 1 GB File
    Run fstrim
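
To fill in a row of the table, the two measurements can be taken together (a sketch; the awk field position is an assumption based on the df output format shown earlier):

    # df -m /mnt/VDOVolume | awk 'NR==2 {print "file space used (MB):", $3}'
    # vdostats --verbose | grep "blocks used"
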
From this exercise, it is clear that the TRIM process is needed for the underlying storage to maintain an accurate picture of capacity utilization. fstrim is a command line tool that analyzes many blocks at once for greater efficiency. An alternative method is to use the file system discard option when mounting. The discard option updates the underlying storage after each file system block is deleted, which can slow throughput but provides the greatest utilization awareness. Finally, the need to TRIM or DISCARD unused blocks is not unique to VDO; any thin-provisioned storage system faces the same challenge.