Chapter 17. Storage

New kernel subsystem: libnvdimm

This update adds libnvdimm, a kernel subsystem responsible for the detection, configuration, and management of Non-Volatile Dual Inline Memory Modules (NVDIMMs). As a result, if NVDIMMs are present in the system, they are exposed through the /dev/pmem* device nodes and can be configured using the ndctl utility. (BZ#1269626)
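For example, assuming NVDIMM hardware is present and the ndctl package is installed, the detected DIMMs and namespaces can be inspected as follows; the output and namespace layout depend on the platform:
# ndctl list --dimms
# ndctl list --namespaces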

Hardware with NVDIMM support

At the time of the Red Hat Enterprise Linux 7.3 release, a number of original equipment manufacturers (OEMs) are in the process of adding support for Non-Volatile Dual Inline Memory Module (NVDIMM) hardware. As these products are introduced in the market, Red Hat will work with these OEMs to test these configurations and, if possible, announce support for them on Red Hat Enterprise Linux 7.3.
Since this is a new technology, a specific support statement will be issued for each product and supported configuration. This will be done after successful Red Hat testing and corresponding documented support from the OEM.
The currently supported NVDIMM products are:
  • HPE NVDIMM on HPE ProLiant systems. For specific configurations, see Hewlett Packard Enterprise Company support statements.
NVDIMM products and configurations that are not on this list are not supported. The Red Hat Enterprise Linux 7.3 Release Notes will be updated as NVDIMM products are added to the list of supported products. (BZ#1389121)

New packages: nvml

The nvml packages contain the Non-Volatile Memory Library (NVML), a collection of libraries for using memory-mapped persistence, optimized specifically for persistent memory. (BZ#1274541)

SCSI now supports multiple hardware queues

The nr_hw_queues field has been added to the Scsi_Host structure, which allows SCSI drivers to make use of multiple hardware queues. (BZ#1308703)

The exclusive_pref_bit optional argument has been added to the multipath ALUA prioritizer

If the exclusive_pref_bit argument is added to the multipath Asymmetric Logical Unit Access (ALUA) prioritizer and a path has the Target Port Group Support (TPGS) pref bit set, multipath creates a path group containing only that path and assigns it the highest priority. Users can now either leave the preferred path in a path group with other equally optimized paths, which is the default, or place it in a path group by itself by adding the exclusive_pref_bit argument. (BZ#1299652)
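As a minimal sketch, the argument is passed through prio_args in multipath.conf; the device entry below is illustrative rather than a vendor-supplied default:
devices {
    device {
        vendor "EXAMPLE"
        product "EXAMPLE-ARRAY"
        prio "alua"
        prio_args "exclusive_pref_bit"
    }
}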

multipathd now supports raw format mode in multipathd formatted output commands

The multipathd formatted output commands now offer raw format mode, which removes the headers and additional padding between fields. Support for additional format wildcards has been added as well. Raw format mode makes it easier to collect and parse information about multipath devices, particularly for use in scripting. (BZ#1299651)
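For example, the same formatted query can be issued with and without raw mode; the format string shown here is only illustrative, and the available wildcards depend on the multipath version:
# multipathd show paths format "%N:%R:%n:%r"
# multipathd show paths raw format "%N:%R:%n:%r"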

Improved LVM locking infrastructure

lvmlockd is a next generation locking infrastructure for LVM. It allows LVM to safely manage shared storage from multiple hosts, using either the dlm or sanlock lock managers. sanlock allows lvmlockd to coordinate hosts through storage-based locking, without the need for an entire cluster infrastructure. For more information, see the lvmlockd(8) man page.
This feature was originally introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview. In Red Hat Enterprise Linux 7.3, lvmlockd is fully supported. (BZ#1299977)
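As a rough sketch, assuming use_lvmlockd = 1 is set in lvm.conf and, for sanlock, a unique host_id is set in lvmlocal.conf, a shared volume group might be created and its lockspace started as follows; vg1 and /dev/sdb are placeholders:
# systemctl start wdmd sanlock lvmlockd
# vgcreate --shared vg1 /dev/sdb
# vgchange --lock-start vg1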

Support for caching thinly-provisioned logical volumes with limitations

Red Hat Enterprise Linux 7.3 provides the ability to cache thinly provisioned logical volumes. This brings caching benefits to all the thin logical volumes associated with a particular thin pool. However, when thin pools are set up in this way, it is not currently possible to grow the thin pool without removing the cache layer first. This also means that thin pool auto-grow features are unavailable. Users should take care to monitor the fullness and consumption rate of their thin pools to avoid running out of space. Refer to the lvmthin(7) man page for information on thinly-provisioned logical volumes and the lvmcache(7) man page for information on LVM cache volumes. (BZ#1371597)
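For illustration, a cache pool can be attached to an existing thin pool along the following lines; vg, fast_cache, thinpool, and /dev/fast_pv are placeholder names, and the cache pool is assumed to reside on faster devices:
# lvcreate --type cache-pool -L 1G -n fast_cache vg /dev/fast_pv
# lvconvert --type cache --cachepool vg/fast_cache vg/thinpool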

device-mapper-persistent-data rebased to version 0.6.2

The device-mapper-persistent-data packages have been upgraded to upstream version 0.6.2, which provides a number of bug fixes and enhancements over the previous version. Notably, the thin_ls tool, which can provide information about thin volumes in a pool, is now available. (BZ#1315452)
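For example, thin_ls reads a thin pool's metadata device; the device node below is a placeholder, and on a live pool the tool is typically pointed at a metadata snapshot (see the thin_ls documentation):
# thin_ls /dev/mapper/vg-pool_tmeta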

Support for DIF/DIX (T10 PI) on specified hardware

SCSI T10 DIF/DIX is fully supported in Red Hat Enterprise Linux 7.3, provided that the hardware vendor has qualified it and provides full support for the particular HBA and storage array configuration. DIF/DIX is not supported on other configurations, on the boot device, or on virtualized guests.
At the current time, the following vendors are known to provide this support.
FUJITSU supports DIF and DIX on:
EMULEX 16G FC HBA:
  • EMULEX LPe16000/LPe16002, 10.2.254.0 BIOS, 10.4.255.23 FW, with:
  • FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3, AF250, AF650
QLOGIC 16G FC HBA:
  • QLOGIC QLE2670/QLE2672, 3.28 BIOS, 8.00.00 FW, with:
  • FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3
Note that T10 DIX requires a database or other software that provides generation and verification of checksums on disk blocks. No currently supported Linux file systems have this capability.
EMC supports DIF on:
EMULEX 8G FC HBA:
  • LPe12000-E and LPe12002-E with firmware 2.01a10 or later, with:
  • EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later
EMULEX 16G FC HBA:
  • LPe16000B-E and LPe16002B-E with firmware 10.0.803.25 or later, with:
  • EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later
QLOGIC 16G FC HBA:
  • QLE2670-E-SP and QLE2672-E-SP, with:
  • EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later
Please refer to the hardware vendor's support information for the latest status.
Support for DIF/DIX remains in Technology Preview for other HBAs and storage arrays. (BZ#1379689)

iprutils rebased to version 2.4.13

The iprutils packages have been upgraded to upstream version 2.4.13, which provides a number of bug fixes and enhancements over the previous version. Notably, this update adds support for enabling an adapter write cache on 8247-22L and 8247-21L base Serial Attached SCSI (SAS) backplanes to provide significant performance improvements. (BZ#1274367)

The multipathd command can now display the multipath data with JSON formatting

With this release, multipathd now includes the show maps json command to display the multipath data with JSON formatting. This makes it easier for other programs to parse the multipathd show maps output. (BZ#1353357)
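For example, the JSON output can be piped directly into a JSON-aware tool; the use of python -m json.tool here is only illustrative:
# multipathd show maps json | python -m json.tool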

Default configuration added for Huawei XSG1 arrays

With this release, multipath provides a default configuration for Huawei XSG1 arrays. (BZ#1333331)

Multipath now includes support for Ceph RADOS block devices

RBD devices need special uid handling and their own checker function with the ability to repair devices. With this release, it is now possible to run multipath on top of RADOS block devices. Note, however, that the multipath RBD support should be used only when an RBD image with the exclusive-lock feature enabled is being shared between multiple clients. (BZ#1348372)
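For reference, the exclusive-lock feature can be checked and enabled with the Ceph rbd client tooling, assuming it is installed; pool/image is a placeholder image name:
# rbd info pool/image
# rbd feature enable pool/image exclusive-lock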

Support added for PURE FlashArray

With this release, multipath has added built-in configuration support for the PURE FlashArray. (BZ#1300415)

Default configuration added for the MSA 2040 array

With this release, multipath provides a default configuration for the MSA 2040 array. (BZ#1341748)

New skip_kpartx configuration option to allow skipping kpartx partition creation

The skip_kpartx option has been added to the defaults, devices, and multipaths sections of the multipath.conf file. When this option is set to yes, multipath devices that are configured with skip_kpartx will not have any partition devices created for them. This allows users to create a multipath device without creating partitions, even if the device has a partition table. The default value of this option is no. (BZ#1311659)
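A minimal sketch of a per-device setting in multipath.conf follows; the WWID shown is a placeholder:
multipaths {
    multipath {
        wwid 360000000000000000e00000000000001
        skip_kpartx yes
    }
}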

Multipath's weightedpath prioritizer now supports a wwn keyword

The multipath weightedpath prioritizer now supports a wwn keyword. If this is used, the regular expression for matching the device is of the form host_wwnn:host_wwpn:target_wwnn:target_wwpn. These identifiers can either be looked up through sysfs or using the following multipathd show paths format wildcards: %N:%R:%n:%r.
The weightedpath prioritizer previously only allowed HBTL and device name regex matching. Neither of these is persistent across reboots, so the weightedpath prioritizer arguments needed to be changed after every boot. This feature provides a way to use the weightedpath prioritizer with persistent device identifiers. (BZ#1297456)
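A rough configuration sketch follows; the WWN values and the priority of 10 are placeholders:
defaults {
    prio "weightedpath"
    prio_args "wwn 0x50014380029a3b7c:.*:.*:.* 10"
}
The identifiers for existing paths can be listed with:
# multipathd show paths format "%N:%R:%n:%r"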

New packages: nvme-cli

The nvme-cli packages provide the Non-Volatile Memory Express (NVMe) command-line interface to manage and configure NVMe controllers. (BZ#1344730)
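For example, assuming an NVMe device is present, attached controllers and basic health data can be queried as follows; /dev/nvme0 is a placeholder:
# nvme list
# nvme smart-log /dev/nvme0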

LVM2 now displays a warning message when autoresize is not configured

By default, a thin pool is not automatically resized when its space is about to be exhausted. Exhausting the space can have various negative consequences. When the user is not using autoresize and the thin pool becomes full, a new warning message notifies the user about possible problems so that they can take appropriate actions, such as resizing the thin pool or no longer using the thin volume. (BZ#1189221)
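For reference, autoresize is controlled by the thin pool autoextend settings in the activation section of lvm.conf; the threshold and percentage below are illustrative values, and monitoring must be enabled for them to take effect:
activation {
    thin_pool_autoextend_threshold = 80
    thin_pool_autoextend_percent = 20
}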

dmstats now supports mapping of files to dmstats regions

The --filemap option of the dmstats command now allows the user to easily configure dmstats regions to track I/O operations to a specified file in the file system. Previously, I/O statistics were only available for a whole device or a region of a device, which limited administrators' insight into per-file I/O performance. Now, the --filemap option enables the user to inspect file I/O performance using the same tools used for any device-mapper device. (BZ#1286285)
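For example, a file can be mapped to dmstats regions and then reported on; the file path is a placeholder:
# dmstats create --filemap /var/lib/mysql/ibdata1
# dmstats report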

LVM no longer applies LV policies on external volumes

Previously, LVM disruptively applied its own policy for LVM thin logical volumes (LVs) on external volumes as well, which could result in unexpected behavior. With this update, external users of a thin pool can manage their external thin volumes themselves, and LVM no longer applies LV policies on such volumes. (BZ#1329235)

The thin pool is now always checked for sufficient space when creating a new thin volume

Even when the user does not use autoresize with thin pool monitoring, the thin pool is now always checked for sufficient space when creating a new thin volume.
A new thin volume now cannot be created in the following situations:
  • The thin pool has reached 100% of the data volume capacity.
  • Less than 25% of the thin pool metadata space is free, for metadata volumes smaller than 16 MiB.
  • Less than 4 MiB of metadata space is free. (BZ#1348336)
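To see how close a pool is to these limits, the data and metadata usage can be checked before creating a volume; vg/pool is a placeholder:
# lvs -o lv_name,data_percent,metadata_percent vg/pool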

LVM can now set the maximum number of cache pool chunks

The new LVM allocation parameter in the allocation section of the lvm.conf file, cache_pool_max_chunks, limits the maximum number of cache pool chunks. When this parameter is undefined or set to 0, the built-in defaults are used. (BZ#1364244)
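A minimal lvm.conf sketch follows; the value is illustrative only:
allocation {
    cache_pool_max_chunks = 10000000
}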

Support for ability to uncouple a cache pool from a logical volume

LVM now has the ability to uncouple a cache pool from a logical volume if a device in the cache pool has failed. Previously, this type of failure would require manual intervention and complicated alterations to LVM metadata in order to separate the cache pool from the origin logical volume.
To uncouple a logical volume from its cache-pool use the following command:
# lvconvert --uncache vg/lv
Note the following limitations:
  • The cache logical volume must be inactive (may require a reboot).
  • A writeback cache requires the --force option, because cached data that was lost in the failure must be abandoned.
(BZ#1131777)

LVM can now track and display thin snapshot logical volumes that have been removed

You can now configure your system to track thin snapshot logical volumes that have been removed by enabling the record_lvs_history metadata option in the lvm.conf configuration file. This allows you to display a full thin snapshot dependency chain that includes logical volumes that have been removed from the original dependency chain and have become historical logical volumes. The full dependency chain, including historical LVs, can be displayed with new lv_full_ancestors and lv_full_descendants reporting fields. For information on configuring and displaying historical logical volumes, see Logical Volume Administration. (BZ#1240549)
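For reference, a minimal sketch of enabling and viewing historical LVs follows; vg is a placeholder volume group name. In lvm.conf:
metadata {
    record_lvs_history = 1
}
The full chain, including historical volumes, can then be reported with:
# lvs -H -o name,full_ancestors,full_descendants vg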