
Chapter 7. Technology Previews


This chapter provides a list of all Technology Previews available in Red Hat Enterprise Linux 7.7.

For information on the scope of support for Technology Preview features, see Technology Preview Features Support Scope.

7.1. General Updates

The systemd-importd VM and container image import and export service

The latest systemd version now contains the systemd-importd daemon, which was not enabled in earlier builds, causing the machinectl pull-* commands to fail. Note that the systemd-importd daemon is offered as a Technology Preview and should not be considered stable.
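
For example, once the daemon is available, images can be pulled as follows; the URLs and image names are placeholders:

    # Download a raw disk image over HTTPS and verify its checksum
    machinectl pull-raw --verify=checksum https://example.com/images/rhel7-guest.raw.xz rhel7-guest
    # Download a tarball image
    machinectl pull-tar https://example.com/images/rhel7-rootfs.tar.xz rhel7-rootfs
    # List the downloaded images
    machinectl list-images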

(BZ#1284974)

7.2. Authentication and Interoperability

Containerized Identity Management server available as Technology Preview

The rhel7/ipa-server container image is available as a Technology Preview feature. Note that the rhel7/sssd container image is now fully supported.

For details, see Using Containerized Identity Management Services.

(BZ#1405325)

Setting up IdM as a hidden replica is now available as a Technology Preview

This enhancement enables administrators to set up an Identity Management (IdM) replica as a hidden replica. A hidden replica is an IdM server that has all services running and available. However, it is not advertised to other clients or masters because no SRV records exist for the services in DNS, and LDAP server roles are not enabled. Therefore, clients cannot use service discovery to detect hidden replicas.

Hidden replicas are primarily designed for dedicated services that could otherwise disrupt clients. For example, a full backup of IdM requires shutting down all IdM services on the master or replica. Because no clients use a hidden replica, administrators can temporarily shut down the services on this host without affecting any clients. Other use cases include high-load operations on the IdM API or the LDAP server, such as a mass import or extensive queries.

To install a new hidden replica, use the ipa-replica-install --hidden-replica command. To change the state of an existing replica, use the ipa server-state command.
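
For example, with a placeholder host name:

    # Install a new hidden replica
    ipa-replica-install --hidden-replica
    # Hide an existing replica, then make it visible again
    ipa server-state replica.idm.example.com --state=hidden
    ipa server-state replica.idm.example.com --state=enabled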

(BZ#1518939)

DNSSEC available as Technology Preview in IdM

Identity Management (IdM) servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated.

Users who decide to secure their DNS zones with DNSSEC are advised to read and follow the relevant DNSSEC operational practices and deployment documentation.

Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices described in the Red Hat Enterprise Linux Networking Guide.
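
As an illustration, inline signing is enabled per zone; the zone name below is a placeholder, and the exact options should be verified against the ipa dnszone-mod --help and ipa-dns-install --help output:

    # Designate one IdM server as the DNSSEC key master (run once per topology)
    ipa-dns-install --dnssec-master
    # Enable inline DNSSEC signing for a zone
    ipa dnszone-mod example.com. --dnssec=true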

(BZ#1115294)

Identity Management JSON-RPC API available as Technology Preview

An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as a Technology Preview.

In Red Hat Enterprise Linux 7.3, the IdM API was enhanced to enable multiple versions of API commands. Previously, enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables:

  • Administrators to use previous or later versions of IdM on the server than on the managing client.
  • Developers to use a specific version of an IdM call, even if the IdM version changes on the server.

In all cases, communication with the server is possible, regardless of whether one side uses, for example, a newer version that introduces new options for a feature.
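
A minimal sketch of a versioned JSON-RPC call, assuming a valid Kerberos ticket and a placeholder server name; the API version string is illustrative, and the endpoint path and required headers should be verified against the Knowledgebase article referenced below:

    # request.json
    {
        "id": 0,
        "method": "user_show",
        "params": [
            ["admin"],
            {"version": "2.228", "all": true}
        ]
    }

    # Send the request to the IdM server (the Referer header is required)
    curl -s --negotiate -u : --cacert /etc/ipa/ca.crt \
         -H 'Content-Type: application/json' \
         -H 'Referer: https://server.example.com/ipa' \
         -d @request.json https://server.example.com/ipa/json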

For details on using the API, see the related Knowledgebase article.

(BZ#1298286)

Use of AD and LDAP sudo providers

The Active Directory (AD) provider is a back end used to connect to an AD server. Starting with Red Hat Enterprise Linux 7.2, using the AD sudo provider together with the LDAP provider is available as a Technology Preview. To enable the AD sudo provider, add the sudo_provider=ad setting in the [domain] section of the sssd.conf file.
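
A minimal sssd.conf sketch, with the domain name as a placeholder:

    [sssd]
    services = nss, pam, sudo
    domains = example.com

    [domain/example.com]
    id_provider = ad
    access_provider = ad
    sudo_provider = ad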

(BZ#1068725)

The Custodia secrets service provider is available as a Technology Preview

As a Technology Preview, you can use Custodia, a secrets service provider. Custodia stores or serves as a proxy for secrets, such as keys or passwords.

For details, see the upstream documentation at http://custodia.readthedocs.io.

Note that since Red Hat Enterprise Linux 7.6, Custodia has been deprecated.

(BZ#1403214)

7.3. Clustering

Heuristics in corosync-qdevice available as a Technology Preview

Heuristics are a set of commands executed locally on startup, cluster membership change, successful connect to corosync-qnetd, and, optionally, on a periodic basis. When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd where it is used in calculations to determine which partition should be quorate.
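
A hedged corosync.conf sketch; the qnetd host name and the ping target are placeholders:

    quorum {
        provider: corosync_votequorum
        device {
            model: net
            net {
                host: qnetd.example.com
                algorithm: ffsplit
            }
            heuristics {
                mode: on
                timeout: 5000
                # exec_NAME entries define the commands that must succeed
                exec_ping: /usr/bin/ping -q -c 1 192.0.2.1
            }
        }
    }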

(BZ#1413573)

New fence-agents-heuristics-ping fence agent

As a Technology Preview, Pacemaker now supports the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way.

If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. If the heuristics agent gives a negative result for the off action it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions.

A user might want to use this agent, especially in a two-node cluster, where it does not make sense for a node to fence the peer if it knows beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, which would make the services unreachable to clients; a ping to a router might detect that situation.
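
A possible pcs sketch; the device names, addresses, and credentials are placeholders, and the agent parameters should be verified with pcs stonith describe fence_heuristics_ping:

    # Heuristics agent that pings the uplink router
    pcs stonith create fence-heuristic fence_heuristics_ping ping_targets="192.0.2.1" pcmk_host_list="node1 node2"
    # Agent that performs the actual fencing
    pcs stonith create fence-node2 fence_ipmilan ipaddr="203.0.113.2" login="admin" passwd="secret" pcmk_host_list="node2"
    # Place the heuristics agent before the fencing agent on the same level
    pcs stonith level add 1 node2 fence-heuristic,fence-node2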

(BZ#1476401)

The pcs tool now manages bundle resources in Pacemaker

As a Technology Preview starting with Red Hat Enterprise Linux 7.4, Pacemaker supports a special syntax for launching a Docker container with any infrastructure it requires: the bundle. After you have created a Pacemaker bundle, you can create a Pacemaker resource that the bundle encapsulates. For information on Pacemaker support for containers, see High Availability Add-On Reference.

There is one exception to this feature being Technology Preview: As of RHEL 7.4, Red Hat fully supports the usage of Pacemaker bundles for Red Hat Openstack Platform (RHOSP) deployments.
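
A minimal sketch; the image name is a placeholder and the resource options are trimmed for brevity:

    # Create a bundle that runs two container replicas and maps port 80
    pcs resource bundle create httpd-bundle container docker \
        image=registry.example.com/rhel7/httpd replicas=2 port-map port=80
    # Create a resource that runs inside the bundle
    pcs resource create httpd-inside ocf:heartbeat:apache bundle httpd-bundle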

(BZ#1433016)

New LVM and LVM lock manager resource agents

As a Technology Preview, Red Hat Enterprise Linux 7.6 introduces two new resource agents: lvmlockd and LVM-activate.

The LVM-activate agent provides a choice from multiple methods for LVM management throughout a cluster:

  • tagging: the same as tagging with the existing lvm resource agent
  • clvmd: the same as clvmd with the existing lvm resource agent
  • system ID: a new option for using system ID for volume group failover (an alternative to tagging).
  • lvmlockd: a new option for using lvmlockd and dlm for volume group sharing (an alternative to clvmd).

The new lvmlockd resource agent is used to start the lvmlockd daemon when LVM-activate is configured to use lvmlockd.

For information on the lvmlockd and LVM-activate resource agent, see the PCS help screens for those agents. For information on setting up LVM for use with lvmlockd, see the lvmlockd(8) man page.
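
A hedged sketch of the lvmlockd method; the group, clone, and volume group names are placeholders:

    # Lock manager daemons, cloned across the cluster
    pcs resource create dlm ocf:pacemaker:controld --group locking
    pcs resource create lvmlockd ocf:heartbeat:lvmlockd --group locking
    pcs resource clone locking interleave=true
    # Shared activation of a volume group through lvmlockd
    pcs resource create shared-vg ocf:heartbeat:LVM-activate vgname=shared_vg \
        vg_access_mode=lvmlockd activation_mode=shared --group shared-group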

(BZ#1513957)

7.4. Desktop

Wayland available as a Technology Preview

The Wayland display server protocol is available in Red Hat Enterprise Linux as a Technology Preview with the dependent packages required to enable Wayland support in GNOME, which supports fractional scaling. Wayland uses the libinput library as its input driver.

The following features are currently unavailable or do not work correctly:

  • Multiple GPU support is not possible at this time.
  • The NVIDIA binary driver does not work under Wayland.
  • The xrandr utility does not work under Wayland due to its different approach to handling resolutions, rotations, and layout.
  • Screen recording, remote desktop, and accessibility do not always work correctly under Wayland.
  • No clipboard manager is available.
  • It is currently impossible to restart GNOME Shell under Wayland.
  • Wayland ignores keyboard grabs issued by X11 applications, such as virtual machine viewers.

(BZ#1481411)

Fractional Scaling available as a Technology Preview

Starting with Red Hat Enterprise Linux 7.5, GNOME provides, as a Technology Preview, fractional scaling to address problems with monitors whose DPI lies between the low (scale 1) and high (scale 2) ranges.

Due to technical limitations, fractional scaling is available only on Wayland.

(BZ#1481395)

7.5. File Systems

File system DAX is now available for ext4 and XFS as a Technology Preview

Starting with Red Hat Enterprise Linux 7.3, Direct Access (DAX) provides, as a Technology Preview, a means for an application to directly map persistent memory into its address space.

To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application’s address space.
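
For example, assuming persistent memory exposed as /dev/pmem0 and a placeholder mount point:

    # Create an XFS file system on the NVDIMM and mount it with DAX
    mkfs.xfs /dev/pmem0
    mount -o dax /dev/pmem0 /mnt/pmem
    # Verify that the dax mount option is active
    mount | grep /mnt/pmem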

(BZ#1274459)

pNFS block layout is now available

As a Technology Preview, Red Hat Enterprise Linux clients can now mount pNFS shares with the block layout feature.

Note that Red Hat recommends using the pNFS SCSI layout instead, which is similar to block layout but easier to use.
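
pNFS requires NFS version 4.1 or later. A minimal mount sketch with placeholder server and export names:

    mount -t nfs -o vers=4.1 nfs-server.example.com:/export /mnt/pnfs
    # Check whether the block layout module was loaded
    lsmod | grep blocklayoutdriver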

(BZ#1111712)

OverlayFS

OverlayFS is a type of union file system. It allows the user to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. See the Linux kernel documentation for additional information.

OverlayFS remains a Technology Preview under most circumstances. As such, the kernel will log warnings when this technology is activated.

Full support is available for OverlayFS when used with Docker under the following restrictions:

  • OverlayFS is only supported for use as a Docker graph driver. Its use can only be supported for container COW content, not for persistent storage. Any persistent storage must be placed on non-OverlayFS volumes to be supported. Only default Docker configuration can be used; that is, one level of overlay, one lowerdir, and both lower and upper levels are on the same file system.
  • Only XFS is currently supported for use as a lower layer file system.
  • On Red Hat Enterprise Linux 7.3 and earlier, SELinux must be enabled and in enforcing mode on the physical machine, but must be disabled in the container when performing container separation, that is the /etc/sysconfig/docker file must not contain --selinux-enabled. Starting with Red Hat Enterprise Linux 7.4, OverlayFS supports SELinux security labels, and you can enable SELinux support for containers by specifying --selinux-enabled in /etc/sysconfig/docker.
  • The OverlayFS kernel ABI and userspace behavior are not considered stable, and may see changes in future updates.
  • In order for the yum and rpm utilities to work properly inside the container, the yum-plugin-ovl package must be installed.

Note that OverlayFS implements a restricted subset of the POSIX standard. Test your application thoroughly before deploying it with OverlayFS.

Note that XFS file systems must be created with the -n ftype=1 option enabled for use as an overlay. For the rootfs and any file systems created during system installation, set the --mkfsoptions=-n ftype=1 parameter in the Anaconda kickstart. When creating a new file system after the installation, run the # mkfs -t xfs -n ftype=1 /PATH/TO/DEVICE command. To determine whether an existing file system is eligible for use as an overlay, run the # xfs_info /PATH/TO/DEVICE | grep ftype command and check whether the ftype=1 option is enabled.
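
For example, with placeholder device path, mount point, and size:

    # Kickstart excerpt for a file system created during installation
    part /var/lib/docker --fstype=xfs --mkfsoptions="-n ftype=1" --size=20480

    # Creating and verifying a file system after installation
    mkfs -t xfs -n ftype=1 /dev/sdb1
    xfs_info /dev/sdb1 | grep ftype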

There are also several known issues associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation.

(BZ#1206277)

Btrfs file system

The B-Tree file system, Btrfs, is available as a Technology Preview in Red Hat Enterprise Linux 7.

Red Hat Enterprise Linux 7.4 introduced the last planned update to this feature. Btrfs has been deprecated, which means Red Hat will not be moving Btrfs to a fully supported feature and it will be removed in a future major release of Red Hat Enterprise Linux.

(BZ#1477977)

7.6. Hardware Enablement

LSI Syncro CS HA-DAS adapters

Red Hat Enterprise Linux 7.1 included code in the megaraid_sas driver to enable LSI Syncro CS high-availability direct-attached storage (HA-DAS) adapters. While the megaraid_sas driver is fully supported for previously enabled adapters, the use of this driver for Syncro CS is available as a Technology Preview. Support for this adapter is provided directly by LSI, your system integrator, or system vendor. Users deploying Syncro CS on Red Hat Enterprise Linux 7.2 and later are encouraged to provide feedback to Red Hat and LSI.

(BZ#1062759)

tss2 enables TPM 2.0 for IBM Power LE

The tss2 package adds the IBM implementation of a Trusted Computing Group Software Stack (TSS) 2.0 as a Technology Preview for the IBM Power LE architecture. This package enables users to interact with TPM 2.0 devices.

(BZ#1384452)

The ibmvnic device driver available as a Technology Preview

Since Red Hat Enterprise Linux 7.3, the IBM Virtual Network Interface Controller (vNIC) driver for IBM POWER architectures, ibmvnic, has been available as a Technology Preview. vNIC is a PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. It is a high-performance, efficient technology that, when combined with an SR-IOV NIC, provides bandwidth control Quality of Service (QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead, resulting in lower latencies and requiring fewer server resources, including CPU and memory, for network virtualization.

In Red Hat Enterprise Linux 7.6, the ibmvnic driver was upgraded to version 1.0, which provides a number of bug fixes and enhancements over the previous version. Notable changes include:

  • The code that previously requested error information has been removed because no error ID is provided by the Virtual I/O Server (VIOS).
  • Error reporting has been updated with the cause string. As a result, during a recovery, the driver classifies the string as a warning rather than an error.
  • Error recovery on a login failure has been fixed.
  • The failed state that occurred after a failover while migrating Logical Partitioning (LPAR) has been fixed.
  • The driver can now handle all possible login response return values.
  • A driver crash that happened during a failover or Link Power Management (LPM) if the Transmit and Receive (Tx/Rx) queues have changed has been fixed.

(BZ#1519746)

Aero adapters available as a Technology Preview

The following Aero adapters are available as a Technology Preview:

  • PCI ID 0x1000:0x00e2 and 0x1000:0x00e6, controlled by the mpt3sas driver
  • PCI ID 0x1000:0x10e5 and 0x1000:0x10e6, controlled by the megaraid_sas driver

(BZ#1660791, BZ#1660289)

The ice driver available as a Technology Preview

The Intel® Ethernet Connection E800 Series Linux Driver (ice.ko.xz) is available as a Technology Preview.

(BZ#1454916)

The igc driver available as a Technology Preview

The Intel® 2.5G Ethernet Linux Driver (igc.ko.xz) is available as a Technology Preview.

(BZ#1454918)

7.7. Kernel

eBPF system call for tracing

Red Hat Enterprise Linux 7.6 introduces the Extended Berkeley Packet Filter tool (eBPF) as a Technology Preview. This tool is enabled only for the tracing subsystem. For details, see the related Red Hat Knowledgebase article.

(BZ#1559615)

Heterogeneous memory management included as a Technology Preview

Red Hat Enterprise Linux 7.3 introduced the heterogeneous memory management (HMM) feature as a Technology Preview. This feature has been added to the kernel as a helper layer for devices that want to mirror a process address space into their own memory management unit (MMU). Thus a non-CPU device processor is able to read system memory using the unified system address space. To enable this feature, add experimental_hmm=enable to the kernel command line.
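
For example, to add the parameter persistently to all installed kernels:

    grubby --update-kernel=ALL --args="experimental_hmm=enable"
    reboot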

(BZ#1230959)

kexec as a Technology Preview

The kexec system call has been provided as a Technology Preview. This system call enables loading and booting into another kernel from the currently running kernel, thus performing the function of the boot loader from within the kernel. Hardware initialization, which is normally done during a standard system boot, is not performed during a kexec boot, which significantly reduces the time required for a reboot.
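
A minimal sketch of loading and booting into the currently installed kernel with kexec:

    # Load the target kernel and initramfs, reusing the current kernel command line
    kexec -l /boot/vmlinuz-$(uname -r) --initrd=/boot/initramfs-$(uname -r).img --reuse-cmdline
    # Cleanly shut down services and boot into the loaded kernel
    systemctl kexec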

(BZ#1460849)

kexec fast reboot as a Technology Preview

The kexec fast reboot feature, which was introduced in Red Hat Enterprise Linux 7.5, continues to be available as a Technology Preview. kexec fast reboot makes the reboot significantly faster. To use this feature, you must load the kexec kernel manually, and then reboot the operating system.

It is not possible to make kexec fast reboot the default reboot action. A special case is using kexec fast reboot with Anaconda: it still cannot be made the default reboot action, but when used with Anaconda, the operating system can automatically perform a kexec fast reboot after the installation completes if the user boots the kernel with the appropriate option. To schedule a kexec reboot, use the inst.kexec command on the kernel command line, or include a reboot --kexec line in the Kickstart file.

(BZ#1464377)

perf cqm has been replaced by resctrl

The Intel Cache Allocation Technology (CAT) was introduced in Red Hat Enterprise Linux 7.4 as a Technology Preview. However, the perf cqm tool did not work correctly due to an incompatibility between perf infrastructure and Cache Quality of Service Monitoring (CQM) hardware support. Consequently, multiple problems occurred when using perf cqm.

These problems included most notably:

  • perf cqm did not support the group of tasks which is allocated using resctrl
  • perf cqm gave random and inaccurate data due to several problems with recycling
  • perf cqm did not provide enough support when running different kinds of events together (the different events are, for example, tasks, system-wide, and cgroup events)
  • perf cqm provided only partial support for cgroup events
  • The partial support for cgroup events did not work in cases with a hierarchy of cgroup events, or when monitoring a task in a cgroup and the cgroup together
  • Monitoring tasks for the lifetime caused perf overhead
  • perf cqm reported the aggregate cache occupancy or memory bandwidth over all sockets, while in most cloud and VMM-based use cases the individual per-socket usage is needed

In Red Hat Enterprise Linux 7.5, perf cqm was replaced by the approach based on the resctrl file system, which addressed all of the aforementioned problems.

(BZ#1457533)

TC HW offloading available as a Technology Preview

Starting with Red Hat Enterprise Linux 7.6, Traffic Control (TC) Hardware offloading has been provided as a Technology Preview.

Hardware offloading enables selected network traffic processing functions, such as shaping, scheduling, policing, and dropping, to be executed directly in hardware instead of being processed in software, which improves performance.
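
A hedged sketch, assuming a NIC and driver that support the hw-tc-offload feature; the interface name and addresses are placeholders:

    # Enable TC hardware offloading on the NIC
    ethtool -K eth0 hw-tc-offload on
    # Install an ingress rule that is offloaded to hardware only (skip_sw)
    tc qdisc add dev eth0 ingress
    tc filter add dev eth0 parent ffff: protocol ip flower skip_sw dst_ip 192.0.2.10 action drop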

(BZ#1503123)

AMD xgbe network driver available as a Technology Preview

Starting with Red Hat Enterprise Linux 7.6, the AMD xgbe network driver has been provided as a Technology Preview.

(BZ#1589397)

Secure Memory Encryption is available only as a Technology Preview

Currently, Secure Memory Encryption (SME) is incompatible with kdump functionality, as the kdump kernel lacks the memory key to decrypt SME-encrypted memory. Red Hat found that with SME enabled, servers under testing might fail to perform some functions and therefore the feature is unfit for use in production. Consequently, SME is changing the support level from Supported to Technology Preview. Customers are encouraged to report any issues found while testing in pre-production to Red Hat or their system vendor.

(BZ#1726642)

criu rebased to version 3.5

Red Hat Enterprise Linux 7.2 introduced the criu tool as a Technology Preview. This tool implements Checkpoint/Restore in User-space (CRIU), which can be used to freeze a running application and store it as a collection of files. Later, the application can be restored from its frozen state.

Note that the criu tool depends on Protocol Buffers, a language-neutral, platform-neutral extensible mechanism for serializing structured data. The protobuf and protobuf-c packages, which provide this dependency, were also introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview.

In Red Hat Enterprise Linux 7.7, the criu packages were upgraded to the latest upstream version, which provides support for Podman to do a container checkpoint and restore. The newly added functionality only works without SELinux support.

(BZ#1400230)

The mlx5_core driver supports the Mellanox ConnectX-6 Dx network adapter as a Technology Preview

This enhancement adds the PCI IDs of the Mellanox ConnectX-6 Dx network adapter to the mlx5_core driver. On hosts that use this adapter, RHEL loads the mlx5_core driver automatically. Note that Red Hat provides this feature as an unsupported Technology Preview.

(BZ#1685900)

7.8. Networking

Cisco usNIC driver

Cisco Unified Communication Manager (UCM) servers have an optional feature to provide a Cisco proprietary User Space Network Interface Controller (usNIC), which allows performing Remote Direct Memory Access (RDMA)-like operations for user-space applications. The libusnic_verbs driver, which is available as a Technology Preview, makes it possible to use usNIC devices via standard InfiniBand RDMA programming based on the Verbs API.

(BZ#916384)

Cisco VIC kernel driver

The Cisco VIC InfiniBand kernel driver, which is available as a Technology Preview, allows the use of Remote Direct Memory Access (RDMA)-like semantics on proprietary Cisco architectures.

(BZ#916382)

Trusted Network Connect

Trusted Network Connect, available as a Technology Preview, is used with existing network access control (NAC) solutions, such as TLS, 802.1X, or IPsec to integrate endpoint posture assessment; that is, collecting an endpoint’s system information (such as operating system configuration settings, installed packages, and others, termed as integrity measurements). Trusted Network Connect is used to verify these measurements against network access policies before allowing the endpoint to access the network.

(BZ#755087)

SR-IOV functionality in the qlcnic driver

Support for Single-Root I/O virtualization (SR-IOV) has been added to the qlcnic driver as a Technology Preview. Support for this functionality will be provided directly by QLogic, and customers are encouraged to provide feedback to QLogic and Red Hat. Other functionality in the qlcnic driver remains fully supported.

(BZ#1259547)

The flower classifier with off-loading support

flower is a Traffic Control (TC) classifier intended to allow users to configure matching on well-known packet fields for various protocols. It is intended to make it easier to configure rules over the u32 classifier for complex filtering and classification tasks. flower also supports the ability to off-load classification and action rules to underlying hardware if the hardware supports it. The flower TC classifier is now provided as a Technology Preview.
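
For example, dropping TCP traffic to port 80 from a given subnet on an ingress qdisc; the interface name and addresses are placeholders:

    tc qdisc add dev eth0 ingress
    tc filter add dev eth0 parent ffff: protocol ip flower \
        ip_proto tcp src_ip 198.51.100.0/24 dst_port 80 action drop
    # List the installed filters
    tc filter show dev eth0 parent ffff: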

(BZ#1393375)

7.9. Red Hat Enterprise Linux System Roles

The postfix role of RHEL System Roles available as a Technology Preview

Red Hat Enterprise Linux System Roles provides a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles. This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases.

Since Red Hat Enterprise Linux 7.4, the rhel-system-roles packages have been distributed through the Extras repository.

The postfix role is available as a Technology Preview.

The following roles are fully supported:

  • kdump
  • network
  • selinux
  • storage - available with the RHEA-2020:0407 advisory
  • timesync

For more information, see the Knowledgebase article about RHEL System Roles.
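
A minimal playbook sketch for the postfix role; the host group and the postfix_conf variable layout are assumptions to verify against the role documentation:

    ---
    - hosts: mail_servers
      vars:
        postfix_conf:
          relayhost: smtp.example.com
      roles:
        - rhel-system-roles.postfix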

(BZ#1439896)

rhel-system-roles-sap available as a Technology Preview

The rhel-system-roles-sap package provides Red Hat Enterprise Linux (RHEL) System Roles for SAP, which can be used to automate the configuration of a RHEL system to run SAP workloads. These roles greatly reduce the time to configure a system to run SAP workloads by automatically applying the optimal settings that are based on best practices outlined in relevant SAP Notes. Access is limited to RHEL for SAP Solutions offerings. Please contact Red Hat Customer Support if you need assistance with your subscription.

With the RHEA-2019:3190 advisory, the following new roles in the rhel-system-roles-sap package are available as a Technology Preview:

  • sap-preconfigure
  • sap-netweaver-preconfigure
  • sap-hana-preconfigure
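
A minimal playbook sketch, shown with a placeholder inventory group:

    ---
    - hosts: sap_hana_hosts
      roles:
        - sap-preconfigure
        - sap-hana-preconfigure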

For more information, see Red Hat Enterprise Linux System Roles for SAP.

Note: RHEL 7.7 for SAP Solutions is scheduled to be validated for use with SAP HANA on Intel 64 architecture and IBM POWER8. Other SAP applications and database products, for example, SAP NetWeaver and SAP ASE, can use RHEL 7.7 features. Please consult SAP Notes 2369910 and 2235581 for the latest information about validated releases and SAP support.

(BZ#1752544)

7.10. Security

SECCOMP can be now enabled in libreswan

As a Technology Preview, the seccomp=enabled|tolerant|disabled option has been added to the ipsec.conf configuration file, which makes it possible to use the Secure Computing mode (SECCOMP). This improves syscall security by whitelisting the system calls that Libreswan is allowed to execute. For more information, see the ipsec.conf(5) man page.
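
For example, in the config setup section of /etc/ipsec.conf:

    config setup
        # Possible values: enabled, tolerant, disabled
        seccomp=enabled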

(BZ#1375750)

pk12util can now import certificates signed with RSA-PSS

The pk12util tool now supports importing a certificate signed with the RSA-PSS algorithm as a Technology Preview.

Note that if the corresponding private key is imported and has the PrivateKeyInfo.privateKeyAlgorithm field that restricts the signing algorithm to RSA-PSS, it is ignored when importing the key to a browser. See https://bugzilla.mozilla.org/show_bug.cgi?id=1413596 for more information.
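
For example, to import a PKCS #12 file into an NSS database; the file name and database path are placeholders:

    pk12util -i rsa-pss-cert.p12 -d sql:/etc/pki/nssdb
    # List the imported certificates
    certutil -L -d sql:/etc/pki/nssdb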

(BZ#1431210)

Support for certificates signed with RSA-PSS in certutil has been improved

Support for certificates signed with the RSA-PSS algorithm in the certutil tool has been improved. Notable enhancements and fixes include:

  • The --pss option is now documented.
  • The PKCS#1 v1.5 algorithm is no longer used for self-signed signatures when a certificate is restricted to use RSA-PSS.
  • Empty RSA-PSS parameters in the subjectPublicKeyInfo field are no longer printed as invalid when listing certificates.
  • The --pss-sign option for creating regular RSA certificates signed with the RSA-PSS algorithm has been added.

Support for certificates signed with RSA-PSS in certutil is provided as a Technology Preview.
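
For example, creating a self-signed certificate signed with RSA-PSS; the nickname, subject, and database path are placeholders:

    certutil -S -d sql:/etc/pki/nssdb -n example-rsa-pss -s "CN=rsa-pss.example.com" \
        -x -t "C,C,C" -g 2048 -Z SHA256 --pss-sign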

(BZ#1425514)

NSS is now able to verify RSA-PSS signatures on certificates

Since the RHEL 7.5 version of the nss package, the Network Security Services (NSS) libraries support verification of RSA-PSS signatures on certificates as a Technology Preview. Prior to this update, clients using NSS as the SSL backend were not able to establish a TLS connection to a server that offered only certificates signed with the RSA-PSS algorithm.

Note that the functionality has the following limitations:

  • The algorithm policy settings in the /etc/pki/nss-legacy/rhel7.config file do not apply to the hash algorithms used in RSA-PSS signatures.
  • RSA-PSS parameters restrictions between certificate chains are ignored and only a single certificate is taken into account.

(BZ#1432142)

USBGuard enables blocking USB devices while the screen is locked as a Technology Preview

With the USBGuard framework, you can influence how an already running usbguard-daemon instance handles newly inserted USB devices by setting the value of the InsertedDevicePolicy runtime parameter. This functionality is provided as a Technology Preview, and the default is to apply the policy rules to determine whether to authorize the device.

See the Blocking USB devices while the screen is locked Knowledgebase article.
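
For example, to block newly inserted devices; the set-parameter subcommand and its availability should be verified against the installed usbguard version:

    # Runtime change for the running daemon
    usbguard set-parameter InsertedDevicePolicy block
    # Persistent setting in /etc/usbguard/usbguard-daemon.conf
    InsertedDevicePolicy=block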

(BZ#1480100)

7.11. Storage

NVMe/FC available as a Technology Preview in Qlogic adapters using the qla2xxx driver

The NVMe over Fibre Channel (NVMe/FC) transport type is available as a Technology Preview in Qlogic adapters using the qla2xxx driver.

NVMe/FC is an additional fabric transport type for the Nonvolatile Memory Express (NVMe) protocol, in addition to the Remote Direct Memory Access (RDMA) protocol that was previously introduced in Red Hat Enterprise Linux.

NVMe/FC provides a higher-performance, lower-latency I/O protocol over existing Fibre Channel infrastructure. This is especially important with solid-state storage arrays, because it allows the performance benefits of NVMe storage to be passed through the fabric transport, rather than being encapsulated in a different protocol, SCSI.

Note that since Red Hat Enterprise Linux 7.6, NVMe/FC is fully supported with Broadcom Emulex Fibre Channel 32Gbit adapters using the lpfc driver.
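
A hedged sketch; the ql2xnvmeenable module parameter name and the nvme-cli invocation are assumptions to verify against the driver and nvme-cli documentation, and the port addresses are placeholders:

    # Enable FC-NVMe support in the qla2xxx driver (assumed parameter name)
    echo "options qla2xxx ql2xnvmeenable=1" > /etc/modprobe.d/qla2xxx-nvme.conf
    dracut -f
    # Discover and connect to NVMe/FC subsystems
    nvme connect-all --transport=fc \
        --traddr=nn-0x20000090fa000001:pn-0x10000090fa000001 \
        --host-traddr=nn-0x20000090fa000002:pn-0x10000090fa000002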

(BZ#1387768)

Multi-queue I/O scheduling for SCSI

Red Hat Enterprise Linux 7 includes a new multiple-queue I/O scheduling mechanism for block devices known as blk-mq. The scsi-mq package allows the Small Computer System Interface (SCSI) subsystem to make use of this new queuing mechanism. This functionality is provided as a Technology Preview and is not enabled by default. To enable it, add scsi_mod.use_blk_mq=Y to the kernel command line.
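
For example, to enable it persistently for all installed kernels:

    grubby --update-kernel=ALL --args="scsi_mod.use_blk_mq=Y"
    reboot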

Also note that although blk-mq is intended to offer improved performance, particularly for low-latency devices, it is not guaranteed to always provide better performance. Notably, in some cases, enabling scsi-mq can result in significantly deteriorated performance, especially on systems with many CPUs.

(BZ#1109348)

Targetd plug-in from the libStorageMgmt API

Since Red Hat Enterprise Linux 7.1, storage array management with libStorageMgmt, a storage array independent API, has been fully supported. The provided API is stable, consistent, and allows developers to programmatically manage different storage arrays and utilize the hardware-accelerated features provided. System administrators can also use libStorageMgmt to manually configure storage and to automate storage management tasks with the included command-line interface.

The Targetd plug-in is not fully supported and remains a Technology Preview.

(BZ#1119909)

SCSI-MQ as a Technology Preview in the qla2xxx and lpfc drivers

The qla2xxx driver updated in Red Hat Enterprise Linux 7.4 can enable the use of SCSI-MQ (multiqueue) with the ql2xmqsupport=1 module parameter. The default value is 0 (disabled).
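
For example, to set the parameter persistently through a modprobe configuration file:

    # /etc/modprobe.d/qla2xxx.conf
    options qla2xxx ql2xmqsupport=1

    # Regenerate the initial RAM disk so the option takes effect at boot
    dracut -f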

The SCSI-MQ functionality is provided as a Technology Preview when used with the qla2xxx or the lpfc drivers.

Note that recent performance testing at Red Hat with asynchronous I/O over Fibre Channel adapters using SCSI-MQ has shown significant performance degradation under certain conditions.

(BZ#1414957)

7.12. System and Subscription Management

YUM 4 available as Technology Preview

YUM version 4, the next generation of the YUM package manager, is available as a Technology Preview in the Red Hat Enterprise Linux 7 Extras channel.

YUM 4 is based on the DNF technology and offers the following advantages over the standard YUM 3 used on RHEL 7:

  • Increased performance
  • Support for modular content
  • Well-designed stable API for integration with tooling

To install YUM 4, run the yum install nextgen-yum4 command.

Make sure to install the dnf-plugin-subscription-manager package, which includes the subscription-manager plug-in. This plug-in is required for accessing protected repositories provided by the Red Hat Customer Portal or Red Hat Satellite 6, and for automatic updates of the /etc/yum.repos.d/redhat.repo file.

To manage packages, use the yum4 command and its options in the same way as the yum command.
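
For example:

    # Install YUM 4 and the subscription-manager plug-in from the Extras channel
    yum install nextgen-yum4 dnf-plugin-subscription-manager
    # Use yum4 in the same way as yum; httpd is an example package
    yum4 repolist
    yum4 install httpd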

For detailed information about differences between the new YUM 4 tool and YUM 3, see Changes in DNF CLI compared to YUM.

For instructions on how to enable the Extras channel, see the Knowledgebase article How to subscribe to the Extras channel/repo.

(BZ#1461652)

7.13. Virtualization

USB 3.0 support for KVM guests

USB 3.0 host adapter (xHCI) emulation for KVM guests remains a Technology Preview in Red Hat Enterprise Linux 7.

(BZ#1103193)

Select Intel network adapters now support SR-IOV in RHEL 7 guest OS on Hyper-V

As a Technology Preview, Red Hat Enterprise Linux 7 guest operating systems running on a Hyper-V hypervisor can now use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters supported by the ixgbevf and i40evf drivers. This feature is enabled when the following conditions are met:

  • SR-IOV support is enabled for the network interface controller (NIC)
  • SR-IOV support is enabled for the virtual NIC
  • SR-IOV support is enabled for the virtual switch
  • The virtual function (VF) from the NIC is attached to the virtual machine.

The feature is currently supported with Microsoft Windows Server 2019 and 2016.

(BZ#1348508)

No-IOMMU mode for VFIO drivers

As a Technology Preview, this update adds No-IOMMU mode for virtual function I/O (VFIO) drivers. The No-IOMMU mode provides the user with full user-space I/O (UIO) access to a direct memory access (DMA)-capable device without an I/O memory management unit (IOMMU). Note that in addition to not being supported, using this mode is not secure due to the lack of I/O management provided by IOMMU.

(BZ#1299662)

Azure M416v2 as a host for RHEL 7 guests

As a Technology Preview, the Azure M416v2 instance type can now be used as a host for virtual machines that use RHEL 7.6 and later as the guest operating systems.

(BZ#1661654)

virt-v2v can convert Debian and Ubuntu guests

As a Technology Preview, the virt-v2v utility can now convert Debian and Ubuntu guest virtual machines. Note that the following problems currently occur when performing this conversion:

  • virt-v2v cannot change the default kernel in the GRUB2 configuration, and the kernel configured in the guest is not changed during the conversion, even if a more optimal version of the kernel is available on the guest.
  • After converting a Debian or Ubuntu VMware guest to KVM, the name of the guest’s network interface may change and thus require manual configuration.
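
A minimal conversion sketch for a local disk image; the file name and output directory are placeholders:

    virt-v2v -i disk ubuntu-guest.img -o local -os /var/tmp/converted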

(BZ#1387213)

GPU-based mediated devices now support the VNC console

As a Technology Preview, the Virtual Network Computing (VNC) console is now available for use with GPU-based mediated devices, such as the NVIDIA vGPU technology. As a result, it is now possible to use these mediated devices for real-time rendering of a virtual machine’s graphical output.

(BZ#1475770)

Open Virtual Machine Firmware

The Open Virtual Machine Firmware (OVMF) is available as a Technology Preview in Red Hat Enterprise Linux 7. OVMF is a UEFI secure boot environment for AMD64 and Intel 64 guests. However, OVMF is not bootable with virtualization components available in RHEL 7. Note that OVMF is fully supported in RHEL 8.

(BZ#653382)
