Chapter 8. Known Issues
This chapter documents known problems in Red Hat Enterprise Linux 7.9.
8.1. Authentication and Interoperability
Trusts with Active Directory do not work properly after upgrading ipa-server using the latest container image
After upgrading an IdM server with the latest version of the container image, existing trusts with Active Directory domains no longer work. To work around this problem, delete the existing trust and re-establish it after the upgrade.
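For example, assuming a trust with a hypothetical Active Directory domain ad.example.com, the trust could be removed and re-created on the upgraded IdM server along these lines (a sketch; the exact options depend on your deployment):

```shell
# Run on the upgraded IdM server; ad.example.com and Administrator are placeholders.
ipa trust-del ad.example.com
ipa trust-add --type=ad ad.example.com --admin Administrator --password
```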
Potential risk when using the default value for the ldap_id_use_start_tls option
Using ldap:// without TLS for identity lookups poses a security risk, particularly a man-in-the-middle (MITM) attack, which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search.
Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls, defaults to false. Ensure that your setup operates in a trusted environment and decide whether it is safe to use unencrypted communication for id_provider = ldap. Note that id_provider = ad and id_provider = ipa are not affected because they use encrypted connections protected by SASL and GSSAPI.
If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL.
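A minimal sssd.conf domain section enforcing TLS might look as follows (the domain name and server URI are placeholders):

```ini
# /etc/sssd/sssd.conf (excerpt, hypothetical domain)
[domain/example.com]
id_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_id_use_start_tls = true
```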
(JIRA:RHELPLAN-155168)
8.2. Compiler and Tools
GCC thread sanitizer included in RHEL no longer works
Due to incompatible changes in kernel memory mapping, the thread sanitizer included with the GNU C Compiler (GCC) compiler version in RHEL no longer works. Additionally, the thread sanitizer cannot be adapted to the incompatible memory layout. As a result, it is no longer possible to use the GCC thread sanitizer included with RHEL.
As a workaround, use the version of GCC included in Red Hat Developer Toolset to build code which uses the thread sanitizer.
(BZ#1569484)
8.3. Installation and Booting
Systems installed as Server with GUI with the DISA STIG profile or with the CIS profile do not start properly
The DISA STIG profile and the CIS profile require the removal of the xorg-x11-server-common (X Windows) package but do not require a change of the default target. As a consequence, the system is configured to run the GUI, but the X Windows package is missing. As a result, the system does not start properly. To work around this problem, do not use the DISA STIG profile or the CIS profile with the Server with GUI software selection, or customize the profile by removing the package_xorg-x11-server-common_removed rule.
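If you customize the profile, one way to deselect the rule is a SCAP tailoring file; a sketch (the full rule ID is an assumption based on the short name above — verify it against your scap-security-guide content):

```xml
<!-- tailoring.xml excerpt; deselects the package-removal rule -->
<xccdf:select idref="xccdf_org.ssgproject.content_rule_package_xorg-x11-server-common_removed"
              selected="false"/>
```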
8.4. Kernel
The radeon driver fails to reset hardware correctly when performing kdump
When booting the kernel from the currently running kernel, such as when performing the kdump process, the radeon kernel driver currently does not properly reset hardware. Instead, the kdump kernel terminates unexpectedly, which causes the rest of the kdump service to fail.
To work around this problem, disable radeon in kdump by adding the following line to the /etc/kdump.conf file:
dracut_args --omit-drivers "radeon"
Afterwards, restart the machine and kdump.
Note that in this scenario, no graphics will be available during kdump, but kdump will complete successfully.
(BZ#1168430)
Slow connection to RHEL 7 guest console on a Windows Server 2019 host
When using RHEL 7 as a guest operating system in multi-user mode on a Windows Server 2019 host, connecting to a console output of the guest currently takes significantly longer than expected. To work around this problem, connect to the guest using SSH or use Windows Server 2016 as the host.
(BZ#1706522)
Kernel deadlocks can occur when dm_crypt is used with intel_qat
The intel_qat kernel module uses GFP_ATOMIC memory allocations, which can fail under memory stress. Consequently, kernel deadlocks and possible data corruption can occur when the dm_crypt kernel module uses intel_qat for encryption offload. To work around this problem, you can choose any of the following:
- Update to RHEL 8
- Avoid using intel_qat for encryption offload (potential performance impact)
- Ensure that the system does not come under excessive memory pressure
(BZ#1813394)
The vmcore file generation fails on Amazon c5a machines on RHEL 7
On Amazon c5a machines, the Advanced Programmable Interrupt Controller (APIC) fails to route the interrupts of the Local APIC (LAPIC) when configured in flat mode inside the kdump kernel. As a consequence, the kdump kernel fails to boot, which prevents the kdump kernel from saving the vmcore file for further analysis.
To work around the problem:
1. Increase the crash kernel size by setting the crashkernel argument to 256M:
# grubby --args="crashkernel=256M" --update-kernel /boot/vmlinuz-`uname -r`
2. Set the nr_cpus=9 option by editing the /etc/sysconfig/kdump file:
KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=9 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable"
As a result, the kdump kernel boots with 9 CPUs and the vmcore file is captured upon kernel crash. Note that the kdump service can use a significant amount of crash kernel memory to dump the vmcore file because it enables 9 CPUs in the kdump kernel. Therefore, ensure that the crash kernel has a size reserve of 256 MB available for booting the kdump kernel.
(BZ#1844522)
Enabling some kretprobes can trigger kernel panic
Using kretprobes of the following functions can cause a CPU hard lock:
- _raw_spin_lock
- _raw_spin_lock_irqsave
- _raw_spin_unlock_irqrestore
- queued_spin_lock_slowpath
As a consequence, enabling these kprobe events can cause the system to stop responding, which triggers a kernel panic. To work around this problem, avoid configuring kretprobes for the mentioned functions.
(BZ#1838903)
The kdump service fails on UEFI Secure Boot enabled systems
If a UEFI Secure Boot enabled system boots with an outdated RHEL kernel version, the kdump service fails to start. In the described scenario, kdump reports the following error message:
kexec_file_load failed: Required key not available
This behavior occurs due to either of the following:
- Booting the crash kernel with an outdated kernel version.
- Configuring the KDUMP_KERNELVER variable in the /etc/sysconfig/kdump file to an outdated kernel version.
As a consequence, kdump fails to start, and hence no dump core is saved during the crash event.
To work around this problem, use either of the following:
- Boot the crash kernel with the latest RHEL 7 fixes.
- Configure KDUMP_KERNELVER in /etc/sysconfig/kdump to use the latest kernel version.
As a result, kdump starts successfully in the described scenario.
(BZ#1862840)
The RHEL installer might not detect iSCSI storage
The RHEL installer might not automatically set kernel command-line options related to iSCSI for some offloading iSCSI host bus adapters (HBAs). As a consequence, the RHEL installer might not detect iSCSI storage.
To work around the problem, add the following options to the kernel command line when booting to the installer:
rd.iscsi.ibft=1 rd.iscsi.firmware=1
These options enable network configuration and iSCSI target discovery from the pre-OS firmware configuration.
The firmware configures the iSCSI storage, and as a result, the installer can discover and use the iSCSI storage.
(BZ#1871027)
Race condition in the mlx5e_rep_neigh_update work queue sometimes triggers a kernel panic
When offloading encapsulation actions over the mlx5 device using the switchdev in-kernel driver model with the Single Root I/O Virtualization (SR-IOV) capability, a race condition can occur in the mlx5e_rep_neigh_update work queue. Consequently, the system terminates unexpectedly with a kernel panic, and the following message appears:
Workqueue: mlx5e mlx5e_rep_neigh_update [mlx5_core]
Currently, a workaround or partial mitigation to this problem is not known.
(BZ#1874101)
The ice driver does not load for Intel® network adapters
The ice kernel driver does not load for any Intel® Ethernet E810-XXV network adapters except the following:
- v00008086d00001593sv*sd*bc*sc*i*
- v00008086d00001592sv*sd*bc*sc*i*
- v00008086d00001591sv*sd*bc*sc*i*
Consequently, the network adapter remains undetected by the operating system. To work around this problem, you can use external drivers for RHEL 7 provided by Intel® or Dell.
(BZ#1933998)
kdump does not support setting nr_cpus to 2 or higher in Hyper-V virtual machines
When using RHEL 7.9 as a guest operating system on a Microsoft Hyper-V hypervisor, the kdump kernel in some cases becomes unresponsive when the nr_cpus parameter is set to 2 or higher. To avoid this problem, do not change the default nr_cpus=1 parameter in the /etc/sysconfig/kdump file of the guest.
8.5. Networking
Verification of signatures using the MD5 hash algorithm is disabled in Red Hat Enterprise Linux 7
It is impossible to connect to any Wi-Fi Protected Access (WPA) Enterprise Access Point (AP) that requires MD5-signed certificates. To work around this problem, copy the wpa_supplicant.service file from the /usr/lib/systemd/system/ directory to the /etc/systemd/system/ directory and add the following line to the [Service] section of the file:
Environment=OPENSSL_ENABLE_MD5_VERIFY=1
Then run the systemctl daemon-reload command as root to reload the service file.
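Taken together, the workaround can be sketched as follows (run as root; restarting the service afterwards is an assumption, needed only if wpa_supplicant is already running):

```shell
cp /usr/lib/systemd/system/wpa_supplicant.service /etc/systemd/system/
# Edit /etc/systemd/system/wpa_supplicant.service and add under [Service]:
#   Environment=OPENSSL_ENABLE_MD5_VERIFY=1
systemctl daemon-reload
systemctl restart wpa_supplicant
```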
Note that MD5 certificates are highly insecure and Red Hat does not recommend using them.
(BZ#1062656)
bind-utils DNS lookup utilities support fewer search domains than glibc
The dig, host, and nslookup DNS lookup utilities from the bind-utils package support only up to 8 search domains, while the glibc resolver in the system supports any number of search domains. As a consequence, the DNS lookup utilities may get different results than applications when a search in the /etc/resolv.conf file contains more than 8 domains.
To work around this problem, use one of the following:
- Full names ending with a dot, or
- Fewer than nine domains in the resolv.conf search clause.
Note that it is not recommended to use more than three domains.
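For illustration, a resolv.conf that stays within the recommended limit (the domain names and server address are placeholders):

```
# /etc/resolv.conf (excerpt)
search example.com corp.example.com lab.example.com
nameserver 192.0.2.53
```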
BIND 9.11 changes log severity of query errors when query logging is enabled
With the BIND 9.11 update, the log severity for the query-errors category changes from debug 1 to info when query logging is enabled. Consequently, additional log entries describing errors now appear in the query log. To work around this problem, add the following statement into the logging section of the /etc/named.conf file:
category query-errors { default_debug; };
This will move query errors back into the debug log.
Alternatively, use the following statement to discard all query error messages:
category query-errors { null; };
As a result, only name queries are logged in a similar way to the previous BIND 9.9.4 release.
(BZ#1853191)
named-chroot service fails to start when check-names option is not allowed in forward zone
Previously, the usage of the check-names option was allowed in the forward zone definitions. With the rebase to bind 9.11, only the following zone types use the check-names statement:
- master
- slave
- stub
- hint
Consequently, the check-names option, previously allowed in the forward zone definitions, is no longer accepted and causes a failure on start of the named-chroot service. To work around this problem, remove the check-names option from all the zone types except for master, slave, stub, or hint.
As a result, the named-chroot service starts again without errors. Note that the ignored statements will not change the provided service.
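For illustration, a forward zone definition that triggers the failure and the line to remove (the zone name and forwarder address are placeholders):

```
zone "example.com" IN {
    type forward;
    forwarders { 192.0.2.1; };
    // check-names ignore;   <- remove: not accepted for forward zones in BIND 9.11
};
```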
(BZ#1851836)
The NFQUEUE target overrides the queue-cpu-fanout flag
The iptables NFQUEUE target using the --queue-bypass and --queue-cpu-fanout options accidentally overrides the --queue-cpu-fanout option if it is ordered after the --queue-bypass option. Consequently, the --queue-cpu-fanout option is ignored.
To work around this problem, place the --queue-cpu-fanout option before the --queue-bypass option.
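A sketch of the corrected option order (the chain and the --queue-balance range are illustrative):

```shell
# Broken ordering: --queue-cpu-fanout after --queue-bypass is ignored
# iptables -A INPUT -j NFQUEUE --queue-balance 0:3 --queue-bypass --queue-cpu-fanout

# Working ordering: --queue-cpu-fanout before --queue-bypass
iptables -A INPUT -j NFQUEUE --queue-balance 0:3 --queue-cpu-fanout --queue-bypass
```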
8.6. Security
Audit executable watches on symlinks do not work
File monitoring provided by the -w option cannot directly track a path. It has to resolve the path to a device and an inode to make a comparison with the executed program. A watch monitoring an executable symlink monitors the device and an inode of the symlink itself instead of the program executed in memory, which is found from the resolution of the symlink. Even if the watch resolves the symlink to get the resulting executable program, the rule triggers on any multi-call binary called from a different symlink. This results in flooding logs with false positives. Consequently, Audit executable watches on symlinks do not work.
To work around the problem, set up a watch for the resolved path of the program executable, and filter the resulting log messages using the last component listed in the comm= or proctitle= fields.
(BZ#1421794)
Executing a file while transitioning to another SELinux context requires additional permissions
Due to the backport of the fix for CVE-2019-11190 in RHEL 7.8, executing a file while transitioning to another SELinux context requires more permissions than in previous releases.
In most cases, the domain_entry_file() interface grants the newly required permission to the SELinux domain. However, if the executed file is a script, the target domain may lack the permission to execute the interpreter’s binary. This lack of the newly required permission leads to AVC denials. If SELinux is running in enforcing mode, the kernel might kill the process with the SIGSEGV or SIGKILL signal in such a case.
If the problem occurs on a file from a domain that is part of the selinux-policy package, file a bug against this component. If it is part of a custom policy module, Red Hat recommends granting the missing permissions using standard SELinux interfaces:
- corecmd_exec_shell() for shell scripts
- corecmd_exec_all_executables() for interpreters labeled as bin_t, such as Perl or Python
For more details, see the /usr/share/selinux/devel/include/kernel/corecommands.if file provided by the selinux-policy-doc package and the An exception that breaks the stability of the RHEL SELinux policy API article on the Customer Portal.
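In a custom policy module, these interfaces take the domain type as their argument; a sketch for a hypothetical myapp_t domain:

```
# myapp.te (excerpt, hypothetical custom module)
corecmd_exec_shell(myapp_t)             # scripts executed via a shell
corecmd_exec_all_executables(myapp_t)   # interpreters labeled bin_t, such as Perl or Python
```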
(BZ#1832194)
Scanning large numbers of files with OpenSCAP causes systems to run out of memory
The OpenSCAP scanner stores all collected results in the memory until the scan finishes. As a consequence, the system might run out of memory on systems with low RAM when scanning large numbers of files, for example, from the large package groups Server with GUI and Workstation.
To work around this problem, use smaller package groups, for example, Server and Minimal Install on systems with limited RAM. If your scenario requires large package groups, you can test whether your system has sufficient memory in a virtual or staging environment. Alternatively, you can tailor the scanning profile to deselect rules that involve recursion over the entire / filesystem:
- rpm_verify_hashes
- rpm_verify_permissions
- rpm_verify_ownership
- file_permissions_unauthorized_world_writable
- no_files_unowned_by_user
- dir_perms_world_writable_system_owned
- file_permissions_unauthorized_suid
- file_permissions_unauthorized_sgid
- file_permissions_ungroupowned
- dir_perms_world_writable_sticky_bits
This prevents the OpenSCAP scanner from causing the system to run out of memory.
RSA signatures with SHA-1 cannot be completely disabled in RHEL 7
Because the ssh-rsa signature algorithm must be allowed in OpenSSH to use the new SHA-2 (rsa-sha2-512, rsa-sha2-256) signatures, you cannot completely disable SHA-1 algorithms in RHEL 7. To work around this limitation, you can update to RHEL 8 or use ECDSA/Ed25519 keys, which use only SHA-2.
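Generating an Ed25519 key pair, for example, avoids SHA-1-based RSA signatures entirely (the output path shown is the usual default):

```shell
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
```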
rpm_verify_permissions fails in the CIS profile
The rpm_verify_permissions rule compares file permissions to package default permissions. However, the Center for Internet Security (CIS) profile, which is provided by the scap-security-guide packages, changes some file permissions to be more strict than default. As a consequence, verification of certain files using rpm_verify_permissions fails. To work around this problem, manually verify that these files have the following permissions:
- /etc/cron.d (0700)
- /etc/cron.hourly (0700)
- /etc/cron.monthly (0700)
- /etc/crontab (0600)
- /etc/cron.weekly (0700)
- /etc/cron.daily (0700)
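These modes can be checked in one step, for example:

```shell
stat -c '%a %n' /etc/cron.d /etc/cron.hourly /etc/cron.daily \
    /etc/cron.weekly /etc/cron.monthly /etc/crontab
```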
For more information about the related feature, see SCAP Security Guide now provides a profile aligned with the CIS RHEL 7 Benchmark v2.2.0.
OpenSCAP file ownership-related rules do not work with remote user and group back ends
The OVAL language used by the OpenSCAP suite to perform configuration checks has a limited set of capabilities. It lacks the ability to obtain a complete list of system users, groups, and their IDs if some of them are remote, for example, stored in an external database such as LDAP.
As a consequence, rules that work with user IDs or group IDs do not have access to the IDs of remote users. Therefore, such IDs are identified as foreign to the system. This might cause scans to fail on compliant systems. In the scap-security-guide packages, the following rules are affected:
- xccdf_org.ssgproject.content_rule_file_permissions_ungroupowned
- xccdf_org.ssgproject.content_rule_no_files_unowned_by_user
To work around this problem, if a rule that deals with user or group IDs fails on a system that defines remote users, check the failed parts manually. The OpenSCAP scanner enables you to specify the --oval-results option together with the --report option, which displays offending files and UIDs in the HTML report and makes the manual revision process straightforward.
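A sketch of such a scan (the profile ID and data stream path are illustrative and depend on your installed scap-security-guide content):

```shell
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_cis \
    --oval-results --report report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml
```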
Additionally, in RHEL 8.3, the rules in the scap-security-guide packages contain a warning that only local user back ends have been evaluated.
rpm_verify_permissions and rpm_verify_ownership fail in the Essential Eight profile
The rpm_verify_permissions rule compares file permissions to package default permissions, and the rpm_verify_ownership rule compares the file owner to the package default owner. However, the Australian Cyber Security Centre (ACSC) Essential Eight profile, which is provided by the scap-security-guide packages, changes some file permissions and ownerships to be more strict than default. As a consequence, verification of certain files using rpm_verify_permissions and rpm_verify_ownership fails. To work around this problem, manually verify that the /usr/libexec/abrt-action-install-debuginfo-to-abrt-cache file is owned by root and that it has the suid and sgid bits set.
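One way to check the ownership and mode, for example:

```shell
stat -c '%U %A' /usr/libexec/abrt-action-install-debuginfo-to-abrt-cache
# Expect owner root with the s bits in the user and group positions, e.g. -rwsr-sr-x
```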
8.7. Servers and Services
The compat-unixODBC234 package for SAP requires a symlink to load the unixODBC library
The unixODBC package version 2.3.1 is available in RHEL 7. In addition, the compat-unixODBC234 package version 2.3.4 is available in the RHEL 7 for SAP Solutions sap-hana repository; see New package: compat-unixODBC234 for SAP for details.
Due to minor ABI differences between unixODBC version 2.3.1 and 2.3.4, an application built with version 2.3.1 might not work with version 2.3.4 in certain rare cases. To prevent problems caused by this incompatibility, the compat-unixODBC234 package uses a different SONAME for shared libraries available in this package, and the library file is available under /usr/lib64/libodbc.so.1002.0.0 instead of /usr/lib64/libodbc.so.2.0.0.
As a consequence, third-party applications built with unixODBC version 2.3.4 that load the unixODBC library at runtime using the dlopen() function fail to load the library with the following error message:
/usr/lib64/libodbc.so.2.0.0: cannot open shared object file: No such file or directory
To work around this problem, create the following symbolic link:
# ln -s /usr/lib64/libodbc.so.1002.0.0 /usr/lib64/libodbc.so.2.0.0
and create similar symlinks for other libraries from the compat-unixODBC234 package if necessary.
Note that the compat-unixODBC234 package conflicts with the base RHEL 7 unixODBC package. Therefore, uninstall unixODBC prior to installing compat-unixODBC234.
(BZ#1844443)
Symbol conflicts between OpenLDAP libraries might cause crashes in httpd
When both the libldap and libldap_r libraries provided by OpenLDAP are loaded and used within a single process, symbol conflicts between these libraries might occur. Consequently, Apache httpd child processes using the PHP ldap extension might terminate unexpectedly if the mod_security or mod_auth_openidc modules are also loaded by the httpd configuration.
With this update to the Apache Portable Runtime (APR) library, you can work around the problem by setting the APR_DEEPBIND environment variable, which enables the use of the RTLD_DEEPBIND dynamic linker option when loading httpd modules. When the APR_DEEPBIND environment variable is enabled, crashes no longer occur in httpd configurations that load conflicting libraries.
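On RHEL 7, environment variables for httpd are typically set in /etc/sysconfig/httpd; a sketch of the workaround (restarting the service afterwards is assumed):

```shell
# /etc/sysconfig/httpd (excerpt)
APR_DEEPBIND=1
# then apply it: systemctl restart httpd
```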
(BZ#1739287)
8.8. Storage
RHEL 7 does not support VMD 2.0 storage
The 10th generation Intel Core and 3rd generation Intel Xeon Scalable platforms (also known as Intel Ice Lake) include hardware that utilizes version 2.0 of the Volume Management Device (VMD) technology.
RHEL 7 no longer receives updates to support new hardware. As a consequence, RHEL 7 cannot recognize Non-Volatile Memory Express (NVMe) devices that are managed by VMD 2.0.
To work around the problem, Red Hat recommends that you upgrade to a recent major RHEL release.
(BZ#1942865)
SCSI devices cannot be deleted after removing the iSCSI target
If a SCSI device is BLOCKED due to a transport issue, including an iSCSI session being disrupted by a network or target-side configuration change, the attached devices cannot be deleted while blocked on transport error recovery. If you attempt to remove the SCSI device using the delete sysfs command (/sys/block/sd*/device/delete), the command can block indefinitely.
To work around this issue, terminate the transport session with the iscsiadm logout commands, either in session mode (specifying a session ID) or in node mode (specifying a matching target name and portal for the blocked session). Issuing an iSCSI session logout on a recovering session terminates the session and removes the SCSI devices.
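The two logout forms can be sketched as follows (the session ID, target name, and portal are placeholders):

```shell
# Session mode: log out of session ID 1 (list IDs with: iscsiadm -m session)
iscsiadm -m session -r 1 -u
# Node mode: log out of a specific target and portal
iscsiadm -m node -T iqn.2003-01.org.example:target1 -p 192.0.2.10:3260 -u
```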
(BZ#1439055)
8.9. System and Subscription Management
The needs-restarting command from yum-utils might fail to display the container boot time
In certain RHEL 7 container environments, the needs-restarting command from the yum-utils package might incorrectly display the host boot time instead of the container boot time. As a consequence, this command might still report a false reboot warning message after you restart the container environment. You can safely ignore this harmless warning message in such a case.
8.10. Virtualization
RHEL 7.9 virtual machines on IBM POWER sometimes do not detect hot-plugged devices
RHEL 7.9 virtual machines (VMs) started on an IBM POWER system on a RHEL 8.3 or later hypervisor do not detect hot-plugged PCI devices if the hot plug is performed before the VM has fully booted. To work around the problem, reboot the VM.
(BZ#1854917)
8.11. RHEL in cloud environments
Core dumping RHEL 7 virtual machines that use NICs with enabled accelerated networking to a remote machine on Azure fails
Currently, using the kdump utility to save the core dump file of a RHEL 7 virtual machine (VM) on a Microsoft Azure hypervisor to a remote machine does not work correctly when the VM uses a NIC with accelerated networking enabled. As a consequence, the kdump operation fails.
To prevent this problem from occurring, add the following line to the /etc/kdump.conf file and restart the kdump service:
extra_modules pci_hyperv
(BZ#1846667)
SSH with password login now impossible by default on RHEL 8 virtual machines configured using cloud-init
For security reasons, the ssh_pwauth option in the configuration of the cloud-init utility is now set to 0 by default. As a consequence, it is not possible to use a password login when connecting via SSH to RHEL 8 virtual machines (VMs) configured using cloud-init.
If you require a password login for SSH connections to your RHEL 8 VMs configured using cloud-init, set ssh_pwauth: 1 in the /etc/cloud/cloud.cfg file before deploying the VM.
(BZ#1685580)