Chapter 11. Known issues
This part describes known issues in Red Hat Enterprise Linux 8.8.
11.1. Installer and image creation
During RHEL installation on IBM Z, udev does not assign predictable interface names to RoCE cards enumerated by FID
If you start a RHEL 8.7 or later installation with the net.naming-scheme=rhel-8.7 kernel command-line option, the udev device manager on the RHEL installation media ignores this setting for RoCE cards enumerated by the function identifier (FID). As a consequence, udev assigns unpredictable interface names to these devices. There is no workaround during the installation, but you can configure the feature after the installation. For further details, see Determining a predictable RoCE device name on the IBM Z platform.
(JIRA:RHEL-11397)
Installation fails on IBM Power 10 systems with LPAR and secure boot enabled
RHEL installer is not integrated with static key secure boot on IBM Power 10 systems. Consequently, when a logical partition (LPAR) is enabled with the secure boot option, the installation fails with the error Unable to proceed with RHEL-x.x Installation.
To work around this problem, install RHEL without enabling secure boot. After booting the system:
- Copy the signed kernel into the PReP partition using the dd command (see the sketch below).
- Restart the system and enable secure boot.
Once the firmware verifies the bootloader and the kernel, the system boots up successfully.
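A minimal sketch of the copy step; the kernel image path and the PReP partition device (/dev/sda1 here) are assumptions and must be verified on your system before running, because dd overwrites the target:
# dd if=/boot/vmlinuz-$(uname -r) of=/dev/sda1
You can identify the PReP boot partition beforehand with a tool such as fdisk -l.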
For more information, see https://www.ibm.com/support/pages/node/6528884
Bugzilla:2025814
Unexpected SELinux policies on systems where Anaconda is running as an application
When Anaconda is running as an application on an already installed system (for example, to perform another installation to an image file using the --image Anaconda option), the system is not prohibited from modifying the SELinux types and attributes during installation. As a consequence, certain elements of SELinux policy might change on the system where Anaconda is running. To work around this problem, do not run Anaconda on the production system. Instead, run it in a temporary virtual machine so that the SELinux policy on the production system is not modified. Running Anaconda as part of the system installation process, such as installing from boot.iso or dvd.iso, is not affected by this issue.
The auth and authconfig Kickstart commands require the AppStream repository
The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig are used. However, by design, the authselect-compat package is only available in the AppStream repository.
To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the authselect Kickstart command during installation.
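For example, a Kickstart snippet using the second workaround; the profile name and feature are illustrative and must match your environment:
authselect select sssd with-mkhomedir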
Bugzilla:1640697
The reboot --kexec and inst.kexec commands do not provide a predictable system state
Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameter does not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results.
Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux.
Bugzilla:1697896
The USB CD-ROM drive is not available as an installation source in Anaconda
Installation fails when the USB CD-ROM drive is the installation source and the Kickstart ignoredisk --only-use= command is specified. In this case, Anaconda cannot find and use this source disk.
To work around this problem, use the harddrive --partition=sdX --dir=/ command to install from the USB CD-ROM drive. As a result, the installation does not fail.
Network access is not enabled by default in the installation program
Several installation features require network access, for example, registration of a system using the Content Delivery Network (CDN), NTP server support, and network installation sources. However, network access is not enabled by default, and as a result, these features cannot be used until network access is enabled.
To work around this problem, add ip=dhcp to boot options to enable network access when the installation starts. Optionally, passing a Kickstart file or a repository located on the network using boot options also resolves the problem. As a result, the network-based installation features can be used.
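For example, a boot line that enables DHCP networking and points to a network repository (the repository URL is illustrative):
inst.repo=http://server.example.com/rhel8/BaseOS ip=dhcp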
Bugzilla:1757877
Hard drive partitioned installations with the iso9660 filesystem fail
You cannot install RHEL on systems where the hard drive is partitioned with the iso9660 filesystem. This is due to the updated installation code that is set to ignore any hard disk containing an iso9660 file system partition. This happens even when RHEL is installed without using a DVD.
To work around this problem, add the following script in the Kickstart file to format the disk before the installation starts.
Note: Before performing the workaround, back up the data available on the disk. The wipefs command erases all the existing data from the disk.
%pre
wipefs -a /dev/sda
%end
As a result, installations work as expected without any errors.
IBM Power systems with HASH MMU mode fail to boot with memory allocation failures
IBM Power systems in HASH memory management unit (MMU) mode support kdump up to a maximum of 192 cores. Consequently, the system fails to boot with memory allocation failures if kdump is enabled on more than 192 cores. This limitation is due to RMA memory allocations during early boot in HASH MMU mode. To work around this problem, use the Radix MMU mode with fadump enabled instead of using kdump.
Bugzilla:2028361
RHEL for Edge installer image fails to create mount points when installing an rpm-ostree payload
When deploying rpm-ostree payloads, which are used, for example, in a RHEL for Edge installer image, the installer does not properly create some mount points for custom partitions. As a consequence, the installation is aborted with the following error:
The command 'mount --bind /mnt/sysimage/data /mnt/sysroot/data' exited with the code 32.
To work around this issue:
- Use an automatic partitioning scheme and do not add any mount points manually.
- Manually assign mount points only inside the /var directory (for example, /var/my-mount-point), and in the following standard directories: /, /boot, /var.
As a result, the installation process finishes successfully.
11.2. Subscription management
syspurpose addons have no effect on the subscription-manager attach --auto output
In Red Hat Enterprise Linux 8, four attributes of the syspurpose command-line tool have been added: role, usage, service_level_agreement and addons. Currently, only role, usage and service_level_agreement affect the output of running the subscription-manager attach --auto command. Users who attempt to set values to the addons argument will not observe any effect on the subscriptions that are auto-attached.
11.3. Software management
cr_compress_file_with_stat() can cause a memory leak
The createrepo_c C library has the API cr_compress_file_with_stat() function. This function is declared with char **dst as a second parameter. Depending on its other parameters, cr_compress_file_with_stat() either uses dst as an input parameter, or uses it to return an allocated string. This unpredictable behavior can cause a memory leak, because it does not inform the user when to free dst contents.
To work around this problem, a new API cr_compress_file_with_stat_v2 function has been added, which uses the dst parameter only as an input. It is declared as char *dst. This prevents the memory leak.
Note that the cr_compress_file_with_stat_v2 function is temporary and will be present only in RHEL 8. Later, cr_compress_file_with_stat() will be fixed instead.
Bugzilla:1973588
YUM transactions reported as successful when a scriptlet fails
Since RPM version 4.6, post-install scriptlets are allowed to fail without being fatal to the transaction. This behavior propagates up to YUM as well. This results in scriptlets which might occasionally fail while the overall package transaction reports as successful.
There is no workaround available at the moment.
Note that this is expected behavior that remains consistent between RPM and YUM. Any issues in scriptlets should be addressed at the package level.
11.4. Shells and command-line tools
ipmitool is incompatible with certain server platforms
The ipmitool utility serves for monitoring, configuring, and managing devices that support the Intelligent Platform Management Interface (IPMI). The current version of ipmitool uses Cipher Suite 17 by default instead of the previous Cipher Suite 3. Consequently, ipmitool fails to communicate with certain bare metal nodes that announced support for Cipher Suite 17 during negotiation, but do not actually support this cipher suite. As a result, ipmitool aborts with the no matching cipher suite error message.
For more details, see the related Knowledgebase article.
To solve this problem, update your baseboard management controller (BMC) firmware to use Cipher Suite 17.
Optionally, if the BMC firmware update is not available, you can work around this problem by forcing ipmitool to use a certain cipher suite. When invoking a managing task with ipmitool, add the -C option to the ipmitool command together with the number of the cipher suite you want to use. See the following example:
# ipmitool -I lanplus -H myserver.example.com -P mypass -C 3 chassis power status
ReaR fails to recreate a volume group when you do not use clean disks for restoring
ReaR fails to perform recovery when you want to restore to disks that contain existing data.
To work around this problem, wipe the disks manually before restoring to them if they have been previously used. To wipe the disks in the rescue environment, use one of the following commands before running the rear recover command:
- The dd command to overwrite the disks.
- The wipefs command with the -a flag to erase all available metadata.
See the following example of wiping metadata from the /dev/sda disk:
# wipefs -a /dev/sda[1-9] /dev/sda
This command wipes the metadata from the partitions on /dev/sda first, and then the partition table itself.
coreutils might report misleading EPERM error codes
GNU Core Utilities (coreutils) started using the statx() system call. If a seccomp filter returns an EPERM error code for unknown system calls, coreutils might consequently report misleading EPERM error codes, because EPERM cannot be distinguished from the actual Operation not permitted error returned by a working statx() syscall.
To work around this problem, update the seccomp filter to either permit the statx() syscall, or to return an ENOSYS error code for syscalls it does not know.
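To check whether a coreutils command on your system actually issues statx(), you can trace it; this is a diagnostic sketch, and the traced command is arbitrary:
# strace -e trace=statx stat /etc/hostname
If the seccomp filter is the cause, the trace shows statx() failing with EPERM.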
11.5. Infrastructure services
Postfix TLS fingerprint algorithm in the FIPS mode needs to be changed to SHA-256
By default in RHEL 8, postfix uses MD5 fingerprints with TLS for backward compatibility. However, in FIPS mode, the MD5 hashing function is not available, which may cause TLS to function incorrectly in the default postfix configuration. To work around this problem, the hashing function needs to be changed to SHA-256 in the postfix configuration file.
For more details, see the related Knowledgebase article Fix postfix TLS in the FIPS mode by switching to SHA-256 instead of MD5.
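A minimal sketch of the change using postconf; these are the standard Postfix TLS fingerprint digest parameters:
# postconf -e smtp_tls_fingerprint_digest=sha256
# postconf -e smtpd_tls_fingerprint_digest=sha256
# systemctl restart postfix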
The brltty package is not multilib compatible
It is not possible to have both 32-bit and 64-bit versions of the brltty package installed. You can either install the 32-bit (brltty.i686) or the 64-bit (brltty.x86_64) version of the package. The 64-bit version is recommended.
11.6. Security
tangd-keygen does not handle non-default umask correctly
The tangd-keygen script does not change file permissions for generated key files. Consequently, on systems with a default user file-creation mode mask (umask) that prevents other users from reading the keys, the tang-show-keys command returns the error message Internal Error 500 instead of displaying the keys.
To work around the problem, use the chmod o+r *.jwk command to change permissions on the files in the /var/db/tang directory.
sshd -T provides inaccurate information about Ciphers, MACs and KeX algorithms
The output of the sshd -T command does not contain the system-wide crypto policy configuration or other options that could come from an environment file in /etc/sysconfig/sshd and that are applied as arguments on the sshd command. This occurs because the upstream OpenSSH project did not support the Include directive at the time, so the Red-Hat-provided cryptographic defaults in RHEL 8 are applied as command-line arguments to the sshd executable in the sshd.service unit during the service's start by using an EnvironmentFile. To work around the problem, use the source command with the environment file and pass the crypto policy as an argument to the sshd command, as in sshd -T $CRYPTO_POLICY. For additional information, see Ciphers, MACs or KeX algorithms differ from sshd -T to what is provided by current crypto policy level. As a result, the output from sshd -T matches the currently configured crypto policy.
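A sketch of the workaround, assuming the crypto policy environment file on your system is /etc/crypto-policies/back-ends/opensshserver.config and that it defines the CRYPTO_POLICY variable:
# source /etc/crypto-policies/back-ends/opensshserver.config
# sshd -T $CRYPTO_POLICY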
Bugzilla:2044354
RHV hypervisor may not work correctly when hardening the system during installation
When installing Red Hat Virtualization Hypervisor (RHV-H) and applying the Red Hat Enterprise Linux 8 STIG profile, OSCAP Anaconda Add-on may harden the system as RHEL instead of RHV-H and remove essential packages for RHV-H. Consequently, the RHV hypervisor may not work. To work around the problem, install the RHV-H system without applying any profile hardening, and after the installation is complete, apply the profile by using OpenSCAP. As a result, the RHV hypervisor works correctly.
CVE OVAL feeds are now only in the compressed format, and data streams are not in the SCAP 1.3 standard
Red Hat provides CVE OVAL feeds in the bzip2-compressed format; they are no longer available in the XML file format. Because referencing compressed content is not standardized in the Security Content Automation Protocol (SCAP) 1.3 specification, third-party SCAP scanners can have problems scanning rules that use the feed.
Certain Rsyslog priority strings do not work correctly
Support for the GnuTLS priority string for imtcp that allows fine-grained control over encryption is not complete. Consequently, the following priority strings do not work properly in the Rsyslog remote logging application:
NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+DHE-RSA:+AES-256-GCM:+SIGN-RSA-SHA384:+COMP-ALL:+GROUP-ALL
To work around this problem, use only correctly working priority strings:
NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+ECDHE-RSA:+AES-128-CBC:+SIGN-RSA-SHA1:+COMP-ALL:+GROUP-ALL
As a result, current configurations must be limited to the strings that work correctly.
Server with GUI and Workstation installations are not possible with CIS Server profiles
The CIS Server Level 1 and Level 2 security profiles are not compatible with the Server with GUI and Workstation software selections. As a consequence, a RHEL 8 installation with the Server with GUI software selection and CIS Server profiles is not possible. An attempted installation using the CIS Server Level 1 or Level 2 profiles and either of these software selections will generate the error message:
package xorg-x11-server-common has been added to the list of excluded packages, but it can't be removed from the current software selection without breaking the installation.
If you need to align systems with the Server with GUI or Workstation software selections according to CIS benchmarks, use the CIS Workstation Level 1 or Level 2 profiles instead.
Kickstart uses org_fedora_oscap instead of com_redhat_oscap in RHEL 8
Kickstart references the Open Security Content Automation Protocol (OSCAP) Anaconda add-on as org_fedora_oscap instead of com_redhat_oscap, which might cause confusion. This is necessary to keep compatibility with Red Hat Enterprise Linux 7.
Bugzilla:1665082
libvirt overrides xccdf_org.ssgproject.content_rule_sysctl_net_ipv4_conf_all_forwarding
The libvirt virtualization framework enables IPv4 forwarding whenever a virtual network with a forward mode of route or nat is started. This overrides the configuration set by the xccdf_org.ssgproject.content_rule_sysctl_net_ipv4_conf_all_forwarding rule, and subsequent compliance scans report the fail result when assessing this rule.
Apply one of these scenarios to work around the problem:
- Uninstall the libvirt packages if your scenario does not require them.
- Change the forwarding mode of virtual networks created by libvirt.
- Remove the xccdf_org.ssgproject.content_rule_sysctl_net_ipv4_conf_all_forwarding rule by tailoring your profile.
OpenSSL in FIPS mode accepts only specific D-H parameters
In FIPS mode, TLS clients that use OpenSSL return a bad dh value error and abort TLS connections to servers that use manually generated parameters. This is because OpenSSL, when configured to work in compliance with FIPS 140-2, works only with Diffie-Hellman parameters compliant to NIST SP 800-56A rev3 Appendix D (groups 14, 15, 16, 17, and 18 defined in RFC 3526 and with groups defined in RFC 7919). Also, servers that use OpenSSL ignore all other parameters and instead select known parameters of similar size. To work around this problem, use only the compliant groups.
Bugzilla:1810911
crypto-policies incorrectly allow Camellia ciphers
The RHEL 8 system-wide cryptographic policies should disable Camellia ciphers in all policy levels, as stated in the product documentation. However, the Kerberos protocol enables the ciphers by default.
To work around the problem, apply the NO-CAMELLIA subpolicy:
# update-crypto-policies --set DEFAULT:NO-CAMELLIA
In the previous command, replace DEFAULT with the cryptographic level name if you have switched from DEFAULT previously.
As a result, Camellia ciphers are correctly disallowed across all applications that use system-wide crypto policies only when you disable them through the workaround.
OpenSC might not detect CardOS V5.3 card objects correctly
The OpenSC toolkit does not correctly detect serial numbers of smart cards using the CardOS V5.3 system. Consequently, the pkcs11-tool utility might not list card objects.
To work around the problem, turn off file caching by setting the use_file_caching = false option in the /etc/opensc.conf file.
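A sketch of the corresponding /etc/opensc.conf stanza, assuming the option belongs in the default application block, analogous to the pkcs15-init workaround below:
app default { framework pkcs15 { use_file_caching = false; } }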
Smart-card provisioning process through OpenSC pkcs15-init does not work properly
The file_caching option is enabled in the default OpenSC configuration, and the file caching functionality does not handle some commands from the pkcs15-init tool properly. Consequently, the smart-card provisioning process through OpenSC fails.
To work around the problem, add the following snippet to the /etc/opensc.conf file:
app pkcs15-init { framework pkcs15 { use_file_caching = false; } }
The smart-card provisioning through pkcs15-init only works if you apply the previously described workaround.
Connections to servers with SHA-1 signatures do not work with GnuTLS
SHA-1 signatures in certificates are rejected by the GnuTLS secure communications library as insecure. Consequently, applications that use GnuTLS as a TLS backend cannot establish a TLS connection to peers that offer such certificates. This behavior is inconsistent with other system cryptographic libraries.
To work around this problem, upgrade the server to use certificates signed with SHA-256 or a stronger hash, or switch to the LEGACY policy.
Bugzilla:1628553
libselinux-python is available only through its module
The libselinux-python package contains only Python 2 bindings for developing SELinux applications and it is used for backward compatibility. For this reason, libselinux-python is no longer available in the default RHEL 8 repositories through the yum install libselinux-python command.
To work around this problem, enable both the libselinux-python and python27 modules, and install the libselinux-python package and its dependencies with the following commands:
# yum module enable libselinux-python
# yum install libselinux-python
Alternatively, install libselinux-python using its install profile with a single command:
# yum module install libselinux-python:2.8/common
As a result, you can install libselinux-python using the respective module.
Bugzilla:1666328
udica processes UBI 8 containers only when started with --env container=podman
The Red Hat Universal Base Image 8 (UBI 8) containers set the container environment variable to the oci value instead of the podman value. This prevents the udica tool from analyzing a container JavaScript Object Notation (JSON) file.
To work around this problem, start a UBI 8 container using a podman command with the --env container=podman parameter. As a result, udica can generate an SELinux policy for a UBI 8 container only when you use the described workaround.
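A sketch of the workaround; the container name, image, and policy name are illustrative:
# podman run --env container=podman --name my_container -dt ubi8 bash
# podman inspect my_container > my_container.json
# udica -j my_container.json my_container_policy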
Negative effects of the default logging setup on performance
The default logging environment setup might consume 4 GB of memory or even more, and adjustments of rate-limit values are complex when systemd-journald is running with rsyslog.
See the Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article for more information.
Jira:RHELPLAN-10431
SELINUX=disabled in /etc/selinux/config does not work properly
Disabling SELinux using the SELINUX=disabled option in the /etc/selinux/config file results in a process in which the kernel boots with SELinux enabled and switches to disabled mode later in the boot process. This might cause memory leaks.
To work around this problem, disable SELinux by adding the selinux=0 parameter to the kernel command line, as described in the Changing SELinux modes at boot time section of the Using SELinux title, if your scenario really requires SELinux to be completely disabled.
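For example, to add the parameter to all kernel entries with grubby:
# grubby --update-kernel=ALL --args="selinux=0"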
Jira:RHELPLAN-34199
IKE over TCP connections do not work on custom TCP ports
The tcp-remoteport Libreswan configuration option does not work properly. Consequently, an IKE over TCP connection cannot be established when a scenario requires specifying a non-default TCP port.
scap-security-guide cannot configure termination of idle sessions
Even though the sshd_set_idle_timeout rule still exists in the data stream, the former method of configuring the idle session timeout in sshd is no longer available. Therefore, the rule is marked as not applicable and cannot harden anything. Other methods for configuring idle session termination, such as systemd (Logind), are also not available. As a consequence, scap-security-guide cannot configure the system to reliably disconnect idle sessions after a certain amount of time.
You can work around this problem in one of the following ways, which might fulfill the security requirement:
- Configuring the accounts_tmout rule. However, this variable could be overridden by using the exec command.
- Configuring the configure_tmux_lock_after_time and configure_bashrc_exec_tmux rules. This requires installing the tmux package.
- Upgrading to RHEL 8.7 or later where the systemd feature is already implemented together with the proper SCAP rule.
The OSCAP Anaconda add-on does not fetch tailored profiles in the graphical installation
The OSCAP Anaconda add-on does not provide an option to select or deselect tailoring of security profiles in the RHEL graphical installation. Starting from RHEL 8.8, the add-on does not take tailoring into account by default when installing from archives or RPM packages. Consequently, the installation displays the following error message instead of fetching an OSCAP tailored profile:
There was an unexpected problem with the supplied content.
To work around this problem, you must specify paths in the %addon org_fedora_oscap section of your Kickstart file, for example:
xccdf-path = /usr/share/xml/scap/sc_tailoring/ds-combined.xml
tailoring-path = /usr/share/xml/scap/sc_tailoring/tailoring-xccdf.xml
As a result, you can use the graphical installation for OSCAP tailored profiles only with the corresponding Kickstart specifications.
The automatic screen lock does not work when a smart-card reader is removed
The opensc
packages incorrectly handle removing USB smart-card readers. Consequently, the system remains unlocked even when the GNOME Display Manager (GDM) is configured to lock the screen when a smart card is removed. Furthermore, after you reconnect the USB reader, the screen also does not lock after removing the smart card.
To work around this problem, perform one of the following actions:
- Always remove only a smart card, not a smart-card reader.
- When using hardware tokens that integrate a reader and a card in one package, upgrade to RHEL 9.
OpenSCAP memory-consumption problems
On systems with limited memory, the OpenSCAP scanner might terminate prematurely or it might not generate the results files. To work around this problem, you can customize the scanning profile to deselect rules that involve recursion over the entire / file system:
- rpm_verify_hashes
- rpm_verify_permissions
- rpm_verify_ownership
- file_permissions_unauthorized_world_writable
- no_files_unowned_by_user
- dir_perms_world_writable_system_owned
- file_permissions_unauthorized_suid
- file_permissions_unauthorized_sgid
- file_permissions_ungroupowned
- dir_perms_world_writable_sticky_bits
For more details and more workarounds, see the related Knowledgebase article.
Remediating service-related rules during kickstart installations might fail
During a kickstart installation, the OpenSCAP utility sometimes incorrectly shows that a service enable or disable state remediation is not needed. Consequently, OpenSCAP might set the services on the installed system to a non-compliant state. As a workaround, you can scan and remediate the system after the kickstart installation. This will fix the service-related issues.
11.7. Networking
Systems with the IPv6_rpfilter option enabled experience low network throughput
Systems with the IPv6_rpfilter option enabled in the firewalld.conf file currently experience suboptimal performance and low network throughput in high traffic scenarios, such as 100 Gbps links. To work around the problem, disable the IPv6_rpfilter option. To do so, add the following line in the /etc/firewalld/firewalld.conf file:
IPv6_rpfilter=no
As a result, the system performs better, but also has reduced security.
Bugzilla:1871860
11.8. Kernel
The kernel ACPI driver reports it has no access to a PCIe ECAM memory region
The Advanced Configuration and Power Interface (ACPI) table provided by firmware does not define a memory region on the PCI bus in the Current Resource Settings (_CRS) method for the PCI bus device. Consequently, the following warning message occurs during the system boot:
[ 2.817152] acpi PNP0A08:00: [Firmware Bug]: ECAM area [mem 0x30000000-0x31ffffff] not reserved in ACPI namespace [ 2.827911] acpi PNP0A08:00: ECAM at [mem 0x30000000-0x31ffffff] for [bus 00-1f]
However, the kernel is still able to access the 0x30000000-0x31ffffff memory region, and can assign that memory region to the PCI Enhanced Configuration Access Mechanism (ECAM) properly. You can verify that PCI ECAM works correctly by accessing the PCIe configuration space over the 256 byte offset with the following output:
03:00.0 Non-Volatile memory controller: Sandisk Corp WD Black 2018/PC SN720 NVMe SSD (prog-if 02 [NVM Express]) ... Capabilities: [900 v1] L1 PM Substates L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1- L1_PM_Substates+ PortCommonModeRestoreTime=255us PortTPowerOnTime=10us L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1- T_CommonMode=0us LTR1.2_Threshold=0ns L1SubCtl2: T_PwrOn=10us
As a result, you can ignore the warning message.
For more information about the problem, see the "Firmware Bug: ECAM area mem 0x30000000-0x31ffffff not reserved in ACPI namespace" appears during system boot solution.
Bugzilla:1868526
The tuned-adm profile powersave command causes the system to become unresponsive
Executing the tuned-adm profile powersave command leads to an unresponsive state of the Penguin Valkyrie 2000 2-socket systems with the older Thunderx (CN88xx) processors. Consequently, you must reboot the system to resume working. To work around this problem, avoid using the powersave profile if your system matches the mentioned specifications.
Bugzilla:1609288
The HP NMI watchdog does not always generate a crash dump
In certain cases, the hpwdt driver for the HP NMI watchdog is not able to claim a non-maskable interrupt (NMI) generated by the HPE watchdog timer because the NMI was instead consumed by the perfmon driver.
The missing NMI is initiated by one of two conditions:
- The Generate NMI button on the Integrated Lights-Out (iLO) server management software. This button is triggered by a user.
- The hpwdt watchdog. The expiration by default sends an NMI to the server.
Both sequences typically occur when the system is unresponsive. Under normal circumstances, the NMI handler for both these situations calls the kernel panic() function and, if configured, the kdump service generates a vmcore file.
Because of the missing NMI, however, kernel panic() is not called and vmcore is not collected.
In the first case (1.), if the system was unresponsive, it remains so. To work around this scenario, use the virtual Power button to reset or power cycle the server.
In the second case (2.), the missing NMI is followed 9 seconds later by a reset from the Automated System Recovery (ASR).
The HPE Gen9 server line experiences this problem in single-digit percentages; the Gen10 line experiences it at an even lower frequency.
Bugzilla:1602962
Reloading an identical crash extension may cause segmentation faults
When you load a copy of an already loaded crash extension file, it might trigger a segmentation fault. Currently, the crash utility detects if an original file has been loaded. Consequently, due to two identical files co-existing in the crash utility, a namespace collision occurs, which triggers the crash utility to cause a segmentation fault.
You can work around the problem by loading the crash extension file only once. As a result, segmentation faults no longer occur in the described scenario.
Connections fail when attaching a virtual function to virtual machine
Pensando network cards that use the ionic device driver silently accept VLAN tag configuration requests and attempt configuring network connections while attaching network virtual functions (VF) to a virtual machine (VM). Such network connections fail as this feature is not yet supported by the card's firmware.
Bugzilla:1930576
The OPEN MPI library may trigger run-time failures with default PML
In the OPEN Message Passing Interface (OPEN MPI) implementation 4.0.x series, Unified Communication X (UCX) is the default point-to-point messaging layer (PML). Later versions of the OPEN MPI 4.0.x series deprecated the openib Byte Transfer Layer (BTL).
However, when OPEN MPI runs over a homogeneous cluster (same hardware and software configuration), UCX still uses openib BTL for MPI one-sided operations. As a consequence, this may trigger execution errors. To work around this problem:
- Run the mpirun command using the following parameters:
-mca btl ^openib -mca pml ucx -x UCX_NET_DEVICES=mlx5_ib0
where:
- The -mca btl ^openib parameter disables the openib BTL.
- The -mca pml ucx parameter configures OPEN MPI to use ucx PML.
- The -x UCX_NET_DEVICES= parameter restricts UCX to use the specified devices.
When OPEN MPI runs over a heterogeneous cluster (different hardware and software configuration), it uses UCX as the default PML. As a consequence, this may cause the OPEN MPI jobs to run with erratic performance, unresponsive behavior, or crash failures. To work around this problem, set the UCX priority:
- Run the mpirun command using the following parameters:
-mca pml_ucx_priority 5
As a result, the OPEN MPI library is able to choose an alternative available transport layer over UCX.
Bugzilla:1866402
vmcore capture fails after memory hot-plug or unplug operation
After you perform a memory hot-plug or hot-unplug operation, the event arrives only after the device tree, which contains the memory layout information, has been updated. Consequently, the makedumpfile utility tries to access a non-existent physical address. The problem appears if all of the following conditions are met:
- A little-endian variant of IBM Power System runs RHEL 8.
- A little-endian variant of IBM Power System runs RHEL 8.
- The kdump or fadump service is enabled on the system.
Consequently, the capture kernel fails to save vmcore if a kernel crash is triggered after the memory hot-plug or hot-unplug operation.
To work around this problem, restart the kdump service after hot-plug or hot-unplug:
# systemctl restart kdump.service
As a result, vmcore is successfully saved in the described scenario.
Bugzilla:1793389
Using irqpoll causes vmcore generation failure
Due to an existing problem with the nvme driver on the 64-bit ARM architecture that runs on the Amazon Web Services Graviton 1 processor, vmcore generation fails when you provide the irqpoll kernel command-line parameter to the first kernel. Consequently, no vmcore file is dumped in the /var/crash/ directory upon a kernel crash. To work around this problem:
- Append irqpoll to the KDUMP_COMMANDLINE_REMOVE variable in the /etc/sysconfig/kdump file:
KDUMP_COMMANDLINE_REMOVE="hugepages hugepagesz slub_debug quiet log_buf_len swiotlb irqpoll"
- Remove irqpoll from the KDUMP_COMMANDLINE_APPEND variable in the /etc/sysconfig/kdump file:
KDUMP_COMMANDLINE_APPEND="nr_cpus=1 reset_devices cgroup_disable=memory udev.children-max=2 panic=10 swiotlb=noforce novmcoredd"
- Restart the kdump service:
# systemctl restart kdump
As a result, the first kernel boots correctly and the vmcore file is expected to be captured upon the kernel crash.
Note that the Amazon Web Services Graviton 2 and Amazon Web Services Graviton 3 processors do not require you to manually remove the irqpoll parameter in the /etc/sysconfig/kdump file.
The kdump service can use a significant amount of crash kernel memory to dump the vmcore file. Ensure that the capture kernel has sufficient memory available for the kdump service.
For related information on this Known Issue, see The irqpoll kernel command line parameter might cause vmcore generation failure article.
Bugzilla:1654962
Debug kernel fails to boot in crash capture environment on RHEL 8
Due to the memory-intensive nature of the debug kernel, a problem occurs when the debug kernel is in use and a kernel panic is triggered. As a consequence, the debug kernel is not able to boot as the capture kernel and a stack trace is generated instead. To work around this problem, increase the crash kernel memory as required. As a result, the debug kernel boots successfully in the crash capture environment.
Bugzilla:1659609
Allocating crash kernel memory fails at boot time
On some Ampere Altra systems, allocating the crash kernel memory during boot fails when the 32-bit region is disabled in BIOS settings. Consequently, the kdump service fails to start. This is caused by memory fragmentation in the region below 4 GB with no fragment being large enough to contain the crash kernel memory.
To work around this problem, enable the 32-bit memory region in BIOS as follows:
- Open the BIOS settings on your system.
- Open the Chipset menu.
- Under Memory Configuration, enable the Slave 32-bit option.
As a result, crash kernel memory allocation within the 32-bit region succeeds and the kdump service works as expected.
Bugzilla:1940674
RoCE interfaces on IBM Z lose their IP settings due to an unexpected change of the network interface name
In RHEL 8.6 and earlier, the udev device manager on the IBM Z platform assigns unpredictable device names to RoCE interfaces that are enumerated by a unique identifier (UID). However, in RHEL 8.7 and later, udev assigns predictable device names with the eno prefix to these interfaces.
If you update from RHEL 8.6 or earlier to 8.7 or later, these UID-enumerated interfaces have new names and no longer match the device names in NetworkManager connection profiles. Consequently, these interfaces have no IP configuration after the update.
For workarounds you can apply before the update and a fix if you have already updated the system, see RoCE interfaces on IBM Z lose their IP settings after updating to RHEL 8.7 or later.
Bugzilla:2169382
The QAT manager leaves no spare device for LKCF
The Intel® QuickAssist Technology (QAT) manager (qatmgr) is a user space process, which by default uses all QAT devices in the system. As a consequence, there are no QAT devices left for the Linux Kernel Cryptographic Framework (LKCF). There is no need to work around this situation, as this behavior is expected and a majority of users will use acceleration from the user space.
Bugzilla:1920086
Solarflare NICs fail to create the maximum number of virtual functions (VFs)
Solarflare NICs fail to create the maximum number of VFs due to insufficient resources. You can check the maximum number of VFs that a PCIe device can create in the /sys/bus/pci/devices/PCI_ID/sriov_totalvfs file. To work around this problem, you can lower either the number of VFs or the VF MSI interrupt value, either from the Solarflare Boot Manager on startup or by using the Solarflare sfboot utility. The default VF MSI interrupt value is 8.
- To adjust the VF MSI interrupt value using sfboot:
# sfboot vf-msix-limit=2
Note that adjusting the VF MSI interrupt value affects the VF performance.
For more information about parameters to be adjusted accordingly, see the Solarflare Server Adapter user guide.
Bugzilla:1971506
Using page_poison=1 can cause a kernel crash
When you use page_poison=1 as a kernel parameter on firmware with a faulty EFI implementation, the kernel can crash. By default, this option is disabled, and it is not recommended to enable it, especially in production systems.
Bugzilla:2050411
The iwl7260-firmware breaks Wi-Fi on Intel Wi-Fi 6 AX200, AX210, and Lenovo ThinkPad P1 Gen 4
After updating the iwl7260-firmware or iwl7260-wifi driver to the version provided by RHEL 8.7 and later, the hardware gets into an incorrect internal state and reports its state incorrectly. Consequently, Intel Wi-Fi 6 cards may not work and display the error message:
kernel: iwlwifi 0000:09:00.0: Failed to start RT ucode: -110 kernel: iwlwifi 0000:09:00.0: WRT: Collecting data: ini trigger 13 fired (delay=0ms) kernel: iwlwifi 0000:09:00.0: Failed to run INIT ucode: -110
An unconfirmed workaround is to power the system off and on again. Do not reboot.
Bugzilla:2106341
Secure boot on IBM Power Systems does not support migration
Currently, on IBM Power Systems, logical partition (LPAR) does not boot after successful physical volume (PV) migration. As a result, any type of automated migration with secure boot enabled on a partition fails.
Bugzilla:2126777
weak-modules from kmod fails to work with module inter-dependencies
The weak-modules script provided by the kmod package determines which modules are kABI-compatible with installed kernels. However, while checking modules' kernel compatibility, weak-modules processes module symbol dependencies from the higher to the lower release of the kernel for which they were built. As a consequence, modules with inter-dependencies built against different kernel releases might be interpreted as non-compatible, and therefore the weak-modules script fails to work in this scenario.
To work around the problem, build or put the extra modules against the latest stock kernel before you install the new kernel.
Bugzilla:2103605
kdump in Ampere Altra servers enters the OOM state
The firmware in Ampere Altra and Altra Max servers currently causes the kernel to allocate too many event, interrupt, and command queues, which consumes too much memory. As a consequence, the kdump kernel enters the Out of memory (OOM) state.
To work around this problem, reserve extra memory for kdump by increasing the value of the crashkernel= kernel option to 640M.
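For example, to set the value with grubby and make it effective after a reboot:
# grubby --update-kernel=ALL --args="crashkernel=640M"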
Bugzilla:2111855
Hardware certification of the real-time kernel on systems with large core-counts might require passing the skew_tick=1 boot parameter to avoid lock contentions
Large or moderate sized systems with numerous sockets and large core-counts can experience latency spikes due to lock contentions on xtime_lock, which is used in the timekeeping system. As a consequence, latency spikes and delays in hardware certifications might occur on multiprocessing systems. As a workaround, you can offset the timer tick per CPU to start at a different time by adding the skew_tick=1 boot parameter.
To avoid lock conflicts, enable skew_tick=1:
- Enable the skew_tick=1 parameter with grubby:
# grubby --update-kernel=ALL --args="skew_tick=1"
- Reboot for changes to take effect.
- Verify the new settings by running the cat /proc/cmdline command.
Note that enabling skew_tick=1 causes a significant increase in power consumption and, therefore, it must be enabled only if you are running latency-sensitive real-time workloads.
Bugzilla:2214508
11.9. Boot loader
The behavior of grubby diverges from its documentation
When you add a new kernel using the grubby tool and do not specify any arguments, grubby passes the default arguments to the new entry. This behavior occurs even without passing the --copy-default argument. Using the --args and --copy-default options ensures those arguments are appended to the default arguments as stated in the grubby documentation.
However, when you add additional arguments, such as $tuned_params, the grubby tool does not pass these arguments unless the --copy-default option is invoked.
In this situation, two workarounds are available:
- Either set the root= argument and leave --args empty:
# grubby --add-kernel /boot/my_kernel --initrd /boot/my_initrd --args "root=/dev/mapper/rhel-root" --title "entry_with_root_set"
- Or set the root= argument and the specified arguments, but not the default ones:
# grubby --add-kernel /boot/my_kernel --initrd /boot/my_initrd --args "root=/dev/mapper/rhel-root some_args and_some_more" --title "entry_with_root_set_and_other_args_too"
11.10. File systems and storage
LVM mirror devices that store a LUKS volume sometimes become unresponsive
Mirrored LVM devices with a segment type of mirror that store a LUKS volume might become unresponsive under certain conditions. The unresponsive devices reject all I/O operations.
To work around the issue, Red Hat recommends that you use LVM RAID 1 devices with a segment type of raid1 instead of mirror if you need to stack LUKS volumes on top of resilient software-defined storage.
The raid1 segment type is the default RAID configuration type and replaces mirror as the recommended solution.
To convert mirror devices to raid1, see Converting a mirrored LVM device to a RAID1 device.
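A minimal sketch of such a conversion; the volume group and logical volume names are illustrative:
# lvconvert --type raid1 my_vg/my_lv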
Bugzilla:1730502
The /boot file system cannot be placed on LVM
You cannot place the /boot file system on an LVM logical volume. This limitation exists for the following reasons:
- On EFI systems, the EFI System Partition conventionally serves as the /boot file system. The uEFI standard requires a specific GPT partition type and a specific file system type for this partition.
- RHEL 8 uses the Boot Loader Specification (BLS) for system boot entries. This specification requires that the /boot file system is readable by the platform firmware. On EFI systems, the platform firmware can read only the /boot configuration defined by the uEFI standard.
- The support for LVM logical volumes in the GRUB 2 boot loader is incomplete. Red Hat does not plan to improve the support because the number of use cases for the feature is decreasing due to standards such as uEFI and BLS.
Red Hat does not plan to support /boot on LVM. Instead, Red Hat provides tools for managing system snapshots and rollback that do not need the /boot file system to be placed on an LVM logical volume.
Bugzilla:1496229
LVM no longer allows creating volume groups with mixed block sizes
LVM utilities such as vgcreate or vgextend no longer allow you to create volume groups (VGs) where the physical volumes (PVs) have different logical block sizes. LVM has adopted this change because file systems fail to mount if you extend the underlying logical volume (LV) with a PV of a different block size.
To re-enable creating VGs with mixed block sizes, set the allow_mixed_block_sizes=1 option in the lvm.conf file.
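A sketch of the setting, assuming the allow_mixed_block_sizes option belongs in the devices section of /etc/lvm/lvm.conf:
devices { allow_mixed_block_sizes = 1 }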
Limitations of LVM writecache
The writecache LVM caching method has the following limitations, which are not present in the cache method:
- You cannot name a writecache logical volume when using pvmove commands.
- You cannot use logical volumes with writecache in combination with thin pools or VDO.
The following limitation also applies to the cache method:
- You cannot resize a logical volume while cache or writecache is attached to it.
Jira:RHELPLAN-27987, Bugzilla:1798631, Bugzilla:1808012
Device-mapper multipath is not supported when using the NVMe/TCP driver
Using device-mapper multipath on top of NVMe/TCP devices can reduce performance and cause error-handling issues. To avoid this problem, use native NVMe multipath instead of the DM multipath tools. For RHEL 8, you can add the option nvme_core.multipath=Y to the kernel command line.
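For example, to add the option persistently with grubby:
# grubby --update-kernel=ALL --args="nvme_core.multipath=Y"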
Bugzilla:2022359
The blk-availability systemd service deactivates complex device stacks
In systemd, the default block deactivation code does not always handle complex stacks of virtual block devices correctly. In some configurations, virtual devices might not be removed during the shutdown, which causes error messages to be logged. To work around this problem, deactivate complex block device stacks by executing the following command:
# systemctl enable --now blk-availability.service
As a result, complex virtual device stacks are correctly deactivated during shutdown and do not produce error messages.
Bugzilla:2011699
XFS quota warnings are triggered too often
Using the quota timer results in quota warnings triggering too often, which causes soft quotas to be enforced faster than they should be. To work around this problem, do not use soft quotas, which will prevent triggering the warnings. As a result, the number of warning messages will not enforce the soft quota limit anymore, respecting the configured timeout.
Bugzilla:2059262
11.11. Dynamic programming languages, web and database servers
Creating virtual Python 3.11 environments fails when using the virtualenv utility
The virtualenv utility in RHEL 8, provided by the python3-virtualenv package, is not compatible with Python 3.11. An attempt to create a virtual environment by using virtualenv will fail with the following error message:
$ virtualenv -p python3.11 venv3.11 Running virtualenv with interpreter /usr/bin/python3.11 ERROR: Virtual environments created by virtualenv < 20 are not compatible with Python 3.11. ERROR: Use `python3.11 -m venv` instead.
To create Python 3.11 virtual environments, use the python3.11 -m venv command instead, which uses the venv module from the standard library.
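For example, to create and activate a Python 3.11 virtual environment:
$ python3.11 -m venv venv3.11
$ source venv3.11/bin/activate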
python3.11-lxml does not provide the lxml.isoschematron submodule
The python3.11-lxml package is distributed without the lxml.isoschematron submodule because it is not under an open source license. The submodule implements ISO Schematron support. As an alternative, pre-ISO-Schematron validation is available in the lxml.etree.Schematron class. The remaining content of the python3.11-lxml package is unaffected.
PAM plug-in version 1.0 does not work in MariaDB
MariaDB 10.3 provides the Pluggable Authentication Modules (PAM) plug-in version 1.0. MariaDB 10.5 provides the plug-in in versions 1.0 and 2.0; version 2.0 is the default.
The MariaDB PAM plug-in version 1.0 does not work in RHEL 8. To work around this problem, use the PAM plug-in version 2.0 provided by the mariadb:10.5 module stream.
Symbol conflicts between OpenLDAP libraries might cause crashes in httpd
When both the libldap and libldap_r libraries provided by OpenLDAP are loaded and used within a single process, symbol conflicts between these libraries might occur. Consequently, Apache httpd child processes using the PHP ldap extension might terminate unexpectedly if the mod_security or mod_auth_openidc modules are also loaded by the httpd configuration.
Since the RHEL 8.3 update to the Apache Portable Runtime (APR) library, you can work around the problem by setting the APR_DEEPBIND environment variable, which enables the use of the RTLD_DEEPBIND dynamic linker option when loading httpd modules. When the APR_DEEPBIND environment variable is enabled, crashes no longer occur in httpd configurations that load conflicting libraries.
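A sketch of the workaround, assuming your httpd.service reads /etc/sysconfig/httpd as an environment file; add the following line there and restart the service:
APR_DEEPBIND=1
# systemctl restart httpd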
Bugzilla:1819607
getpwnam() might fail when called by a 32-bit application
When a user of NIS uses a 32-bit application that calls the getpwnam() function, the call fails if the nss_nis.i686 package is missing. To work around this problem, manually install the missing package by using the yum install nss_nis.i686 command.
11.12. Identity Management
Actions required when running Samba as a print server and updating from RHEL 8.4 and earlier
With this update, the samba package no longer creates the /var/spool/samba/ directory. If you use Samba as a print server and use /var/spool/samba/ in the [printers] share to spool print jobs, SELinux prevents Samba users from creating files in this directory. Consequently, print jobs fail and the auditd service logs a denied message in /var/log/audit/audit.log. To avoid this problem after updating your system from 8.4 and earlier:
- Search for the [printers] share in the /etc/samba/smb.conf file.
- If the share definition contains path = /var/spool/samba/, update the setting and set the path parameter to /var/tmp/.
- Restart the smbd service:
# systemctl restart smbd
If you newly installed Samba on RHEL 8.5 or later, no action is required. The default /etc/samba/smb.conf file provided by the samba-common package in this case already uses the /var/tmp/ directory to spool print jobs.
Bugzilla:2009213
Using the cert-fix utility with the --agent-uid pkidbuser option breaks Certificate System
Using the cert-fix utility with the --agent-uid pkidbuser option corrupts the LDAP configuration of Certificate System. As a consequence, Certificate System might become unstable and manual steps are required to recover the system.
FIPS mode does not support using a shared secret to establish a cross-forest trust
Establishing a cross-forest trust using a shared secret fails in FIPS mode because NTLMSSP authentication is not FIPS-compliant. To work around this problem, authenticate with an Active Directory (AD) administrative account when establishing a trust between an IdM domain with FIPS mode enabled and an AD domain.
Downgrading authselect after the rebase to version 1.2.2 breaks system authentication
The authselect package has been rebased to the latest upstream version 1.2.2. Downgrading authselect is not supported and breaks system authentication for all users, including root.
If you downgraded the authselect package to 1.2.1 or earlier, perform the following steps to work around this problem:
- At the GRUB boot screen, select Red Hat Enterprise Linux with the version of the kernel that you want to boot and press e to edit the entry.
- Type single as a separate word at the end of the line that starts with linux and press Ctrl+X to start the boot process.
- Upon booting in single-user mode, enter the root password.
- Restore the authselect configuration using the following command:
# authselect select sssd --force
IdM to AD cross-realm TGS requests fail
The Privilege Attribute Certificate (PAC) information in IdM Kerberos tickets is now signed with AES SHA-2 HMAC encryption, which is not supported by Active Directory (AD).
Consequently, IdM to AD cross-realm TGS requests, that is, two-way trust setups, are failing with the following error:
Generic error (see e-text) while getting credentials for <service principal>
Potential risk when using the default value for the ldap_id_use_start_tls option
When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector, particularly a man-in-the-middle (MITM) attack, which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search.
Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls, defaults to false. Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap. Note that id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI.
If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL.
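A sketch of the setting in /etc/sssd/sssd.conf; the domain section name is illustrative:
[domain/example.com]
id_provider = ldap
ldap_id_use_start_tls = true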
Jira:RHELPLAN-155168
The default keyword for enabled ciphers in the NSS does not work in conjunction with other ciphers
In Directory Server, you can use the default keyword to refer to the default ciphers enabled in the network security services (NSS). However, if you want to enable the default ciphers and additional ones using the command line or web console, Directory Server fails to resolve the default keyword. As a consequence, the server enables only the additionally specified ciphers and logs an error similar to the following:
Security Initialization - SSL alert: Failed to set SSL cipher preference information: invalid ciphers <default,+cipher_name>: format is +cipher1,-cipher2... (Netscape Portable Runtime error 0 - no error)
As a workaround, specify all ciphers that are enabled by default in NSS including the ones you want to additionally enable.
pki-core-debuginfo update from RHEL 8.6 to RHEL 8.7 or later fails
Updating the pki-core-debuginfo package from RHEL 8.6 to RHEL 8.7 or later fails. To work around this problem, run the following commands:
- yum remove pki-core-debuginfo
- yum update -y
- yum install pki-core-debuginfo
- yum install idm-pki-symkey-debuginfo idm-pki-tools-debuginfo
Migrated IdM users might be unable to log in due to mismatching domain SIDs
If you have used the ipa migrate-ds script to migrate users from one IdM deployment to another, those users might have problems using IdM services because their previously existing Security Identifiers (SIDs) do not have the domain SID of the current IdM environment. For example, those users can retrieve a Kerberos ticket with the kinit utility, but they cannot log in. To work around this problem, see the following Knowledgebase article: Migrated IdM users unable to log in due to mismatching domain SIDs.
Jira:RHELPLAN-109613
IdM in FIPS mode does not support using the NTLMSSP protocol to establish a two-way cross-forest trust
Establishing a two-way cross-forest trust between Active Directory (AD) and Identity Management (IdM) with FIPS mode enabled fails because the New Technology LAN Manager Security Support Provider (NTLMSSP) authentication is not FIPS-compliant. IdM in FIPS mode does not accept the RC4 NTLM hash that the AD domain controller uses when attempting to authenticate.
IdM Vault encryption and decryption fails in FIPS mode
The OpenSSL RSA-PKCS1v15 padding encryption is blocked if FIPS mode is enabled. Consequently, Identity Management (IdM) Vaults fail to work correctly as IdM is currently using the PKCS1v15 padding for wrapping the session key with the transport certificate.
Incorrect warning when setting expiration dates for a Kerberos principal
If you set a password expiration date for a Kerberos principal, the current timestamp is compared to the expiration timestamp using a 32-bit signed integer variable. If the expiration date is more than 68 years in the future, it causes an integer variable overflow resulting in the following warning message being displayed:
Warning: Your password will expire in less than one hour on [expiration date]
You can ignore this message: the password will expire correctly at the configured date and time.
11.13. Desktop
Disabling flatpak repositories from Software Repositories is not possible
Currently, it is not possible to disable or remove flatpak repositories in the Software Repositories tool in the GNOME Software utility.
Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts
When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log:
The guest operating system reported that it failed with the following error code: 0x1E
This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 or later as the host.
Bugzilla:1583445
Drag-and-drop does not work between desktop and applications
Due to a bug in the gnome-shell-extensions
package, the drag-and-drop functionality does not currently work between desktop and applications. Support for this feature will be added back in a future release.
11.14. Graphics infrastructures
The radeon
driver fails to reset hardware correctly
The radeon
kernel driver currently does not reset hardware in the kexec
context correctly. Instead, radeon
falls over, which causes the rest of the kdump
service to fail.
To work around this problem, disable radeon
in kdump
by adding the following lines to the /etc/kdump.conf
file:
dracut_args --omit-drivers "radeon"
force_rebuild 1
Restart the system and kdump
. After starting kdump
, the force_rebuild 1
line can be removed from the configuration file.
Note that in this scenario, no graphical output is available during the dump process, but kdump
works correctly.
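For example, to restart the service after editing the configuration file (a minimal sketch; a full system restart as described above also applies the change):
# systemctl restart kdump.service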
Bugzilla:1694705
Multiple HDR displays on a single MST topology may not power on
On systems using NVIDIA Turing GPUs with the nouveau
driver, using a DisplayPort
hub (such as a laptop dock) with multiple HDR-capable monitors plugged into it might result in the displays failing to turn on. This happens because the system erroneously determines that there is not enough bandwidth on the hub to support all of the displays.
Bugzilla:1812577
GUI in ESXi might crash due to low video memory
The graphical user interface (GUI) on RHEL virtual machines (VMs) in the VMware ESXi 7.0.1 hypervisor with vCenter Server 7.0.1 requires a certain amount of video memory. If you connect multiple consoles or high-resolution monitors to the VM, the GUI requires at least 16 MB of video memory. If you start the GUI with less video memory, the GUI might terminate unexpectedly.
To work around the problem, configure the hypervisor to assign at least 16 MB of video memory to the VM. As a result, the GUI on the VM no longer crashes.
If you encounter this issue, Red Hat recommends that you report it to VMware.
See also the following VMware article: VMs with high resolution VM console may experience a crash on ESXi 7.0.1 (83194).
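For illustration, a sketch of the corresponding setting in the VM's .vmx configuration file (the svga.vramSize option and its byte value are an assumption based on common VMware configurations; 16 MB equals 16777216 bytes):
svga.vramSize = "16777216"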
Bugzilla:1910358
VNC Viewer displays wrong colors with the 16-bit color depth on IBM Z
The VNC Viewer application displays wrong colors when you connect to a VNC session on an IBM Z server with the 16-bit color depth.
To work around the problem, set the 24-bit color depth on the VNC server. With the Xvnc
server, replace the -depth 16
option with -depth 24
in the Xvnc
configuration.
As a result, VNC clients display the correct colors, but the session uses more network bandwidth between the client and the server.
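For example, with the TigerVNC server, a minimal sketch of the per-user configuration (the ~/.vnc/config path is an assumption based on TigerVNC defaults):
depth=24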
Unable to run graphical applications using sudo
command
When trying to run graphical applications as a user with elevated privileges, the application fails to open with an error message. The failure happens because Xwayland
is restricted by the Xauthority
file to use regular user credentials for authentication.
To work around this problem, use the sudo -E
command to run graphical applications as a root
user.
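For example (the application name is illustrative):
$ sudo -E gedit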
Hardware acceleration is not supported on ARM
Built-in graphics drivers do not support hardware acceleration or the Vulkan API on the 64-bit ARM architecture.
To enable hardware acceleration or Vulkan on ARM, install the proprietary NVIDIA driver.
Jira:RHELPLAN-57914
The installer freezes on servers with ASPEED 2600
When you start the graphical RHEL 8.8 installer on a server with the ASPEED 2600 On System Management Chipset, the installer becomes unresponsive with a black screen. Consequently, you cannot install RHEL 8.8 on the server.
To work around the issue, add either of the following options on the kernel command line when booting the installer:
-
nomodeset
-
drm_kms_helper.edid_firmware=edid/1024x768.bin
As a result, the installation proceeds as expected.
Bugzilla:2189645
11.15. The web console
VNC console works incorrectly at certain resolutions
When using the Virtual Network Computing (VNC) console under certain display resolutions, you might experience a mouse offset issue or you might see only a part of the interface. Consequently, using the VNC console might not be possible. To work around this issue, you can try expanding the size of the VNC console or use the Desktop Viewer in the console tab to launch the remote viewer instead.
11.16. Red Hat Enterprise Linux system roles
Using the RHEL system role with Ansible 2.9 can display a warning about using dnf
with the command
module
Since RHEL 8.8, the RHEL system roles no longer use the warn
parameter with the dnf
module because this parameter was removed in Ansible Core 2.14. However, if you still use the latest rhel-system-roles
package with Ansible 2.9 and a role installs a package, one of the following warnings can be displayed:
[WARNING]: Consider using the dnf module rather than running 'dnf'. If you need to use command because dnf is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
[WARNING]: Consider using the yum, dnf or zypper module rather than running 'rpm'. If you need to use command because yum, dnf or zypper is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
If you want to hide these warnings, add the command_warnings = False
setting to the [defaults]
section of the ansible.cfg
file. However, note that this setting disables all warnings in Ansible.
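For example, the resulting section looks as follows:
[defaults]
command_warnings = False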
Unable to manage localhost
by using the localhost
hostname in the playbook or inventory
With the inclusion of the ansible-core 2.13
package in RHEL, if you are running Ansible on the same host that you manage, you cannot do so by using the localhost
hostname in your playbook or inventory. This happens because ansible-core 2.13
uses the python38
module, and many of the libraries are missing, for example, blivet
for the storage
role, gobject
for the network
role. To work around this problem, if you are already using the localhost
hostname in your playbook or inventory, you can add a connection, by using ansible_connection=local
, or by creating an inventory file that lists localhost
with the ansible_connection=local
option. With that, you are able to manage resources on localhost
. For more details, see the article RHEL system roles playbooks fail when run on localhost.
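For example, a minimal inventory entry that applies this option:
localhost ansible_connection=local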
If firewalld.service
is masked, using the firewall
RHEL system role fails
If firewalld.service
is masked on a RHEL system, the firewall
RHEL system role fails. To work around this problem, unmask the firewalld.service
:
systemctl unmask firewalld.service
The rhc
system role fails on already registered systems when rhc_auth
contains activation keys
Executing playbook files on already registered systems fails if activation keys are specified for the rhc_auth
parameter. To work around this issue, do not specify activation keys when executing the playbook file on an already registered system.
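For illustration, a sketch of a rhc_auth setting that uses login credentials instead of activation keys on an already registered system (the exact variable layout is an assumption; verify it against the documentation of your rhel-system-roles version):
rhc_auth:
  login:
    username: example_user
    password: example_password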
11.17. Virtualization
Using a large number of queues might cause Windows virtual machines to fail
Windows virtual machines (VMs) might fail when the virtual Trusted Platform Module (vTPM) device is enabled and the multi-queue virtio-net feature is configured to use more than 250 queues.
This problem is caused by a limitation in the vTPM device. The vTPM device has a hardcoded limit on the maximum number of opened file descriptors. Since multiple file descriptors are opened for every new queue, the internal vTPM limit can be exceeded, causing the VM to fail.
To work around this problem, choose one of the following two options:
- Keep the vTPM device enabled, but use fewer than 250 queues (see the sketch after this list).
- Disable the vTPM device to use more than 250 queues.
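For illustration, a domain XML sketch that keeps the vTPM device enabled and limits the multi-queue virtio-net device to a safe queue count (the queue number and network name are illustrative):
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost' queues='128'/>
</interface>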
The Milan
VM CPU type is sometimes not available on AMD Milan systems
On certain AMD Milan systems, the Enhanced REP MOVSB (erms
) and Fast Short REP MOVSB (fsrm
) feature flags are disabled in the BIOS by default. Consequently, the Milan
CPU type might not be available on these systems. In addition, VM live migration between Milan hosts with different feature flag settings might fail. To work around these problems, manually turn on erms
and fsrm
in the BIOS of your host.
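To verify the current state of the flags on the host, you can, for example, check /proc/cpuinfo:
$ grep -o -e erms -e fsrm /proc/cpuinfo | sort -u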
Bugzilla:2077770
SMT CPU topology is not detected by VMs when using host passthrough mode on AMD EPYC
When a virtual machine (VM) boots with the CPU host passthrough mode on an AMD EPYC host, the TOPOEXT
CPU feature flag is not present. Consequently, the VM is not able to detect a virtual CPU topology with multiple threads per core. To work around this problem, boot the VM with the EPYC CPU model instead of host passthrough.
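For illustration, a libvirt domain XML sketch that uses the EPYC model instead of host passthrough:
<cpu mode='custom' match='exact'>
  <model fallback='allow'>EPYC</model>
</cpu>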
Attaching LUN devices to virtual machines using virtio-blk does not work
The q35 machine type does not support transitional virtio 1.0 devices, and RHEL 8 therefore lacks support for features that were deprecated in virtio 1.0. In particular, it is not possible on a RHEL 8 host to send SCSI commands from virtio-blk devices. As a consequence, attaching a physical disk as a LUN device to a virtual machine fails when using the virtio-blk controller.
Note that physical disks can still be passed through to the guest operating system, but they should be configured with the device='disk'
option rather than device='lun'
.
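For illustration, a sketch of a disk definition that uses device='disk' (the source device path is an example):
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sdb'/>
  <target dev='vdb' bus='virtio'/>
</disk>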
Bugzilla:1777138
Virtual machines sometimes fail to start when using many virtio-blk disks
Adding a large number of virtio-blk devices to a virtual machine (VM) may exhaust the number of interrupt vectors available in the platform. If this occurs, the VM’s guest OS fails to boot, and displays a dracut-initqueue[392]: Warning: Could not boot
error.
Virtual machines with iommu_platform=on
fail to start on IBM POWER
RHEL 8 currently does not support the iommu_platform=on
parameter for virtual machines (VMs) on IBM POWER systems. As a consequence, starting a VM with this parameter on IBM POWER hardware results in the VM becoming unresponsive during the boot process.
IBM POWER hosts now work correctly when using the ibmvfc
driver
When running RHEL 8 on a PowerVM logical partition (LPAR), a variety of errors could previously occur due to problems with the ibmvfc
driver. As a consequence, a kernel panic could be triggered on the host under certain circumstances, such as:
- Using the Live Partition Mobility (LPM) feature
- Resetting a host adapter
- Using SCSI error handling (SCSI EH) functions
With this update, the handling of ibmvfc
has been fixed, and the described kernel panics no longer occur.
Bugzilla:1961722
Using perf kvm record
on IBM POWER Systems can cause the VM to crash
When using a RHEL 8 host on the little-endian variant of IBM POWER hardware, using the perf kvm record
command to collect trace event samples for a KVM virtual machine (VM) in some cases results in the VM becoming unresponsive. This situation occurs when:
-
The
perf
utility is used by an unprivileged user, and the -p
option is used to identify the VM - for example, perf kvm record -e trace_cycles -p 12345
. -
The VM was started using the
virsh
shell.
To work around this problem, use the perf kvm
utility with the -i
option to monitor VMs that were created using the virsh
shell. For example:
# perf kvm record -e trace_imc/trace_cycles/ -p <guest pid> -i
Note that when using the -i
option, child tasks do not inherit counters, and threads will therefore not be monitored.
Bugzilla:1924016
Windows Server 2016 virtual machines with Hyper-V enabled fail to boot when using certain CPU models
Currently, it is not possible to boot a virtual machine (VM) that uses Windows Server 2016 as the guest operating system, has the Hyper-V role enabled, and uses one of the following CPU models:
- EPYC-IBPB
- EPYC
To work around this problem, use the EPYC-v3 CPU model, or manually enable the xsaves CPU flag for the VM.
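For illustration, a domain XML sketch that requires the xsaves flag on the EPYC model:
<cpu mode='custom' match='exact'>
  <model fallback='allow'>EPYC</model>
  <feature policy='require' name='xsaves'/>
</cpu>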
Bugzilla:1942888
Migrating a POWER9 guest from a RHEL 7-ALT host to RHEL 8 fails
Currently, when you migrate a POWER9 virtual machine from a RHEL 7-ALT host system to RHEL 8, the migration becomes unresponsive with a Migration status: active
status.
To work around this problem, disable Transparent Huge Pages (THP) on the RHEL 7-ALT host, which enables the migration to complete successfully.
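For example, to disable THP on the host at runtime (a minimal sketch; the setting does not persist across reboots):
# echo never > /sys/kernel/mm/transparent_hugepage/enabled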
Bugzilla:1741436
Using virt-customize
sometimes causes guestfs-firstboot
to fail
After modifying a virtual machine (VM) disk image using the virt-customize
utility, the guestfs-firstboot
service in some cases fails due to incorrect SELinux permissions. This causes a variety of problems during VM startup, such as failing user creation or system registration.
To avoid this problem, use the virt-customize
command with the --selinux-relabel
option.
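For example (the image name and customization step are illustrative):
# virt-customize -a rhel8-guest.qcow2 --run-command 'systemctl enable sshd' --selinux-relabel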
Deleting a forward interface from a macvtap virtual network resets all connection counts of this network
Currently, deleting a forward interface from a macvtap
virtual network with multiple forward interfaces also resets the connection status of the other forward interfaces of the network. As a consequence, the connection information in the live network XML is incorrect. Note, however, that this does not affect the functionality of the virtual network. To work around the issue, restart the libvirtd
service on your host.
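For example:
# systemctl restart libvirtd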
Virtual machines with SLOF fail to boot in netcat interfaces
When using a netcat (nc
) interface to access the console of a virtual machine (VM) that is currently waiting at the Slimline Open Firmware (SLOF) prompt, the user input is ignored and the VM stays unresponsive. To work around this problem, use the nc -C
option when connecting to the VM, or use a telnet interface instead.
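For example (the host and port are illustrative):
$ nc -C localhost 4555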
Bugzilla:1974622
Attaching mediated devices to virtual machines in virt-manager
in some cases fails
The virt-manager
application is currently able to detect mediated devices, but cannot recognize whether the device is active. As a consequence, attempting to attach an inactive mediated device to a running virtual machine (VM) using virt-manager
fails. Similarly, attempting to create a new VM that uses an inactive mediated device fails with a device not found
error.
To work around this issue, use the virsh nodedev-start
or mdevctl start
commands to activate the mediated device before using it in virt-manager
.
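For example (the UUID is illustrative):
# mdevctl start -u 30820a6f-b1a5-4503-91ca-0c10ba58692a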
RHEL 9 virtual machines fail to boot in POWER8 compatibility mode
Currently, booting a virtual machine (VM) that runs RHEL 9 as its guest operating system fails if the VM also uses a CPU configuration similar to the following:
<cpu mode="host-model">
  <model>power8</model>
</cpu>
To work around this problem, do not use POWER8 compatibility mode in RHEL 9 VMs.
In addition, note that running RHEL 9 VMs is not possible on POWER8 hosts.
SUID and SGID are not cleared automatically on virtiofs
When you run the virtiofsd
service with the killpriv_v2
feature, your system might not automatically clear the SUID and SGID permissions after performing some file-system operations. Consequently, the retained permissions pose a potential security threat. To work around this issue, disable the killpriv_v2
feature by entering the following command:
# virtiofsd -o no_killpriv_v2
Bugzilla:1966475
Restarting the OVS service on a host might block network connectivity on its running VMs
When the Open vSwitch (OVS) service restarts or crashes on a host, virtual machines (VMs) that are running on this host cannot recover the state of the networking device. As a consequence, VMs might be completely unable to receive packets.
This problem only affects systems that use the packed virtqueue format in their virtio
networking stack.
To work around this problem, use the packed=off
parameter in the virtio
networking device definition to disable packed virtqueue. With packed virtqueue disabled, the state of the networking device can, in some situations, be recovered from RAM.
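For illustration, a sketch of a virtio network interface definition with packed virtqueue disabled (the network name is illustrative):
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost' packed='off'/>
</interface>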
NFS failure during VM migration causes migration failure and source VM coredump
Currently, if the NFS service or server is shut down during virtual machine (VM) migration, the source VM’s QEMU is unable to reconnect to the NFS server when it starts running again. As a result, the migration fails and a coredump is initiated on the source VM. Currently, there is no workaround available.
Hotplugging a Watchdog card to a virtual machine fails
Currently, if there are no PCI slots available, adding a Watchdog card to a running virtual machine (VM) fails with the following error:
Failed to configure watchdog
ERROR Error attempting device hotplug: internal error: No more available PCI slots
To work around this problem, shut down the VM before adding the Watchdog card.
11.18. RHEL in cloud environments
Setting static IP in a RHEL virtual machine on a VMware host does not work
Currently, when using RHEL as a guest operating system of a virtual machine (VM) on a VMware host, the DatasourceOVF function does not work correctly. As a consequence, if you use the cloud-init
utility to set the VM’s network to static IP and then reboot the VM, the VM’s network will be changed to DHCP.
To work around this issue, see the VMware knowledgebase.
kdump sometimes does not start on Azure and Hyper-V
On RHEL 8 guest operating systems hosted on the Microsoft Azure or Hyper-V hypervisors, starting the kdump
kernel in some cases fails when post-exec notifiers are enabled.
To work around this problem, disable crash kexec post notifiers:
# echo N > /sys/module/kernel/parameters/crash_kexec_post_notifiers
Bugzilla:1865745
The SCSI host address sometimes changes when booting a Hyper-V VM with multiple guest disks
Currently, when booting a RHEL 8 virtual machine (VM) on the Hyper-V hypervisor, the host portion of the Host, Bus, Target, Lun (HBTL) SCSI address in some cases changes. As a consequence, automated tasks set up with the HBTL SCSI identification or device node in the VM do not work consistently. This occurs if the VM has more than one disk or if the disks have different sizes.
To work around the problem, modify your kickstart files, using one of the following methods:
Method 1: Use persistent identifiers for SCSI devices.
For example, you can use the following PowerShell script to determine the specific device identifiers:
# Outputs the /dev/disk/by-id/<value> for the specified Hyper-V virtual disk.
# Takes a single parameter, which is the virtual disk file.
# Note: kickstart syntax works with and without the /dev/ prefix.
param (
    [Parameter(Mandatory=$true)][string]$virtualdisk
)

$what = Get-VHD -Path $virtualdisk
$part = $what.DiskIdentifier.ToLower().split('-')
$p = $part[0]
$s0 = $p[6] + $p[7] + $p[4] + $p[5] + $p[2] + $p[3] + $p[0] + $p[1]
$p = $part[1]
$s1 = $p[2] + $p[3] + $p[0] + $p[1]
[string]::format("/dev/disk/by-id/wwn-0x60022480{0}{1}{2}", $s0, $s1, $part[4])
You can use this script on the Hyper-V host, for example as follows:
PS C:\Users\Public\Documents\Hyper-V\Virtual hard disks> .\by-id.ps1 .\Testing_8\disk_3_8.vhdx
/dev/disk/by-id/wwn-0x60022480e00bc367d7fd902e8bf0d3b4
PS C:\Users\Public\Documents\Hyper-V\Virtual hard disks> .\by-id.ps1 .\Testing_8\disk_3_9.vhdx
/dev/disk/by-id/wwn-0x600224807270e09717645b1890f8a9a2
Afterwards, the disk values can be used in the kickstart file, for example as follows:
part / --fstype=xfs --grow --asprimary --size=8192 --ondisk=/dev/disk/by-id/wwn-0x600224807270e09717645b1890f8a9a2
part /home --fstype="xfs" --grow --ondisk=/dev/disk/by-id/wwn-0x60022480e00bc367d7fd902e8bf0d3b4
As these values are specific for each virtual disk, the configuration needs to be done for each VM instance. It may, therefore, be useful to use the %include
syntax to place the disk information into a separate file.
Method 2: Set up device selection by size.
A kickstart file that configures disk selection based on size must include lines similar to the following:
...
# Disk partitioning information is supplied in a file to kick start
%include /tmp/disks
...
# Partition information is created during install using the %pre section
%pre --interpreter /bin/bash --log /tmp/ks_pre.log

# Dump whole SCSI/IDE disks out sorted from smallest to largest, outputting
# just the name
disks=(`lsblk -n -o NAME -l -b -x SIZE -d -I 8,3`) || exit 1

# We assume there are 3 disks which will be used
# and create variables to represent them
d0=${disks[0]}
d1=${disks[1]}
d2=${disks[2]}

echo "part /home --fstype="xfs" --ondisk=$d2 --grow" >> /tmp/disks
echo "part swap --fstype="swap" --ondisk=$d0 --size=4096" >> /tmp/disks
echo "part / --fstype="xfs" --ondisk=$d1 --grow" >> /tmp/disks
echo "part /boot --fstype="xfs" --ondisk=$d1 --size=1024" >> /tmp/disks
%end
Bugzilla:1906870
RHEL instances on Azure fail to boot if provisioned by cloud-init
and configured with an NFSv3 mount entry
Currently, booting a RHEL virtual machine (VM) on the Microsoft Azure cloud platform fails if the VM was provisioned by the cloud-init
tool and the guest operating system of the VM has an NFSv3 mount entry in the /etc/fstab
file.
Bugzilla:2081114
11.19. Supportability
The getattachment
command fails to download multiple attachments at once
The redhat-support-tool
command offers the getattachment
subcommand for downloading attachments. However, getattachment
is currently only able to download a single attachment and fails to download multiple attachments.
As a workaround, you can download multiple attachments one by one by passing the case number and UUID for each attachment in the getattachment
subcommand.
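For example (the case number and UUID are illustrative, and the option names are an assumption based on the subcommand's usual syntax):
# redhat-support-tool getattachment -c 02345678 -u 810c4e22-961e-4dd9-b3ea-a0e32de2fa8f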
redhat-support-tool
does not work with the FUTURE
crypto policy
Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements of the FUTURE
system-wide cryptographic policy, the redhat-support-tool
utility does not work with this policy level at the moment.
To work around this problem, use the DEFAULT
crypto policy while connecting to the Customer Portal API.
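For example, to switch to the DEFAULT policy:
# update-crypto-policies --set DEFAULT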
Timeout when running sos report
on IBM Power Systems, Little Endian
When running the sos report
command on IBM Power Systems, Little Endian with hundreds or thousands of CPUs, the processor plugin reaches its default timeout of 300 seconds when collecting the huge content of the /sys/devices/system/cpu
directory. As a workaround, increase the plugin’s timeout accordingly:
- For a one-time setting, run:
# sos report -k processor.timeout=1800
-
For a permanent change, edit the
[plugin_options]
section of the /etc/sos/sos.conf
file:
[plugin_options]
# Specify any plugin options and their values here. These options take the form
# plugin_name.option_name = value
#rpm.rpmva = off
processor.timeout = 1800
The example value is set to 1800. The appropriate timeout value highly depends on the specific system. To set the plugin's timeout appropriately, you can first estimate the time needed to collect the plugin data with no timeout by running the following command:
# time sos report -o processor -k processor.timeout=0 --batch --build
Bugzilla:2011413
11.20. Containers
Running systemd within an older container image does not work
Running systemd within an older container image, for example, centos:7
, does not work:
$ podman run --rm -ti centos:7 /usr/lib/systemd/systemd
Storing signatures
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems, freezing.
To work around this problem, use the following commands:
# mkdir /sys/fs/cgroup/systemd
# mount none -t cgroup -o none,name=systemd /sys/fs/cgroup/systemd
# podman run --runtime /usr/bin/crun --annotation=run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup --rm -ti centos:7 /usr/lib/systemd/systemd
Jira:RHELPLAN-96940