Chapter 11. Known issues

This chapter describes known issues in Red Hat Enterprise Linux 9.2.

11.1. Installer and image creation

The auth and authconfig Kickstart commands require the AppStream repository

The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig is used. However, by design, the authselect-compat package is only available in the AppStream repository.

To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the authselect Kickstart command during installation.
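
For example, a minimal Kickstart line that uses the authselect command instead; the sssd profile and the with-mkhomedir option are illustrative, so choose the profile your deployment requires:

authselect select sssd with-mkhomedir --force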


The reboot --kexec and inst.kexec commands do not provide a predictable system state

Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters does not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results.

Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux.


Unexpected SELinux policies on systems where Anaconda is running as an application

When Anaconda is running as an application on an already installed system (for example, to perform another installation to an image file using the --image Anaconda option), the system is not prohibited from modifying the SELinux types and attributes during installation. As a consequence, certain elements of the SELinux policy might change on the system where Anaconda is running. To work around this problem, do not run Anaconda on a production system; instead, run it in a temporary virtual machine so that the SELinux policy on the production system is not modified. Running Anaconda as part of the system installation process, such as installing from boot.iso or dvd.iso, is not affected by this issue.


Local Media installation source is not detected when booting the installation from a USB drive created with a third-party tool

When booting the RHEL installation from a USB drive created with a third-party tool, the installer fails to detect the Local Media installation source (only Red Hat CDN is detected).

This issue occurs because the default boot option inst.stage2= searches for the iso9660 image format. However, a third-party tool might create an ISO image with a different format.

As a workaround, use one of the following solutions:

  • When booting the installation, press the Tab key to edit the kernel command line, and change the boot option inst.stage2= to inst.repo=.
  • To create a bootable USB device on Windows, use Fedora Media Writer.
  • When using a third-party tool such as Rufus to create a bootable USB device, first regenerate the RHEL ISO image on a Linux system, and then use the third-party tool to create a bootable USB device.

For more information on the steps involved in performing any of the specified workarounds, see Installation media is not auto detected during the installation of RHEL 8.3.


The USB CD-ROM drive is not available as an installation source in Anaconda

Installation fails when the USB CD-ROM drive is the installation source and the Kickstart ignoredisk --only-use= command is specified. In this case, Anaconda cannot find and use this source disk.

To work around this problem, use the harddrive --partition=sdX --dir=/ command to install from the USB CD-ROM drive. As a result, the installation does not fail.


Driver disk menu fails to display user inputs on the console

When you start the RHEL installation using the inst.dd option on the kernel command line with a driver disk, the console fails to display the user input. Consequently, the application appears unresponsive and frozen but still displays output, which is confusing for users. This behavior does not affect the functionality, and the user input is registered after you press Enter.

As a workaround, ignore the absence of user input in the console and press Enter when you finish adding input.


Hard drive partitioned installations with iso9660 filesystem fail

You cannot install RHEL on systems where the hard drive is partitioned with the iso9660 filesystem. This is due to the updated installation code that is set to ignore any hard disk containing an iso9660 file system partition. This happens even when RHEL is installed without using a DVD.

To work around this problem, add the following script to the Kickstart file to format the disk before the installation starts.

Note: Before performing the workaround, back up the data available on the disk. The wipefs command removes all the existing data from the disk.

wipefs -a /dev/sda
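
For example, the command can be placed in the Kickstart %pre section so that the disk is cleared before partitioning begins; /dev/sda is taken from the example above, so adjust it to the disk on your system:

%pre
wipefs -a /dev/sda
%end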

As a result, installations work as expected without any errors.


Anaconda fails to verify existence of an administrator user account

While installing RHEL using a graphical user interface, Anaconda fails to verify if the administrator account has been created. As a consequence, users might install a system without any administrator user account.

To work around this problem, ensure that you either configure an administrator user account, or set the root password and unlock the root account. As a result, users can perform administrative tasks on the installed system.


New XFS features prevent booting of PowerNV IBM POWER systems with firmware older than version 5.10

PowerNV IBM POWER systems use a Linux kernel for firmware, and use Petitboot as a replacement for GRUB. This results in the firmware kernel mounting /boot and Petitboot reading the GRUB config and booting RHEL.

The RHEL 9 kernel introduces the bigtime=1 and inobtcount=1 features to the XFS filesystem, which firmware kernels older than version 5.10 do not understand.

To work around this problem, you can use another filesystem for /boot, for example ext4.
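
For example, a Kickstart partitioning line that keeps /boot on ext4; the size is an arbitrary assumption:

part /boot --fstype=ext4 --size=1024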


RHEL for Edge installer image fails to create mount points when installing an rpm-ostree payload

When deploying rpm-ostree payloads, used for example in a RHEL for Edge installer image, the installer does not properly create some mount points for custom partitions. As a consequence, the installation is aborted with the following error:

The command 'mount --bind /mnt/sysimage/data /mnt/sysroot/data' exited with the code 32.

To work around this issue:

  • Use an automatic partitioning scheme and do not add any mount points manually.
  • Manually assign mount points only inside the /var directory (for example, /var/my-mount-point) and in the following standard directories: /, /boot, /var.

As a result, the installation process finishes successfully.


NetworkManager fails to start after the installation when connected to a network but without DHCP or a static IP address configured

Starting with RHEL 9.0, Anaconda activates network devices automatically when there is no specific ip= or Kickstart network configuration set. Anaconda creates a default persistent configuration file for each Ethernet device. The connection profile has the ONBOOT and autoconnect values set to true. As a consequence, during the start of the installed system, RHEL activates the network devices, and the NetworkManager-wait-online service fails.

As a workaround, do one of the following:

  • Delete all connections using the nmcli utility except the one that you want to use. For example:

    1. List all connection profiles:

      # nmcli connection show
    2. Delete the connection profiles that you do not require:

      # nmcli connection delete <connection_name>

      Replace <connection_name> with the name of the connection you want to delete.

  • Disable the autoconnect network feature in Anaconda if no specific ip= boot option or Kickstart network configuration is set:

    1. In the Anaconda GUI, navigate to Network & Host Name.
    2. Select a network device to disable.
    3. Click Configure.
    4. On the General tab, deselect the Connect automatically with priority check box.
    5. Click Save.


Unable to load an updated driver from the driver update disc in the installation environment

A new version of a driver from the driver update disc might not load if the same driver from the installation initial ramdisk has already been loaded. As a consequence, an updated version of the driver cannot be applied to the installation environment.

As a workaround, use the modprobe.blacklist= kernel command line option together with the inst.dd option. For example, to ensure that an updated version of the virtio_blk driver from a driver update disc is loaded, use modprobe.blacklist=virtio_blk and then continue with the usual procedure to apply drivers from the driver update disk. As a result, the system can load an updated version of the driver and use it in the installation environment.


Kickstart installations fail to configure the network connection

Anaconda performs the Kickstart network configuration only through the NetworkManager API. Anaconda processes the network configuration after the %pre Kickstart section. As a consequence, some tasks from the Kickstart %pre section are blocked. For example, downloading packages from the %pre section fails because the network configuration is unavailable.

To work around this problem:

  • Configure the network, for example using the nmcli tool, as a part of the %pre script.
  • Use the installer boot options to configure the network for the %pre script.

As a result, it is possible to use the network for tasks in the %pre section and the kickstart installation process completes.
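
For example, a minimal sketch of the first approach; the device name and URL are hypothetical:

%pre
# Bring the link up manually because Anaconda applies its network
# configuration only after the %pre section has run.
nmcli device connect ens3
curl -o /tmp/site-setup.sh http://server.example.com/site-setup.sh
%end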


Installation might fail with Anaconda error while using USB 3.0 port on select RAID volumes

The RHEL installation process might fail with the following Anaconda error when you try to install RHEL on certain RAID 0 or RAID 1 volumes with the bootable drive connected to a USB 3.0 port:

dasbus.error.DBusError: 'DiskDevice' object has no attribute 'members'

Anaconda fails only when users select the Install Red Hat Enterprise Linux option in the boot menu.

As a workaround, use one of the following solutions:

  • Install RHEL 9.3 or later.
  • Connect the drive to a USB 2.0 port instead of a USB 3.0 port.
  • Select Test this media and Install Red Hat Enterprise Linux instead of the default boot menu option.


11.2. Software management

The installation process sometimes becomes unresponsive

When you install RHEL, the installation process sometimes becomes unresponsive. The /tmp/packaging.log file displays the following message at the end:

10:20:56,416 DDEBUG dnf: RPM transaction over.

To work around this problem, restart the installation process.


11.3. Shells and command-line tools

ReaR fails during recovery if the TMPDIR variable is set in the configuration file

Setting and exporting TMPDIR in the /etc/rear/local.conf or /etc/rear/site.conf ReaR configuration file does not work and is deprecated.

The ReaR default configuration file /usr/share/rear/conf/default.conf contains the following instructions:

# To have a specific working area directory prefix for Relax-and-Recover
# specify in /etc/rear/local.conf something like
# export TMPDIR="/prefix/for/rear/working/directory"
# where /prefix/for/rear/working/directory must already exist.
# This is useful for example when there is not sufficient free space
# in /tmp or $TMPDIR for the ISO image or even the backup archive.

The instructions mentioned above do not work correctly because the TMPDIR variable keeps the same value in the rescue environment, which is incorrect if the directory specified in the TMPDIR variable does not exist in the rescue image.

As a consequence, setting and exporting TMPDIR in the /etc/rear/local.conf file leads to the following error when the rescue image is booted:

mktemp: failed to create file via template '/prefix/for/rear/working/directory/tmp.XXXXXXXXXX': No such file or directory
cp: missing destination file operand after '/etc/rear/mappings/mac'
Try 'cp --help' for more information.
No network interface mapping is specified in /etc/rear/mappings/mac

or ReaR aborts later with the following error when running rear recover:

ERROR: Could not create build area

To work around this problem, if you want to use a custom temporary directory, specify it for ReaR temporary files by exporting the variable in the shell environment before executing ReaR. For example, execute the export TMPDIR=… statement and then execute the rear command in the same shell session or script. As a result, the recovery is successful in the described configuration.
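
For example, a minimal sketch; the directory is hypothetical and must already exist:

# export TMPDIR=/var/tmp/rear.workspace
# rear mkrescue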


Renaming network interfaces using ifcfg files fails

On RHEL 9, the initscripts package is not installed by default. Consequently, renaming network interfaces using ifcfg files fails. To solve this problem, Red Hat recommends that you use udev rules or link files to rename interfaces. For further details, see Consistent network interface device naming and the systemd.link(5) man page.

If you cannot use one of the recommended solutions, install the initscripts package.
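
For example, a minimal sketch of a systemd link file that renames an interface by matching its MAC address; the address and the lan0 name are hypothetical:

# /etc/systemd/network/70-custom-ifname.link
[Match]
MACAddress=00:53:00:aa:bb:cc

[Link]
Name=lan0

The new name takes effect when the interface is reinitialized, for example after a reboot.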


The chkconfig package is not installed by default in RHEL 9

The chkconfig package, which updates and queries runlevel information for system services, is not installed by default in RHEL 9.

To manage services, use the systemctl commands or install the chkconfig package manually.

For more information about systemd, see Managing systemd. For instructions on how to use the systemctl utility, see Managing system services with systemctl.
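
For example, where you previously ran chkconfig httpd on to enable a service at boot, you can use the following command, which also starts the service immediately because of the --now option; the httpd service is illustrative:

# systemctl enable --now httpd.service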


The Service Location Protocol (SLP) is vulnerable to an attack through UDP

OpenSLP provides a dynamic configuration mechanism for applications in local area networks, such as printers and file servers. However, SLP is vulnerable to a reflective denial-of-service amplification attack through UDP on systems connected to the internet. SLP allows an unauthenticated attacker to register new services without limits set by the SLP implementation. By using UDP and spoofing the source address, an attacker can request the service list, creating a denial of service on the spoofed address.

To prevent external attackers from accessing the SLP service, disable SLP on all systems running on untrusted networks, such as those directly connected to the internet. Alternatively, to work around this problem, configure firewalls to block or filter traffic on UDP and TCP port 427.
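
For example, a sketch using firewalld rich rules to reject SLP traffic; only IPv4 is shown, so add matching family="ipv6" rules if needed:

# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" port port="427" protocol="udp" reject'
# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" port port="427" protocol="tcp" reject'
# firewall-cmd --reload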


11.4. Infrastructure services

Both bind and unbound disable validation of SHA-1-based signatures

The bind and unbound components disable validation support of all RSA/SHA1 (algorithm number 5) and RSASHA1-NSEC3-SHA1 (algorithm number 7) signatures, and the SHA-1 usage for signatures is restricted in the DEFAULT system-wide cryptographic policy.

As a result, certain DNSSEC records signed with the SHA-1, RSA/SHA1, and RSASHA1-NSEC3-SHA1 digest algorithms fail to verify in Red Hat Enterprise Linux 9 and the affected domain names become vulnerable.

To work around this problem, upgrade to a different signature algorithm, such as RSA/SHA-256 or elliptic curve keys.

For more information and a list of top-level domains that are affected and vulnerable, see the DNSSEC records signed with RSASHA1 fail to verify solution.


named fails to start if the same writable zone file is used in multiple zones

BIND does not allow the same writable zone file in multiple zones. Consequently, if a configuration includes multiple zones which share a path to a file that can be modified by the named service, named fails to start. To work around this problem, use the in-view clause to share one zone between multiple views and make sure to use different paths for different zones. For example, include the view names in the path.

Note that writable zone files are typically used in zones with allowed dynamic updates, slave zones, or zones maintained by DNSSEC.
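
For example, a minimal named.conf sketch that shares one writable zone between two views with the in-view clause; the view names, networks, key name, and file path are hypothetical:

view "internal" {
    match-clients { 10.0.0.0/8; };
    zone "example.com" {
        type primary;
        file "dynamic/internal/example.com.db";
        allow-update { key "update-key"; };
    };
};
view "external" {
    match-clients { any; };
    zone "example.com" {
        in-view "internal";
    };
};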


libotr is not compliant with FIPS

The libotr library and toolkit for off-the-record (OTR) messaging provides end-to-end encryption for instant messaging conversations. However, the libotr library does not conform to the Federal Information Processing Standards (FIPS) due to its use of the gcry_pk_sign() and gcry_pk_verify() functions. As a result, you cannot use the libotr library in FIPS mode.


Setting the console keymap requires the libxkbcommon library on your minimal install

In RHEL 9, certain systemd library dependencies have been converted from dynamic linking to dynamic loading, so that your system opens and uses the libraries at runtime when they are available. With this change, a functionality that depends on such libraries is not available unless you install the necessary library. This also affects setting the keyboard layout on systems with a minimal install. As a result, the localectl --no-convert set-x11-keymap gb command fails.

To work around this problem, install the libxkbcommon library:

# dnf install libxkbcommon


The %vmeff metric from the sysstat package displays incorrect values

The sysstat package provides the %vmeff metric to measure the page reclaim efficiency. The values of the %vmeff column returned by the sar -B command are incorrect because sysstat does not parse all relevant /proc/vmstat values provided by later kernel versions. To work around this problem, you can calculate the %vmeff value manually from the /proc/vmstat file. For details, see Why the sar(1) tool reports %vmeff values beyond 100 % in RHEL 8 and RHEL 9?
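
For example, a sketch that computes the value from /proc/vmstat, assuming the documented definition %vmeff = pgsteal / pgscan; note that this yields the cumulative value since boot, whereas sar reports per-interval deltas:

# awk '/^pgsteal_/ { steal += $2 } /^pgscan_/ { scan += $2 } END { if (scan) printf "%%vmeff = %.2f%%\n", 100 * steal / scan }' /proc/vmstat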


11.5. Security

tangd-keygen does not handle non-default umask correctly

The tangd-keygen script does not change file permissions for generated key files. Consequently, on systems with a default user file-creation mode mask (umask) that prevents other users from reading the keys, the tang-show-keys command returns the error message Internal Error 500 instead of displaying the keys.

To work around the problem, use the chmod o+r *.jwk command to change permissions on the files in the /var/db/tang directory.


OpenSSL does not detect if a PKCS #11 token supports the creation of raw RSA or RSA-PSS signatures

The TLS 1.3 protocol requires support for RSA-PSS signatures. If a PKCS #11 token does not support raw RSA or RSA-PSS signatures, server applications that use the OpenSSL library fail to work with an RSA key if the key is held by the PKCS #11 token. As a result, TLS communication fails in the described scenario.

To work around this problem, configure servers and clients to use TLS version 1.2 as the highest TLS protocol version available.


OpenSSL incorrectly handles PKCS #11 tokens that do not support raw RSA or RSA-PSS signatures

The OpenSSL library does not detect key-related capabilities of PKCS #11 tokens. Consequently, establishing a TLS connection fails when a signature is created with a token that does not support raw RSA or RSA-PSS signatures.

To work around the problem, add the following lines after the .include line at the end of the crypto_policy section in the /etc/pki/tls/openssl.cnf file:

SignatureAlgorithms = RSA+SHA256:RSA+SHA512:RSA+SHA384:ECDSA+SHA256:ECDSA+SHA512:ECDSA+SHA384
MaxProtocol = TLSv1.2

As a result, a TLS connection can be established in the described scenario.


scp empties files copied to themselves when a specific syntax is used

The scp utility changed from the Secure Copy Protocol (SCP) to the more secure SSH File Transfer Protocol (SFTP). Consequently, copying a file from a location to the same location erases the file content. The problem affects the following syntax:

scp localhost:/myfile localhost:/myfile

To work around this problem, do not copy files to a destination that is the same as the source location using this syntax.

The problem has been fixed for the following syntaxes:

  • scp /myfile localhost:/myfile
  • scp localhost:~/myfile ~/myfile


The OSCAP Anaconda add-on does not fetch tailored profiles in the graphical installation

The OSCAP Anaconda add-on does not provide an option to select or deselect tailoring of security profiles in the RHEL graphical installation. Starting from RHEL 8.8, the add-on does not take tailoring into account by default when installing from archives or RPM packages. Consequently, the installation displays the following error message instead of fetching an OSCAP tailored profile:

There was an unexpected problem with the supplied content.

To work around this problem, you must specify paths in the %addon org_fedora_oscap section of your Kickstart file, for example:

xccdf-path = /usr/share/xml/scap/sc_tailoring/ds-combined.xml
tailoring-path = /usr/share/xml/scap/sc_tailoring/tailoring-xccdf.xml

As a result, you can use the graphical installation for OSCAP tailored profiles only with the corresponding Kickstart specifications.


Ansible remediations require additional collections

With the replacement of Ansible Engine by the ansible-core package, the list of Ansible modules provided with the RHEL subscription is reduced. As a consequence, running remediations that use Ansible content included within the scap-security-guide package requires collections from the rhc-worker-playbook package.

For an Ansible remediation, perform the following steps:

  1. Install the required packages:

    # dnf install -y ansible-core scap-security-guide rhc-worker-playbook
  2. Navigate to the /usr/share/scap-security-guide/ansible directory:

    # cd /usr/share/scap-security-guide/ansible
  3. Run the relevant Ansible playbook using environment variables that define the path to the additional Ansible collections:

    # ANSIBLE_COLLECTIONS_PATH=/usr/share/rhc-worker-playbook/ansible/collections/ansible_collections/ ansible-playbook -c local -i localhost, rhel9-playbook-cis_server_l1.yml

    Replace cis_server_l1 with the ID of the profile against which you want to remediate the system.

As a result, the Ansible content is processed correctly.


Note: Support of the collections provided in rhc-worker-playbook is limited to enabling the Ansible content sourced in scap-security-guide.


oscap-anaconda-addon does not allow CIS hardening of systems with Network Servers package group

When installing RHEL with a CIS security profile (cis, cis_server_l1, cis_workstation_l1, or cis_workstation_l2) on systems with the Network Servers package group selected, oscap-anaconda-addon sends the following error message: package tftp has been added to the list of excluded packages, but it can’t be removed from the current software selection without breaking the install. To proceed with the installation, navigate back to Software Selection and clear the Network Servers additional software check box to allow the installation and hardening to finish. Then, install the required packages.


Keylime does not accept concatenated PEM certificates

When Keylime receives a certificate chain as multiple certificates in the PEM format concatenated in a single file, the keylime-agent-rust Keylime component does not correctly use all the provided certificates during signature verification, resulting in a TLS handshake failure. As a consequence, the client components (keylime_verifier and keylime_tenant) cannot connect to the Keylime agent. To work around this problem, use just one certificate instead of multiple certificates.


Keylime requires a specific file for tls_dir = default

When the tls_dir variable is set to default in the Keylime verifier or registrar configuration, Keylime checks for the presence of the cacert.crt file in the /var/lib/keylime/cv_ca directory. If the file is not present, the keylime_verifier or keylime_registrar service fails to start and records the following message in a log: Exception: It appears that the verifier has not yet created a CA and certificates, please run the verifier first. As a consequence, Keylime rejects custom certificate authority (CA) certificates that have a different file name even when they are placed in the /var/lib/keylime/cv_ca directory.

To work around this problem and use custom CA certificates, manually specify tls_dir = /var/lib/keylime/cv_ca instead of using tls_dir = default.


Default SELinux policy allows unconfined executables to make their stack executable

The default state of the selinuxuser_execstack boolean in the SELinux policy is on, which means that unconfined executables can make their stack executable. Executables should not use this option, and it might indicate poorly coded executables or a possible attack. However, due to compatibility with other tools, packages, and third-party products, Red Hat cannot change the value of the boolean in the default policy. If your scenario does not depend on such compatibility aspects, you can turn the boolean off in your local policy by entering the command setsebool -P selinuxuser_execstack off.
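
For example:

# setsebool -P selinuxuser_execstack off
# getsebool selinuxuser_execstack
selinuxuser_execstack --> off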


SSH timeout rules in STIG profiles configure incorrect options

An update of OpenSSH affected the rules in the following Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) profiles:

  • DISA STIG for RHEL 9 (xccdf_org.ssgproject.content_profile_stig)
  • DISA STIG with GUI for RHEL 9 (xccdf_org.ssgproject.content_profile_stig_gui)

In each of these profiles, the following two rules are affected:

Title: Set SSH Client Alive Count Max to zero
CCE Identifier: CCE-90271-8
Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_keepalive_0

Title: Set SSH Idle Timeout Interval
CCE Identifier: CCE-90811-1
Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_idle_timeout

When applied to SSH servers, each of these rules configures an option (ClientAliveCountMax and ClientAliveInterval) that no longer behaves as previously. As a consequence, OpenSSH no longer disconnects idle SSH users when it reaches the timeout configured by these rules. As a workaround, these rules have been temporarily removed from the DISA STIG for RHEL 9 and DISA STIG with GUI for RHEL 9 profiles until a solution is developed.


GnuPG incorrectly allows using SHA-1 signatures even if disallowed by crypto-policies

The GNU Privacy Guard (GnuPG) cryptographic software can create and verify signatures that use the SHA-1 algorithm regardless of the settings defined by the system-wide cryptographic policies. Consequently, you can use SHA-1 for cryptographic purposes in the DEFAULT cryptographic policy, which is not consistent with the system-wide deprecation of this insecure algorithm for signatures.

To work around this problem, do not use GnuPG options that involve SHA-1. As a result, you will prevent GnuPG from lowering the default system security by using the non-secure SHA-1 signatures.


gpg-agent does not work as an SSH agent in FIPS mode

The gpg-agent tool creates MD5 fingerprints when adding keys to the ssh-agent program even though FIPS mode disables the MD5 digest. Consequently, the ssh-add utility fails to add the keys to the authentication agent.

To work around the problem, create the ~/.gnupg/sshcontrol file without using the gpg-agent --daemon --enable-ssh-support command. For example, you can paste the output of the gpg --list-keys command in the <FINGERPRINT> 0 format to ~/.gnupg/sshcontrol. As a result, gpg-agent works as an SSH authentication agent.
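
For example, a minimal sketch; the sshcontrol file lists one key identifier per line followed by a caching TTL (0 here). This sketch uses the keygrips from the machine-readable gpg output, which is the identifier the gpg-agent documentation describes for sshcontrol, so review the resulting file and keep only the keys you intend to expose over SSH:

# gpg --list-keys --with-colons --with-keygrip | awk -F: '$1 == "grp" { print $10 " 0" }' >> ~/.gnupg/sshcontrol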


OpenSCAP memory-consumption problems

On systems with limited memory, the OpenSCAP scanner might terminate prematurely or it might not generate the results files. To work around this problem, you can customize the scanning profile to deselect rules that involve recursion over the entire / file system:

  • rpm_verify_hashes
  • rpm_verify_permissions
  • rpm_verify_ownership
  • file_permissions_unauthorized_world_writable
  • no_files_unowned_by_user
  • dir_perms_world_writable_system_owned
  • file_permissions_unauthorized_suid
  • file_permissions_unauthorized_sgid
  • file_permissions_ungroupowned
  • dir_perms_world_writable_sticky_bits

For more details and more workarounds, see the related Knowledgebase article.
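
For example, a minimal tailoring file sketch that deselects two of the listed rules; the tailoring and profile IDs are hypothetical, and the extends attribute must reference the profile you actually scan with:

<?xml version="1.0" encoding="UTF-8"?>
<xccdf:Tailoring xmlns:xccdf="http://checklists.nist.gov/xccdf/1.2"
                 id="xccdf_com.example_tailoring_low-memory">
  <xccdf:version time="2023-05-01T00:00:00">1</xccdf:version>
  <xccdf:Profile id="xccdf_com.example_profile_cis_low-memory"
                 extends="xccdf_org.ssgproject.content_profile_cis">
    <xccdf:select idref="xccdf_org.ssgproject.content_rule_rpm_verify_hashes" selected="false"/>
    <xccdf:select idref="xccdf_org.ssgproject.content_rule_rpm_verify_permissions" selected="false"/>
  </xccdf:Profile>
</xccdf:Tailoring>

Pass the file to the scanner with the oscap xccdf eval --tailoring-file option and select the customized profile ID.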


Remediating service-related rules during kickstart installations might fail

During a kickstart installation, the OpenSCAP utility sometimes incorrectly shows that a service enable or disable state remediation is not needed. Consequently, OpenSCAP might set the services on the installed system to a non-compliant state. As a workaround, you can scan and remediate the system after the kickstart installation. This will fix the service-related issues.


11.6. Networking

The nm-cloud-setup service removes manually-configured secondary IP addresses from interfaces

The nm-cloud-setup service configures network interfaces based on the information received from the cloud environment. To configure interfaces manually, disable nm-cloud-setup. However, in certain cases, other services on the host can configure interfaces as well. For example, these services could add secondary IP addresses. To prevent nm-cloud-setup from removing secondary IP addresses:

  1. Stop and disable the nm-cloud-setup service and timer:

    # systemctl disable --now nm-cloud-setup.service nm-cloud-setup.timer
  2. Display the available connection profiles:

    # nmcli connection show
  3. Reactivate the affected connection profiles:

    # nmcli connection up "<profile_name>"

As a result, the service no longer removes manually-configured secondary IP addresses from interfaces.


Failure to update the session key causes the connection to break

The Kernel Transport Layer Security (kTLS) protocol does not support updating the session key, which is used by the symmetric cipher. Consequently, the user cannot update the key, which causes a connection break. To work around this problem, disable kTLS. As a result, it is possible to successfully update the session key.


The initscripts package is not installed by default

By default, the initscripts package is not installed. As a consequence, the ifup and ifdown utilities are not available. As an alternative, use the nmcli connection up and nmcli connection down commands to enable and disable connections. If the suggested alternative does not work for you, report the problem and install the NetworkManager-initscripts-updown package, which provides a NetworkManager solution for the ifup and ifdown utilities.


Using the XDP multi-buffer mode with the mlx5 driver and an MTU greater than 3498 bytes requires disabling RX Striding RQ

Running an eXpress Data Path (XDP) script with multi-buffer mode on a host that matches all of the following conditions fails:

  • The host uses the mlx5 driver.
  • The Maximum Transmission Unit (MTU) value is greater than 3498 bytes.
  • The striding receive queue (RX Striding RQ) feature is enabled on the Mellanox interface.

If all conditions apply, the script fails with a link set xdp fd failed error. To run the XDP script on a host with a higher MTU, disable RX Striding RQ on the Mellanox interface:

# ethtool --set-priv-flags <interface_name> rx_striding_rq off

As a result, you can use the XDP multi-buffer mode on interfaces that use the mlx5 driver and have an MTU value greater than 3498 bytes.


11.7. Kernel

The kdump mechanism in kernel causes OOM errors on the 64K kernel

The 64K kernel page size on the 64-bit ARM architecture uses more memory than the 4 KB kernel. Consequently, kdump causes a kernel panic and memory allocation fails with out-of-memory (OOM) errors. As a workaround, manually configure the crashkernel value to 640 MB. For example, set the crashkernel= parameter to crashkernel=2G-:640M.

As a result, the kdump mechanism does not fail on the 64K kernel in the described scenario.


Customer applications with dependencies on kernel page size may need updating when moving from 4k to 64k page size kernel

RHEL is compatible with both 4k and 64k page size kernels. Customer applications with dependencies on a 4k kernel page size may require updating when moving from 4k to 64k page size kernels. Known instances of this include jemalloc and dependent applications.

The jemalloc memory allocator library is sensitive to the page size used in the system’s runtime environment. The library can be built to be compatible with 4k and 64k page size kernels, for example, when configured with --with-lg-page=16 or env JEMALLOC_SYS_WITH_LG_PAGE=16 (for jemallocator Rust crate). Consequently, a mismatch can occur between the page size of the runtime environment and the page size that was present when compiling binaries that depend on jemalloc. As a result, using a jemalloc-based application triggers the following error:

<jemalloc>: Unsupported system page size

To avoid this problem, use one of the following approaches:

  • Use the appropriate build configuration or environment options to create 4k and 64k page size compatible binaries.
  • Build any userspace packages that use jemalloc after booting into the final 64k kernel and runtime environment.

For example, you can build the fd-find tool, which also uses jemalloc, with the cargo Rust package manager. In the final 64k environment, trigger a new build of all dependencies to resolve the mismatch in the page size by entering the cargo command:

# cargo install fd-find --force


The kdump service fails to build the initrd file on IBM Z systems

On 64-bit IBM Z systems, the kdump service fails to load the initial RAM disk (initrd) when znet-related configuration information, such as s390-subchannels, resides in an inactive NetworkManager connection profile. Consequently, the kdump mechanism fails with the following error:

dracut: Failed to set up znet
kdump: mkdumprd: failed to make kdump initrd

As a workaround, use one of the following solutions:

  • Configure a network bond or bridge by re-using the connection profile that has the znet configuration information:

    $ nmcli connection modify enc600 master bond0 slave-type bond
  • Copy the znet configuration information from the inactive connection profile to the active connection profile:

    1. Run the nmcli command to query the NetworkManager connection profiles:

      # nmcli connection show
      NAME                       UUID               TYPE   Device
      bridge-br0           ed391a43-bdea-4170-b8a2 bridge   br0
      bridge-slave-enc600  caf7f770-1e55-4126-a2f4 ethernet enc600
      enc600               bc293b8d-ef1e-45f6-bad1 ethernet --
    2. Update the active profile with the configuration information from the inactive connection. Replace $inactive_connection and $active_connection with the profile names from the previous step, for example enc600 and bridge-slave-enc600; the s390 properties belong to the 802-3-ethernet setting:

       for name in nettype subchannels options; do
         field="802-3-ethernet.s390-$name"
         val=$(nmcli --get-values "$field" connection show "$inactive_connection")
         nmcli connection modify "$active_connection" "$field" "$val"
       done
    3. Restart the kdump service for changes to take effect:

      # kdumpctl restart


kTLS does not support offloading of TLS 1.3 to NICs

Kernel Transport Layer Security (kTLS) does not support offloading of TLS 1.3 to NICs. Consequently, software encryption is used with TLS 1.3 even when the NICs support TLS offload. To work around this problem, disable TLS 1.3 if offload is required. As a result, you can offload only TLS 1.2. When TLS 1.3 is in use, performance is lower because TLS 1.3 cannot be offloaded.


The Delay Accounting functionality does not display the SWAPIN and IO% statistics columns by default

The Delay Accounting functionality, unlike in earlier versions, is disabled by default. Consequently, the iotop application does not show the SWAPIN and IO% statistics columns and displays the following warning:

CONFIG_TASK_DELAY_ACCT not enabled in kernel, cannot determine SWAPIN and IO%

The Delay Accounting functionality, using the taskstats interface, provides the delay statistics for all tasks or threads that belong to a thread group. Delays in task execution occur when they wait for a kernel resource to become available, for example, a task waiting for a free CPU to run on. The statistics help in setting a task’s CPU priority, I/O priority, and rss limit values appropriately.

As a workaround, you can enable the delayacct option either at run time or at boot.

  • To enable delayacct at run time, enter:

    echo 1 > /proc/sys/kernel/task_delayacct

    Note that this command enables the feature system wide, but only for the tasks that you start after running this command.

  • To enable delayacct permanently at boot, add the delayacct parameter to the kernel command line and reboot, as shown in the sketch after this list.
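
For example, assuming the grubby utility manages your boot entries, the following adds the parameter to all installed kernels; reboot afterwards:

# grubby --update-kernel=ALL --args="delayacct"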

As a result, the iotop application displays the SWAPIN and IO% statistics columns.


The kdump mechanism fails to capture the vmcore file on LUKS-encrypted targets

When running kdump on systems with Linux Unified Key Setup (LUKS) encrypted partitions, systems require a certain amount of available memory. When the available memory is less than the required amount of memory, the systemd-cryptsetup service fails to mount the partition. Consequently, the second kernel fails to capture the crash dump file (vmcore) on LUKS-encrypted targets.

With the kdumpctl estimate command, you can query the Recommended crashkernel value, which is the recommended memory size required for kdump.

To work around this problem, use the following steps to configure the required memory for kdump on LUKS-encrypted targets:

  1. Print the estimated crashkernel value:

    # kdumpctl estimate
  2. Configure the amount of required memory by increasing the crashkernel value:

    # grubby --args=crashkernel=652M --update-kernel=ALL
  3. Reboot the system for changes to take effect.

    # reboot

As a result, kdump works correctly on systems with LUKS-encrypted partitions.


Allocating crash kernel memory fails at boot time

On certain Ampere Altra systems, allocating the crash kernel memory for kdump usage fails during boot when the available memory is below 1 GB. Consequently, the kdumpctl command fails to start the kdump service.

To work around this problem, do one of the following:

  • Decrease the value of the crashkernel parameter by a minimum of 240 MB to fit the size requirement, for example crashkernel=240M.
  • Use the crashkernel=x,high option to reserve crash kernel memory above 4 GB for kdump.

As a result, the crash kernel memory allocation for kdump does not fail on Ampere Altra systems.


RHEL fails to recognize NVMe disks when VMD is enabled

When you reset or reattach the driver, the Volume Management Device (VMD) domain currently does not soft-reset. Consequently, the hardware cannot properly detect and enumerate its devices. As a result, the operating system with VMD enabled does not recognize NVMe disks, especially when resetting a server or working with a virtual machine.


The iwl7260-firmware breaks Wi-Fi on Intel Wi-Fi 6 AX200, AX210, and Lenovo ThinkPad P1 Gen 4

After updating the iwl7260-firmware or iwl7260-wifi driver to the version provided by RHEL 9.1 and later, the hardware gets into an incorrect internal state and reports its state incorrectly. Consequently, Intel Wi-Fi 6 cards might not work and display the error message:

kernel: iwlwifi 0000:09:00.0: Failed to start RT ucode: -110
kernel: iwlwifi 0000:09:00.0: WRT: Collecting data: ini trigger 13 fired (delay=0ms)
kernel: iwlwifi 0000:09:00.0: Failed to run INIT ucode: -110

An unconfirmed workaround is to power off the system and power it back on. Do not reboot.


weak-modules from kmod fails to work with module inter-dependencies

The weak-modules script provided by the kmod package determines which modules are kABI-compatible with installed kernels. However, while checking modules' kernel compatibility, weak-modules processes modules symbol dependencies from higher to lower release of the kernel for which they were built. As a consequence, modules with inter-dependencies built against different kernel releases might be interpreted as non-compatible, and therefore the weak-modules script fails to work in this scenario.

To work around the problem, build or install the extra modules against the latest stock kernel before you install the new kernel.


The mlx5 driver fails while using the Mellanox ConnectX-5 adapter

In Ethernet switch device driver model (switchdev) mode, the mlx5 driver fails when configured with the device managed flow steering (DMFS) parameter and ConnectX-5 adapter supported hardware. As a consequence, you can see the following error message:

BUG: Bad page cache in process umount pfn:142b4b

To work around this problem, use the software managed flow steering (SMFS) parameter instead of DMFS.


Hardware certification of the real-time kernel on systems with large core-counts might require passing the skew_tick=1 boot parameter to avoid lock contentions

Large or moderate sized systems with numerous sockets and large core-counts can experience latency spikes due to lock contentions on xtime_lock, which is used in the timekeeping system. As a consequence, latency spikes and delays in hardware certifications might occur on multiprocessing systems. As a workaround, you can offset the timer tick per CPU to start at a different time by adding the skew_tick=1 boot parameter.

To avoid lock conflicts, enable skew_tick=1:

  1. Enable the skew_tick=1 parameter with grubby.

    # grubby --update-kernel=ALL --args="skew_tick=1"
  2. Reboot for changes to take effect.
  3. Verify the new settings by running the cat /proc/cmdline command.

Note that enabling skew_tick=1 causes a significant increase in power consumption and, therefore, it must be enabled only if you are running latency sensitive real-time workloads.


dkms provides an incorrect warning on program failure with correctly compiled drivers on 64-bit ARM CPUs

The Dynamic Kernel Module Support (dkms) utility does not recognize that the kernel headers for 64-bit ARM CPUs work for both the kernels with 4 kilobytes and 64 kilobytes page sizes. As a result, when the kernel update is performed and the kernel-64k-devel package is not installed, dkms provides an incorrect warning about why the program failed for correctly compiled drivers. To work around this problem, install the kernel-headers package, which contains header files for both types of ARM CPU architectures and is not specific to dkms and its requirements.


11.8. File systems and storage

Anaconda fails to log in to an iSCSI server using the no authentication method after an unsuccessful CHAP authentication attempt

When you add iSCSI disks using CHAP authentication and the login attempt fails due to incorrect credentials, a relogin attempt to the disks with the no authentication method fails. To work around this problem, close the current session and log in using the no authentication method.


Device Mapper Multipath is not supported with NVMe/TCP

Using Device Mapper Multipath with the nvme-tcp driver can result in the Call Trace warnings and system instability. To work around this problem, NVMe/TCP users must enable native NVMe multipathing and not use the device-mapper-multipath tools with NVMe.

By default, native NVMe multipathing is enabled in RHEL 9. For more information, see Enabling multipathing on NVMe devices.


The blk-availability systemd service deactivates complex device stacks

In systemd, the default block deactivation code does not always handle complex stacks of virtual block devices correctly. In some configurations, virtual devices might not be removed during the shutdown, which causes error messages to be logged. To work around this problem, deactivate complex block device stacks by executing the following command:

# systemctl enable --now blk-availability.service

As a result, complex virtual device stacks are correctly deactivated during shutdown and do not produce error messages.


Disabling quota accounting is no longer possible for an XFS filesystem mounted with quotas enabled

As of RHEL 9.2, it is no longer possible to disable quota accounting on an XFS filesystem which has been mounted with quotas enabled.

To work around this issue, disable quota accounting by remounting the filesystem with the quota option removed.
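
For example, assuming the filesystem was mounted with a quota option such as usrquota; the device and mount point are hypothetical:

# umount /srv/data
# mount -o noquota /dev/sdb1 /srv/data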


System fails to boot when adding an NVMe-FC device as a mount point in /etc/fstab

Non-volatile Memory Express over Fibre Channel (NVMe-FC) devices mounted through the /etc/fstab file fail to mount at boot, and the system enters emergency mode. This is due to a known bug in the nvme-cli nvmf-autoconnect systemd services.


udev rule change for NVMe devices

There is a udev rule change for NVMe devices that adds the OPTIONS="string_escape=replace" parameter. This leads to a disk by-id naming change for some vendors if the serial number of your device has leading whitespace.


11.9. Dynamic programming languages, web and database servers

python3.11-lxml does not provide the lxml.isoschematron submodule

The python3.11-lxml package is distributed without the lxml.isoschematron submodule because it is not under an open source license. The submodule implements ISO Schematron support. As an alternative, pre-ISO-Schematron validation is available in the lxml.etree.Schematron class. The remaining content of the python3.11-lxml package is unaffected.


The --ssl-fips-mode option in MySQL and MariaDB does not change FIPS mode

The --ssl-fips-mode option in MySQL and MariaDB in RHEL works differently than in upstream.

In RHEL 9, if you use --ssl-fips-mode as an argument for the mysqld or mariadbd daemon, or if you use ssl-fips-mode in the MySQL or MariaDB server configuration files, --ssl-fips-mode does not change FIPS mode for these database servers.


  • If you set --ssl-fips-mode to ON, the mysqld or mariadbd server daemon does not start.
  • If you set --ssl-fips-mode to OFF on a FIPS-enabled system, the mysqld or mariadbd server daemons still run in FIPS mode.

This is expected because FIPS mode should be enabled or disabled for the whole RHEL system, not for specific components.

Therefore, do not use the --ssl-fips-mode option in MySQL or MariaDB in RHEL. Instead, ensure FIPS mode is enabled on the whole RHEL system:

  • Preferably, install RHEL with FIPS mode enabled. Enabling FIPS mode during the installation ensures that the system generates all keys with FIPS-approved algorithms and continuous monitoring tests in place. For information about installing RHEL in FIPS mode, see Installing the system in FIPS mode.
  • Alternatively, you can switch FIPS mode for the entire RHEL system by following the procedure in Switching the system to FIPS mode.


11.10. Compilers and development tools

Certain symbol-based probes do not work in SystemTap on the 64-bit ARM architecture

Kernel configuration disables certain functionality needed for SystemTap. Consequently, some symbol-based probes do not work on the 64-bit ARM architecture. As a result, affected SystemTap scripts may not run or may not collect hits on desired probe points.

Note that this bug has been fixed for the remaining architectures with the release of the RHBA-2022:5259 advisory.


GCC in GCC Toolset 12: CPU detection may fail on Intel Sapphire Rapids processors

CPU detection on Intel Sapphire Rapids processors relies on the existence of the AVX512_VP2INTERSECT feature. This feature has been removed from the GCC Toolset 12 version of GCC and, as a consequence, CPU detection may fail on Intel Sapphire Rapids processors.


11.11. Identity Management

Configuring a referral for a suffix fails in Directory Server

If you set a back-end referral in Directory Server, setting the state of the backend using the dsconf <instance_name> backend suffix set --state referral command fails with the following error:

Error: 103 - 9 - 53 - Server is unwilling to perform - [] - need to set nsslapd-referral before moving to referral state

As a consequence, configuring a referral for suffixes fails. To work around the problem:

  1. Set the nsslapd-referral parameter manually:

    # ldapmodify -D "cn=Directory Manager" -W -H ldap://
    dn: cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
    changetype: modify
    add: nsslapd-referral
    nsslapd-referral: ldap://remote_server:389/dc=example,dc=com
  2. Set the back-end state:

    # dsconf <instance_name> backend suffix set --state referral

As a result, with the workaround, you can configure a referral for a suffix.


The dsconf utility has no option to create fix-up tasks for the entryUUID plug-in

The dsconf utility does not provide an option to create fix-up tasks for the entryUUID plug-in. As a result, administrators cannot use dsconf to create a task to automatically add entryUUID attributes to existing entries. As a workaround, create a task manually:

# ldapadd -D "cn=Directory Manager" -W -H ldap:// -x

dn: cn=entryuuid_fixup_<time_stamp>,cn=entryuuid task,cn=tasks,cn=config
objectClass: top
objectClass: extensibleObject
basedn: <fixup base tree>
cn: entryuuid_fixup_<time_stamp>
filter: <filtered_entry>

After the task has been created, Directory Server fixes entries with missing or invalid entryUUID attributes.


MIT Kerberos does not support ECC certificates for PKINIT

MIT Kerberos does not implement the RFC5349 request for comments document, which describes the design of elliptic-curve cryptography (ECC) support in Public Key Cryptography for initial authentication (PKINIT). Consequently, the MIT krb5-pkinit package, used by RHEL, does not support ECC certificates. For more information, see Elliptic Curve Cryptography (ECC) Support for Public Key Cryptography for Initial Authentication in Kerberos (PKINIT).


The DEFAULT:SHA1 subpolicy has to be set on RHEL 9 clients for PKINIT to work against AD KDCs

The SHA-1 digest algorithm has been deprecated in RHEL 9, and CMS messages for Public Key Cryptography for initial authentication (PKINIT) are now signed with the stronger SHA-256 algorithm.

However, the Active Directory (AD) Kerberos Distribution Center (KDC) still uses the SHA-1 digest algorithm to sign CMS messages. As a result, RHEL 9 Kerberos clients fail to authenticate users by using PKINIT against an AD KDC.

To work around the problem, enable support for the SHA-1 algorithm on your RHEL 9 systems with the following command:

 # update-crypto-policies --set DEFAULT:SHA1


The PKINIT authentication of a user fails if a RHEL 9 Kerberos agent communicates with a non-RHEL-9 and non-AD Kerberos agent

If a RHEL 9 Kerberos agent, either a client or Kerberos Distribution Center (KDC), interacts with a non-RHEL-9 Kerberos agent that is not an Active Directory (AD) agent, the PKINIT authentication of the user fails. To work around the problem, perform one of the following actions:

  • Set the RHEL 9 agent’s crypto-policy to DEFAULT:SHA1 to allow the verification of SHA-1 signatures:

    # update-crypto-policies --set DEFAULT:SHA1
  • Update the non-RHEL-9 and non-AD agent to ensure it does not sign CMS data using the SHA-1 algorithm. For this, update your Kerberos client or KDC packages to the versions that use SHA-256 instead of SHA-1:

    • CentOS 9 Stream: krb5-1.19.1-15
    • RHEL 8.7: krb5-1.18.2-17
    • RHEL 7.9: krb5-1.15.1-53
    • Fedora Rawhide/36: krb5-1.19.2-7
    • Fedora 35/34: krb5-1.19.2-3

As a result, the PKINIT authentication of the user works correctly.

Note that for other operating systems, it is the krb5-1.20 release that ensures that the agent signs CMS data with SHA-256 instead of SHA-1.

See also The DEFAULT:SHA1 subpolicy has to be set on RHEL 9 clients for PKINIT to work against AD KDCs.


FIPS support for AD trust requires the AD-SUPPORT crypto subpolicy

Active Directory (AD) uses AES SHA-1 HMAC encryption types, which are not allowed in FIPS mode on RHEL 9 by default. If you want to use RHEL 9 IdM hosts with an AD trust, enable support for AES SHA-1 HMAC encryption types before installing IdM software.

Since FIPS compliance is a process that involves both technical and organizational agreements, consult your FIPS auditor before enabling the AD-SUPPORT subpolicy to allow technical measures to support AES SHA-1 HMAC encryption types, and then install RHEL IdM:

 # update-crypto-policies --set FIPS:AD-SUPPORT


Heimdal client fails to authenticate a user using PKINIT against RHEL 9 KDC

By default, a Heimdal Kerberos client initiates the PKINIT authentication of an IdM user by using Modular Exponential (MODP) Diffie-Hellman Group 2 for Internet Key Exchange (IKE). However, the MIT Kerberos Distribution Center (KDC) on RHEL 9 only supports MODP Group 14 and 16.

Consequently, the pre-authentication request fails with the krb5_get_init_creds: PREAUTH_FAILED error on the Heimdal client and Key parameters not accepted on the RHEL MIT KDC.

To work around this problem, ensure that the Heimdal client uses MODP Group 14. Set the pkinit_dh_min_bits parameter in the libdefaults section of the client configuration file to 1759:

pkinit_dh_min_bits = 1759
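
In context, the option belongs in the [libdefaults] section of the Heimdal client's configuration file, for example:

[libdefaults]
    pkinit_dh_min_bits = 1759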

As a result, the Heimdal client completes the PKINIT pre-authentication against the RHEL MIT KDC.


IdM in FIPS mode does not support using the NTLMSSP protocol to establish a two-way cross-forest trust

Establishing a two-way cross-forest trust between Active Directory (AD) and Identity Management (IdM) with FIPS mode enabled fails because the New Technology LAN Manager Security Support Provider (NTLMSSP) authentication is not FIPS-compliant. IdM in FIPS mode does not accept the RC4 NTLM hash that the AD domain controller uses when attempting to authenticate.


IdM to AD cross-realm TGS requests fail

The Privilege Attribute Certificate (PAC) information in IdM Kerberos tickets is now signed with AES SHA-2 HMAC encryption, which is not supported by Active Directory (AD).

Consequently, IdM to AD cross-realm TGS requests, that is, two-way trust setups, are failing with the following error:

Generic error (see e-text) while getting credentials for <service principal>


IdM Vault encryption and decryption fails in FIPS mode

The OpenSSL RSA-PKCS1v15 padding encryption is blocked if FIPS mode is enabled. Consequently, Identity Management (IdM) Vaults fail to work correctly as IdM is currently using the PKCS1v15 padding for wrapping the session key with the transport certificate.


Users without SIDs cannot log in to IdM after an upgrade

After upgrading your IdM replica to RHEL 9.2, the IdM Kerberos Distribution Center (KDC) might fail to issue ticket-granting tickets (TGTs) to users who do not have Security Identifiers (SIDs) assigned to their accounts. Consequently, the users cannot log in to their accounts.

To work around the problem, generate SIDs by running the following command as an IdM administrator on another IdM replica in the topology:

# ipa config-mod --enable-sid --add-sids

Afterward, if users still cannot log in, examine the Directory Server error log. You might have to adjust ID ranges to include user POSIX identities.

See the When upgrading to RHEL9, IDM users are not able to login anymore Knowledgebase solution for more information.


Migrated IdM users might be unable to log in due to mismatching domain SIDs

If you have used the ipa migrate-ds script to migrate users from one IdM deployment to another, those users might have problems using IdM services because their previously existing Security Identifiers (SIDs) do not have the domain SID of the current IdM environment. For example, those users can retrieve a Kerberos ticket with the kinit utility, but they cannot log in. To work around this problem, see the following Knowledgebase article: Migrated IdM users unable to log in due to mismatching domain SIDs.


MIT krb5 user fails to obtain an AD TGT because of incompatible encryption types generating the user PAC

In MIT krb5 1.20 and later packages, a Privilege Attribute Certificate (PAC) is included in all Kerberos tickets by default. The MIT Kerberos Distribution Center (KDC) selects the strongest encryption type available to generate the KDC checksum in the PAC, which currently is the AES HMAC-SHA2 encryption types defined in RFC8009. However, Active Directory (AD) does not support this RFC. Consequently, in an AD-MIT cross-realm setup, an MIT krb5 user fails to obtain an AD ticket-granting ticket (TGT) because the cross-realm TGT generated by MIT KDC contains an incompatible KDC checksum type in the PAC.

To work around the problem, set the disable_pac parameter to true for the MIT realm in the [realms] section of the /var/kerberos/krb5kdc/kdc.conf configuration file. As a result, the MIT KDC generates tickets without PAC, which means that AD skips the failing checksum verification and an MIT krb5 user can obtain an AD TGT.


Potential risk when using the default value for ldap_id_use_start_tls option

Using ldap:// without TLS for identity lookups can pose a risk for an attack vector, particularly a man-in-the-middle (MITM) attack, which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search.

Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls, defaults to false. Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap. Note that id_provider = ad and id_provider = ipa are not affected because they use encrypted connections protected by SASL and GSSAPI.

If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL.
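
For example, a minimal sketch of the relevant part of /etc/sssd/sssd.conf; the domain name and server URI are hypothetical:

[domain/example.com]
id_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_id_use_start_tls = true

Restart the sssd service after changing the file.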


Adding a RHEL 9 replica in FIPS mode to an IdM deployment in FIPS mode that was initialized with RHEL 8.6 or earlier fails

The default RHEL 9 FIPS cryptographic policy, which aims to comply with FIPS 140-3, does not allow the use of the AES HMAC-SHA1 encryption types' key derivation function as defined by RFC 3961, section 5.1.

This constraint is a blocker when adding a RHEL 9 Identity Management (IdM) replica in FIPS mode to a RHEL 8 IdM environment in FIPS mode in which the first server was installed on a RHEL 8.6 system or earlier. This is because there are no common encryption types between RHEL 9 and the previous RHEL versions, which commonly use the AES HMAC-SHA1 encryption types but do not use the AES HMAC-SHA2 encryption types.

You can view the encryption type of your IdM master key by entering the following command on the server:

# kadmin.local getprinc K/M | grep -E '^Key:'

To work around the problem, enable the use of AES HMAC-SHA1 on the RHEL 9 replica:

# update-crypto-policies --set FIPS:AD-SUPPORT

This workaround might violate FIPS compliance.

As a result, adding the RHEL 9 replica to the IdM deployment proceeds correctly.

Note that there is ongoing work to provide a procedure to generate missing AES HMAC-SHA2-encrypted Kerberos keys on RHEL 7 and RHEL 8 servers. This will achieve FIPS 140-3 compliance on the RHEL 9 replica. However, this process will not be fully automated, because the design of Kerberos key cryptography makes it impossible to convert existing keys to different encryption types. The only way is to ask users to renew their passwords.


SSSD registers the DNS names properly

If the DNS is set up incorrectly, SSSD always fails the first attempt to register the DNS name. To work around the problem, this update provides the new dns_resolver_use_search_list parameter. Set dns_resolver_use_search_list = false to avoid using the DNS search list.
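
The following is a minimal sketch of how the option might look in /etc/sssd/sssd.conf, assuming it is set in the domain section; verify the correct section for your SSSD version in the sssd.conf(5) man page:

[domain/example.com]
# do not use the DNS search list when registering the DNS name (workaround)
dns_resolver_use_search_list = false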


Directory Server terminates unexpectedly when started in referral mode

Due to a bug, global referral mode does not work in Directory Server. If you start the ns-slapd process with the refer option as the dirsrv user, Directory Server ignores the port settings and terminates unexpectedly. Trying to run the process as the root user changes SELinux labels and prevents the service from starting in normal mode in the future. No workarounds are available.


Directory Server can import LDIF files only from /var/lib/dirsrv/slapd-instance_name/ldif/

Since RHEL 8.3, Red Hat Directory Server (RHDS) uses its own private directories and the PrivateTmp systemd directive is enabled by default for the LDAP services. As a result, RHDS can only import LDIF files from the /var/lib/dirsrv/slapd-instance_name/ldif/ directory. If the LDIF file is stored in a different directory, such as /var/tmp, /tmp, or /root, the import fails with an error similar to the following:

Could not open LDIF file "/tmp/example.ldif", errno 2 (No such file or directory)

To work around this problem, complete the following steps:

  1. Move the LDIF file to the /var/lib/dirsrv/slapd-instance_name/ldif/ directory:

    # mv /tmp/example.ldif /var/lib/dirsrv/slapd-instance_name/ldif/
  2. Set permissions that allow the dirsrv user to read the file:

    # chown dirsrv /var/lib/dirsrv/slapd-instance_name/ldif/example.ldif
  3. Restore the SELinux context:

    # restorecon -Rv /var/lib/dirsrv/slapd-instance_name/ldif/

For more information, see the solution article LDAP Service cannot access files under the host’s /tmp and /var/tmp directories.


Installing a RHEL 7 IdM client with a RHEL 9.2+ IdM server in FIPS mode fails due to EMS enforcement

The TLS Extended Master Secret (EMS) extension (RFC 7627) is now mandatory for TLS 1.2 connections on FIPS-enabled RHEL 9.2 and later systems, in accordance with FIPS 140-3 requirements. However, the openssl version available in RHEL 7.9 and earlier does not support EMS. Consequently, installing a RHEL 7 Identity Management (IdM) client with a FIPS-enabled IdM server running on RHEL 9.2 or later fails.

If upgrading the host to RHEL 8 before installing an IdM client on it is not an option, work around the problem by removing the requirement for EMS usage on the RHEL 9 server by applying a NO-ENFORCE-EMS subpolicy on top of the FIPS crypto policy:

# update-crypto-policies --set FIPS:NO-ENFORCE-EMS

Note that this removal goes against the FIPS 140-3 requirements. As a result, you can establish and accept TLS 1.2 connections that do not use EMS, and the installation of a RHEL 7 IdM client succeeds.


11.12. Desktop

Firefox add-ons are disabled after upgrading to RHEL 9

If you upgrade from RHEL 8 to RHEL 9, all add-ons that you previously enabled in Firefox are disabled.

To work around the problem, manually reinstall or update the add-ons. As a result, the add-ons are enabled as expected.


VNC is not running after upgrading to RHEL 9

After upgrading from RHEL 8 to RHEL 9, the VNC server fails to start, even if it was previously enabled.

To work around the problem, manually enable the vncserver service after the system upgrade:

# systemctl enable --now vncserver@:port-number

As a result, VNC is now enabled and starts after every system boot as expected.


User Creation screen is unresponsive

When installing RHEL using a graphical user interface, the User Creation screen is unresponsive. As a consequence, creating users during installation is more difficult.

To work around this problem, use one of the following solutions to create users:

  • Run the installation in VNC mode and resize the VNC window.
  • Create users after completing the installation process.


11.13. Graphics infrastructures

NVIDIA drivers might revert to X.org

Under certain conditions, the proprietary NVIDIA drivers disable the Wayland display protocol and revert to the X.org display server:

  • If the version of the NVIDIA driver is lower than 470.
  • If the system is a laptop that uses hybrid graphics.
  • If you have not enabled the required NVIDIA driver options.

Additionally, Wayland is enabled but the desktop session uses X.org by default if the version of the NVIDIA driver is lower than 510.


Night Light is not available on Wayland with NVIDIA

When the proprietary NVIDIA drivers are enabled on your system, the Night Light feature of GNOME is not available in Wayland sessions. The NVIDIA drivers do not currently support Night Light.

Jira:RHELPLAN-119852


X.org configuration utilities do not work under Wayland

X.org utilities for manipulating the screen do not work in the Wayland session. Notably, the xrandr utility does not work under Wayland due to its different approach to handling resolutions, rotations, and layout.


11.14. The web console

VNC console works incorrectly at certain resolutions

When using the Virtual Network Computing (VNC) console under certain display resolutions, you might experience a mouse offset issue, or you might see only a part of the interface. Consequently, using the VNC console might not be possible. To work around this issue, you can try expanding the size of the VNC console or using the Desktop Viewer in the console tab to launch the remote viewer instead.


11.15. Red Hat Enterprise Linux system roles

The metrics system role does not work with disabled fact gathering

Ansible fact gathering might be disabled in your environment for performance or other reasons. In such configurations, it is not currently possible to use the metrics system role. To work around this problem, enable fact caching, or do not use the metrics system role if it is not possible to use fact gathering.
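
One possible way to enable fact caching is in ansible.cfg; the following sketch assumes the jsonfile cache plugin and an illustrative cache path:

[defaults]
# cache gathered facts so roles can reuse them between runs
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
# cache validity in seconds (illustrative value)
fact_caching_timeout = 86400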


If firewalld.service is masked, using the firewall RHEL system role fails

If firewalld.service is masked on a RHEL system, the firewall RHEL system role fails. To work around this problem, unmask the firewalld.service:

# systemctl unmask firewalld.service


Unable to register systems with environment names

The rhc system role fails to register the system when you specify environment names in rhc_environment. As a workaround, use environment IDs instead of environment names when registering.
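
The following is an illustrative, hypothetical sketch of the workaround in a playbook; the environment ID value is a placeholder, the variable name is taken from this note, and the role invocation is shown in the legacy role format:

- name: Register systems by using the rhc system role (sketch)
  hosts: all
  vars:
    # Workaround: specify the environment ID, not the environment name
    rhc_environment: "1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d"
  roles:
    - rhel-system-roles.rhc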


11.16. Virtualization

Installing a virtual machine over https or ssh in some cases fails

Currently, the virt-install utility fails when attempting to install a guest operating system (OS) from an ISO source over an https or ssh connection - for example, using virt-install --cdrom https://example/path/to/image.iso. Instead of creating a virtual machine (VM), the described operation terminates unexpectedly with an internal error: process exited while connecting to monitor message.

Similarly, using the RHEL 9 web console to install a guest OS fails and displays an Unknown driver 'https' error if you use an https or ssh URL, or the Download OS function.

To work around this problem, install qemu-kvm-block-curl and qemu-kvm-block-ssh on the host to enable https and ssh protocol support, respectively. Alternatively, use a different connection protocol or a different installation source.
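
For example, to install both packages on the host (package names as given above):

# dnf install qemu-kvm-block-curl qemu-kvm-block-ssh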


Using NVIDIA drivers in virtual machines disables Wayland

Currently, NVIDIA drivers are not compatible with the Wayland graphical session. As a consequence, RHEL guest operating systems that use NVIDIA drivers automatically disable Wayland and load an Xorg session instead. This primarily occurs in the following scenarios:

  • When you pass through an NVIDIA GPU device to a RHEL virtual machine (VM)
  • When you assign an NVIDIA vGPU mediated device to a RHEL VM


The Milan VM CPU type is sometimes not available on AMD Milan systems

On certain AMD Milan systems, the Enhanced REP MOVSB (erms) and Fast Short REP MOVSB (fsrm) feature flags are disabled in the BIOS by default. Consequently, the Milan CPU type might not be available on these systems. In addition, VM live migration between Milan hosts with different feature flag settings might fail. To work around these problems, manually turn on erms and fsrm in the BIOS of your host.


A hostdev interface with failover settings cannot be hot-plugged after being hot-unplugged

After removing a hostdev network interface with failover configuration from a running virtual machine (VM), the interface currently cannot be re-attached to the same running VM.


Live post-copy migration of VMs with failover VFs fails

Currently, attempting to post-copy migrate a running virtual machine (VM) fails if the VM uses a device with the virtual function (VF) failover capability enabled. To work around the problem, use the standard migration type, rather than post-copy migration.
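
The following is a sketch of a standard (pre-copy) live migration, assuming a hypothetical VM named testguest and a destination host destination.example.com; the key point is to omit the --postcopy option:

# virsh migrate --live testguest qemu+ssh://destination.example.com/system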


Host network cannot ping VMs with VFs during live migration

When live migrating a virtual machine (VM) with a configured virtual function (VF), such as a VM that uses virtual SR-IOV software, the network of the VM is not visible to other devices, and the VM cannot be reached by commands such as ping. After the migration is finished, however, the problem no longer occurs.


Failover virtio NICs are not assigned an IP address on Windows virtual machines

Currently, when starting a Windows virtual machine (VM) with only a failover virtio NIC, the VM fails to assign an IP address to the NIC. Consequently, the NIC is unable to set up a network connection. Currently, there is no workaround.


Disabling AVX causes VMs to become unbootable

On a host machine that uses a CPU with Advanced Vector Extensions (AVX) support, attempting to boot a VM with AVX explicitly disabled currently fails, and instead triggers a kernel panic in the VM.


Windows VM fails to get IP address after network interface reset

Sometimes, Windows virtual machines fail to get an IP address after an automatic network interface reset. As a consequence, the VM fails to connect to the network. To work around this problem, disable and re-enable the network adapter driver in the Windows Device Manager.


Broadcom network adapters work incorrectly on Windows VMs after a live migration

Currently, network adapters from the Broadcom family of devices, such as Broadcom, Qlogic, or Marvell, cannot be hot-unplugged during live migration of Windows virtual machines (VMs). As a consequence, the adapters work incorrectly after the migration is complete.

This problem affects only those adapters that are attached to Windows VMs using Single-root I/O virtualization (SR-IOV).

Bugzilla:2090712, Bugzilla:2091528, Bugzilla:2111319

Windows Server 2016 VMs sometimes stop working after hot-plugging a vCPU

Currently, assigning a vCPU to a running virtual machine (VM) with a Windows Server 2016 guest operating system might cause a variety of problems, such as the VM terminating unexpectedly, becoming unresponsive, or rebooting.


Using a large number of queues might cause Windows virtual machines to fail

Windows virtual machines (VMs) might fail when the virtual Trusted Platform Module (vTPM) device is enabled and the multi-queue virtio-net feature is configured to use more than 250 queues.

This problem is caused by a limitation in the vTPM device. The vTPM device has a hardcoded limit on the maximum number of opened file descriptors. Since multiple file descriptors are opened for every new queue, the internal vTPM limit can be exceeded, causing the VM to fail.

To work around this problem, choose one of the following two options:

  • Keep the vTPM device enabled, but use fewer than 250 queues, as shown in the sketch after this list.
  • Disable the vTPM device to use more than 250 queues.
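
For illustration, the following is a minimal sketch of a multi-queue virtio-net interface definition in the libvirt domain XML that stays below the limit; the source network name is a placeholder:

<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <!-- keep the queue count below 250 while the vTPM device is enabled -->
  <driver name='vhost' queues='128'/>
</interface>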


Redundant error messages on VMs with NVIDIA passthrough devices

When using an Intel host machine with a RHEL 9.2 operating system, virtual machines (VMs) with a passed-through NVIDIA GPU device frequently log the following error message:

Spurious APIC interrupt (vector 0xFF) on CPU#2, should never happen.

However, this error message does not impact the functionality of the VM and can be ignored. For details, see the Red Hat Knowledgebase.


Some Windows guests fail to boot after a v2v conversion on hosts with AMD EPYC CPUs

After using the virt-v2v utility to convert a virtual machine (VM) that uses Windows 11 or Windows Server 2022 as the guest OS, the VM currently fails to boot. This occurs on hosts that use AMD EPYC series CPUs.


Restarting the OVS service on a host might block network connectivity on its running VMs

When the Open vSwitch (OVS) service restarts or crashes on a host, virtual machines (VMs) that are running on this host cannot recover the state of the networking device. As a consequence, VMs might be completely unable to receive packets.

This problem only affects systems that use the packed virtqueue format in their virtio networking stack.

To work around this problem, use the packed=off parameter in the virtio networking device definition to disable packed virtqueue. With packed virtqueue disabled, the state of the networking device can, in some situations, be recovered from RAM.
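
For example, the following sketch disables the packed virtqueue format on a virtio network interface in the libvirt domain XML; the source network name is a placeholder:

<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <!-- disable the packed virtqueue format (workaround) -->
  <driver packed='off'/>
</interface>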


The NVIDIA GPU driver stops working after the VM shutdown

The RHEL kernel has adopted an upstream Linux change that aligns device power transition delays more closely with those required by the PCIe specification. As a consequence, due to the audio function of the GPU, some NVIDIA GPUs might stop working after the shutdown of a VM.

To work around the problem, unassign the audio function of the GPU from the VM. In addition, due to the DMA isolation requirements for device assignment (that is, IOMMU grouping), bind the audio function to the vfio-pci driver, which allows the GPU function to continue to be assigned and function normally.
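
As an illustrative sketch, assuming the audio function of the GPU appears at the hypothetical PCI address 0000:01:00.1, you can detach it from its host driver so that it is handled by the vfio-pci stub driver:

# virsh nodedev-detach pci_0000_01_00_1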


nodedev-dumpxml does not list attributes correctly for certain mediated devices

Currently, the nodedev-dumpxml command does not list attributes correctly for mediated devices that were created using the nodedev-create command. To work around this problem, use the nodedev-define and nodedev-start commands instead, as shown below.
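
For example, assuming a mediated device description saved in a hypothetical mdev.xml file, define and then start the device; mdev_device_name is a placeholder for the node device name that virsh reports after the define step:

# virsh nodedev-define mdev.xml
# virsh nodedev-start mdev_device_name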


Recovering an interrupted post-copy VM migration might fail

If a post-copy migration of a virtual machine (VM) is interrupted and then immediately resumed on the same incoming port, the migration might fail with the following error: Address already in use

To work around this problem, wait at least 10 seconds before resuming the post-copy migration or switch to another port for migration recovery.


virtiofs devices cannot be attached after restarting virtqemud or libvirtd

Currently, restarting the virtqemud or libvirtd services prevents virtiofs storage devices from being attached to virtual machines on your host.


virsh blkiotune --weight command fails to set the correct cgroup I/O controller value

Currently, using the virsh blkiotune --weight command to set the VM weight does not work as expected. The command fails to set the correct io.bfq.weight value in the cgroup I/O controller interface file. There is no workaround at this time.


Hotplugging a Watchdog card to a virtual machine fails

Currently, if there are no PCI slots available, adding a Watchdog card to a running virtual machine (VM) fails with the following error:

Failed to configure watchdog
ERROR Error attempting device hotplug: internal error: No more available PCI slots

To work around this problem, shut down the VM before adding the Watchdog card.
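
The following is a sketch of the workaround, assuming a hypothetical VM named testguest and a watchdog device definition saved in a watchdog.xml file:

# cat watchdog.xml
<watchdog model='i6300esb' action='reset'/>
# virsh shutdown testguest
# virsh attach-device testguest watchdog.xml --config
# virsh start testguest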


NUMA node mapping not working correctly on AMD EPYC CPUs

QEMU does not handle NUMA node mapping on AMD EPYC CPUs correctly. As a result, the performance of virtual machines (VMs) with these CPUs might be negatively impacted if using a NUMA node configuration. In addition, the VMs display a warning similar to the following during boot:

sched: CPU #4's llc-sibling CPU #3 is not on the same node! [node: 1 != 0]. Ignoring dependency.
WARNING: CPU: 4 PID: 0 at arch/x86/kernel/smpboot.c:415 topology_sane.isra.0+0x6b/0x80

To work around this issue, do not use AMD EPYC CPUs for NUMA node configurations.


NFS failure during VM migration causes migration failure and source VM coredump

Currently, if the NFS service or server is shut down during virtual machine (VM) migration, the source VM’s QEMU is unable to reconnect to the NFS server when it starts running again. As a result, the migration fails and a coredump is initiated on the source VM. Currently, there is no workaround available.


PCIe ATS devices do not work on Windows VMs

When you configure a PCIe Address Translation Services (ATS) device in the XML configuration of a virtual machine (VM) with a Windows guest operating system, the guest does not enable the ATS device after booting the VM. This is because Windows currently does not support ATS on virtio devices.


Kdump fails on virtual machines with AMD SEV-SNP

Currently, kdump fails on RHEL 9 virtual machines (VMs) that use the AMD Secure Encrypted Virtualization (SEV) with the Secure Nested Paging (SNP) feature.


11.17. RHEL in cloud environments

Cloning or restoring RHEL 9 virtual machines that use LVM on Nutanix AHV causes non-root partitions to disappear

When running a RHEL 9 guest operating system on a virtual machine (VM) hosted on the Nutanix AHV hypervisor, restoring the VM from a snapshot or cloning the VM currently causes non-root partitions in the VM to disappear if the guest is using Logical Volume Management (LVM). As a consequence, the following problems occur:

  • After restoring the VM from a snapshot, the VM cannot boot, and instead enters emergency mode.
  • A VM created by cloning cannot boot, and instead enters emergency mode.

To work around these problems, do the following in emergency mode of the VM:

  1. Remove the LVM system devices file: rm /etc/lvm/devices/system.devices
  2. Recreate LVM device settings: vgimportdevices -a
  3. Reboot the VM

This makes it possible for the cloned or restored VM to boot up correctly.

Alternatively, to prevent the issue from occurring, do the following before cloning a VM or creating a VM snapshot:

  1. Uncomment the use_devicesfile = 0 line in the /etc/lvm/lvm.conf file
  2. Reboot the VM


Customizing RHEL 9 guests on ESXi sometimes causes networking problems

Currently, customizing a RHEL 9 guest operating system in the VMware ESXi hypervisor does not work correctly with NetworkManager key files. As a consequence, if the guest is using such a key file, it will have incorrect network settings, such as the IP address or the gateway.

For details and workaround instructions, see the VMware Knowledge Base.


RHEL instances on Azure fail to boot if provisioned by cloud-init and configured with an NFSv3 mount entry

Currently, booting a RHEL virtual machine (VM) on the Microsoft Azure cloud platform fails if the VM was provisioned by the cloud-init tool and the guest operating system of the VM has an NFSv3 mount entry in the /etc/fstab file.


Setting static IP in a RHEL virtual machine on a VMware host does not work

Currently, when using RHEL as a guest operating system of a virtual machine (VM) on a VMware host, the DatasourceOVF function does not work correctly. As a consequence, if you use the cloud-init utility to set the VM’s network to static IP and then reboot the VM, the VM’s network will be changed to DHCP.

To work around this issue, see the VMware Knowledge Base.


11.18. Supportability

Timeout when running sos report on IBM Power Systems, Little Endian

When running the sos report command on IBM Power Systems, Little Endian with hundreds or thousands of CPUs, the processor plugin reaches its default timeout of 300 seconds when collecting the large contents of the /sys/devices/system/cpu directory. As a workaround, increase the plugin’s timeout accordingly:

  • For one-time setting, run:
# sos report -k processor.timeout=1800
  • For a permanent change, edit the [plugin_options] section of the /etc/sos/sos.conf file:
# Specify any plugin options and their values here. These options take the form
# plugin_name.option_name = value
#rpm.rpmva = off
processor.timeout = 1800

The example value is set to 1800. The particular timeout value depends highly on the specific system. To set the plugin’s timeout appropriately, you can first estimate the time needed to collect the plugin with no timeout by running the following command:

# time sos report -o processor -k processor.timeout=0 --batch --build


11.19. Containers

Running systemd within an older container image does not work

Running systemd within an older container image, for example, centos:7, does not work:

$ podman run --rm -ti centos:7 /usr/lib/systemd/systemd
 Storing signatures
 Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
 [!!!!!!] Failed to mount API filesystems, freezing.

To work around this problem, use the following commands:

# mkdir /sys/fs/cgroup/systemd
# mount none -t cgroup -o none,name=systemd /sys/fs/cgroup/systemd
# podman run --runtime /usr/bin/crun --annotation=run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup --rm -ti centos:7 /usr/lib/systemd/systemd

