Chapter 11. Known issues


This part describes known issues in Red Hat Enterprise Linux 9.1.

11.1. Installer and image creation

The reboot --kexec and inst.kexec commands do not provide a predictable system state

Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters does not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results.

Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux.

(BZ#1697896)

Local Media installation source is not detected when booting the installation from a USB that is created using a third party tool

When booting the RHEL installation from a USB that is created using a third party tool, the installer fails to detect the Local Media installation source (only Red Hat CDN is detected).

This issue occurs because the default boot option inst.stage2= attempts to search for the iso9660 image format. However, a third party tool might create an ISO image with a different format.

As a workaround, use one of the following solutions:

  • When booting the installation, press the Tab key to edit the kernel command line, and change the boot option inst.stage2= to inst.repo=.
  • To create a bootable USB device on Windows, use Fedora Media Writer.
  • When using a third party tool like Rufus to create a bootable USB device, first regenerate the RHEL ISO image on a Linux system, and then use the third party tool to create a bootable USB device.

For more information on the steps involved in performing any of the specified workarounds, see Installation media is not auto detected during the installation of RHEL 8.3.

(BZ#1877697)

The auth and authconfig Kickstart commands require the AppStream repository

The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if the auth or authconfig command is used. However, by design, the authselect-compat package is only available in the AppStream repository.

To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the authselect Kickstart command during installation.

(BZ#1640697)

Driver disk menu fails to display user inputs on the console

When you start the RHEL installation using the inst.dd option on the kernel command line with a driver disk, the console fails to display the user input. Consequently, the application appears not to respond to user input and seems to freeze, which is confusing for users. However, this behavior does not affect the functionality, and the user input is registered after pressing Enter.

As a workaround, ignore the absence of user input in the console and press Enter after you finish each input.

(BZ#2109231)

Unexpected SELinux policies on systems where Anaconda is running as an application

When Anaconda is running as an application on an already installed system (for example, to perform another installation to an image file using the --image anaconda option), the system is not prohibited from modifying the SELinux types and attributes during installation. As a consequence, certain elements of SELinux policy might change on the system where Anaconda is running. To work around this problem, do not run Anaconda on a production system. Instead, run Anaconda in a temporary virtual machine so that the SELinux policy on the production system is not modified. Running Anaconda as part of the system installation process, such as installing from boot.iso or dvd.iso, is not affected by this issue.

(BZ#2050140)

The USB CD-ROM drive is not available as an installation source in Anaconda

Installation fails when the USB CD-ROM drive is the installation source and the Kickstart ignoredisk --only-use= command is specified. In this case, Anaconda cannot find and use this source disk.

To work around this problem, use the harddrive --partition=sdX --dir=/ command to install from the USB CD-ROM drive. As a result, the installation does not fail.

(BZ#1914955)

Hard drive partitioned installations with iso9660 filesystem fail

You cannot install RHEL on systems where the hard drive is partitioned with the iso9660 filesystem. This is because the updated installation code ignores any hard disk containing an iso9660 file system partition. This happens even when RHEL is installed without using a DVD.

To work around this problem, add the following script in the Kickstart file to format the disk before the installation starts.

Note: Before performing the workaround, back up the data available on the disk. The wipefs command removes all existing data from the disk.

%pre
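# Warning: wipefs removes all existing data and partition signatures from /dev/sda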
wipefs -a /dev/sda
%end

As a result, installations work as expected without any errors.

(BZ#1929105)

Anaconda fails to verify existence of an administrator user account

While installing RHEL using a graphical user interface, Anaconda fails to verify if the administrator account has been created. As a consequence, users might install a system without any administrator user account.

To work around this problem, ensure that you configure an administrator user account, or that the root password is set and the root account is unlocked. As a result, users can perform administrative tasks on the installed system.

(BZ#2047713)

New XFS features prevent booting of PowerNV IBM POWER systems with firmware older than version 5.10

PowerNV IBM POWER systems use a Linux kernel for firmware, and use Petitboot as a replacement for GRUB. This results in the firmware kernel mounting /boot and Petitboot reading the GRUB config and booting RHEL.

The RHEL 9 kernel introduces the bigtime=1 and inobtcount=1 features to the XFS filesystem, which firmware kernels older than version 5.10 do not understand.

To work around this problem, you can use another filesystem for /boot, for example ext4.

(BZ#1997832)

Cannot install RHEL when PReP is not 4 or 8 MiB in size

The RHEL installer cannot install the boot loader if the PowerPC Reference Platform (PReP) partition is of a different size than 4 MiB or 8 MiB on a disk that uses 4 KiB sectors. As a consequence, you cannot install RHEL on the disk.

To work around the problem, make sure that the PReP partition is exactly 4 MiB or 8 MiB in size, and that the size is not rounded to another value. As a result, the installer can now install RHEL on the disk.

(BZ#2026579)

The installer displays an incorrect total disk space while custom partitioning with multipath devices

The installer does not filter out individual paths of multipath devices during custom partitioning. This causes the installer to display individual paths to multipath devices, and users can select individual paths to multipath devices for the created partitions. As a consequence, an incorrect sum of the total disk space is displayed, because the size of each individual path is added to the total disk space.

As a workaround, use only the multipath devices and not individual paths while custom partitioning, and ignore the incorrectly computed total disk space.

(BZ#2052938)

Installation fails with NVMe over Fibre Channel devices

When installing RHEL, the installer shows and allows selecting Non-volatile Memory Express (NVMe) over Fibre Channel devices. Use of such devices during the installation process is not supported. As a result, the installation process might fail or the installed system might fail to boot correctly.

To work around this problem, do not use NVMe over Fibre Channel devices during interactive installation (text or graphical mode). When running a Kickstart installation, configure the system to ignore NVMe over Fibre Channel devices by using the ignoredisk --drives=<IGNORE_DISKS> Kickstart command, replacing <IGNORE_DISKS> with the NVMe over Fibre Channel devices. Alternatively, you can define the disks Kickstart uses during installation with ignoredisk --only-use=<ONLY_USE_DISKS>, replacing <ONLY_USE_DISKS> with supported devices.
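
For example, a Kickstart line that restricts the installation to a single supported disk might look as follows (the device name sda is illustrative):

ignoredisk --only-use=sda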

Note

Installation fails for NVMe over Fibre Channel devices only. Locally attached NVMe devices work correctly.

For detailed information on the ignoredisk Kickstart command, see Kickstart commands for handling storage in the Performing an advanced RHEL 9 installation guide.

(BZ#2107346)

RHEL for Edge installer image fails to create mount points when installing an rpm-ostree payload

When deploying rpm-ostree payloads, used for example in a RHEL for Edge installer image, the installer does not properly create some mount points for custom partitions. As a consequence, the installation is aborted with the following error:

The command 'mount --bind /mnt/sysimage/data /mnt/sysroot/data' exited with the code 32.

To work around this issue:

  • Use an automatic partitioning scheme and do not add any mount points manually.
  • Manually assign mount points only inside the /var directory (for example, /var/my-mount-point) and in the following standard directories: /, /boot, /var.

As a result, the installation process finishes successfully.
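
For illustration, a mount point that conforms to this limitation might be declared in a Kickstart file as follows (the path and size are illustrative):

part /var/my-mount-point --fstype=xfs --size=2048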

(BZ#2125542)

NetworkManager fails to start after the installation when connected to a network but without DHCP or a static IP address configured

Starting with RHEL 9.0, Anaconda activates network devices automatically when there is no specific ip= or Kickstart network configuration set. Anaconda creates a default persistent configuration file for each Ethernet device. The connection profile has the ONBOOT and autoconnect values set to true. As a consequence, during the start of the installed system, RHEL activates the network devices, and the NetworkManager-wait-online service fails.

As a workaround, do one of the following:

  • Delete all connections using the nmcli utility except the one connection that you want to use. For example:

    1. List all connection profiles:

      # nmcli connection show
    2. Delete the connection profiles that you do not require:

      # nmcli connection delete <connection_name>

      Replace <connection_name> with the name of the connection you want to delete.

  • Disable the auto connect network feature in Anaconda if no specific ip= or Kickstart network configuration is set:

    1. In the Anaconda GUI, navigate to Network & Host Name.
    2. Select a network device to disable.
    3. Click Configure.
    4. On the General tab, deselect the Connect automatically with priority check box.
    5. Click Save.

(BZ#2115783)

RHEL installer does not process the inst.proxy boot option correctly

When running Anaconda, the installation program does not process the inst.proxy boot option correctly. As a consequence, you cannot use the specified proxy to fetch the installation image.

To work around this issue, use one of the following solutions:

  • Use the latest version of the RHEL distribution.
  • Use the proxy boot option instead of the inst.proxy boot option.

(JIRA:RHELDOCS-18764)

RHEL installation fails on IBM Z architectures with multi-LUNs

RHEL installation fails on IBM Z architectures when using multiple LUNs during installation. Due to the multipath setup of FCP and the LUN auto-scan behavior, the length of the kernel command line in the configuration file exceeds 896 bytes.

To work around this problem, you can do one of the following:

  • Install the latest version of RHEL (RHEL 9.2 or later).
  • Install the RHEL system with a single LUN and add additional LUNs post installation.
  • Optimize the redundant zfcp entries in the boot configuration on the installed system.
  • Create a physical volume (pvcreate) for each of the additional LUNs listed under /dev/mapper/.
  • Extend the VG with PVs, for example, vgextend <vg_name> /dev/mapper/mpathX.
  • Increase the LV as needed, for example, lvextend -r -l +100%FREE /dev/<vg_name>/root.

For more information, see the KCS solution.

(JIRA:RHELDOCS-18638)

RHEL installer does not automatically discover or use iSCSI devices as boot devices on aarch64

The absence of the iscsi_ibft kernel module in RHEL installers running on aarch64 prevents automatic discovery of iSCSI devices defined in firmware. These devices are neither automatically visible in the installer nor selectable as boot devices when added manually by using the GUI. As a workaround, add the inst.nonibftiscsiboot parameter to the kernel command line when booting the installer, and then manually attach iSCSI devices through the GUI. As a result, the installer can recognize the attached iSCSI devices as bootable, and the installation completes as expected.

For more information, see the KCS solution.

(JIRA:RHEL-56135)

11.2. Subscription management

The subscription-manager utility retains nonessential text in the terminal after completing a command

Starting with RHEL 9.1, the subscription-manager utility displays progress information while processing an operation. For some languages (typically non-Latin), progress messages might not be cleared after the operation finishes. As a result, you might see parts of old progress messages in the terminal.

Note that this is not a functional failure for subscription-manager.

To work around this problem, perform either of the following steps:

  • Include the --no-progress-messages option when running subscription-manager commands in the terminal.
  • Configure subscription-manager to operate without displaying progress messages by entering the following command:

    # subscription-manager config --rhsm.progress_messages=0

(BZ#2136694)

11.3. Software management

The installation process sometimes becomes unresponsive

When you install RHEL, the installation process sometimes becomes unresponsive. The /tmp/packaging.log file displays the following message at the end:

10:20:56,416 DDEBUG dnf: RPM transaction over.

To work around this problem, restart the installation process.

(BZ#2073510)

A security DNF upgrade fails for packages that change their architecture through the upgrade

The patch for BZ#2108969, released with the RHBA-2022:8295 advisory, introduced the following regression: The DNF upgrade using security filters fails for packages that change their architecture from or to noarch through the upgrade. Consequently, it can leave the system in a vulnerable state.

To work around this problem, perform the regular upgrade without security filters.

(BZ#2108969)

11.4. Shells and command-line tools

ReaR fails during recovery if the TMPDIR variable is set in the configuration file

Setting and exporting TMPDIR in the /etc/rear/local.conf or /etc/rear/site.conf ReaR configuration file does not work and is deprecated.

The ReaR default configuration file /usr/share/rear/conf/default.conf contains the following instructions:

# To have a specific working area directory prefix for Relax-and-Recover
# specify in /etc/rear/local.conf something like
#
# export TMPDIR="/prefix/for/rear/working/directory"
#
# where /prefix/for/rear/working/directory must already exist.
# This is useful for example when there is not sufficient free space
# in /tmp or $TMPDIR for the ISO image or even the backup archive.

These instructions do not work correctly because the TMPDIR variable keeps the same value in the rescue environment, which is wrong if the directory specified in the TMPDIR variable does not exist in the rescue image.

As a consequence, setting and exporting TMPDIR in the /etc/rear/local.conf file leads to the following error when the rescue image is booted:

mktemp: failed to create file via template '/prefix/for/rear/working/directory/tmp.XXXXXXXXXX': No such file or directory
cp: missing destination file operand after '/etc/rear/mappings/mac'
Try 'cp --help' for more information.
No network interface mapping is specified in /etc/rear/mappings/mac

or to the following error and a later abort when running rear recover:

ERROR: Could not create build area

To work around this problem, if you want a custom temporary directory, specify it for ReaR temporary files by exporting the TMPDIR variable in the shell environment before executing ReaR. For example, execute the export TMPDIR=… statement and then execute the rear command in the same shell session or script, as shown below. As a result, the recovery is successful in the described configuration.
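
A minimal sketch (the directory is illustrative and must already exist):

# export TMPDIR=/var/tmp
# rear mkrescue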

Jira:RHEL-24847

Renaming network interfaces using ifcfg files fails

On RHEL 9, the initscripts package is not installed by default. Consequently, renaming network interfaces using ifcfg files fails. To solve this problem, Red Hat recommends that you use udev rules or link files to rename interfaces. For further details, see Consistent network interface device naming and the systemd.link(5) man page.

If you cannot use one of the recommended solutions, install the initscripts package.
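
For example, a minimal systemd link file that renames an interface based on its MAC address might look as follows (the MAC address and interface name are illustrative); save it as, for example, /etc/systemd/network/70-lan0.link:

[Match]
MACAddress=00:53:00:11:22:33

[Link]
Name=lan0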

(BZ#2018112)

The chkconfig package is not installed by default in RHEL 9

The chkconfig package, which updates and queries runlevel information for system services, is not installed by default in RHEL 9.

To manage services, use the systemctl commands or install the chkconfig package manually.
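
For example, where you previously enabled a service at boot with chkconfig <service> on, use systemctl instead (the httpd service name is illustrative):

# systemctl enable --now httpd.service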

For more information about systemd, see Managing systemd. For instructions on how to use the systemctl utility, see Managing system services with systemctl.

(BZ#2053598)

11.5. Infrastructure services

Both bind and unbound disable validation of SHA-1-based signatures

The bind and unbound components disable validation support of all RSA/SHA1 (algorithm number 5) and RSASHA1-NSEC3-SHA1 (algorithm number 7) signatures, and the SHA-1 usage for signatures is restricted in the DEFAULT system-wide cryptographic policy.

As a result, certain DNSSEC records signed with the SHA-1, RSA/SHA1, and RSASHA1-NSEC3-SHA1 digest algorithms fail to verify in Red Hat Enterprise Linux 9 and the affected domain names become vulnerable.

To work around this problem, upgrade to a different signature algorithm, such as RSA/SHA-256 or elliptic curve keys.

For more information and a list of top-level domains that are affected and vulnerable, see the DNSSEC records signed with RSASHA1 fail to verify solution.

(BZ#2070495)

named fails to start if the same writable zone file is used in multiple zones

BIND does not allow the same writable zone file in multiple zones. Consequently, if a configuration includes multiple zones which share a path to a file that can be modified by the named service, named fails to start. To work around this problem, use the in-view clause to share one zone between multiple views and make sure to use different paths for different zones. For example, include the view names in the path.

Note that writable zone files are typically used in zones with allowed dynamic updates, slave zones, or zones maintained by DNSSEC.
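
A minimal named.conf sketch of the in-view approach might look as follows (the view names, zone name, client ranges, and file path are illustrative):

view "internal" {
    match-clients { 10.0.0.0/8; };
    zone "dynamic.example.com" {
        type primary;
        file "dynamic/dynamic.example.com.db";
    };
};

view "external" {
    match-clients { any; };
    // Reference the zone defined in the "internal" view instead of
    // pointing a second zone statement at the same writable file.
    zone "dynamic.example.com" {
        in-view "internal";
    };
};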

(BZ#1984982)

Setting the console keymap requires the libxkbcommon library on your minimal install

In RHEL 9, certain systemd library dependencies have been converted from dynamic linking to dynamic loading, so that your system opens and uses the libraries at runtime when they are available. With this change, functionality that depends on such libraries is not available unless you install the necessary library. This also affects setting the keyboard layout on systems with a minimal install. As a result, the localectl --no-convert set-x11-keymap gb command fails.

To work around this problem, install the libxkbcommon library:

# dnf install libxkbcommon

(BZ#2214130)

11.6. Security

OpenSSL does not detect if a PKCS #11 token supports the creation of raw RSA or RSA-PSS signatures

The TLS 1.3 protocol requires support for RSA-PSS signatures. If a PKCS #11 token does not support raw RSA or RSA-PSS signatures, server applications that use the OpenSSL library fail to work with an RSA key if the key is held by the PKCS #11 token. As a result, TLS communication fails in the described scenario.

To work around this problem, configure servers and clients to use TLS version 1.2 as the highest TLS protocol version available.

(BZ#1681178)

OpenSSL incorrectly handles PKCS #11 tokens that do not support raw RSA or RSA-PSS signatures

The OpenSSL library does not detect key-related capabilities of PKCS #11 tokens. Consequently, establishing a TLS connection fails when a signature is created with a token that does not support raw RSA or RSA-PSS signatures.

To work around the problem, add the following lines after the .include line at the end of the crypto_policy section in the /etc/pki/tls/openssl.cnf file:

SignatureAlgorithms = RSA+SHA256:RSA+SHA512:RSA+SHA384:ECDSA+SHA256:ECDSA+SHA512:ECDSA+SHA384
MaxProtocol = TLSv1.2

As a result, a TLS connection can be established in the described scenario.

(BZ#1685470)

scp empties files copied to themselves when a specific syntax is used

The scp utility changed from the Secure copy protocol (SCP) to the more secure SSH file transfer protocol (SFTP). Consequently, copying a file from a location to the same location erases the file content. The problem affects the following syntax:

scp localhost:/myfile localhost:/myfile

To work around this problem, do not copy files to a destination that is the same as the source location using this syntax.

The problem has been fixed for the following syntaxes:

  • scp /myfile localhost:/myfile
  • scp localhost:~/myfile ~/myfile

(BZ#2056884)

PSK ciphersuites do not work with the FUTURE crypto policy

Pre-shared key (PSK) ciphersuites are not recognized as performing perfect forward secrecy (PFS) key exchange methods. As a consequence, the ECDHE-PSK and DHE-PSK ciphersuites do not work with OpenSSL configured to SECLEVEL=3, for example with the FUTURE crypto policy. As a workaround, you can set a less restrictive crypto policy or set a lower security level (SECLEVEL) for applications that use PSK ciphersuites.
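
For example, one way to set a less restrictive policy system-wide is to switch back to the DEFAULT policy (a sketch; choose the policy appropriate for your environment):

# update-crypto-policies --set DEFAULT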

(BZ#2060044)

GnuPG incorrectly allows using SHA-1 signatures even if disallowed by crypto-policies

The GNU Privacy Guard (GnuPG) cryptographic software can create and verify signatures that use the SHA-1 algorithm regardless of the settings defined by the system-wide cryptographic policies. Consequently, you can use SHA-1 for cryptographic purposes in the DEFAULT cryptographic policy, which is not consistent with the system-wide deprecation of this insecure algorithm for signatures.

To work around this problem, do not use GnuPG options that involve SHA-1. As a result, you will prevent GnuPG from lowering the default system security by using the non-secure SHA-1 signatures.

(BZ#2070722)

gpg-agent does not work as an SSH agent in FIPS mode

The gpg-agent tool creates MD5 fingerprints when adding keys to the ssh-agent program even though FIPS mode disables the MD5 digest. Consequently, the ssh-add utility fails to add the keys to the authentication agent.

To work around the problem, create the ~/.gnupg/sshcontrol file without using the gpg-agent --daemon --enable-ssh-support command. For example, you can paste the output of the gpg --list-keys command in the <FINGERPRINT> 0 format to ~/.gnupg/sshcontrol. As a result, gpg-agent works as an SSH authentication agent.

(BZ#2073567)

Default SELinux policy allows unconfined executables to make their stack executable

The default state of the selinuxuser_execstack boolean in the SELinux policy is on, which means that unconfined executables can make their stack executable. Executables should not use this option, and it might indicate poorly coded executables or a possible attack. However, due to compatibility with other tools, packages, and third-party products, Red Hat cannot change the value of the boolean in the default policy. If your scenario does not depend on such compatibility aspects, you can turn the boolean off in your local policy by entering the command setsebool -P selinuxuser_execstack off.

(BZ#2064274)

Remediating service-related rules during kickstart installations might fail

During a kickstart installation, the OpenSCAP utility sometimes incorrectly shows that a service enable or disable state remediation is not needed. Consequently, OpenSCAP might set the services on the installed system to a non-compliant state. As a workaround, you can scan and remediate the system after the kickstart installation. This will fix the service-related issues.
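
For example, a post-installation scan with remediation might look as follows (the profile ID is illustrative; the data stream path is the one shipped in the scap-security-guide package):

# oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_cis --remediate /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml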

(BZ#1834716)

Remediation of SCAP Audit rules fails incorrectly

Bash remediation of some SCAP rules related to Audit configuration does not add the Audit key when remediating. This applies to the following rules:

  • audit_rules_login_events
  • audit_rules_login_events_faillock
  • audit_rules_login_events_lastlog
  • audit_rules_login_events_tallylog
  • audit_rules_usergroup_modification
  • audit_rules_usergroup_modification_group
  • audit_rules_usergroup_modification_gshadow
  • audit_rules_usergroup_modification_opasswd
  • audit_rules_usergroup_modification_passwd
  • audit_rules_usergroup_modification_shadow
  • audit_rules_time_watch_localtime
  • audit_rules_mac_modification
  • audit_rules_networkconfig_modification
  • audit_rules_sysadmin_actions
  • audit_rules_session_events
  • audit_rules_sudoers
  • audit_rules_sudoers_d

As a consequence, if the relevant Audit rule already exists but does not fully conform to the OVAL check, the remediation fixes the functional part of the Audit rule, that is, the path and access bits, but does not add the Audit key. Therefore, the resulting Audit rule works correctly, but the SCAP rule incorrectly reports FAIL. To work around this problem, add the correct keys to the Audit rules manually.
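
For example, a manually completed watch rule might look as follows, where the trailing -k field supplies the Audit key that the remediation omits (the path and key name are illustrative):

-w /etc/group -p wa -k audit_rules_usergroup_modification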

(BZ#2120978)

SSH timeout rules in STIG profiles configure incorrect options

An update of OpenSSH affected the rules in the following Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) profiles:

  • DISA STIG for RHEL 9 (xccdf_org.ssgproject.content_profile_stig)
  • DISA STIG with GUI for RHEL 9 (xccdf_org.ssgproject.content_profile_stig_gui)

In each of these profiles, the following two rules are affected:

Title: Set SSH Client Alive Count Max to zero
CCE Identifier: CCE-90271-8
Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_keepalive_0

Title: Set SSH Idle Timeout Interval
CCE Identifier: CCE-90811-1
Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_idle_timeout

When applied to SSH servers, each of these rules configures an option (ClientAliveCountMax and ClientAliveInterval) that no longer behaves as previously. As a consequence, OpenSSH no longer disconnects idle SSH users when it reaches the timeout configured by these rules. As a workaround, these rules have been temporarily removed from the DISA STIG for RHEL 9 and DISA STIG with GUI for RHEL 9 profiles until a solution is developed.

(BZ#2038978)

Keylime might fail attestation of systems that access multiple IMA-measured files

If a system that runs the Keylime agent accesses multiple files measured by the Integrity Measurement Architecture (IMA) in quick succession, the Keylime verifier might incorrectly process the IMA log additions. As a consequence, the running hash does not match the correct Platform Configuration Register (PCR) state, and the system fails attestation. There is currently no workaround.

(BZ#2138167)

Keylime measured boot policy generation script might cause a segmentation fault and core dump

The create_mb_refstate script, which generates policies for measured boot attestation in Keylime, might incorrectly calculate the data length from the DevicePath field instead of using the value of the LengthOfDevicePath field when handling the output of the tpm2_eventlog tool, depending on the input provided. As a consequence, the script tries to access invalid memory using the incorrectly calculated length, which results in a segmentation fault and core dump. The main functionality of Keylime is not affected by this problem, but you might be unable to generate a measured boot policy.

To work around this problem, do not use a measured boot policy or write the policy file manually from the data obtained using the tpm2_eventlog tool from the tpm2-tools package.

(BZ#2140670)

Some TPM certificates cause Keylime registrar to crash

The require_ek_cert configuration option in tenant.conf, which should be enabled in production deployments, determines whether the Keylime tenant requires an endorsement key (EK) certificate from the Trusted Platform Module (TPM). When performing the initial identity quote with require_ek_cert enabled, Keylime attempts to verify whether the TPM device on the agent is genuine by comparing the EK certificate against the trusted certificates present in the Keylime TPM certificate store. However, some certificates in the store are malformed X.509 certificates and cause the Keylime registrar to crash. There is currently no simple workaround to this problem, except for setting require_ek_cert to false and defining a custom script in the ek_check_script option that performs EK validation, as sketched below.
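
A sketch of the relevant options, assuming they are set in the [tenant] section of the configuration file (the script path is illustrative):

[tenant]
require_ek_cert = False
ek_check_script = /usr/local/bin/check_ek.sh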

(BZ#2142009)

11.7. Networking

The nm-cloud-setup service removes manually-configured secondary IP addresses from interfaces

Based on the information received from the cloud environment, the nm-cloud-setup service configures network interfaces. If you want to configure interfaces manually, disable nm-cloud-setup. However, in certain cases, other services on the host can configure interfaces as well. For example, these services could add secondary IP addresses. To prevent nm-cloud-setup from removing secondary IP addresses:

  1. Stop and disable the nm-cloud-setup service and timer:

    # systemctl disable --now nm-cloud-setup.service nm-cloud-setup.timer
  2. Display the available connection profiles:

    # nmcli connection show
  3. Reactivate the affected connection profiles:

    # nmcli connection up "<profile_name>"

As a result, the service no longer removes manually-configured secondary IP addresses from interfaces.

(BZ#2151040)

Failure to update the session key causes the connection to break

Kernel Transport Layer Security (kTLS) protocol does not support updating the session key, which is used by the symmetric cipher. Consequently, the user cannot update the key, which causes a connection break. To work around this problem, disable kTLS. As a result, with the workaround, it is possible to successfully update the session key.

(BZ#2013650)

The initscripts package is not installed by default

By default, the initscripts package is not installed. As a consequence, the ifup and ifdown utilities are not available. As an alternative, use the nmcli connection up and nmcli connection down commands to enable and disable connections. If the suggested alternative does not work for you, report the problem and install the NetworkManager-initscripts-updown package, which provides a NetworkManager solution for the ifup and ifdown utilities.

(BZ#2082303)

11.8. Kernel

The mlx5 driver fails while using the Mellanox ConnectX-5 adapter

In Ethernet switch device driver model (switchdev) mode, the mlx5 driver fails when the device managed flow steering (DMFS) parameter is configured on supported hardware with a ConnectX-5 adapter. As a consequence, you can see the following error message:

BUG: Bad page cache in process umount pfn:142b4b

To work around this problem, use the software managed flow steering (SMFS) parameter instead of DMFS.
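
For example, you can switch the mode at runtime with the devlink utility (a sketch; the PCI device address is illustrative):

# devlink dev param set pci/0000:03:00.0 name flow_steering_mode value smfs cmode runtime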

(BZ#2180665)

FADump enabled with Secure Boot might lead to GRUB Out of Memory (OOM)

In the Secure Boot environment, GRUB and PowerVM together allocate a 512 MB memory region, known as the Real Mode Area (RMA), for boot memory. The region is divided among the boot components and, if any component exceeds its allocation, out-of-memory failures occur.

Generally, the default installed initramfs file system and the vmlinux symbol table are within the limits to avoid such failures. However, if Firmware Assisted Dump (FADump) is enabled in the system, the default initramfs size can increase and exceed 95 MB. As a consequence, every system reboot leads to a GRUB OOM state.

To avoid this issue, do not use Secure Boot and FADump together. For more information and methods on how to work around this issue, see https://www.ibm.com/support/pages/node/6846531.

(BZ#2149172)

weak-modules from kmod fails to work with module inter-dependencies

The weak-modules script provided by the kmod package determines which modules are kABI-compatible with installed kernels. However, while checking modules' kernel compatibility, weak-modules processes module symbol dependencies from the higher to the lower release of the kernel for which they were built. As a consequence, modules with inter-dependencies built against different kernel releases might be interpreted as non-compatible, and therefore the weak-modules script fails to work in this scenario.

To work around the problem, build or put the extra modules against the latest stock kernel before you install the new kernel.

(BZ#2103605)

The kdump service fails to build the initrd file on IBM Z systems

On the 64-bit IBM Z systems, the kdump service fails to load the initial RAM disk (initrd) when znet-related configuration information, such as s390-subchannels, resides in an inactive NetworkManager connection profile. Consequently, the kdump mechanism fails with the following error:

dracut: Failed to set up znet
kdump: mkdumprd: failed to make kdump initrd

As a workaround, use one of the following solutions:

  • Configure a network bond or bridge by re-using the connection profile that has the znet configuration information:

    $ nmcli connection modify enc600 master bond0 slave-type bond
  • Copy the znet configuration information from the inactive connection profile to the active connection profile:

    1. Run the nmcli command to query the NetworkManager connection profiles:

      # nmcli connection show
      
      NAME                       UUID               TYPE   Device
      
      bridge-br0           ed391a43-bdea-4170-b8a2 bridge   br0
      bridge-slave-enc600  caf7f770-1e55-4126-a2f4 ethernet enc600
      enc600               bc293b8d-ef1e-45f6-bad1 ethernet --
    2. Update the active profile with configuration information from the inactive connection:

      #!/bin/bash
      inactive_connection=enc600
      active_connection=bridge-slave-enc600
      # Copy each s390-specific setting from the inactive to the active profile
      for name in nettype subchannels options; do
        field=802-3-ethernet.s390-$name
        val=$(nmcli --get-values "$field" connection show "$inactive_connection")
        nmcli connection modify "$active_connection" "$field" "$val"
      done
    3. Restart the kdump service for changes to take effect:

      # kdumpctl restart

(BZ#2064708)

The kdump mechanism fails to capture the vmcore file on LUKS-encrypted targets

When running kdump on systems with Linux Unified Key Setup (LUKS) encrypted partitions, systems require a certain amount of available memory. When the available memory is less than the required amount of memory, the systemd-cryptsetup service fails to mount the partition. Consequently, the second kernel fails to capture the crash dump file (vmcore) on LUKS-encrypted targets.

With the kdumpctl estimate command, you can query the Recommended crashkernel value, which is the recommended memory size required for kdump.

To work around this problem, use the following steps to configure the required memory for kdump on LUKS-encrypted targets:

  1. Print the estimated crashkernel value:

    # kdumpctl estimate
  2. Configure the amount of required memory by increasing the crashkernel value:

    # grubby --args=crashkernel=652M --update-kernel=ALL
  3. Reboot the system for changes to take effect.

    # reboot

As a result, kdump works correctly on systems with LUKS-encrypted partitions.

(BZ#2017401)

Allocating crash kernel memory fails at boot time

On certain Ampere Altra systems, allocating the crash kernel memory for kdump usage fails during boot when the available memory is below 1 GB. Consequently, the kdumpctl command fails to start the kdump service.

To work around this problem, do one of the following:

  • Decrease the value of the crashkernel parameter by a minimum of 240 MB to fit the size requirement, for example crashkernel=240M.
  • Use the crashkernel=x,high option to reserve crash kernel memory above 4 GB for kdump.

As a result, the crash kernel memory allocation for kdump does not fail on Ampere Altra systems.

(BZ#2065013)

The Delay Accounting functionality does not display the SWAPIN and IO% statistics columns by default

The Delay Accounting functionality, unlike in early versions, is disabled by default. Consequently, the iotop application does not show the SWAPIN and IO% statistics columns and displays the following warning:

CONFIG_TASK_DELAY_ACCT not enabled in kernel, cannot determine SWAPIN and IO%

The Delay Accounting functionality, using the taskstats interface, provides the delay statistics for all tasks or threads that belong to a thread group. Delays in task execution occur when they wait for a kernel resource to become available, for example, a task waiting for a free CPU to run on. The statistics help in setting a task’s CPU priority, I/O priority, and rss limit values appropriately.

As a workaround, you can enable the delayacct option either at runtime or at boot.

  • To enable delayacct at runtime, enter:

    # echo 1 > /proc/sys/kernel/task_delayacct

    Note that this command enables the feature system wide, but only for the tasks that you start after running this command.

  • To enable delayacct permanently at boot, use one of the following procedures:

    • Edit the /etc/sysctl.conf file to override the default parameters:

      1. Add the following entry to the /etc/sysctl.conf file:

        kernel.task_delayacct = 1

        For more information, see How to set sysctl variables on Red Hat Enterprise Linux.

      2. Reboot the system for changes to take effect.
    • Edit the GRUB 2 configuration file to override the default parameters:

      1. Append the delayacct option to the GRUB_CMDLINE_LINUX entry in the /etc/default/grub file.
      2. Run the grub2-mkconfig utility to regenerate the boot configuration:

        # grub2-mkconfig -o /boot/grub2/grub.cfg

        For more information, see How do I permanently modify the kernel command line?.

      3. Reboot the system for changes to take effect.

As a result, the iotop application displays the SWAPIN and IO% statistics columns.

(BZ#2132480)

kTLS does not support offloading of TLS 1.3 to NICs

Kernel Transport Layer Security (kTLS) does not support offloading of TLS 1.3 to NICs. Consequently, software encryption is used with TLS 1.3 even when the NICs support TLS offload. To work around this problem, disable TLS 1.3 if offload is required. As a result, you can offload only TLS 1.2. When TLS 1.3 is in use, there is lower performance, since TLS 1.3 cannot be offloaded.

(BZ#2000616)

The iwl7260-firmware breaks Wi-Fi on Intel Wi-Fi 6 AX200, AX210, and Lenovo ThinkPad P1 Gen 4

After updating the iwl7260-firmware or iwl7260-wifi driver to the version provided by RHEL 8.7 or RHEL 9.1 (and later), the hardware gets into an incorrect internal state and reports its state incorrectly. Consequently, Intel Wi-Fi 6 cards might not work and display the following error message:

kernel: iwlwifi 0000:09:00.0: Failed to start RT ucode: -110
kernel: iwlwifi 0000:09:00.0: WRT: Collecting data: ini trigger 13 fired (delay=0ms)
kernel: iwlwifi 0000:09:00.0: Failed to run INIT ucode: -110

An unconfirmed workaround is to power the system off and on again. Do not reboot.

(BZ#2129288)

dkms provides an incorrect warning on program failure with correctly compiled drivers on 64-bit ARM CPUs

The Dynamic Kernel Module Support (dkms) utility does not recognize that the kernel headers for 64-bit ARM CPUs work for both the kernels with 4 kilobytes and 64 kilobytes page sizes. As a result, when the kernel update is performed and the kernel-64k-devel package is not installed, dkms provides an incorrect warning about why the program failed on correctly compiled drivers. To work around this problem, install the kernel-headers package, which contains header files for both types of ARM CPU architectures and is not specific to dkms and its requirements.

(JIRA:RHEL-25967)

11.9. Boot loader

The behavior of grubby diverges from its documentation

When you add a new kernel using the grubby tool and do not specify any arguments, grubby passes the default arguments to the new entry. This behavior occurs even without passing the --copy-default argument. Using the --args and --copy-default options ensures that those arguments are appended to the default arguments, as stated in the grubby documentation.

However, when you add additional arguments, such as $tuned_params, the grubby tool does not pass these arguments unless the --copy-default option is invoked.

In this situation, two workarounds are available:

  • Either set only the root= argument within --args and leave the rest empty:

    # grubby --add-kernel /boot/my_kernel --initrd /boot/my_initrd --args "root=/dev/mapper/rhel-root" --title "entry_with_root_set"
  • Or set the root= argument and the specified arguments, but not the default ones:

    # grubby --add-kernel /boot/my_kernel --initrd /boot/my_initrd --args "root=/dev/mapper/rhel-root some_args and_some_more" --title "entry_with_root_set_and_other_args_too"

(BZ#2127453)

11.10. File systems and storage

RHEL instances on Azure fail to boot if provisioned by cloud-init and configured with an NFSv3 mount entry

Currently, booting a RHEL virtual machine (VM) on the Microsoft Azure cloud platform fails if the VM was provisioned by the cloud-init tool and the guest operating system of the VM has an NFSv3 mount entry in the /etc/fstab file.

(BZ#2081114)

Anaconda fails to log in to an iSCSI server using the no authentication method after an unsuccessful CHAP authentication attempt

When you add iSCSI disks using CHAP authentication and the login attempt fails due to incorrect credentials, a relogin attempt to the disks with the no authentication method fails. To work around this problem, close the current session and log in using the no authentication method.

(BZ#1983602)

Device Mapper Multipath is not supported with NVMe/TCP

Using Device Mapper Multipath with the nvme-tcp driver can result in Call Trace warnings and system instability. To work around this problem, NVMe/TCP users must enable native NVMe multipathing and not use the device-mapper-multipath tools with NVMe.

By default, native NVMe multipathing is enabled in RHEL 9. For more information, see Enabling multipathing on NVMe devices.
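
To verify that native NVMe multipathing is enabled, you can query the nvme_core module parameter; the expected output is Y:

# cat /sys/module/nvme_core/parameters/multipath
Y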

(BZ#2033080)

The blk-availability systemd service deactivates complex device stacks

In systemd, the default block deactivation code does not always handle complex stacks of virtual block devices correctly. In some configurations, virtual devices might not be removed during the shutdown, which causes error messages to be logged. To work around this problem, deactivate complex block device stacks by executing the following command:

# systemctl enable --now blk-availability.service

As a result, complex virtual device stacks are correctly deactivated during shutdown and do not produce error messages.

(BZ#2011699)

supported_speeds sysfs attribute reports incorrect speed values

Due to an incorrect definition in the qla2xxx driver, the supported_speeds sysfs attribute for the HBA previously reported a speed of 20 Gb/s instead of the expected 64 Gb/s. Consequently, if the HBA supported a 64 Gb/s link speed, the sysfs supported_speeds value was incorrect.

Currently, the supported_speeds sysfs attribute for the HBA returns a speed of 100 Gb/s instead of the intended 64 Gb/s, and 50 Gb/s instead of the intended 128 Gb/s. This affects only the reported speed value; the actual link rates used on the Fibre Channel connection are correct.

(BZ#2069758)

11.11. Dynamic programming languages, web and database servers

The --ssl-fips-mode option in MySQL and MariaDB does not change FIPS mode

The --ssl-fips-mode option in MySQL and MariaDB in RHEL works differently than in upstream.

In RHEL 9, if you use --ssl-fips-mode as an argument for the mysqld or mariadbd daemon, or if you use ssl-fips-mode in the MySQL or MariaDB server configuration files, --ssl-fips-mode does not change FIPS mode for these database servers.

Instead:

  • If you set --ssl-fips-mode to ON, the mysqld or mariadbd server daemon does not start.
  • If you set --ssl-fips-mode to OFF on a FIPS-enabled system, the mysqld or mariadbd server daemons still run in FIPS mode.

This is expected because FIPS mode should be enabled or disabled for the whole RHEL system, not for specific components.

Therefore, do not use the --ssl-fips-mode option in MySQL or MariaDB in RHEL. Instead, ensure FIPS mode is enabled on the whole RHEL system:

  • Preferably, install RHEL with FIPS mode enabled. Enabling FIPS mode during the installation ensures that the system generates all keys with FIPS-approved algorithms and continuous monitoring tests in place. For information about installing RHEL in FIPS mode, see Installing the system in FIPS mode.
  • Alternatively, you can switch FIPS mode for the entire RHEL system by following the procedure in Switching the system to FIPS mode.
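
For example, you can verify the state of FIPS mode for the whole system as follows (output shown for an enabled system):

# fips-mode-setup --check
FIPS mode is enabled.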

(BZ#1991500)

11.12. Compilers and development tools

Certain symbol-based probes do not work in SystemTap on the 64-bit ARM architecture

The kernel configuration disables certain functionality that is needed for SystemTap. Consequently, some symbol-based probes do not work on the 64-bit ARM architecture, and affected SystemTap scripts might not run or might not collect hits on the desired probe points.

Note that this bug has been fixed for the remaining architectures with the release of the RHBA-2022:5259 advisory.

(BZ#2083727)

11.13. Identity Management

MIT Kerberos does not support ECC certificates for PKINIT

MIT Kerberos does not implement the RFC5349 request for comments document, which describes the design of elliptic-curve cryptography (ECC) support in Public Key Cryptography for initial authentication (PKINIT). Consequently, the MIT krb5-pkinit package, used by RHEL, does not support ECC certificates. For more information, see Elliptic Curve Cryptography (ECC) Support for Public Key Cryptography for Initial Authentication in Kerberos (PKINIT).

(BZ#2106043)

The DEFAULT:SHA1 sub-policy has to be set on RHEL 9 clients for PKINIT to work against AD KDCs

The SHA-1 digest algorithm has been deprecated in RHEL 9, and CMS messages for Public Key Cryptography for initial authentication (PKINIT) are now signed with the stronger SHA-256 algorithm.

However, the Active Directory (AD) Kerberos Distribution Center (KDC) still uses the SHA-1 digest algorithm to sign CMS messages. As a result, RHEL 9 Kerberos clients fail to authenticate users by using PKINIT against an AD KDC.

To work around the problem, enable support for the SHA-1 algorithm on your RHEL 9 systems with the following command:

 # update-crypto-policies --set DEFAULT:SHA1

(BZ#2060798)

The PKINIT authentication of a user fails if a RHEL 9 Kerberos agent communicates with a non-RHEL-9 and non-AD Kerberos agent

If a RHEL 9 Kerberos agent, either a client or Kerberos Distribution Center (KDC), interacts with a non-RHEL-9 Kerberos agent that is not an Active Directory (AD) agent, the PKINIT authentication of the user fails. To work around the problem, perform one of the following actions:

  • Set the RHEL 9 agent’s crypto-policy to DEFAULT:SHA1 to allow the verification of SHA-1 signatures:

    # update-crypto-policies --set DEFAULT:SHA1
  • Update the non-RHEL-9 and non-AD agent to ensure it does not sign CMS data using the SHA-1 algorithm. For this, update your Kerberos client or KDC packages to the versions that use SHA-256 instead of SHA-1:

    • CentOS 9 Stream: krb5-1.19.1-15
    • RHEL 8.7: krb5-1.18.2-17
    • RHEL 7.9: krb5-1.15.1-53
    • Fedora Rawhide/36: krb5-1.19.2-7
    • Fedora 35/34: krb5-1.19.2-3

As a result, the PKINIT authentication of the user works correctly.

Note that for other operating systems, it is the krb5-1.20 release that ensures that the agent signs CMS data with SHA-256 instead of SHA-1.

See also The DEFAULT:SHA1 sub-policy has to be set on RHEL 9 clients for PKINIT to work against AD KDCs.

(BZ#2077450)

Heimdal client fails to authenticate a user using PKINIT against RHEL 9 KDC

By default, a Heimdal Kerberos client initiates the PKINIT authentication of an IdM user by using Modular Exponential (MODP) Diffie-Hellman Group 2 for Internet Key Exchange (IKE). However, the MIT Kerberos Distribution Center (KDC) on RHEL 9 only supports MODP Group 14 and 16.

Consequently, the pre-authentication request fails with the krb5_get_init_creds: PREAUTH_FAILED error on the Heimdal client and the Key parameters not accepted error on the RHEL MIT KDC.

To work around this problem, ensure that the Heimdal client uses MODP Group 14. Set the pkinit_dh_min_bits parameter in the libdefaults section of the client configuration file to 1759:

[libdefaults]
pkinit_dh_min_bits = 1759

As a result, the Heimdal client completes the PKINIT pre-authentication against the RHEL MIT KDC.

(BZ#2106296)

IdM in FIPS mode does not support using the NTLMSSP protocol to establish a two-way cross-forest trust

Establishing a two-way cross-forest trust between Active Directory (AD) and Identity Management (IdM) with FIPS mode enabled fails because the New Technology LAN Manager Security Support Provider (NTLMSSP) authentication is not FIPS-compliant. IdM in FIPS mode does not accept the RC4 NTLM hash that the AD domain controller uses when attempting to authenticate.

(BZ#2124243)

IdM to AD cross-realm TGS requests fail

The Privilege Attribute Certificate (PAC) information in IdM Kerberos tickets is now signed with AES SHA-2 HMAC encryption, which is not supported by Active Directory (AD).

Consequently, IdM to AD cross-realm TGS requests, that is, two-way trust setups, are failing with the following error:

"Generic error (see e-text) while getting credentials for <service principal>"

(BZ#2060421)

IdM Vault encryption and decryption fails in FIPS mode

The OpenSSL RSA-PKCS1v15 padding encryption is blocked if FIPS mode is enabled. Consequently, Identity Management (IdM) Vaults fail to work correctly as IdM is currently using the PKCS1v15 padding for wrapping the session key with the transport certificate.

(BZ#2089907)

Migrated IdM users might be unable to log in due to mismatching domain SIDs

If you have used the ipa migrate-ds script to migrate users from one IdM deployment to another, those users might have problems using IdM services because their previously existing Security Identifiers (SIDs) do not have the domain SID of the current IdM environment. For example, those users can retrieve a Kerberos ticket with the kinit utility, but they cannot log in. To work around this problem, see the following Knowledgebase article: Migrated IdM users unable to log in due to mismatching domain SIDs.

(JIRA:RHELPLAN-109613)

Directory Server terminates unexpectedly when started in referral mode

Due to a bug, global referral mode does not work in Directory Server. If you start the ns-slapd process with the refer option as the dirsrv user, Directory Server ignores the port settings and terminates unexpectedly. Trying to run the process as the root user changes SELinux labels and prevents the service from starting in normal mode in the future. There are no workarounds available.

(BZ#2053204)

Configuring a referral for a suffix fails in Directory Server

If you set a back-end referral in Directory Server, setting the state of the backend using the dsconf <instance_name> backend suffix set --state referral command fails with the following error:

Error: 103 - 9 - 53 - Server is unwilling to perform - [] - need to set nsslapd-referral before moving to referral state

As a consequence, configuring a referral for suffixes fails. To work around the problem:

  1. Set the nsslapd-referral parameter manually:

    # ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com
    
    dn: cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
    changetype: modify
    add: nsslapd-referral
    nsslapd-referral: ldap://remote_server:389/dc=example,dc=com
  2. Set the back-end state:

    # dsconf <instance_name> backend suffix set --state referral

As a result, with the workaround, you can configure a referral for a suffix.

(BZ#2063140)

The dsconf utility has no option to create fix-up tasks for the entryUUID plug-in

The dsconf utility does not provide an option to create fix-up tasks for the entryUUID plug-in. As a result, administrators cannot use dsconf to create a task to automatically add entryUUID attributes to existing entries. As a workaround, create a task manually:

# ldapadd -D "cn=Directory Manager" -W -H ldap://server.example.com -x

dn: cn=entryuuid_fixup_<time_stamp>,cn=entryuuid task,cn=tasks,cn=config
objectClass: top
objectClass: extensibleObject
basedn: <fixup base tree>
cn: entryuuid_fixup_<time_stamp>
filter: <filtered_entry>

After the task has been created, Directory Server fixes entries with missing or invalid entryUUID attributes.

(BZ#2047175)

Potential risk when using the default value for ldap_id_use_start_tls option

When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector, particularly a man-in-the-middle (MITM) attack, which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search.

Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls, defaults to false. Ensure that your setup operates in a trusted environment and decide whether it is safe to use unencrypted communication for id_provider = ldap. Note that id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI.

If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file, as sketched below. The default behavior is planned to be changed in a future release of RHEL.
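
A minimal sketch of the relevant /etc/sssd/sssd.conf settings (the domain name and LDAP URI are illustrative):

[domain/example.com]
id_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_id_use_start_tls = true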

(JIRA:RHELPLAN-155168)

11.14. Desktop

Firefox add-ons are disabled after upgrading to RHEL 9

If you upgrade from RHEL 8 to RHEL 9, all add-ons that you previously enabled in Firefox are disabled.

To work around the problem, manually reinstall or update the add-ons. As a result, the add-ons are enabled as expected.

(BZ#2013247)

User Creation screen is unresponsive

When installing RHEL using a graphical user interface, the User Creation screen is unresponsive. As a consequence, creating users during installation is more difficult.

To work around this problem, use one of the following solutions to create users:

  • Run the installation in VNC mode and resize the VNC window.
  • Create users after completing the installation process.

(BZ#2122636)

VNC is not running after upgrading to RHEL 9

After upgrading from RHEL 8 to RHEL 9, the VNC server fails to start, even if it was previously enabled.

To work around the problem, manually enable the vncserver service after the system upgrade:

# systemctl enable --now vncserver@:port-number

As a result, VNC is now enabled and starts after every system boot as expected.

(BZ#2060308)

11.15. Graphics infrastructures

Matrox G200e shows no output on a VGA display

Your display might show no graphical output if you use the following system configuration:

  • The Matrox G200e GPU
  • A display connected over the VGA controller

As a consequence, you cannot use or install RHEL on this configuration.

To work around the problem, use the following procedure:

  1. Boot the system to the boot loader menu.
  2. Add the module_blacklist=mgag200 option to the kernel command line.

As a result, RHEL boots and shows graphical output as expected, but the maximum resolution is limited to 1024x768 at the 16-bit color depth.
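
To make the option persistent across reboots, you can also append it to all boot entries with the grubby tool, as used elsewhere in this chapter:

# grubby --update-kernel=ALL --args="module_blacklist=mgag200"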

(BZ#1960467)

X.org configuration utilities do not work under Wayland

X.org utilities for manipulating the screen do not work in the Wayland session. Notably, the xrandr utility does not work under Wayland due to its different approach to handling resolutions, rotations, and layout.

(JIRA:RHELPLAN-121049)

NVIDIA drivers might revert to X.org

Under certain conditions, the proprietary NVIDIA drivers disable the Wayland display protocol and revert to the X.org display server:

  • If the version of the NVIDIA driver is lower than 470.
  • If the system is a laptop that uses hybrid graphics.
  • If you have not enabled the required NVIDIA driver options.

Additionally, Wayland is enabled but the desktop session uses X.org by default if the version of the NVIDIA driver is lower than 510.

(JIRA:RHELPLAN-119001)

Night Light is not available on Wayland with NVIDIA

When the proprietary NVIDIA drivers are enabled on your system, the Night Light feature of GNOME is not available in Wayland sessions. The NVIDIA drivers do not currently support Night Light.

(JIRA:RHELPLAN-119852)

11.16. The web console

VNC console works incorrectly at certain resolutions

When using the Virtual Network Computing (VNC) console under certain display resolutions, you might experience a mouse offset issue or see only a part of the interface. Consequently, using the VNC console might not be possible. To work around this issue, you can try expanding the size of the VNC console, or use the Desktop Viewer in the Console tab to launch the remote viewer instead.

(BZ#2030836)

11.17. Virtualization

Installing a virtual machine over https or ssh sometimes fails

Currently, the virt-install utility fails when attempting to install a guest operating system (OS) from an ISO source over an https or ssh connection - for example, using virt-install --cdrom https://example/path/to/image.iso. Instead of creating a virtual machine (VM), the described operation terminates unexpectedly with an internal error: process exited while connecting to monitor message.

Similarly, using the RHEL 9 web console to install a guest OS fails and displays an Unknown driver 'https' error if you use an https or ssh URL, or the Download OS function.

To work around this problem, install qemu-kvm-block-curl and qemu-kvm-block-ssh on the host to enable https and ssh protocol support, respectively. Alternatively, use a different connection protocol or a different installation source.
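
For example, to install both packages on the host:

# dnf install qemu-kvm-block-curl qemu-kvm-block-ssh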

(BZ#2014229)

Using NVIDIA drivers in virtual machines disables Wayland

Currently, NVIDIA drivers are not compatible with the Wayland graphical session. As a consequence, RHEL guest operating systems that use NVIDIA drivers automatically disable Wayland and load an Xorg session instead. This primarily occurs in the following scenarios:

  • When you pass through an NVIDIA GPU device to a RHEL virtual machine (VM)
  • When you assign an NVIDIA vGPU mediated device to a RHEL VM

(JIRA:RHELPLAN-117234)

The Milan VM CPU type is sometimes not available on AMD Milan systems

On certain AMD Milan systems, the Enhanced REP MOVSB (erms) and Fast Short REP MOVSB (fsrm) feature flags are disabled in the BIOS by default. Consequently, the Milan CPU type might not be available on these systems. In addition, VM live migration between Milan hosts with different feature flag settings might fail. To work around these problems, manually turn on erms and fsrm in the BIOS of your host.
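
To check whether the host CPU currently exposes these flags, you can, for example, search /proc/cpuinfo; if the command prints nothing, the flags are disabled:

$ grep -o -w -e erms -e fsrm /proc/cpuinfo | sort -u
erms
fsrm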

(BZ#2077767)

Disabling AVX causes VMs to become unbootable

On a host machine that uses a CPU with Advanced Vector Extensions (AVX) support, attempting to boot a VM with AVX explicitly disabled currently fails, and instead triggers a kernel panic in the VM.

(BZ#2005173)

VNC is unable to connect to UEFI VMs after migration

If you enable or disable a message queue while migrating a virtual machine (VM), the Virtual Network Computing (VNC) client will fail to connect to the VM after the migration is complete.

This problem affects only UEFI based VMs that use the Open Virtual Machine Firmware (OVMF).

(JIRA:RHELPLAN-135600)

Failover virtio NICs are not assigned an IP address on Windows virtual machines

Currently, when starting a Windows virtual machine (VM) with only a failover virtio NIC, the VM fails to assign an IP address to the NIC. Consequently, the NIC is unable to set up a network connection. Currently, there is no workaround.

(BZ#1969724)

Windows VM fails to get IP address after network interface reset

Sometimes, Windows virtual machines fail to get an IP address after an automatic network interface reset. As a consequence, the VM fails to connect to the network. To work around this problem, disable and re-enable the network adapter driver in the Windows Device Manager.

(BZ#2084003)

Broadcom network adapters work incorrectly on Windows VMs after a live migration

Currently, network adapters from the Broadcom family of devices, such as Broadcom, Qlogic, or Marvell, cannot be hot-unplugged during live migration of Windows virtual machines (VMs). As a consequence, the adapters work incorrectly after the migration is complete.

This problem affects only those adapters that are attached to Windows VMs using Single-root I/O virtualization (SR-IOV).

(BZ#2090712, BZ#2091528, BZ#2111319)

A hostdev interface with failover settings cannot be hot-plugged after being hot-unplugged

After removing a hostdev network interface with failover configuration from a running virtual machine (VM), the interface currently cannot be re-attached to the same running VM.

(BZ#2052424)

Live post-copy migration of VMs with failover VFs fails

Currently, attempting to post-copy migrate a running virtual machine (VM) fails if the VM uses a device with the virtual function (VF) failover capability enabled. To work around the problem, use the standard migration type, rather than post-copy migration.
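
For example, a standard live migration with virsh, omitting post-copy options, might look like the following; the VM name and destination URI are placeholders:

# virsh migrate --live --verbose testguest qemu+ssh://destination.example.com/system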

(BZ#1817965)

Host network cannot ping VMs with VFs during live migration

When live migrating a virtual machine (VM) with a configured virtual function (VF), such as a VM that uses virtual SR-IOV software, the network of the VM is not visible to other devices, and the VM cannot be reached by commands such as ping. After the migration is finished, however, the problem no longer occurs.

(BZ#1789206)

Using a large number of queues might cause Windows virtual machines to fail

Windows virtual machines (VMs) might fail when the virtual Trusted Platform Module (vTPM) device is enabled and the multi-queue virtio-net feature is configured to use more than 250 queues.

This problem is caused by a limitation in the vTPM device. The vTPM device has a hardcoded limit on the maximum number of opened file descriptors. Since multiple file descriptors are opened for every new queue, the internal vTPM limit can be exceeded, causing the VM to fail.

To work around this problem, choose one of the following two options:

  • Keep the vTPM device enabled, but use fewer than 250 queues (see the example after this list).
  • Disable the vTPM device to use more than 250 queues.
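
For the first option, the number of queues is defined in the interface section of the VM’s XML configuration; the following is a minimal sketch with illustrative values:

<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost' queues='8'/>
</interface>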

(BZ#2020146)

PCIe ATS devices do not work on Windows VMs

When you configure a PCIe Address Translation Services (ATS) device in the XML configuration of a virtual machine (VM) with a Windows guest operating system, the guest does not enable the ATS device after booting the VM. This is because Windows currently does not support ATS on virtio devices.

For more information, see the Red Hat KnowledgeBase.

(BZ#2073872)

Kdump fails on virtual machines with AMD SEV-SNP

Currently, kdump fails on RHEL 9 virtual machines (VMs) that use the AMD Secure Encrypted Virtualization (SEV) with the Secure Nested Paging (SNP) feature.

(JIRA:RHEL-10019)

11.18. RHEL in cloud environments

Cloning or restoring RHEL 9 virtual machines that use LVM on Nutanix AHV causes non-root partitions to disappear

When running a RHEL 9 guest operating system on a virtual machine (VM) hosted on the Nutanix AHV hypervisor, restoring the VM from a snapshot or cloning the VM currently causes non-root partitions in the VM to disappear if the guest is using Logical Volume Management (LVM). As a consequence, the following problems occur:

  • After restoring the VM from a snapshot, the VM cannot boot, and instead enters emergency mode.
  • A VM created by cloning cannot boot, and instead enters emergency mode.

To work around these problems, do the following in emergency mode of the VM:

  1. Remove the LVM system devices file: rm /etc/lvm/devices/system.devices
  2. Recreate LVM device settings: vgimportdevices -a
  3. Reboot the VM.

This makes it possible for the cloned or restored VM to boot up correctly.

Alternatively, to prevent the issue from occurring, do the following before cloning a VM or creating a VM snapshot:

  1. Uncomment the use_devicesfile = 0 line in the /etc/lvm/lvm.conf file (see the snippet after this list).
  2. Reboot the VM.
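
After uncommenting the line, the relevant part of the devices section in /etc/lvm/lvm.conf should look similar to the following:

devices {
        use_devicesfile = 0
}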

(BZ#2059545)

Customizing RHEL 9 guests on ESXi sometimes causes networking problems

Currently, customizing a RHEL 9 guest operating system in the VMware ESXi hypervisor does not work correctly with NetworkManager key files. As a consequence, if the guest is using such a key file, it will have incorrect network settings, such as the IP address or the gateway.

For details and workaround instructions, see the VMware Knowledge Base.

(BZ#2037657)

Setting static IP in a RHEL virtual machine on a VMware host does not work

Currently, when using RHEL as a guest operating system of a virtual machine (VM) on a VMware host, the DatasourceOVF function does not work correctly. As a consequence, if you use the cloud-init utility to set the VM’s network to static IP and then reboot the VM, the VM’s network will be changed to DHCP.

(BZ#1750862)

11.19. Supportability

Timeout when running sos report on IBM Power Systems, Little Endian

When running the sos report command on IBM Power Systems, Little Endian with hundreds or thousands of CPUs, the processor plugin reaches its default timeout of 300 seconds when collecting the large content of the /sys/devices/system/cpu directory. As a workaround, increase the plugin’s timeout accordingly:

  • For a one-time setting, run:
# sos report -k processor.timeout=1800
  • For a permanent change, edit the [plugin_options] section of the /etc/sos/sos.conf file:
[plugin_options]
# Specify any plugin options and their values here. These options take the form
# plugin_name.option_name = value
#rpm.rpmva = off
processor.timeout = 1800

The example sets the value to 1800 seconds, but the appropriate timeout value depends heavily on the specific system. To set the plugin’s timeout appropriately, you can first estimate the time needed to collect data for the one plugin with no timeout by running the following command:

# time sos report -o processor -k processor.timeout=0 --batch --build

(BZ#1869561)

11.20. Containers

Running systemd within an older container image does not work

Running systemd within an older container image, for example, centos:7, does not work:

$ podman run --rm -ti centos:7 /usr/lib/systemd/systemd
 Storing signatures
 Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
 [!!!!!!] Failed to mount API filesystems, freezing.

To work around this problem, use the following commands:

# mkdir /sys/fs/cgroup/systemd
# mount none -t cgroup -o none,name=systemd /sys/fs/cgroup/systemd
# podman run --runtime /usr/bin/crun --annotation=run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup --rm -ti centos:7 /usr/lib/systemd/systemd

(JIRA:RHELPLAN-96940)
