Chapter 8. Known issues
This part describes known issues in Red Hat Enterprise Linux 9.0.
8.1. Installer and image creation
The reboot --kexec and inst.kexec commands do not provide a predictable system state
Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameter does not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results.
Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux.
(BZ#1697896)
Local Media installation source is not detected when booting the installation from a USB that is created using a third party tool
When booting the RHEL installation from a USB created using a third-party tool, the installer fails to detect the Local Media installation source (only Red Hat CDN is detected).
This issue occurs because the default boot option inst.stage2= searches for the iso9660 image format. However, a third-party tool might create an ISO image with a different format.
As a workaround, use one of the following solutions:
- When booting the installation, press the Tab key to edit the kernel command line, and change the boot option inst.stage2= to inst.repo=.
- To create a bootable USB device on Windows, use Fedora Media Writer.
- When using a third-party tool such as Rufus to create a bootable USB device, first regenerate the RHEL ISO image on a Linux system, and then use the third-party tool to create a bootable USB device.
For more information on the steps involved in performing any of the specified workarounds, see Installation media is not auto detected during the installation of RHEL 8.3.
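The kernel command line edit in the first workaround amounts to a simple text substitution. A minimal sketch of the change, using a hypothetical boot line (the LABEL value is an example, not taken from this document):

```shell
# Hypothetical boot line as shown after pressing Tab at the boot menu
cmdline='vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=RHEL-9-BaseOS quiet'

# Change inst.stage2= to inst.repo= so the installer treats the USB
# device as an installation repository rather than a stage2 image
fixed=$(printf '%s' "$cmdline" | sed 's/inst\.stage2=/inst.repo=/')
echo "$fixed"
```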
(BZ#1877697)
The auth and authconfig Kickstart commands require the AppStream repository
The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig is used. However, by design, the authselect-compat package is only available in the AppStream repository.
To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer, or use the authselect Kickstart command during installation.
(BZ#1640697)
Unexpected SELinux policies on systems where Anaconda is running as an application
When Anaconda is running as an application on an already installed system (for example, to perform another installation to an image file using the --image anaconda option), the system is not prohibited from modifying the SELinux types and attributes during installation. As a consequence, certain elements of SELinux policy might change on the system where Anaconda is running. To work around this problem, do not run Anaconda on the production system; instead, run it in a temporary virtual machine so that the SELinux policy on the production system is not modified. Running anaconda as part of the system installation process, such as installing from boot.iso or dvd.iso, is not affected by this issue.
The USB CD-ROM drive is not available as an installation source in Anaconda
Installation fails when the USB CD-ROM drive is the installation source and the Kickstart ignoredisk --only-use= command is specified. In this case, Anaconda cannot find and use this source disk.
To work around this problem, use the harddrive --partition=sdX --dir=/ command to install from the USB CD-ROM drive. As a result, the installation does not fail.
Minimal RHEL installation no longer includes the s390utils-base package
In RHEL 8.4 and later, the s390utils-base package is split into an s390utils-core package and an auxiliary s390utils-base package. Consequently, setting the RHEL installation to minimal-environment installs only the necessary s390utils-core package and not the auxiliary s390utils-base package. To work around this problem, manually install the s390utils-base package after completing the RHEL installation, or explicitly install s390utils-base using a Kickstart file.
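If you provision with Kickstart, the explicit installation can be a one-line addition to the package list. A minimal sketch, assuming the minimal environment is selected:

```
%packages
@^minimal-environment
s390utils-base
%end
```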
(BZ#1932480)
Hard drive partitioned installations with iso9660 filesystem fail
You cannot install RHEL on systems where the hard drive is partitioned with the iso9660 filesystem. This is due to the updated installation code that is set to ignore any hard disk containing an iso9660 file system partition. This happens even when RHEL is installed without using a DVD.
To work around this problem, add the following script to the Kickstart file to format the disk before the installation starts.
Note: Before performing the workaround, back up the data available on the disk. The wipefs command removes all the existing data from the disk.
%pre
wipefs -a /dev/sda
%end
As a result, installations work as expected without any errors.
Anaconda fails to verify existence of an administrator user account
While installing RHEL using a graphical user interface, Anaconda fails to verify whether the administrator account has been created. As a consequence, users might install a system without any administrator user account.
To work around this problem, ensure that you configure an administrator user account, or that the root password is set and the root account is unlocked. As a result, users can perform administrative tasks on the installed system.
Anaconda fails to log in to an iSCSI server using the no authentication method after an unsuccessful CHAP authentication attempt
When you add iSCSI disks using CHAP authentication and the login attempt fails due to incorrect credentials, a relogin attempt to the disks with the no authentication method fails. To work around this problem, close the current session and log in using the no authentication method.
(BZ#1983602)
New XFS features prevent booting of PowerNV IBM POWER systems with firmware older than version 5.10
PowerNV IBM POWER systems use a Linux kernel for firmware and use Petitboot as a replacement for GRUB. This results in the firmware kernel mounting /boot and Petitboot reading the GRUB config and booting RHEL.
The RHEL 9 kernel introduces the bigtime=1 and inobtcount=1 features to the XFS filesystem, which firmware kernels older than version 5.10 do not understand.
To work around this problem, you can use another filesystem for /boot, for example, ext4.
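In a Kickstart installation, the ext4 workaround for /boot can be expressed as a partitioning directive. A minimal sketch; the 1024 MiB size is an assumed example, not a value from this document:

```
# Use ext4 instead of the default XFS for /boot
part /boot --fstype=ext4 --size=1024
```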
(BZ#1997832)
Cannot install RHEL when PReP is not 4 or 8 MiB in size
The RHEL installer cannot install the boot loader if the PowerPC Reference Platform (PReP) partition is of a different size than 4 MiB or 8 MiB on a disk that uses 4 KiB sectors. As a consequence, you cannot install RHEL on the disk.
To work around the problem, make sure that the PReP partition is exactly 4 MiB or 8 MiB in size, and that the size is not rounded to another value. As a result, the installer can now install RHEL on the disk.
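On a disk with 4 KiB sectors, the two accepted sizes translate into fixed sector counts, which you can use to check a partition created by your tooling. A small sketch of the arithmetic:

```shell
sector_size=4096                                  # 4 KiB sectors
prep_4mib=$(( 4 * 1024 * 1024 / sector_size ))    # sectors in exactly 4 MiB
prep_8mib=$(( 8 * 1024 * 1024 / sector_size ))    # sectors in exactly 8 MiB
echo "PReP must span exactly $prep_4mib or $prep_8mib sectors"
```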
(BZ#2026579)
New XFS features prevent booting of PowerNV IBM POWER systems with firmware kernel older than version 5.10
PowerNV IBM POWER systems use a Linux kernel for firmware and use Petitboot as a replacement for GRUB. This results in the firmware kernel mounting /boot and Petitboot reading the GRUB config and booting RHEL.
The RHEL 9 kernel introduces the bigtime=1 and inobtcount=1 features to the XFS filesystem, which firmware with a kernel older than version 5.10 does not understand. As a consequence, Anaconda prevents the installation with the following error message:
Your firmware doesn't support XFS file system features on the /boot file system. The system will not be bootable. Please, upgrade the firmware or change the file system type.
As a workaround, use another filesystem for /boot, for example ext4.
(BZ#2008792)
The RHEL installer does not process the inst.proxy boot option correctly
When running Anaconda, the installation program does not process the inst.proxy boot option correctly. As a consequence, you cannot use the specified proxy to fetch the installation image.
To work around this issue:
- Use the latest version of the RHEL distribution.
- Use the proxy boot option instead of inst.proxy.
(JIRA:RHELDOCS-18764)
RHEL installation fails on IBM Z architectures with multi-LUNs
RHEL installation fails on IBM Z architectures when using multiple LUNs during installation. Due to the multipath setup of FCP and the LUN auto-scan behavior, the length of the kernel command line in the configuration file exceeds 896 bytes.
To work around this problem, you can do one of the following:
- Install the latest version of RHEL (RHEL 9.2 or later).
- Install the RHEL system with a single LUN and add additional LUNs post installation.
- Optimize the redundant zfcp entries in the boot configuration on the installed system.
- Create a physical volume (pvcreate) for each of the additional LUNs listed under /dev/mapper/.
- Extend the VG with PVs, for example, vgextend <vg_name> /dev/mapper/mpathX.
- Increase the LV as needed, for example, lvextend -r -l +100%FREE /dev/<vg name>/root.
For more information, see the KCS solution.
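You can check whether a generated boot configuration hits the 896-byte limit before rebooting. A minimal sketch with a hypothetical, shortened command line (real multipath setups list many more rd.zfcp entries; the device and WWPN values are examples):

```shell
# Hypothetical zipl kernel command line; values are illustrative only
cmdline='root=/dev/mapper/rhel-root rd.zfcp=0.0.1900,0x5005076306138031,0x4020400000000000'

len=${#cmdline}
if [ "$len" -gt 896 ]; then
    echo "command line too long: $len bytes"
else
    echo "command line fits: $len bytes"
fi
```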
(JIRA:RHELDOCS-18638)
The RHEL installer does not automatically discover or use iSCSI devices as boot devices on aarch64
The absence of the iscsi_ibft kernel module in RHEL installers running on aarch64 prevents automatic discovery of iSCSI devices defined in firmware. These devices are not automatically visible in the installer, nor selectable as boot devices when added manually by using the GUI. As a workaround, add the inst.nonibftiscsiboot parameter to the kernel command line when booting the installer, and then manually attach the iSCSI devices through the GUI. As a result, the installer can recognize the attached iSCSI devices as bootable, and the installation completes as expected.
For more information, see the KCS solution.
(JIRA:RHEL-56135)
8.2. Subscription management
virt-who cannot connect to ESX servers when in FIPS mode
When using the virt-who utility on a RHEL 9 system in FIPS mode, virt-who cannot connect to ESX servers. As a consequence, virt-who does not report any ESX servers, even if configured for them, and logs the following error message:
ValueError: [digital envelope routines] unsupported
To work around this issue, do one of the following:
- Do not set the RHEL 9 system you use for running virt-who to FIPS mode.
- Do not upgrade the RHEL system you use for running virt-who to version 9.0.
8.3. Software management
The installation process sometimes becomes unresponsive
When you install RHEL, the installation process sometimes becomes unresponsive. The /tmp/packaging.log file displays the following message at the end:
10:20:56,416 DDEBUG dnf: RPM transaction over.
To work around this problem, restart the installation process.
8.4. Shells and command-line tools
ReaR fails during recovery if the TMPDIR variable is set in the configuration file
Setting and exporting TMPDIR in the /etc/rear/local.conf or /etc/rear/site.conf ReaR configuration file does not work and is deprecated.
The ReaR default configuration file /usr/share/rear/conf/default.conf contains the following instructions:
# To have a specific working area directory prefix for Relax-and-Recover
# specify in /etc/rear/local.conf something like
#
# export TMPDIR="/prefix/for/rear/working/directory"
#
# where /prefix/for/rear/working/directory must already exist.
# This is useful for example when there is not sufficient free space
# in /tmp or $TMPDIR for the ISO image or even the backup archive.
These instructions do not work correctly because the TMPDIR variable has the same value in the rescue environment, which is not correct if the directory specified in the TMPDIR variable does not exist in the rescue image.
As a consequence, setting and exporting TMPDIR in the /etc/rear/local.conf file leads to the following error when the rescue image is booted:
mktemp: failed to create file via template '/prefix/for/rear/working/directory/tmp.XXXXXXXXXX': No such file or directory
cp: missing destination file operand after '/etc/rear/mappings/mac'
Try 'cp --help' for more information.
No network interface mapping is specified in /etc/rear/mappings/mac
or to the following error and a later abort, when running rear recover:
ERROR: Could not create build area
To work around this problem, if you want a custom temporary directory, specify it for ReaR temporary files by exporting the variable in the shell environment before executing ReaR. For example, execute the export TMPDIR=… statement and then execute the rear command in the same shell session or script. As a result, the recovery is successful in the described configuration.
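The difference between setting TMPDIR in the configuration file and exporting it in the calling shell is only where the variable takes effect. A minimal sketch of the supported pattern, using a throwaway directory in place of a real ReaR working area:

```shell
# The prefix directory must already exist, as default.conf notes;
# here a freshly created temporary directory stands in for it
workdir=$(mktemp -d)
export TMPDIR="$workdir"

# Programs started from this shell, such as 'rear mkrescue', now
# create their temporary files under $TMPDIR
f=$(mktemp)
echo "$f"
```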
Renaming network interfaces using ifcfg files fails
On RHEL 9, the initscripts package is not installed by default. Consequently, renaming network interfaces using ifcfg files fails. To solve this problem, Red Hat recommends that you use udev rules or link files to rename interfaces. For further details, see Consistent network interface device naming and the systemd.link(5) man page.
If you cannot use one of the recommended solutions, install the initscripts package.
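A link file that renames an interface by matching its MAC address can look like the following hypothetical sketch; the file name, MAC address, and interface name are examples:

```
# /etc/systemd/network/70-lan0.link
[Match]
MACAddress=00:11:22:33:44:55

[Link]
Name=lan0
```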
(BZ#2018112)
The chkconfig package is not installed by default in RHEL 9
The chkconfig package, which updates and queries runlevel information for system services, is not installed by default in RHEL 9.
To manage services, use the systemctl commands or install the chkconfig package manually.
For more information about systemd, see Managing systemd. For instructions on how to use the systemctl utility, see Managing system services with systemctl.
(BZ#2053598)
8.5. Infrastructure services
Both bind and unbound disable validation of SHA-1-based signatures
The bind and unbound components disable validation support for all RSA/SHA1 (algorithm number 5) and RSASHA1-NSEC3-SHA1 (algorithm number 7) signatures, and the SHA-1 usage for signatures is restricted in the DEFAULT system-wide cryptographic policy.
As a result, certain DNSSEC records signed with the SHA-1, RSA/SHA1, and RSASHA1-NSEC3-SHA1 digest algorithms fail to verify in Red Hat Enterprise Linux 9, and the affected domain names become vulnerable.
To work around this problem, upgrade to a different signature algorithm, such as RSA/SHA-256 or elliptic curve keys.
For more information and a list of top-level domains that are affected and vulnerable, see the DNSSEC records signed with RSASHA1 fail to verify solution.
named fails to start if the same writable zone file is used in multiple zones
BIND does not allow the same writable zone file in multiple zones. Consequently, if a configuration includes multiple zones that share a path to a file that can be modified by the named service, named fails to start. To work around this problem, use the in-view clause to share one zone between multiple views, and make sure to use different paths for different zones. For example, include the view names in the path.
Note that writable zone files are typically used in zones with allowed dynamic updates, in slave zones, or in zones maintained by DNSSEC.
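An in-view configuration that shares one writable zone between two views might look like the following hypothetical sketch; the zone name, client ranges, and file path are examples:

```
view "internal" {
    match-clients { 10.0.0.0/8; };
    zone "example.com" {
        type master;
        file "dynamic/example.com.db";  // the single writable copy
    };
};

view "external" {
    match-clients { any; };
    zone "example.com" {
        in-view "internal";             // shares the zone, no second file
    };
};
```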
Setting the console keymap requires the libxkbcommon library on your minimal install
In RHEL 9, certain systemd library dependencies have been converted from dynamic linking to dynamic loading, so that your system opens and uses the libraries at runtime when they are available. With this change, functionality that depends on such libraries is not available unless you install the necessary library. This also affects setting the keyboard layout on systems with a minimal install. As a result, the localectl --no-convert set-x11-keymap gb command fails.
To work around this problem, install the libxkbcommon library:
# dnf install libxkbcommon
8.6. Security
OpenSSL does not detect if a PKCS #11 token supports the creation of raw RSA or RSA-PSS signatures
The TLS 1.3 protocol requires support for RSA-PSS signatures. If a PKCS #11 token does not support raw RSA or RSA-PSS signatures, server applications that use the OpenSSL library fail to work with an RSA key if the key is held by the PKCS #11 token. As a result, TLS communication fails in the described scenario.
To work around this problem, configure servers and clients to use TLS version 1.2 as the highest TLS protocol version available.
(BZ#1681178)
OpenSSL incorrectly handles PKCS #11 tokens that do not support raw RSA or RSA-PSS signatures
The OpenSSL library does not detect key-related capabilities of PKCS #11 tokens. Consequently, establishing a TLS connection fails when a signature is created with a token that does not support raw RSA or RSA-PSS signatures.
To work around the problem, add the following lines after the .include line at the end of the crypto_policy section in the /etc/pki/tls/openssl.cnf file:
SignatureAlgorithms = RSA+SHA256:RSA+SHA512:RSA+SHA384:ECDSA+SHA256:ECDSA+SHA512:ECDSA+SHA384
MaxProtocol = TLSv1.2
As a result, a TLS connection can be established in the described scenario.
(BZ#1685470)
Cryptography not approved by FIPS works in OpenSSL in FIPS mode
Cryptography that is not FIPS-approved works in the OpenSSL toolkit regardless of system settings. Consequently, you can use cryptographic algorithms and ciphers that should be disabled when the system is running in FIPS mode, for example:
- TLS cipher suites using the RSA key exchange work.
- RSA-based algorithms for public-key encryption and decryption work despite using the PKCS #1 and SSLv23 paddings or using keys shorter than 2048 bits.
OpenSSL cannot use engines in FIPS mode
Engine API is deprecated in OpenSSL 3.0 and is incompatible with OpenSSL Federal Information Processing Standards (FIPS) implementation and other FIPS-compatible implementations. Therefore, OpenSSL cannot run engines in FIPS mode. There is no workaround for this problem.
PSK ciphersuites do not work with the FUTURE crypto policy
Pre-shared key (PSK) ciphersuites are not recognized as performing perfect forward secrecy (PFS) key exchange methods. As a consequence, the ECDHE-PSK and DHE-PSK ciphersuites do not work with OpenSSL configured to SECLEVEL=3, for example with the FUTURE crypto policy. As a workaround, you can set a less restrictive crypto policy or set a lower security level (SECLEVEL) for applications that use PSK ciphersuites.
GnuPG incorrectly allows using SHA-1 signatures even if disallowed by crypto-policies
The GNU Privacy Guard (GnuPG) cryptographic software can create and verify signatures that use the SHA-1 algorithm regardless of the settings defined by the system-wide cryptographic policies. Consequently, you can use SHA-1 for cryptographic purposes in the DEFAULT cryptographic policy, which is not consistent with the system-wide deprecation of this insecure algorithm for signatures.
To work around this problem, do not use GnuPG options that involve SHA-1. As a result, you will prevent GnuPG from lowering the default system security by using the insecure SHA-1 signatures.
Some OpenSSH operations do not use FIPS-approved interfaces
The OpenSSL cryptographic library, which is used by OpenSSH, provides two interfaces: legacy and modern. Because of changes to OpenSSL internals, only the modern interfaces use FIPS-certified implementations of cryptographic algorithms. Because OpenSSH uses legacy interfaces for some operations, it does not comply with FIPS requirements.
gpg-agent does not work as an SSH agent in FIPS mode
The gpg-agent tool creates MD5 fingerprints when adding keys to the ssh-agent program even though FIPS mode disables the MD5 digest. Consequently, the ssh-add utility fails to add the keys to the authentication agent.
To work around the problem, create the ~/.gnupg/sshcontrol file without using the gpg-agent --daemon --enable-ssh-support command. For example, you can paste the output of the gpg --list-keys command in the <FINGERPRINT> 0 format to ~/.gnupg/sshcontrol. As a result, gpg-agent works as an SSH authentication agent.
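The fingerprint lines can be extracted mechanically from the machine-readable gpg output. A minimal sketch using simulated output (the key data and fingerprint are made up); a real run would pipe gpg --list-keys --with-colons instead of the here-string:

```shell
# Simulated 'gpg --list-keys --with-colons' output; 'fpr' records
# carry the key fingerprint in field 10
gpg_output='pub:u:255:22:0123456789ABCDEF:1600000000:::u:::scESC:
fpr:::::::::AB12CD34EF56AB12CD34EF56AB12CD34EF56AB12:'

# Emit one "<FINGERPRINT> 0" line per key, the format sshcontrol expects
lines=$(printf '%s\n' "$gpg_output" | awk -F: '$1 == "fpr" { print $10 " 0" }')
echo "$lines"
```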
SELinux staff_u users can incorrectly switch to unconfined_r
When the secure_mode boolean is enabled, staff_u users can incorrectly switch to the unconfined_r role. As a consequence, staff_u users can perform privileged operations affecting the security of the system.
Default SELinux policy allows unconfined executables to make their stack executable
The default state of the selinuxuser_execstack boolean in the SELinux policy is on, which means that unconfined executables can make their stack executable. Executables should not use this option, and it might indicate poorly coded executables or a possible attack. However, due to compatibility with other tools, packages, and third-party products, Red Hat cannot change the value of the boolean in the default policy. If your scenario does not depend on such compatibility aspects, you can turn the boolean off in your local policy by entering the command setsebool -P selinuxuser_execstack off.
Remediating service-related rules during kickstart installations might fail
During a kickstart installation, the OpenSCAP utility sometimes incorrectly shows that a service enable or disable state remediation is not needed. Consequently, OpenSCAP might set the services on the installed system to a non-compliant state. As a workaround, you can scan and remediate the system after the kickstart installation. This will fix the service-related issues.
SSH timeout rules in STIG profiles configure incorrect options
An update of OpenSSH affected the rules in the following Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) profiles:
- DISA STIG for RHEL 9 (xccdf_org.ssgproject.content_profile_stig)
- DISA STIG with GUI for RHEL 9 (xccdf_org.ssgproject.content_profile_stig_gui)
In each of these profiles, the following two rules are affected:
Title: Set SSH Client Alive Count Max to zero
CCE Identifier: CCE-90271-8
Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_keepalive_0
Title: Set SSH Idle Timeout Interval
CCE Identifier: CCE-90811-1
Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_idle_timeout
When applied to SSH servers, each of these rules configures an option (ClientAliveCountMax and ClientAliveInterval) that no longer behaves as previously. As a consequence, OpenSSH no longer disconnects idle SSH users when it reaches the timeout configured by these rules. As a workaround, these rules have been temporarily removed from the DISA STIG for RHEL 9 and DISA STIG with GUI for RHEL 9 profiles until a solution is developed.
fagenrules --load does not work correctly
The fapolicyd service does not correctly handle the signal hang up (SIGHUP). Consequently, fapolicyd terminates after receiving the SIGHUP signal. Therefore, the fagenrules --load command does not work properly, and rule updates require manual restarts of fapolicyd. To work around this problem, restart the fapolicyd service after any change in rules, and as a result fagenrules --load will work correctly.
Ansible remediations require additional collections
With the replacement of Ansible Engine by the ansible-core package, the list of Ansible modules provided with the RHEL subscription is reduced. As a consequence, running remediations that use Ansible content included within the scap-security-guide package requires collections from the rhc-worker-playbook package.
For an Ansible remediation, perform the following steps:
- Install the required packages:
# dnf install -y ansible-core scap-security-guide rhc-worker-playbook
- Navigate to the /usr/share/scap-security-guide/ansible directory:
# cd /usr/share/scap-security-guide/ansible
- Run the relevant Ansible playbook using environment variables that define the path to the additional Ansible collections:
# ANSIBLE_COLLECTIONS_PATH=/usr/share/rhc-worker-playbook/ansible/collections/ansible_collections/ ansible-playbook -c local -i localhost, rhel9-playbook-cis_server_l1.yml
Replace cis_server_l1 with the ID of the profile against which you want to remediate the system.
As a result, the Ansible content is processed correctly.
Support of the collections provided in rhc-worker-playbook is limited to enabling the Ansible content sourced in scap-security-guide.
8.7. Networking
The nm-cloud-setup service removes manually-configured secondary IP addresses from interfaces
Based on the information received from the cloud environment, the nm-cloud-setup service configures network interfaces. Disable nm-cloud-setup to manually configure interfaces. However, in certain cases, other services on the host can configure interfaces as well. For example, these services could add secondary IP addresses. To avoid nm-cloud-setup removing secondary IP addresses:
- Stop and disable the nm-cloud-setup service and timer:
# systemctl disable --now nm-cloud-setup.service nm-cloud-setup.timer
- Display the available connection profiles:
# nmcli connection show
- Reactivate the affected connection profiles:
# nmcli connection up "<profile_name>"
As a result, the service no longer removes manually-configured secondary IP addresses from interfaces.
An empty rd.znet option in the kernel command line causes the network configuration to fail
An rd.znet option without any arguments, such as net types or subchannels, in the kernel command line fails to configure networking. To work around this problem, either remove the rd.znet option from the command line completely or specify the relevant net types, subchannels, and other relevant options. For more information about these options, see the dracut.cmdline(7) man page.
(BZ#1931284)
Failure to update the session key causes the connection to break
The kernel Transport Layer Security (kTLS) protocol does not support updating the session key used by the symmetric cipher. Consequently, the user cannot update the key, which causes a connection break. To work around this problem, disable kTLS. As a result, with the workaround, it is possible to successfully update the session key.
(BZ#2013650)
The initscripts package is not installed by default
By default, the initscripts package is not installed. As a consequence, the ifup and ifdown utilities are not available. As an alternative, use the nmcli connection up and nmcli connection down commands to enable and disable connections. If the suggested alternative does not work for you, report the problem and install the NetworkManager-initscripts-updown package, which provides a NetworkManager solution for the ifup and ifdown utilities.
The primary IP address of an instance changes after starting the nm-cloud-setup service in Alibaba Cloud
After launching an instance in Alibaba Cloud, the nm-cloud-setup service assigns the primary IP address to the instance. However, if you assign multiple secondary IP addresses to the instance and start the nm-cloud-setup service, the former primary IP address gets replaced by one of the already assigned secondary IP addresses. The returned list of metadata verifies the same. To work around the problem, configure secondary IP addresses manually to avoid the primary IP address changing. As a result, the instance retains both IP addresses and the primary IP address does not change.
8.8. Kernel
kdump fails to start on the RHEL 9 kernel
The RHEL 9 kernel does not have the crashkernel=auto parameter configured as default. Consequently, the kdump service fails to start by default.
To work around this problem, configure the crashkernel= option to the required value.
For example, to reserve 256 MB of memory using the grubby utility, enter the following command:
# grubby --args crashkernel=256M --update-kernel ALL
As a result, the RHEL 9 kernel starts kdump and uses the configured memory size value to dump the vmcore file.
(BZ#1894783)
The kdump mechanism fails to capture vmcore on LUKS-encrypted targets
When running kdump on systems with Linux Unified Key Setup (LUKS) encrypted partitions, systems require a certain amount of available memory. When the available memory is less than the required amount of memory, the systemd-cryptsetup service fails to mount the partition. Consequently, the second kernel fails to capture the crash dump file (vmcore) on LUKS-encrypted targets.
With the kdumpctl estimate command, you can query the Recommended crashkernel value, which is the recommended memory size required for kdump.
To work around this issue, use the following steps to configure the required memory for kdump on LUKS-encrypted targets:
- Print the estimated crashkernel value:
# kdumpctl estimate
- Configure the amount of required memory by increasing the crashkernel value:
# grubby --args=crashkernel=652M --update-kernel=ALL
- Reboot the system for the changes to take effect:
# reboot
As a result, kdump works correctly on systems with LUKS-encrypted partitions.
(BZ#2017401)
Allocating crash kernel memory fails at boot time
On certain Ampere Altra systems, allocating the crash kernel memory for kdump usage fails during boot when the available memory is below 1 GB. Consequently, the kdumpctl command fails to start the kdump service because the required memory is more than the available memory size.
As a workaround, decrease the value of the crashkernel parameter by a minimum of 240 MB to fit the size requirement, for example crashkernel=240M. As a result, the crash kernel memory allocation for kdump does not fail on Ampere Altra systems.
kTLS does not support offloading of TLS 1.3 to NICs
Kernel Transport Layer Security (kTLS) does not support offloading of TLS 1.3 to NICs. Consequently, software encryption is used with TLS 1.3 even when the NICs support TLS offload. To work around this problem, disable TLS 1.3 if offload is required. As a result, you can offload only TLS 1.2. When TLS 1.3 is in use, performance is lower because TLS 1.3 cannot be offloaded.
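For applications that read the system OpenSSL configuration, capping the protocol version can use the same mechanism shown for PKCS #11 tokens earlier in this chapter. A hypothetical sketch of a line added after the .include line in the crypto_policy section of /etc/pki/tls/openssl.cnf:

```
# Cap TLS at 1.2 so kTLS NIC offload can be used
MaxProtocol = TLSv1.2
```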
(BZ#2000616)
FADump enabled with Secure Boot might lead to GRUB Out of Memory (OOM)
In the Secure Boot environment, GRUB and PowerVM together allocate a 512 MB memory region, known as the Real Mode Area (RMA), for boot memory. The region is divided among the boot components and, if any component exceeds its allocation, out-of-memory failures occur.
Generally, the default installed initramfs file system and the vmlinux symbol table are within the limits to avoid such failures. However, if Firmware Assisted Dump (FADump) is enabled in the system, the default initramfs size can increase and exceed 95 MB. As a consequence, every system reboot leads to a GRUB OOM state.
To avoid this issue, do not use Secure Boot and FADump together. For more information and methods on how to work around this issue, see https://www.ibm.com/support/pages/node/6846531.
(BZ#2149172)
Systems in Secure Boot cannot run dynamic LPAR operations
Users cannot run dynamic logical partition (DLPAR) operations from the Hardware Management Console (HMC) if either of these conditions is met:
- The Secure Boot feature is enabled, which implicitly enables the kernel lockdown mechanism in integrity mode.
- The kernel lockdown mechanism is manually enabled in integrity or confidentiality mode.
In RHEL 9, kernel lockdown completely blocks Run Time Abstraction Services (RTAS) access to system memory accessible through the /dev/mem character device file. Several RTAS calls require write access to /dev/mem to function properly. Consequently, RTAS calls do not execute correctly and users see the following error message:
HSCL2957 Either there is currently no RMC connection between the management console and the partition <LPAR name> or the partition does not support dynamic partitioning operations. Verify the network setup on the management console and the partition and ensure that any firewall authentication between the management console and the partition has occurred. Run the management console diagrmc command to identify problems that might be causing no RMC connection.
(BZ#2083106)
dkms provides an incorrect warning on program failure with correctly compiled drivers on 64-bit ARM CPUs
The Dynamic Kernel Module Support (dkms) utility does not recognize that the kernel headers for 64-bit ARM CPUs work for both the kernels with 4-kilobyte and 64-kilobyte page sizes. As a result, when a kernel update is performed and the kernel-64k-devel package is not installed, dkms provides an incorrect warning on why the program failed with correctly compiled drivers. To work around this problem, install the kernel-headers package, which contains header files for both types of ARM CPU architectures and is not specific to dkms and its requirements.
(JIRA:RHEL-25967)
8.9. Boot loader
New kernels lose previous command-line options
The GRUB boot loader does not apply custom, previously configured kernel command-line options to new kernels. Consequently, when you upgrade the kernel package, the system behavior might change after reboot due to the missing options.
To work around the problem, manually add all custom kernel command-line options after each kernel upgrade. As a result, the kernel applies custom options as expected, until the next kernel upgrade.
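As a sketch of re-adding a custom option persistently, you might append it to the GRUB_CMDLINE_LINUX line in /etc/default/grub; the option name myopt=1 below is a hypothetical placeholder, and the snippet edits a temporary copy so you can verify the change before touching the real file. On a real system, you would then regenerate the GRUB configuration, or use grubby --update-kernel=ALL --args="myopt=1" instead.

```shell
# Work on a temporary copy of /etc/default/grub; "myopt=1" is a placeholder.
cfg=$(mktemp)
printf 'GRUB_TIMEOUT=5\nGRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"\n' > "$cfg"

# Append the custom option inside the quoted GRUB_CMDLINE_LINUX value.
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 myopt=1"/' "$cfg"

grep '^GRUB_CMDLINE_LINUX' "$cfg"
```

Verifying the edited line first helps avoid an unbootable entry caused by a typo in the option string.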
8.10. File systems and storage
Device Mapper Multipath is not supported with NVMe/TCP
Using Device Mapper Multipath with the nvme-tcp driver can result in Call Trace warnings and system instability. To work around this problem, NVMe/TCP users must enable native NVMe multipathing and not use the device-mapper-multipath tools with NVMe.
Native NVMe multipathing is enabled by default in RHEL 9. For more information, see Enabling multipathing on NVMe devices.
(BZ#2033080)
The blk-availability systemd service deactivates complex device stacks
In systemd, the default block deactivation code does not always handle complex stacks of virtual block devices correctly. In some configurations, virtual devices might not be removed during shutdown, which causes error messages to be logged. To work around this problem, deactivate complex block device stacks by executing the following command:
# systemctl enable --now blk-availability.service
As a result, complex virtual device stacks are correctly deactivated during shutdown and do not produce error messages.
(BZ#2011699)
Invalid sysfs value for supported_speeds
The qla2xxx driver reports 20Gb/s instead of the expected 64Gb/s as one of the supported port speeds in the sysfs supported_speeds attribute:
$ cat /sys/class/fc_host/host12/supported_speeds
16 Gbit, 32 Gbit, 20 Gbit
As a consequence, if the HBA supports a 64Gb/s link speed, the sysfs supported_speeds value is incorrect. This affects only the supported_speeds value of sysfs; the port operates at the expected negotiated link rate.
(BZ#2069758)
Unable to connect to NVMe namespaces from Broadcom initiator on AMD EPYC systems
By default, the RHEL kernel enables the IOMMU on AMD-based platforms. Consequently, when you use IOMMU-enabled platforms on servers with AMD processors, you might experience NVMe I/O problems, such as I/Os failing due to transfer length mismatches.
To work around this problem, set the IOMMU to passthrough mode by using the kernel command-line option iommu=pt. As a result, you can connect to NVMe namespaces from the Broadcom initiator on AMD EPYC systems.
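After rebooting with the option in place, you can verify that the running kernel actually picked it up by checking /proc/cmdline — a small sketch:

```shell
# Report whether iommu=pt is present on the currently running kernel's
# command line; prints one of two fixed messages.
if grep -qw 'iommu=pt' /proc/cmdline 2>/dev/null; then
    echo "iommu=pt is enabled"
else
    echo "iommu=pt is not set"
fi
```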
(BZ#2073541)
8.11. Dynamic programming languages, web and database servers
The --ssl-fips-mode option in MySQL and MariaDB does not change FIPS mode
The --ssl-fips-mode option in MySQL and MariaDB in RHEL works differently than in upstream.
In RHEL 9, if you use --ssl-fips-mode as an argument for the mysqld or mariadbd daemon, or if you use ssl-fips-mode in the MySQL or MariaDB server configuration files, --ssl-fips-mode does not change FIPS mode for these database servers.
Instead:
- If you set --ssl-fips-mode to ON, the mysqld or mariadbd server daemon does not start.
- If you set --ssl-fips-mode to OFF on a FIPS-enabled system, the mysqld or mariadbd server daemons still run in FIPS mode.
This is expected because FIPS mode should be enabled or disabled for the whole RHEL system, not for specific components.
Therefore, do not use the --ssl-fips-mode option in MySQL or MariaDB in RHEL. Instead, ensure that FIPS mode is enabled on the whole RHEL system:
- Preferably, install RHEL with FIPS mode enabled. Enabling FIPS mode during the installation ensures that the system generates all keys with FIPS-approved algorithms and with continuous monitoring tests in place. For information about installing RHEL in FIPS mode, see Installing the system in FIPS mode.
- Alternatively, you can switch FIPS mode for the entire RHEL system by following the procedure in Switching the system to FIPS mode.
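Because FIPS mode is a whole-system property, you can confirm the current state before starting the database servers, for example with fips-mode-setup --check, or by reading the kernel flag directly — a minimal sketch:

```shell
# 1 means the kernel runs in FIPS mode, 0 means it does not; the fallback to 0
# covers environments where the flag is not exposed.
fips=$(cat /proc/sys/crypto/fips_enabled 2>/dev/null || echo 0)
echo "FIPS mode enabled: $fips"
```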
8.12. Compilers and development tools
Certain symbol-based probes do not work in SystemTap on the 64-bit ARM architecture
Kernel configuration disables certain functionality needed for SystemTap. Consequently, some symbol-based probes do not work on the 64-bit ARM architecture. As a result, affected SystemTap scripts might not run or might not collect hits on desired probe points.
Note that this bug has been fixed for the remaining architectures with the release of the RHBA-2022:5259 advisory.
(BZ#2083727)
8.13. Identity Management
RHEL 9 Kerberos client fails to authenticate a user using PKINIT against Heimdal KDC
During the PKINIT authentication of an IdM user on a RHEL 9 Kerberos client, the Heimdal Key Distribution Center (KDC) on RHEL 9 or earlier uses the SHA-1 backup signature algorithm because the Kerberos client does not support the supportedCMSTypes field. However, the SHA-1 algorithm has been deprecated in RHEL 9, and therefore the user authentication fails.
To work around this problem, enable support for the SHA-1 algorithm on your RHEL 9 clients with the following command:
# update-crypto-policies --set DEFAULT:SHA1
As a result, PKINIT authentication works between the Kerberos client and Heimdal KDC.
For more details about supported backup signature algorithms, see Kerberos Encryption Types Defined for CMS Algorithm Identifiers.
The PKINIT authentication of a user fails if a RHEL 9 Kerberos agent communicates with a non-RHEL 9 Kerberos agent
If a RHEL 9 Kerberos agent interacts with another, non-RHEL 9 Kerberos agent in your environment, the Public Key Cryptography for initial authentication (PKINIT) authentication of a user fails. To work around the problem, perform one of the following actions:
- Set the RHEL 9 agent’s crypto-policy to DEFAULT:SHA1 to allow the verification of SHA-1 signatures:
# update-crypto-policies --set DEFAULT:SHA1
- Update the non-RHEL 9 agent to ensure it does not sign CMS data using the SHA-1 algorithm. For this, update your Kerberos packages to the versions that use SHA-256 instead of SHA-1:
- CentOS 9 Stream: krb5-1.19.1-15
- RHEL 8.7: krb5-1.18.2-17
- RHEL 7.9: krb5-1.15.1-53
- Fedora Rawhide/36: krb5-1.19.2-7
- Fedora 35/34: krb5-1.19.2-3
You must perform one of these actions regardless of whether the non-patched agent is a Kerberos client or the Kerberos Distribution Center (KDC).
As a result, the PKINIT authentication of a user works correctly.
Note that for other operating systems, it is the krb5-1.20 release that ensures that the agent signs CMS data with SHA-256 instead of SHA-1.
The DEFAULT:SHA1 sub-policy must be set on RHEL 9 clients for PKINIT to work against older RHEL KDCs and AD KDCs
The SHA-1 digest algorithm has been deprecated in RHEL 9, and CMS messages for Public Key Cryptography for initial authentication (PKINIT) are now signed with the stronger SHA-256 algorithm.
While SHA-256 is used by default starting with RHEL 7.9 and RHEL 8.7, older Kerberos Key Distribution Centers (KDCs) on RHEL 7.8 and RHEL 8.6 and earlier still use the SHA-1 digest algorithm to sign CMS messages. So does the Active Directory (AD) KDC.
As a result, RHEL 9 Kerberos clients fail to authenticate users using PKINIT against the following:
- KDCs running on RHEL 7.8 and earlier
- KDCs running on RHEL 8.6 and earlier
- AD KDCs
To work around the problem, enable support for the SHA-1 algorithm on your RHEL 9 systems with the following command:
# update-crypto-policies --set DEFAULT:SHA1
See also RHEL 9 Kerberos client fails to authenticate a user using PKINIT against Heimdal KDC.
Directory Server terminates unexpectedly when started in referral mode
Due to a bug, global referral mode does not work in Directory Server. If you start the ns-slapd process with the refer option as the dirsrv user, Directory Server ignores the port settings and terminates unexpectedly. Trying to run the process as the root user changes SELinux labels and prevents the service from starting in normal mode in the future. There are no workarounds available.
Configuring a referral for a suffix fails in Directory Server
If you set a back-end referral in Directory Server, setting the state of the backend by using the dsconf <instance_name> backend suffix set --state referral command fails with the following error:
Error: 103 - 9 - 53 - Server is unwilling to perform - [] - need to set nsslapd-referral before moving to referral state
As a consequence, configuring a referral for suffixes fails. To work around the problem:
- Set the nsslapd-referral parameter manually:
# ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com
dn: cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
changetype: modify
add: nsslapd-referral
nsslapd-referral: ldap://remote_server:389/dc=example,dc=com
- Set the back-end state:
# dsconf <instance_name> backend suffix set --state referral
As a result, you can configure a referral for a suffix.
The dsconf utility has no option to create fix-up tasks for the entryUUID plug-in
The dsconf utility does not provide an option to create fix-up tasks for the entryUUID plug-in. As a result, administrators cannot use dsconf to create a task to automatically add entryUUID attributes to existing entries. As a workaround, create a task manually:
# ldapadd -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=entryuuid_fixup_<time_stamp>,cn=entryuuid task,cn=tasks,cn=config
objectClass: top
objectClass: extensibleObject
basedn: <fixup base tree>
cn: entryuuid_fixup_<time_stamp>
filter: <filtered_entry>
After the task is created, Directory Server fixes entries with missing or invalid entryUUID attributes.
Potential risk when using the default value for the ldap_id_use_start_tls option
When using ldap:// without TLS for identity lookups, it can pose a risk of an attack vector, particularly a man-in-the-middle (MITM) attack, which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search.
Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls, defaults to false. Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap. Note that id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI.
If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL.
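If unencrypted LDAP traffic is not acceptable, the relevant part of /etc/sssd/sssd.conf might look like the following sketch; the domain name and server URI are illustrative:

```
[domain/example.com]
id_provider = ldap
ldap_uri = ldap://ldapserver.example.com
ldap_id_use_start_tls = true
```

After editing the file, restart the sssd service for the change to take effect.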
(JIRA:RHELPLAN-155168)
8.14. Desktop
Firefox add-ons are disabled after upgrading to RHEL 9
If you upgrade from RHEL 8 to RHEL 9, all add-ons that you previously enabled in Firefox are disabled.
To work around the problem, manually reinstall or update the add-ons. As a result, the add-ons are enabled as expected.
VNC is not running after upgrading to RHEL 9
After upgrading from RHEL 8 to RHEL 9, the VNC server fails to start, even if it was previously enabled.
To work around the problem, manually enable the vncserver service after the system upgrade:
# systemctl enable --now vncserver@:port-number
As a result, VNC is now enabled and starts after every system boot as expected.
8.15. Graphics infrastructures
Matrox G200e shows no output on a VGA display
Your display might show no graphical output if you use the following system configuration:
- The Matrox G200e GPU
- A display connected over the VGA controller
As a consequence, you cannot use or install RHEL on this configuration.
To work around the problem, use the following procedure:
- Boot the system to the boot loader menu.
- Add the module_blacklist=mgag200 option to the kernel command line.
As a result, RHEL boots and shows graphical output as expected, but the maximum resolution is limited to 1024x768 at the 16-bit color depth.
(BZ#1960467)
X.org configuration utilities do not work under Wayland
X.org utilities for manipulating the screen do not work in the Wayland session. Notably, the xrandr utility does not work under Wayland due to its different approach to handling resolutions, rotations, and layout.
(JIRA:RHELPLAN-121049)
NVIDIA drivers might revert to X.org
Under certain conditions, the proprietary NVIDIA drivers disable the Wayland display protocol and revert to the X.org display server:
- If the version of the NVIDIA driver is lower than 470.
- If the system is a laptop that uses hybrid graphics.
- If you have not enabled the required NVIDIA driver options.
Additionally, Wayland is enabled but the desktop session uses X.org by default if the version of the NVIDIA driver is lower than 510.
(JIRA:RHELPLAN-119001)
Night Light is not available on Wayland with NVIDIA
When the proprietary NVIDIA drivers are enabled on your system, the Night Light feature of GNOME is not available in Wayland sessions. The NVIDIA drivers do not currently support Night Light.
(JIRA:RHELPLAN-119852)
8.16. The web console
Removing USB host devices using the web console does not work as expected
When you attach a USB device to a virtual machine (VM), the device number and bus number of the USB device might change after they are passed to the VM. As a consequence, using the web console to remove such devices fails due to the incorrect correlation of the device and bus numbers. To work around this problem, remove the <hostdev> part of the USB device from the VM’s XML configuration.
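For illustration, the <hostdev> element to delete (for example, with virsh edit) typically looks similar to the following sketch; the vendor ID, product ID, and address values here are hypothetical:

```
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x1234'/>
    <product id='0x5678'/>
    <address bus='1' device='4'/>
  </source>
</hostdev>
```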
(JIRA:RHELPLAN-109067)
Attaching multiple host devices using the web console does not work
When you select multiple devices to attach to a virtual machine (VM) using the web console, only a single device is attached and the rest are ignored. To work around this problem, attach only one device at a time.
(JIRA:RHELPLAN-115603)
8.17. Virtualization
Installing a virtual machine over HTTPS sometimes fails
Currently, the virt-install utility fails when attempting to install a guest operating system from an ISO source over an HTTPS connection - for example, using virt-install --cdrom https://example/path/to/image.iso. Instead of creating a virtual machine (VM), the described operation terminates unexpectedly with an internal error: process exited while connecting to monitor message.
To work around this problem, install the qemu-kvm-block-curl package on the host to enable HTTPS protocol support. Alternatively, use a different connection protocol or a different installation source.
Using NVIDIA drivers in virtual machines disables Wayland
Currently, NVIDIA drivers are not compatible with the Wayland graphical session. As a consequence, RHEL guest operating systems that use NVIDIA drivers automatically disable Wayland and load an Xorg session instead. This primarily occurs in the following scenarios:
- When you pass through an NVIDIA GPU device to a RHEL virtual machine (VM)
- When you assign an NVIDIA vGPU mediated device to a RHEL VM
(JIRA:RHELPLAN-117234)
The Milan VM CPU type is sometimes not available on AMD Milan systems
On certain AMD Milan systems, the Enhanced REP MOVSB (erms) and Fast Short REP MOVSB (fsrm) feature flags are disabled in the BIOS by default. Consequently, the Milan CPU type might not be available on these systems. In addition, VM live migration between Milan hosts with different feature flag settings might fail. To work around these problems, manually turn on erms and fsrm in the BIOS of your host.
(BZ#2077767)
Network traffic performance in virtual machines might be reduced
In some cases, RHEL 9.0 guest virtual machines (VMs) have somewhat decreased performance when handling high levels of network traffic.
Disabling AVX causes VMs to become unbootable
On a host machine that uses a CPU with Advanced Vector Extensions (AVX) support, attempting to boot a VM with AVX explicitly disabled currently fails, and instead triggers a kernel panic in the VM.
(BZ#2005173)
Failover virtio NICs are not assigned an IP address on Windows virtual machines
Currently, when starting a Windows virtual machine (VM) with only a failover virtio NIC, the VM fails to assign an IP address to the NIC. Consequently, the NIC is unable to set up a network connection. Currently, there is no workaround.
A hostdev interface with failover settings cannot be hot-plugged after being hot-unplugged
After removing a hostdev network interface with failover configuration from a running virtual machine (VM), the interface currently cannot be re-attached to the same running VM.
Live post-copy migration of VMs with failover VFs fails
Currently, attempting to post-copy migrate a running virtual machine (VM) fails if the VM uses a device with the virtual function (VF) failover capability enabled. To work around the problem, use the standard migration type, rather than post-copy migration.
8.18. RHEL in cloud environments
SR-IOV performs suboptimally in ARM 64 RHEL 9 virtual machines on Azure
Currently, SR-IOV networking devices have significantly lower throughput and higher latency than expected in ARM 64 RHEL 9 virtual machines (VMs) running on a Microsoft Azure platform.
(BZ#2068432)
Mouse is not usable in RHEL 9 VMs on XenServer 7 with console proxy
When running a RHEL 9 virtual machine (VM) on a XenServer 7 platform with a console proxy, it is not possible to use the mouse in the VM’s GUI. To work around this problem, disable the Wayland compositor protocol in the VM as follows:
- Open the /etc/gdm/custom.conf file.
- Uncomment the WaylandEnable=false line.
- Save the file.
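The edit can also be scripted. The following sketch performs the uncommenting with sed on a temporary copy of the file, so it is safe to run anywhere; on the real system, you would point the same sed command at /etc/gdm/custom.conf and then reboot:

```shell
# Create a throwaway copy that mimics the stock /etc/gdm/custom.conf content.
conf=$(mktemp)
printf '[daemon]\n# Uncomment the line below to turn off Wayland\n#WaylandEnable=false\n' > "$conf"

# Uncomment the WaylandEnable=false line.
sed -i 's/^#WaylandEnable=false/WaylandEnable=false/' "$conf"

grep '^WaylandEnable' "$conf"
```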
In addition, note that Red Hat does not support XenServer as a platform for running RHEL VMs, and discourages using XenServer with RHEL in production environments.
(BZ#2019593)
Cloning or restoring RHEL 9 virtual machines that use LVM on Nutanix AHV causes non-root partitions to disappear
When running a RHEL 9 guest operating system on a virtual machine (VM) hosted on the Nutanix AHV hypervisor, restoring the VM from a snapshot or cloning the VM currently causes non-root partitions in the VM to disappear if the guest is using Logical Volume Management (LVM). As a consequence, the following problems occur:
- After restoring the VM from a snapshot, the VM cannot boot, and instead enters emergency mode.
- A VM created by cloning cannot boot, and instead enters emergency mode.
To work around these problems, do the following in emergency mode of the VM:
- Remove the LVM system devices file:
rm /etc/lvm/devices/system.devices
- Recreate the LVM device settings:
vgimportdevices -a
- Reboot the VM.
This makes it possible for the cloned or restored VM to boot up correctly.
(BZ#2059545)
The SR-IOV functionality of a network adapter attached to a Hyper-V virtual machine might not work
Currently, when attaching a network adapter with single-root I/O virtualization (SR-IOV) enabled to a RHEL 9 virtual machine (VM) running on Microsoft Hyper-V hypervisor, the SR-IOV functionality in some cases does not work correctly.
To work around this problem, disable SR-IOV in the VM configuration, and then enable it again.
- In the Hyper-V Manager window, right-click the VM.
- In the contextual menu, navigate to Settings/Network Adapter/Hardware Acceleration.
- Uncheck the Enable SR-IOV check box.
- Click Apply.
- Repeat steps 1 and 2 to navigate to the Enable SR-IOV option again.
- Check the Enable SR-IOV check box.
- Click Apply.
(BZ#2030922)
Customizing RHEL 9 guests on ESXi sometimes causes networking problems
Currently, customizing a RHEL 9 guest operating system in the VMware ESXi hypervisor does not work correctly with NetworkManager key files. As a consequence, if the guest is using such a key file, it will have incorrect network settings, such as the IP address or the gateway.
For details and workaround instructions, see the VMware Knowledge Base.
(BZ#2037657)
8.19. Supportability
Timeout when running sos report on IBM Power Systems, Little Endian
When running the sos report command on IBM Power Systems, Little Endian with hundreds or thousands of CPUs, the processor plugin reaches its default timeout of 300 seconds when collecting the huge content of the /sys/devices/system/cpu directory. As a workaround, increase the plugin’s timeout accordingly:
- For a one-time setting, run:
# sos report -k processor.timeout=1800
- For a permanent change, edit the [plugin_options] section of the /etc/sos/sos.conf file:
[plugin_options]
# Specify any plugin options and their values here. These options take the form
# plugin_name.option_name = value
#rpm.rpmva = off
processor.timeout = 1800
The example value is set to 1800. The particular timeout value highly depends on a specific system. To set the plugin’s timeout appropriately, you can first estimate the time needed to collect the plugin with no timeout by running the following command:
# time sos report -o processor -k processor.timeout=0 --batch --build
(BZ#1869561)
8.20. Containers
Container images signed with a Beta GPG key cannot be pulled
Currently, when you try to pull RHEL 9 Beta container images, podman exits with the error message: Error: Source image rejected: None of the signatures were accepted. The images fail to be pulled due to current builds being configured to not trust the RHEL Beta GPG keys by default.
As a workaround, ensure that the Red Hat Beta GPG key is stored on your local system and update the existing trust scope with the podman image trust set command for the appropriate beta namespace.
If you do not have the Beta GPG key stored locally, you can pull it by running the following command:
sudo wget -O /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta https://www.redhat.com/security/data/f21541eb.txt
To add the Beta GPG key as trusted to your namespace, use one of the following commands:
$ sudo podman image trust set -f /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta registry.access.redhat.com/namespace
or
$ sudo podman image trust set -f /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta registry.redhat.io/namespace
Replace namespace with ubi9-beta or rhel9-beta.
Podman fails to pull a container "X509: certificate signed by unknown authority"
If you have your own internal registry signed by your own CA certificate, you must import the certificate onto your host machine. Otherwise, an error occurs:
x509: certificate signed by unknown authority
Import the CA certificates on your host:
# cd /etc/pki/ca-trust/source/anchors/
[anchors]# curl -O <your_certificate>.crt
[anchors]# update-ca-trust
Then you can pull container images from the internal registry.
Running systemd within an older container image does not work
Running systemd within an older container image, for example, centos:7, does not work:
$ podman run --rm -ti centos:7 /usr/lib/systemd/systemd
Storing signatures
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems, freezing.
To work around this problem, use the following commands:
# mkdir /sys/fs/cgroup/systemd
# mount none -t cgroup -o none,name=systemd /sys/fs/cgroup/systemd
# podman run --runtime /usr/bin/crun --annotation=run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup --rm -ti centos:7 /usr/lib/systemd/systemd
(JIRA:RHELPLAN-96940)
podman system connection add and podman image scp fail
Podman uses SHA-1 hashes for the RSA key exchange. Regular SSH connections among machines using RSA keys work, but the podman system connection add and podman image scp commands do not work using the same RSA keys, because the SHA-1 hashes are not accepted for key exchange on RHEL 9:
$ podman system connection add --identity ~/.ssh/id_rsa test_connection $REMOTE_SSH_MACHINE
Error: failed to connect: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
To work around this problem, use ED25519 keys:
- Connect to the remote machine:
$ ssh -i ~/.ssh/id_ed25519 $REMOTE_SSH_MACHINE
- Record the ssh destination for the Podman service:
$ podman system connection add --identity ~/.ssh/id_ed25519 test_connection $REMOTE_SSH_MACHINE
- Verify that the ssh destination was recorded:
$ podman system connection list
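The steps above assume an ED25519 key pair already exists. If it does not, you can generate one first — a sketch that uses a temporary directory and an empty passphrase for brevity; on a real system, generate the key as ~/.ssh/id_ed25519 and consider protecting it with a passphrase:

```shell
# Generate an ED25519 key pair in a throwaway directory.
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$keydir/id_ed25519"

# The private and public key files are created side by side.
ls "$keydir"
```

You can then copy the public key to the remote machine, for example with ssh-copy-id, before running the podman system connection add command.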
Note that with the release of the RHBA-2022:5951 advisory, the problem has been fixed.
(JIRA:RHELPLAN-121180)