Chapter 5. Bug fixes
This part describes bugs fixed in Red Hat Enterprise Linux 9.6 that have a significant impact on users.
5.1. Security
shlibsign now works in FIPS mode
Before this update, the shlibsign program did not work in FIPS mode. Consequently, when you rebuilt an NSS library in FIPS mode, you had to leave FIPS mode to sign the library. The program has been fixed, and you can now use shlibsign in FIPS mode.
Audit now loads rules referencing /usr/lib/modules/
Before this update, when the ProtectKernelModules option was set to true for auditd.service, the Audit subsystem did not load rules that reference files in the /usr/lib/modules/ directory and reported the error message Error sending add rule data request (No such file or directory). With this update, Audit also loads these rules, and you no longer have to reload the rules by using the auditctl -R or augenrules --load commands.
Jira:RHEL-59570[1]
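For reference, a rule of the kind affected by this bug can be expressed as a rules-file fragment; the file name and key below are examples, not taken from the original report:

```
## /etc/audit/rules.d/42-modules.rules (hypothetical file name and key)
## Watch files under /usr/lib/modules/ for execute access; with this update,
## such rules load even with ProtectKernelModules=true set for auditd.service.
-w /usr/lib/modules/ -p x -k module-access
```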
update-ca-trust extract no longer fails to extract certificates with long names
When extracting certificates from the trust store, the trust tool internally derives the file name from the certificate's object label. For sufficiently long labels, the resulting path could previously exceed the system's maximum file name length. As a consequence, the trust tool failed to create the file. With this update, the derived name is always truncated to at most 255 characters. As a result, file creation no longer fails when the object label of a certificate is too long.
Jira:RHEL-58899[1]
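The effect of the truncation can be sketched in shell; this is an illustration of the 255-character limit, not the trust tool's internal code:

```shell
# Build a 300-character object label of the kind that previously caused
# file creation to fail, then truncate the derived file name to 255
# characters, the usual NAME_MAX limit on Linux file systems.
label=$(printf 'x%.0s' $(seq 1 300))
fname=$(printf '%s' "$label" | cut -c1-255)
touch "$fname"                     # file creation succeeds at 255 characters
echo "${#label} -> ${#fname}"      # prints: 300 -> 255
```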
Rule to allow dac_override and dac_read_search for qemu-guest-agent added to the SELinux policy
Previously, the SELinux policy did not have rules to allow qemu-guest-agent the dac_override and dac_read_search capabilities. As a consequence, freezing and thawing virtual machine file systems did not work properly when the DAC permissions on the file system mount point did not grant access to the root user. This update adds the missing rules to the policy. As a result, fsfreeze, an important qemu-ga command for creating consistent snapshots, works correctly.
OpenSSL no longer enables cipher suites with disabled hashes or MACs
Previously, applying custom cryptographic policies could leave certain TLS 1.3 cipher suites enabled even if their hashes or MACs were disabled, because the OpenSSL TLS 1.3-specific Ciphersuites option values were controlled only by the ciphers option of the cryptographic policy. With this update, crypto-policies takes more algorithms into account when deciding whether to enable a cipher suite. As a result, OpenSSL on systems with custom cryptographic policies might refuse to negotiate some previously enabled TLS 1.3 cipher suites, in better accordance with the system configuration.
Jira:RHEL-76528[1]
5.2. Subscription management
RHEL web console Subscription Manager plugin now detects Insights data upload failures
Before this update, the systemd service manager automatically restarted the insights-client service on failure, which broke the built-in detection in the Subscription Manager plugin in the RHEL web console. This detection relied on the service entering a failed state, which the automatic restart prevented. Consequently, the RHEL web console did not display any warning when an Insights data upload failed. With this update, the detection of the insights-client status has been improved to account for the new status when an upload fails. As a result, Subscription Manager in the web console detects Insights data upload failures correctly.
subscription-manager no longer retains nonessential text in the terminal
Starting with RHEL 9.1, subscription-manager displays progress information while processing any operation. Previously, for some languages, typically non-Latin, progress messages were not cleaned up after the operation finished. With this update, all the messages are cleaned up properly when the operation finishes.
If you have disabled the progress messages before, you can re-enable them by entering the following command:
# subscription-manager config --rhsm.progress_messages=1
Jira:RHELPLAN-137234[1]
5.3. Software management
dnf needs-restarting --reboothint now correctly reports whether a reboot is needed on systems with a real-time clock not running in UTC
Before this update, if you updated a package that required a system reboot to fully apply the update on a system with a real-time clock not running in UTC, the dnf needs-restarting --reboothint command might not have reported that the reboot was needed. With this update, the systemd UnitsLoadStartTimestamp property is added as a preferred source of the boot time. As a result, dnf needs-restarting --reboothint is now more reliable outside of containers on systems with a real-time clock running in local time.
The repository metadata is now stored directly in the requested directory when using dnf reposync
Before this update, the dnf reposync command did not respect the --norepopath option when downloading repository metadata. Consequently, this metadata was stored in a subdirectory named after the repository. With this update, the dnf reposync command respects the --norepopath option, and the repository metadata is stored directly in the requested directory.
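For example, a mirroring command of this kind now keeps the metadata directly under the target directory; the repository ID and target path below are examples, and the command requires a configured repository to run:

```shell
dnf reposync --repoid=rhel-9-for-x86_64-baseos-rpms \
    --download-metadata --norepopath -p /srv/mirror/baseos
```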
%patch N no longer applies patch number 0
Before this update, when you used the %patch N syntax, where N is the number of a patch, the syntax also applied patch number 0 (Patch0) in addition to the patch specified by N. With this update, the %patch N syntax has been fixed to apply only patch number N.
If you use the %patch directive without a patch number, as a shorthand for %patch 0, Patch0 is applied. However, a warning is printed that suggests using the explicit syntax, for example, %patch 0 or %patch -P 0 instead of %patch, to apply the zeroth patch.
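The distinction can be illustrated with a hypothetical spec file excerpt; the patch names are examples:

```
Patch0: fix-build.patch
Patch1: fix-docs.patch

%prep
%setup -q
# With this update, the following line applies only fix-docs.patch;
# previously it also incorrectly applied Patch0.
%patch -P 1
```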
5.4. Shells and command-line tools
lparstat -E now displays correct values for busy and idle states
Previously, some values were not taken into account when calculating the idle ticks. As a consequence, commands such as lparstat -E 4 4 displayed incorrect values for busy and idle states under the Normalized and Actual sections. With this update, the idle tick calculation has been fixed. As a result, the lparstat -E utility now displays the correct values for busy and idle states.
Jira:RHEL-61089[1]
Traceroute now defaults to IPv6
Previously, traceroute defaulted to IPv4 addresses even when IPv6 addresses were available. With this update, traceroute defaults to IPv6 when it is available.
5.5. Networking
Networking interface configuration persists after lldpad termination
Before this update, the networking interface configuration was removed when the Link Layer Discovery Protocol Agent Daemon (lldpad) was terminated by systemd or manually. This update fixes the lldpad source code so that the service does not reset the interface configuration after termination.
RHEL displays the correct number of CPUs on a system with 512 CPUs
The rps_default_mask configuration setting controls the default Receive Packet Steering (RPS) mechanism that directs incoming network packets towards specific CPUs. The flow_limit_cpu_bitmap parameter enables or disables flow control per CPU. With this fix, RHEL correctly displays the total number of CPUs along with these parameter values on the console.
The netdev device attributes are removed from ethtool output
Due to changes in the network device feature flags stored in the kernel, the following features no longer appear in the output of the ethtool -k command:
- tx-lockless
- netns-local
- fcoe-mtu
Note that these flags were not features, but rather device attributes or properties that could not be changed by using the ethtool -K command in any driver.
Jira:RHEL-59091[1]
NetworkManager can mitigate the impact of CVE-2024-3661 (TunnelVision) in VPN connection profiles
VPN connections rely on routes to redirect traffic through a tunnel. However, if a DHCP server uses the classless static route option (121) to add routes to a client's routing table, and the routes propagated by the DHCP server overlap with the VPN, traffic can be transmitted through the physical interface instead of the VPN. CVE-2024-3661 describes this vulnerability, which is also known as TunnelVision. As a consequence, an attacker can access traffic that the user expects to be protected by the VPN.
On RHEL, this problem affects LibreSwan IPSec and WireGuard VPN connections. Only LibreSwan IPSec connections with profiles in which both the ipsec-interface and vti-interface properties are undefined or set to no are not affected.
The CVE-2024-3661 document describes steps to mitigate the impact of TunnelVision by configuring VPN connection profiles to place the VPN routes in a dedicated routing table with a high priority. The steps work for both LibreSwan IPSec and WireGuard connections.
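As a sketch of that mitigation approach with NetworkManager: the connection name, table number, and rule priority below are example values, and the CVE-2024-3661 document remains the authoritative source for the exact steps:

```shell
# Place the VPN connection's routes in a dedicated routing table (75) and
# add a high-priority rule that directs lookups to that table.
nmcli connection modify company-vpn ipv4.route-table 75 ipv6.route-table 75
nmcli connection modify company-vpn ipv4.routing-rules "priority 32345 from 0.0.0.0/0 table 75"
nmcli connection up company-vpn
```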
The xdp-loader features command now works as expected
The xdp-loader utility was compiled against a previous version of libbpf. As a consequence, xdp-loader features failed with the following error:
Cannot display features, because xdp-loader was compiled against an old version of libbpf without support for querying features.
The utility is now compiled against the correct libbpf version. As a result, the command works as expected.
Mellanox ConnectX-5 adapter works in the DMFS mode
Previously, when using the Ethernet switch device driver model (switchdev) mode, the mlx5 driver failed if configured in the device managed flow steering (DMFS) mode on the ConnectX-5 adapter. Consequently, the following error message appeared:
mlx5_core 0000:5e:00.0: mlx5_cmd_out_err:780:(pid 980895): DELETE_FLOW_TABLE_ENTRY(0x938) op_mod(0x0) failed, status bad resource(0x5), syndrome (0xabe70a), err(-22)
As a result, when you update the firmware of the ConnectX-5 adapter to version 16.35.3006 or later, the error message no longer appears.
Jira:RHEL-9897[1]
5.6. File systems and storage
multipathd no longer crashes because of errors encountered by the ontap prioritizer
Before this update, multipathd crashed when it was configured to use the ontap prioritizer on an unsupported path, because the prioritizer works only with NetApp storage arrays. This failure occurred due to a bug in the prioritizer's error logging code, which caused it to overflow the error message buffer. With this update, the error logging code has been fixed, and multipathd no longer crashes because of errors encountered by the ontap prioritizer.
Jira:RHEL-58920[1]
Native NVMe multipathing no longer causes a memory leak when enable_foreign is set to monitor natively multipathed NVMe devices
Before this update, enabling native NVMe multipathing caused a memory leak if the enable_foreign configuration parameter was set to monitor natively multipathed NVMe devices. With this update, the memory leak in the multipathd monitoring code has been fixed. As a result, multipathd can now monitor natively multipathed NVMe devices without increasing memory usage.
Jira:RHEL-73413[1]
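The configuration in question lives in the defaults section of /etc/multipath.conf; a minimal sketch, assuming the "nvme" value selects the foreign library that monitors native NVMe multipath devices:

```
defaults {
    enable_foreign "nvme"
}
```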
RHEL installation program now discovers and uses iSCSI devices as boot devices on aarch64
Previously, the absence of the iscsi_ibft kernel module in RHEL installation programs running on aarch64 prevented the automatic discovery of iSCSI devices defined in firmware. As a result, these devices were neither automatically visible nor selectable as boot devices in the installer, even when added manually in the GUI.
This issue has been resolved by including the iscsi_ibft kernel module in newer aarch64 builds of RHEL. As a result, iSCSI devices are now automatically detected and available as boot options during installation.
Jira:RHEL-56135[1]
fstrim enabled by default on LUKS2 root in new ostree-based installations done by Anaconda
Previously, installing ostree-based systems, such as Image Mode, by using the ostreesetup or ostreecontainer Kickstart commands with LUKS2 encryption enabled on the / (root) mount point resulted in systems where fstrim was not enabled. This could cause issues such as unresponsive systems or broken file chooser dialogs. With this fix, fstrim (discards) is now enabled by default in the LUKS2 metadata on newly installed systems.
To fix this issue on existing installations, run the following command, where <luks device> is the path to the root LUKS2 device:
# cryptsetup --allow-discards --persistent refresh <luks device>
Systems with NVMe over TCP controllers no longer crash because of a data transfer failure
Before this update, on 64-bit ARM systems with NVMe over TCP storage controllers where the optimal IO size is bigger than PAGE_SIZE and an MD device uses a bitmap, the system could crash with the following error message:
usercopy: Kernel memory exposure attempt detected from SLUB object 'kmalloc-512' (offset 440, size 24576)!
With this update, the kernel checks that the final IO size does not exceed the bitmap length. As a result, the system no longer crashes.
Jira:RHEL-46615[1]
System boots correctly when adding an NVMe-FC device as a mount point in /etc/fstab
Previously, due to a known issue in the nvme-cli nvmf-autoconnect systemd services, systems failed to boot if Non-volatile Memory Express over Fibre Channel (NVMe-FC) devices were added as mount points in the /etc/fstab file. Consequently, the system entered emergency mode. With this update, a system boots without any issue when mounting an NVMe-FC device.
Jira:RHEL-8171[1]
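Such a device can now be listed in /etc/fstab as usual; a sketch of an entry, where the UUID and mount point are placeholders and _netdev is a common choice for fabric-attached storage so the mount waits for the network:

```
UUID=<uuid>  /mnt/nvme-fc  xfs  defaults,_netdev  0 0
```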
5.7. High availability and clusters
Resource constraints with expired rules no longer display
Before this update, the pcs constraint location config resources command displayed resource constraints with expired rules in the output. With this update, the command no longer displays constraints with expired rules unless you specify the --all option.
Jira:RHEL-46293[1]
Status of a cloned resource running with only one instance now displays properly
Before this update, when you queried the status of the instances of a cluster resource clone with only one running instance, the pcs status query command displayed an error message. With this update, the command reports the resource status properly.
Successful recovery of an interrupted Pacemaker remote connection
Before this update, when network communication between a Pacemaker remote node and the cluster node hosting its connection was interrupted during the TLS handshake portion of the initial connection, the connection sometimes blocked and could not be recovered on another cluster node. With this update, the TLS handshake is asynchronous, and a remote connection is successfully recovered elsewhere.
Cluster status of a disaster recovery site now displays correctly
Before this update, when you configured a disaster recovery site and ran the pcs dr status
command to display the status of the local and remote cluster sites, the command displayed an error instead of the cluster status. With this update, the cluster status of the local and remote sites displays correctly when you run this command.
Cluster alerts now take immediate effect
Before this update, when you configured an alert in a Pacemaker cluster, the alert did not immediately take effect without a cluster restart. With this update, the cluster detects updates to cluster alerts immediately.
5.8. Compilers and development tools
The glibc getenv function provides a limited form of thread safety
The glibc getenv function is not thread safe. Previously, if an application called the getenv, setenv, unsetenv, putenv, and clearenv functions at the same time, the application could terminate unexpectedly or getenv could return incorrect values. With this bug fix, the getenv function provides a limited form of thread safety. As a result, applications no longer crash if they call these functions concurrently. Additionally, getenv returns only environment values that have been set by using setenv or that were present at the start of the program, and reports previously set environment variables as unset only if there has been a potentially unordered unsetenv call.
This fix is not applicable if you directly modify the environ array.
The pcp package now sets the correct owner and group for the /var/lib/pcp/config/pmie/config.default file
Previously, if you installed the pcp package for the first time, the package installation process incorrectly set the ownership of the /var/lib/pcp/config/pmie/config.default file to root:root. As a consequence, the pmie service failed to start because the pmieconf utility, which is executed by this service, requires pcp:pcp ownership of this file. When the service failed to start, it logged the following error in the /var/log/pcp/pmie/pmie_check.log file:
Warning: no write access to pmieconf file "/var/lib/pcp/config/pmie/config.default", skip reconfiguration
With this update, the pcp package sets the correct ownership of the /var/lib/pcp/config/pmie/config.default file during the first installation and fixes it on existing installations. As a result, the pmie service starts correctly.
pcp-xsos provides a rapid summary of a system
The large number of configurable components in the Performance Co-Pilot (PCP) toolkit means that you often use several different tools to understand a system's performance. Many of these tools require additional processing time when you work with large, compressed time series data volumes. This update adds the pcp-xsos utility to PCP, which can perform a fast, high-level analysis of an individual point in time from a PCP archive. As a result, pcp-xsos can help you gain insight into high-level performance issues and also identify further targeted performance analysis tasks.
Jira:RHEL-30590[1]
iconv in-place conversions no longer result in corrupted output
The iconv utility can write the converted output to the same file. Previously, when you performed an in-place conversion and the source file exceeded a certain size, iconv overwrote the file before the processing was completed. Consequently, the file was corrupted. With this update, the utility creates a temporary file if the source and the output file are the same, and overwrites the source file only after the conversion is complete. As a result, in-place conversions no longer result in corrupted files.
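A minimal sketch of such a conversion, assuming GNU iconv from glibc; the file names and encodings are examples. A distinct output file is used here so the sketch runs on any glibc version, but with this update the same path can safely be passed as both input and output:

```shell
# Write a short ISO-8859-1 encoded file ('caf\351' is "café" in Latin-1),
# then convert it to UTF-8 with an explicit output file.
printf 'caf\351\n' > sample-latin1.txt
iconv -f ISO-8859-1 -t UTF-8 -o sample-utf8.txt sample-latin1.txt
cat sample-utf8.txt
```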
Improvements in the glibc stub resolver and getaddrinfo() API calls
Previously, the glibc stub resolver and getaddrinfo() API calls could result in longer than expected delays in the following cases:
- The server was inaccessible.
- The server refused the query.
- The network packet with the query was lost.
With this update, the delays in failure cases are reduced, and a new resolver option, RES_STRICTERR, was added. With this option, getaddrinfo() API calls report more DNS errors. Additionally, you can now negate options in configuration files by prefixing them with -.
Jira:RHEL-50662[1]
The glibc exit function no longer crashes on simultaneous calls
Previously, multiple simultaneous calls to the exit function, and calls to this function with simultaneous <stdio.h> stream operations, were not synchronized. As a consequence, applications could terminate unexpectedly and data streams could be corrupted if a concurrent exit function call occurred in a multi-threaded application. With this update, exit now locks <stdio.h> streams when flushing them, and simultaneous calls to exit and quick_exit select one call to proceed. As a result, applications no longer crash in this scenario.
As a consequence of the fix, applications that perform a blocking read operation on a <stdio.h> stream, such as the getchar function, or that hold a stream lock taken with the flockfile function, cannot exit until the read operation returns or the lock is released. Such blocking is required by the POSIX standard.
Jira:RHEL-65358[1]
The implementation of POSIX thread condition variables in glibc to wake waiting threads has been improved
Previously, a defect in the POSIX thread condition variable implementation could allow a pthread_cond_signal() API call to fail to wake a waiting thread. Consequently, a thread could wait indefinitely for the next signal or broadcast. With this bug fix, the implementation of POSIX thread condition variables includes a sequence-relative algorithm that avoids the missed signal condition and provides stronger guarantees that waiting threads are woken correctly.
Boost.Asio no longer throws an exception when reusing a moved TCP socket
Previously, if an application used the Boost.Asio library and reused a moved TCP socket, the application failed with a bad_executor exception. This update fixes the issue, and the Boost.Asio library no longer fails in the described scenario.
Jira:RHEL-67973[1]
5.9. Identity Management
Migrating an IdM deployment no longer results in duplicate HBAC rules
Previously, migrating from one Identity Management (IdM) deployment to another by using the ipa-migrate utility sometimes led to duplicate host-based access control (HBAC) rules on the destination server. Consequently, the "allow_all" and "allow_systemd-user" HBAC rules appeared twice when running the "ipa hbacrule-find" command on that server.
The problem has been fixed and migrating IdM deployments no longer results in duplicate HBAC rules.
Bypassing two-factor authentication by using an expired token is no longer possible
Previously, it was possible to bypass two-factor authentication by creating an OTP token with a specific end-validity period.
In cases where two-factor authentication is enforced, a user without an OTP token could use their password to log in once and configure an OTP token. Subsequently, they would be required to use both their password and the OTP token for authentication. However, if a user created an OTP token with an expired end-validity date, IdM would incorrectly fall back to password-only authentication, effectively bypassing two-factor authentication. This was due to IdM not differentiating between non-existent and expired OTP tokens.
With this update, IdM now correctly differentiates between these scenarios. Consequently, two-factor authentication is now correctly enforced, preventing this bypass.
When starting an instance with a sub suffix, an incorrect error is no longer logged
Before this update, when starting an instance with a sub suffix, you could see the following incorrect message in the error log:
[time_stamp] - ERR - id2entry - Could not open id2entry err 0
[time_stamp] - ERR - dn2entry_ext - The dn "dc=example,dc=com" was in the entryrdn index, but it did not exist in id2entry of instance userRoot.
The root cause of the message was that during backend initialization, a subtree search was performed on the backend to determine if the subtree contained smart referrals. In addition, the issue had a minor performance impact on search operations for the first ten minutes after the server started.
With this update, the incorrect message is no longer logged and no performance impact occurs when the server starts.
Jira:RHEL-71218[1]
ldapsearch now respects the NETWORK_TIMEOUT setting as expected
Previously, an ldapsearch command ignored the timeout when the server was unreachable and, as a consequence, the search hung indefinitely instead of timing out. With this update, the logic error in TLS handling has been fixed by adjusting connection retries and socket options.
As a result, the ldapsearch command no longer ignores the NETWORK_TIMEOUT setting and returns the following error when the timeout is reached:
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1).
Jira:RHEL-78297[1]
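The NETWORK_TIMEOUT option is typically set in the client-side ldap.conf; a minimal sketch, where the URI, base DN, and timeout value are examples:

```
# /etc/openldap/ldap.conf (client side)
URI             ldap://ldap.example.com
BASE            dc=example,dc=com
NETWORK_TIMEOUT 10
```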
A race condition with paged result searches no longer closes the connection with a T3 error code
Previously, Directory Server did not use the proper thread protection when checking the connection's paged result data for a timeout event. As a consequence, the paged result timeout value changed unexpectedly and triggered a false timeout when a new operation arrived. This caused a timeout error, and the connection was closed with the following T3 error code:
The server closed the connection because the specified time limit for a paged result search has been exceeded.
With this update, the proper thread protection is used, and paged result searches no longer close the connection with a T3 error code.
Jira:RHEL-76019[1]
Directory Server no longer fails when reindexing a VLV index with the sort attribute indexed with an extended matching rule
Before this update, a race condition was triggered while reindexing a virtual list view (VLV) index that had the sort attribute indexed with an extended matching rule. As a result, memory leaked and Directory Server failed. With this update, Directory Server serializes the VLV index key generation and no longer fails.
OpenLDAP library no longer fails when trying to free resources
Before this update, the OpenLDAP library tried to release memory by using the SSL_CTX_free() function in its destructor even when an application had already cleaned up these resources by invoking the OPENSSL_cleanup() function, either directly or through the atexit() function. As a consequence, users experienced failures or undefined behavior when the invalid SSL_CTX_free() call tried to release already-cleaned-up SSL context resources.
With this update, a safe cleanup function has been added to skip the SSL context cleanup in the OpenLDAP destructor. As a result, the SSL context now leaks if not explicitly freed, ensuring a stable application shutdown.
High connection load no longer overloads a single thread in Directory Server
Before this update, even though Directory Server supports multiple listening threads, the first incoming connections were assigned to the same listener thread, overloading it. As a result, some requests had a high wtime and poor performance. With this update, Directory Server distributes the connection load across all listening threads.
VLV index cache now matches the VLV index as expected when using LMDB
Before this update, the virtual list view (VLV) index cache did not match the VLV index itself on instances with Lightning Memory-Mapped Database (LMDB), which caused the VLV index to return invalid values. With this update, the VLV index cache matches the VLV index, and the correct values are returned.
Jira:RHEL-64438[1]
The online backup no longer fails
Before this update, the online backup task could hang sporadically because of an incorrect lock order. With this update, the online backup works as expected and no longer fails.
Jira:RHEL-67005[1]
cleanAllRUV no longer blocks itself
Before this update, when you ran the cleanAllRUV task after a replica was deleted from the replication topology, the task tried to update the replication configuration entry while the same task was purging the replication changelog of the old replica ID (rid). As a result, the server was unresponsive.
With this update, cleanAllRUV cleans up the replication configuration only after the changelog purging is complete.
Reindexing no longer fails when an entry RDN has the same value as the suffix DN
Before this update, if an entry's relative distinguished name (RDN) had the same value as the suffix distinguished name (DN) in the directory, the entryrdn index became broken. As a result, Directory Server could perform slow search requests, return invalid results, and write alarming messages to the error log.
With this update, reindexing works as expected.
Jira:RHEL-74158[1]
The Account Policy plug-in now uses a proper flag for an update in a replication topology
Before this update, the Account Policy plug-in did not use the proper flag for an update. As a result, in a replication topology, the Account Policy plug-in updated the login history, but this update failed on a consumer server, logging the following error message:
ERR - acct_update_login_history - Modify error 10 on entry
With this update, the internal update succeeds and no errors are logged.
Jira:RHEL-74168[1]
On a supplier with LMDB, an offline import no longer generates duplicates of nsuniqueid
Before this update, an offline import of basic entries (with no replicated data) on a supplier with Lightning Memory-Mapped Database (LMDB) generated duplicates of the nsuniqueid operational attribute that must be unique. As a result, problems with replication occurred. With this update, the offline import no longer generates duplicates on the supplier.
Jira:RHEL-78344[1]
TLS 1.3 can now be used to connect to an LDAP server running in FIPS mode
Before this update, when you tried to explicitly set TLS 1.3 when connecting to an LDAP server in FIPS mode, the TLS version that was used remained 1.2. As a result, an attempt to connect to the LDAP server by using TLS 1.3 failed. With this update, the upper limit of the TLS version in FIPS mode was changed to 1.3, and an attempt to connect to an LDAP server with TLS 1.3 no longer fails.
Directory Server backup no longer fails after the previous unsuccessful attempt
Before this update, if an initial backup attempt was unsuccessful, the next Directory Server backup failed because backends stayed busy trying to complete the previous backup. As a result, an instance restart was required. With this update, Directory Server backup no longer fails after a previous unsuccessful attempt, and an instance restart is no longer needed.
Failed replication between suppliers when using certificate-based authentication now has a more descriptive error message
Before this update, when a required CA certificate file was missing and a TLS connection setup failed during replication between suppliers, the error message was too unclear to identify the problem. With this update, when a TLS setup error occurs, Directory Server logs more detailed error information. It now includes messages such as No such file or directory for a missing certificate, making the problem easier to solve.
Jira:RHEL-65662[1]
dsconf config replace can now handle multivalued attributes as expected
Before this update, the dsconf config replace command could set only one value for an attribute, such as nsslapd-haproxy-trusted-ip. With this release, you can set several values by using the following command:
# dsconf <instance_name> config replace nsslapd-haproxy-trusted-ip=<ip_address_1> nsslapd-haproxy-trusted-ip=<ip_address_2> nsslapd-haproxy-trusted-ip=<ip_address_3>
Directory Server now returns the correct set of entries when compound filters with OR (|
) and NOT (!
) operators are used
Before this update, when LDAP searches with OR (|
) and NOT (!
) operators were used, Directory Server did not return the correct set of entries. The reason was that Directory Server incorrectly evaluated access rights and entry matching and performed these steps in one phase. With this update, Directory Server performs access rights evaluation and entry matching in two separated phases and searches with compound filters with OR (|
) and NOT (!
) operators return the correct set of entries.
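To illustrate the filter semantics that this fix restores, the following minimal Python sketch evaluates a compound filter combining the OR and NOT operators over hypothetical in-memory entries. The entries and attribute names are invented for illustration; this is not Directory Server code.

```python
# Sketch of LDAP-style compound filter semantics with OR (|) and NOT (!),
# evaluated over hypothetical in-memory entries. Illustration only; not
# Directory Server code.
entries = [
    {"cn": "alice", "department": "sales"},
    {"cn": "bob", "department": "engineering"},
    {"cn": "carol", "department": "sales"},
]

def matches_or_not(entry):
    """Equivalent of (|(cn=alice)(!(department=sales))): match if cn is
    alice OR the entry is NOT in the sales department."""
    return entry.get("cn") == "alice" or entry.get("department") != "sales"

result = [e["cn"] for e in entries if matches_or_not(e)]
print(result)  # alice matches the first term, bob matches the negation
```

With the fix, Directory Server returns exactly this kind of result set: entries matching either branch of the OR, including those matched only through the negation.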
Jira:RHEL-65776[1]
Consumer status in a replication agreement on a supplier is displayed correctly after the Directory Server restart
Before this update, on a supplier in a replication topology, the status of the consumer in a replication agreement was reset during the Directory Server restart. As a result, the displayed consumer initialization time and the replication status were incorrect.
With this update, the replication agreement entry displays the correct status of the consumer and when it was initialized.
5.10. Red Hat Enterprise Linux System Roles Copy linkLink copied to clipboard!
The new sshd_allow_restart
variable enables the sshd
service to be restarted when needed
Before this update, the sshd
RHEL system role was not restarting the sshd
service on a managed node when required. As a consequence, some changes related to configuration files from the /etc/sysconfig/ directory and environment files were not applied. To fix the problem, the sshd_allow_restart
(boolean, defaults to true
) variable has been introduced to restart the sshd
service on the managed node when necessary. As a result, the sshd
RHEL system role now correctly applies all changes and ensures the sshd
service actually uses those changes.
The podman
RHEL system role no longer fails to process secrets when using the run_as_user
variable
Before this update, the podman
RHEL system role failed to process secrets that were specified for a particular user with the run_as_user
variable, because required user information was missing. The issue has been fixed, and the podman
RHEL system role now correctly handles secrets that are specified for a particular user with the run_as_user
variable.
The firewall
RHEL system role reports changed: True
when there were changes applied
During playbook processing, the firewall_lib.py
module from the firewall
RHEL system role replaced the changed
value with False
when using the interface
variable in the playbook while the specified networking interface already existed on the managed node. As a consequence, firewall
reported the changed: False
message even when changes had been made, and the contents of the forward_port
variable were not saved as permanent. With this update, the firewall
RHEL system role ensures the changed
value is not reset to False
. As a result, the role reports changed: True
when there are changes, and forward_port
contents are saved as persistent.
The certificate
RHEL system role correctly reports an error when an issued certificate is missing the private key
When the private key of a certificate was removed, the certmonger
utility on a managed node entered an infinite loop. Consequently, the certificate
RHEL system role on the control node became unresponsive when re-issuing a certificate that had the private key deleted. With this update, the certificate
RHEL system role stops processing and provides an error message with instructions to remedy the problem. As a result, certificate
no longer becomes unresponsive in the described scenario.
Jira:RHEL-13333[1]
The postgresql
RHEL system role no longer fails to set the paths to a TLS certificate and private key
The postgresql_cert_name
variable of the postgresql
RHEL system role defines the base path, without a suffix, to the TLS certificate and private key on the managed node. Before this update, the role did not define internal variables for the certificate and private key. As a consequence, if you set postgresql_cert_name
, the Ansible task failed with the following error message:
The task includes an option with an undefined variable. The error was: '__pg_server_crt' is undefined. '__pg_server_crt' is undefined
With this update, the role correctly defines these internal variables, and the task sets the paths to the certificate and private key in the PostgreSQL configuration files.
The network
RHEL system role prioritizes permanent MAC address matching
When all of the following conditions were met:
- A network connection specified both an interface name and a media access control (MAC) address for configuring a parent and a virtual local area network (VLAN) connection.
- The physical interface had the same permanent and current MAC address.
- The networking configuration was applied multiple times.
The network
RHEL system role compared the user-specified MAC address against either the permanent MAC or the current MAC address from the sysfs
virtual filesystem. The role then treated a match with the current MAC as valid even if the interface name was different from what the user specified. As a consequence, the "no such interface exists" error occurred. With this update, the link_info_find()
method prioritizes matching links by permanent MAC address when it is valid and available. If the permanent MAC is unavailable (None or "00:00:00:00:00:00"), the method falls back to matching the current MAC address. As a result, this change improves the robustness of MAC address matching by ensuring that permanent addresses are prioritized while maintaining a reliable fallback mechanism for interfaces with no permanent address.
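The matching order described above can be sketched as follows. The link dictionaries and key names are hypothetical stand-ins for illustration, not the role's actual data structures or the real link_info_find() implementation.

```python
# Sketch of the described matching order: prefer the permanent MAC address
# when it is valid, and fall back to the current MAC otherwise. Hypothetical
# data structures; not the role's real link_info_find() code.
INVALID_PERM_MACS = (None, "00:00:00:00:00:00")

def find_link(links, wanted_mac):
    # First pass: match by permanent MAC when it is valid and available.
    for link in links:
        perm = link.get("perm_address")
        if perm not in INVALID_PERM_MACS and perm.lower() == wanted_mac.lower():
            return link["name"]
    # Fallback: match by the current MAC address.
    for link in links:
        if link.get("address", "").lower() == wanted_mac.lower():
            return link["name"]
    return None

links = [
    {"name": "eth0", "perm_address": "00:11:22:33:44:55", "address": "aa:bb:cc:dd:ee:ff"},
    {"name": "eth1", "perm_address": "00:00:00:00:00:00", "address": "00:11:22:33:44:55"},
]
print(find_link(links, "00:11:22:33:44:55"))  # eth0: its permanent MAC wins
```

Note that eth1's current MAC also equals the wanted address, but eth0 is selected because a valid permanent MAC takes priority, which is exactly the behavior the fix introduces.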
The ansible-doc
command provides the documentation again for the redhat.rhel_system_roles
collection
Before this update, the vpn
RHEL system role did not include documentation for the internal Ansible filter vpn_ipaddr
. Consequently, using the ansible-doc
command to list documentation for the redhat.rhel_system_roles
collection would trigger an error. With this update the vpn
RHEL system role includes the correct documentation in the correct format for the vpn_ipaddr
filter. As a result, ansible-doc
does not trigger any error and provides the correct documentation.
The storage
RHEL system role correctly resizes logical volumes
The physical volume was not resized to its maximum size when using the grow_to_fill
feature in the storage
RHEL system role to automatically resize LVM physical volumes after resizing the underlying virtual disks. Consequently, not all of the free storage space was available when resizing existing logical volumes or creating new ones, and the storage
RHEL system role failed. This update fixes the problem in the source code to ensure the role always resizes the physical volumes to their maximum size when using grow_to_fill
.
Jira:RHEL-73244[1]
The storage
RHEL system role now runs as expected on RHEL managed nodes with VDO
Before this update, the blivet
module required the kmod-kvdo
package on RHEL managed nodes that use Virtual Data Optimizer (VDO). However, kmod-kvdo
failed to install, and as a consequence, the storage
RHEL system role also failed. With this fix, kmod-kvdo
is no longer a required package on RHEL managed nodes. As a result, storage
no longer fails when managed nodes with RHEL use VDO.
Jira:RHEL-82160[1]
5.11. Virtualization Copy linkLink copied to clipboard!
High-memory VMs now report their state correctly
Previously, live migrating a virtual machine (VM) that was using 1 TB of memory or more caused libvirt
to report the state of the VM incorrectly. This problem has been fixed and the status of live-migrated VMs with high amounts of memory is now reported accurately by libvirt
.
Jira:RHEL-28819[1]
Network boot for VMs now works correctly without an RNG device
Previously, when a virtual machine (VM) did not have an RNG device configured and its CPU model did not support the RDRAND feature, it was not possible to boot the VM from the network. With this update, the problem has been fixed, and VMs that do not support RDRAND can boot from the network even without an RNG device configured.
Note, however, that to increase security when booting from the network, adding an RNG device is highly encouraged for VMs that use a CPU model that does not support RDRAND.
Jira:RHEL-58631, Jira:RHEL-65725
vGPU live migration no longer reports an excessive amount of dirty pages
Previously, when performing virtual machine (VM) live migration with an attached NVIDIA vGPU, an excessive amount of dirty pages could have been incorrectly reported during the migration. This problem could have increased the required VM downtime during the migration and the migration could have potentially failed.
With this update, the underlying problem has been fixed and the correct amount of dirty pages is reported during the migration, which can reduce the required VM downtime during vGPU live migration in some cases.
Jira:RHEL-64307[1]
vGPU live migration no longer fails if vGPU driver versions are different on source and destination hosts
Previously, virtual machine (VM) live migration with an attached NVIDIA vGPU would fail if driver versions on source and destination hosts were different.
With this update, the underlying code has been fixed and live migrating VMs with NVIDIA vGPUs now works correctly even if the driver versions are different on the source and destination host.
Jira:RHEL-33795[1]
Virtual machines no longer incorrectly report an AMD SRSO vulnerability
Previously, virtual machines (VMs) running on a RHEL 9 host with the AMD Zen 3 and 4 CPU architecture incorrectly reported a vulnerability to a Speculative Return Stack Overflow (SRSO) attack.
The problem was caused by a missing cpuid flag, which has been fixed with this update. Any AMD SRSO vulnerability reported by a VM should now be treated as correct.
Jira:RHEL-26152[1]
The installation program shows the expected system disk to install RHEL on VM
Previously, when installing RHEL on a VM using virtio-scsi
devices, it was possible that these devices did not appear in the installation program because of a device-mapper-multipath
bug. Consequently, during installation, if some devices had a serial set and some did not, the multipath
command was claiming all the devices that had a serial. Due to this, the installation program was unable to find the expected system disk to install RHEL in the VM.
With this update, multipath
correctly sets the devices with no serial as having no World Wide Identifier (WWID) and ignores them. On installation, multipath
only claims devices that multipathd
uses to bind a multipath device, and the installation program shows the expected system disk to install RHEL in the VM.
Jira:RHELPLAN-66975[1]
Windows guests boot more reliably after a v2v conversion on hosts with AMD EPYC CPUs
After using the virt-v2v
utility to convert a virtual machine (VM) that uses Windows 11 or Windows Server 2022 as the guest OS, the VM previously failed to boot. This occurred on hosts that use AMD EPYC series CPUs. Now, the underlying code has been fixed and VMs boot as expected in the described circumstances.
Jira:RHELPLAN-147926[1]
nodedev-dumpxml
lists attributes correctly for certain mediated devices
Before this update, the nodedev-dumpxml
utility did not list attributes correctly for mediated devices that were created by using the nodedev-create
command. This has been fixed, and nodedev-dumpxml
now displays the attributes of the affected mediated devices properly.
Jira:RHELPLAN-139536[1]
virtiofs
devices can now be attached after restarting virtqemud
or libvirtd
Previously, restarting the virtqemud
or libvirtd
services prevented virtiofs
storage devices from being attached to virtual machines (VMs) on your host. This bug has been fixed, and you can now attach virtiofs
devices in the described scenario as expected.
Jira:RHELPLAN-119912[1]
blob
resources now work correctly for virtio-gpu
on IBM Z
Previously, the virtio-gpu
device was incompatible with blob
memory resources on IBM Z systems. As a consequence, if you configured a virtual machine (VM) with virtio-gpu
on an IBM Z host to use blob
resources, the VM did not have any graphical output.
With this update, virtio
devices have an optional blob
attribute. Setting blob
to on
enables the use of blob
resources in the device. This prevents the described problem in virtio-gpu
devices, and can also accelerate the display path by reducing or eliminating copying of pixel data between the guest and host. Note that blob
resource support requires QEMU version 6.1 or later.
Reinstalling virtio-win
drivers no longer causes DNS configuration to reset on the guest
In virtual machines (VMs) that use a Windows guest operating system, reinstalling or upgrading virtio-win
drivers for the network interface controller (NIC) previously caused DNS settings in the guest to reset. As a consequence, your Windows guest in some cases lost network connectivity.
With this update, the described problem has been fixed. As a result, if you reinstall or upgrade from the latest version of virtio-win
, the problem no longer occurs. Note, however, that upgrading from a prior version of virtio-win
will not fix the problem, and DNS resets might still occur in your Windows guests.
Jira:RHEL-1860[1]
VNC viewer correctly initializes a VM display after live migration of ramfb
This update enhances the ramfb
framebuffer device, which you can configure as a primary display for a virtual machine (VM). Previously, ramfb
was unable to migrate, which resulted in VMs that use ramfb
showing a blank screen after live migration. Now, ramfb
is compatible with live migration. As a result, you see the VM desktop display when the migration completes.
5.12. Supportability Copy linkLink copied to clipboard!
The sos clean
command run on an existing archive no longer fails
Previously, an existing archive could not be cleaned by running sos clean
due to a regression in the sos
code that incorrectly detected the root directory of a tarball and prevented it from cleaning data. As a consequence, sos clean
running on an existing sosreport tarball did not clean anything within the tarball. This update adds proper detection of the root directory in the reordered tarball content. As a result, sos clean
performs sensitive data obfuscation on an existing sosreport tarball correctly.
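Detecting the root directory of a tarball regardless of member ordering can be sketched as follows. This is a generic illustration using Python's tarfile module, not the actual sos implementation.

```python
import io
import tarfile

def tarball_root(tar_bytes):
    """Return the single top-level directory shared by all members, or None.
    Generic illustration of root-directory detection; not the sos code."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        tops = {m.name.split("/", 1)[0] for m in tar.getmembers()}
    return tops.pop() if len(tops) == 1 else None

# Build a small in-memory tarball whose members are not guaranteed to list
# the root directory first, mimicking a reordered sosreport archive.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name in ("sosreport-host/var/log/messages", "sosreport-host/etc/hosts"):
        data = b"example"
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

print(tarball_root(buf.getvalue()))  # sosreport-host
```

Deriving the root from the set of all members' top-level path components, rather than from the first member encountered, is robust to the member reordering that triggered the original regression.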
The sos utility no longer collects the user’s .ssh
configuration
Previously, the sos
utility collected the .ssh
configuration by default from a user. As a consequence, this could break systems where user home directories were mounted by using the automount utility. With this update, the sos
utility no longer collects the .ssh
configuration.
The sos
now obfuscates proxy passwords in several places
Previously, the sos
utility did not obfuscate passwords in proxy URLs, for example in HTTP_PROXY
and HTTPS_PROXY
in the /etc/environment
file. As a consequence, the sos
utility could collect sosreports that contained customer proxy passwords unless they were cleaned up before submission, which might pose a security concern. With this update, several such places have been identified and fixed to obfuscate the passwords.
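The kind of obfuscation involved can be illustrated with a short Python sketch. The regex and the sample value are a generic illustration of masking the credential portion of a proxy URL; this is not the actual sos cleaner code.

```python
import re

def mask_proxy_password(line):
    """Mask the password in user:password@host proxy URLs, e.g. in an
    HTTP_PROXY value. Generic illustration; not the sos obfuscation code."""
    # Keep "://user:" and "@", replace the password between them.
    return re.sub(r"(://[^:/@]+:)[^@]+(@)", r"\1********\2", line)

line = "HTTP_PROXY=http://admin:s3cret@proxy.example.com:3128"
print(mask_proxy_password(line))
```

The masked line retains the proxy host, port, and user name, so the report stays useful for debugging while the credential itself is removed.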
Red Hat continually improves the sos utility to enhance obfuscation capabilities; however, the complete removal of sensitive information is not guaranteed. Users are responsible for reviewing and manually cleaning up any confidential data before sharing it with Red Hat.
Jira:RHEL-67712[1]