Chapter 5. Kernel Module Management Operator release notes
Use the release notes to learn what is new or changed in Kernel Module Management (KMM).
5.1. Release notes for Kernel Module Management Operator 2.2
5.1.1. New features
- KMM now uses the CRI-O container engine to pull container images in the worker pod instead of making HTTP calls directly from the worker container. For more information, see Example Module CR.
- The Kernel Module Management (KMM) Operator images are now based on `rhel-els-minimal` container images instead of the `rhel-els` images. This change results in a greatly reduced image footprint while still maintaining FIPS compliance.
- In this release, the firmware search path has been updated to copy the contents of the specified path into the path specified in `worker.setFirmwareClassPath` (default: `/var/lib/firmware`). For more information, see Example Module CR.
- For each node running a kernel matching the regular expression, KMM now checks whether the container image includes a tag or a digest. If you have not specified a tag or digest, the validation webhook returns an error and does not apply the module. For more information, see Example Module CR.
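As a minimal sketch of the tag requirement (the module name, namespace, and image are illustrative, not from this document), a `Module` CR passes the validation webhook when its kmod image is pinned to an explicit tag or digest:

```yaml
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: my-kmod            # hypothetical module name
  namespace: my-kmod-ns    # hypothetical namespace
spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: my_kmod
      kernelMappings:
        - regexp: '^.+$'
          # An explicit tag (or an @sha256:... digest) is required;
          # an untagged image is rejected by the validation webhook.
          containerImage: quay.io/example/my-kmod:v1.0.0
  selector:
    node-role.kubernetes.io/worker: ""
```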
5.2. Release notes for Kernel Module Management Operator 2.3
5.2.1. New features
- In this release, KMM uses version 1.23 of the Golang programming language to ensure test continuity for partners.
- You can now schedule KMM pods by defining taints and tolerations. For more information, see Using tolerations for kernel module scheduling.
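As a hedged sketch of the new scheduling support (field placement per the linked section; the taint key and values are illustrative), tolerations can be added to a `Module` spec so that its pods schedule onto tainted nodes:

```yaml
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: my-kmod            # hypothetical module name
spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: my_kmod
      kernelMappings:
        - regexp: '^.+$'
          containerImage: quay.io/example/my-kmod:v1.0.0
  # Toleration matching a custom taint on the target nodes
  # (key and value are illustrative).
  tolerations:
    - key: "example.com/gpu"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  selector:
    node-role.kubernetes.io/worker: ""
```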
5.3. Release notes for Kernel Module Management Operator 2.4
5.3.1. New features and enhancements
- In this release, you can configure a Kernel Module Management (KMM) module to use the in-tree kernel driver instead of loading an out-of-tree driver, and to run only the device plugin. For more information, see Using in-tree modules with the device plugin.
- In this release, KMM configurations persist after cluster and KMM Operator upgrades and redeployments of KMM. In earlier releases, a cluster or KMM upgrade, or any other action that redeploys KMM, such as changing a non-default configuration like the firmware path, could require you to reconfigure KMM. KMM configurations now remain persistent regardless of such actions. For more information, see Configuring the Kernel Module Management Operator.
- Improvements have been added to KMM so that GPU Operator vendors do not need to replicate KMM functionality in their code, but can instead use KMM as is. This change greatly reduces the Operators' code size and improves their tests and reliability.
- In this release, KMM no longer uses direct HTTP(S) requests to check whether a kmod image exists. Instead, CRI-O is used internally to check for the images. This removes the need to access container image registries directly over HTTP(S) and to manually handle tasks such as reading `/etc/containers/registries.conf` for mirroring configuration, accessing the image cluster resource for TLS configuration, mounting the CAs from the node, and maintaining your own cache in hub and spoke environments.
- The KMM and KMM-hub Operators have been assigned the "Meets Best Practices" label in the Red Hat Catalog.
- You can now install KMM on compute nodes, if needed. Previously, when it was not possible to deploy workloads on the control-plane nodes, the Kernel Module Management Operator could need further configuration because the compute nodes do not have the `node-role.kubernetes.io/control-plane` or `node-role.kubernetes.io/master` labels. An internal code change has resolved this issue.
- In this release, the heartbeat filter for the NMC reconciler has been updated to filter the following events on nodes, while still filtering heartbeats:
  - `node.spec`
  - `metadata.labels`
  - `status.nodeInfo`
  - `status.conditions[]` (`NodeReady` only)
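The in-tree driver option in this release can be sketched as follows, under the assumption that omitting the module loader section and defining only the device plugin is the documented mechanism (see Using in-tree modules with the device plugin); all names and the image are illustrative:

```yaml
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: in-tree-example       # hypothetical module name
spec:
  # No moduleLoader section: the in-tree driver is used as-is
  # and KMM runs only the device plugin.
  devicePlugin:
    container:
      image: quay.io/example/device-plugin:v1.0.0   # hypothetical image
  selector:
    node-role.kubernetes.io/worker: ""
```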
5.3.2. Notable technical changes
- In this release, the preflight validation resource in the cluster has been modified. You can use the preflight validation to verify kernel modules to be installed on the nodes after cluster upgrades and possible kernel upgrades. Preflight validation also reports on the status and progress of each module in the cluster that it attempts or has attempted to validate. For more information, see Preflight validation for Kernel Module Management (KMM) Modules.
- When creating a kmod image, both the `.ko` kernel module files and the `cp` binary must be included. The `cp` binary is required for copying files during the image loading process. For more information, see Creating a kmod image.
- The `capabilities` field that refers to the Operator maturity level has been changed from `Basic Install` to `Seamless upgrades`. `Basic Install` indicates that the Operator does not have an upgrade option. This is not the case for KMM, where seamless upgrades are supported.
5.3.3. Bug fixes
Webhook deployment has been renamed from `webhook-server` to `webhook`.
- Cause: Generating files with `controller-gen` produced a service called `webhook-service` that is not configurable. In addition, when deploying KMM with Operator Lifecycle Manager (OLM), OLM deploys a service for the webhook named after the deployment with a `-service` suffix.
- Consequence: Two services were generated for the same deployment: one generated by `controller-gen` and added to the bundle manifests, and the other created by OLM.
- Fix: Because the deployment is called `webhook`, OLM now finds the already existing service called `webhook-service` in the cluster.
- Result: A second service is no longer created.
Using the `imageRepoSecret` object in conjunction with DTK as the image stream results in an `authorization required` error.
- Cause: On the Kernel Module Management (KMM) Operator, when you set the `imageRepoSecret` object in the KMM module and the build's resulting container image is defined to be stored in the cluster's internal registry, the build fails to push the final image and generates an `authorization required` error.
- Consequence: The KMM Operator does not work as expected.
- Fix: When the `imageRepoSecret` object is user-defined, it is used as both a pull and push secret by the build process. To support using the cluster's internal registry, you must add the authorization token for that registry to the `imageRepoSecret` object. You can obtain the token from the "build" service account of the KMM module's namespace.
- Result: The KMM Operator works as expected.
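The fix for internal-registry builds can be sketched as follows (the secret, ConfigMap, and module names are illustrative); the referenced secret must contain the internal registry authorization token described above:

```yaml
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: my-kmod               # hypothetical module name
  namespace: my-kmod-ns       # hypothetical namespace
spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: my_kmod
      kernelMappings:
        - regexp: '^.+$'
          # Built image stored in the cluster's internal registry.
          containerImage: image-registry.openshift-image-registry.svc:5000/my-kmod-ns/my-kmod:v1.0.0
          build:
            dockerfileConfigMap:
              name: my-kmod-dockerfile   # hypothetical ConfigMap
  # Used as both pull and push secret by the build process; must
  # include the token for the internal registry.
  imageRepoSecret:
    name: my-registry-secret             # hypothetical secret
  selector:
    node-role.kubernetes.io/worker: ""
```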
Creating or deleting the image or creating an MCM module does not load the module on the spoke.
- Cause: In a hub and spoke environment, when creating or deleting the image in the registry, or when creating a `ManagedClusterModule` (MCM), the module on the spoke cluster is not loaded.
- Consequence: The module on the spoke is not created.
- Fix: Remove the cache package and image translation from the hub and spoke environment.
- Result: The module on the spoke is created the second time the MCM object is created.
KMM cannot pull images from a private registry while doing in-cluster builds.
- Cause: The Kernel Module Management (KMM) Operator cannot pull images from a private registry while doing in-cluster builds.
- Consequence: Images in private registries that are used in the build process cannot be pulled.
- Fix: The `imageRepoSecret` object configuration is now also used in the build process. The specified `imageRepoSecret` object must include all registries that are being used.
- Result: You can now use private registries when doing in-cluster builds.
KMM worker pod is orphaned when deleting a module with a container image that cannot be pulled.
- Cause: A Kernel Module Management (KMM) Operator worker pod is orphaned when deleting a module with a container image that cannot be pulled.
- Consequence: Failing worker pods are left on the cluster and are never garbage collected.
- Fix: KMM now garbage collects orphaned failing pods upon module deletion.
- Result: The module is successfully deleted, and all associated orphaned failing pods are also deleted.
The KMM Operator tries to create a MIC even when the node selector does not match.
- Cause: The Kernel Module Management (KMM) Operator tries to create a `ModuleImagesConfig` (MIC) resource even when the node selector does not match any actual nodes, and fails.
- Consequence: The KMM Operator reports an error when reconciling a module that does not target any node.
- Fix: The `Images` field in the MIC resource is now optional.
- Result: The KMM Operator can successfully create the MIC resource even when there are no images in it.
KMM does not reload the kernel module in case the node reboot sequence is too quick.
- Cause: The Kernel Module Management (KMM) Operator does not reload the kernel module in case the node reboot sequence is too quick. The reboot is determined based on the timestamp of the status condition being later than the timestamp in the Node Machine Configuration (NMC) status.
- Consequence: When the reboot happens quickly, in less time than the grace period, the node state does not change. After the node reboots, KMM does not load the kernel module again.
- Fix: Instead of relying on the condition state, NMC can rely on the `Status.NodeInfo.BootID` field. This field is set by the kubelet based on the `/proc/sys/kernel/random/boot_id` file of the server node, so it is updated after each reboot.
- Result: The more accurate timestamps enable the Kernel Module Management (KMM) Operator to reload the kernel module after the node reboot sequence.
Filtering out node heartbeats events for the Node Machine Configuration (NMC) controller.
- Cause: The NMC controller gets spammed with events from node heartbeats. The node heartbeats let the Kubernetes API server know that the node is still connected and functional.
- Consequence: The spamming causes constant reconciliation even when no module, and therefore no NMC, is applied to the cluster.
- Fix: The NMC controller now filters the node's heartbeat out of its reconciliation loop.
- Result: The NMC controller only gets real events and filters out node heartbeats.
NMC status contains toleration values, even though there are no tolerations in the `NMC.spec` or in the module.
- Cause: The Node Machine Configuration (NMC) status contains toleration values, even though there are no tolerations in the `NMC.spec` or in the module.
- Consequence: Tolerations other than Kernel Module Management-specific tolerations can appear in the status.
- Fix: The NMC status now gets its toleration from a dedicated annotation rather than from the worker pod.
- Result: The NMC status only contains the module’s tolerations.
The KMM Operator version 2.4 fails to start properly and cannot list the `modulebuildsignconfigs` resource.
- Cause: On the Kernel Module Management (KMM) Operator, when the Operator is installed using Red Hat Konflux, it does not start properly and the log files contain errors.
- Consequence: The KMM Operator does not work as expected.
- Fix: The Cluster Service Version (CSV) file is updated to list the `modulebuildsignconfigs` and the `moduleimagesconfigs` resources.
- Result: The KMM Operator works as expected.
The Red Hat Konflux build does not include version and git commit ID in the Operator logs.
- Cause: On the Kernel Module Management (KMM) Operator, when the Operator was built using Communications Platform as a Service (CPaas), the build included the Operator version and git commit ID in the log files. However, with Red Hat Konflux these details are not included in the log files.
- Consequence: Important information is missing from the log files.
- Fix: Some modifications are introduced in Konflux to resolve this issue.
- Result: The KMM Operator build now includes the Operator version and git commit ID in the log files.
The KMM Operator does not load the module after node with taint is rebooted.
- Cause: The Kernel Module Management (KMM) Operator does not reload the kernel module in case the node reboot sequence is too quick. The reboot is determined based on the timestamp of the status condition being later than the timestamp in the Node Machine Configuration (NMC) status.
- Consequence: When the reboot happens quickly, in less time than the grace period, the node state does not change. After the node reboots, KMM does not load the kernel module again.
- Fix: Instead of relying on the condition state, NMC can rely on the `Status.NodeInfo.BootID` field. This field is set by the kubelet based on the `/proc/sys/kernel/random/boot_id` file of the server node, so it is updated after each reboot.
- Result: The more accurate timestamps enable the Kernel Module Management (KMM) Operator to reload the kernel module after the node reboot sequence.
Redeploying a module that uses in-cluster builds fails with `ImagePullBackOff`.
- Cause: On the Kernel Module Management (KMM) Operator, the image pull policy for the puller pod and the worker pod is different.
- Consequence: An image can be considered as existing when, in fact, it is not.
- Fix: The image pull policy of the pull pod is now the same as the pull policy defined in the KMM module, because it is the same policy that is used by the worker pod.
- Result: The MIC represents the state of the image in the same way the worker pod accesses it.
The MIC controller creates two pull pods when it should create just one.
- Cause: On the Kernel Module Management (KMM) Operator, the `ModuleImagesConfig` (MIC) controller may create multiple pull pods for the same image.
- Consequence: Resources are not used appropriately or as intended.
- Fix: The `CreateOrPatchMIC` API receives a slice of `ImageSpecs`. Because the input is created by going over the target nodes and adding their images to the slice, any duplicate `ImageSpecs` are now filtered out.
- Result: The KMM Operator works as expected.
The `job.gcDelay` example in the documentation should specify `0s` instead of `0`.
- Cause: The Kernel Module Management (KMM) Operator default `job.gcDelay` duration field is `0s`, but the documentation mentions the value as `0`.
- Consequence: Entering a custom value of `60` instead of `60s` or `1m` might result in an error due to the wrong input type.
- Fix: The `job.gcDelay` field in the documentation is updated to the default value of `0s`.
- Result: Users are less likely to get confused.
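As a hedged sketch of the correct input type (assuming the Operator configuration layout described in Configuring the Kernel Module Management Operator), the garbage-collection delay is a duration string, not a bare number:

```yaml
# Excerpt from the KMM Operator configuration (illustrative).
job:
  # Must be a duration such as 0s, 60s, or 1m; a bare "60" is invalid.
  gcDelay: 1m
```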
The KMM Operator hub environment does not work because of missing MIC and MBSC CRDs.
- Cause: The Kernel Module Management (KMM) Operator hub environment only generates Custom Resource Definition (CRD) files based on the `api-hub/` directory. As a result, it does not contain some CRDs that are required for the KMM Operator hub environment, such as the `ModuleImagesConfig` (MIC) and `ModuleBuildSignConfig` (MBSC) resources.
- Consequence: The KMM Operator hub environment cannot work because it tries to start controllers reconciling CRDs that do not exist in the cluster.
- Fix: The fix generates all CRD files into the `config/crd-hub/bases` directory, but only applies the resources to the cluster that it actually needs.
- Result: The KMM Operator hub environment works as expected.
The KMM OperatorHub environment cannot build when finalizers are not set on a resource.
- Cause: The Kernel Module Management (KMM) Operator displays an error with the `ManagedClusterModule` controller failing to build. This is due to missing `ModuleImagesConfig` (MIC) resource finalizers and Role-Based Access Control (RBAC) permissions for the KMM OperatorHub environment.
- Consequence: The KMM OperatorHub environment cannot build images.
- Fix: The RBAC permissions are updated to allow updating finalizers on the MIC resource, and the appropriate rules are then created.
- Result: The KMM OperatorHub environment builds images without errors with the `ManagedClusterModule` controller.
The `PreflightValidationOCP` custom resource with a `kernelVersion: tesdt` causes the KMM Operator to panic.
- Cause: Creating a `PreflightValidationOCP` custom resource (CR) with a `kernelVersion` flag that is set to `tesdt` causes the Kernel Module Management (KMM) Operator to generate a panic runtime error.
- Consequence: Entering invalid kernel versions causes the KMM Operator to panic.
- Fix: A webhook, a method for one application to automatically send real-time data to another application when a specific event occurs, is now added to the `PreflightValidationOCP` CR.
- Result: The `PreflightValidationOCP` CR with invalid kernel versions can no longer be applied to the cluster, therefore preventing the Operator from generating a panic runtime error.
The `PreflightValidationOCP` custom resource with a `kernelVersion` flag that is different from the one of the cluster does not work.
- Cause: Creating a `PreflightValidationOCP` custom resource (CR) with a `kernelVersion` flag that is different from the one of the cluster does not work.
- Consequence: The Kernel Module Management (KMM) Operator is unable to find the Driver Toolkit (DTK) input image for the new kernel version.
- Fix: You must use the `PreflightValidationOCP` CR and explicitly set the `dtkImage` field in the CR.
- Result: Using the `kernelVersion` and `dtkImage` fields, the feature can build installed modules for target OpenShift Container Platform versions.
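The two fields can be combined as in this hedged sketch (the resource name, kernel version, and image reference are illustrative, not from this document):

```yaml
apiVersion: kmm.sigs.x-k8s.io/v1beta2
kind: PreflightValidationOCP
metadata:
  name: preflight-example                 # hypothetical name
spec:
  # Target kernel, which may differ from the cluster's running kernel.
  kernelVersion: 5.14.0-570.el9.x86_64
  # The DTK image must be set explicitly for the target kernel.
  dtkImage: quay.io/example/driver-toolkit:target-release
  pushBuiltImage: false
```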
The KMM Operator version 2.4 documentation is updated with `PreflightValidationOCP` information.
- Cause: Previously, when creating a `PreflightValidationOCP` CR, you were required to supply the release image. This has now changed, and you need to set the `kernelVersion` and `dtkImage` fields.
- Consequence: The documentation was outdated and required an update.
- Fix: The documentation is updated with the new support details.
- Result: The KMM preflight feature is documented as expected.
5.3.4. Known issues
The `ModuleUnloaded` event does not appear when a module is unloaded.
- Cause: When a module is loaded (creating a `ModuleLoad` event) or unloaded (creating a `ModuleUnloaded` event), the events might not appear. This happens when you load and unload the kernel module in quick succession.
- Consequence: The `ModuleLoad` and `ModuleUnloaded` events might not appear in OpenShift Container Platform.
- Fix: Introduce an alerting mechanism for this potential behavior and for awareness when working with modules.
- Result: Not yet available.
5.4. Release notes for Kernel Module Management Operator 2.4.1
5.4.1. Known issues
If you are running KMM-hub version 2.3.0 or earlier and you are not running KMM, the upgrade to KMM-hub 2.4.0 is not reliable. Instead, you must upgrade to KMM-hub 2.4.1. KMM is not affected by this issue. For more information, see RHEA-2025:10778 - Product Enhancement Advisory.
5.5. Release notes for Kernel Module Management Operator 2.5
5.5.1. New features and enhancements
- Starting with this version, you can use the KMM Operator to manage the lifecycle of kmod images that you installed by using the Day 1 utility. When a Day 1 kmod image is transitioned to the KMM Operator by using a `Module`, a `BootMachineConfig` (BMC) CRD is also created in the cluster. The BMC CRD fixes sudden reboot issues by ensuring that the `MachineConfig` gets updated with the correct values without triggering a node reboot. For more information, see Managing Day 1 kmod images.
- The Kernel Module Management Operator (KMMO) 2.5 provides a `version.ready` label to indicate that the new version of the kernel module is loaded and ready to use. For more information, see Customizing upgrades for kernel modules.
KMM Operator support on IBM Power architecture
RHEL does not provide a real-time Kernel for IBM Power, so do not deploy or validate any real-time features for KMM 2.5 on IBM Power compute nodes.
KMM Operator support on IBM Z architecture
Kernel Module Management (KMM) 2.5 is now supported on IBM Z architecture. However, RHEL does not provide a real-time Kernel for IBM Z. Therefore, you should not deploy or validate any real-time features for KMM 2.5 on IBM Z worker nodes.
5.5.2. Bug fixes
`PreflightValidationOCP` from KMM 2.4 does not synchronize status between v1beta1 and v1beta2.
- Cause: This happens because the v1beta1 to v1beta2 conversion webhook was not well defined in the CRD.
- Consequence: The status only shows in v1beta2 and not in v1beta1.
- Fix: A conversion between v1beta1 and v1beta2 has been added to the `PreflightValidationOCP` CRD.
- Result: The `PreflightValidationOCP` status is now shown in v1beta1 and v1beta2.
`PreflightValidationOCP` in KMM 2.4 incorrectly pushes images to the registry.
- Cause: `PreflightValidationOCP` in KMM version 2.4 pushes images to the registry despite the `pushBuiltImage: false` setting.
- Consequence: The new `PreflightValidationOCP` for KMM 2.4 ignores the `pushBuiltImage: false` setting and pushes built images to the registry.
- Fix: The internal logic has been updated to ensure the push behavior for built images is correctly propagated across all relevant workflows.
- Result: Built images are no longer incorrectly pushed to the registry when `pushBuiltImage: false` is set.
The `kmm-operator-controller` pod encounters `OOMKilled` errors.
- Cause: The `kmm-operator-controller` pod repeatedly encounters `OOMKilled` errors despite having a 1Gi memory limit. This issue occurs during cycles of installing and uninstalling the x100 Operator and associated kernel modules, suggesting a potential memory leak related to module management.
- Consequence: This issue occurs even though the container has both `resources.limits.memory` and `resources.requests.memory` set to 1Gi. After several cycles, the manager container is repeatedly terminated by `OOMKilled`. The pod status shows `Running` with a `0/1` ready state, and the restart count grows continuously.
- Fix: A fix that completely removes all code and configuration related to managing Cluster and Service CA ConfigMaps has been implemented. Update to KMM 2.4 or higher.
- Result: Installing and uninstalling the x100 Operator and associated kernel modules runs as expected.
OpenShift 4.20 includes Kernel Module Management (KMM) Operator version 2.3.0
- Cause: The OpenShift 4.20 catalog includes Kernel Module Management (KMM) Operator version 2.3.0 as the latest version instead of the required KMM versions 2.4.0 and 2.4.1.
- Consequence: The 4.20 catalog was outdated and required an update.
- Fix: The 4.20 catalogs have been updated to include KMM Operator versions 2.4.0 and 2.4.1.
- Result: The catalogs are now up-to-date.
`HashAnnotationDiffer` function could produce unexpected results.
- Cause: A potential bug in the `HashAnnotationDiffer` function within Kernel Module Management (KMM) could be exposed by a change in implementation.
- Consequence: While this potential bug is currently mitigated by the NMC logic, a change in implementation could expose this bug at run time.
- Fix: The `HashAnnotationDiffer` method in `internal/pod/workerpodmanager.go` was updated to correctly handle cases where both input pods are nil or only one is nil. The logic now returns `false` (no difference) when both are nil, and `true` (difference) when only one is nil.
- Result: The `HashAnnotationDiffer` function runs as expected.
5.6. Release notes for Kernel Module Management Operator 2.5.1
5.6.1. Known issues
Kernel Module Management (KMM) version 2.5 does not run on Red Hat OpenShift Service on AWS (ROSA) clusters or any other cluster that does not install the `MachineConfig` CRD.
- Cause: This happens because the BMC controller that monitors `MachineConfig` objects cannot find these objects on ROSA clusters, because they do not exist there.
- Consequence: The BMC controller fails and the KMM controller pods continually restart.
- Fix: In this version, the Operator verifies that the `MachineConfig` CRD is present on a cluster and runs the BMC controller only when the `MachineConfig` CRD is present.
- Result: ROSA controller pods start successfully.
5.7. Release notes for Kernel Module Management Operator 2.6
5.7.1. New features and enhancements
- In this release, wildcard and glob pattern support for the `filesToSign` field has been added for signing kernel modules in a specific folder. Previously, you had to specify the exact path to each `.ko` file you wanted to sign. Now, you can add the full path to explicit files, as previously required, or any glob patterns supported by the Ash shell. Additionally, this change propagates the `DirName` string from the API into the sign image template, and adds the signed files to the validation webhook.
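As a sketch of the new pattern support (the paths and secret names are illustrative, not from this document), the `filesToSign` field can now mix explicit paths and glob patterns:

```yaml
# Excerpt from a Module kernel mapping (illustrative names).
sign:
  keySecret:
    name: my-signing-key       # hypothetical secret
  certSecret:
    name: my-signing-cert      # hypothetical secret
  filesToSign:
    - /opt/lib/modules/my_kmod.ko     # explicit path, as before
    - /opt/lib/modules/extra/*.ko     # glob pattern, now supported
```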
- In this release, you can use the `AutomountServiceAccountToken` parameter to disable the auto-mounting of the projected volume. You can set `AutomountServiceAccountToken` to `false` to disable auto-mounting and mount the config maps and tokens necessary for the `DevicePlugin` application. Kubernetes automatically mounts the service account token and root certificate authorities (CAs) into the `/var/run/secrets/kubernetes.io/serviceaccount` directory of the device-plugin pods using projected volumes. In some cases, you might want to use additional custom CAs or tokens for the device plugin, but Kubernetes does not allow mounting them to the same path unless auto-mounting is disabled.
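The parameter can be sketched as follows; the placement on the device plugin section is an assumption based on the description above, and the image is illustrative:

```yaml
# Excerpt from a Module spec (field placement is an assumption;
# names are illustrative).
devicePlugin:
  # Disable auto-mounting of the projected service account volume so
  # that custom CAs or tokens can be mounted at the same path.
  automountServiceAccountToken: false
  container:
    image: quay.io/example/device-plugin:v1.0.0   # hypothetical image
```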
- In this release, new security settings have been added to prevent filesystem tampering and remove privileged kernel operations. The new settings enhance container security by restricting system capabilities and enabling read-only root filesystems across manager, webhook-server, and Operator components to strengthen security posture and reduce attack surface. For more information, see KMM RapiDAST "Root file system is not read-only" while checking DAST.
- In this release, when installing KMM using OLM, you can add additional tolerations to the Operator when worker nodes have custom taints or there are no accessible control-plane nodes. By default, the KMM Operator is installed on control plane nodes when possible and includes tolerations that allow the KMM Operator to be scheduled on those nodes. In environments where the control plane is not accessible, the KMM Operator is installed on worker nodes.
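As a hedged sketch (assuming the standard OLM `Subscription` config mechanism carries the additional tolerations; the taint key and values are illustrative):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kernel-module-management
  namespace: openshift-kmm
spec:
  channel: stable
  name: kernel-module-management
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    # Additional toleration for worker nodes with a custom taint
    # (key and value are illustrative).
    tolerations:
      - key: "example.com/dedicated"
        operator: "Equal"
        value: "kmm"
        effect: "NoSchedule"
```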
- In this release, a new optional `imageRebuildTrigger` counter field has been added to the `Module`, `ManagedClusterModule`, and `ModuleImagesConfig` CRDs that allows you to force Kernel Module Management (KMM) to reverify and rebuild module images when using ephemeral image registries. When this counter's value changes, the system automatically clears cached image statuses and reverifies the image existence, potentially triggering rebuilds.
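The counter can be sketched as a single field in the spec (exact placement per the CRD reference; the value is illustrative):

```yaml
# Excerpt from a Module spec (illustrative). Incrementing the counter
# clears cached image statuses and forces KMM to reverify image
# existence, potentially triggering rebuilds.
imageRebuildTrigger: 1   # bump (for example, to 2) to trigger reverification
```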
5.7.2. Bug fixes
Unwarranted kernel module removal due to memory and disk limitations.
- Cause: Memory and disk limitations can cause the removal of kernel modules.
- Consequence: The kernel module is removed.
- Fix: This release contains a new set of internal tolerations, such as `DiskPressure`, `MemoryPressure`, and `PIDPressure`, that are propagated through the module loader data flow and node selection process. These internal tolerations are appended to module-specific tolerations when handling module scheduling.
- Result: The internal tolerations improve module scheduling to account for additional system resource pressure conditions. Modules can now be deployed across a broader range of cluster nodes, enhancing resource utilization and deployment flexibility.
Kernel Module Management (KMM) cannot delete module names that contain a `.`.
- Cause: The Kernel Module Management (KMM) Operator cannot delete modules that contain a `.` in their name.
- Consequence: The module intended for deletion hangs indefinitely.
- Fix: The finalizer `regexp` used to remove the node label has been modified to complete deletion.
- Result: When the node label has been correctly deleted, the module deletes successfully.
For more information, see MGMT-19647.
Build and sign pods do not inherit Module tolerations.
- Cause: Build and sign pods created through the `ModuleImagesConfig` (MIC) and `ModuleBuildSignConfig` (MBSC) flows do not inherit tolerations from the parent `Module`.
- Consequence: Build pods fail scheduling on nodes with custom taints. Build pods should have the same tolerations as defined in the `Module.spec.tolerations` parameter, allowing them to schedule on tainted nodes where kernel headers are available.
- Fix: Added a `Tolerations` field to module build and image configurations that specifies the tolerations for build and sign pods.
- Result: You can now specify tolerations for build and sign pods, enabling control over pod scheduling on nodes with taints. This fix supports standard toleration properties including effect, key, operator, and duration settings.
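The new field can be sketched as follows; the placement inside the build configuration is an assumption based on the fix description, and all names are illustrative:

```yaml
# Excerpt from a Module kernel mapping (illustrative names;
# field placement is an assumption).
build:
  dockerfileConfigMap:
    name: my-kmod-dockerfile     # hypothetical ConfigMap
  # Tolerations for the build and sign pods, mirroring
  # Module.spec.tolerations (key and value are illustrative).
  tolerations:
    - key: "example.com/dedicated"
      operator: "Equal"
      value: "builds"
      effect: "NoSchedule"
```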
5.7.3. Known issues
- Issues have been encountered when loading out-of-tree (OOT) drivers for QDU x100 DU PCIe cards from v4.14. For more information, see Case 04245147.