Chapter 6. Resolved issues


The following notable issues are resolved in Red Hat OpenShift AI 2.22.1. Security updates, bug fixes, and enhancements for Red Hat OpenShift AI 2.22 are released as asynchronous errata. All OpenShift AI errata advisories are published on the Red Hat Customer Portal.

6.1. Security updates in Red Hat OpenShift AI 2.22.1 (August 2025)

This release provides security updates. For a complete list of updates, see the associated errata advisory on the Red Hat Customer Portal.

6.2. Issues resolved in Red Hat OpenShift AI 2.22

RHOAIENG-26537 - Users cannot access the dashboard after installing OpenShift AI 2.21

After you installed OpenShift AI 2.21 and created a DataScienceCluster on a new cluster, you could not access the dashboard because the Auth custom resource was created without the default group configuration. This issue is now resolved.
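
If you are verifying an affected cluster, you can check the group configuration on the Auth custom resource. The following command is a sketch: it assumes the cluster-scoped Auth resource provided by OpenShift AI is named auth and exposes adminGroups and allowedGroups fields, which should list your default groups (for example, rhods-admins):

$ oc get auth.services.platform.opendatahub.io auth -o jsonpath='{.spec.adminGroups}{"\n"}{.spec.allowedGroups}{"\n"}'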

RHOAIENG-26464 - InstructLab training phase1 pods restart when using default value due to insufficient memory in RHOAI 2.21

When you ran the InstructLab pipeline using the default value for the train_memory_per_worker input parameter (100 GiB), the phase1 training task failed because of insufficient pod memory. This issue is now resolved.

RHOAIENG-26263 - Node selector not cleared when changing the hardware profile for a workbench or model deployment

If you edited an existing workbench or model deployment to change the hardware profile from one that included a node selector to one that did not, the node selector from the previous profile was not removed, so the old node placement settings persisted. With this release, the issue is resolved.
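
To confirm that no stale node placement remains after you change profiles, you can inspect the workbench resource directly. This is a minimal sketch that assumes a workbench backed by a Notebook custom resource named my-workbench in the my-project namespace; an empty result means that no node selector is set:

$ oc get notebook my-workbench -n my-project -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}'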

RHOAIENG-26099 - Environment variables HTTP_PROXY and HTTPS_PROXY added to notebooks

Previously, the notebook controller injected the cluster-wide OpenShift proxy configuration into all newly created and restarted workbenches. With this release, proxy settings are not injected unless a cluster administrator enables the injection through a config map.

To enable proxy configuration, run the following command:

$ oc create configmap notebook-controller-setting-config --from-literal=INJECT_CLUSTER_PROXY_ENV=true -n redhat-ods-applications
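
To verify the setting, read the key back from the config map:

$ oc get configmap notebook-controller-setting-config -n redhat-ods-applications -o jsonpath='{.data.INJECT_CLUSTER_PROXY_ENV}{"\n"}'
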
Important

Any change to the INJECT_CLUSTER_PROXY_ENV key in the config map is propagated only after the odh-notebook-controller pod is recreated. To update the behavior, either delete the relevant pod or perform a deployment rollout.

To delete the pod, run the following command:

$ oc delete pod -l app=odh-notebook-controller -A

To perform a deployment rollout, run the following command:

$ oc rollout restart -n redhat-ods-applications deployment/odh-notebook-controller-manager

RHOAIENG-23475 - Inference requests on IBM Power in a disconnected environment fail with a timeout error

Previously, when you used the IBM Power architecture to send longer prompts of more than 100 input tokens to the inference service, there was no response from the inference service. With this release, the issue is resolved.
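
As a quick check, you can send a long prompt to the model endpoint. The following sketch assumes a model served by a vLLM-based runtime that exposes an OpenAI-compatible completions endpoint; replace the route, model name, and prompt placeholders with your own values:

$ curl -ks https://<inference-route>/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "<model-name>", "prompt": "<prompt of more than 100 tokens>", "max_tokens": 50}'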

RHOAIENG-20595 - Pipeline tasks fail to run when defining an http_proxy environment variable

The pipeline tasks failed to run if you attempted to set the http_proxy or https_proxy environment variables in a pipeline task. With this release, the issue is resolved.

RHOAIENG-16568 - Unable to download notebook as a PDF from JupyterLab Workbenches

Previously, you could not download a notebook as a PDF file in Jupyter. With this release, the issue is resolved.

RHOAIENG-14271 - Compatibility errors occur when using different Python versions in Ray clusters with Jupyter notebooks

Previously, when you used Python version 3.11 in a Jupyter notebook and then created a Ray cluster, the cluster defaulted to a workbench image that contained both Ray version 2.35 and Python version 3.9, which caused compatibility errors. With this release, the issue is resolved.

RHOAIENG-7947 - Model serving fails during query in KServe

Previously, if you initially installed the ModelMesh component and enabled the multi-model serving platform, but later installed the KServe component and enabled the single-model serving platform, inference requests to models deployed on the single-model serving platform could fail. This issue no longer occurs.
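
You can confirm which serving components are enabled by inspecting the DataScienceCluster resource. This sketch assumes the default resource name default-dsc; the command prints Managed or Removed for each component:

$ oc get datasciencecluster default-dsc -o jsonpath='{.spec.components.kserve.managementState}{"\n"}{.spec.components.modelmeshserving.managementState}{"\n"}'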

RHOAIENG-580 (previously documented as RHODS-9412) - Elyra pipeline fails to run if workbench is created by a user with edit permissions

If you were granted edit permissions for a project and created a project workbench, you saw the following behavior:

  • During the workbench creation process, you received an Error creating workbench message related to the creation of Kubernetes role bindings.
  • Despite the preceding error message, OpenShift AI still created the workbench. However, the error message meant that you were not able to use the workbench to run Elyra data science pipelines.
  • If you tried to use the workbench to run an Elyra pipeline, Jupyter showed an Error making request message that described failed initialization.

With this release, these issues are resolved.
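
If you want to confirm that the required permissions now exist, you can list the role bindings in the project. This is a hedged example; the exact binding name depends on the workbench name, so look for an entry associated with the workbench service account:

$ oc get rolebindings -n <project-name>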

RHOAIENG-24682 - [vLLM-Cuda] Unable to deploy model on FIPS enabled cluster

Previously, if you deployed a model by using the vLLM NVIDIA GPU ServingRuntime for KServe or vLLM ServingRuntime Multi-Node for KServe runtimes on NVIDIA accelerators in a FIPS-enabled cluster, the deployment could fail. This issue is now resolved.
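
To confirm that a cluster is running in FIPS mode before deploying, you can check the kernel FIPS flag on a node. This sketch assumes a node named worker-0; a value of 1 means FIPS mode is enabled:

$ oc debug node/worker-0 -- chroot /host cat /proc/sys/crypto/fips_enabled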

RHOAIENG-23596 - Inference requests on IBM Power with longer prompts to the inference service fail with a timeout error

Previously, when you used the IBM Power architecture to send longer prompts of more than 100 input tokens to the inference service, you received no response. This issue no longer occurs.
