Chapter 8. Known issues


This section describes known issues in Red Hat OpenShift AI 2.19 and any known methods of working around these issues.

RHOAIENG-24886 - Cannot deploy OCI model when Model URI field includes prefix

When deploying an OCI model, if you paste the complete URI in the Model URI field and then move the cursor to another field, the URL prefix (for example, http://) is removed from the Model URI field, but it is included in the storageUri value in the InferenceService resource. As a result, you cannot deploy the OCI model.

Workaround

To prevent the issue, manually remove the URI prefix from the Model URI field before you move the cursor from that field, or copy and paste the URI text excluding the prefix.

To recover from the issue, open the InferenceService resource and edit the storageUri value to change the prefix from http to oci.
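
To illustrate the recovered resource, the following minimal sketch shows an InferenceService with the corrected oci:// prefix; the metadata and model format values are placeholders:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: <model_name>
spec:
  predictor:
    model:
      modelFormat:
        name: <model_format>
      storageUri: oci://<registry_host>/<repository>:<tag>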

RHOAIENG-24104 - KServe reconciler should only deploy certain resources when Authorino is installed

When Authorino is not installed, Red Hat OpenShift AI still applies the AuthorizationPolicy and EnvoyFilter resources to the KServe serverless deployment mode. As a result, some inference requests might be blocked.

Workaround
Install Authorino, then restart the OpenShift AI operator, KServe controller, and odh-model-controller by deleting the existing pods.
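
For example, assuming the default OpenShift AI namespaces and the pod labels used by a typical installation (verify the label selectors in your cluster before running these commands), you can delete the pods as follows:

oc delete pod -n redhat-ods-operator -l name=rhods-operator
oc delete pod -n redhat-ods-applications -l control-plane=kserve-controller-manager
oc delete pod -n redhat-ods-applications -l app=odh-model-controller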

RHOAIENG-23596 - Inference requests on IBM Power with longer prompts to the inference service fail with a timeout error

When using the IBM Power architecture to send longer prompts of more than 100 input tokens to the inference service, there is no response from the inference service. An error message similar to the following appears:

504 Gateway Time-out - The server didn't respond in time.

Workaround

There are two options for working around this issue:

  • When creating an inference service, set the environment variable OMP_NUM_THREADS to 16.
  • Use more CPUs.
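
For example, because the KServe model specification accepts standard container fields, you can set the environment variable directly in the InferenceService predictor. This is a minimal sketch; the name and model format are placeholders:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: <model_name>
spec:
  predictor:
    model:
      modelFormat:
        name: <model_format>
      env:
        - name: OMP_NUM_THREADS
          value: "16"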

RHOAIENG-23562 - TrustyAIService TLS handshake error in FIPS clusters

When using a FIPS cluster that uses an external route to send a request to the TrustyAIService, a TLS handshake error appears in the logs and the request is not processed.

Workaround
Use the UI to send a request to the TrustyAIService.

RHOAIENG-23475 - Inference requests on IBM Power in a disconnected environment fail with a timeout error

When using the IBM Power architecture, if prompts sent to the inference service are created in a disconnected environment, there is no response from the inference service. An error message similar to the following appears:

504 Gateway Time-out - The server didn't respond in time.

Workaround

There are two options for working around this issue:

  • When creating an inference service, set the environment variable OMP_NUM_THREADS to 16.
  • Use more CPUs.

RHOAIENG-23169 - StorageInitializer fails to download models from Hugging Face repository

Deploying models from Hugging Face in a KServe environment using the hf:// protocol fails when the cluster lacks built-in support for this protocol. Additionally, the storage initializer InitContainer in KServe can encounter a PermissionError because of insufficient write permissions in the default cache directory (/.cache).

Workaround

Install a ClusterStorageContainer resource in the KServe cluster with the following values:

  • Enable support for the hf:// URI format in the supportedUriFormats section.
  • Set the HF_HOME environment variable in the storage-initializer container to a writable directory, such as /tmp/hf_home.

The following code sample shows a ClusterStorageContainer resource with these values set:

apiVersion: "serving.kserve.io/v1alpha1"
kind: ClusterStorageContainer
metadata:
  name: default
spec:
  container:
    name: storage-initializer
    env:
      - name: HF_HOME
        value: /tmp/hf_home
  supportedUriFormats:
    - prefix: hf://
    - prefix: gs://
    - prefix: s3://
    - prefix: hdfs://
    - prefix: webhdfs://
    - regex: "https://(.+?).blob.core.windows.net/(.+)"
    - regex: "https://(.+?).file.core.windows.net/(.+)"
    - regex: "https?://(.+)/(.+)"

RHOAIENG-22965 - Data science pipeline task fails when optional input parameters are not set

When a pipeline has optional input parameters, creating a pipeline run and tasks with unset parameters fails with the following error:

failed: failed to resolve inputs: resolving input parameter optional_input with spec component_input_parameter:"parameter_name": parent DAG does not have input parameter

This issue also affects the InstructLab pipeline.

Workaround
Provide values for all optional parameters.

RHOAIENG-22439 - cuda-rstudio-rhel9 cannot be built

When building the RStudio Server workbench images, the build of the cuda-rstudio-rhel9 image fails with the following error:

Package supervisor-4.2.5-6.el9.noarch is already installed.
Dependencies resolved.
Nothing to do.
Complete!
.M...U...    /run/supervisor
error: build error: building at STEP "RUN yum -y module enable nginx:$NGINX_VERSION &&     INSTALL_PKGS="nss_wrapper bind-utils gettext hostname nginx nginx-mod-stream nginx-mod-http-perl fcgiwrap initscripts chkconfig supervisor" &&     yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS &&     rpm -V $INSTALL_PKGS &&     nginx -v 2>&1 | grep -qe "nginx/$NGINX_VERSION\." && echo "Found VERSION $NGINX_VERSION" &&     yum -y clean all --enablerepo='*'": while running runtime: exit status 1

Workaround

Run the following command to re-tag cuda-rstudio-rhel9:

oc tag --source=imagestreamtag redhat-ods-applications/cuda-rhel9:latest redhat-ods-applications/cuda-rstudio-rhel9:latest

Output similar to the following appears:

Tag redhat-ods-applications/cuda-rstudio-rhel9:latest set to redhat-ods-applications/cuda-rhel9@sha256:c03a3017364f311ad41f3a3677e0a532020cdd9f57fd98578288eb789cffbf4f.

RHOAIENG-21274 - Connection type changes back to S3 or URI when deploying a model with an OCI connection type

If you deploy a model that is using the S3 or URI connection type in a project with no matching connections, the Create new connection section is pre-populated using data from the model location of the S3 or URI storage location. If you change the connection type to OCI and enter a value in the Model URI field, the connection type changes back to S3 or URI.

Workaround
To deploy a model with OCI connection type, either register a model with OCI connections or use a model that has an OCI connection. Do not change the connection type while deploying from the model registry.

RHOAIENG-20595 - Pipeline fails to run if the http_proxy or https_proxy environment variable is set

If you set the http_proxy or https_proxy environment variable in a pipeline task, the pipeline fails with the following error:

Connecting to cache endpoint ds-pipeline-dspa.project-name.svc.cluster.local:8887 panic: runtime error: invalid memory address or nil pointer dereference

Workaround

Set the no_proxy environment variable in the pipeline task, and the pipeline runs as expected. For example:

my-task.set_env_variable("no_proxy", "localhost,127.0.0.1,svc.cluster.local,0,1,2,3,4,5,6,7,8,9")

RHOAIENG-20209 - Warning message not displayed when requested resources exceed threshold

When you click Distributed workloads > Project metrics and view the Requested resources section, the charts show the requested resource values and the total shared quota value for each resource (CPU and Memory). However, when the Requested by all projects value exceeds the Warning threshold value for that resource, the expected warning message is not displayed.

Workaround
None.

RHOAIENG-18590 - The KnativeServing resource fails after disabling and enabling KServe

After disabling and enabling the kserve component in the DataScienceCluster, the KnativeServing resource might fail.

Workaround

Delete all ValidatingWebhookConfiguration and MutatingWebhookConfiguration webhooks related to Knative:

  1. Get the webhooks:

    oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration | grep -i knative
  2. Ensure KServe is disabled.
  3. Get the webhooks:

    oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration | grep -i knative
  4. Delete the webhooks, as shown in the example after this procedure.
  5. Enable KServe.
  6. Verify that the KServe pod can successfully spawn, and that pods in the knative-serving namespace are active and operational.
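
For example, assuming <webhook_name> stands for each name returned by the get commands in the previous steps, delete the webhooks as follows:

oc delete ValidatingWebhookConfiguration <webhook_name>
oc delete MutatingWebhookConfiguration <webhook_name>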

RHOAIENG-16247 - Elyra pipeline run outputs are overwritten when runs are launched from OpenShift AI dashboard

When a pipeline is created and run from Elyra, outputs generated by the pipeline run are stored in the folder bucket-name/pipeline-name-timestamp of object storage.

When a pipeline is created from Elyra and the pipeline run is started from the OpenShift AI dashboard, the timestamp value is not updated. This can cause pipeline runs to overwrite files created by previous pipeline runs of the same pipeline.

This issue does not affect pipelines compiled and imported using the OpenShift AI dashboard because runid is always added to the folder used in object storage. For more information about storage locations used in data science pipelines, see Storing data with data science pipelines.

Workaround
When storing files in an Elyra pipeline, use different subfolder names on each pipeline run.

OCPBUGS-49422 - AMD GPUs and AMD ROCm workbench images are not supported in a disconnected environment

This release of OpenShift AI does not support AMD GPUs and AMD ROCm workbench images in a disconnected environment because installing the AMD GPU Operator requires internet access to fetch dependencies needed to compile GPU drivers.

Workaround
None.

RHOAIENG-15982 - Replicas not created correctly in multinode deployment when pipelineParallelSize parameter updated

When you update the pipelineParallelSize parameter in a multinode deployment, new replicas are not created correctly. Instead, the existing ReplicaSet’s replicas are modified, causing the new deployment to malfunction.

Workaround
Remove the existing InferenceService CR and create an InferenceService CR with the updated pipelineParallelSize value.
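
For example, assuming your updated manifest is saved in a local file (the file name is illustrative):

oc delete inferenceservice <model_name> -n <project_name>
oc apply -f updated-inferenceservice.yaml -n <project_name>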

RHOAIENG-14271 - Compatibility errors occur when using different Python versions in Ray clusters with Jupyter notebooks

When you use Python version 3.11 in a Jupyter notebook, and then create a Ray cluster, the cluster defaults to a workbench image that contains Ray version 2.35 and Python version 3.9. This mismatch can cause compatibility errors, as the Ray Python client requires a Python version that matches your workbench configuration.

Workaround
When you create the Ray cluster, use the ClusterConfiguration argument to specify a workbench image that contains a Python version matching your workbench.

RHOAIENG-12516 - fast releases are available in unintended release channels

Due to a known issue with the stream image delivery process, fast releases are currently available on unintended streaming channels, for example, stable and stable-x.y. For accurate release type, channel, and support lifecycle information, refer to the Life-cycle Dates table on the Red Hat OpenShift AI Self-Managed Life Cycle page.

Workaround
None.

RHOAIENG-12233 - The chat_template parameter is required when using the /v1/chat/completions endpoint

When configuring the vLLM ServingRuntime for KServe runtime and querying a model using the /v1/chat/completions endpoint, the model fails with the following 400 Bad Request error:

As of transformers v4.44, default chat template is no longer allowed, so you must provide a chat template if the tokenizer does not define one

Transformers v4.44.2 is shipped with vLLM v0.5.5. As of vLLM v0.5.5, you must provide a chat template while querying a model using the /v1/chat/completions endpoint.

Workaround

If your model does not include a predefined chat template, you can use the --chat-template command-line parameter to specify a chat template in your custom vLLM runtime, as shown in the following example. Replace <CHAT_TEMPLATE> with the path to your template.

containers:
  - args:
      - --chat-template=<CHAT_TEMPLATE>

You can use the chat templates that are available as .jinja files in https://github.com/opendatahub-io/vllm/tree/main/examples or in the vLLM image under /apps/data/template. For more information, see Chat templates.

RHOAIENG-8294 - CodeFlare error when upgrading OpenShift AI 2.8 to version 2.10 or later

If you try to upgrade OpenShift AI 2.8 to version 2.10 or later, the following error message is shown for the CodeFlare component, due to a mismatch with the AppWrapper custom resource definition (CRD) version.

ReconcileCompletedWithComponentErrors DataScienceCluster resource reconciled with component errors: 1 error occurred: * CustomResourceDefinition.apiextensions.k8s.io "appwrappers.workload.codeflare.dev" is invalid: status.storedVersions[0]: Invalid value: "v1beta1": must appear in spec.versions

Workaround
  1. Delete the existing AppWrapper CRD:

    $ oc delete crd appwrappers.workload.codeflare.dev
  2. Wait for about 20 seconds, and then ensure that a new AppWrapper CRD is automatically applied, as shown in the following example:

    $ oc get crd appwrappers.workload.codeflare.dev
    NAME                                 CREATED AT
    appwrappers.workload.codeflare.dev   2024-11-22T18:35:04Z

RHOAIENG-7947 - Model serving fails during query in KServe

If you initially install the ModelMesh component and enable the multi-model serving platform, but later install the KServe component and enable the single-model serving platform, inference requests to models deployed on the single-model serving platform might fail. In these cases, inference requests return a 404 - Not Found error and the logs for the odh-model-controller deployment object show a Reconciler error message.

Workaround
In OpenShift, restart the odh-model-controller deployment object.

RHOAIENG-7716 - Pipeline condition group status does not update

When you run a pipeline that has condition groups, for example, dsl.If, the UI displays a Running status for the groups, even after the pipeline execution is complete.

Workaround

You can confirm if a pipeline is still running by checking that no child tasks remain active.

  1. From the OpenShift AI dashboard, click Data Science Pipelines > Runs.
  2. From the Project list, click your data science project.
  3. From the Runs tab, click the pipeline run that you want to check the status of.
  4. Expand the condition group and click a child task.

    A panel that contains information about the child task is displayed.

  5. On the panel, click the Task details tab.

    The Status field displays the correct status for the child task.

RHOAIENG-6435 - Distributed workloads resources are not included in Project metrics

When you click Distributed workloads > Project metrics and view the Requested resources section, the Requested by all projects value currently excludes the resources for distributed workloads that have not yet been admitted to the queue.

Workaround
None.

RHOAIENG-6409 - Cannot save parameter errors appear in pipeline logs for successful runs

When you run a pipeline more than once with data science pipelines 2.0, Cannot save parameter errors appear in the pipeline logs for successful pipeline runs. You can safely ignore these errors.

Workaround
None.

RHOAIENG-12294 (previously documented as RHOAIENG-4812) - Distributed workload metrics exclude GPU metrics

In this release of OpenShift AI, the distributed workload metrics exclude GPU metrics.

Workaround
None.

RHOAIENG-4570 - Existing Argo Workflows installation conflicts with install or upgrade

Data science pipelines 2.0 contains an installation of Argo Workflows. Red Hat does not support direct customer usage of this installation of Argo Workflows. To install or upgrade OpenShift AI with data science pipelines 2.0, ensure that there is no existing installation of Argo Workflows on your cluster. For more information, see Migrating to data science pipelines 2.0.

Workaround
Remove the existing Argo Workflows installation or set datasciencepipelines to Removed, and then proceed with the installation or upgrade.
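
For example, the following minimal sketch shows only the relevant field of the DataScienceCluster resource; the resource name is illustrative:

apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    datasciencepipelines:
      managementState: Removed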

RHOAIENG-3913 - Red Hat OpenShift AI Operator incorrectly shows Degraded condition of False with an error

If you have enabled the KServe component in the DataScienceCluster (DSC) object used by the OpenShift AI Operator, but have not installed the dependent Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless Operators, the kserveReady condition in the DSC object correctly shows that KServe is not ready. However, the Degraded condition incorrectly shows a value of False.

Workaround
Install the Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh Operators, and then recreate the DSC.

RHOAIENG-3025 - OVMS expected directory layout conflicts with the KServe StoragePuller layout

When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform (which uses KServe), there is a mismatch between the directory layout expected by OVMS and that of the model-pulling logic used by KServe. Specifically, OVMS requires the model files to be in the /<mnt>/models/1/ directory, while KServe places them in the /<mnt>/models/ directory.

Workaround

Perform the following actions:

  1. In your S3-compatible storage bucket, place your model files in a directory called 1/, for example, /<s3_storage_bucket>/models/1/<model_files>.
  2. To use the OVMS runtime to deploy a model on the single-model serving platform, choose one of the following options to specify the path to your model files:

    • If you are using the OpenShift AI dashboard to deploy your model, in the Path field for your data connection, use the /<s3_storage_bucket>/models/ format to specify the path to your model files. Do not specify the 1/ directory as part of the path.
    • If you are creating your own InferenceService custom resource to deploy your model, configure the value of the storageURI field as /<s3_storage_bucket>/models/. Do not specify the 1/ directory as part of the path.

KServe pulls model files from the subdirectory in the path that you specified. In this case, KServe correctly pulls model files from the /<s3_storage_bucket>/models/1/ directory in your S3-compatible storage.
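
For example, the following minimal InferenceService sketch uses the recommended path; the model name, runtime name, and bucket are placeholders:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: <model_name>
spec:
  predictor:
    model:
      modelFormat:
        name: openvino_ir
      runtime: <ovms_runtime_name>
      storageUri: s3://<s3_storage_bucket>/models/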

RHOAIENG-3018 - OVMS on KServe does not expose the correct endpoint in the dashboard

When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform, the URL shown in the Inference endpoint field for the deployed model is not complete.

Workaround
To send queries to the model, you must add the /v2/models/<model-name>/infer string to the end of the URL. Replace <model-name> with the name of your deployed model.
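
For example, assuming the model accepts the KServe v2 REST protocol (the host, tensor name, shape, and data are placeholders):

curl -k https://<inference_endpoint_host>/v2/models/<model-name>/infer \
  -H "Content-Type: application/json" \
  -d '{"inputs": [{"name": "input", "shape": [1, 4], "datatype": "FP32", "data": [1.0, 2.0, 3.0, 4.0]}]}'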

RHOAIENG-2602 - "Average response time" server metric graph shows multiple lines due to ModelMesh pod restart

The Average response time server metric graph shows multiple lines if the ModelMesh pod is restarted.

Workaround
None.

RHOAIENG-2585 - UI does not display an error/warning when UWM is not enabled in the cluster

Red Hat OpenShift AI does not correctly warn users if User Workload Monitoring (UWM) is disabled in the cluster. UWM is necessary for the correct functionality of model metrics.

Workaround
Manually ensure that UWM is enabled in your cluster, as described in Enabling monitoring for user-defined projects.
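
For reference, UWM is enabled by setting enableUserWorkload: true in the cluster-monitoring-config ConfigMap in the openshift-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true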

RHOAIENG-2555 - Model framework selector does not reset when changing Serving Runtime in form

When you use the Deploy model dialog to deploy a model on the single-model serving platform, if you select a runtime and a supported framework, but then switch to a different runtime, the existing framework selection is not reset. This means that it is possible to deploy the model with a framework that is not supported for the selected runtime.

Workaround
While deploying a model, if you change your selected runtime, click the Select a framework list again and select a supported framework.

RHOAIENG-2468 - Services in the same project as KServe might become inaccessible in OpenShift

If you deploy a non-OpenShift AI service in a data science project that contains models deployed on the single-model serving platform (which uses KServe), the accessibility of the service might be affected by the network configuration of your OpenShift cluster. This is particularly likely if you are using the OVN-Kubernetes network plugin in combination with host network namespaces.

Workaround

Perform one of the following actions:

  • Deploy the service in another data science project that does not contain models deployed on the single-model serving platform. Or, deploy the service in another OpenShift project.
  • In the data science project where the service is deployed, add a network policy to accept ingress traffic to your application pods, as shown in the following example:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-ingress-to-myapp
    spec:
      podSelector:
        matchLabels:
          app: myapp
      ingress:
         - {}

RHOAIENG-2228 - The performance metrics graph changes constantly when the interval is set to 15 seconds

On the Endpoint performance tab of the model metrics screen, if you set the Refresh interval to 15 seconds and the Time range to 1 hour, the graph results change continuously.

Workaround
None.

RHOAIENG-2183 - Endpoint performance graphs might show incorrect labels

In the Endpoint performance tab of the model metrics screen, the graph tooltip might show incorrect labels.

Workaround
None.

RHOAIENG-1919 - Model Serving page fails to fetch or report the model route URL soon after its deployment

When deploying a model from the OpenShift AI dashboard, the system displays the following warning message, even though the Status column of your model indicates success with an OK/green checkmark.

Failed to get endpoint for this deployed model. routes.route.openshift.io "<model_name>" not found

Workaround
Refresh your browser page.

RHOAIENG-404 - No Components Found page randomly appears instead of Enabled page in OpenShift AI dashboard

A No Components Found page might appear when you access the Red Hat OpenShift AI dashboard.

Workaround
Refresh the browser page.

RHOAIENG-234 - Unable to view .ipynb files in VSCode in an insecure cluster

When you use the code-server notebook image on Google Chrome in an insecure cluster, you cannot view .ipynb files.

Workaround
Use a different browser.

RHOAIENG-1128 - Unclear error message displays when attempting to increase the size of a Persistent Volume (PV) that is not connected to a workbench

When attempting to increase the size of a Persistent Volume (PV) that is not connected to a workbench, an unclear error message is displayed.

Workaround
Verify that your PV is connected to a workbench before attempting to increase the size.

RHOAIENG-497 - Removing DSCI Results In OpenShift Service Mesh CR Being Deleted Without User Notification

If you delete the DSCInitialization resource, the OpenShift Service Mesh CR is also deleted. A warning message is not shown.

Workaround
None.

RHOAIENG-282 - Workload should not be dispatched if required resources are not available

Sometimes a workload is dispatched even though a single machine instance does not have sufficient resources to provision the RayCluster successfully. The AppWrapper CRD remains in a Running state and related pods are stuck in a Pending state indefinitely.

Workaround
Add extra resources to the cluster.

RHOAIENG-131 - gRPC endpoint not responding properly after the InferenceService reports as Loaded

When numerous InferenceService instances are generated and requests are directed to them, the Service Mesh Control Plane (SMCP) becomes unresponsive. The status of the InferenceService instance is Loaded, but the call to the gRPC endpoint returns errors.

Workaround
Edit the ServiceMeshControlPlane custom resource (CR) to increase the memory limit of the Istio egress and ingress pods.
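
For example, the following minimal sketch shows the relevant SMCP fields; the resource name, namespace, and limit values are illustrative and should be tuned for your cluster:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: data-science-smcp
  namespace: istio-system
spec:
  gateways:
    ingress:
      runtime:
        container:
          resources:
            limits:
              memory: 4Gi
    egress:
      runtime:
        container:
          resources:
            limits:
              memory: 4Gi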

RHOAIENG-130 - Synchronization issue when the model is just launched

When the status of the KServe container is Ready, a request is accepted even though the TGIS container is not ready.

Workaround
Wait a few seconds to ensure that all initialization has completed and the TGIS container is actually ready, and then review the request output.

RHOAIENG-3115 - Model cannot be queried for a few seconds after it is shown as ready

Models deployed using the multi-model serving platform might be unresponsive to queries despite appearing as Ready in the dashboard. You might see an "Application is not available" response when querying the model endpoint.

Workaround
Wait 30-40 seconds and then refresh the page in your browser.

RHOAIENG-1619 (previously documented as DATA-SCIENCE-PIPELINES-165) - Poor error message when S3 bucket is not writable

When you set up a data connection and the S3 bucket is not writable, and you try to upload a pipeline, the error message Failed to store pipelines is not helpful.

Workaround
Verify that your data connection credentials are correct and that you have write access to the bucket you specified.

RHOAIENG-1207 (previously documented as ODH-DASHBOARD-1758) - Error duplicating OOTB custom serving runtimes several times

If you duplicate a model-serving runtime several times, the duplication fails with the Serving runtime name "<name>" already exists error message.

Workaround
Change the metadata.name field to a unique value.

RHOAIENG-1201 (previously documented as ODH-DASHBOARD-1908) - Cannot create workbench with an empty environment variable

When creating a workbench, if you click Add variable but do not select an environment variable type from the list, you cannot create the workbench. The field is not marked as required, and no error message is shown.

Workaround
None.

RHOAIENG-432 (previously documented as RHODS-12928) - Using unsupported characters can generate Kubernetes resource names with multiple dashes

When you create a resource and you specify unsupported characters in the name, then each space is replaced with a dash and other unsupported characters are removed, which can result in an invalid resource name.

Workaround
None.

RHOAIENG-226 (previously documented as RHODS-12432) - Deletion of the notebook-culler ConfigMap causes Permission Denied on dashboard

If you delete the notebook-controller-culler-config ConfigMap in the redhat-ods-applications namespace, you can no longer save changes to the Cluster Settings page on the OpenShift AI dashboard. The save operation fails with an HTTP request has failed error.

Workaround

Complete the following steps as a user with cluster-admin permissions:

  1. Log in to your cluster by using the oc client.
  2. Enter the following command to update the OdhDashboardConfig custom resource in the redhat-ods-applications application namespace:

    $ oc patch OdhDashboardConfig odh-dashboard-config -n redhat-ods-applications --type=merge -p '{"spec": {"dashboardConfig": {"notebookController.enabled": true}}}'

RHOAIENG-133 - Existing workbench cannot run Elyra pipeline after notebook restart

If you use the Elyra JupyterLab extension to create and run data science pipelines within JupyterLab, and you configure the pipeline server after you created a workbench and specified a notebook image within the workbench, you cannot execute the pipeline, even after restarting the notebook.

Workaround
  1. Stop the running notebook.
  2. Edit the workbench to make a small modification. For example, add a new dummy environment variable, or delete an existing unnecessary environment variable. Save your changes.
  3. Restart the notebook.
  4. In the left sidebar of JupyterLab, click Runtimes.
  5. Confirm that the default runtime is selected.

RHODS-12798 - Pods fail with "unable to init seccomp" error

Pods fail with CreateContainerError status or Pending status instead of Running status, because of a known kernel bug that introduced a seccomp memory leak. When you check the events on the namespace where the pod is failing, or run the oc describe pod command, the following error appears:

runc create failed: unable to start container process: unable to init seccomp: error loading seccomp filter into kernel: error loading seccomp filter: errno 524

Workaround
Increase the value of net.core.bpf_jit_limit as described in the Red Hat Knowledgebase solution Pods failing with error loading seccomp filter into kernel: errno 524 in OpenShift 4.
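
The Knowledgebase solution describes the supported procedure. As an illustration only, a kernel parameter such as net.core.bpf_jit_limit can be raised cluster-wide with a Tuned resource similar to the following sketch; the profile name, limit value, and node match are placeholders:

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: bpf-jit-limit
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
    - name: bpf-jit-limit
      data: |
        [main]
        summary=Increase the seccomp BPF JIT limit
        include=openshift-node
        [sysctl]
        net.core.bpf_jit_limit=<new_limit>
  recommend:
    - profile: bpf-jit-limit
      priority: 20
      match:
        - label: node-role.kubernetes.io/worker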

KUBEFLOW-177 - Bearer token from application not forwarded by OAuth-proxy

You cannot use an application as a custom workbench image if its internal authentication mechanism is based on a bearer token. The OAuth-proxy configuration removes the bearer token from the headers, and the application cannot work properly.

Workaround
None.

RHOAIENG-16568 (previously documented as NOTEBOOKS-210) - A notebook fails to export as a PDF file in Jupyter

When you export a notebook as a PDF file in Jupyter, the export process fails with an error.

Workaround
None.

RHOAIENG-1210 (previously documented as ODH-DASHBOARD-1699) - Workbench does not automatically restart for all configuration changes

When you edit the configuration settings of a workbench, a warning message appears stating that the workbench will restart if you make any changes to its configuration settings. This warning is misleading because in the following cases, the workbench does not automatically restart:

  • Edit name
  • Edit description
  • Edit, add, or remove keys and values of existing environment variables

Workaround
Manually restart the workbench.

RHOAIENG-1208 (previously documented as ODH-DASHBOARD-1741) - Cannot create a workbench whose name begins with a number

If you try to create a workbench whose name begins with a number, the workbench does not start.

Workaround
Delete the workbench and create a new one with a name that begins with a letter.

KUBEFLOW-157 - Logging out of JupyterLab does not work if you are already logged out of the OpenShift AI dashboard

If you log out of the OpenShift AI dashboard before you log out of JupyterLab, then logging out of JupyterLab is not successful. For example, if you know the URL for a Jupyter notebook, you can open it again in your browser.

Workaround
Log out of JupyterLab before you log out of the OpenShift AI dashboard.

RHODS-9789 - Pipeline servers fail to start if they contain a custom database that includes a dash in its database name or username field

When you create a pipeline server that uses a custom database, if the value that you set for the dbname field or username field includes a dash, the pipeline server fails to start.

Workaround
Edit the pipeline server to omit the dash from the affected fields.

RHOAIENG-580 (previously documented as RHODS-9412) - Elyra pipeline fails to run if workbench is created by a user with edit permissions

If a user who has been granted edit permissions for a project creates a project workbench, that user sees the following behavior:

  • During the workbench creation process, the user sees an Error creating workbench message related to the creation of Kubernetes role bindings.
  • Despite the preceding error message, OpenShift AI still creates the workbench. However, the error message means that the user will not be able to use the workbench to run Elyra data science pipelines.
  • If the user tries to use the workbench to run an Elyra pipeline, Jupyter shows an Error making request message that describes failed initialization.

Workaround
A user with administrator permissions (for example, the project owner) must create the workbench on behalf of the user with edit permissions. That user can then use the workbench to run Elyra pipelines.

RHODS-7718 - User without dashboard permissions is able to continue using their running notebooks and workbenches indefinitely

When a Red Hat OpenShift AI administrator revokes a user’s permissions, the user can continue to use their running notebooks and workbenches indefinitely.

Workaround
When the OpenShift AI administrator revokes a user’s permissions, the administrator should also stop any running notebooks and workbenches for that user.

RHOAIENG-1157 (previously documented as RHODS-6955) - An error can occur when trying to edit a workbench

When editing a workbench, an error similar to the following can occur:

Error creating workbench
Operation cannot be fulfilled on notebooks.kubeflow.org "workbench-name": the object has been modified; please apply your changes to the latest version and try again

Workaround
None.

RHOAIENG-1152 (previously documented as RHODS-6356) - The notebook creation process fails for users who have never logged in to the dashboard

The dashboard’s notebook Administration page displays users belonging to the user group and admin group in OpenShift. However, if an administrator attempts to start a notebook server on behalf of a user who has never logged in to the dashboard, the server creation process fails and displays the following error message:

Request invalid against a username that does not exist.

Workaround
Request that the relevant user logs into the dashboard.

RHODS-5543 - When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler

When a pod cannot be scheduled due to insufficient available resources, the Node Autoscaler creates a new node. There is a delay until the newly created node receives the relevant GPU workload. Consequently, the pod cannot be scheduled and the Node Autoscaler continuously creates additional new nodes until one of the nodes is ready to receive the GPU workload. For more information about this issue, see the Red Hat Knowledgebase solution When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler.

Workaround
Apply the cluster-api/accelerator label in machineset.spec.template.spec.metadata. This causes the autoscaler to consider those nodes as unready until the GPU driver has been deployed.
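
For example, the following minimal sketch shows where the label belongs in a MachineSet; the resource name and label value are placeholders:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset_name>
  namespace: openshift-machine-api
spec:
  template:
    spec:
      metadata:
        labels:
          cluster-api/accelerator: <accelerator_type>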

RHOAIENG-1149 (previously documented as RHODS-5216) - The application launcher menu incorrectly displays a link to OpenShift Cluster Manager

Red Hat OpenShift AI incorrectly displays a link to the OpenShift Cluster Manager from the application launcher menu. Clicking this link results in a "Page Not Found" error because the URL is not valid.

Workaround
None.

RHOAIENG-1137 (previously documented as RHODS-5251) - Notebook server administration page shows users who have lost permission access

If a user who previously started a notebook server in Jupyter loses their permissions to do so (for example, if an OpenShift AI administrator changes the user’s group settings or removes the user from a permitted group), administrators continue to see the user’s notebook servers on the server Administration page. As a consequence, an administrator is able to restart notebook servers that belong to the user whose permissions were revoked.

Workaround
None.

RHODS-4799 - Tensorboard requires manual steps to view

When a user has TensorFlow or PyTorch notebook images and wants to use TensorBoard to display data, manual steps are necessary to include environment variables in the notebook environment, and to import those variables for use in your code.

Workaround

When you start your notebook server, use the following code to set the value for the TENSORBOARD_PROXY_URL environment variable to use your OpenShift AI user ID.

import os
os.environ["TENSORBOARD_PROXY_URL"] = os.environ["NB_PREFIX"] + "/proxy/6006/"

RHODS-4718 - The Intel® oneAPI AI Analytics Toolkits quick start references nonexistent sample notebooks

The Intel® oneAPI AI Analytics Toolkits quick start, located on the Resources page on the dashboard, requires the user to load sample notebooks as part of the instruction steps, but refers to notebooks that do not exist in the associated repository.

Workaround
None.

RHOAIENG-1141 (previously documented as RHODS-4502) - The NVIDIA GPU Operator tile on the dashboard displays button unnecessarily

GPUs are automatically available in Jupyter after the NVIDIA GPU Operator is installed. The Enable button, located on the NVIDIA GPU Operator tile on the Explore page, is therefore redundant. In addition, clicking the Enable button moves the NVIDIA GPU Operator tile to the Enabled page, even if the Operator is not installed.

Workaround
None.

RHODS-3984 - Incorrect package versions displayed during notebook selection

In the OpenShift AI interface, the Start a notebook server page displays incorrect version numbers for the JupyterLab and Notebook packages included in the oneAPI AI Analytics Toolkit notebook image. The page might also show an incorrect value for the Python version used by this image.

Workaround
When you start your oneAPI AI Analytics Toolkit notebook server, you can check which Python packages are installed on your notebook server and which version of the package you have by running the !pip list command in a notebook cell.

RHODS-2956 - Error can occur when creating a notebook instance

When creating a notebook instance in Jupyter, a Directory not found error appears intermittently. This error message can be ignored by clicking Dismiss.

Workaround
None.

RHOAIENG-1147 (previously documented as RHODS-2881) - Actions on dashboard not clearly visible

The dashboard actions to revalidate a disabled application license and to remove a disabled application tile are not clearly visible to the user. These actions appear when the user clicks on the application tile’s Disabled label. As a result, the intended workflows might not be clear to the user.

Workaround
None.

RHOAIENG-1134 (previously documented as RHODS-2879) - License revalidation action appears unnecessarily

The dashboard action to revalidate a disabled application license appears unnecessarily for applications that do not have a license validation or activation system. In addition, when a user attempts to revalidate a license that cannot be revalidated, feedback is not displayed to state why the action cannot be completed.

Workaround
None.

RHOAIENG-2305 (previously documented as RHODS-2650) - Error can occur during Pachyderm deployment

When creating an instance of the Pachyderm operator, a webhook error appears intermittently, preventing the creation process from starting successfully. The webhook error indicates that either the Pachyderm operator failed a health check, causing it to restart, or that the operator process exceeded its container's allocated memory limit, triggering an Out of Memory (OOM) kill.

Workaround
Repeat the Pachyderm instance creation process until the error no longer appears.

RHODS-2096 - IBM Watson Studio not available in OpenShift AI

IBM Watson Studio is not available when OpenShift AI is installed on OpenShift Dedicated 4.9 or higher, because it is not compatible with these versions of OpenShift Dedicated.

Workaround
Contact Marketplace support for assistance manually configuring Watson Studio on OpenShift Dedicated 4.9 and higher.