
Chapter 7. Known issues


This section describes known issues in Red Hat OpenShift AI 2.24 and any known methods of working around these issues.

RHOAIENG-35623 - Model deployment fails when using hardware profiles

Model deployments that use hardware profiles fail because, when you create InferenceService resources manually, the Red Hat OpenShift AI Operator does not inject the tolerations, nodeSelector, or identifiers from the hardware profile into the underlying InferenceService. As a result, the model deployment pods cannot be scheduled on suitable nodes and the deployment fails to enter a ready state. Workbenches that use the same hardware profile continue to deploy successfully.

Workaround
Run a script to manually inject the tolerations, nodeSelector, or identifiers from the hardware profile into the underlying InferenceService as described in the Knowledgebase solution Workaround for model deployment failure when using hardware profiles.
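
For reference, the values that the script copies from the hardware profile belong in the predictor spec of the InferenceService. The following is a minimal sketch, assuming a hypothetical hardware profile that defines an nvidia.com/gpu toleration and a node selector; replace the placeholder values with the values from your profile:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: <model-name>
  namespace: <project-namespace>
spec:
  predictor:
    # Placeholder values: copy the actual nodeSelector and tolerations
    # from your hardware profile definition.
    nodeSelector:
      nvidia.com/gpu.present: "true"
    tolerations:
      - key: nvidia.com/gpu
        operator: Exists
        effect: NoSchedule
    model:
      modelFormat:
        name: <model-format>
      runtime: <serving-runtime-name>
      storageUri: <model-storage-uri>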

RHOAIENG-33995 - Deployment of an inference service for Phi and Mistral models fails

Creating an inference service for Phi and Mistral models by using the vLLM runtime on an IBM Power cluster with OpenShift Container Platform 4.19 fails due to an error in the CPU backend. As a result, these models cannot be deployed.

Workaround
To resolve this issue, disable the sliding_window mechanism in the serving runtime if it is enabled for CPU and Phi models. Sliding window is not currently supported in the vLLM V1 engine.
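
One way to do this, assuming that your serving runtime invokes vLLM and that your vLLM version supports the --disable-sliding-window engine argument, is to add that argument to the runtime definition. A minimal sketch with placeholder names:

apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: <vllm-runtime-name>
spec:
  containers:
    - name: kserve-container
      image: <vllm-image>
      args:
        - --model=/mnt/models
        # Assumption: this vLLM flag turns off the sliding_window mechanism.
        - --disable-sliding-window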

RHOAIENG-33914 - LM-Eval Tier2 task test failures

If you are using an older version of the trustyai-service-operator, LM-Eval Tier2 task tests can fail because the Massive Multitask Language Understanding Symbol Replacement (MMLUSR) tasks are broken.

Workaround
Ensure that the latest version of the trustyai-service-operator is installed.

RHOAIENG-33795 - Manual Route creation needed for gRPC endpoint verification for Triton Inference Server on IBM Z

When you verify the Triton Inference Server with a gRPC endpoint, the Route is not created automatically. This happens because the Operator currently defaults to creating an edge-terminated route for REST only.

Workaround

To resolve this issue, create the Route manually to verify the gRPC endpoint for the Triton Inference Server on IBM Z:

  1. When the model deployment pod is up and running, define an edge-terminated Route object in a YAML file with the following contents:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: <grpc-route-name>                  # e.g. triton-grpc
      namespace: <model-deployment-namespace>  # namespace where your model is deployed
      labels:
        inferenceservice-name: <inference-service-name>
      annotations:
        haproxy.router.openshift.io/timeout: 30s
    spec:
      host: <custom-hostname>                  # e.g. triton-grpc.<apps-domain>
      to:
        kind: Service
        name: <service-name>                   # name of the predictor service (e.g. triton-predictor)
        weight: 100
      port:
        targetPort: grpc                       # must match the gRPC port exposed by the service
      tls:
        termination: edge
      wildcardPolicy: None
  2. Create the Route object:

    oc apply -f <route-file-name>.yaml
  3. To send an inference request, enter the following command:

    grpcurl -cacert <ca_cert_file> \
      -protoset triton_desc.pb \
      -d '{
        "model_name": "<model_name>",
        "inputs": [
          {
            "name": "<input_tensor_name>",
            "shape": [<shape>],
            "datatype": "<data_type>",
            "contents": {
              "<datatype_specific_contents>": [<input_data_values>]
            }
          }
        ],
        "outputs": [
          {
            "name": "<output_tensor_name>"
          }
        ]
      }' \
      <grpc_route_host>:443 \
      inference.GRPCInferenceService/ModelInfer

    In this command, <ca_cert_file> is the path to your cluster router CA certificate (for example, router-ca.crt).
Note

The triton_desc.pb file passed to -protoset is a compiled protobuf descriptor file. You can generate it by running protoc -I. --descriptor_set_out=triton_desc.pb --include_imports grpc_service.proto.

Download the grpc_service.proto and model_config.proto files from the triton-inference-server GitHub repository.

RHOAIENG-33697 - Unable to Edit or Delete models unless status is "Started"

When you deploy a model on the NVIDIA NIM or single-model serving platform, the Edit and Delete options in the action menu are not available for models in the Starting or Pending states. These options become available only after the model has been successfully deployed.

Workaround
Wait until the model is in the Started state to make any changes or to delete the model.

RHOAIENG-33645 - LM-Eval Tier1 test failures

If you are using an older version of the trustyai-service-operator, LM-Eval Tier1 tests can fail because confirm_run_unsafe_code is not passed as an argument when a job is run.

Workaround
Ensure that you are using the latest version of the trustyai-service-operator and that AllowCodeExecution is enabled.

RHOAIENG-32942 - Elyra pipelines fail when pipeline store is set to Kubernetes

When the pipeline store is configured to use Kubernetes, Elyra requires equality (eq) filters that are not supported by the REST API. Only substring filters are supported in this mode. As a result, pipelines created and submitted through Elyra from a workbench cannot run successfully. Submissions fail with the following error:

Invalid input error: Filter eq is not implemented for Kubernetes pipeline store.
Workaround

Configure the pipeline server to use the database instead of Kubernetes for storing pipelines:

  • When creating a pipeline server, set the pipeline store to database.
  • If the server is already created, update the DataSciencePipelinesApplication custom resource by setting .spec.pipelineStore to database. This triggers the dspa pod to be recreated.

After switching the pipeline store to database, Elyra pipelines can be submitted successfully from a workbench.
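
For an existing pipeline server, the update described above can be applied with a single patch. A minimal sketch, assuming the DataSciencePipelinesApplication is named <dspa-name> in your project namespace:

# Switch the pipeline store from Kubernetes to the database.
# The dspa pod is recreated after the patch is applied.
oc patch datasciencepipelinesapplication <dspa-name> -n <project-namespace> \
  --type=merge -p '{"spec": {"pipelineStore": "database"}}'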

RHOAIENG-32897 - Pipelines defined with the Kubernetes API and invalid platformSpec do not appear in the UI or run

When a pipeline version defined with the Kubernetes API includes an empty or invalid spec.platformSpec field (for example, {} or missing the kubernetes key), the system misidentifies the field as the pipeline specification. As a result, the REST API omits the pipelineSpec, which prevents the pipeline version from being displayed in the UI and from running.

Workaround
Remove the spec.platformSpec field from the PipelineVersion object. After removing the field, the pipeline version is displayed correctly in the UI and the REST API returns the pipelineSpec as expected.
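
For example, you can remove the field with a JSON patch. A minimal sketch, assuming the affected PipelineVersion custom resource is named <pipeline-version-name> in your project namespace:

# Remove the empty or invalid spec.platformSpec field from the PipelineVersion.
oc patch pipelineversion <pipeline-version-name> -n <project-namespace> \
  --type=json -p '[{"op": "remove", "path": "/spec/platformSpec"}]'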

RHOAIENG-31386 - Error deploying an Inference Service with authenticationRef

When deploying an InferenceService with authenticationRef under external metrics, the authenticationRef field is removed after the first oc apply.

Workaround
Re-apply the resource to retain the field.

RHOAIENG-30493 - Error creating a workbench in a Kueue-enabled project

When using the dashboard to create a workbench in a Kueue-enabled project, the creation fails if Kueue is disabled on the cluster or if the selected hardware profile is not associated with a LocalQueue. In this case, the required LocalQueue cannot be referenced, the admission webhook validation fails, and the following error message is shown:

Error creating workbench
admission webhook "kubeflow-kueuelabels-validator.opendatahub.io" denied the request: Kueue label validation failed: missing required label "kueue.x-k8s.io/queue-name"
Workaround

Enable Kueue and hardware profiles on your cluster as a user with cluster-admin permissions:

  1. Log in to your cluster by using the oc client.
  2. Run the following command to patch the OdhDashboardConfig custom resource in the redhat-ods-applications namespace:
oc patch OdhDashboardConfig odh-dashboard-config -n redhat-ods-applications --type=merge -p '{"spec": {"dashboardConfig": {"disableKueue": false, "disableHardwareProfiles": false}}}'

RHOAIENG-31238 - New observability stack enabled when creating DSCInitialization

When you remove a DSCInitialization resource and create a new one by using the OpenShift AI console form view, a Technology Preview observability stack is enabled. As a result, an unwanted observability stack is deployed when you recreate the DSCInitialization resource.

Workaround

To resolve this issue, manually remove the "metrics" and "traces" fields when recreating the DSCInitialization resource by using the form view.

This is not required if you want to use the Technology Preview observability stack.

RHOAIENG-32145 - Llama Stack Operator deployment failures on OpenShift versions earlier than 4.17

When installing OpenShift AI on OpenShift clusters running versions earlier than 4.17, the integrated Llama Stack Operator (llamastackoperator) might fail to deploy.

The Llama Stack Operator requires Kubernetes version 1.32 or later, but OpenShift 4.15 uses Kubernetes 1.28. This version gap can cause schema validation failures when applying the LlamaStackDistribution custom resource definition (CRD), due to unsupported selectable fields introduced in Kubernetes 1.32.

Workaround
Install OpenShift AI on an OpenShift cluster running version 4.17 or later.

RHOAIENG-32242 - Failure on creating NetworkPolicies for OpenShift versions 4.15 and 4.16

When installing OpenShift AI on OpenShift clusters running versions 4.15 or 4.16, deployment of certain NetworkPolicy resources might fail. This can occur when the llamastackoperator or related components attempt to create a NetworkPolicy in a protected namespace, such as redhat-ods-applications. The request can be blocked by the admission webhook networkpolicies-validation.managed.openshift.io, which restricts modifications to certain namespaces and resources, even for cluster-admin users. This restriction can apply to both self-managed and Red Hat–managed OpenShift environments.

Workaround
Deploy OpenShift AI on an OpenShift cluster running version 4.17 or later. For clusters where the webhook restriction is enforced, contact your OpenShift administrator or Red Hat Support to determine an alternative deployment pattern or approved change to the affected namespace.

RHOAIENG-32599 - Inference service creation fails on IBM Z cluster

When you attempt to create an inference service using the vLLM runtime on an IBM Z cluster, it fails with the following error: ValueError: 'aimv2' is already used by a Transformers config, pick another name.

Workaround
None.

RHOAIENG-29731 - Inference service creation fails on IBM Power cluster with OpenShift 4.19

When you attempt to create an inference service by using the vLLM runtime on an IBM Power cluster on OpenShift Container Platform version 4.19, it fails due to an error related to Non-Uniform Memory Access (NUMA).

Workaround
When you create an inference service, set the environment variable VLLM_CPU_OMP_THREADS_BIND to all.
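
A minimal sketch of where the variable goes in the InferenceService spec; the metadata and model values are placeholders:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: <model-name>
  namespace: <project-namespace>
spec:
  predictor:
    model:
      modelFormat:
        name: <model-format>
      runtime: <vllm-runtime-name>
      storageUri: <model-storage-uri>
      env:
        # Bind vLLM CPU OpenMP threads to all available CPUs.
        - name: VLLM_CPU_OMP_THREADS_BIND
          value: "all"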

RHOAIENG-29292 - vLLM logs permission errors on IBM Z due to usage stats directory access

When running vLLM on the IBM Z architecture, the inference service starts successfully, but logs an error in a background thread related to usage statistics reporting. This happens because the service tries to write usage data to a restricted location (/.config), which it does not have permission to access.

The following error appears in the logs:

Exception in thread Thread-2 (_report_usage_worker):
Traceback (most recent call last):
 ...
PermissionError: [Error 13] Permission denied: '/.config'
Workaround
To prevent this error and suppress the usage stats logging, set the VLLM_NO_USAGE_STATS=1 environment variable in the inference service deployment. This disables automatic usage reporting and avoids permission issues when the service writes to system directories.
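
The variable can be set in the same place as other vLLM environment variables. A minimal sketch showing only the relevant fields of the InferenceService predictor spec, with placeholder names:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: <model-name>
spec:
  predictor:
    model:
      env:
        # Disables automatic usage statistics reporting.
        - name: VLLM_NO_USAGE_STATS
          value: "1"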

RHOAIENG-28910 - Unmanaged KServe resources are deleted after upgrading from 2.16 to 2.19 or later

During the upgrade from OpenShift AI 2.16 to 2.24, the FeatureTracker custom resource (CR) is deleted before its owner references are fully removed from associated KServe-related resources. As a result, resources that were originally created by the Red Hat OpenShift AI Operator with a Managed state and later changed to Unmanaged in the DataScienceCluster (DSC) custom resource (CR) might be unintentionally removed. This issue can disrupt model serving functionality until the resources are manually restored.

The following resources might be deleted in 2.24 if they were changed to Unmanaged in 2.16:

Kind                Namespace         Name
KnativeServing      knative-serving   knative-serving
ServiceMeshMember   knative-serving   default
Gateway             istio-system      kserve-local-gateway
Gateway             knative-serving   knative-ingress-gateway
Gateway             knative-serving   knative-local-gateway

Workaround

If you have already upgraded from OpenShift AI 2.16 to 2.24, perform one of the following actions:

  • If you have an existing backup, manually recreate the deleted resources without owner references to the FeatureTracker CR.
  • If you do not have an existing backup, you can use the Operator to recreate the deleted resources:

    1. Back up any resources you have already recreated.
    2. In the DSC, set spec.components.kserve.serving.managementState to Managed, and then save the change to allow the Operator to recreate the resources.

      Wait until the Operator has recreated the resources.

    3. In the DSC, set spec.components.kserve.serving.managementState back to Unmanaged, and then save the change.
    4. Reapply any previous custom changes to the recreated KnativeServing, ServiceMeshMember, and Gateway CRs.

If you have not yet upgraded, perform the following actions before upgrading to prevent this issue:

  1. In the DSC, set spec.components.kserve.serving.managementState to Unmanaged.
  2. For each of the affected KnativeServing, ServiceMeshMember, and Gateway resources listed in the above table, edit its CR by deleting the FeatureTracker owner reference. This edit removes the resource’s dependency on the FeatureTracker and prevents the deletion of the resource during the upgrade process.
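
For the owner-reference edit in step 2, a minimal sketch, assuming that the FeatureTracker reference is the only entry in metadata.ownerReferences of the affected resource; if other owner references exist, use oc edit and delete only the FeatureTracker entry:

# Example for the KnativeServing resource; repeat for each affected resource.
oc patch knativeserving knative-serving -n knative-serving \
  --type=json -p '[{"op": "remove", "path": "/metadata/ownerReferences"}]'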

RHOAIENG-24545 - Runtime images are not present in the workbench after the first start

The list of runtime images does not populate properly in the first workbench instance that runs in a namespace. As a result, no image is shown for selection in the Elyra pipeline editor.

Workaround
Restart the workbench. After the restart, the list of runtime images populates the workbench and the selection box in the Elyra pipeline editor.

RHOAIENG-25090 - InstructLab prerequisites-check-op task fails when the model registration option is disabled

When you start a LAB-tuning run without selecting the Add model to <model registry name> checkbox, the InstructLab pipeline starts, but the prerequisites-check-op task fails with the following error in the pod logs:

failed: failed to resolve inputs: the resolved input parameter is null: output_model_name
Workaround
Select the Add model to <model registry name> checkbox when you configure the LAB-tuning run.

RHOAIENG-24786 - Upgrading the Authorino Operator from Technical Preview to Stable fails in disconnected environments

In disconnected environments, upgrading the Red Hat Authorino Operator from Technical Preview to Stable fails with an error in the authconfig-migrator-qqttz pod.

Workaround
  1. Update the Red Hat Authorino Operator to the latest version in the tech-preview-v1 update channel (v1.1.2).
  2. Run the following script:

    #!/bin/bash
    set -xe
    
    # delete the migrator job
    oc delete job -n openshift-operators authconfig-migrator || true
    
    # run the migrator script
    authconfigs=$(oc get authconfigs -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name' --no-headers)
    
    if [[ ! -z "${authconfigs}" ]]; then
        while IFS=" " read -r namespace name; do
            oc get authconfig "$name" -n "$namespace" -o yaml > "/tmp/${name}.${namespace}.authconfig.yaml"
            oc apply -f "/tmp/${name}.${namespace}.authconfig.yaml"
        done <<< "${authconfigs}"
    fi
    
    oc patch crds authconfigs.authorino.kuadrant.io --type=merge --subresource status --patch '{"status":{"storedVersions":["v1beta2"]}}'
  3. Update the Red Hat Authorino Operator subscription to use the stable update channel.
  4. Select the update option for Authorino 1.2.1.

RHOAIENG-20209 - Warning message not displayed when requested resources exceed threshold

When you click Distributed workloads → Project metrics and view the Requested resources section, the charts show the requested resource values and the total shared quota value for each resource (CPU and Memory). However, when the Requested by all projects value exceeds the Warning threshold value for that resource, the expected warning message is not displayed.

Workaround
None.

SRVKS-1301 (previously documented as RHOAIENG-18590) - The KnativeServing resource fails after disabling and enabling KServe

After disabling and enabling the kserve component in the DataScienceCluster, the KnativeServing resource might fail.

Workaround

Delete all ValidatingWebhookConfiguration and MutatingWebhookConfiguration webhooks related to Knative:

  1. Get the webhooks:

    oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration | grep -i knative
  2. Ensure KServe is disabled.
  3. Get the webhooks:

    oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration | grep -i knative
  4. Delete the webhooks, as shown in the example after this procedure.
  5. Enable KServe.
  6. Verify that the KServe pod can successfully spawn, and that pods in the knative-serving namespace are active and operational.
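
The following is a minimal sketch of step 4, assuming that you want to delete every Knative-related webhook configuration returned by the previous command:

# Delete all Knative-related validating and mutating webhook configurations.
oc delete $(oc get validatingwebhookconfiguration,mutatingwebhookconfiguration -o name | grep -i knative)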

RHOAIENG-16247 - Elyra pipeline run outputs are overwritten when runs are launched from OpenShift AI dashboard

When a pipeline is created and run from Elyra, outputs generated by the pipeline run are stored in the folder bucket-name/pipeline-name-timestamp of object storage.

When a pipeline is created from Elyra and the pipeline run is started from the OpenShift AI dashboard, the timestamp value is not updated. This can cause pipeline runs to overwrite files created by previous pipeline runs of the same pipeline.

This issue does not affect pipelines compiled and imported using the OpenShift AI dashboard because runid is always added to the folder used in object storage. For more information about storage locations used in data science pipelines, see Storing data with data science pipelines.

Workaround
When storing files in an Elyra pipeline, use different subfolder names on each pipeline run.

OCPBUGS-49422 - AMD GPUs and AMD ROCm workbench images are not supported in a disconnected environment

This release of OpenShift AI does not support AMD GPUs and AMD ROCm workbench images in a disconnected environment because installing the AMD GPU Operator requires internet access to fetch dependencies needed to compile GPU drivers.

Workaround
None.

RHOAIENG-12516 - fast releases are available in unintended release channels

Due to a known issue with the stream image delivery process, fast releases are currently available on unintended release channels, for example, stable and stable-x.y. For accurate release type, channel, and support lifecycle information, refer to the Life-cycle Dates table on the Red Hat OpenShift AI Self-Managed Life Cycle page.

Workaround
None.

RHOAIENG-8294 - CodeFlare error when upgrading OpenShift AI 2.8 to version 2.10 or later

If you try to upgrade OpenShift AI 2.8 to version 2.10 or later, the following error message is shown for the CodeFlare component, due to a mismatch with the AppWrapper custom resource definition (CRD) version.

ReconcileCompletedWithComponentErrors DataScienceCluster resource reconciled with component errors: 1 error occurred: * CustomResourceDefinition.apiextensions.k8s.io "appwrappers.workload.codeflare.dev" is invalid: status.storedVersions[0]: Invalid value: "v1beta1": must appear in spec.versions
Workaround
  1. Delete the existing AppWrapper CRD:

    $ oc delete crd appwrappers.workload.codeflare.dev
  2. Wait for about 20 seconds, and then ensure that a new AppWrapper CRD is automatically applied, as shown in the following example:

    $ oc get crd appwrappers.workload.codeflare.dev
    NAME                                 CREATED AT
    appwrappers.workload.codeflare.dev   2024-11-22T18:35:04Z

RHOAIENG-7716 - Pipeline condition group status does not update

When you run a pipeline that has loops (dsl.ParallelFor) or condition groups (dsl.If), the UI displays a Running status for the loops and groups, even after the pipeline execution is complete.

Workaround

You can confirm whether a pipeline is still running by checking the status of its child tasks:

  1. From the OpenShift AI dashboard, click Data Science Pipelines → Runs.
  2. From the Project list, click your data science project.
  3. From the Runs tab, click the pipeline run that you want to check the status of.
  4. Expand the condition group and click a child task.

    A panel that contains information about the child task is displayed.

  5. On the panel, click the Task details tab.

    The Status field displays the correct status for the child task.

RHOAIENG-6409 - Cannot save parameter errors appear in pipeline logs for successful runs

When you run a pipeline more than once with data science pipelines 2.0, Cannot save parameter errors appear in the pipeline logs for successful pipeline runs. You can safely ignore these errors.

Workaround
None.

RHOAIENG-12294 (previously documented as RHOAIENG-4812) - Distributed workload metrics exclude GPU metrics

In this release of OpenShift AI, the distributed workload metrics exclude GPU metrics.

Workaround
None.

RHOAIENG-4570 - Existing Argo Workflows installation conflicts with install or upgrade

Data science pipelines 2.0 contains an installation of Argo Workflows. Red Hat does not support direct customer use of this instance of Argo Workflows. To install or upgrade OpenShift AI with data science pipelines 2.0, ensure that there is no existing installation of Argo Workflows on your cluster. For more information, see Migrating to data science pipelines 2.0.

Workaround
Remove the existing Argo Workflows installation or set datasciencepipelines to Removed, and then proceed with the installation or upgrade.
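
If you choose to set datasciencepipelines to Removed, the relevant part of the DataScienceCluster resource looks like the following sketch; the resource is typically named default-dsc, but check the name in your cluster:

apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    datasciencepipelines:
      # Prevents the data science pipelines component (and its Argo Workflows
      # installation) from being deployed.
      managementState: Removed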

RHOAIENG-3025 - OVMS expected directory layout conflicts with the KServe StoragePuller layout

When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform (which uses KServe), there is a mismatch between the directory layout expected by OVMS and that of the model-pulling logic used by KServe. Specifically, OVMS requires the model files to be in the /<mnt>/models/1/ directory, while KServe places them in the /<mnt>/models/ directory.

Workaround

Perform the following actions:

  1. In your S3-compatible storage bucket, place your model files in a directory called 1/, for example, /<s3_storage_bucket>/models/1/<model_files>.
  2. To use the OVMS runtime to deploy a model on the single-model serving platform, choose one of the following options to specify the path to your model files:

    • If you are using the OpenShift AI dashboard to deploy your model, in the Path field for your data connection, use the /<s3_storage_bucket>/models/ format to specify the path to your model files. Do not specify the 1/ directory as part of the path.
    • If you are creating your own InferenceService custom resource to deploy your model, configure the value of the storageURI field as /<s3_storage_bucket>/models/. Do not specify the 1/ directory as part of the path.

KServe pulls model files from the subdirectory in the path that you specified. In this case, KServe correctly pulls model files from the /<s3_storage_bucket>/models/1/ directory in your S3-compatible storage.
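
For the second option (creating your own InferenceService), a minimal sketch of the relevant fields, assuming S3-style addressing for the storage URI; adjust the scheme and placeholder values to match your storage configuration:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: <model-name>
spec:
  predictor:
    model:
      modelFormat:
        name: <model-format>
      runtime: <ovms-runtime-name>
      # Do not append the 1/ directory; KServe pulls from it automatically.
      storageUri: s3://<s3_storage_bucket>/models/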

RHOAIENG-3018 - OVMS on KServe does not expose the correct endpoint in the dashboard

When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform, the URL shown in the Inference endpoint field for the deployed model is not complete.

Workaround
To send queries to the model, you must add the /v2/models/<model-name>/infer string to the end of the URL. Replace <model-name> with the name of your deployed model.
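
For example, a query against the completed URL might look like the following sketch; the tensor name, shape, datatype, and data values are placeholders that depend on your model:

# Send an inference request using the v2 REST protocol.
curl -k https://<inference-endpoint>/v2/models/<model-name>/infer \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [
      {
        "name": "<input_tensor_name>",
        "shape": [1, 4],
        "datatype": "FP32",
        "data": [1.0, 2.0, 3.0, 4.0]
      }
    ]
  }'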

RHOAIENG-2602 - "Average response time" server metric graph shows multiple lines due to ModelMesh pod restart

The Average response time server metric graph shows multiple lines if the ModelMesh pod is restarted.

Workaround
None.

RHOAIENG-2228 - The performance metrics graph changes constantly when the interval is set to 15 seconds

On the Endpoint performance tab of the model metrics screen, if you set the Refresh interval to 15 seconds and the Time range to 1 hour, the graph results change continuously.

Workaround
None.

RHOAIENG-2183 - Endpoint performance graphs might show incorrect labels

In the Endpoint performance tab of the model metrics screen, the graph tooltip might show incorrect labels.

Workaround
None.

RHOAIENG-131 - gRPC endpoint not responding properly after the InferenceService reports as Loaded

When numerous InferenceService instances are created and requests are directed to them, the Service Mesh Control Plane (SMCP) becomes unresponsive. The status of the InferenceService instance is Loaded, but calls to the gRPC endpoint return errors.

Workaround
Edit the ServiceMeshControlPlane custom resource (CR) to increase the memory limit of the Istio egress and ingress pods.
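
A minimal sketch of the edit, assuming the default SMCP created for OpenShift AI (data-science-smcp in the istio-system namespace) and an example limit of 2Gi; adjust the names and values for your environment:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: data-science-smcp
  namespace: istio-system
spec:
  gateways:
    ingress:
      runtime:
        container:
          resources:
            limits:
              memory: 2Gi
    egress:
      runtime:
        container:
          resources:
            limits:
              memory: 2Gi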

RHOAIENG-1619 (previously documented as DATA-SCIENCE-PIPELINES-165) - Poor error message when S3 bucket is not writable

When you set up a data connection and the S3 bucket is not writable, and you try to upload a pipeline, the error message Failed to store pipelines is not helpful.

Workaround
Verify that your data connection credentials are correct and that you have write access to the bucket you specified.

RHOAIENG-1207 (previously documented as ODH-DASHBOARD-1758) - Error duplicating OOTB custom serving runtimes several times

If you duplicate a model-serving runtime several times, the duplication fails with the Serving runtime name "<name>" already exists error message.

Workaround
Change the metadata.name field to a unique value.

RHOAIENG-133 - Existing workbench cannot run Elyra pipeline after workbench restart

If you use the Elyra JupyterLab extension to create and run data science pipelines within JupyterLab, and you configure the pipeline server after you have created a workbench and specified a workbench image for it, you cannot execute the pipeline, even after restarting the workbench.

Workaround
  1. Stop the running workbench.
  2. Edit the workbench to make a small modification. For example, add a new dummy environment variable, or delete an existing unnecessary environment variable. Save your changes.
  3. Restart the workbench.
  4. In the left sidebar of JupyterLab, click Runtimes.
  5. Confirm that the default runtime is selected.

RHODS-12798 - Pods fail with "unable to init seccomp" error

Pods fail with CreateContainerError status or Pending status instead of Running status, because of a known kernel bug that introduced a seccomp memory leak. When you check the events on the namespace where the pod is failing, or run the oc describe pod command, the following error appears:

runc create failed: unable to start container process: unable to init seccomp: error loading seccomp filter into kernel: error loading seccomp filter: errno 524
Workaround
Increase the value of net.core.bpf_jit_limit as described in the Red Hat Knowledgebase solution Pods failing with error loading seccomp filter into kernel: errno 524 in OpenShift 4.

KUBEFLOW-177 - Bearer token from application not forwarded by OAuth-proxy

You cannot use an application as a custom workbench image if its internal authentication mechanism is based on a bearer token. The OAuth-proxy configuration removes the bearer token from the headers, and the application cannot work properly.

Workaround
None.

KUBEFLOW-157 - Logging out of JupyterLab does not work if you are already logged out of the OpenShift AI dashboard

If you log out of the OpenShift AI dashboard before you log out of JupyterLab, then logging out of JupyterLab is not successful. For example, if you know the URL for a Jupyter notebook, you can open it again in your browser.

Workaround
Log out of JupyterLab before you log out of the OpenShift AI dashboard.

RHODS-7718 - User without dashboard permissions is able to continue using their running workbenches indefinitely

When a Red Hat OpenShift AI administrator revokes a user’s permissions, the user can continue to use their running workbenches indefinitely.

Workaround
When the OpenShift AI administrator revokes a user’s permissions, the administrator should also stop any running workbenches for that user.

RHOAIENG-1152 (previously documented as RHODS-6356) - The basic-workbench creation process fails for users who have never logged in to the dashboard

The dashboard’s Administration page for basic workbenches displays users who belong to the user group and admin group in OpenShift. However, if an administrator attempts to start a basic workbench on behalf of a user who has never logged in to the dashboard, the basic-workbench creation process fails and displays the following error message:

Request invalid against a username that does not exist.
Workaround
Request that the relevant user logs into the dashboard.

RHODS-5543 - When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler

When a pod cannot be scheduled due to insufficient available resources, the Node Autoscaler creates a new node. There is a delay until the newly created node receives the relevant GPU workload. Consequently, the pod cannot be scheduled and the Node Autoscaler continuously creates additional new nodes until one of the nodes is ready to receive the GPU workload. For more information about this issue, see the Red Hat Knowledgebase solution When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler.

Workaround
Apply the cluster-api/accelerator label in machineset.spec.template.spec.metadata. This causes the autoscaler to consider those nodes as unready until the GPU driver has been deployed.
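
A minimal sketch of where the label goes in the MachineSet; the MachineSet name and label value are placeholders:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <gpu-machineset-name>
  namespace: openshift-machine-api
spec:
  template:
    spec:
      metadata:
        labels:
          # Marks nodes from this MachineSet as GPU nodes for the autoscaler.
          cluster-api/accelerator: <accelerator-type>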

RHODS-4799 - Tensorboard requires manual steps to view

If you use a TensorFlow or PyTorch workbench image and want to use TensorBoard to display data, you must perform manual steps to include environment variables in the workbench environment and to import those variables for use in your code.

Workaround

When you start your basic workbench, use the following code to set the value for the TENSORBOARD_PROXY_URL environment variable to use your OpenShift AI user ID.

import os
os.environ["TENSORBOARD_PROXY_URL"]= os.environ["NB_PREFIX"]+"/proxy/6006/"

RHODS-4718 - The Intel® oneAPI AI Analytics Toolkits quick start references nonexistent sample notebooks

The Intel® oneAPI AI Analytics Toolkits quick start, located on the Resources page on the dashboard, requires the user to load sample notebooks as part of the instruction steps, but refers to notebooks that do not exist in the associated repository.

Workaround
None.

RHODS-3984 - Incorrect package versions displayed during notebook selection

In the OpenShift AI interface, the Start a notebook server page displays incorrect version numbers for the JupyterLab and Notebook packages included in the oneAPI AI Analytics Toolkit notebook image. The page might also show an incorrect value for the Python version used by this image.

Workaround
When you start your oneAPI AI Analytics Toolkit notebook server, you can check which Python packages are installed on your notebook server and which version of the package you have by running the !pip list command in a notebook cell.

RHOAIENG-1147 (previously documented as RHODS-2881) - Actions on dashboard not clearly visible

The dashboard actions to revalidate a disabled application license and to remove a disabled application tile are not clearly visible to the user. These actions appear when the user clicks on the application tile’s Disabled label. As a result, the intended workflows might not be clear to the user.

Workaround
None.

RHODS-2096 - IBM Watson Studio not available in OpenShift AI

IBM Watson Studio is not available when OpenShift AI is installed on OpenShift Dedicated 4.9 or higher, because it is not compatible with these versions of OpenShift Dedicated.

Workaround
Contact Marketplace support for assistance manually configuring Watson Studio on OpenShift Dedicated 4.9 and higher.