Chapter 8. Known issues
This section describes known issues in Red Hat OpenShift AI 3.0 and any known methods of working around these issues.
RHOAIENG-37228 - Manual DNS configuration required on OpenStack and private cloud environments
When deploying OpenShift AI 3.0 on OpenStack, CodeReady Containers (CRC), or other private cloud environments without integrated external DNS, external access to components such as the dashboard and workbenches might fail after installation. This occurs because the dynamically provisioned LoadBalancer Service does not automatically register its IP address in external DNS.
- Workaround
- To restore access, manually create the required A or CNAME records in your external DNS system. For instructions, see the Configuring External DNS for RHOAI 3.x on OpenStack and Private Clouds Knowledgebase article.
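For example, a hedged lookup-and-record sketch (the namespace is an assumption; adapt the names to your cluster and DNS provider):
# Find the external IP of the dynamically provisioned LoadBalancer
# Service (the namespace shown is illustrative):
$ oc get svc -n openshift-ingress -o wide
# Then create the matching record in your external DNS, for example a
# BIND-style A record for the dashboard host:
# *.apps.example.com.  300  IN  A  203.0.113.10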
RHOAIENG-38658 - TrustyAI service issues during model inference with token authentication on IBM Z (s390x)
On IBM Z (s390x) architecture, the TrustyAI service encounters errors during model inference when token authentication is enabled. A JsonParseException appears in the TrustyAI service logs, causing the bias monitoring process to fail or behave unexpectedly.
- Workaround
- Run the TrustyAI service without authentication. The issue occurs only when token authentication is enabled.
RHOAIENG-38333 - Code generated by the Generative AI Playground is invalid and required packages are missing from workbenches
Code automatically generated by the Generative AI Playground might cause syntax errors when run in OpenShift AI workbenches. Additionally, the LlamaStackClient package is not currently included in standard workbench images.
RHOAIENG-38263 - Intermittent failures with Guardrails Detector model on Hugging Face runtime for IBM Z
On IBM Z platforms, the Guardrails Detector model running on the Hugging Face runtime might intermittently fail to process identical requests. In some cases, a request that previously returned valid results fails with a parse error similar to the following example:
Invalid numeric literal at line 1, column 20
This error can cause the serving pod to temporarily enter a CrashLoopBackOff state, although it typically recovers automatically.
- Workaround
- None. The pod restarts automatically and resumes normal operation.
RHOAIENG-38253 - Distributed Inference Server with llm-d not listed on the Serving Runtimes page
While Distributed Inference Server with llm-d appears as an available option when deploying a model, it is not listed on the Serving Runtimes page under the Settings section.
This occurs because Distributed Inference Server with llm-d is a composite deployment type that includes additional components beyond a standard serving runtime. It therefore does not appear in the list of serving runtimes visible to administrators and cannot currently be hidden from end users.
- Workaround
- None. The Distributed Inference Server with llm-d option can still be used for model deployments, but it cannot be managed or viewed from the Serving Runtimes page.
RHOAIENG-38252 - Model Registry Operator does not work with BYOIDC mode on OpenShift 4.20
On OpenShift 4.20 clusters configured with Bring Your Own Identity Provider (BYOIDC) mode, deploying the Model Registry Operator fails.
When you create a ModelRegistry custom resource, it does not reach the available: True state.
- Workaround
- None.
You cannot create or deploy a Model Registry instance when using BYOIDC mode on OpenShift 4.20.
RHOAIENG-38180 - Workbench requests to Feature Store service result in certificate errors
When using the default configuration, the Feature Store (Feast) deployment is missing required certificates and a service endpoint. As a result, workbenches cannot send requests to the Feature Store by using the Feast SDK.
- Workaround
- Delete the existing FeatureStore custom resource (CR), then create a new one with the following configuration:

registry:
  local:
    server:
      restAPI: false
After the Feature Store pod starts running, edit the same CR to set registry.local.server.restAPI: true and save it without deleting the CR. Verify that both REST and gRPC services are created in your namespace, and wait for the pod to restart and become ready.
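A minimal sketch of the full CR, assuming the Feast operator's feast.dev/v1alpha1 API version and a services block around the fields above (verify the exact schema against your installed operator):

apiVersion: feast.dev/v1alpha1    # assumed API version; check your operator
kind: FeatureStore
metadata:
  name: sample-feature-store      # placeholder name
spec:
  feastProject: my_project        # hypothetical Feast project name
  services:
    registry:
      local:
        server:
          restAPI: false          # set to true after the pod is running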
RHOAIENG-37916 - LLM-D deployed model shows failed status on the Deployments page
Models deployed with the Distributed Inference Server with llm-d runtime initially display a Failed status on the Deployments page in the OpenShift AI dashboard, even though the associated pod logs report no errors or failures.
To confirm the status of the deployment, use the OpenShift console to monitor the pods in the project. When the model is ready, the OpenShift AI dashboard updates the status to Started.
- Workaround
- Wait for the model status to update automatically, or check the pod statuses in the OpenShift console to verify that the model has started successfully.
RHOAIENG-37882 - Custom workbench (AnythingLLM) fails to load
Deploying a custom workbench such as AnythingLLM 1.8.5 might fail to finish loading. Starting with OpenShift AI 3.0, all workbenches must be compatible with the Kubernetes Gateway API’s path-based routing. Custom workbench images that do not support this requirement fail to load correctly.
- Workaround
- Update your custom workbench image to support path-based routing by serving all content from the ${NB_PREFIX} path (for example, /notebook/<namespace>/<workbench-name>). Requests to paths outside this prefix (such as /index.html or /api/data) are not routed to the workbench container.
To fix existing workbenches:
- Update your application to handle requests at ${NB_PREFIX}/... paths.
- Configure the base path in your framework, for example: FastAPI(root_path=os.getenv('NB_PREFIX', '')), as in the sketch after this list.
- Update nginx to preserve the prefix in redirects.
- Implement health endpoints returning HTTP 200 at ${NB_PREFIX}/api, ${NB_PREFIX}/api/kernels, and ${NB_PREFIX}/api/terminals.
- Use relative URLs and remove any hardcoded absolute paths such as /menu.
For more information, see the migration guide: Gateway API migration guide.
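A minimal FastAPI sketch of this pattern, assuming the NB_PREFIX environment variable is injected into the workbench container and that the Gateway forwards requests with the prefix intact (verify both against the migration guide):

import os

from fastapi import FastAPI

# NB_PREFIX is injected by the platform,
# for example /notebook/<namespace>/<workbench-name>
prefix = os.getenv("NB_PREFIX", "")

# root_path makes generated URLs (docs, redirects) prefix-aware
app = FastAPI(root_path=prefix)

# Register the health endpoints under the prefix so the platform's
# probes at ${NB_PREFIX}/api and friends receive HTTP 200
@app.get(prefix + "/api")
@app.get(prefix + "/api/kernels")
@app.get(prefix + "/api/terminals")
def health() -> dict:
    return {"status": "ok"}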
RHOAIENG-37855 - Model deployment from Model Catalog fails due to name length limit
When deploying certain models from the Model Catalog, the deployment might fail silently and remain in the Starting state. This issue occurs because KServe cannot create a deployment from the InferenceService when the resulting object name exceeds the 63-character limit.
- Example
- Attempting to deploy the model RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic results in KServe trying to create a deployment named isvc.redhataimistral-small-31-24b-instruct-2503-fp8-dynamic-predictor, which has 69 characters and exceeds the maximum allowed length.
- Workaround
- Use shorter model names or rename the InferenceService to ensure the generated object name stays within the 63-character limit.
RHOAIENG-37842 - Ray workloads requiring ray.init() cannot be triggered outside OpenShift AI
Ray workloads that require ray.init() cannot be triggered outside the OpenShift AI environment. These workloads must be submitted from within a workbench or pipeline running on OpenShift AI in OpenShift. Running these workloads externally is not supported and results in initialization failures.
- Workaround
- Run Ray workloads that call ray.init() only within an OpenShift AI workbench or pipeline context.
RHOAIENG-37743 - No progress bar displayed when starting workbenches
When starting a workbench, the Progress tab in the Workbench Status screen does not display step-by-step progress. Instead, it shows a generic message stating that “Steps may repeat or occur in a different order.”
- Workaround
- To view detailed progress information, open the Event Log tab or use the OpenShift console to view the pod details associated with the workbench.
RHOAIENG-37667 - Model-as-a-Service (MaaS) available only for LLM-D runtime
Model-as-a-Service (MaaS) is currently supported only for models deployed with the Distributed Inference Server with llm-d runtime. Models deployed with the vLLM runtime cannot be served by MaaS at this time.
- Workaround
- None. Use the llm-d runtime for deployments that require Model-as-a-Service functionality.
RHOAIENG-37561 - Dashboard console link fails to access OpenShift AI on IBM Z clusters in 3.0.0
When attempting to access the OpenShift AI 3.0.0 dashboard using the console link on an IBM Z cluster, the connection fails.
- Workaround
- Create a route to the Gateway link by applying a YAML file similar to the sketch below.
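The Route definition is environment specific; this sketch uses placeholder names (the Gateway Service name, namespace, host, and TLS mode are assumptions to adapt to your cluster):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: data-science-gateway          # placeholder name
  namespace: openshift-ingress        # assumed namespace of the Gateway Service
spec:
  host: data-science-gateway.apps.<cluster-domain>   # placeholder host
  to:
    kind: Service
    name: <gateway-service-name>      # the Gateway's Service; look it up with oc get svc
  port:
    targetPort: https
  tls:
    termination: passthrough          # assumed; the Gateway terminates TLS itself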
RHOAIENG-37259 - Elyra Pipelines not supported on IBM Z (s390x)
Elyra Pipelines depend on Data Science Pipelines (DSP) for orchestration and validation. Because DSP is not currently available on IBM Z, Elyra pipeline-related functionality and tests are skipped.
- Workaround
- None. Elyra Pipelines will function correctly once DSP support is enabled and validated on IBM Z.
RHOAIENG-37015 - TensorBoard reporting fails in PyTorch 2.8 training image
When using TensorBoard reporting for training jobs that use the SFTTrainer with the image registry.redhat.io/rhoai/odh-training-cuda128-torch28-py312-rhel9:rhoai-3.0, or when the report_to parameter is omitted from the training configuration, the training job fails with a JSON serialization error.
- Workaround
- Install the latest versions of the transformers and trl packages, and update the torch_dtype parameter to dtype in the training configuration.
If you are using the Training Operator SDK, you can specify the packages to install by using the packages_to_install parameter in the create_job function:
packages_to_install=[
    "transformers==4.57.1",
    "trl==0.24.0"
]
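A sketch of the surrounding call, assuming the kubeflow-training SDK's TrainingClient (the import path, job name, and training function are illustrative):

from kubeflow.training import TrainingClient  # assumed SDK entry point

def train_func():
    # Your SFTTrainer training code goes here.
    ...

client = TrainingClient()
client.create_job(
    name="sft-training-job",   # hypothetical job name
    train_func=train_func,
    base_image="registry.redhat.io/rhoai/odh-training-cuda128-torch28-py312-rhel9:rhoai-3.0",
    # Install the fixed package versions at job start-up:
    packages_to_install=[
        "transformers==4.57.1",
        "trl==0.24.0",
    ],
)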
RHOAIENG-36757 - Existing cluster storage option missing during model deployment when no connections exist
When creating a model deployment in a project that has no data connections defined, the Existing cluster storage option is not displayed, even if suitable Persistent Volume Claims (PVCs) exist in the project. This prevents you from selecting an existing PVC for model deployment.
- Workaround
- Create at least one connection of type URI in the project to make the Existing cluster storage option appear.
RHOAIENG-31071 - Parquet datasets not supported on IBM Z (s390x)
Some built-in evaluation tasks, such as arc_easy and arc_challenge, use datasets provided by Hugging Face in Parquet format. Parquet is not supported on IBM Z.
- Workaround
- None. To evaluate models on IBM Z, use datasets in a supported format instead of Parquet.
RHAIENG-1795 - CodeFlare with Ray does not work with Gateway
When running the following commands, the output indicates that the Ray cluster has been created and is running, but the cell never completes because the Gateway route does not respond correctly:
cluster.up()
cluster.wait_ready()
As a result, subsequent operations such as fetching the Ray cluster or obtaining the job client fail, preventing job submission to the cluster.
- Workaround
- None. The Ray Dashboard Gateway route does not function correctly when created through CodeFlare.
RHAIENG-1796 - Pipeline name must be DNS compliant when using Kubernetes pipeline storage
When using Kubernetes as the storage backend for pipelines, Elyra does not automatically convert pipeline names to DNS-compliant values. If a non-DNS-compliant name is used when starting an Elyra pipeline, an error similar to the following appears:
[TIP: did you mean to set 'https://ds-pipeline-dspa-robert-tests.apps.test.rhoai.rh-aiservices-bu.com/pipeline' as the endpoint, take care not to include 's' at end]
- Workaround
- Use DNS-compliant names when creating or running Elyra pipelines.
RHAIENG-1139 - Cannot deploy LlamaStackDistribution with the same name in multiple namespaces
If you create two LlamaStackDistribution resources with the same name in different namespaces, the ReplicaSet for the second resource fails to start the Llama Stack pod. The Llama Stack Operator does not correctly assign security constraints when duplicate names are used across namespaces.
- Workaround
- Use a unique name for each LlamaStackDistribution in every namespace. For example, include the project name or add a suffix such as llama-stack-distribution-209342.
RHAIENG-1624 - Embeddings API timeout on disconnected clusters
On disconnected clusters, calls to the embeddings API might time out when using the default embedding model (ibm-granite/granite-embedding-125m-english) included in the default Llama Stack distribution image.
- Workaround
Add environment variables to the LlamaStackDistribution custom resource to use the embedded model offline, as in the sketch below.
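A sketch of the CR using the standard Hugging Face offline switches; both the variables and the field layout are assumptions to confirm against your distribution image and operator version:

apiVersion: llamastack.io/v1alpha1     # assumed API version
kind: LlamaStackDistribution
metadata:
  name: my-llama-stack                 # placeholder name
spec:
  server:
    containerSpec:
      env:
        # Assumed: make the default embedding model load from the image
        # instead of calling out to huggingface.co
        - name: HF_HUB_OFFLINE
          value: "1"
        - name: TRANSFORMERS_OFFLINE
          value: "1"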
RHOAIENG-34923 - Runtime configuration missing when running a pipeline from JupyterLab
The runtime configuration might not appear in the Elyra pipeline editor when you run a pipeline from the first active workbench in a project. This occurs because the configuration fails to populate for the initial workbench session.
- Workaround
- Restart the workbench. After restarting, the runtime configuration becomes available for pipeline execution.
RHAIENG-35055 - Model catalog fails to initialize after upgrading from OpenShift AI 2.24
After upgrading from OpenShift AI 2.24, the model catalog might fail to initialize and load. The OpenShift AI dashboard displays a Request access to model catalog error.
- Workaround
Delete the existing model catalog ConfigMap and deployment by running the following commands:
$ oc delete configmap model-catalog-sources -n rhoai-model-registries --ignore-not-found
$ oc delete deployment model-catalog -n rhoai-model-registries --ignore-not-found
RHAIENG-35529 - Reconciliation issues in Data Science Pipelines Operator when using external Argo Workflows
If you enable the embedded Argo Workflows controllers (argoWorkflowsControllers: Managed) before deleting an existing external Argo Workflows installation, the workflow controller might fail to start and the Data Science Pipelines Operator (DSPO) might not reconcile its custom resources correctly.
- Workaround
- Before enabling the embedded Argo Workflows controllers, delete any existing external Argo Workflows instance from the cluster.
RHAIENG-36756 - Existing cluster storage option missing during model deployment when no connections exist
When creating a model deployment in a project with no defined data connections, the Existing cluster storage option does not appear, even if Persistent Volume Claims (PVCs) are available. As a result, you cannot select an existing PVC for model storage.
- Workaround
- Create at least one connection of type URI in the project. Afterward, the Existing cluster storage option becomes available.
RHOAIENG-36817 - Inference server fails when Model server size is set to small
When you create an inference service by using the dashboard and select the small Model server size, subsequent inferencing requests fail. The deployment of the inference service itself succeeds, but inferencing requests fail with a timeout error.
- Workaround
- To resolve this issue, select the large Model server size from the dropdown.
RHOAIENG-33995 - Deployment of an inference service for Phi and Mistral models fails
The creation of an inference service for Phi and Mistral models by using the vLLM runtime on an IBM Power cluster with OpenShift Container Platform 4.19 fails due to an error related to the CPU backend. As a result, deployment of these models fails.
- Workaround
- To resolve this issue, disable the sliding_window mechanism in the serving runtime if it is enabled for CPU and Phi models. Sliding window is not currently supported in V1.
RHOAIENG-33795 - Manual Route creation needed for gRPC endpoint verification for Triton Inference Server on IBM Z
When you verify the Triton Inference Server with a gRPC endpoint, the Route is not created automatically. This happens because the Operator currently defaults to creating an edge-terminated route for REST only.
- Workaround
To resolve this issue, create the Route manually for gRPC endpoint verification for Triton Inference Server on IBM Z.
When the model deployment pod is up and running, define an edge-terminated Route object in a YAML file, then create the Route object:
$ oc apply -f <route-file-name>.yaml
To send an inference request, use a gRPC client command, where:
- <ca_cert_file> is the path to your cluster router CA cert (for example, router-ca.crt).
- <triton_protoset_file> is a compiled protobuf descriptor file. You can generate it with protoc -I. --descriptor_set_out=triton_desc.pb --include_imports grpc_service.proto.
- Download the grpc_service.proto and model_config.proto files from the triton-inference-server GitHub page.
Sketches of both the Route and the request follow this list.
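A sketch of the edge-terminated Route with placeholder names (the Service name, target port, and namespace must match your deployment):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: triton-grpc                  # placeholder name
  namespace: <project-namespace>     # your model-serving project
spec:
  to:
    kind: Service
    name: <triton-predictor-service> # placeholder: the predictor Service
  port:
    targetPort: grpc                 # the gRPC port name on the Service
  tls:
    termination: edge

And a hedged grpcurl example of a readiness query against that Route (the service and method names come from Triton's grpc_service.proto; the model name is a placeholder):

$ grpcurl -cacert <ca_cert_file> \
    -protoset <triton_protoset_file> \
    -d '{"name": "<model-name>"}' \
    <route-host>:443 \
    inference.GRPCInferenceService/ModelReady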
RHOAIENG-33697 - Unable to Edit or Delete models unless status is "Started"
When you deploy a model on the NVIDIA NIM or single-model serving platform, the Edit and Delete options in the action menu are not available for models in the Starting or Pending states. These options become available only after the model has been successfully deployed.
- Workaround
- Wait until the model is in the Started state to make any changes or to delete the model.
RHOAIENG-33645 - LM-Eval Tier1 test failures
If you are using an older version of the trustyai-service-operator, LM-Eval Tier1 tests can fail because confirm_run_unsafe_code is not passed as an argument when a job is run.
- Workaround
- Ensure that you are using the latest version of the trustyai-service-operator and that AllowCodeExecution is enabled.
RHOAIENG-29729 - Model registry Operator in a restart loop after upgrade
After upgrading from OpenShift AI version 2.22 or earlier to version 2.23 or later with the model registry component enabled, the model registry Operator might enter a restart loop. This is due to an insufficient memory limit for the manager container in the model-registry-operator-controller-manager pod.
- Workaround
To resolve this issue, trigger a reconciliation of the model-registry-operator-controller-manager deployment. Adding the opendatahub.io/managed='true' annotation to the deployment accomplishes this and applies the correct memory limit. You can add the annotation by running the following command:
$ oc annotate deployment model-registry-operator-controller-manager -n redhat-ods-applications opendatahub.io/managed='true' --overwrite
Note: This command overwrites custom values in the model-registry-operator-controller-manager deployment. For more information about custom deployment values, see Customizing component deployment resources.
After the deployment updates and the memory limit increases from 128Mi to 256Mi, the container memory usage stabilizes and the restart loop stops.
RHOAIENG-31238 - New observability stack enabled when creating DSCInitialization
When you remove a DSCInitialization resource and create a new one by using the OpenShift AI console form view, a Technology Preview observability stack is enabled. This results in the deployment of an unwanted observability stack when recreating a DSCInitialization resource.
- Workaround
To resolve this issue, manually remove the "metrics" and "traces" fields when recreating the DSCInitialization resource by using the form view.
This is not required if you want to use the Technology Preview observability stack.
RHOAIENG-32599 - Inference service creation fails on IBM Z cluster
When you attempt to create an inference service using the vLLM runtime on an IBM Z cluster, it fails with the following error: ValueError: 'aimv2' is already used by a Transformers config, pick another name.
- Workaround
- None.
RHOAIENG-29731 - Inference service creation fails on IBM Power cluster with OpenShift 4.19
When you attempt to create an inference service by using the vLLM runtime on an IBM Power cluster on OpenShift Container Platform version 4.19, it fails due to an error related to Non-Uniform Memory Access (NUMA).
- Workaround
- When you create an inference service, set the environment variable VLLM_CPU_OMP_THREADS_BIND to all.
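A sketch of where the variable goes in the InferenceService spec (the resource and runtime names are placeholders):

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-model                # placeholder name
spec:
  predictor:
    model:
      runtime: <vllm-runtime>   # placeholder runtime name
      env:
        # Bind vLLM CPU OpenMP threads across all cores to avoid the
        # NUMA-related failure on IBM Power with OpenShift 4.19
        - name: VLLM_CPU_OMP_THREADS_BIND
          value: "all"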
RHOAIENG-29292 - vLLM logs permission errors on IBM Z due to usage stats directory access
When running vLLM on the IBM Z architecture, the inference service starts successfully, but logs an error in a background thread related to usage statistics reporting. This happens because the service tries to write usage data to a restricted location (/.config), which it does not have permission to access.
The following error appears in the logs:
Exception in thread Thread-2 (_report_usage_worker):
Traceback (most recent call last):
...
PermissionError: [Errno 13] Permission denied: '/.config'
- Workaround
- To prevent this error and suppress the usage statistics logging, set the VLLM_NO_USAGE_STATS=1 environment variable in the inference service deployment. This disables automatic usage reporting, avoiding permission issues when writing to system directories.
RHOAIENG-24545 - Runtime images are not present in the workbench after the first start
The list of runtime images does not populate properly in the first workbench instance that runs in a namespace. As a result, no image is shown for selection in the Elyra pipeline editor.
- Workaround
- Restart the workbench. After the restart, the list of runtime images populates the workbench and the select box in the Elyra pipeline editor.
RHOAIENG-20209 - Warning message not displayed when requested resources exceed threshold
When you click Distributed workloads and view the metrics for a project, no warning message is displayed when the requested resources exceed the threshold.
- Workaround
- None.
SRVKS-1301 (previously documented as RHOAIENG-18590) - The KnativeServing resource fails after disabling and enabling KServe
After disabling and enabling the kserve component in the DataScienceCluster, the KnativeServing resource might fail.
- Workaround
Delete all ValidatingWebhookConfiguration and MutatingWebhookConfiguration webhooks related to Knative:
- Ensure KServe is disabled.
- Get the webhooks:
$ oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration | grep -i knative
- Delete the webhooks.
- Enable KServe.
- Verify that the KServe pod can successfully spawn, and that pods in the knative-serving namespace are active and operational.
RHOAIENG-16247 - Elyra pipeline run outputs are overwritten when runs are launched from OpenShift AI dashboard
When a pipeline is created and run from Elyra, outputs generated by the pipeline run are stored in the folder bucket-name/pipeline-name-timestamp of object storage.
When a pipeline is created from Elyra and the pipeline run is started from the OpenShift AI dashboard, the timestamp value is not updated. This can cause pipeline runs to overwrite files created by previous pipeline runs of the same pipeline.
This issue does not affect pipelines compiled and imported using the OpenShift AI dashboard because runid is always added to the folder used in object storage. For more information about storage locations used in AI pipelines, see Storing data with pipelines.
- Workaround
- When storing files in an Elyra pipeline, use different subfolder names on each pipeline run.
OCPBUGS-49422 - AMD GPUs and AMD ROCm workbench images are not supported in a disconnected environment
This release of OpenShift AI does not support AMD GPUs and AMD ROCm workbench images in a disconnected environment because installing the AMD GPU Operator requires internet access to fetch dependencies needed to compile GPU drivers.
- Workaround
- None.
RHOAIENG-7716 - Pipeline condition group status does not update
When you run a pipeline that has loops (dsl.ParallelFor) or condition groups (dsl.If), the UI displays a Running status for the loops and groups, even after the pipeline execution is complete.
- Workaround
You can confirm whether a pipeline is still running by checking that no child tasks remain active:
- From the OpenShift AI dashboard, click Develop & train → Pipelines → Runs.
- From the Project list, click your data science project.
- From the Runs tab, click the pipeline run that you want to check the status of.
- Expand the condition group and click a child task. A panel that contains information about the child task is displayed.
- On the panel, click the Task details tab. The Status field displays the correct status for the child task.
RHOAIENG-6409 - Cannot save parameter errors appear in pipeline logs for successful runs
When you run a pipeline more than once, Cannot save parameter errors appear in the pipeline logs for successful pipeline runs. You can safely ignore these errors.
- Workaround
- None.
RHOAIENG-3025 - OVMS expected directory layout conflicts with the KServe StoragePuller layout
When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform (which uses KServe), there is a mismatch between the directory layout expected by OVMS and that of the model-pulling logic used by KServe. Specifically, OVMS requires the model files to be in the /<mnt>/models/1/ directory, while KServe places them in the /<mnt>/models/ directory.
- Workaround
Perform the following actions:
- In your S3-compatible storage bucket, place your model files in a directory called 1/, for example, /<s3_storage_bucket>/models/1/<model_files>.
- To use the OVMS runtime to deploy a model on the single-model serving platform, choose one of the following options to specify the path to your model files:
  - If you are using the OpenShift AI dashboard to deploy your model, in the Path field for your data connection, use the /<s3_storage_bucket>/models/ format to specify the path to your model files. Do not specify the 1/ directory as part of the path.
  - If you are creating your own InferenceService custom resource to deploy your model, configure the value of the storageUri field as /<s3_storage_bucket>/models/. Do not specify the 1/ directory as part of the path.
KServe pulls model files from the subdirectory in the path that you specified. In this case, KServe correctly pulls model files from the /<s3_storage_bucket>/models/1/ directory in your S3-compatible storage.
RHOAIENG-3018 - OVMS on KServe does not expose the correct endpoint in the dashboard
When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform, the URL shown in the Inference endpoint field for the deployed model is not complete.
- Workaround
- To send queries to the model, you must add the /v2/models/<model-name>/infer string to the end of the URL, replacing <model-name> with the name of your deployed model, as in the sketch below.
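A hedged curl example of such a query (the host, model name, input name, and tensor shape are placeholders; the body follows the v2 inference protocol):

$ curl -k https://<inference-endpoint-host>/v2/models/<model-name>/infer \
    -H "Content-Type: application/json" \
    -d '{"inputs": [{"name": "input0", "shape": [1, 4], "datatype": "FP32", "data": [1.0, 2.0, 3.0, 4.0]}]}'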
RHOAIENG-2228 - The performance metrics graph changes constantly when the interval is set to 15 seconds
On the Endpoint performance tab of the model metrics screen, if you set the Refresh interval to 15 seconds and the Time range to 1 hour, the graph results change continuously.
- Workaround
- None.
RHOAIENG-2183 - Endpoint performance graphs might show incorrect labels
In the Endpoint performance tab of the model metrics screen, the graph tooltip might show incorrect labels.
- Workaround
- None.
RHOAIENG-131 - gRPC endpoint not responding properly after the InferenceService reports as Loaded
When numerous InferenceService instances are created and requests are directed to them, the Service Mesh Control Plane (SMCP) becomes unresponsive. The status of the InferenceService instance is Loaded, but calls to the gRPC endpoint return errors.
- Workaround
- Edit the ServiceMeshControlPlane custom resource (CR) to increase the memory limit of the Istio egress and ingress pods.
RHOAIENG-1619 (previously documented as DATA-SCIENCE-PIPELINES-165) - Poor error message when S3 bucket is not writable
When you set up a data connection and the S3 bucket is not writable, and you try to upload a pipeline, the error message Failed to store pipelines is not helpful.
- Workaround
- Verify that your data connection credentials are correct and that you have write access to the bucket you specified.
RHOAIENG-1207 (previously documented as ODH-DASHBOARD-1758) - Error duplicating OOTB custom serving runtimes several times
If you duplicate a model-serving runtime several times, the duplication fails with the Serving runtime name "<name>" already exists error message.
- Workaround
- Change the metadata.name field to a unique value.
RHOAIENG-133 - Existing workbench cannot run Elyra pipeline after workbench restart
If you use the Elyra JupyterLab extension to create and run pipelines within JupyterLab, and you configure the pipeline server after creating a workbench and specifying a workbench image, you cannot execute the pipeline, even after restarting the workbench.
- Workaround
- Stop the running workbench.
- Edit the workbench to make a small modification. For example, add a new dummy environment variable, or delete an existing unnecessary environment variable. Save your changes.
- Restart the workbench.
- In the left sidebar of JupyterLab, click Runtimes.
- Confirm that the default runtime is selected.
RHODS-12798 - Pods fail with "unable to init seccomp" error
Pods fail with CreateContainerError status or Pending status instead of Running status, because of a known kernel bug that introduced a seccomp memory leak. When you check the events on the namespace where the pod is failing, or run the oc describe pod command, the following error appears:
runc create failed: unable to start container process: unable to init seccomp: error loading seccomp filter into kernel: error loading seccomp filter: errno 524
- Workaround
- Increase the value of net.core.bpf_jit_limit as described in the Red Hat Knowledgebase solution Pods failing with error loading seccomp filter into kernel: errno 524 in OpenShift 4.
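For example, a temporary per-node sketch (the node name and value are illustrative; the Knowledgebase solution describes the supported, persistent approach):

# Raise the BPF JIT limit on one node for testing; <node-name> and the
# value are placeholders to adapt from the Knowledgebase solution.
$ oc debug node/<node-name> -- chroot /host sysctl -w net.core.bpf_jit_limit=528482304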
KUBEFLOW-177 - Bearer token from application not forwarded by OAuth-proxy
You cannot use an application as a custom workbench image if its internal authentication mechanism is based on a bearer token. The OAuth-proxy configuration removes the bearer token from the headers, and the application cannot work properly.
- Workaround
- None.
KUBEFLOW-157 - Logging out of JupyterLab does not work if you are already logged out of the OpenShift AI dashboard
If you log out of the OpenShift AI dashboard before you log out of JupyterLab, then logging out of JupyterLab is not successful. For example, if you know the URL for a Jupyter notebook, you are able to open this again in your browser.
- Workaround
- Log out of JupyterLab before you log out of the OpenShift AI dashboard.
RHODS-7718 - User without dashboard permissions is able to continue using their running workbenches indefinitely
When a Red Hat OpenShift AI administrator revokes a user’s permissions, the user can continue to use their running workbenches indefinitely.
- Workaround
- When the OpenShift AI administrator revokes a user’s permissions, the administrator should also stop any running workbenches for that user.
RHODS-5543 - When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler
When a pod cannot be scheduled due to insufficient available resources, the Node Autoscaler creates a new node. There is a delay until the newly created node receives the relevant GPU workload. Consequently, the pod cannot be scheduled, and the Node Autoscaler continuously creates additional new nodes until one of the nodes is ready to receive the GPU workload. For more information about this issue, see the Red Hat Knowledgebase solution When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler.
- Workaround
- Apply the cluster-api/accelerator label in machineset.spec.template.spec.metadata. This causes the autoscaler to consider those nodes as unready until the GPU driver has been deployed, as in the sketch below.
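A sketch of the label placement in a MachineSet (the MachineSet name and label value are placeholders; use the accelerator value that your cluster autoscaler configuration expects):

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <gpu-machineset>        # placeholder name
spec:
  template:
    spec:
      metadata:
        labels:
          # The autoscaler treats these nodes as unready until the GPU
          # driver is deployed
          cluster-api/accelerator: <accelerator-type>   # placeholder value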
RHODS-4799 - Tensorboard requires manual steps to view
If you use a TensorFlow or PyTorch workbench image and want to use TensorBoard to display data, you must manually include environment variables in the workbench environment and import those variables for use in your code.
- Workaround
When you start your basic workbench, use the following code to set the value for the TENSORBOARD_PROXY_URL environment variable to use your OpenShift AI user ID.
import os
os.environ["TENSORBOARD_PROXY_URL"] = os.environ["NB_PREFIX"] + "/proxy/6006/"
RHODS-4718 - The Intel® oneAPI AI Analytics Toolkits quick start references nonexistent sample notebooks
The Intel® oneAPI AI Analytics Toolkits quick start, located on the Resources page on the dashboard, requires the user to load sample notebooks as part of the instruction steps, but refers to notebooks that do not exist in the associated repository.
- Workaround
- None.
RHOAING-1147 (previously documented as RHODS-2881) - Actions on dashboard not clearly visible
The dashboard actions to revalidate a disabled application license and to remove a disabled application tile are not clearly visible to the user. These actions appear when the user clicks on the application tile’s Disabled label. As a result, the intended workflows might not be clear to the user.
- Workaround
- None.
RHODS-2096 - IBM Watson Studio not available in OpenShift AI
IBM Watson Studio is not available when OpenShift AI is installed on OpenShift Dedicated 4.9 or higher, because it is not compatible with these versions of OpenShift Dedicated.
- Workaround
- Contact the Red Hat Customer Portal for assistance with manually configuring Watson Studio on OpenShift Dedicated 4.9 and higher.