Chapter 7. Known issues
This section describes known issues in Red Hat OpenShift AI 2.23 and any known methods of working around these issues.
RHOAIENG-29729 - Model registry Operator in a restart loop after upgrade
After upgrading to OpenShift AI version 2.23 from version 2.22 or earlier with the model registry component enabled, the model registry Operator might enter a restart loop. This is due to an insufficient memory limit for the manager container in the model-registry-operator-controller-manager pod.
- Workaround
To resolve this issue, trigger a reconciliation of the model-registry-operator-controller-manager deployment. Adding the opendatahub.io/managed='true' annotation to the deployment accomplishes this and applies the correct memory limit. You can add the annotation by running the following command:

oc annotate deployment model-registry-operator-controller-manager -n redhat-ods-applications opendatahub.io/managed='true' --overwrite

Note: This command overwrites custom values in the model-registry-operator-controller-manager deployment. For more information about custom deployment values, see Customizing component deployment resources.

After the deployment updates and the memory limit increases from 128Mi to 256Mi, the container memory usage stabilizes and the restart loop stops.
RHOAIENG-31248 - KServe http: TLS handshake error
The localmodelcache validating webhook configuration is missing the annotation required for OpenShift CA auto-injection, leading to repeated TLS handshake errors.
- Workaround
To resolve this issue, enter the following command:
oc patch validatingwebhookconfiguration localmodelcache.serving.kserve.io --type='merge' -p='{"metadata":{"annotations":{"service.beta.openshift.io/inject-cabundle":"true"}}}'

This command replaces the previously used cert-manager.io/inject-ca-from annotation and adds the service.beta.openshift.io/inject-cabundle annotation to the localmodelcache.serving.kserve.io webhook configuration. This change ensures secure connections in KServe by eliminating TLS handshake errors.
RHOAIENG-31498 - Incorrect inference URL in LlamaStack LMEval provider
When you run evaluations on Llama Stack using the LMEval provider, the evaluation jobs erroneously use v1/openai/v1/completions as the model server endpoint. This results in a job failure because the correct model server endpoint is v1/completions.
- Workaround
- None.
RHOAIENG-31238 - New observability stack enabled when creating DSCInitialization
When you remove a DSCInitialization resource and create a new one by using the console form view, a Technology Preview observability stack is enabled. This results in the deployment of an unwanted observability stack when recreating a DSCInitialization resource.
- Workaround
To resolve this issue, manually remove the "metrics" and "traces" fields when recreating the DSCInitialization resource by using the form view. This step is not required if you want to use the Technology Preview observability stack.
RHOAIENG-31377 - Inference service creation fails on IBM Power cluster
When you attempt to create an inference service using the vLLM runtime on an IBM Power cluster, it fails with the following error:

ValueError: 'aimv2' is already used by a Transformers config, pick another name
- Workaround
- None.
RHOAIENG-31376 - Inference service creation using vLLM runtime fails on IBM Power cluster
When you attempt to create an inference service using the vLLM runtime on an IBM Power cluster, it fails with the following error:

'_OpNamespace' '_C_utils' object has no attribute 'init_cpu_threads_env'
- Workaround
- None.
RHOAIENG-31536 - Prometheus configuration not reconciled properly
After upgrading to or installing OpenShift AI 2.23, the Monitoring resource is not reconciled properly and shows a "Not Ready" status. This issue occurs because the resource requires the OpenTelemetry and Cluster Observability Operators to be installed, even if no new monitoring or tracing configurations are added to the DSCInitialization resource. As a result, the Prometheus configuration is not reconciled, which leads to empty or outdated alert configurations.
- Workaround
To resolve this issue, install the OpenTelemetry and Cluster Observability Operators. When both Operators are installed, the Prometheus configuration reconciles correctly and your monitoring resource returns to a ready state.
RHAIENG-496 - Error creating LlamaStackDistribution as a non-administrator user
Non-administrator requests fail due to insufficient role-based access control (RBAC) because the deployed role definitions are outdated or incomplete for the current Llama Stack resources (for example, the LlamaStackDistribution CRD). When creating a LlamaStackDistribution as a non-administrator user, the following error appears:

error (forbidden): error when retrieving current configuration of: Resource: "llamastack.io/v1alpha1, Resource=llamastackdistributions", GroupVersionKind: "llamastack.io/v1alpha1, Kind=LlamaStackDistribution" Name: "lsd-llama-milvus", Namespace: "my-project" from server for: "example-llamastackdistribution.yaml": llamastackdistributions.llamastack.io "lsd-llama-milvus" is forbidden: User "my-non-admin-user" cannot get resource "llamastackdistributions" in API group "llamastack.io" in the namespace "my-project"
- Workaround
To resolve this issue, perform the following steps:
- Log in to the cluster as a user with cluster-admin permissions.
- Deploy the upstream ClusterRole definition for OpenShift AI by using the following command:

oc apply -f https://raw.githubusercontent.com/llamastack/llama-stack-k8s-operator/main/config/rbac/llsd_editor_role.yaml

- Apply the role to users that need permission to manage the LlamaStackDistribution:

oc create rolebinding llamastack-editor --clusterrole=llsd-editor-role --user=my-non-admin-user -n my-project
RHOAIENG-32145 - Llama Stack Operator deployment failures on OpenShift versions earlier than 4.17
When installing OpenShift AI on OpenShift clusters running versions earlier than 4.17, the integrated Llama Stack Operator (llamastackoperator) might fail to deploy.
The Llama Stack Operator requires Kubernetes version 1.32 or later, but OpenShift 4.15 uses Kubernetes 1.28. This version gap can cause schema validation failures when applying the LlamaStackDistribution custom resource definition (CRD), due to unsupported selectable fields introduced in Kubernetes 1.32.
- Workaround
- Install OpenShift AI on an OpenShift cluster running version 4.17 or later.
RHOAIENG-32242 - Failure on creating NetworkPolicies for OpenShift versions 4.15 and 4.16
When installing OpenShift AI on OpenShift clusters running versions 4.15 or 4.16, deployment of certain NetworkPolicy resources might fail. This can occur when the llamastackoperator or related components attempt to create a NetworkPolicy in a protected namespace, such as redhat-ods-applications. The request can be blocked by the admission webhook networkpolicies-validation.managed.openshift.io, which restricts modifications to certain namespaces and resources, even for cluster-admin users. This restriction can apply to both self-managed and Red Hat–managed OpenShift environments.
- Workaround
- Deploy OpenShift AI on an OpenShift cluster running version 4.17 or later. For clusters where the webhook restriction is enforced, contact your OpenShift administrator or Red Hat Support to determine an alternative deployment pattern or approved change to the affected namespace.
RHOAIENG-32599 - Inference service creation fails on IBM Z cluster
When you attempt to create an inference service using the vLLM runtime on an IBM Z cluster, it fails with the following error:

ValueError: 'aimv2' is already used by a Transformers config, pick another name
- Workaround
- None.
RHOAIENG-29731 - Inference service creation fails on IBM Power cluster with OpenShift 4.19
When you attempt to create an inference service by using the vLLM runtime on an IBM Power cluster on OpenShift Container Platform version 4.19, it fails due to an error related to Non-Uniform Memory Access (NUMA).
- Workaround
- When you create an inference service, set the environment variable VLLM_CPU_OMP_THREADS_BIND to all, as shown in the sketch after this item.
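A minimal sketch of setting the variable on a KServe InferenceService; the service name, model format, and runtime name here are hypothetical:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-model                  # hypothetical name
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM
      runtime: vllm-runtime       # hypothetical runtime name
      env:
        - name: VLLM_CPU_OMP_THREADS_BIND
          value: "all"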
RHOAIENG-29352 - Missing Documentation and Support menu items
In the OpenShift AI top navigation bar, when you click the help icon, the menu contains only the About menu item. The Documentation and Support menu items are missing.
- Workaround
- None.
RHOAIENG-29292 - vLLM logs permission errors on IBM Z due to usage stats directory access
When running vLLM on the IBM Z architecture, the inference service starts successfully, but logs an error in a background thread related to usage statistics reporting. This happens because the service tries to write usage data to a restricted location (/.config), which it does not have permission to access.
The following error appears in the logs:
Exception in thread Thread-2 (_report_usage_worker):
Traceback (most recent call last):
...
PermissionError: [Error 13] Permission denied: '/.config'
- Workaround
- To prevent this error and suppress the usage stats logging, set the VLLM_NO_USAGE_STATS=1 environment variable in the inference service deployment, as shown in the sketch after this item. This disables automatic usage reporting, avoiding permission issues when writing to system directories.
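The variable can be set in the same place as in the earlier InferenceService sketch; a minimal fragment, with placement under spec.predictor.model assumed as in that example:

      env:
        - name: VLLM_NO_USAGE_STATS
          value: "1"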
RHOAIENG-28910 - Unmanaged KServe resources are deleted after upgrading from 2.16 to 2.19 or later
During the upgrade from OpenShift AI 2.16 to 2.23, the FeatureTracker custom resource (CR) is deleted before its owner references are fully removed from associated KServe-related resources. As a result, resources that were originally created by the Red Hat OpenShift AI Operator with a Managed state and later changed to Unmanaged in the DataScienceCluster (DSC) CR might be unintentionally removed. This issue can disrupt model serving functionality until the resources are manually restored.
The following resources might be deleted in 2.23 if they were changed to Unmanaged in 2.16:
Kind | Namespace | Name
---|---|---
KnativeServing | knative-serving | knative-serving
ServiceMeshMember | knative-serving | default
Gateway | istio-system | kserve-local-gateway
Gateway | knative-serving | knative-ingress-gateway
Gateway | knative-serving | knative-local-gateway
- Workaround
If you have already upgraded from OpenShift AI 2.16 to 2.23, perform one of the following actions:
- If you have an existing backup, manually recreate the deleted resources without owner references to the FeatureTracker CR.
- If you do not have an existing backup, you can use the Operator to recreate the deleted resources:
- Back up any resources you have already recreated.
- In the DSC, set spec.components.kserve.serving.managementState to Managed, and then save the change to allow the Operator to recreate the resources.
- Wait until the Operator has recreated the resources.
- In the DSC, set spec.components.kserve.serving.managementState back to Unmanaged, and then save the change.
- Reapply any previous custom changes to the recreated KnativeServing, ServiceMeshMember, and Gateway resources.
If you have not yet upgraded, perform the following actions before upgrading to prevent this issue:
- In the DSC, set spec.components.kserve.serving.managementState to Unmanaged.
- For each of the affected KnativeServing, ServiceMeshMember, and Gateway resources listed in the above table, edit its CR by deleting the FeatureTracker owner reference. This edit removes the resource's dependency on the FeatureTracker and prevents the deletion of the resource during the upgrade process. See the example after this list.
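A sketch of removing owner references with a JSON patch, shown here for the KnativeServing CR; this removes all owner references at once, so first confirm with oc get -o yaml that the FeatureTracker reference is the only one present:

oc patch knativeserving knative-serving -n knative-serving --type=json -p='[{"op":"remove","path":"/metadata/ownerReferences"}]'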
NVPE-302, NVPE-303 - Missing storage classes for NIM models
When you try to deploy an NVIDIA Inference Microservice (NIM) model on the NVIDIA NIM model serving platform in a newly installed OpenShift AI cluster, you might observe that the Storage class drop-down menu is not populated or is missing on the Model deployment page. This is because the storage classes are not loaded or cached in the user interface in new installations of OpenShift AI. As a result, you cannot configure storage for your deployment.
- Workaround
- From the OpenShift AI dashboard, click Settings → Storage classes. Do not make any changes.
- Click Models → Model deployments to view your NIM model deployment.
- Click Deploy model.
- On the Model deployment page, the Storage class drop-down menu is visible and populated with the available storage class options.
RHOAIENG-25734 - Duplicate name issue with notebook images
When you delete a workbench after you have created a workbench, deployment, or model server, and you use the same name for both the project-scoped and global-scoped image streams, the workbench displays an incorrect name in the workbench table and in the Edit workbench form.
- Workaround
- Do not use the same name for your project-scoped and global-scoped notebook images.
RHOAIENG-24545 - Runtime images are not present in the workbench after the first start
The list of runtime images does not properly populate in the first workbench instance that runs in a namespace; as a result, no image is shown for selection in the Elyra pipeline editor.
- Workaround
- Restart the workbench. After restarting the workbench, the list of runtime images populates both the workbench and the select box for the Elyra pipeline editor.
RHOAIENG-25090 - InstructLab prerequisites-check-op task fails when the model registration option is disabled
When you start a LAB-tuning run without selecting the Add model to <model registry name> checkbox, the InstructLab pipeline starts, but the prerequisites-check-op task fails with the following error in the pod logs:

failed: failed to resolve inputs: the resolved input parameter is null: output_model_name
- Workaround
- Select the Add model to <model registry name> checkbox when you configure the LAB-tuning run.
RHOAIENG-25056 - Data science pipeline task fails when optional input parameters used in nested pipelines are not set
When a pipeline has optional input parameters, if values for those parameters are not provided and they are used in a nested pipeline, the tasks using them fail with the following error:
failed: failed to resolve inputs: resolving input parameter with spec component_input_parameter:"optional_input": parent DAG does not have input parameter optional_input
- Workaround
- Provide values for all optional parameters when using nested pipeline tasks.
RHOAIENG-24786 - Upgrading the Authorino Operator from Technical Preview to Stable fails in disconnected environments
In disconnected environments, upgrading the Red Hat Authorino Operator from Technical Preview to Stable fails with an error in the authconfig-migrator-qqttz pod.
- Workaround
- Update the Red Hat Authorino Operator to the latest version in the tech-preview-v1 update channel (v1.1.2). Run the following script:
- Update the Red Hat Authorino Operator subscription to use the stable update channel.
- Select the update option for Authorino 1.2.1.
RHOAIENG-20209 - Warning message not displayed when requested resources exceed threshold
When you click Distributed workloads → Project metrics and view the Requested resources section, the chart does not display a warning message when the requested resources exceed the specified threshold.
- Workaround
- None.
SRVKS-1301 (previously documented as RHOAIENG-18590) - The KnativeServing resource fails after disabling and enabling KServe
After disabling and enabling the kserve component in the DataScienceCluster, the KnativeServing resource might fail.
- Workaround
Delete all ValidatingWebhookConfiguration and MutatingWebhookConfiguration webhooks related to Knative:
- Ensure KServe is disabled.
- Get the webhooks:

oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration | grep -i knative

- Delete the webhooks returned by the previous command. See the example after this list.
- Enable KServe.
- Verify that the KServe pod can successfully spawn, and that pods in the knative-serving namespace are active and operational.
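A sketch of deleting one returned webhook by name; the name shown is an example from a typical Knative installation, and the actual names come from the oc get output:

oc delete ValidatingWebhookConfiguration validation.webhook.serving.knative.dev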
RHOAIENG-16247 - Elyra pipeline run outputs are overwritten when runs are launched from OpenShift AI dashboard
When a pipeline is created and run from Elyra, outputs generated by the pipeline run are stored in the bucket-name/pipeline-name-timestamp folder of object storage.
When a pipeline is created from Elyra and the pipeline run is started from the OpenShift AI dashboard, the timestamp value is not updated. This can cause pipeline runs to overwrite files created by previous pipeline runs of the same pipeline.
This issue does not affect pipelines compiled and imported using the OpenShift AI dashboard because runid is always added to the folder used in object storage. For more information about storage locations used in data science pipelines, see Storing data with data science pipelines.
- Workaround
- When storing files in an Elyra pipeline, use different subfolder names on each pipeline run.
OCPBUGS-49422 - AMD GPUs and AMD ROCm workbench images are not supported in a disconnected environment
This release of OpenShift AI does not support AMD GPUs and AMD ROCm workbench images in a disconnected environment because installing the AMD GPU Operator requires internet access to fetch dependencies needed to compile GPU drivers.
- Workaround
- None.
RHOAIENG-12516 - fast releases are available in unintended release channels
Due to a known issue with the stream image delivery process, fast releases are currently available on unintended streaming channels, for example, stable and stable-x.y. For accurate release type, channel, and support lifecycle information, refer to the Life-cycle Dates table on the Red Hat OpenShift AI Self-Managed Life Cycle page.
- Workaround
- None.
RHOAIENG-8294 - CodeFlare error when upgrading OpenShift AI 2.8 to version 2.10 or later
If you try to upgrade OpenShift AI 2.8 to version 2.10 or later, the following error message is shown for the CodeFlare component, due to a mismatch with the AppWrapper custom resource definition (CRD) version:

ReconcileCompletedWithComponentErrors DataScienceCluster resource reconciled with component errors: 1 error occurred: * CustomResourceDefinition.apiextensions.k8s.io "appwrappers.workload.codeflare.dev" is invalid: status.storedVersions[0]: Invalid value: "v1beta1": must appear in spec.versions
- Workaround
Delete the existing AppWrapper CRD:

$ oc delete crd appwrappers.workload.codeflare.dev

Wait for about 20 seconds, and then ensure that a new AppWrapper CRD is automatically applied, as shown in the following example:

$ oc get crd appwrappers.workload.codeflare.dev
NAME                                 CREATED AT
appwrappers.workload.codeflare.dev   2024-11-22T18:35:04Z
RHOAIENG-7716 - Pipeline condition group status does not update
When you run a pipeline that has loops (dsl.ParallelFor) or condition groups (dsl.If), the UI displays a Running status for the loops and groups, even after the pipeline execution is complete.
- Workaround
You can confirm if a pipeline is still running by checking that no child tasks remain active.
- From the OpenShift AI dashboard, click Data Science Pipelines → Runs.
- From the Project list, click your data science project.
- From the Runs tab, click the pipeline run that you want to check the status of.
- Expand the condition group and click a child task. A panel that contains information about the child task is displayed.
- On the panel, click the Task details tab. The Status field displays the correct status for the child task.
RHOAIENG-6409 - Cannot save parameter errors appear in pipeline logs for successful runs
When you run a pipeline more than once with data science pipelines 2.0, Cannot save parameter errors appear in the pipeline logs for successful pipeline runs. You can safely ignore these errors.
- Workaround
- None.
RHOAIENG-12294 (previously documented as RHOAIENG-4812) - Distributed workload metrics exclude GPU metrics
In this release of OpenShift AI, the distributed workload metrics exclude GPU metrics.
- Workaround
- None.
RHOAIENG-4570 - Existing Argo Workflows installation conflicts with install or upgrade
Data science pipelines 2.0 contains an installation of Argo Workflows. Red Hat does not support direct customer use of this instance of Argo Workflows. To install or upgrade OpenShift AI with data science pipelines 2.0, ensure that there is no existing installation of Argo Workflows on your cluster. For more information, see Migrating to data science pipelines 2.0.
- Workaround
- Remove the existing Argo Workflows installation or set datasciencepipelines to Removed, and then proceed with the installation or upgrade. A sketch of the latter follows.
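A minimal sketch of setting the component to Removed with a merge patch; the DataScienceCluster name default-dsc is an assumption (check the actual name with oc get datasciencecluster):

oc patch datasciencecluster default-dsc --type=merge -p '{"spec":{"components":{"datasciencepipelines":{"managementState":"Removed"}}}}'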
RHOAIENG-3913 - Red Hat OpenShift AI Operator incorrectly shows Degraded condition of False with an error
If you have enabled the KServe component in the DataScienceCluster (DSC) object used by the OpenShift AI Operator, but have not installed the dependent Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless Operators, the kserveReady condition in the DSC object correctly shows that KServe is not ready. However, the Degraded condition incorrectly shows a value of False.
- Workaround
- Install the Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh Operators, and then recreate the DSC.
RHOAIENG-3025 - OVMS expected directory layout conflicts with the KServe StoragePuller layout
When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform (which uses KServe), there is a mismatch between the directory layout expected by OVMS and that of the model-pulling logic used by KServe. Specifically, OVMS requires the model files to be in the /<mnt>/models/1/ directory, while KServe places them in the /<mnt>/models/ directory.
- Workaround
Perform the following actions:
- In your S3-compatible storage bucket, place your model files in a directory called 1/, for example, /<s3_storage_bucket>/models/1/<model_files>.
- To use the OVMS runtime to deploy a model on the single-model serving platform, choose one of the following options to specify the path to your model files:
- If you are using the OpenShift AI dashboard to deploy your model, in the Path field for your data connection, use the /<s3_storage_bucket>/models/ format to specify the path to your model files. Do not specify the 1/ directory as part of the path.
- If you are creating your own InferenceService custom resource to deploy your model, configure the value of the storageURI field as /<s3_storage_bucket>/models/. Do not specify the 1/ directory as part of the path. See the sketch after this section.
KServe pulls model files from the subdirectory in the path that you specified. In this case, KServe correctly pulls model files from the /<s3_storage_bucket>/models/1/ directory in your S3-compatible storage.
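A minimal sketch of such an InferenceService; the service name, bucket, and runtime name are hypothetical, and storageUri points at the parent of the 1/ directory:

apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-ovms-model             # hypothetical name
spec:
  predictor:
    model:
      modelFormat:
        name: openvino_ir
      runtime: ovms-runtime       # hypothetical runtime name
      storageUri: s3://my-bucket/models/   # KServe pulls from models/1/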
RHOAIENG-3018 - OVMS on KServe does not expose the correct endpoint in the dashboard
When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform, the URL shown in the Inference endpoint field for the deployed model is not complete.
- Workaround
- To send queries to the model, you must add the /v2/models/<model-name>/infer string to the end of the URL. Replace <model-name> with the name of your deployed model. See the example after this item.
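A sketch of querying the completed URL with the KServe v2 REST protocol; the endpoint, model name, input tensor name, shape, and data are assumptions for illustration:

curl -k https://<inference-endpoint>/v2/models/my-model/infer \
  -H "Content-Type: application/json" \
  -d '{"inputs": [{"name": "input0", "shape": [1, 4], "datatype": "FP32", "data": [1.0, 2.0, 3.0, 4.0]}]}'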
RHOAIENG-2602 - "Average response time" server metric graph shows multiple lines due to ModelMesh pod restart
The Average response time server metric graph shows multiple lines if the ModelMesh pod is restarted.
- Workaround
- None.
RHOAIENG-2585 - UI does not display an error/warning when UWM is not enabled in the cluster
Red Hat OpenShift AI does not correctly warn users if User Workload Monitoring (UWM) is disabled in the cluster. UWM is necessary for the correct functionality of model metrics.
- Workaround
- Manually ensure that UWM is enabled in your cluster, as described in Enabling monitoring for user-defined projects.
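UWM is enabled through the standard cluster monitoring ConfigMap; a minimal sketch:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true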
RHOAIENG-2555 - Model framework selector does not reset when changing Serving Runtime in form
When you use the Deploy model dialog to deploy a model on the single-model serving platform, if you select a runtime and a supported framework, but then switch to a different runtime, the existing framework selection is not reset. This means that it is possible to deploy the model with a framework that is not supported for the selected runtime.
- Workaround
- While deploying a model, if you change your selected runtime, click the Select a framework list again and select a supported framework.
RHOAIENG-2468 - Services in the same project as KServe might become inaccessible in OpenShift
If you deploy a non-OpenShift AI service in a data science project that contains models deployed on the single-model serving platform (which uses KServe), the accessibility of the service might be affected by the network configuration of your OpenShift cluster. This is particularly likely if you are using the OVN-Kubernetes network plugin in combination with host network namespaces.
- Workaround
Perform one of the following actions:
- Deploy the service in another data science project that does not contain models deployed on the single-model serving platform. Or, deploy the service in another OpenShift project.
In the data science project where the service is deployed, add a network policy to accept ingress traffic to your application pods, as shown in the example after this list.
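A minimal sketch of such a policy, assuming your application pods are labeled app: my-app:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-my-app
spec:
  podSelector:
    matchLabels:
      app: my-app      # assumed pod label
  ingress:
    - {}               # accept ingress traffic from all sources
  policyTypes:
    - Ingress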
RHOAIENG-2228 - The performance metrics graph changes constantly when the interval is set to 15 seconds
On the Endpoint performance tab of the model metrics screen, if you set the Refresh interval to 15 seconds and the Time range to 1 hour, the graph results change continuously.
- Workaround
- None.
RHOAIENG-2183 - Endpoint performance graphs might show incorrect labels
In the Endpoint performance tab of the model metrics screen, the graph tooltip might show incorrect labels.
- Workaround
- None.
RHOAIENG-1919 - Model Serving page fails to fetch or report the model route URL soon after its deployment
When deploying a model from the OpenShift AI dashboard, the system displays the following warning message while the Status column of your model indicates success with an OK/green checkmark:

Failed to get endpoint for this deployed model. routes.route.openshift.io "<model_name>" not found
- Workaround
- Refresh your browser page.
RHOAIENG-404 - No Components Found page randomly appears instead of Enabled page in OpenShift AI dashboard
A No Components Found page might appear when you access the Red Hat OpenShift AI dashboard.
- Workaround
- Refresh the browser page.
RHOAIENG-234 - Unable to view .ipynb files in VSCode in an insecure cluster
When you use the code-server workbench image on Google Chrome in an insecure cluster, you cannot view .ipynb files.
- Workaround
- Use a different browser.
RHOAIENG-1128 - Unclear error message displays when attempting to increase the size of a Persistent Volume (PV) that is not connected to a workbench
When attempting to increase the size of a Persistent Volume (PV) that is not connected to a workbench, an unclear error message is displayed.
- Workaround
- Verify that your PV is connected to a workbench before attempting to increase the size.
RHOAIENG-497 - Removing DSCI Results In OpenShift Service Mesh CR Being Deleted Without User Notification
If you delete the DSCInitialization resource, the OpenShift Service Mesh CR is also deleted. A warning message is not shown.
- Workaround
- None.
RHOAIENG-282 - Workload should not be dispatched if required resources are not available
Sometimes a workload is dispatched even though a single machine instance does not have sufficient resources to provision the RayCluster successfully. The AppWrapper CRD remains in a Running state and related pods are stuck in a Pending state indefinitely.
- Workaround
- Add extra resources to the cluster.
RHOAIENG-131 - gRPC endpoint not responding properly after the InferenceService reports as Loaded
When numerous InferenceService instances are generated and requests are directed to them, the Service Mesh Control Plane (SMCP) becomes unresponsive. The status of the InferenceService instance is Loaded, but the call to the gRPC endpoint returns errors.
- Workaround
- Edit the ServiceMeshControlPlane custom resource (CR) to increase the memory limit of the Istio egress and ingress pods.
RHOAIENG-130 - Synchronization issue when the model is just launched
When the status of the KServe container is Ready, a request is accepted even though the TGIS container is not ready.
- Workaround
- Wait a few seconds to ensure that all initialization has completed and the TGIS container is actually ready, and then review the request output.
RHOAIENG-3115 - Model cannot be queried for a few seconds after it is shown as ready
Models deployed using the multi-model serving platform might be unresponsive to queries despite appearing as Ready in the dashboard. You might see an "Application is not available" response when querying the model endpoint.
- Workaround
- Wait 30-40 seconds and then refresh the page in your browser.
RHOAIENG-1619 (previously documented as DATA-SCIENCE-PIPELINES-165) - Poor error message when S3 bucket is not writable
When you set up a data connection and the S3 bucket is not writable, and you try to upload a pipeline, the error message Failed to store pipelines is not helpful.
- Workaround
- Verify that your data connection credentials are correct and that you have write access to the bucket you specified.
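One quick way to test write access from a terminal, assuming the AWS CLI is configured with the same credentials as the data connection (the bucket name is a placeholder):

echo test | aws s3 cp - s3://<bucket-name>/write-test.txt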
RHOAIENG-1207 (previously documented as ODH-DASHBOARD-1758) - Error duplicating OOTB custom serving runtimes several times
If you duplicate a model-serving runtime several times, the duplication fails with the Serving runtime name "<name>" already exists error message.
- Workaround
- Change the metadata.name field to a unique value.
RHOAIENG-1201 (previously documented as ODH-DASHBOARD-1908) - Cannot create workbench with an empty environment variable
When creating a workbench, if you click Add variable but do not select an environment variable type from the list, you cannot create the workbench. The field is not marked as required, and no error message is shown.
- Workaround
- None.
RHOAIENG-432 (previously documented as RHODS-12928) - Using unsupported characters can generate Kubernetes resource names with multiple dashes
When you create a resource and you specify unsupported characters in the name, then each space is replaced with a dash and other unsupported characters are removed, which can result in an invalid resource name.
- Workaround
- None.
RHOAIENG-226 (previously documented as RHODS-12432) - Deletion of the notebook-culler ConfigMap causes Permission Denied on dashboard
If you delete the notebook-controller-culler-config ConfigMap in the redhat-ods-applications namespace, you can no longer save changes to the Cluster Settings page on the OpenShift AI dashboard. The save operation fails with an HTTP request has failed error.
- Workaround
Complete the following steps as a user with cluster-admin permissions:
- Log in to your cluster by using the oc client.
- Enter the following command to update the OdhDashboardConfig custom resource in the redhat-ods-applications application namespace:

$ oc patch OdhDashboardConfig odh-dashboard-config -n redhat-ods-applications --type=merge -p '{"spec": {"dashboardConfig": {"notebookController.enabled": true}}}'
RHOAIENG-133 - Existing workbench cannot run Elyra pipeline after workbench restart
If you use the Elyra JupyterLab extension to create and run data science pipelines within JupyterLab, and you configure the pipeline server after you created a workbench and specified a workbench image within the workbench, you cannot execute the pipeline, even after restarting the workbench.
- Workaround
- Stop the running workbench.
- Edit the workbench to make a small modification. For example, add a new dummy environment variable, or delete an existing unnecessary environment variable. Save your changes.
- Restart the workbench.
- In the left sidebar of JupyterLab, click Runtimes.
- Confirm that the default runtime is selected.
RHODS-12798 - Pods fail with "unable to init seccomp" error
Pods fail with CreateContainerError status or Pending status instead of Running status, because of a known kernel bug that introduced a seccomp memory leak. When you check the events on the namespace where the pod is failing, or run the oc describe pod command, the following error appears:
runc create failed: unable to start container process: unable to init seccomp: error loading seccomp filter into kernel: error loading seccomp filter: errno 524
- Workaround
- Increase the value of net.core.bpf_jit_limit as described in the Red Hat Knowledgebase solution Pods failing with error loading seccomp filter into kernel: errno 524 in OpenShift 4. A sketch follows.
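On OpenShift, one way to apply such a sysctl across nodes is a Tuned profile through the Node Tuning Operator; this is a sketch only, with a placeholder value, and the Knowledgebase solution remains the authoritative procedure:

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: raise-bpf-jit-limit
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: raise-bpf-jit-limit
    data: |
      [main]
      summary=Raise net.core.bpf_jit_limit to avoid seccomp errno 524
      include=openshift-node
      [sysctl]
      # placeholder value; use the value from the Knowledgebase solution
      net.core.bpf_jit_limit=<new-limit>
  recommend:
  - match:
    - label: node-role.kubernetes.io/worker
    priority: 20
    profile: raise-bpf-jit-limit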
KUBEFLOW-177 - Bearer token from application not forwarded by OAuth-proxy
You cannot use an application as a custom workbench image if its internal authentication mechanism is based on a bearer token. The OAuth-proxy configuration removes the bearer token from the headers, and the application cannot work properly.
- Workaround
- None.
RHOAIENG-1208 (previously documented as ODH-DASHBOARD-1741) - Cannot create a workbench whose name begins with a number
If you try to create a workbench whose name begins with a number, the workbench does not start.
- Workaround
- Delete the workbench and create a new one with a name that begins with a letter.
KUBEFLOW-157 - Logging out of JupyterLab does not work if you are already logged out of the OpenShift AI dashboard
If you log out of the OpenShift AI dashboard before you log out of JupyterLab, then logging out of JupyterLab is not successful. For example, if you know the URL for a Jupyter notebook, you are able to open this again in your browser.
- Workaround
- Log out of JupyterLab before you log out of the OpenShift AI dashboard.
RHODS-9789 - Pipeline servers fail to start if they contain a custom database that includes a dash in its database name or username field
When you create a pipeline server that uses a custom database, if the value that you set for the dbname field or username field includes a dash, the pipeline server fails to start.
- Workaround
- Edit the pipeline server to omit the dash from the affected fields.
RHODS-7718 - User without dashboard permissions is able to continue using their running workbenches indefinitely
When a Red Hat OpenShift AI administrator revokes a user’s permissions, the user can continue to use their running workbenches indefinitely.
- Workaround
- When the OpenShift AI administrator revokes a user’s permissions, the administrator should also stop any running workbenches for that user.
RHOAIENG-1157 (previously documented as RHODS-6955) - An error can occur when trying to edit a workbench
When editing a workbench, an error similar to the following can occur:
Error creating workbench
Operation cannot be fulfilled on notebooks.kubeflow.org "workbench-name": the object has been modified; please apply your changes to the latest version and try again
- Workaround
- None.
RHOAIENG-1152 (previously documented as RHODS-6356) - The basic-workbench creation process fails for users who have never logged in to the dashboard
The dashboard’s Administration page for basic workbenches displays users who belong to the user group and admin group in OpenShift. However, if an administrator attempts to start a basic workbench on behalf of a user who has never logged in to the dashboard, the basic-workbench creation process fails and displays the following error message:
Request invalid against a username that does not exist.
- Workaround
- Request that the relevant user logs into the dashboard.
RHODS-5543 - When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler
When a pod cannot be scheduled due to insufficient available resources, the Node Autoscaler creates a new node. There is a delay until the newly created node receives the relevant GPU workload. Consequently, the pod cannot be scheduled and the Node Autoscaler continuously creates additional new nodes until one of the nodes is ready to receive the GPU workload. For more information about this issue, see the Red Hat Knowledgebase solution When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler.
- Workaround
- Apply the cluster-api/accelerator label in machineset.spec.template.spec.metadata. This causes the autoscaler to consider those nodes as unready until the GPU driver has been deployed. A sketch of the label placement follows.
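A minimal sketch of where the label goes in a MachineSet; the MachineSet name and label value are assumptions (the value must match the GPU type configured for the cluster autoscaler):

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: my-gpu-machineset        # hypothetical name
spec:
  template:
    spec:
      metadata:
        labels:
          cluster-api/accelerator: nvidia-gpu   # assumed value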
RHOAIENG-1149 (previously documented as RHODS-5216) - The application launcher menu incorrectly displays a link to OpenShift Cluster Manager
Red Hat OpenShift AI incorrectly displays a link to the OpenShift Cluster Manager from the application launcher menu. Clicking this link results in a "Page Not Found" error because the URL is not valid.
- Workaround
- None.
RHOAIENG-1137 (previously documented as RHODS-5251) - Administration page for basic workbenches shows users who have lost permission access
If a user who previously started a basic workbench loses their permissions to do so (for example, if an OpenShift AI administrator changes the user’s group settings or removes the user from a permitted group), administrators continue to see the user’s basic workbench on the Administration page. As a consequence, an administrator is able to restart a basic workbench that belongs to a user whose permissions were revoked.
- Workaround
- None.
RHODS-4799 - Tensorboard requires manual steps to view
When a user has TensorFlow or PyTorch workbench images and wants to use TensorBoard to display data, manual steps are necessary to include environment variables in the workbench environment, and to import those variables for use in their code.
- Workaround
When you start your basic workbench, use the following code to set the value of the TENSORBOARD_PROXY_URL environment variable to use your OpenShift AI user ID:

import os
os.environ["TENSORBOARD_PROXY_URL"] = os.environ["NB_PREFIX"] + "/proxy/6006/"
RHODS-4718 - The Intel® oneAPI AI Analytics Toolkits quick start references nonexistent sample notebooks
The Intel® oneAPI AI Analytics Toolkits quick start, located on the Resources page on the dashboard, requires the user to load sample notebooks as part of the instruction steps, but refers to notebooks that do not exist in the associated repository.
- Workaround
- None.
RHODS-3984 - Incorrect package versions displayed during notebook selection
In the OpenShift AI interface, the Start a notebook server page displays incorrect version numbers for the JupyterLab and Notebook packages included in the oneAPI AI Analytics Toolkit notebook image. The page might also show an incorrect value for the Python version used by this image.
- Workaround
-
When you start your oneAPI AI Analytics Toolkit notebook server, you can check which Python packages are installed on your notebook server and which version of the package you have by running the
!pip list
command in a notebook cell.
RHODS-2956 - Error can occur when creating a notebook instance
When creating a notebook instance in Jupyter, a Directory not found error appears intermittently. You can ignore this error message by clicking Dismiss.
- Workaround
- None.
RHOAIENG-1147 (previously documented as RHODS-2881) - Actions on dashboard not clearly visible
The dashboard actions to revalidate a disabled application license and to remove a disabled application tile are not clearly visible to the user. These actions appear when the user clicks the application tile's Disabled label. As a result, the intended workflows might not be clear to the user.
- Workaround
- None.
RHOAIENG-1134 (previously documented as RHODS-2879) - License revalidation action appears unnecessarily
The dashboard action to revalidate a disabled application license appears unnecessarily for applications that do not have a license validation or activation system. In addition, when a user attempts to revalidate a license that cannot be revalidated, feedback is not displayed to state why the action cannot be completed.
- Workaround
- None.
RHOAIENG-2305 (previously documented as RHODS-2650) - Error can occur during Pachyderm deployment
When creating an instance of the Pachyderm operator, a webhook error appears intermittently, preventing the creation process from starting successfully. The webhook error indicates that either the Pachyderm operator failed a health check, causing it to restart, or the operator process exceeded its container's allocated memory limit, triggering an Out of Memory (OOM) kill.
- Workaround
- Repeat the Pachyderm instance creation process until the error no longer appears.
RHODS-2096 - IBM Watson Studio not available in OpenShift AI
IBM Watson Studio is not available when OpenShift AI is installed on OpenShift Dedicated 4.9 or higher, because it is not compatible with these versions of OpenShift Dedicated.
- Workaround
- Contact Marketplace support for assistance manually configuring Watson Studio on OpenShift Dedicated 4.9 and higher.