Chapter 8. Known issues
This section describes known issues in Red Hat OpenShift AI 2.15 and any known methods of working around these issues.
RHOAIENG-15123 (also documented as RHOAIENG-10790 and RHOAIENG-14265) - Pipeline scheduled runs fail after upgrading
When you upgrade to OpenShift AI 2.15, data science pipeline scheduled runs that existed before the upgrade fail to execute. An error message appears in the task pod.
- Workaround
- Follow the steps in the How to resolve scheduled pipeline run failures after upgrading to Red Hat OpenShift AI 2.15.0 Knowledgebase solution.
RHOAIENG-15033 - Model registry instances do not restart or update after upgrade from OpenShift AI 2.14 to 2.15
When you upgrade from OpenShift AI 2.14.z to 2.15, existing instances of the model registry component are not updated, which causes the instance pods to use older images than the ones referenced by the operator pod.
- Workaround
- After upgrading to OpenShift AI 2.15, for each existing model registry, create a new model registry instance that uses the same database configuration, and delete the old model registry instance. Your new model registry instance will contain all existing registered models and their metadata. You can perform these steps from the OpenShift AI dashboard or from the OpenShift command-line interface (CLI).
From the OpenShift AI dashboard:
For each existing model registry, create a new instance that uses the same database configuration.
- From the OpenShift AI dashboard, click Settings > Model registry settings.
- Check the database connection details for an existing registry by clicking the action menu (⋮) beside the model registry on the Model registry settings page, and then clicking View database configuration.
- Create a new model registry with the same database connection details as the existing model registry instance.
- In OpenShift, verify that the REST and GRPC images defined in your model registry instance pod match the ones in the model registry operator pod.
After creating the new model registry, delete the original registry:
- On the Model registry settings page, click the action menu (⋮) beside the model registry, and then click Delete model registry.
- In the Delete model registry? dialog that appears, enter the name of the model registry in the text field to confirm that you intend to delete it.
- Click Delete model registry.
Alternatively, from the OpenShift command-line interface (CLI):
Run the following command, replacing <model_registry_name> with the name of your model registry:

oc patch -n rhoai-model-registries mr <model_registry_name> --type=merge -p='{"spec":{"grpc":{"image":""},"rest":{"image":""}}}'
- In OpenShift, verify that the REST and GRPC images defined in your model registry instance pod match the ones in the model registry operator pod.
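For example, you can compare the images from the CLI. A short sketch, assuming the default rhoai-model-registries and redhat-ods-applications namespaces; the operator pod name prefix is an assumption, so adjust to your cluster:

# List the images used by each model registry instance pod
oc get pods -n rhoai-model-registries -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{range .spec.containers[*]}{"  "}{.image}{"\n"}{end}{end}'

# Find the model registry operator pod, then list the images it references
oc get pods -n redhat-ods-applications -o name | grep model-registry-operator
oc get pod <operator_pod_name> -n redhat-ods-applications -o jsonpath='{range .spec.containers[*]}{.image}{"\n"}{end}'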
RHOAIENG-15008 - Error when creating a bias metric from the CLI without a request name
The user interface might display an error message when viewing bias metrics if the requestName parameter is not set. If you use the user interface to view bias metrics, but want to configure them through the CLI, you must specify a requestName parameter within your payload:

curl -sk -H "Authorization: Bearer ${TOKEN}" -X POST --location $TRUSTY_ROUTE/metrics/group/fairness/spd/request \
  --header 'Content-Type: application/json' \
  --data "{
    \"modelId\": \"demo-loan-nn-onnx-alpha\",
    \"protectedAttribute\": \"Is Male-Identifying?\",
    \"privilegedAttribute\": 1.0,
    \"unprivilegedAttribute\": 0.0,
    \"outcomeName\": \"Will Default?\",
    \"favorableOutcome\": 0,
    \"batchSize\": 5000,
    \"requestName\": \"My Request\"
  }"
- Workaround
- Use the CLI to delete metric requests that have no specified requestName, and then refresh the UI.
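A sketch of that cleanup, assuming the TrustyAI service endpoints for listing and deleting scheduled requests (verify the paths against your TrustyAI version), with <request_id> standing in for the ID of an unnamed request:

# List scheduled metric requests and note the IDs of entries without a requestName
curl -sk -H "Authorization: Bearer ${TOKEN}" $TRUSTY_ROUTE/metrics/all/requests

# Delete an unnamed request by its ID
curl -sk -H "Authorization: Bearer ${TOKEN}" -X DELETE --location \
  $TRUSTY_ROUTE/metrics/group/fairness/spd/request \
  --header 'Content-Type: application/json' \
  --data '{"requestId": "<request_id>"}'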
RHOAIENG-14986 - Incorrect package path causes copy_demo_nbs to fail
In OpenShift AI SDK version 0.22.0, the copy_demo_nbs() function of the CodeFlare SDK fails because of an incorrect path to the SDK package. Running this function results in a FileNotFound error.
- Workaround
There are two options to work around the issue:
- Manually create a demo-notebooks directory.
- From a Python shell or a blank notebook, run the following Python script to create the demo-notebooks directory:

import codeflare_sdk
import os
import pathlib
import shutil

nb_dir = os.path.join(os.path.dirname(codeflare_sdk.__file__), "demo-notebooks")

def copy_nbs(dir: str = "./demo-notebooks", overwrite: bool = False):
    # Does the target directory exist already?
    if overwrite is False and pathlib.Path(dir).exists():
        raise FileExistsError(
            f"Directory {dir} already exists. Please remove it or provide a different location."
        )
    shutil.copytree(nb_dir, dir, dirs_exist_ok=True)

copy_nbs()
RHOAIENG-14552 - Workbench or notebook OAuth proxy fails with FIPS on OCP 4.16
When using OpenShift 4.16 or newer in a FIPS-enabled cluster, connecting to a running workbench fails because the TLS handshake between the internal oauth-proxy component and the OpenShift ingress fails. When opening a workbench, the browser shows an "Application is not available" screen without any additional diagnostics.
- Workaround
- None
RHOAIENG-14271 - Compatibility errors occur when using different Python versions in Ray clusters with Jupyter notebooks
When you use Python version 3.11 in a Jupyter notebook, and then create a Ray cluster, the cluster defaults to a workbench image that contains Ray version 2.35 and Python version 3.9. This mismatch can cause compatibility errors, as the Ray Python client requires a Python version that matches your workbench configuration.
- Workaround
- When creating your Ray cluster, use the ClusterConfiguration argument to specify a cluster image that contains a Python version matching your workbench.
RHOAIENG-14197 - Tooltip text for CPU and Memory graphs is clipped and therefore unreadable
When you hover the cursor over the CPU and Memory graphs in the Top resource-consuming distributed workloads section on the Project metrics tab of the Distributed Workloads Metrics page, the tooltip text is clipped, and therefore unreadable.
- Workaround
- View the project metrics for your distributed workloads within the CPU and Memory graphs themselves.
RHOAIENG-14095 - The dashboard is temporarily unavailable after installing the OpenShift AI Operator
After you install the OpenShift AI Operator, the OpenShift AI dashboard is unavailable for approximately three minutes. As a result, a Cannot read properties of undefined page might appear.
- Workaround
- After you first access the OpenShift AI dashboard, wait for a few minutes and then refresh the page in your browser.
RHOAIENG-13633 - Cannot set a serving platform for a project without first deploying a model from outside of the model registry
Currently, you cannot deploy a model from a model registry to a project unless the project already has single-model or multi-model serving selected. The only way to select single-model or multi-model serving from the OpenShift AI UI is to first deploy a model or model server from outside the registry.
- Workaround
Before deploying a model from a model registry to a new project, select a serving platform for the project from the OpenShift AI dashboard, or from the OpenShift console.
From the OpenShift AI dashboard:
- From the OpenShift AI dashboard, click Data Science Projects.
- Click the Models tab.
- Deploy a model or create a model server, and then delete it. For more information about models and model servers in OpenShift AI, see Serving models.
From the OpenShift console:
- In the OpenShift web console, select Home > Projects.
- Click the name of the project.
- Click the YAML tab.
- In the metadata.labels section, add the modelmesh-enabled label:
  - To choose multi-model serving, set the modelmesh-enabled value to true.
  - To choose single-model serving, set the modelmesh-enabled value to false.
- Click Save.
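Alternatively, you can set the label from the OpenShift CLI. A one-line sketch, replacing <project_name> with the namespace of your project (true selects multi-model serving, false selects single-model serving):

oc label namespace <project_name> modelmesh-enabled=false --overwrite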
After you select a serving platform for a project, you can deploy models from a model registry to that project.
RHOAIENG-12516 - fast releases are available in unintended release channels
Due to a known issue with the stream image delivery process, fast releases are currently available on unintended streaming channels, for example, stable and stable-x.y. For accurate release type, channel, and support lifecycle information, refer to the Life-cycle Dates table on the Red Hat OpenShift AI Self-Managed Life Cycle page.
- Workaround
- None
RHOAIENG-12233 - The chat_template parameter is required when using the /v1/chat/completions endpoint
When configuring the vLLM ServingRuntime for KServe runtime and querying a model using the /v1/chat/completions endpoint, the model fails with the following 400 Bad Request error:

As of transformers v4.44, default chat template is no longer allowed, so you must provide a chat template if the tokenizer does not define one

Transformers v4.44.2 is shipped with vLLM v0.5.5. As of vLLM v0.5.5, you must provide a chat template when querying a model using the /v1/chat/completions endpoint.
- Workaround
If your model does not include a predefined chat template, you can use the chat-template command-line parameter to specify a chat template in your custom vLLM runtime, as shown in the example. Replace <CHAT_TEMPLATE> with the path to your template.

containers:
  - args:
      - --chat-template=<CHAT_TEMPLATE>

You can use the chat templates that are available as .jinja files in https://github.com/opendatahub-io/vllm/tree/main/examples or with the vLLM image under /apps/data/template. For more information, see Chat templates.
RHOAIENG-11024 - Resources entries get wiped out after removing opendatahub.io/managed annotation
Manually removing the opendatahub.io/managed annotation from any component deployment YAML file might cause resource entry values in the file to be erased.
- Workaround
To remove the annotation from a deployment, use the following steps to delete the deployment.
The controller pod for the component will redeploy automatically with default values.
- Log in to the OpenShift console as a cluster administrator.
- In the Administrator perspective, click Workloads > Deployments.
- From the Project drop-down list, select redhat-ods-applications.
. - Click the options menu beside the deployment for which you want to remove the annotation.
- Click Delete Deployment.
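If you prefer the CLI, a one-line equivalent, with <deployment_name> standing in for the affected component deployment:

oc delete deployment <deployment_name> -n redhat-ods-applications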
RHOAIENG-8294 - CodeFlare error when upgrading OpenShift AI 2.8 to version 2.10 or later
If you try to upgrade OpenShift AI 2.8 to version 2.10 or later, the following error message is shown for the CodeFlare component, due to a mismatch with the AppWrapper custom resource definition (CRD) version:

ReconcileCompletedWithComponentErrors DataScienceCluster resource reconciled with component errors: 1 error occurred: * CustomResourceDefinition.apiextensions.k8s.io "appwrappers.workload.codeflare.dev" is invalid: status.storedVersions[0]: Invalid value: "v1beta1": must appear in spec.versions
- Workaround
- Delete the existing AppWrapper CRD:

$ oc delete crd appwrappers.workload.codeflare.dev

- Install the latest version of the AppWrapper CRD:

$ oc apply -f https://raw.githubusercontent.com/project-codeflare/codeflare-operator/main/config/crd/crd-appwrapper.yml
RHOAIENG-7947 - Model serving fails during query in KServe
If you initially install the ModelMesh component and enable the multi-model serving platform, but later install the KServe component and enable the single-model serving platform, inference requests to models deployed on the single-model serving platform might fail. In these cases, inference requests return a 404 - Not Found error and the logs for the odh-model-controller deployment object show a Reconciler error message.
- Workaround
- In OpenShift, restart the odh-model-controller deployment object.
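For example, from the CLI, assuming the deployment runs in the default redhat-ods-applications namespace:

oc rollout restart deployment/odh-model-controller -n redhat-ods-applications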
RHOAIENG-7716 - Pipeline condition group status does not update
When you run a pipeline that has condition groups, for example, dsl.If, the UI displays a Running status for the groups, even after the pipeline execution is complete.
- Workaround
You can confirm if a pipeline is still running by checking that no child tasks remain active.
- From the OpenShift AI dashboard, click Data Science Pipelines > Runs.
- From the Project drop-down menu, click your data science project.
- From the Runs tab, click the pipeline run that you want to check the status of.
- Expand the condition group and click a child task. A panel that contains information about the child task is displayed.
- On the panel, click the Task details tab. The Status field displays the correct status for the child task.
RHOAIENG-6486 - Pod labels, annotations, and tolerations cannot be configured when using the Elyra JupyterLab extension with the TensorFlow 2024.1 notebook image
When using the Elyra JupyterLab extension with the TensorFlow 2024.1 notebook image, you cannot configure pod labels, annotations, or tolerations from an executed pipeline. This is due to a dependency conflict with the kfp and tf2onnx packages.
- Workaround
If you are working with the TensorFlow 2024.1 notebook image, after you have completed your work, change the assigned workbench notebook image to the Standard Data Science 2024.1 notebook image.
Alternatively, in the Pipeline properties tab in the Elyra JupyterLab extension, set the TensorFlow runtime image as the default runtime image for the pipeline, and then set the relevant pod label, annotation, or toleration individually for each pipeline node.
RHOAIENG-6435 - Distributed workloads resources are not included in Project metrics
When you click Distributed Workloads Metrics > Project metrics and view the Requested resources section, the Requested by all projects value currently excludes the resources for distributed workloads that have not yet been admitted to the queue.
- Workaround
- None.
RHOAIENG-6409 - Cannot save parameter errors appear in pipeline logs for successful runs
When you run a pipeline more than once with data science pipelines 2.0, Cannot save parameter errors appear in the pipeline logs for successful pipeline runs. You can safely ignore these errors.
- Workaround
- None.
RHOAIENG-12294 (previously documented as RHOAIENG-4812) - Distributed workload metrics exclude GPU metrics
In this release of OpenShift AI, the distributed workload metrics exclude GPU metrics.
- Workaround
- None.
RHOAIENG-4570 - Existing Argo Workflows installation conflicts with install or upgrade
Data science pipelines 2.0 contains an installation of Argo Workflows. OpenShift AI does not support direct customer usage of this installation of Argo Workflows. To install or upgrade OpenShift AI with data science pipelines 2.0, ensure that there is no existing installation of Argo Workflows on your cluster. For more information, see Enabling data science pipelines 2.0.
- Workaround
- Remove the existing Argo Workflows installation or set datasciencepipelines to Removed, and then proceed with the installation or upgrade.
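A sketch of setting that component state from the CLI, assuming your DataScienceCluster object is named default-dsc (check the actual name with oc get datasciencecluster):

oc patch datasciencecluster default-dsc --type merge \
  -p '{"spec": {"components": {"datasciencepipelines": {"managementState": "Removed"}}}}'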
RHOAIENG-3913 - Red Hat OpenShift AI Operator incorrectly shows Degraded condition of False with an error
If you have enabled the KServe component in the DataScienceCluster (DSC) object used by the OpenShift AI Operator, but have not installed the dependent Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless Operators, the kserveReady condition in the DSC object correctly shows that KServe is not ready. However, the Degraded condition incorrectly shows a value of False.
- Workaround
- Install the Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh Operators, and then recreate the DSC.
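A sketch of recreating the DSC from the CLI after the Operators are installed, assuming the object is named default-dsc:

# Save the current DSC definition, delete the object, and apply it again
oc get datasciencecluster default-dsc -o yaml > dsc.yaml
# Remove the status section and server-set metadata (resourceVersion, uid, creationTimestamp) from dsc.yaml
oc delete datasciencecluster default-dsc
oc apply -f dsc.yaml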
RHOAIENG-3025 - OVMS expected directory layout conflicts with the KServe StoragePuller layout
When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform (which uses KServe), there is a mismatch between the directory layout expected by OVMS and that of the model-pulling logic used by KServe. Specifically, OVMS requires the model files to be in the /<mnt>/models/1/ directory, while KServe places them in the /<mnt>/models/ directory.
- Workaround
Perform the following actions:
- In your S3-compatible storage bucket, place your model files in a directory called 1/, for example, /<s3_storage_bucket>/models/1/<model_files>.
- To use the OVMS runtime to deploy a model on the single-model serving platform, choose one of the following options to specify the path to your model files:
  - If you are using the OpenShift AI dashboard to deploy your model, in the Path field for your data connection, use the /<s3_storage_bucket>/models/ format to specify the path to your model files. Do not specify the 1/ directory as part of the path.
  - If you are creating your own InferenceService custom resource to deploy your model, configure the value of the storageUri field as /<s3_storage_bucket>/models/. Do not specify the 1/ directory as part of the path.

KServe pulls model files from the subdirectory in the path that you specified. In this case, KServe correctly pulls model files from the /<s3_storage_bucket>/models/1/ directory in your S3-compatible storage.
RHOAIENG-3018 - OVMS on KServe does not expose the correct endpoint in the dashboard
When you use the OpenVINO Model Server (OVMS) runtime to deploy a model on the single-model serving platform, the URL shown in the Inference endpoint field for the deployed model is not complete.
- Workaround
- To send queries to the model, you must add the /v2/models/<model-name>/infer string to the end of the URL. Replace <model-name> with the name of your deployed model.
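For example, a sketch of an inference call using the V2 protocol; the endpoint, token, tensor name, shape, and data are placeholders that depend on your model:

curl -sk -H "Authorization: Bearer ${TOKEN}" \
  -H 'Content-Type: application/json' \
  -d '{"inputs": [{"name": "input-0", "shape": [1, 4], "datatype": "FP32", "data": [1.0, 2.0, 3.0, 4.0]}]}' \
  https://<inference_endpoint>/v2/models/<model-name>/infer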
RHOAIENG-2759 - Model deployment fails when both secured and regular model servers are present in a project
When you create a second model server in a project where one server is using token authentication, and the other server does not use authentication, the deployment of the second model might fail to start.
- Workaround
- None.
RHOAIENG-2602 - "Average response time" server metric graph shows multiple lines due to ModelMesh pod restart
The Average response time server metric graph shows multiple lines if the ModelMesh pod is restarted.
- Workaround
- None.
RHOAIENG-2585 - UI does not display an error/warning when UWM is not enabled in the cluster
Red Hat OpenShift AI does not correctly warn users if User Workload Monitoring (UWM) is disabled in the cluster. UWM is necessary for the correct functionality of model metrics.
- Workaround
- Manually ensure that UWM is enabled in your cluster, as described in Enabling monitoring for user-defined projects.
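For reference, UWM is enabled through the cluster-monitoring-config ConfigMap in the openshift-monitoring namespace. A minimal sketch (see the linked documentation for the full procedure):

cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
EOF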
RHOAIENG-2555 - Model framework selector does not reset when changing Serving Runtime in form
When you use the Deploy model dialog to deploy a model on the single-model serving platform, if you select a runtime and a supported framework, but then switch to a different runtime, the existing framework selection is not reset. This means that it is possible to deploy the model with a framework that is not supported for the selected runtime.
- Workaround
- While deploying a model, if you change your selected runtime, click the Select a framework list again and select a supported framework.
RHOAIENG-2468 - Services in the same project as KServe might become inaccessible in OpenShift
If you deploy a non-OpenShift AI service in a data science project that contains models deployed on the single-model serving platform (which uses KServe), the accessibility of the service might be affected by the network configuration of your OpenShift cluster. This is particularly likely if you are using the OVN-Kubernetes network plugin in combination with host network namespaces.
- Workaround
Perform one of the following actions:
- Deploy the service in another data science project that does not contain models deployed on the single-model serving platform. Or, deploy the service in another OpenShift project.
- In the data science project that contains the service, add a network policy to accept ingress traffic to your application pods, as shown in the following example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-myapp
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
    - {}
RHOAIENG-2228 - The performance metrics graph changes constantly when the interval is set to 15 seconds
On the Endpoint performance tab of the model metrics screen, if you set the Refresh interval to 15 seconds and the Time range to 1 hour, the graph results change continuously.
- Workaround
- None.
RHOAIENG-2183 - Endpoint performance graphs might show incorrect labels
In the Endpoint performance tab of the model metrics screen, the graph tooltip might show incorrect labels.
- Workaround
- None.
RHOAIENG-1919 - Model Serving page fails to fetch or report the model route URL soon after its deployment
When deploying a model from the OpenShift AI dashboard, the system displays the following warning message while the Status column of your model indicates success with an OK/green checkmark.
Failed to get endpoint for this deployed model. routes.route.openshift.io "<model_name>" not found
- Workaround
- Refresh your browser page.
RHOAIENG-404 - No Components Found page randomly appears instead of Enabled page in OpenShift AI dashboard
A No Components Found page might appear when you access the Red Hat OpenShift AI dashboard.
- Workaround
- Refresh the browser page.
RHOAIENG-234 - Unable to view .ipynb files in VSCode in an insecure cluster
When you use the code-server notebook image on Google Chrome in an insecure cluster, you cannot view .ipynb files.
- Workaround
- Use a different browser.
RHOAIENG-1128 - Unclear error message displays when attempting to increase the size of a Persistent Volume (PV) that is not connected to a workbench
When attempting to increase the size of a Persistent Volume (PV) that is not connected to a workbench, an unclear error message is displayed.
- Workaround
- Verify that your PV is connected to a workbench before attempting to increase the size.
RHOAIENG-545 - Cannot specify a generic default node runtime image in JupyterLab pipeline editor
When you edit an Elyra pipeline in the JupyterLab IDE pipeline editor, click the PIPELINE PROPERTIES tab, scroll to the Generic Node Defaults section, and edit the Runtime Image field, your changes are not saved.
- Workaround
- Define the required runtime image explicitly for each node. Click the NODE PROPERTIES tab, and specify the required image in the Runtime Image field.
RHOAIENG-497 - Removing the DSCI results in the OpenShift Service Mesh CR being deleted without user notification
If you delete the DSCInitialization resource, the OpenShift Service Mesh CR is also deleted. A warning message is not shown.
- Workaround
- None.
RHOAIENG-282 - Workload should not be dispatched if required resources are not available
Sometimes a workload is dispatched even though a single machine instance does not have sufficient resources to provision the RayCluster successfully. The AppWrapper CRD remains in a Running state and related pods are stuck in a Pending state indefinitely.
- Workaround
- Add extra resources to the cluster.
RHOAIENG-131 - gRPC endpoint not responding properly after the InferenceService reports as Loaded
When numerous InferenceService instances are generated and requests are directed to them, the Service Mesh Control Plane (SMCP) becomes unresponsive. The status of the InferenceService instance is Loaded, but the call to the gRPC endpoint returns errors.
- Workaround
- Edit the ServiceMeshControlPlane custom resource (CR) to increase the memory limit of the Istio egress and ingress pods.
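A sketch of the change, assuming the default data-science-smcp instance in the istio-system namespace and SMCP v2 field paths (verify both against your Service Mesh installation; the memory values are examples):

oc patch smcp data-science-smcp -n istio-system --type merge -p '
spec:
  gateways:
    ingress:
      runtime:
        container:
          resources:
            limits:
              memory: 1Gi
    egress:
      runtime:
        container:
          resources:
            limits:
              memory: 1Gi
'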
RHOAIENG-130 - Synchronization issue when the model is just launched
When the status of the KServe container is Ready, a request is accepted even though the TGIS container is not ready.
- Workaround
- Wait a few seconds to ensure that all initialization has completed and the TGIS container is actually ready, and then review the request output.
RHOAIENG-3115 - Model cannot be queried for a few seconds after it is shown as ready
Models deployed using the multi-model serving platform might be unresponsive to queries despite appearing as Ready in the dashboard. You might see an "Application is not available" response when querying the model endpoint.
- Workaround
- Wait 30-40 seconds and then refresh the page in your browser.
RHOAIENG-1619 (previously documented as DATA-SCIENCE-PIPELINES-165) - Poor error message when S3 bucket is not writable
When you set up a data connection and the S3 bucket is not writable, and you try to upload a pipeline, the error message Failed to store pipelines is not helpful.
- Workaround
- Verify that your data connection credentials are correct and that you have write access to the bucket you specified.
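One way to confirm write access, assuming the AWS CLI is installed and your data connection's credentials and endpoint are exported (the bucket name is a placeholder):

# Attempt a small test upload to the bucket configured in your data connection
echo test > /tmp/write-test.txt
aws s3 cp /tmp/write-test.txt s3://<bucket_name>/write-test.txt --endpoint-url ${AWS_S3_ENDPOINT}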
RHOAIENG-1207 (previously documented as ODH-DASHBOARD-1758) - Error duplicating OOTB custom serving runtimes several times
If you duplicate a model-serving runtime several times, the duplication fails with the Serving runtime name "<name>" already exists error message.
- Workaround
- Change the metadata.name field to a unique value.
RHOAIENG-1201 (previously documented as ODH-DASHBOARD-1908) - Cannot create workbench with an empty environment variable
When creating a workbench, if you click Add variable but do not select an environment variable type from the list, you cannot create the workbench. The field is not marked as required, and no error message is shown.
- Workaround
- None.
RHOAIENG-432 (previously documented as RHODS-12928) - Using unsupported characters can generate Kubernetes resource names with multiple dashes
When you create a resource and you specify unsupported characters in the name, each space is replaced with a dash and other unsupported characters are removed, which can result in an invalid resource name.
- Workaround
- None.
RHOAIENG-226 (previously documented as RHODS-12432) - Deletion of the notebook-culler ConfigMap causes Permission Denied on dashboard
If you delete the notebook-controller-culler-config ConfigMap in the redhat-ods-applications namespace, you can no longer save changes to the Cluster Settings page on the OpenShift AI dashboard. The save operation fails with an HTTP request has failed error.
- Workaround
Complete the following steps as a user with cluster-admin permissions:
- Log in to your cluster by using the oc client.
- Enter the following command to update the OdhDashboardConfig custom resource in the redhat-ods-applications application namespace:

$ oc patch OdhDashboardConfig odh-dashboard-config -n redhat-ods-applications --type=merge -p '{"spec": {"dashboardConfig": {"notebookController.enabled": true}}}'
RHOAIENG-133 - Existing workbench cannot run Elyra pipeline after notebook restart
If you use the Elyra JupyterLab extension to create and run data science pipelines within JupyterLab, and you configure the pipeline server after you created a workbench and specified a notebook image within the workbench, you cannot execute the pipeline, even after restarting the notebook.
- Workaround
- Stop the running notebook.
- Edit the workbench to make a small modification. For example, add a new dummy environment variable, or delete an existing unnecessary environment variable. Save your changes.
- Restart the notebook.
- In the left sidebar of JupyterLab, click Runtimes.
- Confirm that the default runtime is selected.
RHOAIENG-11 - Separately installed instance of CodeFlare Operator not supported
In Red Hat OpenShift AI, the CodeFlare Operator is included in the base product and not in a separate Operator. Separately installed instances of the CodeFlare Operator from Red Hat or the community are not supported.
- Workaround
- Delete any installed CodeFlare Operators, and install and configure Red Hat OpenShift AI, as described in the Red Hat Knowledgebase solution How to migrate from a separately installed CodeFlare Operator in your data science cluster.
RHODS-12798 - Pods fail with "unable to init seccomp" error
Pods fail with CreateContainerError status or Pending status instead of Running status, because of a known kernel bug that introduced a seccomp memory leak. When you check the events on the namespace where the pod is failing, or run the oc describe pod command, the following error appears:
runc create failed: unable to start container process: unable to init seccomp: error loading seccomp filter into kernel: error loading seccomp filter: errno 524
- Workaround
- Increase the value of net.core.bpf_jit_limit as described in the Red Hat Knowledgebase solution Pods failing with error loading seccomp filter into kernel: errno 524 in OpenShift 4.
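One possible shape of that change, applied through the Node Tuning Operator; the profile name, limit value, and node match below are assumptions, so follow the Knowledgebase solution for the supported procedure:

cat <<EOF | oc apply -f -
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: bpf-jit-limit
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: bpf-jit-limit
    data: |
      [main]
      summary=Increase net.core.bpf_jit_limit to avoid seccomp errno 524
      include=openshift-node
      [sysctl]
      net.core.bpf_jit_limit=524288000
  recommend:
  - match:
    - label: node-role.kubernetes.io/worker
    priority: 20
    profile: bpf-jit-limit
EOF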
KUBEFLOW-177 - Bearer token from application not forwarded by OAuth-proxy
You cannot use an application as a custom workbench image if its internal authentication mechanism is based on a bearer token. The OAuth-proxy configuration removes the bearer token from the headers, and the application cannot work properly.
- Workaround
- None.
RHOAIENG-5646 (previously documented as NOTEBOOKS-218) - Data science pipelines saved from the Elyra pipeline editor reference an incompatible runtime
When you save a pipeline in the Elyra pipeline editor with the format .pipeline in OpenShift AI version 1.31 or earlier, the pipeline references a runtime that is incompatible with OpenShift AI version 1.32 or later. As a result, the pipeline fails to run after you upgrade OpenShift AI to version 1.32 or later.
- Workaround
- After you upgrade OpenShift AI to version 1.32 or later, select the relevant runtime images again.
NOTEBOOKS-210 - A notebook fails to export as a PDF file in Jupyter
When you export a notebook as a PDF file in Jupyter, the export process fails with an error.
- Workaround
- None.
RHOAIENG-1210 (previously documented as ODH-DASHBOARD-1699) - Workbench does not automatically restart for all configuration changes
When you edit the configuration settings of a workbench, a warning message appears stating that the workbench will restart if you make any changes to its configuration settings. This warning is misleading because in the following cases, the workbench does not automatically restart:
- Edit name
- Edit description
- Edit, add, or remove keys and values of existing environment variables
- Workaround
- Manually restart the workbench.
RHOAIENG-1208 (previously documented as ODH-DASHBOARD-1741) - Cannot create a workbench whose name begins with a number
If you try to create a workbench whose name begins with a number, the workbench does not start.
- Workaround
- Delete the workbench and create a new one with a name that begins with a letter.
KUBEFLOW-157 - Logging out of JupyterLab does not work if you are already logged out of the OpenShift AI dashboard
If you log out of the OpenShift AI dashboard before you log out of JupyterLab, then logging out of JupyterLab is not successful. For example, if you know the URL for a Jupyter notebook, you are able to open this again in your browser.
- Workaround
- Log out of JupyterLab before you log out of the OpenShift AI dashboard.
RHODS-9789 - Pipeline servers fail to start if they contain a custom database that includes a dash in its database name or username field
When you create a pipeline server that uses a custom database, if the value that you set for the dbname field or username field includes a dash, the pipeline server fails to start.
- Workaround
- Edit the pipeline server to omit the dash from the affected fields.
RHOAIENG-580 (previously documented as RHODS-9412) - Elyra pipeline fails to run if workbench is created by a user with edit permissions
If a user who has been granted edit permissions for a project creates a project workbench, that user sees the following behavior:
- During the workbench creation process, the user sees an Error creating workbench message related to the creation of Kubernetes role bindings.
- Despite the preceding error message, OpenShift AI still creates the workbench. However, the error message means that the user will not be able to use the workbench to run Elyra data science pipelines.
- If the user tries to use the workbench to run an Elyra pipeline, Jupyter shows an Error making request message that describes failed initialization.
- Workaround
- A user with administrator permissions (for example, the project owner) must create the workbench on behalf of the user with edit permissions. That user can then use the workbench to run Elyra pipelines.
RHODS-7718 - User without dashboard permissions is able to continue using their running notebooks and workbenches indefinitely
When a Red Hat OpenShift AI administrator revokes a user’s permissions, the user can continue to use their running notebooks and workbenches indefinitely.
- Workaround
- When the OpenShift AI administrator revokes a user’s permissions, the administrator should also stop any running notebooks and workbenches for that user.
RHOAIENG-1157 (previously documented as RHODS-6955) - An error can occur when trying to edit a workbench
When editing a workbench, an error similar to the following can occur:
Error creating workbench Operation cannot be fulfilled on notebooks.kubeflow.org "workbench-name": the object has been modified; please apply your changes to the latest version and try again
- Workaround
- None.
RHOAIENG-1152 (previously documented as RHODS-6356) - The notebook creation process fails for users who have never logged in to the dashboard
The dashboard’s notebook Administration page displays users belonging to the user group and admin group in OpenShift. However, if an administrator attempts to start a notebook server on behalf of a user who has never logged in to the dashboard, the server creation process fails and displays the following error message:
Request invalid against a username that does not exist.
- Workaround
- Request that the relevant user logs into the dashboard.
RHODS-5763 - Incorrect package version displayed during notebook selection
The Start a notebook server page displays an incorrect version number for the Anaconda notebook image.
- Workaround
- None.
RHODS-5543 - When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler
When a pod cannot be scheduled due to insufficient available resources, the Node Autoscaler creates a new node. There is a delay until the newly created node receives the relevant GPU workload. Consequently, the pod cannot be scheduled and the Node Autoscaler continuously creates additional new nodes until one of the nodes is ready to receive the GPU workload. For more information about this issue, see the Red Hat Knowledgebase solution When using the NVIDIA GPU Operator, more nodes than needed are created by the Node Autoscaler.
- Workaround
- Apply the cluster-api/accelerator label in machineset.spec.template.spec.metadata. This causes the autoscaler to consider those nodes as unready until the GPU driver has been deployed.
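A sketch of applying the label from the CLI, with <machineset_name> and the label value as placeholders (see the Knowledgebase solution for the recommended value):

oc patch machineset <machineset_name> -n openshift-machine-api --type merge \
  -p '{"spec": {"template": {"spec": {"metadata": {"labels": {"cluster-api/accelerator": "<accelerator_type>"}}}}}}'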
RHOAIENG-1149 (previously documented as RHODS-5216) - The application launcher menu incorrectly displays a link to OpenShift Cluster Manager
Red Hat OpenShift AI incorrectly displays a link to the OpenShift Cluster Manager from the application launcher menu. Clicking this link results in a "Page Not Found" error because the URL is not valid.
- Workaround
- None.
RHOAIENG-1137 (previously documented as RHODS-5251) - Notebook server administration page shows users who have lost permission access
If a user who previously started a notebook server in Jupyter loses their permissions to do so (for example, if an OpenShift AI administrator changes the user’s group settings or removes the user from a permitted group), administrators continue to see the user’s notebook servers on the server Administration page. As a consequence, an administrator is able to restart notebook servers that belong to the user whose permissions were revoked.
- Workaround
- None.
RHODS-4799 - Tensorboard requires manual steps to view
When a user has TensorFlow or PyTorch notebook images and wants to use TensorBoard to display data, manual steps are necessary to include environment variables in the notebook environment, and to import those variables for use in the code.
- Workaround
When you start your notebook server, use the following code to set the value of the TENSORBOARD_PROXY_URL environment variable to use your OpenShift AI user ID:

import os

os.environ["TENSORBOARD_PROXY_URL"] = os.environ["NB_PREFIX"] + "/proxy/6006/"
RHODS-4718 - The Intel® oneAPI AI Analytics Toolkits quick start references nonexistent sample notebooks
The Intel® oneAPI AI Analytics Toolkits quick start, located on the Resources page on the dashboard, requires the user to load sample notebooks as part of the instruction steps, but refers to notebooks that do not exist in the associated repository.
- Workaround
- None.
RHODS-4627 - The CronJob responsible for validating Anaconda Professional Edition’s license is suspended and does not run daily
The CronJob responsible for validating Anaconda Professional Edition’s license is automatically suspended by the OpenShift AI operator. As a result, the CronJob does not run daily as scheduled. In addition, when Anaconda Professional Edition’s license expires, Anaconda Professional Edition is not indicated as disabled on the OpenShift AI dashboard.
- Workaround
- None.
RHOAIENG-1141 (previously documented as RHODS-4502) - The NVIDIA GPU Operator tile on the dashboard displays button unnecessarily
GPUs are automatically available in Jupyter after the NVIDIA GPU Operator is installed. The Enable button, located on the NVIDIA GPU Operator tile on the Explore page, is therefore redundant. In addition, clicking the Enable button moves the NVIDIA GPU Operator tile to the Enabled page, even if the Operator is not installed.
- Workaround
- None.
RHOAIENG-1135 (previously documented as RHODS-3985) - Dashboard does not display Enabled page content after ISV operator uninstall
After an ISV operator is uninstalled, no content is displayed on the Enabled page on the dashboard. Instead, the following error is displayed:
Error loading components HTTP request failed
- Workaround
- Wait 30-40 seconds and then refresh the page in your browser.
RHODS-3984 - Incorrect package versions displayed during notebook selection
In the OpenShift AI interface, the Start a notebook server page displays incorrect version numbers for the JupyterLab and Notebook packages included in the oneAPI AI Analytics Toolkit notebook image. The page might also show an incorrect value for the Python version used by this image.
- Workaround
- When you start your oneAPI AI Analytics Toolkit notebook server, you can check which Python packages are installed on your notebook server, and which version of each package you have, by running the !pip list command in a notebook cell.
RHODS-2956 - Error can occur when creating a notebook instance
When creating a notebook instance in Jupyter, a Directory not found error appears intermittently. You can ignore this error message by clicking Dismiss.
- Workaround
- None.
RHOAIENG-1147 (previously documented as RHODS-2881) - Actions on dashboard not clearly visible
The dashboard actions to revalidate a disabled application license and to remove a disabled application tile are not clearly visible to the user. These actions appear when the user clicks the application tile’s Disabled label. As a result, the intended workflows might not be clear to the user.
- Workaround
- None.
RHOAIENG-1134 (previously documented as RHODS-2879) - License revalidation action appears unnecessarily
The dashboard action to revalidate a disabled application license appears unnecessarily for applications that do not have a license validation or activation system. In addition, when a user attempts to revalidate a license that cannot be revalidated, feedback is not displayed to state why the action cannot be completed.
- Workaround
- None.
RHOAIENG-2305 (previously documented as RHODS-2650) - Error can occur during Pachyderm deployment
When creating an instance of the Pachyderm operator, a webhook error appears intermittently, preventing the creation process from starting successfully. The webhook error indicates that either the Pachyderm operator failed a health check, causing it to restart, or the operator process exceeded its container’s allocated memory limit, triggering an Out of Memory (OOM) kill.
- Workaround
- Repeat the Pachyderm instance creation process until the error no longer appears.
RHODS-2096 - IBM Watson Studio not available in OpenShift AI
IBM Watson Studio is not available when OpenShift AI is installed on OpenShift Dedicated 4.9 or higher, because it is not compatible with these versions of OpenShift Dedicated.
- Workaround
- Contact Marketplace support for assistance manually configuring Watson Studio on OpenShift Dedicated 4.9 and higher.