Chapter 2. Managing applications that show in the dashboard
2.1. Adding an application to the dashboard
If you have installed an application in your OpenShift cluster, you can add a tile for that application to the OpenShift AI dashboard (the Applications → Explore page).
Prerequisites
- You have cluster administrator privileges for your OpenShift cluster.
- The dashboard configuration enablement option is set to true (the default). Note that an admin user can disable this ability as described in Preventing users from adding applications to the dashboard.
Procedure
- Log in to the OpenShift console as a cluster administrator.
- In the Administrator perspective, click Home → API Explorer.
- On the API Explorer page, search for the OdhApplication kind.
- Click the OdhApplication kind to open the resource details page.
- On the OdhApplication details page, select the redhat-ods-applications project from the Project list.
- Click the Instances tab.
- Click Create OdhApplication.
- On the Create OdhApplication page, copy the following code and paste it into the YAML editor:

  apiVersion: dashboard.opendatahub.io/v1
  kind: OdhApplication
  metadata:
    name: examplename
    namespace: redhat-ods-applications
    labels:
      app: odh-dashboard
      app.kubernetes.io/part-of: odh-dashboard
  spec:
    enable:
      validationConfigMap: examplename-enable
    img: >-
      <svg width="24" height="25" viewBox="0 0 24 25" fill="none" xmlns="http://www.w3.org/2000/svg">
      <path d="path data" fill="#ee0000"/>
      </svg>
    getStartedLink: 'https://example.org/docs/quickstart.html'
    route: exampleroutename
    routeNamespace: examplenamespace
    displayName: Example Name
    kfdefApplications: []
    support: third party support
    csvName: ''
    provider: example
    docsLink: 'https://example.org/docs/index.html'
    quickStart: ''
    getStartedMarkDown: >-
      # Example

      Enter text for the information panel.
    description: >-
      Enter summary text for the tile.
    category: Self-managed | Partner managed | Red Hat managed
- Modify the parameters in the code for your application. If you prefer to work from the command line, see the sketch after this procedure.
  Tip: To see example YAML files, click Home → API Explorer, select OdhApplication, click the Instances tab, select an instance, and then click the YAML tab.
- Click Create. The application details page appears.
- Log in to OpenShift AI.
- In the left menu, click Applications → Explore.
- Locate the new tile for your application and click it.
- In the information pane for the application, click Enable.
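If you prefer to work from the command line instead of the API Explorer, the following sketch creates the same OdhApplication resource with the oc CLI. It assumes you have saved the YAML above to a local file (my-odhapplication.yaml is only an example name) and that you are logged in to the cluster as a cluster administrator.

# Create the OdhApplication resource from the saved YAML file
oc apply -f my-odhapplication.yaml

# Confirm that the resource exists in the dashboard namespace
oc get odhapplications -n redhat-ods-applications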
Verification
- In the left menu of the OpenShift AI dashboard, click Applications → Enabled and verify that your application is available.
2.2. Preventing users from adding applications to the dashboard
By default, admin users can add applications to the OpenShift AI dashboard (the Applications → Explore page). You can disable the ability for admin users to add applications to the dashboard.
Note: The Jupyter tile is enabled by default. To disable it, see Hiding the default Jupyter application.
Prerequisite
- You have cluster administrator privileges for your OpenShift cluster.
Procedure
- Log in to the OpenShift console as a cluster administrator.
- Open the dashboard configuration file:
  - In the Administrator perspective, click Home → API Explorer.
  - In the search bar, enter OdhDashboardConfig to filter by kind.
  - Click the OdhDashboardConfig custom resource (CR) to open the resource details page.
  - Select the redhat-ods-applications project from the Project list.
  - Click the Instances tab.
  - Click the odh-dashboard-config instance to open the details page.
  - Click the YAML tab.
- In the spec.dashboardConfig section, set the value of enablement to false to disable the ability for dashboard users to add applications to the dashboard. (A command-line alternative is sketched after this procedure.)
- Click Save to apply your changes and then click Reload to make sure that your changes are synced to the cluster.
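If you prefer to make the same change from the command line, here is a minimal sketch that assumes the oc CLI and cluster-admin access; it patches the odh-dashboard-config instance directly instead of using the YAML editor.

# Disable the ability for dashboard users to add applications
oc patch odhdashboardconfig odh-dashboard-config -n redhat-ods-applications \
  --type merge -p '{"spec": {"dashboardConfig": {"enablement": false}}}'

# Verify the new value
oc get odhdashboardconfig odh-dashboard-config -n redhat-ods-applications \
  -o jsonpath='{.spec.dashboardConfig.enablement}'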
Verification
- Open the OpenShift AI dashboard Applications → Explore page and verify that users can no longer add applications to the dashboard.
2.3. Disabling applications connected to OpenShift AI
You can disable applications and components so that they do not appear on the OpenShift AI dashboard when you no longer want to use them, for example, when data scientists no longer use an application or when the application license expires.
Disabling unused applications allows your data scientists to manually remove these application tiles from their OpenShift AI dashboard so that they can focus on the applications that they are most likely to use. See Removing disabled applications from the dashboard for more information about manually removing application tiles.
Do not follow this procedure when disabling the following applications:
- Anaconda Professional Edition. You cannot manually disable Anaconda Professional Edition. It is automatically disabled only when its license expires.
Prerequisites
- You have logged in to the OpenShift web console.
- You are part of the cluster-admins user group in OpenShift.
- You have installed or configured the service on your OpenShift cluster.
- The application or component that you want to disable is enabled and appears on the Enabled page.
Procedure
- In the OpenShift web console, switch to the Administrator perspective.
- Switch to the redhat-ods-applications project.
- Click Operators → Installed Operators.
- Click the Operator that you want to uninstall. You can enter a keyword into the Filter by name field to help you find the Operator faster.
- Delete any Operator resources or instances by using the tabs in the Operator interface.
  During installation, some Operators require the administrator to create resources or start process instances using tabs in the Operator interface. These must be deleted before the Operator can uninstall correctly.
- On the Operator Details page, click the Actions drop-down menu and select Uninstall Operator.
  An Uninstall Operator? dialog box is displayed.
- Select Uninstall to uninstall the Operator, Operator deployments, and pods. After this is complete, the Operator stops running and no longer receives updates.
Removing an Operator does not remove any custom resource definitions or managed resources for the Operator. Custom resource definitions and managed resources still exist and must be cleaned up manually. Any applications deployed by your Operator and any configured off-cluster resources continue to run and must be cleaned up manually.
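The following sketch shows one way to find and clean up these leftover resources from the command line after the Operator is uninstalled. The keyword and resource names are placeholders; check the Operator's documentation before deleting anything, because deleting a custom resource definition also deletes every instance of it.

# List custom resource definitions that match the uninstalled Operator
# (<operator-keyword> is a placeholder for part of the Operator's API group or name)
oc get crd | grep <operator-keyword>

# Inspect any remaining custom resources before deleting them
oc get <crd-name> --all-namespaces

# Delete a leftover custom resource definition and all of its instances
oc delete crd <crd-name>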
Verification
- The Operator is uninstalled from its target clusters.
- The Operator no longer appears on the Installed Operators page.
- The disabled application is no longer available for your data scientists to use, and is marked as Disabled on the Enabled page of the OpenShift AI dashboard. This action may take a few minutes to occur following the removal of the Operator.
2.4. Showing or hiding information about enabled applications
If you have installed another application in your OpenShift cluster, you can add a tile for that application to the OpenShift AI dashboard (the Applications → Explore page).
Prerequisites
- You have cluster administrator privileges for your OpenShift cluster.
Procedure
- Log in to the OpenShift console as a cluster administrator.
- In the Administrator perspective, click Home → API Explorer.
- On the API Explorer page, search for the OdhApplication kind.
- Click the OdhApplication kind to open the resource details page.
- On the OdhApplication details page, select the redhat-ods-applications project from the Project list.
- Click the Instances tab.
- Click Create OdhApplication.
- On the Create OdhApplication page, copy the following code and paste it into the YAML editor:

  apiVersion: dashboard.opendatahub.io/v1
  kind: OdhApplication
  metadata:
    name: examplename
    namespace: redhat-ods-applications
    labels:
      app: odh-dashboard
      app.kubernetes.io/part-of: odh-dashboard
  spec:
    enable:
      validationConfigMap: examplename-enable
    img: >-
      <svg width="24" height="25" viewBox="0 0 24 25" fill="none" xmlns="http://www.w3.org/2000/svg">
      <path d="path data" fill="#ee0000"/>
      </svg>
    getStartedLink: 'https://example.org/docs/quickstart.html'
    route: exampleroutename
    routeNamespace: examplenamespace
    displayName: Example Name
    kfdefApplications: []
    support: third party support
    csvName: ''
    provider: example
    docsLink: 'https://example.org/docs/index.html'
    quickStart: ''
    getStartedMarkDown: >-
      # Example

      Enter text for the information panel.
    description: >-
      Enter summary text for the tile.
    category: Self-managed | Partner managed | Red Hat managed
- Modify the parameters in the code for your application.
  Tip: To see example YAML files, click Home → API Explorer, select OdhApplication, click the Instances tab, select an instance, and then click the YAML tab.
- Click Create. The application details page appears. You can also confirm the resource from the command line, as sketched after this procedure.
- Log in to OpenShift AI.
- In the left menu, click Applications → Explore.
- Locate the new tile for your application and click it.
- In the information pane for the application, click Enable.
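As a quick check from the command line (assuming the oc CLI and cluster-admin access), you can confirm that the resource you created is present and inspect its spec; examplename is the sample name used in the YAML above.

# Show the OdhApplication resource that backs the new tile
oc get odhapplication examplename -n redhat-ods-applications -o yaml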
Verification
- In the left menu of the OpenShift AI dashboard, click Applications → Enabled and verify that your application is available.
2.5. Hiding the default Jupyter application
The OpenShift AI dashboard includes Jupyter as an enabled application by default.
To hide the Jupyter tile from the list of Enabled applications, edit the dashboard configuration file.
Prerequisite
- You have cluster administrator privileges for your OpenShift cluster.
Procedure
- Log in to the OpenShift console as a cluster administrator.
- Open the dashboard configuration file:
  - In the Administrator perspective, click Home → API Explorer.
  - In the search bar, enter OdhDashboardConfig to filter by kind.
  - Click the OdhDashboardConfig custom resource (CR) to open the resource details page.
  - Select the redhat-ods-applications project from the Project list.
  - Click the Instances tab.
  - Click the odh-dashboard-config instance to open the details page.
  - Click the YAML tab.
- In the spec.notebookController section, set the value of enabled to false to hide the Jupyter tile from the list of Enabled applications. (A command-line alternative is sketched after this procedure.)
- Click Save to apply your changes and then click Reload to make sure that your changes are synced to the cluster.
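If you prefer the command line, a minimal sketch that makes the same change with the oc CLI (assuming cluster-admin access):

# Hide the Jupyter tile by setting spec.notebookController.enabled to false
oc patch odhdashboardconfig odh-dashboard-config -n redhat-ods-applications \
  --type merge -p '{"spec": {"notebookController": {"enabled": false}}}'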
Verification
- In the OpenShift AI dashboard, select Applications → Enabled. You should not see the Jupyter tile.
2.6. Troubleshooting common problems in Jupyter for administrators
If your users are experiencing errors in Red Hat OpenShift AI relating to Jupyter, their notebooks, or their notebook server, read this section to understand what could be causing the problem, and how to resolve the problem.
If you cannot see the problem here or in the release notes, contact Red Hat Support.
2.6.1. A user receives a 404: Page not found error when logging in to Jupyter
Problem
If you have configured specialized user groups for OpenShift AI, the user name might not be added to the default user group for OpenShift AI.
Diagnosis
Check whether the user is part of the default user group.
Find the names of groups allowed access to Jupyter.
- Log in to the OpenShift web console.
- Click User Management → Groups.
- Click the name of your user group, for example, rhoai-users. The Group details page for that group appears.
- Click the Details tab for the group and confirm that the Users section for the relevant group contains the users who have permission to access Jupyter.
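You can also check group membership from the command line. This is a sketch that assumes the oc CLI; rhoai-users is the example group name used above, so substitute the group that your deployment uses.

# List the OpenShift groups and their members
oc get groups

# Show the members of the example user group
oc get group rhoai-users -o yaml

# Add a missing user to the group (an alternative to the web console)
oc adm groups add-users rhoai-users <username>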
Resolution
- If the user is not added to any of the groups with permission access to Jupyter, follow Adding users to add them.
- If the user is already added to a group with permission to access Jupyter, contact Red Hat Support.
2.6.2. A user’s notebook server does not start
Problem
The OpenShift cluster that hosts the user’s notebook server might not have access to enough resources, or the Jupyter pod may have failed.
Diagnosis
- Log in to the OpenShift web console.
- Delete and restart the notebook server pod for this user:
  - Click Workloads → Pods and set the Project to rhods-notebooks.
  - Search for the notebook server pod that belongs to this user, for example, jupyter-nb-<username>-*.
    If the notebook server pod exists, an intermittent failure may have occurred in the notebook server pod.
    If the notebook server pod for the user does not exist, continue with diagnosis.
- Check the resources currently available in the OpenShift cluster against the resources required by the selected notebook server image. If worker nodes with sufficient CPU and RAM are available for scheduling in the cluster, continue with diagnosis. (A command-line sketch of these checks follows this list.)
- Check the state of the Jupyter pod.
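You can perform the same checks from the command line. A sketch, assuming the oc CLI and the rhods-notebooks project used above; the oc adm top command requires cluster metrics to be available.

# Look for the user's notebook server pod
oc get pods -n rhods-notebooks | grep jupyter-nb-<username>

# Show events and container state for a pod that exists but is not running
oc describe pod <pod-name> -n rhods-notebooks

# Check how much CPU and memory the worker nodes have available for scheduling
oc adm top nodes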
Resolution
If there was an intermittent failure of the notebook server pod:
- Delete the notebook server pod that belongs to the user.
- Ask the user to start their notebook server again.
- If the notebook server does not have sufficient resources to run the selected notebook server image, either add more resources to the OpenShift cluster, or choose a smaller image size.
If the Jupyter pod is in a FAILED state:
- Retrieve the logs for the jupyter-nb-* pod and send them to Red Hat Support for further evaluation (see the command-line sketch after this list).
- Delete the jupyter-nb-* pod.
- If none of the previous resolutions apply, contact Red Hat Support.
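A minimal command-line sketch of the log retrieval and pod deletion steps above; the pod name is a placeholder for the pod you identified during diagnosis.

# Save the pod logs so that you can attach them to a support case
oc logs <jupyter-nb-pod-name> -n rhods-notebooks > jupyter-nb.log

# Delete the failed pod so that the user can start their notebook server again
oc delete pod <jupyter-nb-pod-name> -n rhods-notebooks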
2.6.3. The user receives a database or disk is full error or a no space left on device error when they run notebook cells
Problem
The user might have run out of storage space on their notebook server.
Diagnosis
Log in to Jupyter and start the notebook server that belongs to the user having problems. If the notebook server does not start, follow these steps to check whether the user has run out of storage space:
- Log in to the OpenShift web console.
- Click Workloads → Pods and set the Project to rhods-notebooks.
- Click the notebook server pod that belongs to this user, for example, jupyter-nb-<idp>-<username>-*.
- Click Logs. The user has exceeded their available capacity if you see lines similar to the following example (a command-line check is sketched after the output):
Unexpected error while saving file: XXXX database or disk is full
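To perform the same check from a terminal, a sketch assuming the oc CLI and the pod name found in the previous step:

# Search the notebook server pod logs for storage errors
oc logs <jupyter-nb-pod-name> -n rhods-notebooks | grep -i -e "disk is full" -e "no space left"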
Resolution
- Increase the user's available storage by expanding their persistent volume. For more information, see Expanding persistent volumes.
- Work with the user to identify files that can be deleted from the /opt/app-root/src directory on their notebook server to free up their existing storage space.
  When you delete files using the JupyterLab file explorer, the files move to the hidden /opt/app-root/src/.local/share/Trash/files folder in the persistent storage for the notebook. To free up storage space for notebooks, you must permanently delete these files. (A command-line sketch for both resolutions follows this list.)
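The following sketch shows command-line equivalents for both resolutions: expanding the notebook's persistent volume claim and permanently deleting trashed files. The PVC name and the new size are placeholders, volume expansion only works when the storage class allows it, and the last two commands are run from a terminal inside the user's notebook.

# Find the persistent volume claim that backs the user's notebook
oc get pvc -n rhods-notebooks

# Request a larger volume (40Gi is an example size; the storage class must allow expansion)
oc patch pvc <pvc-name> -n rhods-notebooks --type merge \
  -p '{"spec": {"resources": {"requests": {"storage": "40Gi"}}}}'

# From a terminal inside the notebook: check usage, then permanently empty the JupyterLab trash
df -h /opt/app-root/src
rm -rf /opt/app-root/src/.local/share/Trash/files/*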