Working in your data science IDE



Working in your data science IDE from Red Hat OpenShift AI Cloud Service

Abstract

Prepare your data science integrated development environment (IDE) for developing machine learning models.

Preface

In Red Hat OpenShift AI, when you create a workbench, you select a workbench image that includes an integrated development environment (IDE) for developing your machine learning (ML) models.

You can use the following data science IDEs for developing ML models with OpenShift AI:

  • JupyterLab
  • code-server
  • RStudio Server (Technology Preview feature)

For information about RStudio Server, see the Release Notes.

Chapter 1. Accessing your workbench IDE

To access a workbench IDE, use the link provided in the OpenShift AI interface.

Prerequisite

  • You have created a data science project and a workbench.

Procedure

  1. From the OpenShift AI dashboard, click Data science projects.
  2. Click the name of the project that contains the workbench.
  3. Click the Workbenches tab.
  4. If the status of the workbench is Running, skip to the next step.

    If the status of the workbench is Stopped, in the Status column for the workbench, click Start.

    The Status column changes from Stopped to Starting when the workbench server is starting, and then to Running when the workbench has successfully started.

  5. Click the open icon next to the workbench.

Verification

  • A new browser window opens for the workbench IDE.

Chapter 2. Working in JupyterLab

JupyterLab is a web-based interactive development environment for Jupyter notebooks, code, and data. You can configure and arrange workflows in data science and machine learning. JupyterLab is an open source web application that supports over 40 programming languages, including Python and R.

2.1. Creating and importing Jupyter notebooks

You can create a blank Jupyter notebook or import a Jupyter notebook in JupyterLab from several different sources.

2.1.1. Creating a Jupyter notebook

You can create a Jupyter notebook from an existing notebook container image to access its resources and properties. The Workbench control panel contains a list of available container images that you can run as a single-user workbench.

Prerequisites

  • Ensure that you have logged in to Red Hat OpenShift AI.
  • Ensure that you have launched your workbench and logged in to JupyterLab.
  • The workbench image exists in a registry or image stream and is accessible.

Procedure

  1. Click File → New → Notebook.
  2. If prompted, select a kernel for your Jupyter notebook from the list.

    If you want to use a kernel, click Select. If you do not want to use a kernel, click No Kernel.

Verification

  • Check that the notebook file is visible in the JupyterLab interface.

2.1.2. Uploading an existing notebook file from local storage

You can load an existing notebook file from local storage into JupyterLab to continue work, or adapt a project for a new use case.

Prerequisites

  • Credentials for logging in to JupyterLab.
  • You have a launched and running workbench based on a JupyterLab image.
  • A notebook file exists in your local storage.

Procedure

  1. In the File Browser in the left sidebar of the JupyterLab interface, click the Upload Files icon.
  2. Locate and select the notebook file and then click Open.

    The file is displayed in the File Browser.

Verification

  • The notebook file is displayed in the File Browser in the left sidebar of the JupyterLab interface.
  • You can open the notebook file in JupyterLab.

2.2. Collaborating on notebooks by using Git

If your files are stored in Git version control, you can clone a Git repository to work with them in JupyterLab. When you are ready, you can push your changes back to the Git repository so that others can review or use your models.

2.2.1. Cloning a Git repository by using the JupyterLab interface

You can use the JupyterLab user interface to clone a Git repository into your workspace to continue your work or integrate files from an external project.

Prerequisites

  • You have a launched and running workbench based on a JupyterLab image.
  • Read access for the Git repository you want to clone.

Procedure

  1. Copy the HTTPS URL for the Git repository.

    • In GitHub, click Code, select HTTPS, and then click the Copy URL to clipboard icon.
    • In GitLab, click Code and then click the Copy URL icon under Clone with HTTPS.
  2. In the JupyterLab interface, click the Git Clone button.

    You can also click Git → Clone a repository in the menu, or click the Git icon and then click the Clone a repository button.

    The Clone a repo dialog opens.

  3. Enter the HTTPS URL of the repository that contains your notebook file.
  4. Click CLONE.
  5. If prompted, enter your username and password for the Git repository.

Verification

  • Check that the contents of the repository are visible in the file browser in JupyterLab, or run the ls command in the terminal to verify that the repository shows as a directory.

2.2.2. Cloning a Git repository by using the command line interface

You can use the command line interface to clone a Git repository into your workspace to continue your work or integrate files from an external project.

Prerequisites

  • You have a launched and running workbench based on a JupyterLab image.

Procedure

  1. Copy the HTTPS URL for the Git repository.

    • In GitHub, click Code, select HTTPS, and then click the Copy URL to clipboard icon.
    • In GitLab, click Code and then click the Copy URL icon under Clone with HTTPS.
  2. In JupyterLab, click File → New → Terminal to open a terminal window.
  3. Enter the git clone command:

    git clone <git-clone-URL>

    Replace <git-clone-URL> with the HTTPS URL, for example:

    [1234567890@jupyter-nb-jdoe ~]$ git clone https://github.com/example/myrepo.git
    Cloning into myrepo...
    remote: Enumerating objects: 11, done.
    remote: Counting objects: 100% (11/11), done.
    remote: Compressing objects: 100% (10/10), done.
    remote: Total 2821 (delta 1), reused 5 (delta 1), pack-reused 2810
    Receiving objects: 100% (2821/2821), 39.17 MiB | 23.89 MiB/s, done.
    Resolving deltas: 100% (1416/1416), done.

Verification

  • Check that the contents of the repository are visible in the file browser in JupyterLab, or run the ls command in the terminal to verify that the repository shows as a directory.

2.2.3. Pulling changes from a remote Git repository

You can pull changes made by other users into your data science project from a remote Git repository.

Prerequisites

  • You have a launched and running workbench based on a JupyterLab image.
  • You have credentials for logging in to Jupyter.
  • You have configured the remote Git repository.
  • You have permissions to pull files from the remote Git repository to your local repository.
  • You have imported the Git repository into JupyterLab, and the contents of the repository are visible in the file browser in JupyterLab.

Procedure

  1. In the JupyterLab interface, click the Git icon.
  2. Click the Pull latest changes button.

Verification

  • You can view the changes pulled from the remote repository on the History tab in the Git pane.
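The same pull operation can also be run from a terminal. The sketch below exercises the clone-and-pull flow end to end using a local bare repository in place of a hosted remote; all paths, the user identity, and the commit message are illustrative, not values from this product.

```shell
# Local end-to-end sketch of the clone-and-pull flow; a bare repository
# on disk stands in for the remote Git server (paths are illustrative).
git init -q --bare /tmp/remote.git
git clone -q /tmp/remote.git /tmp/work
cd /tmp/work
git config user.email "jdoe@example.com"   # identity needed to commit
git config user.name "J Doe"
git commit -q --allow-empty -m "initial commit"
git push -q -u origin HEAD                 # publish and set the upstream
git pull -q                                # fetch and merge remote changes
git log --oneline                          # pulled history is now local
```

In a workbench, the `git pull` step is what the Pull latest changes button performs for you.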

2.2.4. Pushing project changes to a Git repository

To build and deploy your application in a production environment, upload your work to a remote Git repository.

Prerequisites

  • You have opened a Jupyter notebook in the JupyterLab interface.
  • You have added the relevant Git repository to your workbench.
  • You have permission to push changes to the relevant Git repository.
  • You have installed the Git version control extension.

Procedure

  1. Click File → Save All to save any unsaved changes.
  2. Click the Git icon to open the Git pane in the JupyterLab interface.
  3. Confirm that your changed files appear under Changed.

    If your changed files appear under Untracked, click Git → Simple Staging to enable a simplified Git process.

  4. Commit your changes.

    1. Ensure that all files under Changed have a blue checkmark beside them.
    2. In the Summary field, enter a brief description of the changes you made.
    3. Click Commit.
  5. Click Git → Push to Remote to push your changes to the remote repository.
  6. When prompted, enter your Git credentials and click OK.

Verification

  • Your most recently pushed changes are visible in the remote Git repository.

2.3. Managing Python packages

In JupyterLab, you can view the Python packages that are installed on your workbench image and install additional packages.

2.3.1. Viewing installed Python packages

You can check which Python packages are installed on your workbench and which versions you have by running the pip tool in a notebook cell.

Prerequisites

  • Log in to JupyterLab and open a Jupyter notebook.

Procedure

  1. Enter the following in a new cell in your Jupyter notebook:

    !pip list
  2. Run the cell.

Verification

  • The output shows an alphabetical list of all installed Python packages and their versions. For example, if you use the pip list command immediately after creating a workbench that uses the Minimal image, the first packages shown are similar to the following:

    Package                           Version
    --------------------------------- ----------
    aiohttp                           3.7.3
    alembic                           1.5.2
    appdirs                           1.4.4
    argo-workflows                    3.6.1
    argon2-cffi                       20.1.0
    async-generator                   1.10
    async-timeout                     3.0.1
    attrdict                          2.0.1
    attrs                             20.3.0
    backcall                          0.2.0
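To check a single package rather than scanning the full listing, you can also run pip show. The sketch below uses pip itself as the example package because it is always present; substitute any package name from the pip list output.

```shell
# Show details (including the version) for one installed package.
# "pip" is used here only because it is always installed; substitute
# any package name from the pip list output.
pip show pip
```

In a notebook cell, prefix the command with an exclamation mark, as in `!pip show pip`.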

2.3.2. Installing Python packages

You can install Python packages that are not part of the default workbench image by adding the package and the version to a requirements.txt file and then running the pip install command in a notebook cell.

Note

Although you can install packages directly, it is recommended that you use a requirements.txt file so that the packages stated in the file can be easily re-used across different workbenches.
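If you already have a working environment, one way to produce such a requirements.txt file is to let pip record the versions that are currently installed. This is a sketch; the restore command is shown as a comment because you would run it on the new workbench, not the current one.

```shell
# Record the exact versions installed in the current environment so
# that another workbench can reproduce it.
pip freeze > requirements.txt

# On a new workbench, the same set of packages can then be restored with:
#   pip install -r requirements.txt
head -n 5 requirements.txt   # inspect the first few pinned entries
```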

Prerequisites

  • Log in to JupyterLab and open a Jupyter notebook.

Procedure

  1. Create a new text file using one of the following methods:

    • Click + to open a new launcher and then click Text file.
    • Click File → New → Text File.
  2. Rename the text file to requirements.txt.

    1. Right-click the name of the file and then click Rename. The Rename File dialog opens.
    2. Enter requirements.txt in the New Name field and then click Rename.
  3. Add the packages to install to the requirements.txt file.

    altair

    You can specify the exact version to install by using the == (equal to) operator, for example:

    altair==4.1.0
    Note

    Red Hat recommends specifying exact package versions to enhance the stability of your workbench over time. New package versions can introduce undesirable or unexpected changes in your environment’s behavior.

    To install multiple packages at the same time, place each package on a separate line.

  4. Install the packages in requirements.txt to your server by using a notebook cell.

    1. Create a new notebook cell and enter the following command:

      !pip install -r requirements.txt
    2. Run the cell by pressing Shift and Enter.
    Important

    The pip install command installs the package on your workbench. However, you must run the import statement in a code cell to use the package in your code.

    import altair

Verification

  • Run the !pip list command in a notebook cell and confirm that the packages that you installed appear in the output.

2.4. Troubleshooting common problems in Jupyter

If you are seeing errors in Red Hat OpenShift AI related to Jupyter, your Jupyter notebooks, or your workbench, read this section to understand what could be causing the problem.

If you cannot see your problem here or in the release notes, contact Red Hat Support.

I see a 403: Forbidden error when I log in to Jupyter

Problem

If your cluster administrator has configured OpenShift AI user groups, your username might not be added to the default user group or the default administrator group for OpenShift AI.

Resolution

Contact your cluster administrator so that they can add you to the correct group or groups.

My workbench does not start

Problem

The OpenShift cluster that hosts your workbench might not have access to enough resources, or the workbench pod might have failed.

Resolution

Check the logs in the Events section in OpenShift for error messages associated with the problem. For example:

Server requested
2021-10-28T13:31:29.830991Z [Warning] 0/7 nodes are available: 2 Insufficient memory,
2 node(s) had taint {node-role.kubernetes.io/infra: }, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: },
that the pod didn't tolerate.

Contact your cluster administrator with details of any relevant error messages so that they can perform further checks.

I see a database or disk is full error or a no space left on device error when I run my notebook cells

Problem

You might have run out of storage space on your workbench.

Resolution

Contact your cluster administrator so that they can perform further checks.

Chapter 3. Working in code-server

Code-server is a web-based interactive development environment that supports multiple programming languages, including Python, and that works with Jupyter notebooks. With the code-server workbench image, you can customize your workbench environment with a variety of extensions to add new languages, themes, and debuggers, and to connect to additional services. For more information, see code-server in GitHub.

Note

Elyra-based pipelines are not available with the code-server workbench image.

3.1. Creating code-server workbenches

To work in code-server, you first create a workbench that uses the code-server workbench image. You can then open existing notebook files in code-server from several different sources.

3.1.1. Creating a workbench

When you create a workbench, you specify an image (an IDE, packages, and other dependencies). You can also configure connections and cluster storage, and add container storage.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • You have created a project.
  • If you created a Simple Storage Service (S3) account outside of Red Hat OpenShift AI and you want to create connections to your existing S3 storage buckets, you have the following credential information for the storage buckets:

    • Endpoint URL
    • Access key
    • Secret key
    • Region
    • Bucket name

    For more information, see Working with data in an S3-compatible object store.

Procedure

  1. From the OpenShift AI dashboard, click Data science projects.

    The Data science projects page opens.

  2. Click the name of the project that you want to add the workbench to.

    A project details page opens.

  3. Click the Workbenches tab.
  4. Click Create workbench.

    The Create workbench page opens.

  5. In the Name field, enter a unique name for your workbench.
  6. Optional: If you want to change the default resource name for your workbench, click Edit resource name.

    The resource name is what your resource is labeled in OpenShift. Valid characters include lowercase letters, numbers, and hyphens (-). The resource name cannot exceed 30 characters, and it must start with a letter and end with a letter or number.

    Note: You cannot change the resource name after the workbench is created. You can edit only the display name and the description.

  7. Optional: In the Description field, enter a description for your workbench.
  8. In the Workbench image section, complete the fields to specify the workbench image to use with your workbench.

    From the Image selection list, select a workbench image that suits your use case. A workbench image includes an IDE and Python packages (reusable code). If project-scoped images exist, the Image selection list includes subheadings to distinguish between global images and project-scoped images.

    Optionally, click View package information to view a list of packages that are included in the image that you selected.

    If the workbench image has multiple versions available, select the workbench image version to use from the Version selection list. To use the latest package versions, Red Hat recommends that you use the most recently added image.

    Note

    You can change the workbench image after you create the workbench.

  9. In the Deployment size section, select one of the following options, depending on whether the hardware profiles feature is enabled.

    Important

    The hardware profiles feature is currently available in Red Hat OpenShift AI as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

    • If the hardware profiles feature is not enabled:

      1. From the Container size list, select the appropriate container size for the model that you want to train or tune.

        For example, to run the example fine-tuning job described in Fine-tuning a model by using Kubeflow Training, select Medium.

      2. From the Accelerator list, select a suitable accelerator profile for your workbench.

        If project-scoped accelerator profiles exist, the Accelerator list includes subheadings to distinguish between global accelerator profiles and project-scoped accelerator profiles.

    • If the hardware profiles feature is enabled:

      1. From the Hardware profile list, select a suitable hardware profile for your workbench.

        If project-scoped hardware profiles exist, the Hardware profile list includes subheadings to distinguish between global hardware profiles and project-scoped hardware profiles.

        The hardware profile specifies the number of CPUs and the amount of memory allocated to the container, setting the guaranteed minimum (request) and maximum (limit) for both.

      2. If you want to change the default values, click Customize resource requests and limit and enter new minimum (request) and maximum (limit) values.

        Important

        By default, the hardware profiles feature is not enabled: hardware profiles are not shown in the dashboard navigation menu or elsewhere in the user interface. In addition, user interface components associated with the deprecated accelerator profiles functionality are still displayed. To show the Settings → Hardware profiles option in the dashboard navigation menu, and the user interface components associated with hardware profiles, set the disableHardwareProfiles value to false in the OdhDashboardConfig custom resource (CR) in OpenShift. For more information about setting dashboard configuration options, see Customizing the dashboard.

  10. Optional: In the Environment variables section, select and specify values for any environment variables.

    Setting environment variables during the workbench configuration helps you save time later because you do not need to define them in the body of your workbenches, or with the IDE command line interface.

    If you are using S3-compatible storage, add these recommended environment variables:

    • AWS_ACCESS_KEY_ID specifies your Access Key ID for Amazon Web Services.
    • AWS_SECRET_ACCESS_KEY specifies your Secret access key for the account specified in AWS_ACCESS_KEY_ID.

    OpenShift AI stores the credentials as Kubernetes secrets in a protected namespace if you select Secret when you add the variable.

  11. In the Cluster storage section, configure the storage for your workbench. Select one of the following options:

    • Create new persistent storage to create storage that is retained after you shut down your workbench. Complete the relevant fields to define the storage:

      1. Enter a name for the cluster storage.
      2. Enter a description for the cluster storage.
      3. Select a storage class for the cluster storage.

        Note

        You cannot change the storage class after you add the cluster storage to the workbench.

      4. For storage classes that support multiple access modes, select an Access mode to define how the volume can be accessed. For more information, see About persistent storage.

        Only the access modes that have been enabled for the storage class by your cluster and OpenShift AI administrators are visible.

      5. Under Persistent storage size, enter a new size in gibibytes or mebibytes.
    • Use existing persistent storage to reuse existing storage and select the storage from the Persistent storage list.
  12. Optional: You can add a connection to your workbench. A connection is a resource that contains the configuration parameters needed to connect to a data source or sink, such as an object storage bucket. You can use storage buckets for storing data, models, and pipeline artifacts. You can also use a connection to specify the location of a model that you want to deploy.

    In the Connections section, use an existing connection or create a new connection:

    • Use an existing connection as follows:

      1. Click Attach existing connections.
      2. From the Connection list, select a connection that you previously defined.
    • Create a new connection as follows:

      1. Click Create connection. The Add connection dialog opens.
      2. From the Connection type drop-down list, select the type of connection. The Connection details section is displayed.
      3. If you selected S3 compatible object storage in the preceding step, configure the connection details:

        1. In the Connection name field, enter a unique name for the connection.
        2. Optional: In the Description field, enter a description for the connection.
        3. In the Access key field, enter the access key ID for the S3-compatible object storage provider.
        4. In the Secret key field, enter the secret access key for the S3-compatible object storage account that you specified.
        5. In the Endpoint field, enter the endpoint of your S3-compatible object storage bucket.
        6. In the Region field, enter the default region of your S3-compatible object storage account.
        7. In the Bucket field, enter the name of your S3-compatible object storage bucket.
        8. Click Create.
      4. If you selected URI in the preceding step, configure the connection details:

        1. In the Connection name field, enter a unique name for the connection.
        2. Optional: In the Description field, enter a description for the connection.
        3. In the URI field, enter the Uniform Resource Identifier (URI).
        4. Click Create.
  13. Click Create workbench.

Verification

  • The workbench that you created is visible on the Workbenches tab for the project.
  • Any cluster storage that you associated with the workbench during the creation process is displayed on the Cluster storage tab for the project.
  • The Status column on the Workbenches tab displays a status of Starting when the workbench server is starting, and Running when the workbench has successfully started.
  • Optional: Click the open icon to open the IDE in a new window.
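If you added the recommended S3 environment variables during workbench creation, you can confirm from a terminal in the running workbench that client code sees them. The values below are placeholders for illustration, not real credentials; on a real workbench, OpenShift AI injects the values you configured.

```shell
# Placeholder values for illustration only; on a real workbench these
# variables are injected from the configuration you entered.
export AWS_ACCESS_KEY_ID="example-access-key-id"
export AWS_SECRET_ACCESS_KEY="example-secret-access-key"

# S3 client libraries such as boto3 read these variables automatically,
# so notebook code does not need hard-coded credentials:
python3 -c 'import os; print("credentials set:", bool(os.environ.get("AWS_ACCESS_KEY_ID")))'
```

The final command prints `credentials set: True` when the variables are present in the environment.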

3.1.2. Opening an existing notebook file in code-server

You can load an existing notebook file from local storage into code-server to continue work, or adapt a project for a new use case.

Prerequisites

  • You have a running code-server workbench.
  • You have a notebook file in your local storage.

Procedure

  1. In your code-server window, from the Activity Bar, select the menu icon → File → Open File.
  2. In the Open File dialog, click the Show Local button.
  3. Locate and select the notebook file and then click Open.

    The file is displayed in the code-server window.

  4. Save the file and then push the changes to your repository.

Verification

  • The notebook file is displayed in the code-server Explorer view.
  • You can open the notebook file in the code-server window.

3.2. Collaborating on notebooks by using Git

If your files are stored in Git version control, you can clone a Git repository to work with them in code-server. When you are ready, you can push your changes back to the Git repository so that others can review or use your models.

3.2.1. Cloning a Git repository by using the code-server interface

You can use the code-server user interface to clone a Git repository into your workspace to continue your work or integrate files from an external project.

Prerequisites

  • You have a running code-server workbench.
  • You have read access for the Git repository you want to clone.

Procedure

  1. Copy the HTTPS URL for the Git repository.

    • In GitHub, click Code, select HTTPS, and then click the Copy URL to clipboard icon.
    • In GitLab, click Code and then click the Copy URL icon under Clone with HTTPS.
  2. In your code-server window, from the Activity Bar, select the menu icon → View → Command Palette.
  3. In the Command Palette, enter Git: Clone, and then select Git: Clone from the list.
  4. Paste the HTTPS URL of the repository that contains your notebook file, and then press Enter.
  5. If prompted, enter your username and password for the Git repository.
  6. Select a folder to clone the repository into, and then click OK.
  7. When the repository is cloned, a dialog opens asking if you want to open the cloned repository. Click Open in the dialog.

Verification

  • Check that the contents of the repository are visible in the code-server Explorer view, or run the ls command in the terminal to verify that the repository shows as a directory.

3.2.2. Cloning a Git repository by using the command line interface

You can use the command line interface to clone a Git repository into your workspace to continue your work or integrate files from an external project.

Prerequisites

  • You have a running code-server workbench.

Procedure

  1. Copy the HTTPS URL for the Git repository.

    • In GitHub, click Code, select HTTPS, and then click the Copy URL to clipboard icon.
    • In GitLab, click Code and then click the Copy URL icon under Clone with HTTPS.
  2. In your code-server window, from the Activity Bar, select the menu icon → Terminal → New Terminal to open a terminal window.
  3. Enter the git clone command:

    git clone <git-clone-URL>

    Replace <git-clone-URL> with the HTTPS URL, for example:

    $ git clone https://github.com/example/myrepo.git
    Cloning into myrepo...
    remote: Enumerating objects: 11, done.
    remote: Counting objects: 100% (11/11), done.
    remote: Compressing objects: 100% (10/10), done.
    remote: Total 2821 (delta 1), reused 5 (delta 1), pack-reused 2810
    Receiving objects: 100% (2821/2821), 39.17 MiB | 23.89 MiB/s, done.
    Resolving deltas: 100% (1416/1416), done.

Verification

  • Check that the contents of the repository are visible in the code-server Explorer view, or run the ls command in the terminal to verify that the repository shows as a directory.

3.2.3. Pulling changes from a remote Git repository

You can pull changes made by other users into your workbench from a remote Git repository.

Prerequisites

  • You have configured the remote Git repository.
  • You have imported the Git repository into code-server, and the contents of the repository are visible in the Explorer view in code-server.
  • You have permissions to pull files from the remote Git repository to your local repository.
  • You have a running code-server workbench.

Procedure

  1. In your code-server window, from the Activity Bar, click the Source Control icon.
  2. Click the Views and More Actions button (…), and then select Pull.

Verification

  • You can view the changes pulled from the remote repository in the Source Control pane.

3.2.4. Pushing project changes to a Git repository

To build and deploy your application in a production environment, upload your work to a remote Git repository.

Prerequisites

  • You have a running code-server workbench.
  • You have added the relevant Git repository in code-server.
  • You have permission to push changes to the relevant Git repository.
  • You have installed the Git version control extension.

Procedure

  1. In your code-server window, from the Activity Bar, select the menu icon → File → Save All to save any unsaved changes.
  2. Click the Source Control icon to open the Source Control pane.
  3. Confirm that your changed files appear under Changes.
  4. Next to the Changes heading, click the Stage All Changes button (+).

    The staged files move to the Staged Changes section.

  5. In the Message field, enter a brief description of the changes you made.
  6. Next to the Commit button, click the More Actions (…) button, and then click Commit & Sync.
  7. If prompted, enter your Git credentials and click OK.

Verification

  • Your most recently pushed changes are visible in the remote Git repository.
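The stage, commit, and push steps above can also be run from a code-server terminal. The sketch below demonstrates the flow locally, with a bare repository on disk standing in for the hosted remote; all paths, the user identity, the file name, and the commit message are illustrative.

```shell
# Local sketch of the stage-commit-push flow; a bare repository on
# disk stands in for the hosted remote (paths are illustrative).
git init -q --bare /tmp/origin.git
git clone -q /tmp/origin.git /tmp/project
cd /tmp/project
git config user.email "jdoe@example.com"   # identity needed to commit
git config user.name "J Doe"
echo "print('hello')" > analysis.py
git add analysis.py                        # stage the change
git commit -q -m "Add analysis script"     # commit with a summary message
git push -q origin HEAD                    # push, as Commit & Sync does
git --git-dir=/tmp/origin.git log --oneline   # commit now on the remote
```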

3.3. Managing Python packages in code-server

In code-server, you can view the Python packages that are installed on your workbench image and install additional packages.

3.3.1. Viewing installed Python packages

You can check which Python packages are installed on your workbench and which versions you have by running the pip tool in a terminal window.

Prerequisites

  • You have a running code-server workbench.

Procedure

  1. In your code-server window, from the Activity Bar, select the menu icon → Terminal → New Terminal to open a terminal window.
  2. Enter the pip list command.

    pip list

Verification

  • The output shows an alphabetical list of all installed Python packages and their versions. For example, if you use the pip list command immediately after creating a workbench that uses the Minimal image, the first packages shown are similar to the following:

    Package                  Version
    ------------------------ ----------
    asttokens                2.4.1
    boto3                    1.34.162
    botocore                 1.34.162
    cachetools               5.5.0
    certifi                  2024.8.30
    charset-normalizer       3.4.0
    comm                     0.2.2
    contourpy                1.3.0
    cycler                   0.12.1
    debugpy                  1.8.7

3.3.2. Installing Python packages

You can install Python packages that are not part of the default workbench image by adding the package and the version to a requirements.txt file and then running the pip install command in a terminal window.

Note

Although you can install packages directly, it is recommended that you use a requirements.txt file so that the packages stated in the file can be easily re-used across different workbenches.

Prerequisites

  • You have a running code-server workbench.

Procedure

  1. In your code-server window, from the Activity Bar, select the menu icon → File → New Text File to create a new text file.
  2. Add the packages to install to the text file.

    altair

    You can specify the exact version to install by using the == (equal to) operator, for example:

    altair==4.1.0
    Note

    Red Hat recommends specifying exact package versions to enhance the stability of your workbench over time. New package versions can introduce undesirable or unexpected changes in your environment’s behavior.

    To install multiple packages at the same time, place each package on a separate line.

  3. Save the text file as requirements.txt.
  4. From the Activity Bar, select the menu icon → Terminal → New Terminal to open a terminal window.
  5. Install the packages in requirements.txt to your server by using the following command:

    pip install -r requirements.txt
    Important

    The pip install command installs the package in your workbench environment. However, you must still import the package in your code before you can use it:

    import altair

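To confirm in code that a newly installed package is importable, you can run a short check in a notebook cell or the Python interpreter. This is a minimal, standard-library-only sketch; altair is used purely as an example package name, and package_available is a hypothetical helper defined here, not part of any library:

```python
import importlib.metadata
import importlib.util


def package_available(name):
    """Return True if the named package can be imported in this environment."""
    return importlib.util.find_spec(name) is not None


# "altair" is an example; substitute any package from your requirements.txt.
if package_available("altair"):
    print("altair", importlib.metadata.version("altair"))
else:
    print("altair is not installed")
```

A check like this is handy in shared notebooks, because it fails gracefully when the notebook is opened in a workbench where the package has not yet been installed.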
Verification

  • In a terminal window, run the pip list command and confirm that the packages listed in your requirements.txt file, at the versions that you specified, appear in the output.

3.4. Installing extensions with code-server

With the code-server workbench image, you can customize your code-server environment by using extensions to add new languages, themes, and debuggers, and to connect to additional services. You can also enhance the efficiency of your data science work with extensions for syntax highlighting, auto-indentation, and bracket matching.

For details about the third-party extensions that you can install with code-server, see the Open VSX Registry.

Prerequisites

  • You are logged in to Red Hat OpenShift AI.
  • You have created a data science project that has a code-server workbench.

Procedure

  1. From the OpenShift AI dashboard, click Data science projects.

    The Data science projects page opens.

  2. Click the name of the project containing the code-server workbench you want to start.

    A project details page opens.

  3. Click the Workbenches tab.
  4. If the status of the workbench that you want to use is Running, skip to the next step.

    If the status of the workbench is Stopped, in the Status column for the workbench, click Start.

    The Status column changes from Stopped to Starting when the workbench server is starting, and then to Running when the workbench has successfully started.

  5. Click the open icon ( The open icon ) next to the workbench.

    The code-server window opens.

  6. In the Activity Bar, click the Extensions icon ( Extensions icon ).
  7. Search for the name of the extension you want to install.
  8. Click Install to add the extension to your code-server environment.

Verification

  • In the Browser - Installed list on the Extensions panel, confirm that the extension you installed is listed.

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.