Chapter 5. Implementing pipelines

Important

If you are using OpenShift AI in a disconnected environment, you might experience SSL certificate validation issues. For information on how to resolve these issues, see Workbench workaround for executing a pipeline using Elyra.

5.1. Automating workflows with data science pipelines

In previous sections of this tutorial, you used a notebook to train and save your model. Optionally, you can automate these tasks by using Red Hat OpenShift AI pipelines. Pipelines offer a way to automate the execution of multiple notebooks and Python code. By using pipelines, you can execute long training jobs or retrain your models on a schedule without having to manually run them in a notebook.

In this section, you create a simple pipeline by using the GUI pipeline editor. The pipeline uses the notebook that you used in previous sections to train a model and then save it to S3 storage.

Your completed pipeline should look like the one in the 6 Train Save.pipeline file.

Note: You can run and use the provided 6 Train Save.pipeline file as is. However, to explore the pipeline editor, complete the steps in the following procedure to create your own pipeline.

5.1.1. Create a pipeline

  1. Open your workbench’s JupyterLab environment. If the launcher is not visible, click + to open it.

    Pipeline buttons
  2. Click Pipeline Editor.

    Pipeline Editor button

    You’ve created a blank pipeline!

  3. Set the default runtime image for when you run your notebook or Python code.

    1. In the pipeline editor, click Open Panel

      Open Panel
    2. Select the Pipeline Properties tab.

      Pipeline Properties Tab
    3. In the Pipeline Properties panel, scroll down to Generic Node Defaults and Runtime Image. Set the value to Tensorflow with Cuda and Python 3.9 (UBI 9).

      Pipeline Runtime Image
  4. Save the pipeline.

5.1.2. Add nodes to your pipeline

Add some steps, or nodes, to your pipeline. Your two nodes use the 1_experiment_train.ipynb and 2_save_model.ipynb notebooks.

  1. From the file-browser panel, drag the 1_experiment_train.ipynb and 2_save_model.ipynb notebooks onto the pipeline canvas.

    Drag and Drop Notebooks
  2. Click the output port of 1_experiment_train.ipynb and drag a connecting line to the input port of 2_save_model.ipynb.

    Connect Nodes
  3. Save the pipeline.

5.1.3. Specify the training file as a dependency

Set node properties to specify the training file as a dependency.

Note: If you don’t set this file dependency, the file is not included in the node when it runs and the training job fails.

  1. Click the 1_experiment_train.ipynb node.

    Select Node 1
  2. In the Properties panel, click the Node Properties tab.
  3. Scroll down to the File Dependencies section and then click Add.

    Add File Dependency
  4. Set the value to data/card_transdata.csv, which contains the data to train your model.
  5. Select the Include Subdirectories option and then click Add.

    Set File Dependency Value
  6. Save the pipeline.

5.1.4. Create and store the ONNX-formatted output file

In node 1, the notebook creates the models/fraud/1/model.onnx file. In node 2, the notebook uploads that file to the S3 storage bucket. You must set the models/fraud/1/model.onnx file as the output file for both nodes.

  1. Select node 1 and then select the Node Properties tab.
  2. Scroll down to the Output Files section, and then click Add.
  3. Set the value to models/fraud/1/model.onnx and then click Add.

    Set file dependency value
  4. Repeat steps 1-3 for node 2.
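The handoff between the two nodes can be sketched in plain Python. This is not code from the tutorial's notebooks; it is only a minimal illustration of why both nodes must declare the same versioned path (models/&lt;name&gt;/&lt;version&gt;/) as an output file:

```python
import os

# Node 1 writes the model artifact to a versioned path; node 2 reads
# the same path. Declaring it as an Output File in both nodes is what
# lets the pipeline carry the file between the node containers.
MODEL_PATH = "models/fraud/1/model.onnx"

# Ensure the versioned directory (models/<name>/<version>/) exists
# before the training code saves the model there.
os.makedirs(os.path.dirname(MODEL_PATH), exist_ok=True)

# Placeholder for the real ONNX export in 1_experiment_train.ipynb;
# here we just create an empty file to illustrate the handoff.
with open(MODEL_PATH, "wb") as f:
    f.write(b"")

print(os.path.exists(MODEL_PATH))
```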

5.1.5. Configure the data connection to the S3 storage bucket

In node 2, the notebook uploads the model to the S3 storage bucket.

You must set the S3 storage bucket keys by using the secret created by the My Storage data connection that you set up in the Storing data with data connections section of this tutorial.

You can use this secret in your pipeline nodes without having to save the information in your pipeline code. This is important if, for example, you want to save your pipelines to source control without including any secret keys.

The secret is named aws-connection-my-storage.

Note

If you named your data connection something other than My Storage, you can obtain the secret name in the OpenShift AI dashboard by hovering over the resource information icon ? in the Data Connections tab.

My Storage Secret Name

The aws-connection-my-storage secret includes the following fields:

  • AWS_ACCESS_KEY_ID
  • AWS_DEFAULT_REGION
  • AWS_S3_BUCKET
  • AWS_S3_ENDPOINT
  • AWS_SECRET_ACCESS_KEY

You must set the secret name and key for each of these fields.
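On the code side, each secret key that you map in the following procedure arrives in the node's container as an environment variable. The following is a minimal sketch, not the tutorial's actual notebook code, of how 2_save_model.ipynb can pick up those values; the s3_config name is illustrative:

```python
import os

# When the pipeline runs, each mapped Kubernetes secret key is injected
# into the node's container as an environment variable, so the notebook
# never hard-codes credentials.
s3_config = {
    "aws_access_key_id": os.environ.get("AWS_ACCESS_KEY_ID"),
    "aws_secret_access_key": os.environ.get("AWS_SECRET_ACCESS_KEY"),
    "endpoint_url": os.environ.get("AWS_S3_ENDPOINT"),
    "region_name": os.environ.get("AWS_DEFAULT_REGION"),
}
bucket = os.environ.get("AWS_S3_BUCKET")

# The save notebook can then pass these values to an S3 client, e.g.:
#   boto3.client("s3", **s3_config).upload_file(local_path, bucket, key)
print(sorted(s3_config))
```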

Procedure

  1. Remove any pre-filled environment variables.

    1. Select node 2, and then select the Node Properties tab.

      Under Additional Properties, note that some environment variables have been pre-filled. The pipeline editor inferred that you’d need them from the notebook code.

      Because you don’t want to save these values in your pipelines, remove all of the pre-filled environment variables.

    2. Click Remove for each of the pre-filled environment variables.

      Remove Env Var
  2. Add the S3 bucket and keys by using the Kubernetes secret.

    1. Under Kubernetes Secrets, click Add.

      Add Kube Secret
    2. Enter the following values and then click Add.

      • Environment Variable: AWS_ACCESS_KEY_ID
      • Secret Name: aws-connection-my-storage
      • Secret Key: AWS_ACCESS_KEY_ID

        Secret Form
    3. Repeat steps 2a and 2b for each of the following Kubernetes secrets:

      • Environment Variable: AWS_SECRET_ACCESS_KEY

        • Secret Name: aws-connection-my-storage
        • Secret Key: AWS_SECRET_ACCESS_KEY
      • Environment Variable: AWS_S3_ENDPOINT

        • Secret Name: aws-connection-my-storage
        • Secret Key: AWS_S3_ENDPOINT
      • Environment Variable: AWS_DEFAULT_REGION

        • Secret Name: aws-connection-my-storage
        • Secret Key: AWS_DEFAULT_REGION
      • Environment Variable: AWS_S3_BUCKET

        • Secret Name: aws-connection-my-storage
        • Secret Key: AWS_S3_BUCKET
  3. Save and rename the .pipeline file.

5.1.6. Run the pipeline

Upload the pipeline to your cluster and run it. You can do so directly from the pipeline editor. You can use your own newly created pipeline or the provided 6 Train Save.pipeline file.

Procedure

  1. Click the play button in the toolbar of the pipeline editor.

    Pipeline Run Button
  2. Enter a name for your pipeline.
  3. Verify that the Runtime Configuration is set to Data Science Pipeline.
  4. Click OK.

    Note

    If Data Science Pipeline is not available as a runtime configuration, you might have created your notebook before the pipeline server was available. To resolve this issue, restart your notebook after the pipeline server has been created in your data science project.

  5. Return to your data science project and expand the newly created pipeline.

    dsp pipeline complete
  6. Click View runs and then view the pipeline run in progress.

    pipeline run complete

The result should be a models/fraud/1/model.onnx file in your S3 bucket, which you can serve just as you did manually in the Preparing a model for deployment section.

5.2. Running a data science pipeline generated from Python code

In the previous section, you created a simple pipeline by using the GUI pipeline editor. It’s often desirable to create pipelines by using code that can be version-controlled and shared with others. The kfp-tekton SDK provides a Python API for creating pipelines. The SDK is available as a Python package that you can install by using the pip install kfp-tekton~=1.5.9 command. With this package, you can use Python code to create a pipeline and then compile it to Tekton YAML. Then you can import the YAML code into OpenShift AI.

This tutorial does not delve into the details of how to use the SDK. Instead, it provides the files for you to view and upload.

  1. Optionally, view the provided Python code in your Jupyter environment by navigating to the fraud-detection-notebooks project’s pipeline directory. It contains the following files:

    • 6_get_data_train_upload.py is the main pipeline code.
    • get_data.py, train_model.py, and upload.py are the three components of the pipeline.
    • build.sh is a script that builds the pipeline and creates the YAML file.

      For your convenience, the output of the build.sh script is provided in the 7_get_data_train_upload.yaml file, which is located in the top-level fraud-detection directory.

  2. Right-click the 7_get_data_train_upload.yaml file and then click Download.
  3. Upload the 7_get_data_train_upload.yaml file to OpenShift AI.

    1. In the OpenShift AI dashboard, navigate to your data science project page and then click Import pipeline.

      dsp pipeline import
    2. Enter values for Pipeline name and Pipeline description.
    3. Click Upload and then select 7_get_data_train_upload.yaml from your local files to upload the pipeline.

      dsp pipeline import upload
    4. Click Import pipeline to import and save the pipeline.

      The pipeline appears in the list of pipelines.

  4. Expand the pipeline item and then click View runs.

    dsp pipeline view runs
  5. Click Create run.
  6. On the Create run page, provide the following values:

    1. For Name, type any name, for example Run 1.
    2. For Pipeline, select the pipeline that you uploaded.

      You can leave the other fields with their default values.

      Create Pipeline Run form
  7. Click Create to create the run.

    A new run starts immediately and opens the run details page.

    pipeline run in progress

There you have it: a pipeline created in Python that is running in OpenShift AI.
