Preface


As a data scientist, you can organize your data science work into a single project. A project in OpenShift AI can consist of the following components:

Workbenches
Creating a workbench allows you to work with models in your preferred IDE, such as JupyterLab.
Cluster storage
For projects that require data retention, you can add cluster storage to the project.
Connections
Adding a connection to your project allows you to connect data inputs to your workbenches.
Pipelines
Pipelines standardize and automate machine learning workflows so that you can iterate on and deploy your data science models more efficiently.
Models and model servers
Deploy a trained data science model to serve intelligent applications. Your model is deployed with an endpoint that allows applications to send requests to the model.
Bias metrics for models
Creating bias metrics allows you to monitor your machine learning models for bias.
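To illustrate the model-serving component above: once a model is deployed with an endpoint, an application sends it an inference request over HTTP. The endpoint URL and the exact request schema depend on the model server you choose; the KServe v2-style payload below is a minimal sketch under that assumption, and the URL shown is purely illustrative.

```python
import json

# Illustrative endpoint only; the real inference URL comes from your
# deployed model's details in the OpenShift AI dashboard (assumption).
ENDPOINT = "https://example-model.apps.example.com/v2/models/my-model/infer"

def build_inference_request(rows):
    """Build a KServe v2-style inference payload (assumed request schema)."""
    return {
        "inputs": [
            {
                "name": "input-0",
                "shape": [len(rows), len(rows[0])],
                "datatype": "FP32",
                "data": rows,
            }
        ]
    }

payload = build_inference_request([[0.1, 0.2, 0.3]])
print(json.dumps(payload))
# An application would then POST this payload to ENDPOINT, for example with
# requests.post(ENDPOINT, json=payload,
#               headers={"Authorization": "Bearer <token>"})
```

The response typically echoes an `outputs` list with the model's predictions, which the application consumes directly.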