Chapter 3. Product features


Red Hat OpenShift Data Science provides several features for data scientists and IT operations administrators.

3.1. Features for data scientists

Containers
While tools like JupyterLab already offer intuitive ways for data scientists to develop models on their machines, there are always inherent complexities involved with collaboration and sharing work. Moreover, using specialized hardware such as powerful GPUs can be very expensive when you have to buy and maintain your own. The Jupyter environment that is included with OpenShift Data Science lets you take your development environment anywhere you need it to be. Because all of the workloads are run as containers, collaboration is as easy as sharing an image with your team members, or even simply adding it to the list of default containers that they can use. GPUs and large amounts of memory suddenly become a lot more accessible, too, since you are no longer limited by what your laptop can support.
Integration with third-party machine learning tools
We have all run into situations where our favorite tools or services don’t play well with one another. OpenShift Data Science is designed with flexibility in mind. You can use a wide range of open source and third-party tools with OpenShift Data Science. These tools support the complete machine learning lifecycle, from data engineering and feature extraction to model deployment and management.
Collaboration on notebooks with Git
Use Jupyter’s Git interface to work collaboratively with others, and keep track of the changes to your code.
Securely built notebook images
Choose from a default set of notebook images that are pre-configured with the tools and libraries that you need for model development. Software stacks, especially those involved in machine learning, tend to be complex systems. There are numerous modules and libraries in the Python ecosystem that can be used, so determining which versions of what libraries to use can be very challenging. OpenShift Data Science comes with many packaged notebook images that have been built with insight from data scientists and recommendation engines. You can start new projects quickly on the right foot without worrying about downloading unproven and possibly insecure images from random upstream repositories.
Custom notebooks
In addition to notebook images provided and supported by Red Hat and independent software vendors (ISVs), you can configure custom notebook images that cater to your project’s specific requirements.
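Custom notebook images are typically imported through the OpenShift Data Science dashboard, but an administrator can also register them programmatically. The following sketch uses the Kubernetes Python client to create an OpenShift ImageStream that points to a custom image in an external registry; the namespace, label, and image reference shown here are illustrative assumptions and may differ in your deployment.

from kubernetes import client, config

# Authenticate against the cluster using your local kubeconfig.
config.load_kube_config()
api = client.CustomObjectsApi()

# Hypothetical ImageStream for a custom notebook image. The namespace and
# the label used for notebook-image discovery are assumptions; check the
# values that your deployment expects.
image_stream = {
    "apiVersion": "image.openshift.io/v1",
    "kind": "ImageStream",
    "metadata": {
        "name": "my-custom-notebook",
        "namespace": "redhat-ods-applications",
        "labels": {"opendatahub.io/notebook-image": "true"},
    },
    "spec": {
        "tags": [
            {
                "name": "1.0",
                "from": {
                    "kind": "DockerImage",
                    "name": "quay.io/example/my-custom-notebook:1.0",
                },
            }
        ]
    },
}

api.create_namespaced_custom_object(
    group="image.openshift.io",
    version="v1",
    namespace="redhat-ods-applications",
    plural="imagestreams",
    body=image_stream,
)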
Data science pipelines
OpenShift Data Science supports data science pipelines for a mature and efficient way of running your data science workloads. You can standardize and automate machine learning workflows to develop and deploy your data science models.
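Data science pipelines in OpenShift Data Science are based on Kubeflow Pipelines, so workflows can be written with the kfp SDK. The following sketch defines a minimal single-step pipeline from a Python function; it assumes a kfp v1-style SDK, and the function body and base image are placeholders.

from kfp import dsl
from kfp.components import create_component_from_func


def train_model(epochs: int) -> str:
    """Placeholder training step; replace with your own training logic."""
    print(f"Training for {epochs} epochs")
    return "trained-model"


# Wrap the function as a pipeline component that runs in a container.
# The base image is an example; choose one with your dependencies installed.
train_op = create_component_from_func(train_model, base_image="python:3.9")


@dsl.pipeline(name="example-training-pipeline",
              description="Minimal single-step pipeline sketch")
def training_pipeline(epochs: int = 10):
    train_op(epochs=epochs)

Compiling this pipeline (for example, with the kfp-tekton compiler on a Tekton-backed cluster) produces a definition that you can import and schedule from the OpenShift Data Science dashboard.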
Model serving
As a data scientist, you can deploy your trained machine-learning models to serve intelligent applications in production. Deploying or serving a model makes the model’s functions available as a service endpoint that can be used for testing or integration into applications. You have a high degree of control over how this serving is performed.
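For example, a model deployed with the default OpenVINO Model Server runtime exposes a REST endpoint that follows the KServe v2 inference protocol. The following sketch sends a single prediction request with the requests library; the URL, model name, tensor name, shape, and data are placeholders that depend on your deployment.

import requests

# Hypothetical inference endpoint; copy the real URL and model name from
# your deployed model's details in the OpenShift Data Science dashboard.
INFER_URL = "https://my-model-route.apps.example.com/v2/models/my-model/infer"

payload = {
    "inputs": [
        {
            "name": "input-0",        # must match the model's input tensor name
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [5.1, 3.5, 1.4, 0.2],
        }
    ]
}

response = requests.post(INFER_URL, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["outputs"])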

3.2. Features for IT operations administrators

Manage users with an identity provider
OpenShift Data Science supports the same authentication systems as your OpenShift cluster. By default, OpenShift Data Science is accessible to all users listed in your identity provider and those users do not need a separate set of credentials to access OpenShift Data Science. Optionally, you can limit the set of users who have access by creating an OpenShift group that specifies a subset of users. You can also create an OpenShift group that identifies the list of users who have administrator access to OpenShift Data Science.
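For example, a group that restricts access can be created with standard OpenShift tooling. The following sketch uses the Kubernetes Python client to create an OpenShift Group resource; the group and user names are examples, and the group names that OpenShift Data Science recognizes are configured separately in its user management settings.

from kubernetes import client, config

# Authenticate against the cluster using your local kubeconfig.
config.load_kube_config()
api = client.CustomObjectsApi()

# Hypothetical group of users who are allowed to access OpenShift Data Science.
# Group resources are cluster-scoped; the name and member list are examples.
group = {
    "apiVersion": "user.openshift.io/v1",
    "kind": "Group",
    "metadata": {"name": "data-science-users"},
    "users": ["alice", "bob"],
}

api.create_cluster_custom_object(
    group="user.openshift.io",
    version="v1",
    plural="groups",
    body=group,
)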
Manage resources with OpenShift
Use your existing OpenShift knowledge to configure and manage resources for your OpenShift Data Science users.
Control Red Hat usage data collection
Choose whether to allow Red Hat to collect data about OpenShift Data Science usage in your cluster. Usage data collection is enabled by default when you install OpenShift Data Science on your OpenShift cluster.
Apply autoscaling to your cluster to reduce usage costs
Use the cluster autoscaler to adjust the size of your cluster to meet its current needs and optimize costs.
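As an illustration, node autoscaling on OpenShift is typically configured with a ClusterAutoscaler plus one MachineAutoscaler per machine set. The following sketch uses the Kubernetes Python client to create a MachineAutoscaler; the machine set name and replica bounds are placeholders, and you should verify the resource fields against your cluster version.

from kubernetes import client, config

# Authenticate against the cluster using your local kubeconfig.
config.load_kube_config()
api = client.CustomObjectsApi()

# Hypothetical MachineAutoscaler that lets a GPU machine set grow and shrink
# between 1 and 4 machines; replace the machine set name with your own.
machine_autoscaler = {
    "apiVersion": "autoscaling.openshift.io/v1beta1",
    "kind": "MachineAutoscaler",
    "metadata": {
        "name": "gpu-machineset-autoscaler",
        "namespace": "openshift-machine-api",
    },
    "spec": {
        "minReplicas": 1,
        "maxReplicas": 4,
        "scaleTargetRef": {
            "apiVersion": "machine.openshift.io/v1beta1",
            "kind": "MachineSet",
            "name": "my-cluster-gpu-machineset",
        },
    },
}

api.create_namespaced_custom_object(
    group="autoscaling.openshift.io",
    version="v1beta1",
    namespace="openshift-machine-api",
    plural="machineautoscalers",
    body=machine_autoscaler,
)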
Manage resource usage by stopping idle notebooks
Reduce resource usage in your OpenShift Data Science deployment by automatically stopping notebook servers that have been idle for a period of time.
Implement model-serving runtimes
OpenShift Data Science provides support for model-serving runtimes. A model-serving runtime provides integration with a specified model server and the model frameworks that it supports. By default, OpenShift Data Science includes the OpenVINO Model Server runtime. However, if this runtime doesn’t meet your needs (for example, if it doesn’t support a particular model framework), you can add your own custom runtimes.
Install in a disconnected environment
OpenShift Data Science self-managed supports installation in a disconnected environment. Disconnected clusters are on a restricted network, typically behind a firewall, and cannot reach the Internet. Because these clusters cannot access the remote registries where Red Hat-provided OperatorHub sources reside, you deploy the OpenShift Data Science Operator to a disconnected environment by using a private registry in which you have mirrored (copied) the relevant images.