Chapter 2. Product features
Red Hat OpenShift AI provides several features for data scientists and IT operations administrators.
2.1. Features for data scientists
- Containers
- While tools such as JupyterLab already offer intuitive ways for data scientists to develop models on their machines, collaborating and sharing work always involve inherent complexities. Moreover, using specialized hardware such as powerful GPUs can be very expensive when you have to buy and maintain it yourself. The Jupyter environment that is included with OpenShift AI lets you take your development environment anywhere you need it to be. Because all of the workloads run as containers, collaboration is as easy as sharing an image with your team members, or even simply adding it to the list of default containers that they can use. As a result, GPUs and large amounts of memory are significantly more accessible, because you are no longer limited by what your laptop can support.
- Integration with third-party machine learning tools
- We have all run into situations where our favorite tools or services do not play well with one another. OpenShift AI is designed with flexibility in mind. You can use a wide range of open source and third-party tools with OpenShift AI. These tools support the complete machine learning lifecycle, from data engineering and feature extraction to model deployment and management.
- Collaboration on notebooks with Git
- Use Jupyter’s Git interface to work collaboratively with others and keep track of the changes to your code.
- Securely built notebook images
- Choose from a default set of notebook images that are pre-configured with the tools and libraries that you need for model development. Software stacks, especially those involved in machine learning, tend to be complex systems. The Python ecosystem offers many modules and libraries, so determining which library versions to use can be very challenging. OpenShift AI includes many packaged notebook images that have been built with insight from data scientists and recommendation engines. You can start new projects quickly and on a solid footing, without worrying about downloading unproven and possibly insecure images from random upstream repositories.
- Custom notebooks
- In addition to notebook images provided and supported by Red Hat and independent software vendors (ISVs), you can configure custom notebook images that cater to your project’s specific requirements.
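For illustration, the following minimal sketch registers a custom image by creating an OpenShift ImageStream with the Python Kubernetes client. The label, namespace, name, and image reference are assumptions to verify against your deployment; you can also import custom images through the OpenShift AI dashboard.

```python
# Illustrative sketch: register a custom notebook image as an OpenShift
# ImageStream so that OpenShift AI can offer it to users. The label and
# namespace below are assumptions; check them against your deployment.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster
api = client.CustomObjectsApi()

image_stream = {
    "apiVersion": "image.openshift.io/v1",
    "kind": "ImageStream",
    "metadata": {
        "name": "my-custom-notebook",  # hypothetical image name
        "labels": {"opendatahub.io/notebook-image": "true"},  # assumed label
    },
    "spec": {
        "tags": [{
            "name": "1.0",
            "from": {
                "kind": "DockerImage",
                "name": "quay.io/example/my-notebook:1.0",  # hypothetical image
            },
        }],
    },
}

api.create_namespaced_custom_object(
    group="image.openshift.io", version="v1",
    namespace="redhat-ods-applications",  # assumed OpenShift AI namespace
    plural="imagestreams", body=image_stream,
)
```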
- Data science pipelines
- OpenShift AI supports data science pipelines 2.0, which provide an efficient way to run your data science workloads. You can standardize and automate machine learning workflows so that you can develop and deploy your data science models consistently.
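Data science pipelines 2.0 is based on Kubeflow Pipelines (KFP) 2.0, so pipelines can be authored with the kfp SDK. The following is a minimal sketch, assuming the kfp package is installed; you compile the pipeline to YAML and import it through the dashboard:

```python
# Minimal sketch of a KFP 2.0 pipeline: one component, compiled to YAML.
from kfp import dsl, compiler

@dsl.component(base_image="python:3.11")
def train(epochs: int) -> float:
    # Placeholder training step that returns a dummy metric.
    return 0.95

@dsl.pipeline(name="example-pipeline")
def example_pipeline(epochs: int = 10):
    train(epochs=epochs)

# Produces example_pipeline.yaml, which you can import as a pipeline
# definition in the OpenShift AI dashboard.
compiler.Compiler().compile(example_pipeline, "example_pipeline.yaml")
```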
- Model serving
- As a data scientist, you can deploy your trained machine-learning models to serve intelligent applications in production. Deploying or serving a model makes the model’s functions available as a service endpoint that you can use for testing or integration into applications. You have fine-grained control over how this serving is performed.
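As an illustrative sketch, a model served over the KServe v2 REST protocol (which the default OpenVINO Model Server runtime supports) can be queried as follows. The endpoint URL, model name, input name, and tensor shape are placeholders for your own deployment:

```python
# Illustrative sketch: send an inference request to a served model over
# the KServe v2 REST protocol. All names and values are placeholders.
import requests

endpoint = "https://my-model-route.example.com/v2/models/my-model/infer"
payload = {
    "inputs": [{
        "name": "input",        # input tensor name defined by your model
        "shape": [1, 4],
        "datatype": "FP32",
        "data": [5.1, 3.5, 1.4, 0.2],
    }],
}

response = requests.post(endpoint, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["outputs"])  # model predictions
```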
- Optimize your data science models with accelerators
- If you work with large data sets, you can optimize the performance of your data science models in OpenShift AI with NVIDIA graphics processing units (GPUs) or Intel Gaudi AI accelerators. Accelerators enable you to scale your work, reduce latency, and increase productivity.
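For example, a quick check from a notebook confirms that your framework can see an accelerator before you schedule heavy work. This sketch uses PyTorch with NVIDIA GPUs; Intel Gaudi devices are accessed through their own framework plugins instead:

```python
# Sketch: verify that a GPU is visible to PyTorch and run work on it.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print(f"Using GPU: {torch.cuda.get_device_name(0)}")
else:
    device = torch.device("cpu")
    print("No GPU detected; falling back to CPU.")

x = torch.randn(1024, 1024, device=device)  # tensor allocated on the device
y = x @ x  # the matrix multiply runs on the selected device
```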
2.2. Features for IT operations administrators
- Manage users with an identity provider
- OpenShift AI supports the same authentication systems as your OpenShift cluster. By default, OpenShift AI is accessible to all users listed in your identity provider, and those users do not need a separate set of credentials to access OpenShift AI. Optionally, you can limit the set of users who have access by creating an OpenShift group that specifies a subset of users. You can also create an OpenShift group that identifies the list of users who have administrator access to OpenShift AI.
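For illustration, the following sketch uses the Python Kubernetes client to create a cluster-scoped OpenShift group. The group and user names are placeholders, and you must still designate the group as the allowed or administrator group in your OpenShift AI settings:

```python
# Sketch: create an OpenShift group whose members you can grant
# OpenShift AI access. Group and user names are placeholders.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

group = {
    "apiVersion": "user.openshift.io/v1",
    "kind": "Group",
    "metadata": {"name": "rhoai-users"},  # hypothetical group name
    "users": ["alice", "bob"],            # user names from your identity provider
}

api.create_cluster_custom_object(
    group="user.openshift.io", version="v1", plural="groups", body=group
)
```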
- Manage resources with OpenShift
- Use your existing OpenShift knowledge to configure and manage resources for your OpenShift AI users.
- Control Red Hat usage data collection
- Choose whether to allow Red Hat to collect data about OpenShift AI usage in your cluster. Usage data collection is enabled by default when you install OpenShift AI on your OpenShift cluster.
- Apply autoscaling to your cluster to reduce usage costs
- Use the cluster autoscaler to adjust the size of your cluster to meet its current needs and optimize costs.
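As a minimal sketch, the cluster autoscaler is configured through a ClusterAutoscaler resource, typically paired with MachineAutoscaler resources for the compute machine sets you want to scale. The limit shown here is illustrative:

```python
# Sketch: define a minimal ClusterAutoscaler resource with the Python
# Kubernetes client. The node limit is illustrative, not a recommendation.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

cluster_autoscaler = {
    "apiVersion": "autoscaling.openshift.io/v1",
    "kind": "ClusterAutoscaler",
    "metadata": {"name": "default"},  # this resource must be named "default"
    "spec": {
        "resourceLimits": {"maxNodesTotal": 10},  # illustrative cap
    },
}

api.create_cluster_custom_object(
    group="autoscaling.openshift.io", version="v1",
    plural="clusterautoscalers", body=cluster_autoscaler,
)
```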
- Manage resource usage by stopping idle notebooks
- Reduce resource usage in your OpenShift AI deployment by automatically stopping notebook servers that have been idle for a period of time.
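The supported way to enable this is through the cluster settings in the OpenShift AI dashboard. Purely for illustration, the sketch below shows the kind of culler ConfigMap that the notebook controller consumes; the ConfigMap name, namespace, and keys are assumptions drawn from the upstream Kubeflow notebook controller and should be verified for your deployment:

```python
# Hypothetical sketch: a culler ConfigMap for the notebook controller.
# Name, namespace, and keys are assumptions; verify before use.
from kubernetes import client, config

config.load_kube_config()
api = client.CoreV1Api()

culler_config = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="notebook-controller-culler-config"),
    data={
        "ENABLE_CULLING": "true",
        "CULL_IDLE_TIME": "240",       # stop notebooks idle for 240 minutes
        "IDLENESS_CHECK_PERIOD": "1",  # check for idleness every minute
    },
)

api.create_namespaced_config_map(
    namespace="redhat-ods-applications",  # assumed OpenShift AI namespace
    body=culler_config,
)
```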
- Implement model-serving runtimes
- OpenShift AI provides support for model-serving runtimes. A model-serving runtime provides integration with a specified model server and the model frameworks that it supports. By default, OpenShift AI includes the OpenVINO Model Server runtime. However, if this runtime does not meet your needs (for example, if it does not support a particular model framework), you can add your own custom runtimes.
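Custom runtimes are typically added through the serving runtimes settings in the dashboard. For illustration, the underlying KServe ServingRuntime resource has roughly the following shape; the runtime image, model format, namespace, and port are placeholders:

```python
# Sketch of a custom ServingRuntime resource (KServe API). All names,
# the image, and the supported model format are placeholders.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

serving_runtime = {
    "apiVersion": "serving.kserve.io/v1alpha1",
    "kind": "ServingRuntime",
    "metadata": {"name": "my-custom-runtime"},  # hypothetical runtime name
    "spec": {
        "supportedModelFormats": [{"name": "onnx", "version": "1"}],
        "containers": [{
            "name": "kserve-container",
            "image": "quay.io/example/my-runtime:latest",  # hypothetical image
            "ports": [{"containerPort": 8080, "protocol": "TCP"}],
        }],
    },
}

api.create_namespaced_custom_object(
    group="serving.kserve.io", version="v1alpha1",
    namespace="my-project",  # your data science project namespace
    plural="servingruntimes", body=serving_runtime,
)
```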
- Install in a disconnected environment
- OpenShift AI Self-Managed supports installation in a disconnected environment. Disconnected clusters are on a restricted network, typically behind a firewall, and cannot reach the Internet. As a result, they cannot access the remote registries where Red Hat provided OperatorHub sources reside. In this case, you deploy the OpenShift AI Operator by using a private registry into which you have mirrored (copied) the relevant images.
- Manage accelerators
- Enable NVIDIA graphics processing units (GPUs) or Intel Gaudi AI accelerators in OpenShift AI so that your data scientists can run compute-heavy workloads.
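For example, after the NVIDIA GPU Operator (or the equivalent Gaudi tooling) is installed, accelerators appear as extended resources in node capacity. This sketch lists what the scheduler can see; the habana.ai/gaudi resource key is an assumption to verify for your environment:

```python
# Sketch: list accelerator capacity that the scheduler sees on each node.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    capacity = node.status.capacity or {}
    gpus = capacity.get("nvidia.com/gpu", "0")    # NVIDIA GPUs
    gaudi = capacity.get("habana.ai/gaudi", "0")  # assumed Gaudi resource key
    print(f"{node.metadata.name}: nvidia.com/gpu={gpus}, habana.ai/gaudi={gaudi}")
```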