Chapter 1. Architecture of OpenShift AI
Red Hat OpenShift AI is a fully Red Hat managed cloud service that is available as an add-on to Red Hat OpenShift Dedicated and to Red Hat OpenShift Service on Amazon Web Services (ROSA classic).
OpenShift AI integrates the following components and services:
At the service layer:
- OpenShift AI dashboard
- A customer-facing dashboard that shows available and installed applications for the OpenShift AI environment as well as learning resources such as tutorials, quick start examples, and documentation. You can also access administrative functionality from the dashboard, such as user management, cluster settings, hardware profiles, and workbench image settings. In addition, data scientists can create their own projects from the dashboard. This enables them to organize their data science work into a single project.
- Model serving
- Data scientists can deploy trained machine-learning models to serve intelligent applications in production. After deployment, applications can send requests to the model by using its deployed API endpoint (see the example request after this list).
- Data science pipelines
- Data scientists can build portable machine learning (ML) workflows with data science pipelines 2.0, using Docker containers. With data science pipelines, data scientists can automate workflows as they develop their data science models (see the pipeline sketch after this list).
- Jupyter (Red Hat managed)
- A Red Hat managed application that allows data scientists to configure a basic standalone workbench and develop machine learning models in JupyterLab.
- Distributed workloads
- Data scientists can use multiple nodes in parallel to train machine-learning models or process data more quickly. This approach significantly reduces task completion time, and enables the use of larger datasets and more complex models (see the distributed sketch after this list).
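As an illustration of how an application calls a deployed model, the following is a minimal sketch of posting an inference request to a model's REST endpoint. The endpoint URL, token, model name, and payload shape are hypothetical; the exact request format depends on the serving runtime that the model is deployed with.

```python
import requests

# Hypothetical inference endpoint exposed by a deployed model in OpenShift AI.
# Replace the URL, model name, and token with the values shown for your
# deployment; the payload shape depends on the serving runtime in use.
ENDPOINT = "https://my-model-my-project.apps.example.com/v2/models/my-model/infer"
TOKEN = "example-token"  # only needed if token authentication is enabled

payload = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [5.1, 3.5, 1.4, 0.2],
        }
    ]
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```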
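To give a sense of what a portable pipeline looks like, the following is a minimal sketch that assumes the Kubeflow Pipelines (kfp) v2 SDK, which data science pipelines 2.0 is based on. The component logic, names, and base image are illustrative; the compiled YAML file can then be imported as a pipeline from the dashboard.

```python
from kfp import dsl, compiler


@dsl.component(base_image="python:3.11")
def train(epochs: int) -> str:
    # Placeholder training step; a real component would pull data,
    # train a model, and push the result to storage.
    return f"trained for {epochs} epochs"


@dsl.component(base_image="python:3.11")
def evaluate(model_info: str) -> None:
    # Placeholder evaluation step that consumes the training output.
    print(f"evaluating: {model_info}")


@dsl.pipeline(name="example-training-pipeline")
def training_pipeline(epochs: int = 10):
    train_task = train(epochs=epochs)
    evaluate(model_info=train_task.output)


if __name__ == "__main__":
    # Compile the pipeline to a YAML package for upload.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```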
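At the framework level, distributing work across nodes typically means fanning tasks out to a cluster of workers. The following is a minimal Ray sketch of that pattern; the cluster address and the per-chunk work are hypothetical, and in practice the Ray cluster would be provisioned through the distributed workloads tooling rather than addressed directly.

```python
import ray

# Connect to an existing Ray cluster; the address is hypothetical and would
# normally come from the cluster provisioned for your data science project.
ray.init(address="ray://example-ray-head:10001")


@ray.remote
def process_chunk(chunk_id: int) -> int:
    # Placeholder for per-chunk work such as feature extraction or a
    # training shard; runs in parallel across the cluster's worker nodes.
    return chunk_id * chunk_id


# Fan out 100 tasks across the cluster and gather the results.
futures = [process_chunk.remote(i) for i in range(100)]
results = ray.get(futures)
print(sum(results))
```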
At the management layer:
- The Red Hat OpenShift AI Operator
- A meta-operator that deploys and maintains all components and sub-operators that are part of OpenShift AI (see the verification sketch after this list).
- Monitoring services
- Alertmanager, OpenShift Telemetry, and Prometheus work together to gather metrics from OpenShift AI and organize and display those metrics in useful ways for monitoring and billing purposes. Alerts from Alertmanager are sent to PagerDuty, which is responsible for notifying Red Hat of any issues with your managed cloud service.
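As a quick way to confirm that the meta-operator is running, the following sketch uses the Kubernetes Python client to list the deployments in the redhat-ods-operator project and report their readiness. It assumes a local kubeconfig with read access to that namespace; the output format is illustrative.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (a user with read access
# to the OpenShift AI namespaces).
config.load_kube_config()

apps = client.AppsV1Api()

# The Red Hat OpenShift AI Operator runs in the redhat-ods-operator project;
# listing its deployments is a quick check that the meta-operator is up.
for deployment in apps.list_namespaced_deployment("redhat-ods-operator").items:
    ready = deployment.status.ready_replicas or 0
    print(f"{deployment.metadata.name}: {ready}/{deployment.spec.replicas} ready")
```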
When you install the Red Hat OpenShift AI Add-on in the Cluster Manager, the following new projects are created:
- The redhat-ods-operator project contains the Red Hat OpenShift AI Operator.
- The redhat-ods-applications project installs the dashboard and other required components of OpenShift AI.
- The redhat-ods-monitoring project contains services for monitoring and billing.
- The rhods-notebooks project is where basic workbenches are deployed by default.
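The following sketch, using the Kubernetes Python client, checks that these four projects exist on the cluster. It assumes a local kubeconfig with permission to list namespaces; the project names are taken from the list above.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig.
config.load_kube_config()

core = client.CoreV1Api()

# Projects created by the Red Hat OpenShift AI Add-on.
expected = {
    "redhat-ods-operator",
    "redhat-ods-applications",
    "redhat-ods-monitoring",
    "rhods-notebooks",
}

# Projects are backed by namespaces, so listing namespaces is sufficient here.
found = {ns.metadata.name for ns in core.list_namespace().items}
for name in sorted(expected):
    status = "present" if name in found else "missing"
    print(f"{name}: {status}")
```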
You or your data scientists must create additional projects for the applications that will use your machine learning models.
Do not install independent software vendor (ISV) applications in namespaces associated with OpenShift AI add-ons unless you are specifically directed to do so on the application tile on the dashboard.