Chapter 1. Understand how AI assets map to the Red Hat Developer Hub Catalog
This section describes Developer Preview features in the OpenShift AI Connector for Red Hat Developer Hub plugin. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to functionality in advance of possible inclusion in a Red Hat product offering. Customers can use these features to test functionality and provide feedback during the development process. Developer Preview features might not have any documentation, are subject to change or removal at any time, and have received limited testing. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope.
The OpenShift AI Connector for Red Hat Developer Hub (OpenShift AI Connector for RHDH) serves as a crucial link, enabling the discovery and accessibility of AI assets managed within the Red Hat OpenShift AI offering directly within your RHDH instance.
For more information on model registry components, see Overview of model registries and model catalog.
1.1. Model-to-Entity mapping
The OpenShift AI Connector for RHDH interfaces with the model registry and with KServe-based model deployments (InferenceServices), automatically converting your AI/ML artifacts into familiar Backstage entities. This simplifies management and gives your developer teams a unified view of the available AI models.
| RHOAI Artifact | RHDH/Backstage Entity Kind | RHDH/Backstage Entity Type | Purpose |
|---|---|---|---|
| Model Server (InferenceService) | Component | model-server | Represents a running, accessible AI model endpoint. See Configuring your model-serving platform. |
| AI Model (Model Registry Version) | Resource | ai-model | Represents the specific AI model artifact. |
| Model Server API Details | API | | Provides the OpenAPI/Swagger specification for the REST endpoint of the model. See Red Hat OpenShift AI: API Tiers. |
| Model Cards | TechDocs | N/A | Model cards from the RHOAI model catalog are associated with the Component and Resource entities. See Registering a model from the model catalog. |
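To make the mapping above concrete, the following is a minimal sketch of how a model server might appear as a Backstage Component entity in the RHDH Catalog. Only the entity kind (Component) and type (model-server) come from the table above; the entity name, owner, and relationships shown here are hypothetical and are not the exact output of the connector.

```yaml
# Illustrative sketch only: names, owners, and relationships are hypothetical.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: example-model-server          # hypothetical entity name
  description: KServe InferenceService exposing an AI model endpoint
spec:
  type: model-server                  # entity type used for InferenceServices
  lifecycle: production               # hypothetical lifecycle value
  owner: user:default/example-owner   # hypothetical owner
  dependsOn:
    - resource:example-ai-model       # hypothetical link to the AI Model Resource entity
  providesApis:
    - example-model-server-api        # hypothetical link to the API entity holding the OpenAPI spec
```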
Once the OpenShift AI Connector for RHDH is installed and connected with RHOAI, information transfer begins automatically.
1.2. Out-of-the-box AI asset details synced from RHOAI
The connector propagates the following key data:
InferenceServices (Component type model-server):
- URL of the OpenShift Route (if exposed).
- URL of the Kubernetes Service.
- Authentication requirement status.
Model registry (Resource type ai-model):
- Model description, artifact URIs, and author/owner information.
Model catalog:
- Links to the Model Card (as RHDH TechDocs).
- Model license URL.
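The synced fields listed above map naturally onto standard Backstage entity fields. The following is a minimal sketch of a Resource entity carrying that data; the entity name, owner, TechDocs reference, and license URL are hypothetical placeholders, not values the connector is guaranteed to emit.

```yaml
# Illustrative sketch only: all values are hypothetical placeholders.
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
  name: example-ai-model              # hypothetical entity name
  description: Model description synced from the RHOAI model registry
  annotations:
    backstage.io/techdocs-ref: dir:.  # hypothetical ref; model card surfaces as TechDocs
  links:
    - url: https://example.com/license  # hypothetical model license URL from the model catalog
      title: Model license
spec:
  type: ai-model                      # entity type used for registered models
  owner: user:default/example-owner   # author/owner information from the model registry
```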