Chapter 3. Using connections
3.1. Adding a connection to your project
You can enhance your project by adding a connection that contains the configuration parameters needed to connect to a data source or sink.
When you want to work with very large data sets, you can store your data in an Open Container Initiative (OCI)-compliant registry, an S3-compatible object storage bucket, or a URI-based repository so that you do not fill up your local storage. You can also associate the connection with an existing workbench that does not already have a connection.
Prerequisites
- You have logged in to Red Hat OpenShift AI.
- You have created a project that you can add a connection to.
- You have access to S3-compatible object storage, a URI-based repository, or an OCI-compliant registry.
- If you intend to add the connection to an existing workbench, you have saved any data in the workbench to avoid losing work.
Procedure
- From the OpenShift AI dashboard, click Projects.
  The Projects page opens.
- Click the name of the project that you want to add a connection to.
  A project details page opens.
- Click the Connections tab.
- Click Add connection.
- In the Add connection modal, select a Connection type. The OCI-compliant registry, S3 compatible object storage, and URI options are pre-installed connection types. Additional options might be available if your OpenShift AI administrator added them.
  The Add connection form opens with fields specific to the connection type that you selected.
- Enter a unique name for the connection.
  A resource name is generated based on the name of the connection. The resource name is the label for the underlying resource in OpenShift.
- Optional: Edit the default resource name. Note that you cannot change the resource name after you create the connection.
- Optional: Provide a description of the connection.
- Complete the form depending on the connection type that you selected. For example, if you selected S3 compatible object storage as the connection type, configure the connection details:
  - In the Access key field, enter the access key ID for the S3-compatible object storage provider.
  - In the Secret key field, enter the secret access key for the S3-compatible object storage account that you specified.
  - In the Endpoint field, enter the endpoint of your S3-compatible object storage bucket.
    Note: Make sure to use the appropriate endpoint format. Improper formatting might cause connection errors or restrict access to storage resources. For more information about how to format object storage endpoints, see Overview of object storage endpoints.
  - In the Region field, enter the default region of your S3-compatible object storage account.
  - In the Bucket field, enter the name of your S3-compatible object storage bucket.
  - Click Create.
- If you selected URI in the preceding step, in the URI field, enter the Uniform Resource Identifier (URI).
- If you selected OCI-compliant registry in the preceding step, in the OCI storage location field, enter the URI.
- Click Add connection.
Verification
- The connection that you added is displayed on the Connections tab for the project.
3.2. Updating a connection
You can edit the configuration of an existing connection as described in this procedure.
Any changes that you make to a connection are not applied to dependent resources (for example, a workbench) until those resources are restarted, redeployed, or otherwise regenerated.
Prerequisites
- You have logged in to Red Hat OpenShift AI.
- You have created a project, created a workbench, and defined a connection.
Procedure
- From the OpenShift AI dashboard, click Projects.
  The Projects page opens.
- Click the name of the project that contains the connection that you want to change.
  A project details page opens.
- Click the Connections tab.
- Click the action menu (⋮) beside the connection that you want to change and then click Edit.
  The Edit connection form opens.
- Make your changes.
- Click Save.
Verification
- The updated connection is displayed on the Connections tab for the project.
3.3. Deleting a connection
You can delete connections that are no longer relevant to your project.
Prerequisites
- You have logged in to Red Hat OpenShift AI.
- You have created a project with a connection.
Procedure
- From the OpenShift AI dashboard, click Projects.
  The Projects page opens.
- Click the name of the project that you want to delete the connection from.
  A project details page opens.
- Click the Connections tab.
- Click the action menu (⋮) beside the connection that you want to delete and then click Delete connection.
  The Delete connection dialog opens.
- Enter the name of the connection in the text field to confirm that you intend to delete it.
- Click Delete connection.
Verification
- The connection that you deleted is no longer displayed on the Connections tab for the project.
3.4. Using the connections API
The connections API enables flexible management of connections to external data sources and services in OpenShift AI. Connections are stored as Kubernetes Secrets with standardized annotations that enable protocol-based validation and routing. Components use connections by referencing them in their resource specifications. The Operator and components use the opendatahub.io/connections annotation to establish the relationship between resources and connection Secrets.
For all new connection Secrets, use the annotation opendatahub.io/connection-type-protocol. The old annotation format, opendatahub.io/connection-type-ref, is deprecated. While both annotation formats are currently supported, opendatahub.io/connection-type-protocol takes precedence.
The connection API supports the following connection types:
- s3: S3-compatible object storage
- uri: Public HTTP/HTTPS URIs
- oci: OCI-compliant container registries
Additionally, the connection API supports the following workloads:
- Notebook (Workbenches)
- InferenceService (Model Serving)
- LLMInferenceService (llm-d Model Serving)
With the connections API, the protocol-based annotation allows components to identify and validate appropriate connections for their use case. Each component implements protocol-specific integration logic according to its needs.
All connections use the following basic structure:
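A minimal sketch of that basic structure follows. This is illustrative, not a definitive template: the name and namespace are placeholders, and the exact stringData fields depend on the connection type, as described in the type-specific sections later in this chapter.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-connection        # placeholder name
  namespace: my-project      # must be the namespace of the workloads that use it
  annotations:
    opendatahub.io/connection-type-protocol: s3   # one of: s3, uri, oci
type: Opaque
stringData: {}               # protocol-specific fields go here
```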
While the Operator performs minimal validation of s3 and uri Secrets, the overall correctness of the connection Secret is your responsibility. OpenShift AI might add more robust validation in future releases.
3.4.1. Namespace isolation in connections API
Connections are Kubernetes Secrets stored within user project namespaces. Cross-namespace access is not supported, which means that connections can only be used by resources within the same namespace where the connection Secret exists.
3.4.2. Role-based access control (RBAC) requirements in connections API
- You must have appropriate RBAC permissions to create, read, update, or delete Secrets in your project namespace.
- Components using connections must have ServiceAccount permissions to read Secrets in the namespace.
- Without access to a connection Secret, the resource that uses it (for example, a workbench or model server) fails to start or function.
For more information about managing RBAC in OpenShift, see Using RBAC to define and apply permissions.
3.4.3. Validation scope
You can create connection Secrets with maximum flexibility, because the webhook validation performed by the Operator is advisory, not restrictive. With this flexibility, you can:
- Include the opendatahub.io/connection-type-protocol annotation to trigger validation of protocol-specific fields, which acts as helpful guidance.
- Create the Secret even if you omit required annotations or include invalid fields; the system does not block Secret creation.
If your connection is invalid, it causes workload failures at runtime. Always validate your connection credentials before deploying workloads.
3.4.4. Using connection annotations based on workload type
When you configure connections, the format for referencing a connection Secret through the opendatahub.io/connections annotation depends on the type of Kubernetes Custom Resource (CR) or workload being used.
- For a Notebook custom resource, multiple connections are supported. Use comma-separated values in the namespace/name format. For example: opendatahub.io/connections: 'my-project/connection1,my-project/connection2'.
- For InferenceService and LLMInferenceService custom resources, the connection name is a simple string, assumed to be in the same namespace as the service. Only a single connection is supported. For example: opendatahub.io/connections: 'my-connection'.
- For InferenceService and LLMInferenceService resources that use S3 connections, the additional annotation opendatahub.io/connection-path specifies the exact location of the model within the bucket. For example:

  ```yaml
  metadata:
    annotations:
      opendatahub.io/connections: "my-s3-connection"
      opendatahub.io/connection-path: "my-bucket-path"  # Specify path within S3 bucket
  ```
3.4.5. Creating an Amazon S3-compatible connection type using the connections API
In OpenShift AI, you can create an Amazon S3-compatible connection type by using the connections API. In the following procedure, you define a Kubernetes Secret resource that holds the necessary credentials and configuration for an S3-compatible connection.
Prerequisites
- You have access to a Kubernetes cluster where you have permissions to create Secrets.
- You have the following details for your S3 storage: the S3 endpoint URL, bucket name, Access Key ID, and Secret Access Key.
Procedure
- Create a YAML file (for example, s3-connection.yaml) that defines a Kubernetes Secret of type Opaque. This Secret contains the S3 connection parameters in the stringData section.
- In the example YAML, replace the placeholder values in the stringData section with your actual S3 connection details:
  1. S3 endpoint URL: The full URL for your S3-compatible endpoint.
  2. Bucket name: The exact name of the S3 bucket that you intend to connect to.
  3. Access key ID: Your S3 account access key ID.
  4. Secret access key: Your S3 account secret access key.
  5. Region (optional): If your S3 provider requires a specific region, or if you are using AWS, include this field.

  Note: The opendatahub.io/connection-type-protocol: s3 annotation is required so that applications recognize this Secret as an S3 connection.
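A sketch of such a Secret follows. The AWS_* key names are an assumption based on common S3 data-connection conventions; verify the exact field names expected by your OpenShift AI version before using this.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-s3-connection
  namespace: my-project
  annotations:
    opendatahub.io/connection-type-protocol: s3   # required for S3 recognition
type: Opaque
stringData:
  AWS_S3_ENDPOINT: https://s3.example.com   # 1: S3 endpoint URL (assumed key name)
  AWS_S3_BUCKET: my-bucket                  # 2: bucket name (assumed key name)
  AWS_ACCESS_KEY_ID: <access-key-id>        # 3: access key ID (assumed key name)
  AWS_SECRET_ACCESS_KEY: <secret-key>       # 4: secret access key (assumed key name)
  AWS_DEFAULT_REGION: us-east-1             # 5: optional region (assumed key name)
```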
- Apply the Secret to the cluster by using the kubectl apply command:

  ```shell
  kubectl apply -f s3-connection.yaml
  ```
3.4.5.1. Using an Amazon S3 connection with InferenceService custom resource
You can use an Amazon S3-compatible connection type with an InferenceService custom resource. In the following procedure, you define the storage location for your model when deploying a KServe InferenceService custom resource.
Prerequisites
- You have created an S3 connection Secret in the project namespace.
- You have deployed a KServe Operator in your cluster.
- Your model files are stored in the designated S3 bucket.
Procedure
- Create a YAML file (for example, inferenceservice.yaml) that defines the KServe InferenceService custom resource. This resource defines how your model is served.
- Specify the connection and path annotations in the metadata.annotations section:
  1. In the opendatahub.io/connections field, reference the name of your S3 connection Secret.
  2. In the opendatahub.io/connection-path field, reference the folder path within the S3 bucket. This optional but highly recommended annotation specifies the path within the S3 bucket where your model files are located.
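The annotations above can be combined into a resource like the following sketch. The resource name, namespace, and model format are placeholders; only the two opendatahub.io annotations are the mechanism this procedure describes.

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-model
  namespace: my-project
  annotations:
    opendatahub.io/connections: my-s3-connection     # 1: S3 connection Secret name
    opendatahub.io/connection-path: my-bucket-path   # 2: model path within the bucket
spec:
  predictor:
    model:
      modelFormat:
        name: pytorch   # placeholder model framework
```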
When used with an InferenceService custom resource, the opendatahub.io/connections annotation usually requires only the Secret name (for example, my-s3-connection) if the Secret is in the same namespace as the InferenceService.
3.4.5.2. Using an Amazon S3 connection with LLMInferenceService custom resource
You can use an Amazon S3-compatible connection type with the LLMInferenceService custom resource. In the following procedure, you define the storage location for your large language model (LLM) when deploying a KServe LLMInferenceService by using an S3 connection.
Prerequisites
- You have created an S3 connection Secret in the project namespace.
- You have deployed a KServe Operator that supports the LLMInferenceService custom resource.
- Your LLM model files are stored in the designated S3 bucket at a specific path.
Procedure
- Create a YAML file (for example, llm-service.yaml) that defines the KServe LLMInferenceService custom resource. This resource is specialized for serving large language models.
- Specify the connection and path annotations in the metadata.annotations section to link the service to your S3 storage.
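A minimal sketch of such a resource follows. The API version shown is an assumption (confirm the version installed in your cluster), the names are placeholders, and the spec body is elided because it depends on your model and serving configuration.

```yaml
apiVersion: serving.kserve.io/v1alpha1   # assumption: confirm the version in your cluster
kind: LLMInferenceService
metadata:
  name: my-llm
  namespace: my-project
  annotations:
    opendatahub.io/connections: my-s3-connection    # S3 connection Secret name
    opendatahub.io/connection-path: models/my-llm   # model path within the bucket
spec: {}   # model and serving configuration depend on your deployment
```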
3.4.6. Creating a URI-compatible connection type using the connections API
In OpenShift AI, you can create a URI-compatible connection type by using the connections API. In the following procedure, you define a Kubernetes Secret resource that holds a simple URI for connecting to an external resource, such as a model file hosted on an HTTP server or Hugging Face.
Prerequisites
- You have access to a Kubernetes cluster where you have permissions to create Secrets.
- You have the full HTTP/HTTPS URL or Hugging Face URI (hf://) for the target resource.
Procedure
- Create a YAML file (for example, uri-connection.yaml) that defines a Kubernetes Secret of type Opaque. This Secret contains the URI in the stringData section.
- In the example YAML, replace the placeholder URI value in the stringData section with the complete URL of the resource. This can be an HTTP/HTTPS link or a Hugging Face URI.

  Note: The opendatahub.io/connection-type-protocol: uri annotation is used by certain operators to identify the purpose of the Kubernetes Secret.
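A sketch of such a Secret follows. The URI key name and the example URL are assumptions for illustration; verify the field name expected by your OpenShift AI version.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-uri-connection
  namespace: my-project
  annotations:
    opendatahub.io/connection-type-protocol: uri
type: Opaque
stringData:
  URI: https://example.com/models/my-model.onnx   # or a Hugging Face URI such as hf://org/model
```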
- Apply the Secret to the cluster by using the kubectl apply command:

  ```shell
  kubectl apply -f uri-connection.yaml
  ```
3.4.6.1. Using a URI connection with InferenceService custom resource
You can use a URI-compatible connection type with an InferenceService custom resource. In the following procedure, you reference a pre-configured URI connection to define the storage location for your model when deploying a KServe InferenceService.
Prerequisites
- You have created a URI connection Secret in the project namespace. For more information, see Creating a URI connection type using the connections API.
- You have deployed a KServe Operator in your cluster.
- Your model file is accessible at the URI specified in the Secret.
Procedure
- Create a YAML file (for example, uri-inferenceservice.yaml) that defines the KServe InferenceService custom resource.
- Specify the URI connection annotation in the metadata.annotations section. Add the opendatahub.io/connections annotation and set its value to reference the URI Secret name, my-uri-connection.
- Apply the InferenceService custom resource by using the kubectl apply command:

  ```shell
  kubectl apply -f uri-inferenceservice.yaml
  ```
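Assembled, the resource from this procedure can be sketched as follows. The resource name, namespace, and model format are placeholders; only the opendatahub.io/connections annotation is the mechanism this procedure describes.

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-model
  namespace: my-project
  annotations:
    opendatahub.io/connections: my-uri-connection   # URI connection Secret name
spec:
  predictor:
    model:
      modelFormat:
        name: onnx   # placeholder model framework
```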
3.4.6.2. Using a URI connection with LLMInferenceService custom resource
You can use a URI-compatible connection type with the LLMInferenceService custom resource. In the following procedure, you reference a pre-configured URI connection to define the storage location for your large language model (LLM) when deploying a KServe LLMInferenceService.
Prerequisites
- You have created a URI connection Secret in the project namespace.
- You have deployed a KServe Operator that supports the LLMInferenceService custom resource.
- Your LLM model files are accessible at the URI specified in the Secret.
Procedure
- Create a YAML file (for example, uri-llm-service.yaml) that defines the KServe LLMInferenceService custom resource.
- Specify the URI connection annotation in the metadata.annotations section. Add the opendatahub.io/connections annotation and set its value to reference the URI Secret name, my-uri-connection.
- Apply the LLMInferenceService custom resource by using the kubectl apply command:

  ```shell
  kubectl apply -f uri-llm-service.yaml
  ```
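A minimal sketch of the resource from this procedure. The API version shown is an assumption (confirm the version installed in your cluster), and the spec body is elided because it depends on your model and serving configuration.

```yaml
apiVersion: serving.kserve.io/v1alpha1   # assumption: confirm the version in your cluster
kind: LLMInferenceService
metadata:
  name: my-llm
  namespace: my-project
  annotations:
    opendatahub.io/connections: my-uri-connection   # URI connection Secret name
spec: {}   # model and serving configuration depend on your deployment
```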
3.4.7. Creating an OCI-compatible connection type using the connections API
In OpenShift AI, you can create an OCI-compatible connection type by using the connections API. In the following procedure, you define a Kubernetes Secret for storing credentials to an OCI-compliant container registry (such as Quay.io or a private registry). This Secret allows applications to authenticate and pull container images.
Prerequisites
- You have access to a Kubernetes cluster with permissions to create Secrets.
- You have the registry URL, including the organization (for example, http://quay.io/my-org).
- You have the username and password or token for the container registry.
- You have installed a tool for Base64 encoding (for example, base64 command-line utility).
Procedure
- Prepare the authentication data by using Base64 to encode the username:password string, the .dockerconfigjson content, and the OCI_HOST URL.
  - Encode the credentials by combining username:password and encoding the result to get the value for the auth field in the JSON structure:

    ```shell
    echo -n 'myusername:mypassword' | base64
    # Result: <base64-encoded-username:password>
    ```

  - Generate and encode .dockerconfigjson by creating the JSON structure and encoding the entire string with Base64.
  - Encode the full registry URL, including the organization:

    ```shell
    echo -n 'http://quay.io/my-org' | base64
    # The encoded result is: <base64-encoded-registry-url>
    ```
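The JSON structure referenced above follows the standard Docker config format, with the base64-encoded credentials in the auth field. A sketch using the example credentials from this procedure (the registry key quay.io/my-org is illustrative):

```shell
# Build the standard .dockerconfigjson structure, embedding the
# base64-encoded username:password as the "auth" value, then encode
# the whole JSON document for use in the Secret's data section.
AUTH=$(echo -n 'myusername:mypassword' | base64)
echo -n "{\"auths\":{\"quay.io/my-org\":{\"auth\":\"${AUTH}\"}}}" | base64
```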
- Create a YAML file (for example, oci-connection.yaml) that defines a Kubernetes Secret of type kubernetes.io/dockerconfigjson. Use the encoded strings from the previous step in the data section.
- Apply the Secret to the cluster by using the kubectl apply command:

  ```shell
  kubectl apply -f oci-connection.yaml
  ```
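The Secret defined in this procedure can be sketched as follows. The OCI_HOST key is an assumption inferred from the encoding step above; confirm the expected key names for your OpenShift AI version.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-oci-connection
  namespace: my-project
  annotations:
    opendatahub.io/connection-type-protocol: oci
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded-docker-config-json>
  OCI_HOST: <base64-encoded-registry-url>   # assumed key name for the registry URL
```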
3.4.7.1. Using an OCI connection with InferenceService custom resource
You can use an OCI-compatible connection type with an InferenceService custom resource. In the following procedure, you define the private image registry location for your model by using an OCI connection when deploying a KServe InferenceService custom resource.
Prerequisites
- You have created an OCI connection Secret in the project namespace.
- You have deployed a KServe Operator in your cluster.
Procedure
- Create a YAML file (for example, oci-inferenceservice.yaml) that defines the KServe InferenceService custom resource.
- Specify the OCI connection annotation in the metadata.annotations section. Add the opendatahub.io/connections annotation and set its value to reference the OCI Secret name, my-oci-connection.
- Define the model format by configuring the .spec.predictor.model.modelFormat field to specify the framework of the model (for example, pytorch).
- Apply the InferenceService custom resource by using the kubectl apply command:

  ```shell
  kubectl apply -f oci-inferenceservice.yaml
  ```
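Combining the steps above, a sketch of the resource (the name and namespace are placeholders):

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-model
  namespace: my-project
  annotations:
    opendatahub.io/connections: my-oci-connection   # OCI connection Secret name
spec:
  predictor:
    model:
      modelFormat:
        name: pytorch   # model framework, as described in the step above
```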
3.4.7.2. Using an OCI connection with LLMInferenceService custom resource
You can use an OCI-compatible connection type with the LLMInferenceService custom resource. In the following procedure, you define the private image registry location for your large language model (LLM) container image by using an OCI connection when deploying a KServe LLMInferenceService.
Prerequisites
- You have created an OCI connection Secret in the project namespace. For more information, see Creating an OCI connection type using the connections API.
- You have deployed a KServe Operator that supports the LLMInferenceService custom resource.
Procedure
- Create a YAML file (for example, oci-llm-service.yaml) that defines the KServe LLMInferenceService custom resource.
- Specify the OCI connection annotation in the metadata.annotations section. Add the opendatahub.io/connections annotation and set its value to reference the OCI Secret name, my-oci-connection.
- Apply the LLMInferenceService custom resource by using the kubectl apply command:

  ```shell
  kubectl apply -f oci-llm-service.yaml
  ```
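A minimal sketch of the resource from this procedure. The API version shown is an assumption (confirm the version installed in your cluster), and the spec body is elided because it depends on your model and serving configuration.

```yaml
apiVersion: serving.kserve.io/v1alpha1   # assumption: confirm the version in your cluster
kind: LLMInferenceService
metadata:
  name: my-llm
  namespace: my-project
  annotations:
    opendatahub.io/connections: my-oci-connection   # OCI connection Secret name
spec: {}   # model and serving configuration depend on your deployment
```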