Chapter 3. Using connections


3.1. Adding a connection to your project

You can enhance your project by adding a connection that contains the configuration parameters needed to connect to a data source or sink.

When you want to work with very large data sets, you can store your data in an Open Container Initiative (OCI)-compliant registry, an S3-compatible object storage bucket, or a URI-based repository, so that you do not fill up your local storage. You also have the option of associating the connection with an existing workbench that does not already have a connection.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • You have created a project that you can add a connection to.
  • You have access to an S3-compatible object storage bucket, a URI-based repository, or an OCI-compliant registry.
  • If you intend to add the connection to an existing workbench, you have saved any data in the workbench to avoid losing work.

Procedure

  1. From the OpenShift AI dashboard, click Projects.

    The Projects page opens.

  2. Click the name of the project that you want to add a connection to.

    A project details page opens.

  3. Click the Connections tab.
  4. Click Add connection.
  5. In the Add connection modal, select a Connection type. The OCI-compliant registry, S3 compatible object storage, and URI options are pre-installed connection types. Additional options might be available if your OpenShift AI administrator added them.

    The Add connection form opens with fields specific to the connection type that you selected.

  6. Enter a unique name for the connection.

    A resource name is generated based on the name of the connection. A resource name is the label for the underlying resource in OpenShift.

  7. Optional: Edit the default resource name. Note that you cannot change the resource name after you create the connection.
  8. Optional: Provide a description of the connection.
  9. Complete the form depending on the connection type that you selected. For example:

    1. If you selected S3 compatible object storage as the connection type, configure the connection details:

      1. In the Access key field, enter the access key ID for the S3-compatible object storage provider.
      2. In the Secret key field, enter the secret access key for the S3-compatible object storage account that you specified.
      3. In the Endpoint field, enter the endpoint of your S3-compatible object storage bucket.

        Note

        Make sure to use the appropriate endpoint format. Improper formatting might cause connection errors or restrict access to storage resources. For more information about how to format object storage endpoints, see Overview of object storage endpoints.

      4. In the Region field, enter the default region of your S3-compatible object storage account.
      5. In the Bucket field, enter the name of your S3-compatible object storage bucket.
      6. Click Create.
    2. If you selected URI in the preceding step, in the URI field, enter the Uniform Resource Identifier (URI).
    3. If you selected OCI-compliant registry in the preceding step, in the OCI storage location field, enter the URI.
  10. Click Add connection.

Verification

  • The connection that you added is displayed on the Connections tab for the project.

3.2. Updating a connection

You can edit the configuration of an existing connection as described in this procedure.

Note

Any changes that you make to a connection are not applied to dependent resources (for example, a workbench) until those resources are restarted, redeployed, or otherwise regenerated.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • You have created a project and a workbench, and you have defined a connection.

Procedure

  1. From the OpenShift AI dashboard, click Projects.

    The Projects page opens.

  2. Click the name of the project that contains the connection that you want to change.

    A project details page opens.

  3. Click the Connections tab.
  4. Click the action menu (⋮) beside the connection that you want to change and then click Edit.

    The Edit connection form opens.

  5. Make your changes.
  6. Click Save.

Verification

  • The updated connection is displayed on the Connections tab for the project.

3.3. Deleting a connection

You can delete connections that are no longer relevant to your project.

Prerequisites

  • You have logged in to Red Hat OpenShift AI.
  • You have created a project with a connection.

Procedure

  1. From the OpenShift AI dashboard, click Projects.

    The Projects page opens.

  2. Click the name of the project that you want to delete the connection from.

    A project details page opens.

  3. Click the Connections tab.
  4. Click the action menu (⋮) beside the connection that you want to delete and then click Delete connection.

    The Delete connection dialog opens.

  5. Enter the name of the connection in the text field to confirm that you intend to delete it.
  6. Click Delete connection.

Verification

  • The connection that you deleted is no longer displayed on the Connections page for the project.

3.4. Using the connections API

You can use the connections API to flexibly manage connections to external data sources and services in OpenShift AI. Connections are stored as Kubernetes Secrets with standardized annotations that enable protocol-based validation and routing. Components use connections by referencing them in their resource specifications. The Operator and components use the opendatahub.io/connections annotation to establish the relationship between resources and connection Secrets.

Important

For all new connection Secrets, use the annotation opendatahub.io/connection-type-protocol. The old annotation format, opendatahub.io/connection-type-ref, is deprecated. While both annotation formats are currently supported, opendatahub.io/connection-type-protocol takes precedence.

The connections API supports the following connection types:

  • s3: S3-compatible object storage
  • uri: Public HTTP/HTTPS URIs
  • oci: OCI-compliant container registries

Additionally, the connections API supports the following workloads:

  • Notebook (Workbenches)
  • InferenceService (Model Serving)
  • LLMInferenceService (llm-d Model Serving)

With the connections API, the protocol-based annotation allows components to identify and validate appropriate connections for their use case. Each component implements protocol-specific integration logic according to its needs.

All connections use the following basic structure:

apiVersion: v1
kind: Secret
metadata:
  name: <connection-name>
  namespace: <your-namespace>
  annotations:
    opendatahub.io/connection-type-protocol: "<protocol>" # Required: s3, uri, or oci
type: Opaque
data:
  # Protocol-specific fields (base64-encoded)
Note

While the Operator performs minimal validation of s3 and uri Secrets, you are responsible for the overall correctness of the connection Secret. OpenShift AI might add more robust validation in future releases.
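As an illustration of this structure, the following Python sketch builds a minimal uri connection Secret as a dictionary and base64-encodes the data field the way Kubernetes stores it. The name, namespace, and URI values are placeholders.

```python
import base64

# Build the basic connection Secret structure as a Python dict, with the
# data field base64-encoded the way Kubernetes stores Secret data.
# All names and the URI below are placeholder values.
uri = "https://example.com/models/my-model.tar.gz"
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {
        "name": "my-connection",
        "namespace": "my-project",
        "annotations": {"opendatahub.io/connection-type-protocol": "uri"},
    },
    "type": "Opaque",
    "data": {
        "URI": base64.b64encode(uri.encode()).decode(),
    },
}

# Round-trip check: decoding the stored value recovers the original URI.
assert base64.b64decode(secret["data"]["URI"]).decode() == uri
```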

3.4.1. Namespace isolation in connections API

Connections are Kubernetes Secrets stored within user project namespaces. Cross-namespace access is not supported, which means that connections can only be used by resources within the same namespace where the connection Secret exists.

  • You must have the appropriate RBAC permissions to create, read, update, or delete Secrets in your project namespace.
  • Components that use connections must have ServiceAccount permissions to read Secrets in the namespace.
  • Without access to a connection Secret, the resource that uses it (for example, a workbench or a model server) fails to start or function.

For more information about managing RBAC in OpenShift, see Using RBAC to define and apply permissions.

3.4.3. Validation scope

You can create connection Secrets with maximum flexibility, as the webhook validation for the Operator is advisory, not restrictive. With this flexibility, you can:

  • Include the opendatahub.io/connection-type-protocol annotation to trigger validation of protocol-specific fields, which acts as helpful guidance.
  • Create the Secret, even if you omit required annotations or include invalid fields; the system will not block secret creation.
Note

If your connection is invalid, it will cause workload failures at runtime. Always validate your connection credentials before deploying workloads.

When configuring the connections API, the format for referencing a connection Secret in the opendatahub.io/connections annotation depends on the type of Kubernetes Custom Resource (CR) or workload being used.

  • For a Notebook custom resource, multiple connections are supported. Use comma-separated values using the namespace/name format. For example: opendatahub.io/connections: 'my-project/connection1,my-project/connection2'.
  • For InferenceService and LLMInferenceService custom resources, the connection name is a simple string, assumed to be in the same namespace as the service. Only a single connection is supported. For example: opendatahub.io/connections: 'my-connection'.
  • For InferenceService and LLMInferenceService using S3 connections, the additional annotation opendatahub.io/connection-path is used to specify the exact location of the model within the bucket. For example:

    metadata:
      annotations:
        opendatahub.io/connections: "my-s3-connection"
        opendatahub.io/connection-path: "my-bucket-path"  # Specify path within S3 bucket
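The two reference formats can be illustrated with a small parser. This is a hypothetical Python helper, not OpenShift AI code, that mirrors the rules above:

```python
# Hypothetical parser for the opendatahub.io/connections annotation.
# Notebook CRs accept multiple comma-separated namespace/name references;
# InferenceService and LLMInferenceService accept a single Secret name
# that is assumed to be in the same namespace as the service.
def parse_connections(value: str, kind: str, namespace: str) -> list[tuple[str, str]]:
    if kind == "Notebook":
        refs = []
        for item in value.split(","):
            ns, name = item.strip().split("/", 1)
            refs.append((ns, name))
        return refs
    # InferenceService / LLMInferenceService: single name, same namespace.
    return [(namespace, value.strip())]

print(parse_connections("my-project/connection1,my-project/connection2",
                        "Notebook", "my-project"))
# -> [('my-project', 'connection1'), ('my-project', 'connection2')]
print(parse_connections("my-connection", "InferenceService", "my-project"))
# -> [('my-project', 'my-connection')]
```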

In OpenShift AI, you can create an Amazon S3-compatible connection type by using the connections API. In the following procedure, you define a Kubernetes Secret resource that holds the necessary credentials and configuration for an S3-compatible connection.

Prerequisites

  • You have access to a Kubernetes cluster where you have permissions to create Secrets.
  • You have the following details for your S3 storage: the S3 endpoint URL, bucket name, Access Key ID, and Secret Access Key.

Procedure

  1. Create a YAML file (for example, s3-connection.yaml) that defines a Kubernetes Secret of type Opaque. This secret will contain the S3 connection parameters in the stringData section.

    apiVersion: v1
    kind: Secret
    metadata:
      name: <connection-name> # Choose a descriptive name for your connection
      namespace: <your-namespace> # Specify the namespace where the connection is needed
      annotations:
        opendatahub.io/connection-type-protocol: "s3"
    type: Opaque
    stringData:
      # --- REQUIRED FIELDS ---
      AWS_S3_ENDPOINT: "<s3-endpoint-url>" # 1
      AWS_S3_BUCKET: "<bucket-name>" # 2
      AWS_ACCESS_KEY_ID: "<access-key-id>" # 3
      AWS_SECRET_ACCESS_KEY: "<secret-access-key>" # 4
      # -----------------------

      # --- OPTIONAL FIELDS (Example) ---
      # AWS_DEFAULT_REGION: "us-east-1" # 5
    1. In the example YAML, replace the required fields by populating the placeholder values in the stringData section with your actual S3 connection details:

      1
      S3 endpoint URL: The full URL for your S3-compatible endpoint.
      2
      Bucket name: The exact name of the S3 bucket that you intend to connect to.
      3
      Access key ID: Your S3 account access key ID.
      4
      Secret access key: Your S3 account secret access key.
      5
      Optional region: If your S3 provider requires a specific region, or if you are using AWS, you can include this optional field.
      Note

      The opendatahub.io/connection-type-protocol: "s3" annotation is required so that applications recognize this Secret as an S3 connection.

  2. Apply the Secret to your Kubernetes cluster by using the kubectl apply command.

    kubectl apply -f s3-connection.yaml
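Because the Operator performs only minimal validation of connection Secrets, a quick local check before running kubectl apply can catch missing required fields. The following Python sketch is illustrative (the helper name is hypothetical); the required field names are the ones shown in the example above:

```python
# Illustrative pre-apply check (the helper is hypothetical, not an
# official validator): confirm that the stringData section of an S3
# connection Secret carries the four required fields.
REQUIRED_S3_FIELDS = {
    "AWS_S3_ENDPOINT",
    "AWS_S3_BUCKET",
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
}

def missing_s3_fields(string_data: dict) -> set:
    return REQUIRED_S3_FIELDS - string_data.keys()

string_data = {                                   # example placeholder values
    "AWS_S3_ENDPOINT": "https://s3.example.com",
    "AWS_S3_BUCKET": "my-bucket",
    "AWS_ACCESS_KEY_ID": "my-key-id",
    "AWS_SECRET_ACCESS_KEY": "my-secret",
}
assert missing_s3_fields(string_data) == set()    # all required fields present
assert missing_s3_fields({"AWS_S3_BUCKET": "b"}) == {
    "AWS_S3_ENDPOINT", "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY",
}
```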

You can use an Amazon S3-compatible connection type with an InferenceService custom resource. In the following procedure, you define the storage location for your model when deploying a KServe InferenceService custom resource.

Prerequisites

  • You have created an S3 connection Secret in the project namespace.
  • You have deployed a KServe Operator in your cluster.
  • Your model files are stored in the designated S3 bucket.

Procedure

  1. Create a YAML file (for example, inferenceservice.yaml) that defines the KServe InferenceService custom resource. This resource defines how your model is served.
  2. Specify the connection and path annotations in the metadata.annotations section.

    apiVersion: serving.kserve.io/v1beta1
    kind: InferenceService
    metadata:
      name: my-model                   # Name of the service
      namespace: my-project
      annotations:
        opendatahub.io/connections: "my-s3-connection"    # 1
        opendatahub.io/connection-path: "my-bucket-path"  # 2
    spec:
      predictor:
        model:
          modelFormat:
            name: pytorch             # Specify the framework format (for example, pytorch, tensorflow)
          # NOTE: The storageUri will be automatically generated and injected here
          # by the operator (for example, storageUri: s3://my-bucket/my-bucket-path)
    1
    In the opendatahub.io/connections field, reference the name of your S3 connection Secret.
    2
    In the opendatahub.io/connection-path field, specify the folder path within the S3 bucket where your model files are located. This annotation is optional but highly recommended.
Note

When used with an InferenceService custom resource, the opendatahub.io/connections annotation takes only the Secret name (for example, my-s3-connection); the Secret must be in the same namespace as the InferenceService.

You can use an Amazon S3-compatible connection type with the LLMInferenceService custom resource. In the following procedure, you define the storage location for your large language model (LLM) when deploying a KServe LLMInferenceService by using an S3 connection.

Prerequisites

  • You have created an S3 connection Secret in the project namespace.
  • You have deployed a KServe Operator that supports the LLMInferenceService custom resource.
  • Your LLM model files are stored in the designated S3 bucket at a specific path.

Procedure

  1. Create a YAML file (for example, llm-service.yaml) that defines the KServe LLMInferenceService custom resource. This resource is specialized for serving large language models.
  2. Specify the connection and path annotations in the metadata.annotations section to link the service to your S3 storage.

    apiVersion: serving.kserve.io/v1alpha1
    kind: LLMInferenceService
    metadata:
      name: my-llm-model                   # Name of the LLM serving instance
      namespace: my-project
      annotations:
        opendatahub.io/connections: "my-s3-connection"      # 1
        opendatahub.io/connection-path: "my-bucket-path"    # 2
    spec:
      model:
        # NOTE: The .spec.model.uri field is automatically injected by the operator
        # based on the connection and path annotations above.
    
        # Example of the injected field: .spec.model.uri: s3://my-bucket/my-bucket-path
    1
    In the opendatahub.io/connections field, reference the name of your S3 connection Secret. For example, my-s3-connection.
    2
    In the opendatahub.io/connection-path field, specify the path within the S3 bucket where your LLM model files are stored. For example, my-bucket-path.

In OpenShift AI, you can create a URI-compatible connection type by using the connections API. In the following procedure, you define a Kubernetes Secret resource that holds a simple URI for connecting to an external resource, such as a model file hosted on an HTTP server or Hugging Face.

Prerequisites

  • You have access to a Kubernetes cluster where you have permissions to create Secrets.
  • You have access to the full HTTP/HTTPS URL or Hugging Face URI (hf://) for the target resource.

Procedure

  1. Create a YAML file (for example, uri-connection.yaml) that defines a Kubernetes Secret of type Opaque. This secret will contain the URI in the stringData section.

    apiVersion: v1
    kind: Secret
    metadata:
      name: <connection-name>
      namespace: <your-namespace>
      annotations:
        opendatahub.io/connection-type-protocol: "uri"
    type: Opaque
    stringData:
      URI: "<uniform-resource-identifier>" # The full URI/URL of the external resource
    1. In the example YAML, replace the required URI field by populating the placeholder value in the stringData section with the complete URL of the resource. This can be an HTTP/HTTPS link or a Hugging Face URI.

      Note

      The opendatahub.io/connection-type-protocol: uri annotation is used by certain operators to identify the purpose of the Kubernetes Secret.

  2. Apply the Secret to your Kubernetes cluster by using the kubectl apply command.

    kubectl apply -f uri-connection.yaml
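Before applying the Secret, you can sanity-check the URI value locally. The following Python sketch is illustrative, not part of OpenShift AI, and assumes that HTTP, HTTPS, and Hugging Face hf:// schemes are the acceptable forms discussed in this section:

```python
from urllib.parse import urlparse

# Illustrative local check: accept only the URI schemes discussed in
# this section: HTTP, HTTPS, and Hugging Face (hf://).
ALLOWED_SCHEMES = {"http", "https", "hf"}

def check_uri(uri: str) -> bool:
    return urlparse(uri).scheme in ALLOWED_SCHEMES

assert check_uri("https://example.com/models/my-model.tar.gz")
assert check_uri("hf://my-org/my-model")           # Hugging Face URI
assert not check_uri("ftp://example.com/model")    # unsupported scheme
```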

You can use a URI-compatible connection type with an InferenceService custom resource. In the following procedure, you reference a pre-configured URI connection to define the storage location for your model when deploying a KServe InferenceService.

Prerequisites

  • You have created a URI Connection Secret in the project namespace. For more information, see Creating a URI connection type using the Connections API.
  • You have deployed a KServe Operator in your cluster.
  • Your model file is accessible at the URI specified in the Secret.

Procedure

  1. Create a YAML file (for example, uri-inferenceservice.yaml) that defines the KServe InferenceService custom resource.
  2. Specify the URI connection annotation in the metadata.annotations section. Add the opendatahub.io/connections annotation and set its value to reference the URI Secret name, my-uri-connection.

    apiVersion: serving.kserve.io/v1beta1
    kind: InferenceService
    metadata:
      name: my-model                   # Name of the service
      namespace: my-project
      annotations:
        opendatahub.io/connections: "my-uri-connection"      # Reference to the URI Connection Secret
    spec:
      predictor:
        model:
          modelFormat:
            name: sklearn             # Specify the framework format (for example, sklearn, tensorflow)
          # NOTE: The storageUri will be automatically generated and injected here
          # by the operator using the URI value from the Secret.
          # Example: .spec.predictor.model.storageUri: https://example.com/models/my-model.tar.gz
  3. Apply the InferenceService custom resource by using the kubectl apply command.

    kubectl apply -f uri-inferenceservice.yaml

You can use a URI-compatible connection type with the LLMInferenceService custom resource. In the following procedure, you reference a pre-configured URI connection to define the storage location for your large language model (LLM) when deploying a KServe LLMInferenceService.

Prerequisites

  • You have created a URI connection Secret in the project namespace.
  • You have deployed a KServe Operator that supports the LLMInferenceService custom resource.
  • Your LLM model files are accessible at the URI specified in the Secret.

Procedure

  1. Create a YAML file (for example, uri-llm-service.yaml) that defines the KServe LLMInferenceService custom resource.
  2. Specify the URI connection annotation in the metadata.annotations section. Add the opendatahub.io/connections annotation and set its value to reference the URI Secret name, my-uri-connection.

    apiVersion: serving.kserve.io/v1alpha1
    kind: LLMInferenceService
    metadata:
      name: my-llm-model                   # Name of the LLM serving instance
      namespace: my-project
      annotations:
        opendatahub.io/connections: "my-uri-connection"      # Reference to the URI Connection Secret
    spec:
      model:
        # NOTE: The .spec.model.uri field is automatically generated and injected here
        # by the operator using the URI value from the Secret.
        # Example: .spec.model.uri: https://example.com/models/llm-model
  3. Apply the LLMInferenceService custom resource by using the kubectl apply command.

    kubectl apply -f uri-llm-service.yaml

In OpenShift AI, you can create an OCI-compatible connection type by using the connections API. In the following procedure, you define a Kubernetes Secret for storing credentials to an OCI-compliant container registry (such as Quay.io or a private registry). This allows applications to authenticate and pull container images.

Prerequisites

  • You have access to a Kubernetes cluster with permissions to create Secrets.
  • You have the registry URL, including the organization (for example, http://quay.io/my-org).
  • You have the username and password or token for the container registry.
  • You have installed a tool for Base64 encoding (for example, base64 command-line utility).

Procedure

  1. Prepare the authentication data by using Base64 to encode the username:password string, the .dockerconfigjson content, and the OCI_HOST URL.

    1. Encode the credentials by combining username:password and encoding the result with Base64 to get the value for the auth field in the JSON structure.

      echo -n 'myusername:mypassword' | base64
      # Result: <base64-encoded-username:password>
    2. Generate and encode .dockerconfigjson by creating the JSON structure and encoding the entire string with Base64.

      {
        "auths": {
          "quay.io": {
            "auth": "<base64-encoded-username:password>"
          }
        }
      }
      # The encoded result is: <base64-encoded-docker-config>
    3. Encode the full registry URL including the organization.

      echo -n 'http://quay.io/my-org' | base64
      # The encoded result is: <base64-encoded-registry-url>
  2. Create a YAML file (for example, oci-connection.yaml) that defines a Kubernetes Secret of type kubernetes.io/dockerconfigjson. Use the encoded strings from the previous step in the data section.

    apiVersion: v1
    kind: Secret
    metadata:
      name: <connection-name>
      namespace: <your-namespace>
      annotations:
        opendatahub.io/connection-type-protocol: "oci" # Protocol identifier
    type: kubernetes.io/dockerconfigjson
    data:
      # Required Field: Base64-encoded Docker config JSON
      .dockerconfigjson: <base64-encoded-docker-config>
    
      # Required Field: Base64-encoded registry host URL with organization
      OCI_HOST: <base64-encoded-registry-url>
  3. Apply the Secret to your Kubernetes cluster by using the kubectl apply command.

    kubectl apply -f oci-connection.yaml
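The manual Base64 steps in this procedure can be reproduced end to end in a short script, which is useful for avoiding copy-and-paste encoding mistakes. This is an illustrative Python sketch with placeholder credentials:

```python
import base64
import json

# Reproduce the manual encoding steps from this procedure in one script.
# The credentials and registry below are placeholder values.
username, password = "myusername", "mypassword"
registry = "quay.io"

# Step 1: base64-encode "username:password" for the auth field.
auth = base64.b64encode(f"{username}:{password}".encode()).decode()

# Step 2: build the Docker config JSON and base64-encode the whole document.
docker_config = {"auths": {registry: {"auth": auth}}}
dockerconfigjson = base64.b64encode(json.dumps(docker_config).encode()).decode()

# Step 3: base64-encode the registry URL, including the organization.
oci_host = base64.b64encode(b"http://quay.io/my-org").decode()

# Sanity check: decoding recovers the original values.
decoded = json.loads(base64.b64decode(dockerconfigjson))
assert decoded["auths"][registry]["auth"] == auth
assert base64.b64decode(auth).decode() == "myusername:mypassword"
```

The resulting dockerconfigjson and oci_host strings are the values to paste into the .dockerconfigjson and OCI_HOST fields of the Secret.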

You can use an OCI-compatible connection type with an InferenceService custom resource. In the following procedure, you define the private image registry location for your model by using an OCI connection when deploying a KServe InferenceService custom resource.

Prerequisites

  • You have created an OCI connection Secret in the project namespace.
  • You have deployed a KServe Operator in your cluster.

Procedure

  1. Create a YAML file (for example, oci-inferenceservice.yaml) that defines the KServe InferenceService custom resource.
  2. Specify the OCI connection annotation in the metadata.annotations section. Add the opendatahub.io/connections annotation and set its value to reference the OCI Secret name, my-oci-connection.
  3. Define the model format by configuring the .spec.predictor.model.modelFormat field to specify the framework of the model (for example, pytorch).

    apiVersion: serving.kserve.io/v1beta1
    kind: InferenceService
    metadata:
      name: my-model                   # Name of the service
      namespace: my-project
      annotations:
        opendatahub.io/connections: "my-oci-connection"      # Reference to the OCI Connection Secret
    spec:
      predictor:
        model:
          modelFormat:
            name: pytorch             # Specify the framework format (for example, pytorch)
          # NOTE: The operator webhook creates and injects .spec.predictor.imagePullSecrets
          # for OCI authentication based on the Secret.
  4. Apply the InferenceService custom resource by using the kubectl apply command.

    kubectl apply -f oci-inferenceservice.yaml

You can use an OCI-compatible connection type with the LLMInferenceService custom resource. In the following procedure, you define the private image registry location for your Large Language Model (LLM) container image by using an OCI connection when deploying a KServe LLMInferenceService.

Prerequisites

  • You have created an OCI connection Secret in the project namespace. For more information, see Creating an OCI connection type using the Connections API.
  • You have a KServe Operator deployed that supports the LLMInferenceService custom resource.

Procedure

  1. Create a YAML file (for example, oci-llm-service.yaml) that defines the KServe LLMInferenceService custom resource.
  2. Specify the OCI connection annotation in the metadata.annotations section. Add the opendatahub.io/connections annotation and set its value to reference the OCI Secret name, my-oci-connection.

    apiVersion: serving.kserve.io/v1alpha1
    kind: LLMInferenceService
    metadata:
      name: my-llm-model                   # Name of the LLM serving instance
      namespace: my-project
      annotations:
        opendatahub.io/connections: "my-oci-connection"      # Reference to the OCI Connection Secret
    spec:
      model:
        # Define the container image path here, if required.
        # NOTE: The operator webhook automatically injects `.spec.template.imagePullSecrets` for OCI authentication based on this connection.
        # The imagePullSecrets field will be set to the connection secret name.
  3. Apply the LLMInferenceService custom resource by using the kubectl apply command.

    kubectl apply -f oci-llm-service.yaml