Chapter 8. Working with certificates


When you install Red Hat OpenShift AI, OpenShift automatically applies a default Certificate Authority (CA) bundle to manage authentication for most OpenShift AI components, such as workbenches and model servers. These certificates are trusted self-signed certificates that help secure communication. However, as a cluster administrator, you might need to configure additional self-signed certificates to use some components, such as the data science pipeline server and object storage solutions. If an OpenShift AI component uses a self-signed certificate that is not part of the existing cluster-wide CA bundle, you have the following options for including the certificate:

  • Add it to the OpenShift cluster-wide CA bundle.
  • Add it to a custom CA bundle, separate from the cluster-wide CA bundle.

As a cluster administrator, you can also change how authentication is managed for OpenShift AI in the following ways:

  • Manually manage certificate changes, instead of relying on the OpenShift AI Operator to handle them automatically.
  • Remove the cluster-wide CA bundle, either from all namespaces or specific ones. If you prefer to implement a different authentication approach, you can override the default OpenShift AI behavior, as described in Removing the CA bundle.

8.1. Understanding how OpenShift AI handles certificates

After you install OpenShift AI, the Red Hat OpenShift AI Operator automatically creates an empty odh-trusted-ca-bundle configuration file (ConfigMap) with the config.openshift.io/inject-trusted-cabundle: 'true' label, as shown in the following example. The Cluster Network Operator (CNO) injects the cluster-wide CA bundle into any ConfigMap that carries this label.

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/part-of: opendatahub-operator
    config.openshift.io/inject-trusted-cabundle: 'true'
  name: odh-trusted-ca-bundle

After the CNO injects the bundle, it updates the ConfigMap with the contents of the ca-bundle.crt file:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/part-of: opendatahub-operator
    config.openshift.io/inject-trusted-cabundle: 'true'
  name: odh-trusted-ca-bundle
data:
  ca-bundle.crt: |
    <BUNDLE OF CLUSTER-WIDE CERTIFICATES>
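
A quick way to confirm that the injection happened in a particular project is to read the ca-bundle.crt key directly. This is a minimal check, assuming a non-reserved project named my-project; substitute your own project name:

oc get configmap odh-trusted-ca-bundle -n my-project \
    -o jsonpath='{.data.ca-bundle\.crt}' | head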

The management of CA bundles is configured through the DSCInitialization (DSCI) object. Within this object, you can set the spec.trustedCABundle.managementState field to one of the following values:

  • Managed: (Default) The Red Hat OpenShift AI Operator manages the odh-trusted-ca-bundle ConfigMap and adds it to all non-reserved existing and new namespaces. It does not add the ConfigMap to any reserved or system namespaces, such as default, openshift-*, or kube-*. The Red Hat OpenShift AI Operator automatically updates the ConfigMap to reflect any changes made to the customCABundle field.
  • Unmanaged: The Red Hat OpenShift AI administrator manually manages the odh-trusted-ca-bundle ConfigMap, instead of allowing the Operator to manage it. Changing the managementState from Managed to Unmanaged does not remove the odh-trusted-ca-bundle ConfigMap. However, the ConfigMap is no longer automatically updated if changes are made to the customCABundle field.

    The Unmanaged setting is useful if your organization implements a different method for managing trusted CA bundles, such as Ansible automation, and does not want the Red Hat OpenShift AI Operator to handle certificates automatically. This setting provides greater control, preventing the Operator from overwriting custom configurations.
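
If you are not sure which mode your cluster is currently using, you can read the value back from the DSCI object. This is a minimal check that uses the default-dsci object name referenced later in this chapter:

oc get dscinitialization default-dsci \
    -o jsonpath='{.spec.trustedCABundle.managementState}'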

8.2. Adding certificates

If you must use a self-signed certificate that is not part of the existing cluster-wide CA bundle, you have two options for configuring the certificate:

  • Add it to the cluster-wide CA bundle.

    This option is useful when the certificate is needed for secure communication across multiple services or when security policies require it to be trusted cluster-wide. It ensures that all services and components in the cluster trust the certificate automatically, and it simplifies management because you do not need to configure the certificate separately for each service.

  • Add it to a custom CA bundle that is separate from the OpenShift cluster-wide bundle.

    Consider this option for the following scenarios:

    • Limit scope: Only specific services need the certificate, not the whole cluster.
    • Isolation: Keeps custom certificates separate, preventing changes to the global configuration.
    • Avoid global impact: Does not affect services that do not need the certificate.
    • Easier management: Makes it simpler to manage certificates for specific services.

8.3. Adding certificates to a cluster-wide CA bundle

You can add a self-signed certificate to a cluster-wide Certificate Authority (CA) bundle (ca-bundle.crt).

When the cluster-wide CA bundle is updated, the Cluster Network Operator (CNO) automatically detects the change and injects the updated bundle into the odh-trusted-ca-bundle ConfigMap, making the certificate available to OpenShift AI components.

Note: By default, the management state for the Trusted CA bundle is Managed (that is, the spec.trustedCABundle.managementState field in the Red Hat OpenShift AI Operator’s DSCI object is set to Managed). If you change this setting to Unmanaged, you must manually update the odh-trusted-ca-bundle ConfigMap to include the updated cluster-wide CA bundle.

Alternatively, you can add certificates to a custom CA bundle, as described in Adding certificates to a custom CA bundle.

Prerequisites

  • You have created a self-signed certificate and saved the certificate to a file. For example, you have created a certificate using OpenSSL and saved it to a file named example-ca.crt, as shown in the example after these prerequisites.
  • You have cluster administrator access for the OpenShift cluster where Red Hat OpenShift AI is installed.
  • You have installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI.
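
If you do not yet have a certificate to work with, you can generate one with OpenSSL. The following command is only an illustrative sketch; the subject is a placeholder, and the output file name matches the example-ca.crt file used in this procedure:

openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
    -keyout example-ca.key -out example-ca.crt \
    -subj "/CN=example-ca"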

Procedure

  1. Create a ConfigMap that includes the root CA certificate used to sign the certificate, where </path/to/example-ca.crt> is the path to the CA certificate bundle on your local file system:

    oc create configmap custom-ca \
        --from-file=ca-bundle.crt=</path/to/example-ca.crt> \
        -n openshift-config
  2. Update the cluster-wide proxy configuration with the newly created ConfigMap:

    oc patch proxy/cluster \
        --type=merge \
        --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}'

Verification

Run the following command to verify that all non-reserved namespaces contain the odh-trusted-ca-bundle ConfigMap:

oc get configmaps --all-namespaces -l app.kubernetes.io/part-of=opendatahub-operator | grep odh-trusted-ca-bundle
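
You can also confirm that the cluster-wide proxy now references your ConfigMap. The following check should return custom-ca:

oc get proxy/cluster -o jsonpath='{.spec.trustedCA.name}'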

8.4. Adding certificates to a custom CA bundle

You can add self-signed certificates to a custom CA bundle that is separate from the OpenShift cluster-wide bundle.

This method is ideal for scenarios where components need access to external resources that require a self-signed certificate. For example, you might need to add self-signed certificates to grant data science pipelines access to S3-compatible object storage.

Prerequisites

  • You have created a self-signed certificate and saved the certificate to a file. For example, you have created a certificate using OpenSSL and saved it to a file named example-ca.crt.
  • You have cluster administrator access for the OpenShift cluster where Red Hat OpenShift AI is installed.
  • You have installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI.

Procedure

  1. Log in to the OpenShift web console.
  2. Click Operators → Installed Operators, and then click the Red Hat OpenShift AI Operator.
  3. Click the DSC Initialization tab.
  4. Click the default-dsci object.
  5. Click the YAML tab.
  6. In the spec.trustedCABundle section, add the custom certificate to the customCABundle field, as shown in the following example:

    spec:
      trustedCABundle:
        managementState: Managed
        customCABundle: |
          -----BEGIN CERTIFICATE-----
          examplebundle123
          -----END CERTIFICATE-----
  7. Click Save.

The Red Hat OpenShift AI Operator automatically updates the ConfigMap to reflect any changes made to the customCABundle field. It adds the odh-ca-bundle.crt file containing the certificates to the odh-trusted-ca-bundle ConfigMap, as shown in the following example:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/part-of: opendatahub-operator
    config.openshift.io/inject-trusted-cabundle: 'true'
  name: odh-trusted-ca-bundle
data:
  ca-bundle.crt: |
    <BUNDLE OF CLUSTER-WIDE CERTIFICATES>
  odh-ca-bundle.crt: |
    <BUNDLE OF CUSTOM CERTIFICATES>
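
If you prefer to script this change rather than edit the DSCI object in the web console, a JSON patch along the following lines sets the customCABundle field from a local certificate file. This is a sketch only; it assumes the certificate is saved as example-ca.crt and reuses the awk quoting approach shown later in this chapter:

oc patch dscinitialization default-dsci --type='json' \
    -p='[{"op":"replace","path":"/spec/trustedCABundle/customCABundle","value":"'"$(awk '{printf "%s\\n", $0}' example-ca.crt)"'"}]'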

Verification

Run the following command to verify that a non-reserved namespace contains the odh-trusted-ca-bundle ConfigMap and that the ConfigMap contains your customCABundle value. In the following command, example-namespace is the non-reserved namespace and examplebundle123 is the customCABundle value.

oc get configmap odh-trusted-ca-bundle -n example-namespace -o yaml | grep examplebundle123

8.5. Using self-signed certificates with OpenShift AI components

Some OpenShift AI components have additional options or required configuration for self-signed certificates.

8.5.1. Accessing S3-compatible object storage with self-signed certificates

To securely connect OpenShift AI components to object storage solutions or databases that are deployed within an OpenShift cluster that uses self-signed certificates, you must provide a certificate authority (CA) certificate. Each namespace includes a ConfigMap named kube-root-ca.crt, which contains the CA certificate of the internal API Server.

Prerequisites

  • You have cluster administrator privileges for your OpenShift cluster.
  • You have installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI.
  • You have deployed an object storage solution or database in your OpenShift cluster.

Procedure

  1. In a terminal window, log in to the OpenShift CLI as shown in the following example:

    oc login api.<cluster_name>.<cluster_domain>:6443 --web
  2. Retrieve the current OpenShift AI trusted CA configuration and store it in a new file:

    oc get dscinitializations.dscinitialization.opendatahub.io default-dsci -o json | jq -r '.spec.trustedCABundle.customCABundle' > /tmp/my-custom-ca-bundles.crt
  3. Append the CA certificate from the cluster’s kube-root-ca.crt ConfigMap to the file:

    oc get configmap kube-root-ca.crt -o jsonpath="{['data']['ca\.crt']}" >> /tmp/my-custom-ca-bundles.crt
  4. Update the OpenShift AI trusted CA configuration to trust certificates issued by the certificate authorities in kube-root-ca.crt:

    oc patch dscinitialization default-dsci --type='json' -p='[{"op":"replace","path":"/spec/trustedCABundle/customCABundle","value":"'"$(awk '{printf "%s\\n", $0}' /tmp/my-custom-ca-bundles.crt)"'"}]'
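
To confirm that the patch was applied, you can read the field back; the output should end with the certificate that you appended from kube-root-ca.crt:

oc get dscinitialization default-dsci \
    -o jsonpath='{.spec.trustedCABundle.customCABundle}' | tail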

Verification

  • You can successfully deploy components that are configured to use object storage solutions or databases that are deployed in the OpenShift cluster. For example, a pipeline server that is configured to use a database deployed in the cluster starts successfully.

Note

You can verify your new certificate configuration by following the steps in the OpenShift AI tutorial - Fraud Detection example. Run the script to install local object storage buckets and create connections, and then enable data science pipelines.

For more information about running the script to install local object storage buckets, see Running a script to install local object storage buckets and create connections.

For more information about enabling data science pipelines, see Enabling data science pipelines.

8.5.2. Configuring a certificate for data science pipelines

By default, OpenShift AI includes OpenShift cluster-wide certificates in the odh-trusted-ca-bundle ConfigMap. These cluster-wide certificates cover most components, such as workbenches and model servers. However, the pipeline server might require additional Certificate Authority (CA) configuration, especially when interacting with external systems that use self-signed or custom certificates.

To add a certificate for data science pipelines, you create a ConfigMap that contains the certificate and reference it from the DataSciencePipelinesApplication (DSPA) object, as described in the following procedure.

Prerequisites

  • You have cluster administrator access for the OpenShift cluster where Red Hat OpenShift AI is installed.
  • You have created a self-signed certificate and saved the certificate to a file. For example, you have created a certificate using OpenSSL and saved it to a file named example-ca.crt.
  • You have configured a data science pipeline server.

Procedure

  1. Log in to the OpenShift console.
  2. From Workloads → ConfigMaps, create a ConfigMap with the required bundle in the same data science project as the target data science pipeline (a CLI alternative is shown after this procedure):

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: custom-ca-bundle
    data:
      ca-bundle.crt: |
        # contents of ca-bundle.crt
  3. Add the following snippet to the .spec.apiServer.cABundle field of the underlying Data Science Pipelines Application (DSPA):

    apiVersion: datasciencepipelinesapplications.opendatahub.io/v1
    kind: DataSciencePipelinesApplication
    metadata:
      name: data-science-dspa
    spec:
      ...
      apiServer:
        ...
        cABundle:
          configMapName: custom-ca-bundle
          configMapKey: ca-bundle.crt
  4. Save your changes. The pipeline server pod automatically redeploys with the updated bundle.
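
If you prefer the CLI to the web console for step 2, you can create the same ConfigMap from your certificate file. This is a sketch only, assuming the certificate is saved as example-ca.crt and the data science project is named my-project:

oc create configmap custom-ca-bundle \
    --from-file=ca-bundle.crt=example-ca.crt \
    -n my-project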

Verification

Confirm that your CA bundle was successfully mounted:

  1. Log in to the OpenShift console.
  2. Go to the data science project that has the target data science pipeline.
  3. Click the Pods tab.
  4. Click the pipeline server pod with the ds-pipeline-dspa-<hash> prefix.
  5. Click Terminal.
  6. Enter cat /dsp-custom-certs/dsp-ca.crt.
  7. Verify that your CA bundle is present within this file.

8.5.3. Configuring a certificate for workbenches

Important

By default, self-signed certificates apply to workbenches that you create after configuring cluster-wide certificates. To apply cluster-wide certificates to an existing workbench, stop and then restart the workbench.

Self-signed certificates are stored in /etc/pki/tls/custom-certs/ca-bundle.crt. Workbenches are preset with an environment variable that points to this path, and many popular HTTP client packages use that variable automatically. For packages that do not pick it up by default, you can provide this certificate path explicitly. For example, to connect the kfp package to the data science pipeline server:

from kfp.client import Client

# Read the service account token mounted in the workbench pod
sa_token_file_path = '/var/run/secrets/kubernetes.io/serviceaccount/token'
with open(sa_token_file_path, 'r') as token_file:
    bearer_token = token_file.read()

client = Client(
    host='https://<GO_TO_ROUTER_OF_DS_PROJECT>/',
    existing_token=bearer_token,
    ssl_ca_cert='/etc/pki/tls/custom-certs/ca-bundle.crt'
)
print(client.list_experiments())
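
If a client in a workbench still reports TLS errors, a quick first check from the workbench terminal is to confirm that the bundle file exists and to look for the preset certificate-related environment variables:

ls -l /etc/pki/tls/custom-certs/ca-bundle.crt
env | grep -i -e cert -e ca_bundle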

8.5.4. Using the cluster-wide CA bundle for the single-model serving platform

By default, the single-model serving platform in OpenShift AI uses a self-signed certificate generated at installation for the endpoints that are created when deploying a server.

If you have configured cluster-wide certificates on your OpenShift cluster, they are used by default for other types of endpoints, such as endpoints for routes.

The following procedure explains how to use the same certificate that you already have for your OpenShift cluster.

Prerequisites

  • You have cluster administrator access for the OpenShift cluster where Red Hat OpenShift AI is installed.
  • You have configured cluster-wide certificates in OpenShift.
  • You have configured the single-model serving platform, as described in Installing the single-model serving platform.

Procedure

  1. Log in to the OpenShift console.
  2. From the list of projects, open the openshift-ingress project.
  3. Click YAML.
  4. Search for "cert" to find a secret with a name that includes "cert". For example, rhods-internal-primary-cert-bundle-secret. The secret contains two items that are used for all OpenShift routes: tls.crt (the certificate) and tls.key (the key).
  5. Copy the reference to the secret.
  6. From the list of projects, open the istio-system project.
  7. Create a YAML file and paste the reference to the secret that you copied from the openshift-ingress YAML file.
  8. Edit the YAML code to keep only the relevant content, as shown in the following example. Replace rhods-internal-primary-cert-bundle-secret with the name of your secret:

    kind: Secret
    apiVersion: v1
    metadata:
      name: rhods-internal-primary-cert-bundle-secret
    data:
      tls.crt: >-
        LS0tLS1CRUd...
      tls.key: >-
        LS0tLS1CRUd...
    type: kubernetes.io/tls
  9. Save the YAML file in the istio-system project.
  10. Navigate to Operators → Installed Operators → Red Hat OpenShift AI.
  11. Click the Data Science Cluster tab, click the default-dsc object, and then click the YAML tab.
  12. Edit the kserve configuration section to refer to your secret, as shown in the following example. Replace rhods-internal-primary-cert-bundle-secret with the name of the secret that you created in the istio-system project:

    kserve:
      devFlags: {}
      managementState: Managed
      serving:
        ingressGateway:
          certificate:
            secretName: rhods-internal-primary-cert-bundle-secret
            type: Provided
        managementState: Managed
        name: knative-serving
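
If you prefer not to hand-edit the secret YAML in steps 4 through 9, you can copy the secret into the istio-system project from the command line. This is a sketch only; it assumes the jq tool used elsewhere in this chapter, and you should replace the example secret name with the name found in your openshift-ingress project:

SECRET_NAME=rhods-internal-primary-cert-bundle-secret
oc get secret "$SECRET_NAME" -n openshift-ingress -o json \
    | jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.ownerReferences, .metadata.managedFields)' \
    | oc apply -n istio-system -f -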

8.6. Managing certificates without the Red Hat OpenShift AI Operator

By default, the Red Hat OpenShift AI Operator manages the odh-trusted-ca-bundle ConfigMap, which contains the trusted CA bundle and is applied to all non-reserved namespaces in the cluster. The Operator automatically updates this ConfigMap whenever changes are made to the CA bundle.

If your organization prefers to manage trusted CA bundles independently, for example, by using Ansible automation, you can disable this default behavior to prevent automatic updates by the Red Hat OpenShift AI Operator.

Prerequisites

  • You have cluster administrator privileges for your OpenShift cluster.

Procedure

  1. In the OpenShift web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.
  2. Click the DSC Initialization tab.
  3. Click the default-dsci object.
  4. Click the YAML tab.
  5. In the spec section, change the value of the managementState field for trustedCABundle to Unmanaged, as shown:

    spec:
      trustedCABundle:
        managementState: Unmanaged
  6. Click Save.

    Changing the managementState from Managed to Unmanaged prevents automatic updates when the customCABundle field is modified, but does not remove the odh-trusted-ca-bundle ConfigMap.
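
If you manage cluster configuration from the command line, a merge patch is equivalent to the console edit above:

oc patch dscinitialization default-dsci --type=merge \
    -p '{"spec":{"trustedCABundle":{"managementState":"Unmanaged"}}}'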

Verification

  1. In the spec section, set the value of the customCABundle field for trustedCABundle, for example:

    spec:
      trustedCABundle:
        managementState: Unmanaged
        customCABundle: example123
  2. Click Save.
  3. Click Workloads → ConfigMaps.
  4. Select a project from the project list.
  5. Click the odh-trusted-ca-bundle ConfigMap.
  6. Click the YAML tab and verify that the ConfigMap was not updated with your customCABundle value.

8.7. Removing the CA bundle

If you prefer to implement a different authentication approach for your OpenShift AI installation, you can override the default behavior by removing the CA bundle.

You have two options for removing the CA bundle:

  • Remove the CA bundle from all non-reserved projects in OpenShift AI.
  • Remove the CA bundle from a specific project.

8.7.1. Removing the CA bundle from all namespaces

You can remove a Certificate Authority (CA) bundle from all non-reserved namespaces in OpenShift AI. This process changes the default configuration and disables the creation of the odh-trusted-ca-bundle configuration file (ConfigMap), as described in Working with certificates.

Note

The odh-trusted-ca-bundle ConfigMaps are only deleted from namespaces when you set the managementState of trustedCABundle to Removed; deleting the DSC Initialization does not delete the ConfigMaps.

To remove a CA bundle from a single namespace only, see Removing the CA bundle from a single namespace.

Prerequisites

  • You have cluster administrator privileges for your OpenShift cluster.
  • You installed the OpenShift command line interface (oc) as described in Installing the OpenShift CLI.

Procedure

  1. In the OpenShift web console, click Operators → Installed Operators and then click the Red Hat OpenShift AI Operator.
  2. Click the DSC Initialization tab.
  3. Click the default-dsci object.
  4. Click the YAML tab.
  5. In the spec section, change the value of the managementState field for trustedCABundle to Removed:

    spec:
      trustedCABundle:
        managementState: Removed
  6. Click Save.

Verification

  • Run the following command to verify that the odh-trusted-ca-bundle ConfigMap has been removed from all namespaces:

    oc get configmaps --all-namespaces | grep odh-trusted-ca-bundle

    The command should not return any ConfigMaps.

8.7.2. Removing the CA bundle from a single namespace

You can remove a custom Certificate Authority (CA) bundle from individual namespaces in OpenShift AI. This process disables the creation of the odh-trusted-ca-bundle configuration file (ConfigMap) for the specified namespace only.

To remove a CA bundle from all namespaces, see Removing the CA bundle from all namespaces.

Prerequisites

  • You have cluster administrator privileges for your OpenShift cluster.
  • You installed the OpenShift command line interface (oc) as described in Installing the OpenShift CLI.

Procedure

  • Run the following command to remove a CA bundle from a namespace. In the following command, example-namespace is the non-reserved namespace.

    oc annotate ns example-namespace security.opendatahub.io/inject-trusted-ca-bundle=false

Verification

  • Run the following command to verify that the CA bundle has been removed from the namespace. In the following command, example-namespace is the non-reserved namespace.

    oc get configmap odh-trusted-ca-bundle -n example-namespace

    The command should return configmaps "odh-trusted-ca-bundle" not found.
