Chapter 10. Configuring a central authentication service for an external OIDC identity provider


The built-in OpenShift OAuth server supports integration with various identity providers. However, it has limitations with direct OpenID Connect (OIDC) configurations on Red Hat OpenShift Service on AWS (ROSA) and on-premises OpenShift (OCP) 4.20 and later clusters. On these clusters, the internal OAuth service can be disabled or removed, which breaks components that depend on it, such as oauth-proxy sidecar containers.

You can configure an external OIDC identity provider directly with Red Hat OpenShift AI through a centralized Gateway API. This configuration provides a secure, scalable, and manageable authentication solution because it centralizes the authentication logic and decouples it from individual backend services.

Important

OpenID Connect (OIDC) configuration is currently available in Red Hat OpenShift AI 3.0 as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

10.1. About centralized authentication Gateway API

A Gateway API with centralized authentication centralizes ingress traffic for all services behind a single domain, providing the following advanced capabilities:

  • Centralized authentication: A single authentication service requiring only one client ID and secret from the external OIDC Identity Provider (IdP).
  • Simplified backend services: Backend services assume all incoming traffic is authenticated and contains necessary user headers.
  • Authorization handling: Services still handle authorization at the service or pod level using sidecars like kube-rbac-proxy.
  • Encrypted communication: Traffic from the gateway to the backend services is fully encrypted with Transport Layer Security (TLS).

The Gateway API is implemented by an Istio Gateway on OpenShift (OCP) 4.19 and later. Because the Istio Gateway is built on the Envoy proxy, it provides access to powerful Envoy-specific Custom Resource Definitions (CRDs), such as EnvoyFilter. The opendatahub-operator manages the deployment of kube-auth-proxy and configures the Istio Gateway to use this service through an EnvoyFilter Custom Resource (CR).
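You do not create this EnvoyFilter yourself; the opendatahub-operator generates and reconciles it. For orientation only, an external-authorization EnvoyFilter of this kind has roughly the following shape. The resource name authn-filter, the kube-auth-proxy Service, and port 8443 are taken from the troubleshooting examples later in this chapter; the workload selector and the remaining details are assumptions and will differ in a real cluster.

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: authn-filter
      namespace: openshift-ingress
    spec:
      workloadSelector:
        labels:
          # Assumption: selects the Istio gateway Pods created for data-science-gateway
          gateway.networking.k8s.io/gateway-name: data-science-gateway
      configPatches:
      - applyTo: HTTP_FILTER
        match:
          context: GATEWAY
          listener:
            filterChain:
              filter:
                name: envoy.filters.network.http_connection_manager
                subFilter:
                  name: envoy.filters.http.router
        patch:
          operation: INSERT_BEFORE
          value:
            name: envoy.filters.http.ext_authz
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
              http_service:
                server_uri:
                  # Assumption: the kube-auth-proxy Service listens on port 8443, as shown
                  # in the troubleshooting section of this chapter
                  uri: https://kube-auth-proxy.openshift-ingress.svc.cluster.local:8443
                  cluster: outbound|8443||kube-auth-proxy.openshift-ingress.svc.cluster.local
                  timeout: 10s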

For more information on supported OpenID Connect (OIDC) identity providers, see the OpenShift documentation on Direct authentication identity providers.

As an OpenShift AI administrator, you can configure OpenID Connect (OIDC) authentication for the Gateway API by using parameters from your external OIDC identity provider.

Important

You must configure OpenShift for direct authentication with an external OIDC identity provider before you configure the OpenShift AI Gateway; otherwise, the Gateway does not function properly.

Prerequisites

Note

You must configure OpenShift for direct authentication using the same OIDC provider that the Gateway uses.

  • You have successfully installed and deployed OpenShift AI:

    • You have deployed the DataScienceCluster (DSC) and DSCInitialization. For more information, see Installing and managing OpenShift AI components.
    • You have deployed the OpenShift AI Operator in the rhods-operator namespace.
  • You have enabled Gateway API support on OCP 4.19 or later with Istio Gateway.
  • You have the following external authentication provider details:

    • Issuer URL
    • Client ID
    • Client Secret
    • Realm name (for Keycloak)
  • You have cluster administrator access, which is required to create secrets and configure the GatewayConfig resource.

For detailed step-by-step instructions, troubleshooting, and field definitions, refer to the OpenShift documentation on Configuring an external OIDC identity provider.
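The authoritative field reference for this cluster-level configuration is the linked OpenShift documentation. As a point of reference only, a minimal authentication.config/cluster specification for a Keycloak realm has roughly the following shape; all values are placeholders, and optional fields such as certificate authorities and claim validation rules are omitted:

    apiVersion: config.openshift.io/v1
    kind: Authentication
    metadata:
      name: cluster
    spec:
      type: OIDC
      oidcProviders:
      - name: keycloak
        issuer:
          issuerURL: https://keycloak.example.com/realms/your-realm
          audiences:
          - your-client-id
        claimMappings:
          username:
            claim: preferred_username
          groups:
            claim: groups
        oidcClients:
        - componentName: console
          componentNamespace: openshift-console
          clientID: your-client-id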

Procedure

  1. In the OpenShift CLI (oc), verify the OpenShift authentication type by running the following command:

    $ oc get authentication.config/cluster -o jsonpath='{.spec.type}'

    If OIDC authentication is configured, the output is OIDC.

  2. Verify that your OIDC provider is configured as expected by running the following command:

    $ oc get authentication.config/cluster -o jsonpath='{.spec.oidcProviders[*].name}'

    If the OIDC configuration is successful, the output shows your provider name (for example, keycloak).

  3. Verify that the kube-apiserver has rolled out the configuration changes by running the following command:

    $ oc get co kube-apiserver

    If the rollout succeeded, the output looks like the following example:

    NAME              VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    kube-apiserver    4.14.9    True        False         False      1d
    Note

    The rollout can take 20 minutes or more. Wait until all nodes have the new revision before proceeding (a per-node revision check is sketched after this procedure). You can proceed to the Gateway configuration steps when oc get authentication.config/cluster shows type: OIDC, oc get co kube-apiserver shows that the rollout is complete, and you can successfully authenticate to OpenShift using OIDC credentials.

  4. Define the following environment variables. Replace the placeholder values with the actual details from your OIDC identity provider (IdP):

    # Replace with your actual values
    KEYCLOAK_DOMAIN="<keycloak.example.com>"
    KEYCLOAK_REALM="<your-realm>"
    KEYCLOAK_CLIENT_ID="<your-client-id>"
    KEYCLOAK_CLIENT_SECRET="<your-client-secret>"
  5. Create the client secret in the openshift-ingress namespace:

    $ oc create secret generic keycloak-client-secret \
        --from-literal=clientSecret=$KEYCLOAK_CLIENT_SECRET \
        -n openshift-ingress
  6. Update the GatewayConfig custom resource to enable OIDC authentication by patching it with the secret reference and OIDC details:

    $ oc patch gatewayconfig default-gateway --type='merge' -p='{
        "spec": {
          "oidc": {
            "issuerURL": "https://'$KEYCLOAK_DOMAIN'/realms/'$KEYCLOAK_REALM'",
            "clientID": "'$KEYCLOAK_CLIENT_ID'",
            "clientSecretRef": {
              "name": "keycloak-client-secret",
              "key": "clientSecret"
            }
          }
        }
      }'
  7. Verify that the client secret has been created and that the GatewayConfig shows the correct OIDC configuration:

    $ oc get secret keycloak-client-secret -n openshift-ingress
    $ oc get gatewayconfig default-gateway -o jsonpath='{.spec.oidc}'

    Expected output for secret and GatewayConfig should look like the following example:

    # Expected output (for secret)
    NAME                     TYPE     DATA   AGE
    keycloak-client-secret   Opaque   1      2m
    # Expected output (for GatewayConfig)
    {"clientID":"your-client-id","clientSecretRef":{"key":"clientSecret","name":"keycloak-client-secret"},"issuerURL":"https://keycloak.example.com/realms/your-realm"}
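As referenced in the note for step 3, you can confirm that every control plane node is running the latest kube-apiserver revision by comparing the per-node revisions reported by the kubeapiserver operator resource. All nodes should report the same revision before you continue:

    $ oc get kubeapiserver cluster -o jsonpath='{range .status.nodeStatuses[*]}{.nodeName}{"\t"}{.currentRevision}{"\n"}{end}'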

Verification

  1. After configuring the Gateway for your identity provider, ensure that you can access your OpenShift console.

    1. Access the Gateway by opening the console link returned by the following command:

      $ oc get consolelink
    2. Log in with your OIDC credentials and verify the following:

      1. You are redirected to the OIDC provider login page. A successful authentication redirects back to the Gateway.
      2. Your OpenShift AI components are accessible (for example: Dashboard, Notebooks).
  2. Check the GatewayConfig status to verify that the OIDC configuration was successfully provisioned:

    $ oc get gatewayconfig default-gateway -o yaml

    The expected output is the full YAML configuration of the GatewayConfig resource. It shows the OIDC configuration details under spec.oidc and confirms successful deployment by displaying both the Ready and ProvisioningSucceeded conditions with a status of "True" (a targeted condition check is sketched after this procedure).

  3. Verify the kube-auth-proxy deployment is running successfully in the openshift-ingress namespace:

    $ oc get deployment kube-auth-proxy -n openshift-ingress

    The expected output looks like the following example:

    NAME              READY   UP-TO-DATE   AVAILABLE   AGE
    kube-auth-proxy   1/1     1            1           5m
  4. Check the status and accessibility of the data-science-gateway:

    $ oc get gateway data-science-gateway -n openshift-ingress

    The expected output looks like the following example:

    NAME                   CLASS                        ADDRESS                                                                        PROGRAMMED   AGE
    data-science-gateway   data-science-gateway-class   aa87f5da7f0c748d5aa63b4916604108-107643684.us-east-1.elb.amazonaws.com         True         5m
  5. Test the OIDC discovery endpoint by running the following command:

    curl -s https://your-keycloak-domain/realms/your-realm/.well-known/openid-configuration

    The expected output is a JSON object containing the OIDC configuration endpoints (such as issuer, authorization_endpoint, and token_endpoint), which confirms that the OIDC provider is publicly discoverable.
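If you prefer a targeted check over reading the full YAML from step 2, you can query the individual conditions with jsonpath. The .status.conditions path is an assumption based on the condition names described above; both commands should print True:

    $ oc get gatewayconfig default-gateway -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    $ oc get gatewayconfig default-gateway -o jsonpath='{.status.conditions[?(@.type=="ProvisioningSucceeded")].status}{"\n"}'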

Next steps

After the external OIDC provider is configured and authenticated, the cluster administrator must map external identity provider (IdP) groups to specific OpenShift ClusterRoles to grant access to projects and resources.

  1. Create a ClusterRole that grants users read and list access to OpenShift projects in the console:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: odh-projects-read
    rules:
    - apiGroups: ["project.openshift.io"]
      resources: ["projects"]
      verbs: ["get","list"]
  2. Bind the odh-projects-read ClusterRole to your IdP group (for example, odh-users).

    $ oc adm policy add-cluster-role-to-group odh-projects-read odh-users
  3. Grant the ability to create and manage new projects by assigning the built-in self-provisioner ClusterRole to your group.

    $ oc adm policy add-cluster-role-to-group self-provisioner odh-users
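To confirm that the bindings took effect, you can impersonate the group and check access. The user name example-user and the group name odh-users are examples only; both commands should return yes:

    $ oc auth can-i list projects --as="example-user" --as-group="odh-users"
    $ oc auth can-i create projectrequests --as="example-user" --as-group="odh-users"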

10.2.1. Security considerations

  • Secret Management: Store OIDC client secrets securely and rotate them regularly.
  • Network Policies: Consider implementing network policies to restrict access to the authentication proxy (a sample policy follows this list).
  • TLS Configuration: Ensure all OIDC communication uses Transport Layer Security (TLS).
  • Token Validation: While kube-auth-proxy validates tokens, ensure your OIDC provider is configured with appropriate token lifetimes.
  • Audit Logging: Enable audit logging for authentication events.
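As an illustration of the network policy recommendation in this list, the following sketch restricts ingress to the kube-auth-proxy Pods to traffic that originates inside the openshift-ingress namespace. The Pod selector label and port are assumptions taken from the examples in this chapter; adjust them to match your deployment:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: restrict-kube-auth-proxy
      namespace: openshift-ingress
    spec:
      podSelector:
        matchLabels:
          app: kube-auth-proxy   # label used by the oc logs selectors in this chapter
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector: {}        # Pods in the same namespace, including the gateway
        ports:
        - protocol: TCP
          port: 8443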

10.3. Troubleshooting Gateway API configuration errors

If your users are experiencing errors in Red Hat OpenShift AI related to the Gateway API configuration, read this section to understand what could be causing the problem and how to resolve it.

If the problem is not documented here or in the release notes, contact Red Hat Support.

10.3.1. The GatewayConfig status shows as not ready

Problem

While setting up OIDC, the GatewayConfig status shows as not ready. You see error messages about missing OIDC configuration, and the GatewayConfig resource shows its status as Ready: False.

Diagnosis

  1. Check GatewayConfig status by running the following command.

    $ oc get gatewayconfig default-gateway -o yaml
  2. Check for specific error messages by running the following command.

    $ oc describe gatewayconfig default-gateway

    The expected output confirms that the GatewayConfig resource is successfully provisioned by showing the OIDC configuration details under Spec.Oidc and displaying both the Ready and ProvisioningSucceeded status conditions with a True status.

  3. Verify that the OIDC configuration is correct by running the following command.

    $ oc get gatewayconfig default-gateway -o jsonpath='{.spec.oidc}'

    The expected output looks like the following example:

    {"clientID":"your-client-id","clientSecretRef":{"key":"clientSecret","name":"keycloak-client-secret"},"issuerURL":"https://keycloak.example.com/realms/your-realm"}

Resolution

  1. Verify the OIDC secret exists and is correct by running the following command.

    $ oc get secret keycloak-client-secret -n openshift-ingress
  2. Check OIDC issuer URL accessibility by running the following command.

    curl -I https://your-keycloak-domain/realms/your-realm/.well-known/openid-configuration

    The expected output confirms the OIDC issuer URL is accessible by returning the HTTP status code HTTP/2 200 and the correct content-type: application/json header.

  3. Ensure that the client secret stored in the cluster matches the value configured in your OIDC provider, for example by decoding it as shown below.
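    A quick way to compare values is to decode the secret that the GatewayConfig references. This prints the client secret in clear text, so run it only in a trusted terminal:

    $ oc get secret keycloak-client-secret -n openshift-ingress -o jsonpath='{.data.clientSecret}' | base64 -d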

10.3.2. Authentication proxy fails to start

Problem

The authentication proxy component fails to start after deploying kube-auth-proxy. The associated Pods are in a failing state, showing statuses such as CrashLoopBackOff or Pending, and the kube-auth-proxy Deployment is not ready.

Diagnosis

  1. Check the kube-auth-proxy deployment status by running the following command.

    $ oc get deployment kube-auth-proxy -n openshift-ingress

    The expected output confirms that the deployment is successfully provisioned, showing 1/1 under the READY column.

  2. Check the Pod logs by running the following command.

    $ oc logs -l app=kube-auth-proxy -n openshift-ingress

    The expected output confirms that the OAuth2 Proxy is configured and starting on the specified ports.

    # Expected output
    time="2024-01-15T10:30:00Z" level=info msg="OAuth2 Proxy configured"
    time="2024-01-15T10:30:00Z" level=info msg="OAuth2 Proxy starting on :4180"
    time="2024-01-15T10:30:00Z" level=info msg="OAuth2 Proxy starting on :8443"
  3. Check the Pod events for errors by running the following command.

    $ oc describe pod -l app=kube-auth-proxy -n openshift-ingress

    The expected output should look like the following example.

    # Expected output
    Name:          kube-auth-proxy-7d4f8b9c6-xyz12
    Namespace:     openshift-ingress
    Status:        Running
    Containers:
      kube-auth-proxy:
        State:          Running
        Ready:          True
        Restart Count:  0
    Events:
      Type    Reason       Age   From                 Message
      ----    ------       ----  ----                 -------
      Normal  Scheduled    5m    default-scheduler    Successfully assigned openshift-ingress/kube-auth-proxy-7d4f8b9c6-xyz12 to worker-node-1

Resolution

  1. Verify that the authentication secret contains the correct client secret by running the following command.

    $ oc get secret kube-auth-proxy-creds -n openshift-ingress -o yaml

    The expected output should contain the keys OAUTH2_PROXY_CLIENT_SECRET, OAUTH2_PROXY_COOKIE_SECRET, and OAUTH2_PROXY_CLIENT_ID.

  2. Check if the OIDC issuer URL is accessible from the cluster by running the following command.

    curl -I https://your-keycloak-domain/realms/your-realm/.well-known/openid-configuration

    The expected output should return the HTTP status code HTTP/2 200.

  3. Ensure that the client ID exists in your OIDC provider.
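    To see which client ID the proxy is actually using, decode it from the kube-auth-proxy-creds Secret shown in step 1 and compare it with the client registered in your OIDC provider:

    $ oc get secret kube-auth-proxy-creds -n openshift-ingress -o jsonpath='{.data.OAUTH2_PROXY_CLIENT_ID}' | base64 -d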

10.3.3. The Gateway is inaccessible

Problem

After configuring OIDC, you cannot access the Gateway URL: https://data-science-gateway.$CLUSTER_DOMAIN. Attempts to access the URL return 502 (Bad Gateway) or 503 (Service Unavailable) errors, indicating a networking failure that prevents external access or traffic routing to the service endpoint.

Diagnosis

  1. Check the Gateway status of data-science-gateway by running the following command.

    $ oc get gateway data-science-gateway -n openshift-ingress

    The expected output shows True in the PROGRAMMED column and a valid address in the ADDRESS column.

  2. Check the HTTPRoute status by running the following command.

    $ oc get httproute -n openshift-ingress

    The expected output shows that the oauth-callback-route is present.

  3. Check the EnvoyFilter by running the following command.

    $ oc get envoyfilter -n openshift-ingress

    The expected output shows that the authn-filter is present.

  4. Check the kube-auth-proxy Service by running the following command.

    $ oc get service kube-auth-proxy -n openshift-ingress

    The expected output shows that the Service and correct ports are present, like the following example:

    # Expected output
    NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE
    kube-auth-proxy   ClusterIP   172.30.31.69   <none>        8443/TCP,9000/TCP   41h

Resolution

  1. Verify the Gateway has a valid address by running the following command.

    $ oc get gateway data-science-gateway -n openshift-ingress -o jsonpath='{.status.addresses}'

    The expected output shows a valid IP address or hostname.

  2. Check if the HTTPRoute is properly configured by running the following command.

    $ oc describe httproute oauth-callback-route -n openshift-ingress

    The expected output confirms proper parent references and backend services.

  3. Ensure the EnvoyFilter is applied correctly by running the following command.

    $ oc describe envoyfilter authn-filter -n openshift-ingress

    The expected output confirms the proper configuration for kube-auth-proxy.
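A 502 or 503 response often means that a Service has no ready endpoints. As an additional check, confirm that the kube-auth-proxy Service resolves to at least one Pod; the ENDPOINTS column should list at least one address:

    $ oc get endpoints kube-auth-proxy -n openshift-ingress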

10.3.4. The OIDC authentication fails

Problem

The OIDC authentication fails and you are unable to log in through the Gateway. You also experience symptoms such as redirect loops or explicit authentication errors after attempting to log in.

Diagnosis

  1. Check the kube-auth-proxy logs for specific error messages by running the following command.

    $ oc logs -l app=kube-auth-proxy -n openshift-ingress

    The expected output confirms that the OAuth2 Proxy is configured and starting on the specified ports.

  2. Verify the OIDC configuration in the kube-auth-proxy Secret by running the following command.

    $ oc get secret kube-auth-proxy-creds -n openshift-ingress -o yaml

    The expected output shows that the Secret contains the keys OAUTH2_PROXY_CLIENT_ID, OAUTH2_PROXY_CLIENT_SECRET, and OAUTH2_PROXY_COOKIE_SECRET. The output should look like the following example.

    # Expected output
    apiVersion: v1
    kind: Secret
    metadata:
      name: kube-auth-proxy-creds
      namespace: openshift-ingress
    type: Opaque
    data:
      OAUTH2_PROXY_CLIENT_ID: b2RoLWNsaWVudA==  # base64 encoded "odh-client"
      OAUTH2_PROXY_CLIENT_SECRET: <base64-encoded-secret>
      OAUTH2_PROXY_COOKIE_SECRET: <base64-encoded-cookie-secret>
  3. Test the OIDC discovery endpoint by running the following command.

    curl -s https://your-keycloak-domain/realms/your-realm/.well-known/openid-configuration | jq .

    The expected output returns the complete JSON configuration, including valid endpoints for issuer, authorization_endpoint, and token_endpoint.

Resolution

  1. Log in to the OIDC provider (for example, Keycloak) and verify that the redirect URI registered for the client matches the expected endpoint on the Gateway: https://data-science-gateway.$CLUSTER_DOMAIN/oauth2/callback. Mismatches are a frequent cause of redirect loops. A command that prints the expected redirect URI is sketched at the end of this procedure.
  2. Check that the client secret stored in the cluster is set correctly by running the following command.

    $ oc get secret keycloak-client-secret -n openshift-ingress -o jsonpath='{.data.clientSecret}' | base64 -d

    The expected output matches the secret in your OIDC provider.

  3. Ensure that the issuer URL is accessible and correct by running the following command.

    curl -I https://your-keycloak-domain/realms/your-realm/.well-known/openid-configuration

    The expected output returns HTTP/2 200.
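To print the exact redirect URI to register in your OIDC provider, as mentioned in step 1 of this Resolution, you can derive the cluster domain from the cluster Ingress configuration. The data-science-gateway host name is taken from the examples in this chapter; adjust it if your Gateway uses a different host:

    $ CLUSTER_DOMAIN=$(oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}')
    $ echo "https://data-science-gateway.${CLUSTER_DOMAIN}/oauth2/callback"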

10.3.5. Access to the dashboard fails with 403 Forbidden errors

Problem

After successfully authenticating with OIDC, you experience an authorization failure that prevents access to the dashboard and results in 403 Forbidden errors.

Diagnosis

  1. Check the dashboard Deployment status by running the following command.

    $ oc get deployment -n redhat-ods-applications rhods-dashboard

    The expected output confirms that the Pods are running, similar to the following example.

    NAME              READY   UP-TO-DATE   AVAILABLE   AGE
    rhods-dashboard   2/2     2            2           7h42m
  2. Check the dashboard logs for any authorization errors by running the following command.

    $ oc logs -l app=rhods-dashboard -n redhat-ods-applications

    In the expected output, the logs confirm the Dashboard is running and ready to serve requests.

  3. Verify the user permissions by running the following command.

    $ oc auth can-i get projects --as=your-username

    The expected output confirms that the user has the required access.

Resolution

  1. Ensure that the user has the required cluster-level RBAC permissions. For example, grant the view cluster role by running the following command.

    $ oc adm policy add-cluster-role-to-user view your-username

    The expected output confirms that the view cluster role has been added to the user.

  2. Verify that the odh-dashboard HTTPRoute is properly configured with correct parent references (linking it to the Gateway) by running the following command.

    $ oc get httproute rhods-dashboard -n redhat-ods-applications -o yaml

    The expected output shows proper parent references to the Gateway.

  3. Check whether the user belongs to the groups that hold the roles required by the dashboard by running the following command.

    $ oc get user your-username -o yaml

    The expected output confirms that the user is in the odh-users group.
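    If group membership is provided by the OIDC token rather than by an OpenShift Group object, you can also check access by impersonating the group directly. The group name odh-users is an example, and the expected output is yes:

    $ oc auth can-i get projects --as=your-username --as-group=odh-users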
