Chapter 3. Red Hat Developer Hub integration with Amazon Web Services (AWS)


You can integrate your Red Hat Developer Hub application with Amazon Web Services (AWS), which can help you streamline your workflows within the AWS ecosystem. Integrating the Developer Hub resources with AWS provides access to a comprehensive suite of tools, services, and solutions.

The integration with AWS requires deploying Developer Hub on Amazon Elastic Kubernetes Service (EKS) using one of the following methods:

  • The Helm chart
  • The Red Hat Developer Hub Operator

3.1. Deploying Developer Hub on EKS with the Helm chart

When you deploy Developer Hub on Elastic Kubernetes Service (EKS) using the Helm chart, it orchestrates a robust development environment within the AWS ecosystem.

Prerequisites

  • You have an EKS cluster with the AWS Application Load Balancer (ALB) add-on installed, as required by the Ingress annotations used in this procedure.
  • You have configured a domain name for your Developer Hub instance and created a matching certificate in AWS Certificate Manager (ACM).
  • You have installed the helm and kubectl command-line tools and can access your cluster.
  • You have a Red Hat account with credentials to pull images from registry.redhat.io.

Procedure

  1. Go to your terminal and run the following command to add the Helm chart repository containing the Developer Hub chart to your local Helm registry:

    helm repo add openshift-helm-charts https://charts.openshift.io/
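    Optionally, you can confirm that the repository was added and the chart is available (a quick check using helm search):

    helm search repo openshift-helm-charts/redhat-developer-hub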
  2. Create a pull secret using the following command:

    kubectl create secret docker-registry rhdh-pull-secret \
        --docker-server=registry.redhat.io \
        --docker-username=<user_name> \
        --docker-password=<password> \
        --docker-email=<email>

    Replace <user_name>, <password>, and <email> with the username, password, and email address of your Red Hat account.

    The created pull secret is used to pull the Developer Hub images from the Red Hat Ecosystem Catalog.
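    Optionally, you can confirm that the secret exists before continuing (a quick check; the output shows a secret of type kubernetes.io/dockerconfigjson):

    kubectl get secret rhdh-pull-secret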

  3. Create a file named values.yaml using the following template:

    global:
      # TODO: Set your application domain name.
      host: <your Developer Hub domain name>
    
    
    route:
      enabled: false
    
    
    upstream:
      service:
        # NodePort is required for the ALB to route to the Service
        type: NodePort
    
    
      ingress:
        enabled: true
        annotations:
          kubernetes.io/ingress.class: alb
    
    
          alb.ingress.kubernetes.io/scheme: internet-facing
    
    
          # TODO: Using an ALB HTTPS Listener requires a certificate for your own domain. Fill in the ARN of your certificate, e.g.:
          alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:xxx:xxxx:certificate/xxxxxx
    
    
          alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    
    
          alb.ingress.kubernetes.io/ssl-redirect: '443'
    
    
          # TODO: Set your application domain name.
          external-dns.alpha.kubernetes.io/hostname: <your rhdh domain name>
    
    
      backstage:
        image:
          pullSecrets:
          - rhdh-pull-secret
        podSecurityContext:
          # you can assign any random value as fsGroup
          fsGroup: 2000
      postgresql:
        image:
          pullSecrets:
          - rhdh-pull-secret
        primary:
          podSecurityContext:
            enabled: true
            # you can assign any random value as fsGroup
            fsGroup: 3000
      volumePermissions:
        enabled: true
  4. Run the following command in your terminal to deploy Developer Hub using the latest version of the Helm chart and the values.yaml file created in the previous step:

    helm install rhdh \
      openshift-helm-charts/redhat-developer-hub \
      [--version 1.1.4] \
      --values /path/to/values.yaml
    Note

    The --version argument is optional. Omit it to install the latest available version of the chart, or specify it to pin a specific chart version, such as 1.1.4.
Verification

Wait until the DNS name is responsive, indicating that your Developer Hub instance is ready for use.
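For example, you can check that the domain resolves and that Developer Hub responds over HTTPS (a minimal check; replace <your_rhdh_domain_name> with the domain name you configured):

dig +short <your_rhdh_domain_name>
curl -I https://<your_rhdh_domain_name>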

3.2. Deploying Developer Hub on EKS with the Red Hat Developer Hub Operator

You can deploy Developer Hub on EKS using the Red Hat Developer Hub Operator, with or without the Operator Lifecycle Manager (OLM) framework. After the Operator is installed, you can proceed to install your Developer Hub instance in EKS.

3.2.1. Installing the Developer Hub Operator with the OLM framework

Prerequisites

  • You have an EKS cluster and have configured kubectl to access it.
  • You have a Red Hat account with credentials to pull images from registry.redhat.io.

Procedure

  1. Run the following command in your terminal to create the rhdh-operator namespace where the Operator is installed:

    kubectl create namespace rhdh-operator
  2. Create a pull secret using the following command:

    kubectl -n rhdh-operator create secret docker-registry rhdh-pull-secret \
        --docker-server=registry.redhat.io \
        --docker-username=<user_name> \
        --docker-password=<password> \
        --docker-email=<email>

    Replace <user_name>, <password>, and <email> with the username, password, and email address of your Red Hat account.

    The created pull secret is used to pull the Developer Hub images from the Red Hat Ecosystem Catalog.

  3. Create a CatalogSource resource that contains the Operators from the Red Hat Ecosystem:

    cat <<EOF | kubectl -n rhdh-operator apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: redhat-catalog
    spec:
      sourceType: grpc
      image: registry.redhat.io/redhat/redhat-operator-index:v4.15
      secrets:
      - "rhdh-pull-secret"
      displayName: Red Hat Operators
    EOF
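    Optionally, you can confirm that the catalog source is ready before continuing (a quick check; READY indicates that the index image was pulled successfully):

    kubectl -n rhdh-operator get catalogsource redhat-catalog \
        -o jsonpath='{.status.connectionState.lastObservedState}'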
  4. Create an OperatorGroup resource as follows:

    cat <<EOF | kubectl apply -n rhdh-operator -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: rhdh-operator-group
    EOF
  5. Create a Subscription resource using the following code:

    cat <<EOF | kubectl apply -n rhdh-operator -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: rhdh
      namespace: rhdh-operator
    spec:
      channel: fast
      installPlanApproval: Automatic
      name: rhdh
      source: redhat-catalog
      sourceNamespace: rhdh-operator
      startingCSV: rhdh-operator.v1.1.2
    EOF
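    Optionally, you can check which ClusterServiceVersion the Subscription resolved to (a quick check; the output stays empty until OLM has processed the Subscription):

    kubectl -n rhdh-operator get subscription rhdh \
        -o jsonpath='{.status.installedCSV}'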
  6. Run the following command to verify that the created Operator is running:

    kubectl -n rhdh-operator get pods -w

    If the operator pod shows an ImagePullBackOff status, the Operator deployment might lack permission to pull the image, and you might need to reference the pull secret directly in the Operator deployment's manifest.

    Tip

    You can include the required secret name in the deployment.spec.template.spec.imagePullSecrets list and verify the deployment name by using the kubectl get deployment -n rhdh-operator command:

    kubectl -n rhdh-operator patch deployment \
        rhdh.fast --patch '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"rhdh-pull-secret"}]}}}}' \
        --type=merge
  7. Update the default configuration of the operator to ensure that Developer Hub resources can start correctly in EKS using the following steps:

    1. Edit the backstage-default-config ConfigMap in the rhdh-operator namespace using the following command:

      kubectl -n rhdh-operator edit configmap backstage-default-config
    2. Locate the db-statefulset.yaml string and add the fsGroup to its spec.template.spec.securityContext, as shown in the following example:

        db-statefulset.yaml: |
          apiVersion: apps/v1
          kind: StatefulSet
      --- TRUNCATED ---
          spec:
          --- TRUNCATED ---
            restartPolicy: Always
            securityContext:
          # You can assign any random value as fsGroup
              fsGroup: 2000
            serviceAccount: default
            serviceAccountName: default
      --- TRUNCATED ---
    3. Locate the deployment.yaml string and add the fsGroup to its specification, as shown in the following example:

        deployment.yaml: |
          apiVersion: apps/v1
          kind: Deployment
      --- TRUNCATED ---
          spec:
            securityContext:
              # You can assign any random value as fsGroup
              fsGroup: 3000
            automountServiceAccountToken: false
      --- TRUNCATED ---
    4. Locate the service.yaml string and change the type to NodePort as follows:

        service.yaml: |
          apiVersion: v1
          kind: Service
          spec:
        # NodePort is required for the ALB to route to the Service
            type: NodePort
      --- TRUNCATED ---
    5. Save and exit.

      Wait for a few minutes until the changes are automatically applied to the operator pods.

3.2.2. Installing the Developer Hub Operator without the OLM framework

Prerequisites

  • You have installed the following commands:

    • git
    • make
    • sed

Procedure

  1. Clone the Operator repository to your local machine using the following command:

    git clone --depth=1 https://github.com/janus-idp/operator.git rhdh-operator && cd rhdh-operator
  2. Run the following command and generate the deployment manifest:

    make deployment-manifest

    The previous command generates a file named rhdh-operator-<VERSION>.yaml, which you update manually in the following steps.

  3. Run the following command to apply replacements in the generated deployment manifest:

    sed -i "s/backstage-operator/rhdh-operator/g" rhdh-operator-*.yaml
    sed -i "s/backstage-system/rhdh-operator/g" rhdh-operator-*.yaml
    sed -i "s/backstage-controller-manager/rhdh-controller-manager/g" rhdh-operator-*.yaml
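    Optionally, you can confirm that the replacements were applied (a quick sanity check; the command prints nothing when none of the old names remain):

    grep -E "backstage-(operator|system|controller-manager)" rhdh-operator-*.yaml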
  4. Open the generated deployment manifest file in an editor and perform the following steps:

    1. Locate the db-statefulset.yaml string and add the fsGroup to its spec.template.spec.securityContext, as shown in the following example:

        db-statefulset.yaml: |
          apiVersion: apps/v1
          kind: StatefulSet
      --- TRUNCATED ---
          spec:
          --- TRUNCATED ---
            restartPolicy: Always
            securityContext:
              # You can assign any random value as fsGroup
              fsGroup: 2000
            serviceAccount: default
            serviceAccountName: default
      --- TRUNCATED ---
    2. Locate the deployment.yaml string and add the fsGroup to its specification, as shown in the following example:

        deployment.yaml: |
          apiVersion: apps/v1
          kind: Deployment
      --- TRUNCATED ---
          spec:
            securityContext:
              # You can assign any random value as fsGroup
              fsGroup: 3000
            automountServiceAccountToken: false
      --- TRUNCATED ---
    3. Locate the service.yaml string and change the type to NodePort as follows:

        service.yaml: |
          apiVersion: v1
          kind: Service
          spec:
            # NodePort is required for the ALB to route to the Service
            type: NodePort
      --- TRUNCATED ---
    4. Replace the default images with the images that are pulled from the Red Hat Ecosystem:

      sed -i "s#gcr.io/kubebuilder/kube-rbac-proxy:.*#registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.15#g" rhdh-operator-*.yaml
      
      sed -i "s#quay.io/janus-idp/operator:.*#registry.redhat.io/rhdh/rhdh-rhel9-operator:1.1#g" rhdh-operator-*.yaml
      
      sed -i "s#quay.io/janus-idp/backstage-showcase:.*#registry.redhat.io/rhdh/rhdh-hub-rhel9:1.1#g" rhdh-operator-*.yaml
      
      sed -i "s#quay.io/fedora/postgresql-15:.*#registry.redhat.io/rhel9/postgresql-15:latest#g" rhdh-operator-*.yaml
  5. Add the image pull secret to the manifest in the Deployment resource as follows:

    --- TRUNCATED ---
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app.kubernetes.io/component: manager
        app.kubernetes.io/created-by: rhdh-operator
        app.kubernetes.io/instance: controller-manager
        app.kubernetes.io/managed-by: kustomize
        app.kubernetes.io/name: deployment
        app.kubernetes.io/part-of: rhdh-operator
        control-plane: controller-manager
      name: rhdh-controller-manager
      namespace: rhdh-operator
    spec:
      replicas: 1
      selector:
        matchLabels:
          control-plane: controller-manager
      template:
        metadata:
          annotations:
            kubectl.kubernetes.io/default-container: manager
          labels:
            control-plane: controller-manager
        spec:
          imagePullSecrets:
            - name: rhdh-pull-secret
    --- TRUNCATED ---
  6. Apply the manifest to deploy the operator using the following command:

    kubectl apply -f rhdh-operator-<VERSION>.yaml
  7. Run the following command to verify that the Operator is running:

    kubectl -n rhdh-operator get pods -w

3.2.3. Installing the Developer Hub instance in EKS

After the Red Hat Developer Hub Operator is installed and running, you can create a Developer Hub instance in EKS.

Prerequisites

  • The Red Hat Developer Hub Operator is installed and running.
  • You have created the rhdh-pull-secret pull secret in the namespace where you are deploying your Developer Hub instance.

Procedure

  1. Create a ConfigMap named app-config-rhdh containing the Developer Hub configuration using the following template:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config-rhdh
    data:
      "app-config-rhdh.yaml": |
        app:
          title: Red Hat Developer Hub
          baseUrl: https://<rhdh_dns_name>
        backend:
          auth:
            keys:
              - secret: "${BACKEND_SECRET}"
          baseUrl: https://<rhdh_dns_name>
          cors:
            origin: https://<rhdh_dns_name>
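    Apply the ConfigMap in the namespace where you plan to create the Developer Hub instance, for example (assuming you saved the template as app-config-rhdh.yaml):

    kubectl apply -f app-config-rhdh.yaml -n <your_namespace>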
  2. Create a Secret named secrets-rhdh and add a key named BACKEND_SECRET with a Base64-encoded string as value:

    apiVersion: v1
    kind: Secret
    metadata:
      name: secrets-rhdh
    stringData:
      # TODO: See https://backstage.io/docs/auth/service-to-service-auth/#setup
      BACKEND_SECRET: "xxx"
    Important

    Ensure that you use a unique value of BACKEND_SECRET for each Developer Hub instance.

    You can use the following command to generate a key:

    node -p 'require("crypto").randomBytes(24).toString("base64")'
  3. To enable pulling the PostgreSQL image from the Red Hat Ecosystem Catalog, add the image pull secret in the default service account within the namespace where the Developer Hub instance is being deployed:

    kubectl patch serviceaccount default \
        -p '{"imagePullSecrets": [{"name": "rhdh-pull-secret"}]}' \
        -n <your_namespace>
  4. Create a Custom Resource file using the following template:

    apiVersion: rhdh.redhat.com/v1alpha1
    kind: Backstage
    metadata:
      # TODO: this is the name of your Developer Hub instance
      name: my-rhdh
    spec:
      application:
        imagePullSecrets:
        - "rhdh-pull-secret"
        route:
          enabled: false
        appConfig:
          configMaps:
            - name: "app-config-rhdh"
        extraEnvs:
          secrets:
            - name: "secrets-rhdh"
  5. Create an Ingress resource using the following template, ensuring to customize the names as needed:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      # TODO: this is the name of your Developer Hub Ingress
      name: my-rhdh
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing
    
        alb.ingress.kubernetes.io/target-type: ip
    
        # TODO: Using an ALB HTTPS Listener requires a certificate for your own domain. Fill in the ARN of your certificate, e.g.:
        alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-xxx:xxxx:certificate/xxxxxx
    
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    
        alb.ingress.kubernetes.io/ssl-redirect: '443'
    
        # TODO: Set your application domain name.
        external-dns.alpha.kubernetes.io/hostname: <rhdh_dns_name>
    
    spec:
      ingressClassName: alb
      rules:
        # TODO: Set your application domain name.
        - host: <rhdh_dns_name>
          http:
            paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  # TODO: my-rhdh is the name of your Backstage Custom Resource.
                  # Adjust if you changed it!
                  name: backstage-my-rhdh
                  port:
                    name: http-backend

    In the previous template, replace <rhdh_dns_name> with your Developer Hub domain name and update the value of alb.ingress.kubernetes.io/certificate-arn with your certificate ARN.

Verification

Wait until the DNS name is responsive, indicating that your Developer Hub instance is ready for use.
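For example, you can retrieve the address that the ALB assigned to the Ingress (assuming you kept the Ingress name my-rhdh from the template):

kubectl get ingress my-rhdh -n <your_namespace>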

3.3. Monitoring and logging with Amazon Web Services (AWS)

In the Red Hat Developer Hub, monitoring and logging are facilitated through Amazon Web Services (AWS) integration. With Amazon Managed Service for Prometheus for comprehensive monitoring and Amazon CloudWatch for logging, you can ensure the reliability, scalability, and compliance of your Developer Hub application hosted on AWS infrastructure.

This integration enables you to oversee, diagnose, and refine your applications in the Red Hat ecosystem, leading to an improved development and operational journey.

3.3.1. Monitoring with Amazon Prometheus

Red Hat Developer Hub provides Prometheus metrics related to the running application. For more information about enabling or deploying Prometheus for EKS clusters, see Prometheus metrics in the Amazon documentation.

To monitor Developer Hub using Amazon Prometheus, you must create an Amazon Managed Service for Prometheus workspace and configure the ingestion of the Developer Hub Prometheus metrics. For more information, see the Create a workspace and Ingest Prometheus metrics to the workspace sections in the Amazon documentation.

After ingesting Prometheus metrics into the created workspace, you can configure the metrics scraping to extract data from pods based on specific pod annotations.
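The following is a minimal sketch of such an annotation-based scrape job, assuming the standard prometheus.io/* annotation convention used in the next section and a Prometheus server (or compatible collector) that forwards samples to your Amazon Managed Service for Prometheus workspace via remote_write; adapt it to your collector:

# Minimal sketch: scrape only pods that opt in through prometheus.io/* annotations.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Use the metrics path from prometheus.io/path, if set
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Use the port from prometheus.io/port, if set
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__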

3.3.1.1. Configuring annotations for monitoring

You can configure the annotations for monitoring in both Helm deployment and Operator-backed deployment.

Helm deployment

To annotate the backstage pod for monitoring, update your values.yaml file as follows:

upstream:
  backstage:
    # --- TRUNCATED ---
    podAnnotations:
      prometheus.io/scrape: 'true'
      prometheus.io/path: '/metrics'
      prometheus.io/port: '7007'
      prometheus.io/scheme: 'http'
Operator-backed deployment

Procedure

  1. As an administrator of the operator, edit the default configuration to add Prometheus annotations as follows:

    # Update OPERATOR_NS accordingly
    OPERATOR_NS=rhdh-operator
    kubectl edit configmap backstage-default-config -n "${OPERATOR_NS}"
  2. Find the deployment.yaml key in the ConfigMap and add the annotations to the spec.template.metadata.annotations field as follows:

    deployment.yaml: |-
      apiVersion: apps/v1
      kind: Deployment
      # --- truncated ---
      spec:
        template:
          # --- truncated ---
          metadata:
            labels:
              rhdh.redhat.com/app:  # placeholder for 'backstage-<cr-name>'
            # --- truncated ---
            annotations:
              prometheus.io/scrape: 'true'
              prometheus.io/path: '/metrics'
              prometheus.io/port: '7007'
              prometheus.io/scheme: 'http'
      # --- truncated ---
  3. Save your changes.

Verification

To verify that the scraping works:

  1. Use kubectl to port-forward the Prometheus console to your local machine as follows:

    kubectl --namespace=prometheus port-forward deploy/prometheus-server 9090
  2. Open your web browser and navigate to http://localhost:9090 to access the Prometheus console.
  3. Monitor relevant metrics, such as process_cpu_user_seconds_total.
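    For example, the following query shows the per-second CPU usage rate over the last five minutes (an illustrative query; add label selectors to narrow it down to the Developer Hub pods):

    rate(process_cpu_user_seconds_total[5m])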

3.3.2. Logging with Amazon CloudWatch logs

Logging within the Red Hat Developer Hub relies on the winston library. By default, logs at the debug level are not recorded. To activate debug logs, you must set the environment variable LOG_LEVEL to debug in your Red Hat Developer Hub instance.

3.3.2.1. Configuring the application log level

You can configure the application log level in both Helm deployment and Operator-backed deployment.

Helm deployment

To update the logging level, add the environment variable LOG_LEVEL to your Helm chart’s values.yaml file:

upstream:
  backstage:
    # --- Truncated ---
    extraEnvVars:
      - name: LOG_LEVEL
        value: debug
Operator-backed deployment

You can modify the logging level by including the environment variable LOG_LEVEL in your custom resource as follows:

spec:
  # Other fields omitted
  application:
    extraEnvs:
      envs:
        - name: LOG_LEVEL
          value: debug

3.3.2.2. Retrieving logs from Amazon CloudWatch

CloudWatch Container Insights is used to capture logs and metrics for Amazon EKS. For more information, see the Logging for Amazon EKS documentation.

To capture the logs and metrics, install the Amazon CloudWatch Observability EKS add-on in your cluster. Following the setup of Container Insights, you can access container logs using Logs Insights or Live Tail views.

CloudWatch names the log group where all container logs are consolidated in the following manner:

/aws/containerinsights/<ClusterName>/application

The following is an example query that retrieves logs from the Developer Hub instance:

fields @timestamp, @message, kubernetes.container_name
| filter kubernetes.container_name in ["install-dynamic-plugins", "backstage-backend"]
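Alternatively, you can stream the same log group from your terminal using AWS CLI version 2, replacing <ClusterName> with your cluster name:

aws logs tail /aws/containerinsights/<ClusterName>/application --follow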

3.4. Using Amazon Cognito as an authentication provider in Developer Hub

In this section, Amazon Cognito is an AWS service used to add an authentication layer to Developer Hub. You can sign in directly to Developer Hub using a user pool or federate through a third-party identity provider.

Although Amazon Cognito is not part of the core authentication providers for the Developer Hub, it can be integrated using the generic OpenID Connect (OIDC) provider.

You can configure your Developer Hub in both Helm and Operator-backed deployments.

Prerequisites

  • You have an existing Amazon Cognito user pool or have created a new one. For more information about user pools, see the Amazon Cognito user pools documentation.

    Note

    Ensure that you have noted the AWS region where the user pool is located and the user pool ID.

  • You have created an App Client within your user pool for integrating the hosted UI. For more information, see Setting up the hosted UI with the Amazon Cognito console.

    When setting up the hosted UI using the Amazon Cognito console, make the following adjustments (an equivalent AWS CLI sketch follows this list):

    1. In the Allowed callback URL(s) section, include the URL https://<rhdh_url>/api/auth/oidc/handler/frame. Replace <rhdh_url> with your Developer Hub application’s URL, such as my.rhdh.example.com.
    2. Similarly, in the Allowed sign-out URL(s) section, add https://<rhdh_url>, again replacing <rhdh_url> with your Developer Hub application’s URL.
    3. Under OAuth 2.0 grant types, select Authorization code grant to return an authorization code.
    4. Under OpenID Connect scopes, select at least the following scopes:

      • OpenID
      • Profile
      • Email
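    If you prefer the AWS CLI to the console, the following sketch applies equivalent settings to an existing app client. The placeholder values are hypothetical; also note that update-user-pool-client overwrites any settings you do not pass, so include every setting you want to keep:

      aws cognito-idp update-user-pool-client \
          --user-pool-id <user_pool_id> \
          --client-id <app_client_id> \
          --callback-urls "https://<rhdh_url>/api/auth/oidc/handler/frame" \
          --logout-urls "https://<rhdh_url>" \
          --allowed-o-auth-flows code \
          --allowed-o-auth-scopes openid profile email \
          --allowed-o-auth-flows-user-pool-client \
          --supported-identity-providers COGNITO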
    Helm deployment

    Procedure

    1. Edit or create your custom app-config-rhdh ConfigMap as follows:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: app-config-rhdh
      data:
        "app-config-rhdh.yaml": |
          # --- Truncated ---
          app:
            title: Red Hat Developer Hub
      
          signInPage: oidc
          auth:
            environment: production
            session:
              secret: ${AUTH_SESSION_SECRET}
            providers:
              oidc:
                production:
                  clientId: ${AWS_COGNITO_APP_CLIENT_ID}
                  clientSecret: ${AWS_COGNITO_APP_CLIENT_SECRET}
                  metadataUrl: ${AWS_COGNITO_APP_METADATA_URL}
                  callbackUrl: ${AWS_COGNITO_APP_CALLBACK_URL}
                  scope: 'openid profile email'
                  prompt: auto
    2. Edit or create your custom secrets-rhdh Secret using the following template:

      apiVersion: v1
      kind: Secret
      metadata:
        name: secrets-rhdh
      stringData:
        AUTH_SESSION_SECRET: "my super auth session secret - change me!!!"
        AWS_COGNITO_APP_CLIENT_ID: "my-aws-cognito-app-client-id"
        AWS_COGNITO_APP_CLIENT_SECRET: "my-aws-cognito-app-client-secret"
        AWS_COGNITO_APP_METADATA_URL: "https://cognito-idp.[region].amazonaws.com/[userPoolId]/.well-known/openid-configuration"
        AWS_COGNITO_APP_CALLBACK_URL: "https://[rhdh_dns]/api/auth/oidc/handler/frame"
    3. Add references of both the ConfigMap and Secret resources in your values.yaml file:

      upstream:
        backstage:
          image:
            pullSecrets:
            - rhdh-pull-secret
          podSecurityContext:
            fsGroup: 2000
          extraAppConfig:
            - filename: app-config-rhdh.yaml
              configMapRef: app-config-rhdh
          extraEnvVarsSecrets:
            - secrets-rhdh
    4. Upgrade the Helm deployment:

      helm upgrade rhdh \
        openshift-helm-charts/redhat-developer-hub \
        [--version 1.1.4] \
        --values /path/to/values.yaml
    Operator-backed deployment

    Procedure
    1. Add the following code to your app-config-rhdh ConfigMap:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: app-config-rhdh
      data:
        "app-config-rhdh.yaml": |
          # --- Truncated ---
      
          signInPage: oidc
          auth:
            # Production to disable guest user login
            environment: production
            # Providing an auth.session.secret is needed because the oidc provider requires session support.
            session:
              secret: ${AUTH_SESSION_SECRET}
            providers:
              oidc:
                production:
                  # See https://github.com/backstage/backstage/blob/master/plugins/auth-backend-module-oidc-provider/config.d.ts
                  clientId: ${AWS_COGNITO_APP_CLIENT_ID}
                  clientSecret: ${AWS_COGNITO_APP_CLIENT_SECRET}
                  metadataUrl: ${AWS_COGNITO_APP_METADATA_URL}
                  callbackUrl: ${AWS_COGNITO_APP_CALLBACK_URL}
                  # Minimal set of scopes needed. Feel free to add more if needed.
                  scope: 'openid profile email'
      
                  # Note that by default, this provider uses the 'none' prompt, which assumes that you are already logged in to the IdP.
                  # You should set prompt to:
                  # - auto: lets the IdP decide whether you need to log in or whether you can skip login when you have an active SSO session
                  # - login: forces the IdP to always present a login form to the user
                  prompt: auto
    2. Add the following code to your secrets-rhdh Secret:

      apiVersion: v1
      kind: Secret
      metadata:
        name: secrets-rhdh
      stringData:
        # --- Truncated ---
      
        # TODO: Change auth session secret.
        AUTH_SESSION_SECRET: "my super auth session secret - change me!!!"
      
        # TODO: user pool app client ID
        AWS_COGNITO_APP_CLIENT_ID: "my-aws-cognito-app-client-id"
      
        # TODO: user pool app client Secret
        AWS_COGNITO_APP_CLIENT_SECRET: "my-aws-cognito-app-client-secret"
      
        # TODO: Replace region and user pool ID
        AWS_COGNITO_APP_METADATA_URL: "https://cognito-idp.[region].amazonaws.com/[userPoolId]/.well-known/openid-configuration"
      
        # TODO: Replace <rhdh_dns>
        AWS_COGNITO_APP_CALLBACK_URL: "https://[rhdh_dns]/api/auth/oidc/handler/frame"
    3. Ensure your Custom Resource contains references to both the app-config-rhdh ConfigMap and secrets-rhdh Secret:

      apiVersion: rhdh.redhat.com/v1alpha1
      kind: Backstage
      metadata:
        # TODO: this is the name of your Developer Hub instance
        name: my-rhdh
      spec:
        application:
          imagePullSecrets:
          - "rhdh-pull-secret"
          route:
            enabled: false
          appConfig:
            configMaps:
              - name: "app-config-rhdh"
          extraEnvs:
            secrets:
              - name: "secrets-rhdh"
    4. Optional: If you have an existing Developer Hub instance backed by the Custom Resource and you have not edited the Custom Resource, you can manually delete the Developer Hub deployment so that the Operator recreates it. Run the following command to delete the Developer Hub deployment:

      kubectl delete deployment -l app.kubernetes.io/instance=<CR_NAME>

Verification

  1. Navigate to your Developer Hub web URL and sign in using OIDC authentication, which prompts you to authenticate through the configured AWS Cognito user pool.
  2. Once logged in, access Settings and verify user details.