Tutorials


OpenShift Dedicated 4

OpenShift Dedicated tutorials

Red Hat OpenShift Documentation Team

Abstract

Tutorials for managing OpenShift Dedicated clusters.

Chapter 1. Tutorials overview

Use the step-by-step tutorials from Red Hat experts to get the most out of your Managed OpenShift cluster.

Important

This content is authored by Red Hat experts but has not yet been tested on every supported configuration.

Chapter 2. Updating component routes with custom domains and TLS certificates

This guide demonstrates how to modify the hostname and TLS certificate of the Web console, OAuth server, and Downloads component routes in OpenShift Dedicated on Google Cloud version 4.14 and above.[1]

The changes that we make to the component routes[2] in this guide are described in greater detail in the Customizing the internal OAuth server URL, Customizing the console route, and Customizing the download route sections of the OpenShift Dedicated documentation.

2.1. Prerequisites

  • OCM CLI (ocm) version 1.0.5 or higher
  • gcloud CLI (gcloud)
  • An OpenShift Dedicated on Google Cloud cluster version 4.14 or higher
  • OpenShift CLI (oc)
  • jq CLI
  • Access to the cluster as a user with the cluster-admin role.
  • OpenSSL (for generating the demonstration SSL/TLS certificates)

2.2. Setting up your environment

  1. Log in to your cluster using an account with cluster-admin privileges. One possible login flow is sketched after this procedure.
  2. Configure an environment variable for your cluster name:

    $ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}"  | sed 's/-[a-z0-9]\{5\}$//')
  3. Ensure all fields output correctly before moving to the next section:

    $ echo "Cluster: ${CLUSTER_NAME}"

    Example output

    Cluster: my-osd-cluster
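
If you are not already logged in, one possible flow is sketched below. The ocm command opens a browser window for authentication, and the oc command uses an API token copied from the cluster web console; the token and API URL shown here are placeholders:

$ ocm login --use-auth-code
$ oc login --token=<api-token> --server=https://api.<cluster-domain>:6443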

2.3. Find the current routes

  1. Verify that you can reach the component routes on their default hostnames.

    You can find the hostnames by querying the lists of routes in the openshift-console and openshift-authentication projects.

    $ oc get routes -n openshift-console
    $ oc get routes -n openshift-authentication

    Example output

    NAME        HOST/PORT                                                                          PATH       SERVICES    PORT    TERMINATION          WILDCARD
    console     console-openshift-console.apps.my-example-cluster-gcp.z9a9.p2.openshiftapps.com    ... 1 more  console    https   reencrypt/Redirect   None
    downloads   downloads-openshift-console.apps.my-example-cluster-gcp.z9a9.p2.openshiftapps.com  ... 1 more  downloads  http    edge/Redirect        None
    NAME              HOST/PORT                                                             PATH        SERVICES          PORT   TERMINATION            WILDCARD
    oauth-openshift   oauth-openshift.apps.my-example-cluster-gcp.z9a9.p2.openshiftapps.com ... 1 more  oauth-openshift   6443   passthrough/Redirect   None

    From this output you can see that our base hostname is z9a9.p2.openshiftapps.com.

  2. Get the ID of the default ingress by running the following command:

    $ export INGRESS_ID=$(ocm list ingress -c ${CLUSTER_NAME} -o json | jq -r '.[] | select(.default == true) | .id')
  3. Ensure all fields output correctly before moving to the next section:

    $ echo "Ingress ID: ${INGRESS_ID}"

    Example output

    Ingress ID: r3l6

    By running these commands you can see that the default component routes for our cluster are:

    • console-openshift-console.apps.my-example-cluster-gcp.z9a9.p2.openshiftapps.com for Console
    • downloads-openshift-console.apps.my-example-cluster-gcp.z9a9.p2.openshiftapps.com for Downloads
    • oauth-openshift.apps.my-example-cluster-gcp.z9a9.p2.openshiftapps.com for OAuth

We can use the ocm edit ingress command to change the hostname of each service and add a TLS certificate for all of our component routes. The relevant parameters are shown in this excerpt of the command-line help for the ocm edit ingress command:

$ ocm edit ingress -h
Edit a cluster ingress for a cluster.

Usage:
  ocm edit ingress ID [flags]
  [...]
  --component-routes string                Component routes settings. Available keys [oauth, console, downloads]. For each key a pair of hostname and tlsSecretRef is expected to be supplied. Format should be a comma separate list 'oauth: hostname=example-hostname;tlsSecretRef=example-secret-ref,downloads:...'

For this example, we’ll use the following custom component routes:

  • console.my-new-domain.dev for Console
  • downloads.my-new-domain.dev for Downloads
  • oauth.my-new-domain.dev for OAuth

Next, we create three separate self-signed certificate key pairs and then trust them to verify that we can access our new component routes using a real web browser.

Warning

This is for demonstration purposes only, and is not recommended as a solution for production workloads. Consult your certificate authority to understand how to create certificates with similar attributes for your production workloads.

Important

To prevent issues with HTTP/2 connection coalescing, you must use a separate individual certificate for each endpoint. Using a wildcard or SAN certificate is not supported.

  • Generate a certificate for each component route, taking care to set our certificate’s subject (-subj) to the custom domain of the component route we want to use:

    Example

    $ openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-console.pem -out cert-console.pem -subj "/CN=console.my-new-domain.dev"
    $ openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-downloads.pem -out cert-downloads.pem -subj "/CN=downloads.my-new-domain.dev"
    $ openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-oauth.pem -out cert-oauth.pem -subj "/CN=oauth.my-new-domain.dev"

    This generates three pairs of .pem files, key-<component>.pem and cert-<component>.pem. A sketch for verifying the generated certificates and secrets follows this list.

  • Create three TLS secrets in the openshift-config namespace.

    These become your secret reference when you update the component routes later in this guide.

    $ oc create secret tls console-tls --cert=cert-console.pem --key=key-console.pem -n openshift-config
    $ oc create secret tls downloads-tls --cert=cert-downloads.pem --key=key-downloads.pem -n openshift-config
    $ oc create secret tls oauth-tls --cert=cert-oauth.pem --key=key-oauth.pem -n openshift-config
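
    As a quick sanity check, assuming the .pem files are in your current working directory, you can confirm that each certificate carries the expected subject and that the three secrets now exist:

    $ openssl x509 -in cert-console.pem -noout -subject -enddate
    $ openssl x509 -in cert-downloads.pem -noout -subject -enddate
    $ openssl x509 -in cert-oauth.pem -noout -subject -enddate
    $ oc get secret console-tls downloads-tls oauth-tls -n openshift-config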

When you create a cluster, the service creates a load balancer and assigns it an external IP address. We need to know this load balancer IP in order to create DNS records for our cluster.

You can find the load balancer IP by running the oc get svc command against the openshift-ingress namespace. The load balancer IP is the EXTERNAL-IP value associated with the router-default service in the openshift-ingress namespace.

$ oc get svc -n openshift-ingress
NAME            TYPE          CLUSTER-IP     EXTERNAL-IP        PORT(S)                     AGE
router-default  LoadBalancer  172.30.237.88  34.85.169.230      80:31175/TCP,443:31554/TCP  76d

In our case, the load balancer IP is 34.85.169.230.

Save this value for later, as we will need it to configure DNS records for our new component route hostnames.
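
If you prefer to capture the value rather than copy it by hand, the following sketch stores the external IP in an environment variable (LB_IP is a variable name used only in this tutorial):

$ export LB_IP=$(oc get svc router-default -n openshift-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo "Load balancer IP: ${LB_IP}"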

Create an A record in your DNS settings for each custom component route hostname, pointing each hostname to the IP address of the router-default load balancer.
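
The exact steps depend on your DNS provider. As an illustration only, if your domain were hosted in a Google Cloud DNS managed zone, the records could be created as follows; the zone name my-new-domain-zone is hypothetical, and 34.85.169.230 is the example load balancer IP from the previous section:

$ gcloud dns record-sets create console.my-new-domain.dev. --zone=my-new-domain-zone --type=A --ttl=300 --rrdatas=34.85.169.230
$ gcloud dns record-sets create downloads.my-new-domain.dev. --zone=my-new-domain-zone --type=A --ttl=300 --rrdatas=34.85.169.230
$ gcloud dns record-sets create oauth.my-new-domain.dev. --zone=my-new-domain-zone --type=A --ttl=300 --rrdatas=34.85.169.230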

When your DNS records have been updated, you can use the OCM CLI to change the component routes.

  1. Use the ocm edit ingress command to update your default ingress route with the new base domain and the secret reference associated with it, taking care to update the hostnames for each component route.

    $ ocm edit ingress -c ${CLUSTER_NAME} ${INGRESS_ID} --component-routes 'console: hostname=console.my-new-domain.dev;tlsSecretRef=console-tls,downloads: hostname=downloads.my-new-domain.dev;tlsSecretRef=downloads-tls,oauth: hostname=oauth.my-new-domain.dev;tlsSecretRef=oauth-tls'
    Note

    You can also edit only a subset of the component routes by leaving the component routes you do not want to change set to an empty string. For example, if you only want to change the Console and OAuth server hostnames and TLS certificates, you would run the following command:

    $ ocm edit ingress -c ${CLUSTER_NAME} ${INGRESS_ID} --component-routes 'console: hostname=console.my-new-domain.dev;tlsSecretRef=console-tls,downloads: hostname="";tlsSecretRef="", oauth: hostname=oauth.my-new-domain.dev;tlsSecretRef=oauth-tls'
  2. Run the ocm list ingress command to verify that your changes were successfully made:

    $ ocm list ingress -c ${CLUSTER_NAME} -ojson | jq ".[] | select(.id == \"${INGRESS_ID}\") | .component_routes"

    Example output

    {
      "console": {
        "kind": "ComponentRoute",
        "hostname": "console.my-new-domain.dev",
        "tls_secret_ref": "console-tls"
      },
      "downloads": {
        "kind": "ComponentRoute",
        "hostname": "downloads.my-new-domain.dev",
        "tls_secret_ref": "downloads-tls"
      },
      "oauth": {
        "kind": "ComponentRoute",
        "hostname": "oauth.my-new-domain.dev",
        "tls_secret_ref": "oauth-tls"
      }
    }

  3. Add your certificate to the truststore on your local system, then confirm that you can access your components at their new routes using your local web browser.
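
    How you trust a certificate varies by operating system. As one example, on a RHEL or Fedora system you could add the certificate to the system trust store, or you can verify each route with curl without modifying the trust store at all:

    $ sudo cp cert-console.pem /etc/pki/ca-trust/source/anchors/
    $ sudo update-ca-trust extract
    $ curl -I --cacert cert-console.pem https://console.my-new-domain.dev
    $ curl -I --cacert cert-oauth.pem https://oauth.my-new-domain.dev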

If you want to reset the component routes to the default configuration, run the following ocm edit ingress command:

$ ocm edit ingress -c ${CLUSTER_NAME} ${INGRESS_ID} --component-routes 'console: hostname="";tlsSecretRef="",downloads: hostname="";tlsSecretRef="", oauth: hostname="";tlsSecretRef=""'


[1] Modifying these routes on OpenShift Dedicated versions prior to 4.14 is not typically supported. However, if you have a cluster using version 4.13, you can request that Red Hat Support enable this feature on your version 4.13 cluster by opening a support case.
[2] We use the term "component routes" to refer to the OAuth, Console, and Downloads routes that are created when a cluster is first installed.

Chapter 3. Restricting egress traffic with Google Cloud Next Generation Firewall

Use this guide to implement egress restrictions for OpenShift Dedicated on Google Cloud by using Google Cloud’s Next Generation Firewall (NGFW). NGFW is a fully distributed firewall service that allows fully qualified domain name (FQDN) objects in firewall policy rules. FQDN support is necessary because many of the external endpoints that OpenShift Dedicated relies on are defined by domain name rather than by fixed IP address.

Important

The ability to restrict egress traffic using a firewall or other network device is only supported with OpenShift Dedicated clusters deployed using Private Service Connect (PSC). Clusters that do not use PSC require a support exception to use this functionality. For additional assistance, please open a support case.

3.1. Reviewing your prerequisites

  • You have the Google Cloud Command Line Interface (gcloud) installed.
  • You are logged into the Google Cloud CLI and have selected the Google Cloud project where you plan to deploy OpenShift Dedicated.
  • You have the minimum necessary permissions in Google Cloud, including:

    • Compute Network Admin
    • DNS Administrator
  • You have enabled certain services by running the following commands in your terminal:

    $ gcloud services enable networksecurity.googleapis.com
    $ gcloud services enable networkservices.googleapis.com
    $ gcloud services enable servicenetworking.googleapis.com
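
    To confirm that the required services are enabled, you can list them; this check is a convenience and not an official prerequisite:

    $ gcloud services list --enabled | grep -E 'networksecurity|networkservices|servicenetworking'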

3.2. Setting up your environment

In your terminal, configure the following environment variables:

export project_id=$(gcloud config list --format="value(core.project)")
export region=us-east1
export prefix=osd-ngfw
export service_cidr="172.30.0.0/16"
export machine_cidr="10.0.0.0/22"
export pod_cidr="10.128.0.0/14"

This example uses us-east1 as the region to deploy into and the prefix osd-ngfw for the cluster’s resources. The default CIDR ranges are assigned for the service and pod networks. The machine CIDR is based on the subnet ranges that will be set later in this tutorial. Modify the parameters to meet your needs.
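
Before continuing, you can echo the variables to confirm they are set as expected, similar to the checks used earlier in this guide:

$ echo "Project: ${project_id}"
$ echo "Region: ${region}"
$ echo "Prefix: ${prefix}"
$ echo "Machine CIDR: ${machine_cidr}"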

3.3. Creating the VPC and subnets

Before you can deploy a Google Cloud NGFW, you must first create the Virtual Private Cloud (VPC) and subnets that you will use for OpenShift Dedicated:

  1. Create the VPC by running the following command:

    $ gcloud compute networks create ${prefix}-vpc --subnet-mode=custom
  2. Create the worker subnets by running the following command:

    $ gcloud compute networks subnets create ${prefix}-worker \
        --range=10.0.2.0/23 \
        --network=${prefix}-vpc \
        --region=${region} \
        --enable-private-ip-google-access
  3. Create the control plane subnets by running the following command:

    $ gcloud compute networks subnets create ${prefix}-control-plane \
        --range=10.0.0.0/25 \
        --network=${prefix}-vpc \
        --region=${region} \
        --enable-private-ip-google-access
  4. Create the PSC subnets by running the following command:

    $ gcloud compute networks subnets create ${prefix}-psc \
        --network=${prefix}-vpc \
        --region=${region} \
        --stack-type=IPV4_ONLY \
        --range=10.0.0.128/29 \
        --purpose=PRIVATE_SERVICE_CONNECT

    These examples use the subnet ranges of 10.0.2.0/23 for the worker subnet, 10.0.0.0/25 for the control plane subnet, and 10.0.0.128/29 for the PSC subnet. Modify the parameters to meet your needs. Ensure the parameter values are contained within the machine CIDR you set earlier in this tutorial.
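
    You can optionally confirm that the worker, control plane, and PSC subnets were created by listing the subnets and filtering on your prefix:

    $ gcloud compute networks subnets list | grep ${prefix}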

3.4. Deploying a global network firewall policy

  1. Create a global network firewall policy by running the following command:

    $ gcloud compute network-firewall-policies create \
        ${prefix} \
        --description "OpenShift Dedicated Egress Firewall" \
        --global
  2. Associate the newly created global network firewall policy with the VPC you created above by running the following command:

    $ gcloud compute network-firewall-policies associations create \
        --firewall-policy ${prefix} \
        --network ${prefix}-vpc \
        --global-firewall-policy

3.5. Creating the Cloud NAT gateway

The Network Address Translation (NAT) gateway enables internet connectivity for your private VMs by masquerading all their traffic under a single public IP address. As the designated exit point, it translates their internal IPs for any outbound requests, such as fetching updates. This process effectively grants them access to the internet without ever exposing their private addresses.

  1. Reserve an IP address for Cloud NAT by running the following command:

    $ gcloud compute addresses create ${prefix}-${region}-cloudnatip \
        --region=${region}
  2. Create a Cloud Router by running the following command:

    $ gcloud compute routers create ${prefix}-router \
        --region=${region} \
        --network=${prefix}-vpc
  3. Create a Cloud NAT by running the following command:

    $ gcloud compute routers nats create ${prefix}-cloudnat-${region} \
        --router=${prefix}-router --router-region ${region} \
        --nat-all-subnet-ip-ranges \
        --nat-external-ip-pool=${prefix}-${region}-cloudnatip
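
    Optionally, confirm that the NAT gateway was created by describing it:

    $ gcloud compute routers nats describe ${prefix}-cloudnat-${region} \
        --router=${prefix}-router \
        --router-region=${region}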

3.6. Creating a private DNS zone for Google APIs

The private Domain Name System (DNS) zone optimizes how your resources connect to Google APIs by ensuring traffic never travels over the public internet. It functions by intercepting DNS requests for Google services and resolving them to private IP addresses, forcing the connection onto Google’s internal network for a faster, more secure data exchange.

  1. Create a private DNS zone for the googleapis.com domain by running the following command:

    $ gcloud dns managed-zones create ${prefix}-googleapis \
        --visibility=private \
        --networks=https://www.googleapis.com/compute/v1/projects/${project_id}/global/networks/${prefix}-vpc \
        --description="Private Google Access" \
        --dns-name=googleapis.com
  2. Begin a record set transaction by running the following command:

    $ gcloud dns record-sets transaction start \
        --zone=${prefix}-googleapis
  3. Stage the DNS records for Google APIs under the googleapis.com domain by running the following commands:

    $ gcloud dns record-sets transaction add --name="*.googleapis.com." \
        --type=CNAME restricted.googleapis.com. \
        --zone=${prefix}-googleapis \
        --ttl=300
    $ gcloud dns record-sets transaction add 199.36.153.4 199.36.153.5 199.36.153.6 199.36.153.7 \
        --name=restricted.googleapis.com. \
        --type=A \
        --zone=${prefix}-googleapis \
        --ttl=300
  4. Apply the staged record set transaction you started above by running the following command:

    $ gcloud dns record-sets transaction execute \
        --zone=${prefix}-googleapis
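
    Optionally, verify the zone contents by listing its record sets; the wildcard CNAME record and the restricted.googleapis.com A record should both appear:

    $ gcloud dns record-sets list --zone=${prefix}-googleapis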

3.7. Creating the firewall rules

  1. Create a blanket allow rule for private IP (RFC 1918) address space by running the following command:

    $ gcloud compute network-firewall-policies rules create 500 \
        --description "Allow egress to private IP ranges" \
        --action=allow \
        --firewall-policy=${prefix} \
        --global-firewall-policy \
        --direction=EGRESS \
        --layer4-configs all \
        --dest-ip-ranges=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
  2. Create an allow rule for HTTPS (tcp/443) domains required for OpenShift Dedicated by running the following command:

    $ gcloud compute network-firewall-policies rules create 600 \
        --description "Allow egress to OpenShift Dedicated required domains (tcp/443)" \
        --action=allow \
        --firewall-policy=${prefix} \
        --global-firewall-policy \
        --direction=EGRESS \
        --layer4-configs tcp:443 \
        --dest-fqdns accounts.google.com,pull.q1w2.quay.rhcloud.com,http-inputs-osdsecuritylogs.splunkcloud.com,nosnch.in,api.deadmanssnitch.com,events.pagerduty.com,api.pagerduty.com,api.openshift.com,mirror.openshift.com,observatorium.api.openshift.com,observatorium-mst.api.openshift.com,console.redhat.com,infogw.api.openshift.com,api.access.redhat.com,cert-api.access.redhat.com,catalog.redhat.com,sso.redhat.com,registry.connect.redhat.com,registry.access.redhat.com,cdn01.quay.io,cdn02.quay.io,cdn03.quay.io,cdn04.quay.io,cdn05.quay.io,cdn06.quay.io,cdn.quay.io,quay.io,registry.redhat.io,quayio-production-s3.s3.amazonaws.com
    Important

    If there is not a matching rule that allows the traffic, it will be blocked by the firewall. To allow access to other resources, such as internal networks or other external endpoints, create additional rules with a priority of less than 1000. For more information on how to create firewall rules, see Use global network firewall policies and rules.
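
    The note above assumes that egress traffic without a matching allow rule is denied. If your firewall policy does not already include a default deny rule, the following sketch shows one way to add it, together with an example of an additional allow rule for a hypothetical endpoint (registry.example.internal is a placeholder). Review both against your own requirements before applying them:

    $ gcloud compute network-firewall-policies rules create 700 \
        --description "Example: allow egress to an additional endpoint (hypothetical FQDN)" \
        --action=allow \
        --firewall-policy=${prefix} \
        --global-firewall-policy \
        --direction=EGRESS \
        --layer4-configs tcp:443 \
        --dest-fqdns registry.example.internal
    $ gcloud compute network-firewall-policies rules create 1000 \
        --description "Deny all other egress traffic" \
        --action=deny \
        --firewall-policy=${prefix} \
        --global-firewall-policy \
        --direction=EGRESS \
        --layer4-configs all \
        --dest-ip-ranges=0.0.0.0/0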

3.8. Creating your cluster

You are now ready to create your OpenShift Dedicated on Google Cloud cluster. For more information, see Creating a cluster on Google Cloud with Workload Identity Federation authentication.

3.9. Deleting your cluster

To delete your cluster, see Deleting an OpenShift Dedicated cluster on Google Cloud.

3.10. Cleaning up resources

To prevent ongoing charges, after you delete your cluster you must manually delete the Google Cloud networking infrastructure you created as part of this tutorial. Deleting the cluster will not automatically remove these underlying resources. You can clean up these resources using a combination of gcloud CLI commands and actions within the Google Cloud console.

Before you begin the process of cleaning up the resources you created for this tutorial, run the following commands and complete any prompts.

  1. To authenticate your identity, run the following command:

    $ gcloud init
  2. To log in to your Google Cloud account, run the following command:

    $ gcloud auth application-default login
  3. To log in to the OpenShift Cluster Manager CLI tool, run the following command:

    $ ocm login --use-auth-code

You are now ready to clean up the resources you created as part of this tutorial. To respect resource dependencies, delete them in the reverse order of their creation.

  1. Delete the firewall policy’s association with the VPC by running the following command:

    $ gcloud compute network-firewall-policies associations delete \
        --firewall-policy=${prefix} \
        --network=${prefix}-vpc \
        --global-firewall-policy
  2. Delete the global network firewall policy by running the following command:

    $ gcloud compute network-firewall-policies delete ${prefix} --global
  3. A managed DNS zone in Google Cloud cannot be deleted until all user-defined record sets are removed. Define variables that target the specific Google Cloud project and the managed DNS zone being cleaned up by running the following commands:

    $ export PROJECT_ID=<your-project-id>
    $ export ZONE_NAME=<your-managed-zone-name>
  4. List the record sets that are included within the private DNS zone by running the following command:

    $ gcloud dns record-sets list \
        --project=$PROJECT_ID \
        --zone=$ZONE_NAME \
        --filter="type!=NS AND type!=SOA"
  5. Delete the record sets that are included within that private DNS zone by running the following command:

    $ gcloud dns record-sets list \
        --project=$PROJECT_ID \
        --zone=$ZONE_NAME \
        --filter="type!=NS AND type!=SOA" \
        --format="value(name,type)" | while read name type; do
          gcloud --project=$PROJECT_ID dns record-sets delete "$name" --zone=$ZONE_NAME --type="$type"
        done
  6. Delete the Private DNS Zone by running the following command:

    $ gcloud dns managed-zones delete ${prefix}-googleapis
  7. Delete the Cloud NAT gateway by running the following command:

    $ gcloud compute routers nats delete ${prefix}-cloudnat-${region} \
        --router=${prefix}-router \
        --router-region=${region}
  8. Delete the Cloud Router by running the following command:

    $ gcloud compute routers delete ${prefix}-router --region=${region}
  9. Delete the reserved IP address by running the following command:

    $ gcloud compute addresses delete ${prefix}-${region}-cloudnatip --region=${region}
  10. Delete the worker subnet by running the following command:

    $ gcloud compute networks subnets delete ${prefix}-worker --region=${region}
  11. Delete the control plane subnet by running the following command:

    $ gcloud compute networks subnets delete ${prefix}-control-plane --region=${region}
  12. Delete the PSC subnet by running the following command:

    $ gcloud compute networks subnets delete ${prefix}-psc --region=${region}
  13. Delete the VPC by running the following command:

    $ gcloud compute networks delete ${prefix}-vpc
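
As a final check, you can list any resources that still carry the tutorial prefix; once cleanup is complete, these commands should return no matches:

$ gcloud compute networks list | grep ${prefix} || echo "No networks remaining"
$ gcloud dns managed-zones list | grep ${prefix} || echo "No managed zones remaining"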

Legal Notice

Copyright © 2025 Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
