Tutorials
OpenShift Dedicated tutorials
Chapter 1. Tutorials overview
Use the step-by-step tutorials from Red Hat experts to get the most out of your Managed OpenShift cluster.
This content is authored by Red Hat experts but has not yet been tested on every supported configuration.
Chapter 2. Tutorial: Updating component routes with custom domains and TLS certificates
This guide demonstrates how to modify the hostname and TLS certificate of the web console, OAuth server, and Downloads component routes in OpenShift Dedicated on Google Cloud version 4.14 and above.
The changes that we make to the component routes in this guide are described in greater detail in the Customizing the internal OAuth server URL, Customizing the console route, and Customizing the download route OpenShift Dedicated documentation.
2.1. Prerequisites
- OCM CLI (ocm) version 1.0.5 or higher
- gcloud CLI (gcloud)
- An OpenShift Dedicated on Google Cloud cluster version 4.14 or higher
- OpenShift CLI (oc)
- jq CLI
- Access to the cluster as a user with the cluster-admin role
- OpenSSL (for generating the demonstration SSL/TLS certificates)
2.2. Setting up your environment
- Log in to your cluster using an account with cluster-admin privileges.
- Configure an environment variable for your cluster name:

  $ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//')

- Ensure the variable is set correctly before moving to the next section:

  $ echo "Cluster: ${CLUSTER_NAME}"

  Example output

  Cluster: my-osd-cluster
2.3. Finding the current routes
- Verify that you can reach the component routes on their default hostnames. You can find the hostnames by querying the lists of routes in the openshift-console and openshift-authentication projects:

  $ oc get routes -n openshift-console
  $ oc get routes -n openshift-authentication

  Example output

  NAME        HOST/PORT                                                                            PATH   SERVICES    PORT    TERMINATION          WILDCARD
  console     console-openshift-console.apps.my-example-cluster-gcp.z9a9.p2.openshiftapps.com ... 1 more     console     https   reencrypt/Redirect   None
  downloads   downloads-openshift-console.apps.my-example-cluster-gcp.z9a9.p2.openshiftapps.com ... 1 more   downloads   http    edge/Redirect        None

  NAME              HOST/PORT                                                                       PATH   SERVICES          PORT   TERMINATION            WILDCARD
  oauth-openshift   oauth-openshift.apps.my-example-cluster-gcp.z9a9.p2.openshiftapps.com ... 1 more       oauth-openshift   6443   passthrough/Redirect   None

  From this output you can see that our base hostname is z9a9.p2.openshiftapps.com.

- Get the ID of the default ingress by running the following command:

  $ export INGRESS_ID=$(ocm list ingress -c ${CLUSTER_NAME} -o json | jq -r '.[] | select(.default == true) | .id')

- Ensure the variable is set correctly before moving to the next section:

  $ echo "Ingress ID: ${INGRESS_ID}"

  Example output

  Ingress ID: r3l6

By running these commands you can see that the default component routes for our cluster are:

- console-openshift-console.apps.my-example-cluster-gcp.z9a9.p2.openshiftapps.com for Console
- downloads-openshift-console.apps.my-example-cluster-gcp.z9a9.p2.openshiftapps.com for Downloads
- oauth-openshift.apps.my-example-cluster-gcp.z9a9.p2.openshiftapps.com for OAuth
We can use the ocm edit ingress command to change the hostname of each service and add a TLS certificate for all of our component routes. The relevant parameters are shown in this excerpt of the command-line help for the ocm edit ingress command:

$ ocm edit ingress -h
Edit a cluster ingress for a cluster.

Usage:
  ocm edit ingress ID [flags]

[...]

      --component-routes string   Component routes settings. Available keys [oauth, console, downloads]. For each key a pair of hostname and tlsSecretRef is expected to be supplied. Format should be a comma separate list 'oauth: hostname=example-hostname;tlsSecretRef=example-secret-ref,downloads:...'
For this example, we’ll use the following custom component routes:
- console.my-new-domain.dev for Console
- downloads.my-new-domain.dev for Downloads
- oauth.my-new-domain.dev for OAuth
2.4. Creating a valid TLS certificate for each component route
In this section, we create three separate self-signed certificate key pairs and then trust them to verify that we can access our new component routes using a real web browser.
This is for demonstration purposes only, and is not recommended as a solution for production workloads. Consult your certificate authority to understand how to create certificates with similar attributes for your production workloads.
To prevent issues with HTTP/2 connection coalescing, you must use a separate individual certificate for each endpoint. Using a wildcard or SAN certificate is not supported.
Generate a certificate for each component route, taking care to set our certificate’s subject (-subj) to the custom domain of the component route we want to use:

Example

$ openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-console.pem -out cert-console.pem -subj "/CN=console.my-new-domain.dev"
$ openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-downloads.pem -out cert-downloads.pem -subj "/CN=downloads.my-new-domain.dev"
$ openssl req -newkey rsa:2048 -new -nodes -x509 -days 365 -keyout key-oauth.pem -out cert-oauth.pem -subj "/CN=oauth.my-new-domain.dev"

This generates three pairs of .pem files, key-<component>.pem and cert-<component>.pem.
2.5. Adding the certificates to the cluster as secrets
Create three TLS secrets in the openshift-config namespace. These become your secret reference when you update the component routes later in this guide.

$ oc create secret tls console-tls --cert=cert-console.pem --key=key-console.pem -n openshift-config
$ oc create secret tls downloads-tls --cert=cert-downloads.pem --key=key-downloads.pem -n openshift-config
$ oc create secret tls oauth-tls --cert=cert-oauth.pem --key=key-oauth.pem -n openshift-config
2.6. Finding the load balancer IP of your cluster
When you create a cluster, the service creates a load balancer and assigns that load balancer a public IP address. We need to know this load balancer IP in order to create DNS records for our cluster.
You can find the load balancer IP by running the oc get svc command against the openshift-ingress namespace. The load balancer IP is the EXTERNAL-IP associated with the router-default service in the openshift-ingress namespace.
$ oc get svc -n openshift-ingress

Example output

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
router-default   LoadBalancer   172.30.237.88   34.85.169.230   80:31175/TCP,443:31554/TCP   76d
In our case, the load balancer IP is 34.85.169.230.
Save this value for later, as we will need it to configure DNS records for our new component route hostnames.
2.7. Adding component route DNS records to your hosting provider
Create an A record in your DNS settings for each custom component route hostname (console.my-new-domain.dev, downloads.my-new-domain.dev, and oauth.my-new-domain.dev), pointing each hostname to the IP address of the router-default load balancer. One way to do this with Google Cloud DNS is shown in the sketch below.
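For example, if your domain happens to be hosted in a Google Cloud DNS public managed zone, the records could be created as follows. This is only a sketch: the zone name my-new-domain-zone is a hypothetical placeholder, and any DNS provider that can create A records works equally well.

$ export LB_IP=34.85.169.230   # EXTERNAL-IP of the router-default service found earlier
$ for host in console downloads oauth; do
    gcloud dns record-sets create "${host}.my-new-domain.dev." \
      --zone=my-new-domain-zone \
      --type=A \
      --ttl=300 \
      --rrdatas=${LB_IP}
  done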
2.8. Updating the component routes and TLS secret using the OCM CLI
When your DNS records have been updated, you can use the OCM CLI to change the component routes.
- Use the ocm edit ingress command to update your default ingress route with the new base domain and the secret reference associated with it, taking care to update the hostnames for each component route:

  $ ocm edit ingress -c ${CLUSTER_NAME} ${INGRESS_ID} --component-routes 'console: hostname=console.my-new-domain.dev;tlsSecretRef=console-tls,downloads: hostname=downloads.my-new-domain.dev;tlsSecretRef=downloads-tls,oauth: hostname=oauth.my-new-domain.dev;tlsSecretRef=oauth-tls'

  Note: You can also edit only a subset of the component routes by leaving the component routes you do not want to change set to an empty string. For example, if you only want to change the Console and OAuth server hostnames and TLS certificates, you would run the following command:

  $ ocm edit ingress -c ${CLUSTER_NAME} ${INGRESS_ID} --component-routes 'console: hostname=console.my-new-domain.dev;tlsSecretRef=console-tls,downloads: hostname="";tlsSecretRef="", oauth: hostname=oauth.my-new-domain.dev;tlsSecretRef=oauth-tls'

- Run the ocm list ingress command to verify that your changes were successfully made:

  $ ocm list ingress -c ${CLUSTER_NAME} -ojson | jq ".[] | select(.id == \"${INGRESS_ID}\") | .component_routes"

- Add your certificate to the truststore on your local system, then confirm that you can access your components at their new routes using your local web browser. One way to trust the demonstration certificates on a Linux system is shown in the sketch after this list.
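The following is a minimal sketch of trusting the self-signed demonstration certificates on a RHEL or Fedora system; other operating systems and some browsers (for example, Firefox) manage their own truststores, so adjust accordingly.

$ sudo cp cert-console.pem cert-downloads.pem cert-oauth.pem /etc/pki/ca-trust/source/anchors/   # copy the demo certificates into the system anchor directory
$ sudo update-ca-trust extract                                                                   # regenerate the consolidated system truststore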
2.9. Resetting the component routes to the default using the OCM CLI
If you want to reset the component routes to the default configuration, run the following ocm edit ingress command:
$ ocm edit ingress -c ${CLUSTER_NAME} ${INGRESS_ID} --component-routes 'console: hostname="";tlsSecretRef="",downloads: hostname="";tlsSecretRef="", oauth: hostname="";tlsSecretRef=""'
Chapter 3. Tutorial: Limit egress with Google Cloud Next Generation Firewall
Use this guide to implement egress restrictions for OpenShift Dedicated on Google Cloud by using Google Cloud’s Next Generation Firewall (NGFW). NGFW is a fully distributed firewall service that allows fully qualified domain name (FQDN) objects in firewall policy rules. This is necessary for many of the external endpoints that OpenShift Dedicated relies on.
The ability to restrict egress traffic using a firewall or other network device is only supported with OpenShift Dedicated clusters deployed using Private Service Connect (PSC). Clusters that do not use PSC require a support exception to use this functionality. For additional assistance, please open a support case.
3.1. Reviewing your prerequisites
- You have the Google Cloud Command Line Interface (gcloud) installed.
- You are logged in to the Google Cloud CLI and have selected the Google Cloud project where you plan to deploy OpenShift Dedicated.
- You have the minimum necessary permissions in Google Cloud, including:
  - Compute Network Admin
  - DNS Administrator
- You have enabled the required services by running the following commands in your terminal:

  $ gcloud services enable networksecurity.googleapis.com
  $ gcloud services enable networkservices.googleapis.com
  $ gcloud services enable servicenetworking.googleapis.com
3.2. Setting up your environment
In your terminal, configure the following environment variables:
This example uses us-east1 as the region to deploy into and the prefix osd-ngfw for the cluster’s resources. The default CIDR ranges are assigned for the service and pod networks. The machine CIDR is based on the subnet ranges that will be set later in this tutorial. Modify the parameters to meet your needs.
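The commands later in this tutorial reference the variables prefix, region, and project_id. Based on the values described above, a minimal sketch of these variables might look like the following; substitute your own Google Cloud project ID and adjust the values to meet your needs.

$ export prefix=osd-ngfw                              # prefix applied to every resource created in this tutorial
$ export region=us-east1                              # region to deploy into
$ export project_id=<your-google-cloud-project-id>    # Google Cloud project selected with the gcloud CLI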
3.3. Creating the VPC and subnets
Before you can deploy a Google Cloud NGFW, you must first create the Virtual Private Cloud (VPC) and subnets that you will use for OpenShift Dedicated:
- Create the VPC by running the following command:

  $ gcloud compute networks create ${prefix}-vpc --subnet-mode=custom

- Create the worker subnet by running the following command:

  $ gcloud compute networks subnets create ${prefix}-worker \
      --range=10.0.2.0/23 \
      --network=${prefix}-vpc \
      --region=${region} \
      --enable-private-ip-google-access

- Create the control plane subnet by running the following command:

  $ gcloud compute networks subnets create ${prefix}-control-plane \
      --range=10.0.0.0/25 \
      --network=${prefix}-vpc \
      --region=${region} \
      --enable-private-ip-google-access

- Create the PSC subnet. A sketch of this command is shown after this list.

These examples use the subnet ranges of 10.0.2.0/23 for the worker subnet, 10.0.0.0/25 for the control plane subnet, and 10.0.0.128/29 for the PSC subnet. Modify the parameters to meet your needs. Ensure the parameter values are contained within the machine CIDR you set earlier in this tutorial.
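The PSC subnet command is a sketch only. It assumes the PSC subnet follows the same naming pattern as the other subnets (${prefix}-psc, which matches the name deleted during cleanup later in this tutorial) and that it is created with the PRIVATE_SERVICE_CONNECT purpose that Private Service Connect requires; verify the exact flags against the OpenShift Dedicated Private Service Connect prerequisites.

$ gcloud compute networks subnets create ${prefix}-psc \
    --range=10.0.0.128/29 \
    --network=${prefix}-vpc \
    --region=${region} \
    --purpose=PRIVATE_SERVICE_CONNECT   # assumption: subnet dedicated to Private Service Connect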
3.4. Deploying a global network firewall policy
- Create a global network firewall policy by running the following command:

  $ gcloud compute network-firewall-policies create \
      ${prefix} \
      --description "OpenShift Dedicated Egress Firewall" \
      --global

- Associate the newly created global network firewall policy with the VPC you created above by running the following command:

  $ gcloud compute network-firewall-policies associations create \
      --firewall-policy ${prefix} \
      --network ${prefix}-vpc \
      --global-firewall-policy
3.5. Creating a Cloud Router and a Cloud Network Address Translation gateway
The Network Address Translation (NAT) gateway enables internet connectivity for your private VMs by masquerading all their traffic under a single public IP address. As the designated exit point, it translates their internal IPs for any outbound requests, such as fetching updates. This process effectively grants them access to the internet without ever exposing their private addresses.
- Reserve an IP address for Cloud NAT by running the following command:

  $ gcloud compute addresses create ${prefix}-${region}-cloudnatip \
      --region=${region}

- Create a Cloud Router by running the following command:

  $ gcloud compute routers create ${prefix}-router \
      --region=${region} \
      --network=${prefix}-vpc

- Create a Cloud NAT by running the following command:

  $ gcloud compute routers nats create ${prefix}-cloudnat-${region} \
      --router=${prefix}-router --router-region ${region} \
      --nat-all-subnet-ip-ranges \
      --nat-external-ip-pool=${prefix}-${region}-cloudnatip
3.6. Creating private Domain Name System records for Private Google Access
The private Domain Name System (DNS) zone optimizes how your resources connect to Google APIs by ensuring traffic never travels over the public internet. It functions by intercepting DNS requests for Google services and resolving them to private IP addresses, forcing the connection onto Google’s internal network for a faster, more secure data exchange.
- Create a private DNS zone for the googleapis.com domain by running the following command:

  $ gcloud dns managed-zones create ${prefix}-googleapis \
      --visibility=private \
      --networks=https://www.googleapis.com/compute/v1/projects/${project_id}/global/networks/${prefix}-vpc \
      --description="Private Google Access" \
      --dns-name=googleapis.com

- Begin a record set transaction by running the following command:

  $ gcloud dns record-sets transaction start \
      --zone=${prefix}-googleapis

- Stage the DNS records for Google APIs under the googleapis.com domain by running the following commands:

  $ gcloud dns record-sets transaction add --name="*.googleapis.com." \
      --type=CNAME restricted.googleapis.com. \
      --zone=${prefix}-googleapis \
      --ttl=300

  $ gcloud dns record-sets transaction add 199.36.153.4 199.36.153.5 199.36.153.6 199.36.153.7 \
      --name=restricted.googleapis.com. \
      --type=A \
      --zone=${prefix}-googleapis \
      --ttl=300

- Apply the staged record set transaction you started above by running the following command:

  $ gcloud dns record-sets transaction execute \
      --zone=${prefix}-googleapis
3.7. Creating the firewall rules
- Create a blanket allow rule for egress to private IP (RFC 1918) address space. A sketch of this rule is shown after the note below.
- Create an allow rule for egress over HTTPS (tcp/443) to the domains required for OpenShift Dedicated. A sketch of this rule is also shown after the note below.

Important: If there is not a matching rule that allows the traffic, it will be blocked by the firewall. To allow access to other resources, such as internal networks or other external endpoints, create additional rules with a priority of less than 1000. For more information on how to create firewall rules, see Use global network firewall policies and rules.
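The following rule commands are a sketch only. The priorities (500 and 1000) and the FQDN list are illustrative assumptions; consult the OpenShift Dedicated firewall prerequisites documentation for the complete list of domains your cluster requires, and include them all in the HTTPS rule.

$ # Assumed rule: allow all egress to RFC 1918 private address space at priority 500
$ gcloud compute network-firewall-policies rules create 500 \
    --firewall-policy=${prefix} \
    --direction=EGRESS \
    --action=allow \
    --layer4-configs=all \
    --dest-ip-ranges=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16 \
    --description="Allow egress to private IP ranges" \
    --global-firewall-policy

$ # Assumed rule: allow HTTPS egress to required FQDNs at priority 1000 (partial, illustrative domain list)
$ gcloud compute network-firewall-policies rules create 1000 \
    --firewall-policy=${prefix} \
    --direction=EGRESS \
    --action=allow \
    --layer4-configs=tcp:443 \
    --dest-fqdns=registry.redhat.io,quay.io \
    --description="Allow HTTPS egress to OpenShift Dedicated required domains" \
    --global-firewall-policy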
3.8. Creating your cluster
You are now ready to create your OpenShift Dedicated on Google Cloud cluster. For more information, see Creating a cluster on Google Cloud with Workload Identity Federation authentication.
3.9. Deleting your cluster
To delete your cluster, see Deleting an OpenShift Dedicated cluster on Google Cloud.
3.10. Cleaning up resources
To prevent ongoing charges, after you delete your cluster you must manually delete the Google Cloud networking infrastructure you created as part of this tutorial. Deleting the cluster will not automatically remove these underlying resources. You can clean up these resources using a combination of gcloud CLI commands and actions within the Google Cloud console.
Before you begin the process of cleaning up the resources you created for this tutorial, run the following commands and complete any prompts.
- To authenticate your identity, run the following command:

  $ gcloud init

- To log in to your Google Cloud account, run the following command:

  $ gcloud auth application-default login

- To log in to the OpenShift Cluster Manager CLI tool, run the following command:

  $ ocm login --use-auth-code
You are now ready to clean up the resources you created as part of this tutorial. To respect resource dependencies, delete them in the reverse order of their creation.
- Delete the firewall policy’s association with the VPC by running the following command:

  $ gcloud compute network-firewall-policies associations delete \
      --firewall-policy=${prefix} \
      --network=${prefix}-vpc \
      --global-firewall-policy

- Delete the global network firewall policy by running the following command:

  $ gcloud compute network-firewall-policies delete ${prefix} --global

- A managed DNS zone in Google Cloud cannot be deleted until all user-defined record sets are removed. Define variables to target the specific Google Cloud project and the managed DNS zone being cleaned up, for example in a script such as /tmp/delete_records.sh:

  PROJECT_ID=<your-project-id>
  ZONE_NAME=<your-managed-zone-name>

- List the record sets that are included within the private DNS zone by using the gcloud dns record-sets list command (see the sketch after the next step).
- Delete the record sets that are included within that private DNS zone by running the following command for each user-defined record name and type:

  $ gcloud --project=$PROJECT_ID dns record-sets delete "$name" --zone=$ZONE_NAME --type="$type"
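The $name and $type variables above come from iterating over the zone's records. As a minimal sketch of how the listing and deletion could be combined (assuming only the NS and SOA records should be preserved, since Google Cloud manages those):

$ gcloud --project=$PROJECT_ID dns record-sets list --zone=$ZONE_NAME \
    --format="csv[no-heading](name,type)" | \
  while IFS=, read -r name type; do
    # skip the zone's own NS and SOA records, which cannot be deleted
    if [ "$type" != "NS" ] && [ "$type" != "SOA" ]; then
      gcloud --project=$PROJECT_ID dns record-sets delete "$name" --zone=$ZONE_NAME --type="$type"
    fi
  done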
- Delete the private DNS zone by running the following command:

  $ gcloud dns managed-zones delete ${prefix}-googleapis

- Delete the Cloud NAT gateway:

  $ gcloud compute routers nats delete ${prefix}-cloudnat-${region} \
      --router=${prefix}-router \
      --router-region=${region}

- Delete the Cloud Router by running the following command:

  $ gcloud compute routers delete ${prefix}-router --region=${region}

- Delete the reserved IP address by running the following command:

  $ gcloud compute addresses delete ${prefix}-${region}-cloudnatip --region=${region}

- Delete the worker subnet by running the following command:

  $ gcloud compute networks subnets delete ${prefix}-worker --region=${region}

- Delete the control plane subnet by running the following command:

  $ gcloud compute networks subnets delete ${prefix}-control-plane --region=${region}

- Delete the PSC subnet by running the following command:

  $ gcloud compute networks subnets delete ${prefix}-psc --region=${region}

- Delete the VPC by running the following command:

  $ gcloud compute networks delete ${prefix}-vpc
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.