Installing, accessing, and deleting OpenShift Dedicated clusters
Chapter 1. Creating a cluster on AWS
You can install OpenShift Dedicated on Amazon Web Services (AWS) by using your own AWS account through the Customer Cloud Subscription (CCS) model or by using an AWS infrastructure account that is owned by Red Hat.
1.1. Prerequisites
- You reviewed the introduction to OpenShift Dedicated and the documentation on architecture concepts.
- You reviewed the OpenShift Dedicated cloud deployment options.
1.2. Creating a cluster on AWS with CCS
By using the Customer Cloud Subscription (CCS) billing model, you can create an OpenShift Dedicated cluster in an existing Amazon Web Services (AWS) account that you own.
You must meet several prerequisites if you use the CCS model to deploy and manage OpenShift Dedicated into your AWS account.
Prerequisites
- You have configured your AWS account for use with OpenShift Dedicated.
- You have not deployed any services in your AWS account.
- You have configured the AWS account quotas and limits that are required to support the desired cluster size.
- You have an osdCcsAdmin AWS Identity and Access Management (IAM) user with the AdministratorAccess policy attached.
- You have set up a service control policy (SCP) in your AWS organization. For more information, see Minimum required service control policy (SCP).
- Consider having Business Support or higher from AWS.
- If you are configuring a cluster-wide proxy, you have verified that the proxy is accessible from the VPC that the cluster is being installed into. The proxy must also be accessible from the private subnets of the VPC.
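The osdCcsAdmin IAM user prerequisite can be satisfied from the AWS CLI. This is a minimal sketch, not the only supported approach: AdministratorAccess is the AWS managed policy named in the prerequisites, and the run wrapper prints each command and executes it only when the aws CLI is installed.

```shell
# Sketch: create the osdCcsAdmin IAM user required by the CCS model and
# attach the AdministratorAccess managed policy. Requires admin credentials.
USER_NAME="osdCcsAdmin"
POLICY_ARN="arn:aws:iam::aws:policy/AdministratorAccess"

run() {
  # Print the command, then execute it only if the aws CLI is installed.
  # "|| true" keeps the sketch going if a call fails (e.g. no credentials).
  echo "+ $*"
  if command -v aws >/dev/null 2>&1; then "$@" || true; fi
}

run aws iam create-user --user-name "$USER_NAME"
run aws iam attach-user-policy --user-name "$USER_NAME" --policy-arn "$POLICY_ARN"
# The access key ID and secret key from this call are the credentials that
# the cluster creation wizard asks for later in this procedure.
run aws iam create-access-key --user-name "$USER_NAME"
```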
Procedure
- Log in to OpenShift Cluster Manager and click Create cluster.
- On the Create an OpenShift cluster page, select Create cluster in the Red Hat OpenShift Dedicated row.
Under Billing model, configure the subscription type and infrastructure type:
Select a subscription type. For information about OpenShift Dedicated subscription options, see Cluster subscriptions and registration in the OpenShift Cluster Manager documentation.
NoteThe subscription types that are available to you depend on your OpenShift Dedicated subscriptions and resource quotas. For more information, contact your sales representative or Red Hat support.
- Select the Customer Cloud Subscription infrastructure type to deploy OpenShift Dedicated in an existing cloud provider account that you own.
- Click Next.
- Select Run on Amazon Web Services.
- Review and complete the listed Prerequisites.
- Select the checkbox to acknowledge that you have read and completed all of the prerequisites.
Provide your AWS account details:
- Enter your AWS account ID.
Enter your AWS access key ID and AWS secret access key for your AWS IAM user account.
NoteRevoking these credentials in AWS results in a loss of access to any cluster created with these credentials.
Optional: You can select Bypass AWS service control policy (SCP) checks to disable the SCP checks.
NoteSome AWS SCPs can cause the installation to fail, even if you have the required permissions. Disabling the SCP checks allows an installation to proceed. The SCP is still enforced even if the checks are bypassed.
- Click Next to validate your cloud provider account and go to the Cluster details page.
On the Cluster details page, provide a name for your cluster and specify the cluster details:
- Add a Cluster name.
Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string. To customize the subdomain, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.
- Select a cluster version from the Version drop-down menu.
- Select a cloud provider region from the Region drop-down menu.
- Select a Single zone or Multi-zone configuration.
- Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.
Optional: Expand Advanced Encryption to make changes to encryption settings.
Accept the default setting Use default KMS Keys to use your default AWS KMS key, or select Use Custom KMS keys to use a custom KMS key.
- With Use Custom KMS keys selected, enter the AWS Key Management Service (KMS) custom key Amazon Resource Name (ARN) in the Key ARN field. The key is used for encrypting all control plane, infrastructure, worker node root volumes, and persistent volumes in your cluster.
Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated.
NoteIf Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography.
Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but the keys are not. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default.
NoteBy enabling etcd encryption for the key values in etcd, you will incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case.
- Click Next.
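The domain-prefix rule described above (the cluster name is used when it is 15 characters or fewer, otherwise a random 15-character string is generated) can be sketched in shell. The generator shown here is illustrative only, not OpenShift Cluster Manager's actual implementation:

```shell
# Sketch of the documented rule: cluster names of 15 characters or fewer
# become the domain prefix; longer names get a random 15-character prefix.
domain_prefix() {
  name="$1"
  if [ "${#name}" -le 15 ]; then
    printf '%s\n' "$name"
  else
    # Illustrative random generator; OCM's real generator may differ.
    LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 15
    echo
  fi
}

domain_prefix my-cluster                  # prints "my-cluster"
domain_prefix my-production-cluster-east  # prints a random 15-character string
```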
On the Default machine pool page, select a Compute node instance type and a Compute node count. The number and types of nodes that are available depend on your OpenShift Dedicated subscription. If you are using multiple availability zones, the compute node count is per zone.
NoteAfter your cluster is created, you can change the number of compute nodes in your cluster, but you cannot change the compute node instance type in a machine pool. The number and types of nodes available to you depend on your OpenShift Dedicated subscription.
Choose your preference for the Instance Metadata Service (IMDS) type, either using both IMDSv1 and IMDSv2 types or requiring your EC2 instances to use only IMDSv2. You can access instance metadata from a running instance in two ways:
- Instance Metadata Service Version 1 (IMDSv1) - a request/response method
Instance Metadata Service Version 2 (IMDSv2) - a session-oriented method
ImportantThe Instance Metadata Service settings cannot be changed after your cluster is created.
NoteIMDSv2 uses session-oriented requests. With session-oriented requests, you create a session token that defines the session duration, which can range from a minimum of one second to a maximum of six hours. During the specified duration, you can use the same session token for subsequent requests. After the specified duration expires, you must create a new session token to use for future requests.
For more information regarding IMDS, see Instance metadata and user data in the AWS documentation.
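The IMDSv2 session flow can be sketched with curl. The link-local address and headers below are AWS's documented metadata-service interface; the function only returns data when run on an EC2 instance:

```shell
# Sketch: IMDSv2 session-oriented metadata access (run on an EC2 instance).
IMDS="http://169.254.169.254"
TOKEN_TTL=21600   # maximum session duration: 6 hours, in seconds

imds_get() {
  # PUT creates a session token valid for TOKEN_TTL seconds; the same
  # token can be reused for metadata requests until it expires.
  token=$(curl -s -X PUT "$IMDS/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: $TOKEN_TTL")
  curl -s -H "X-aws-ec2-metadata-token: $token" \
    "$IMDS/latest/meta-data/$1"
}

# Example (on an EC2 instance): imds_get instance-id
```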
- Optional: Expand Edit node labels to add labels to your nodes. Click Add label to add more node labels and select Next.
On the Network configuration page, select Public or Private to use either public or private API endpoints and application routes for your cluster.
ImportantIf you are using private API endpoints, you cannot access your cluster until you update the network settings in your cloud provider account.
Optional: To install the cluster in an existing AWS Virtual Private Cloud (VPC):
- Select Install into an existing VPC.
If you are installing into an existing VPC and opted to use private API endpoints, you can select Use a PrivateLink. This option enables connections to the cluster by Red Hat Site Reliability Engineering (SRE) using only AWS PrivateLink endpoints.
NoteThe Use a PrivateLink option cannot be changed after a cluster is created.
- If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select Configure a cluster-wide proxy.
- Click Next.
If you opted to install the cluster in an existing AWS VPC, provide your Virtual Private Cloud (VPC) subnet settings and select Next. You must have configured network address translation (NAT) for the VPC subnets.
NoteYou must ensure that your VPC is configured with a public and a private subnet for each availability zone that you want the cluster installed into. If you opted to use PrivateLink, only private subnets are required.
Optional: Expand Additional security groups and select additional custom security groups to apply to nodes in the machine pools that are created by default. You must have already created the security groups and associated them with the VPC that you selected for this cluster. You cannot add or edit security groups to the default machine pools after you create the cluster.
By default, the security groups you specify are added for all node types. Clear the Apply the same security groups to all node types checkbox to apply different security groups for each node type.
For more information, see the requirements for Security groups under Additional resources.
If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the Cluster-wide proxy page:
Enter a value in at least one of the following fields:
- Specify a valid HTTP proxy URL.
- Specify a valid HTTPS proxy URL.
- In the Additional trust bundle field, provide a PEM-encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments.
Click Next.
For more information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy.
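Before entering proxy details in the wizard, you can confirm that the proxy is reachable from inside the VPC, as required by the prerequisites. A minimal sketch; the proxy URL and trust-bundle path are hypothetical placeholders for your own values:

```shell
# Sketch: verify a cluster-wide proxy from a host in the VPC's private
# subnets. PROXY and TRUST_BUNDLE are hypothetical values for your site.
PROXY="http://proxy.example.com:3128"
TRUST_BUNDLE="/etc/pki/tls/certs/proxy-ca-bundle.pem"

check_proxy() {
  # -x sends the request through the proxy; --cacert trusts the CA that a
  # TLS-inspecting proxy uses to re-sign connections.
  curl -fsS -o /dev/null --connect-timeout 5 \
    -x "$PROXY" --cacert "$TRUST_BUNDLE" \
    https://api.openshift.com && echo "proxy OK"
}

# Run from a host inside the VPC: check_proxy
```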
In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.
NoteIf you are installing into a VPC, the Machine CIDR range must match the VPC subnets.
ImportantCIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding.
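Because CIDR configurations cannot be changed after installation, it is worth checking that your VPC subnets fall inside the Machine CIDR before proceeding. A small pure-shell sketch of that containment check; the example ranges are hypothetical:

```shell
# Sketch: confirm a subnet CIDR falls inside the Machine CIDR.
ip_to_int() {
  # Convert a dotted-quad IPv4 address to a 32-bit integer.
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

cidr_contains() {
  # Usage: cidr_contains <outer_cidr> <inner_cidr>
  outer_net=$(ip_to_int "${1%/*}"); outer_len=${1#*/}
  inner_net=$(ip_to_int "${2%/*}"); inner_len=${2#*/}
  [ "$inner_len" -ge "$outer_len" ] || return 1
  mask=$(( (0xFFFFFFFF << (32 - outer_len)) & 0xFFFFFFFF ))
  [ $(( outer_net & mask )) -eq $(( inner_net & mask )) ]
}

# Example with hypothetical ranges:
cidr_contains 10.0.0.0/16 10.0.128.0/24 && echo "subnet is inside the Machine CIDR"
```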
On the Cluster update strategy page, configure your update preferences:
Choose a cluster update method:
- Select Individual updates if you want to schedule each update individually. This is the default option.
Select Recurring updates to update your cluster on your preferred day and start time, when updates are available.
NoteYou can review the end-of-life dates in the update lifecycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle.
Provide administrator approval based on your cluster update method:
- Individual updates: If you select an update version that requires approval, provide an administrator’s acknowledgment and click Approve and continue.
- Recurring updates: If you selected recurring updates for your cluster, provide an administrator’s acknowledgment and click Approve and continue. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment.
- If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.
- Optional: You can set a grace period for Node draining during cluster upgrades. A 1-hour grace period is set by default.
Click Next.
NoteIn the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings.
- Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete.
- Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable, which is located directly under Delete Protection: Disabled. This will prevent your cluster from being deleted. To disable delete protection, select Disable. By default, clusters are created with the delete protection feature disabled.
Verification
- You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready.
1.3. Creating a cluster on AWS with a Red Hat cloud account
Through OpenShift Cluster Manager, you can create an OpenShift Dedicated cluster on Amazon Web Services (AWS) using a standard cloud provider account owned by Red Hat.
Procedure
- Log in to OpenShift Cluster Manager and click Create cluster.
- In the Cloud tab, click Create cluster in the Red Hat OpenShift Dedicated row.
Under Billing model, configure the subscription type and infrastructure type:
Select the Annual subscription type. Only the Annual subscription type is available when you deploy a cluster using a Red Hat cloud account.
For information about OpenShift Dedicated subscription options, see Cluster subscriptions and registration in the OpenShift Cluster Manager documentation.
NoteYou must have the required resource quota for the Annual subscription type to be available. For more information, contact your sales representative or Red Hat support.
- Select the Red Hat cloud account infrastructure type to deploy OpenShift Dedicated in a cloud provider account that is owned by Red Hat.
- Click Next.
- Select Run on Amazon Web Services and click Next.
On the Cluster details page, provide a name for your cluster and specify the cluster details:
- Add a Cluster name.
Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string. To customize the subdomain, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.
- Select a cluster version from the Version drop-down menu.
- Select a cloud provider region from the Region drop-down menu.
- Select a Single zone or Multi-zone configuration.
- Select a Persistent storage capacity for the cluster. For more information, see the Storage section in the OpenShift Dedicated service definition.
- Specify the number of Load balancers that you require for your cluster. For more information, see the Load balancers section in the OpenShift Dedicated service definition.
- Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.
Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but not the keys. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default.
NoteBy enabling etcd encryption for the key values in etcd, you will incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case.
- Click Next.
On the Default machine pool page, select a Compute node instance type and a Compute node count. The number and types of nodes that are available depend on your OpenShift Dedicated subscription. If you are using multiple availability zones, the compute node count is per zone.
NoteAfter your cluster is created, you can change the number of compute nodes, but you cannot change the compute node instance type in a machine pool. For clusters that use the CCS model, you can add machine pools after installation that use a different instance type. The number and types of nodes available to you depend on your OpenShift Dedicated subscription.
- Optional: Expand Edit node labels to add labels to your nodes. Click Add label to add more node labels and select Next.
- In the Cluster privacy dialog, select Public or Private to use either public or private API endpoints and application routes for your cluster.
- Click Next.
In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.
ImportantCIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding.
If the cluster privacy is set to Private, you cannot access your cluster until you configure private connections in your cloud provider.
On the Cluster update strategy page, configure your update preferences:
Choose a cluster update method:
- Select Individual updates if you want to schedule each update individually. This is the default option.
Select Recurring updates to update your cluster on your preferred day and start time, when updates are available.
NoteYou can review the end-of-life dates in the update lifecycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle.
Provide administrator approval based on your cluster update method:
- Individual updates: If you select an update version that requires approval, provide an administrator’s acknowledgment and click Approve and continue.
- Recurring updates: If you selected recurring updates for your cluster, provide an administrator’s acknowledgment and click Approve and continue. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment.
- If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.
- Optional: You can set a grace period for Node draining during cluster upgrades. A 1-hour grace period is set by default.
Click Next.
NoteIn the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings.
- Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete.
- Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable, which is located directly under Delete Protection: Disabled. This will prevent your cluster from being deleted. To disable delete protection, select Disable. By default, clusters are created with the delete protection feature disabled.
Verification
- You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready.
1.4. Additional resources
- For information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy.
- For details about the AWS service control policies required for CCS deployments, see Minimum required service control policy (SCP).
- For information about persistent storage for OpenShift Dedicated, see the Storage section in the OpenShift Dedicated service definition.
- For information about load balancers for OpenShift Dedicated, see the Load balancers section in the OpenShift Dedicated service definition.
- For more information about etcd encryption, see the etcd encryption service definition.
- For information about the end-of-life dates for OpenShift Dedicated versions, see the OpenShift Dedicated update life cycle.
- For information about the requirements for custom additional security groups, see Additional custom security groups.
Chapter 2. Creating a GCP Private Service Connect enabled private cluster
You can create a private OpenShift Dedicated cluster on Google Cloud Platform (GCP) using Google Cloud’s security-enhanced networking feature Private Service Connect (PSC).
2.1. Private Service Connect overview
Private Service Connect (PSC), a capability of Google Cloud networking, enables private communication between services across different projects or organizations within GCP. Users that implement PSC as part of their network connectivity can deploy OpenShift Dedicated clusters in a private and secured environment within Google Cloud Platform (GCP) without any public facing cloud resources. For more information on PSC, see Private Service Connect.
Private Service Connect is supported by the Customer Cloud Subscription (CCS) infrastructure type only.
2.1.1. Private Service Connect architecture
The PSC architecture includes producer services and consumer services. Using PSC, consumers can access producer services privately from inside their VPC network. Similarly, producers can host services in their own separate VPC networks and offer private connections to their consumers.
The following image depicts how Red Hat SREs and other internal resources access and support clusters created using PSC.
- A unique PSC Service Attachment is created for each OSD cluster in the customer GCP project. The PSC Service Attachment points to the cluster API server load balancer created in the customer GCP project.
- Similar to Service Attachments, a unique PSC Service Endpoint is created in the Red Hat Management GCP project for each OSD cluster.
- A dedicated subnet for GCP Private Service Connect is created in the cluster’s network within the customer GCP project. This is a special subnet type where the producer services are published via PSC Service Attachments. This subnet is used to Source NAT (SNAT) incoming requests to the cluster API server. Additionally, the PSC subnet must be within the Machine CIDR range and cannot be used in more than one Service Attachment.
- Red Hat internal resources and SREs access private OSD clusters using the connectivity between a PSC Endpoint and Service Attachment. Even though the traffic transits multiple VPC networks, it remains entirely within Google Cloud.
- Access to PSC Service Attachments is possible only via the Red Hat Management project.
Figure 2.1. PSC architecture overview
2.2. Prerequisites
In addition to the prerequisites that you need to complete before deploying any OpenShift Dedicated on Google Cloud Platform (GCP) cluster, you must also complete the following prerequisites to deploy a private cluster using Private Service Connect (PSC):
A pre-created Virtual Private Cloud (VPC) with the following subnets in the same Google Cloud Platform (GCP) region where your cluster will be deployed:
- A control plane subnet
- A worker subnet
A subnet used for the PSC service attachment with the purpose set to Private Service Connect.
ImportantThe subnet mask for the PSC service attachment must be /29 or larger and must be dedicated to an individual OpenShift Dedicated cluster. Additionally, the subnet must be contained within the Machine CIDR range used while provisioning the OpenShift Dedicated cluster.
For information about how to create a VPC on Google Cloud Platform (GCP), see Create and manage VPC networks in the Google Cloud documentation.
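The dedicated PSC service-attachment subnet described above can be created with the gcloud CLI. A sketch, assuming hypothetical network, region, and range values (the /29 range must sit inside your Machine CIDR); the run wrapper prints each command and executes it only when gcloud is installed:

```shell
# Sketch: create a dedicated Private Service Connect subnet for the
# cluster. NETWORK, REGION, and PSC_RANGE are hypothetical values.
NETWORK="my-cluster-vpc"
REGION="us-east1"
PSC_RANGE="10.0.8.0/29"   # /29 or larger, inside the Machine CIDR

run() {
  # Print the command; only execute it if the gcloud CLI is available.
  echo "+ $*"
  if command -v gcloud >/dev/null 2>&1; then "$@" || true; fi
}

run gcloud compute networks subnets create psc-subnet \
  --network="$NETWORK" --region="$REGION" \
  --range="$PSC_RANGE" --purpose=PRIVATE_SERVICE_CONNECT
```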
- Provide a path from the OpenShift Dedicated cluster to the internet for the domains and ports listed in the GCP firewall prerequisites in the Additional resources section.
- Enabled Cloud Identity-Aware Proxy API at the Google Cloud Platform (GCP) project level.
In addition to the requirements listed above, clusters configured with the Service Account authentication type must grant the IAP-Secured Tunnel User role to the osd-ccs-admin service account.
For more information about the prerequisites that must be completed before deploying an OpenShift Dedicated on Google Cloud Platform (GCP), see Additional resources.
2.3. Creating a private cluster with Private Service Connect
Private Service Connect is supported with the Customer Cloud Subscription (CCS) infrastructure type only. To create an OpenShift Dedicated on Google Cloud Platform (GCP) using PSC, see Creating a cluster on GCP with Google Cloud Marketplace.
2.4. Additional resources
For information on OpenShift Dedicated on Google Cloud Platform (GCP) cluster prerequisites, see Customer Requirements.
For information about configuring your firewalls, see GCP firewall prerequisites.
Chapter 3. Creating a cluster on GCP with Workload Identity Federation
3.1. Workload Identity Federation Overview
Workload Identity Federation (WIF) is a Google Cloud Platform (GCP) Identity and Access Management (IAM) feature that provides third parties a secure method to access resources on a customer’s cloud account. WIF eliminates the need for service account keys, and is Google Cloud’s preferred method of credential authentication.
While service account keys can provide powerful access to your Google Cloud resources, they must be maintained by the end user and can be a security risk if they are not managed properly. WIF does not use service keys as an access method for your Google cloud resources. Instead, WIF grants access by using credentials from external identity providers to generate short-lived credentials for workloads. The workloads can then use these credentials to temporarily impersonate service accounts and access Google Cloud resources. This removes the burden of having to properly maintain service account keys, and removes the risk of unauthorized users gaining access to service account keys.
The following items provide a basic overview of the Workload Identity Federation process:
- The owner of the Google Cloud Platform (GCP) project configures a workload identity pool with an identity provider, allowing OpenShift Dedicated to access the project’s associated service accounts using short-lived credentials.
- This workload identity pool is configured to authenticate requests using an identity provider (IdP) that the user defines.
- For applications to get access to cloud resources, they first pass credentials to Google’s Security Token Service (STS). STS uses the specified identity provider to verify the credentials.
- Once the credentials are verified, STS returns a temporary access token to the caller, giving the application the ability to impersonate the service account bound to that identity.
Operators also need access to cloud resources. By using WIF instead of service account keys to grant this access, cluster security is further strengthened, as service account keys are no longer stored in the cluster. Instead, operators are given temporary access tokens that impersonate the service accounts. These tokens are short-lived and regularly rotated.
For more information about Workload Identity Federation, see the Google Cloud Platform documentation.
Workload Identity Federation (WIF) is only supported on OpenShift Dedicated version 4.17 and later.
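The credential flow in the overview can be made concrete with a sketch of the underlying Security Token Service call. This is illustrative only: the audience path uses placeholder project and pool names, and OpenShift Dedicated performs this exchange internally; you do not run it yourself.

```shell
# Illustrative sketch of the OAuth token exchange (RFC 8693) behind WIF.
# The audience identifies a workload identity pool provider; the project
# number, pool, and provider names below are placeholders.
STS_URL="https://sts.googleapis.com/v1/token"
AUDIENCE="//iam.googleapis.com/projects/000000000000/locations/global/workloadIdentityPools/example-pool/providers/example-provider"

exchange_token() {
  # $1 is a token from the external identity provider; Google STS verifies
  # it against the pool's provider and returns a short-lived access token.
  curl -s -X POST "$STS_URL" \
    -d grant_type=urn:ietf:params:oauth:grant-type:token-exchange \
    -d audience="$AUDIENCE" \
    -d scope=https://www.googleapis.com/auth/cloud-platform \
    -d requested_token_type=urn:ietf:params:oauth:token-type:access_token \
    -d subject_token_type=urn:ietf:params:oauth:token-type:jwt \
    -d subject_token="$1"
}
```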
3.2. Prerequisites
You must complete the following prerequisites before Creating a Workload Identity Federation cluster using OpenShift Cluster Manager and Creating a Workload Identity Federation cluster using the OCM CLI.
You have confirmed your Google Cloud account has the necessary resource quotas and limits to support your desired cluster size according to the cluster resource requirements.
NoteFor more information regarding resource quotas and limits, see Additional resources.
- You have reviewed the introduction to OpenShift Dedicated and the documentation on architecture concepts.
- You have reviewed the OpenShift Dedicated cloud deployment options.
- You have read and completed the Required customer procedure.
WIF supports the deployment of a private OpenShift Dedicated on Google Cloud Platform (GCP) cluster with Private Service Connect (PSC). Red Hat recommends using PSC when deploying private clusters. For more information about the prerequisites for PSC, see Prerequisites for Private Service Connect.
3.3. Creating a Workload Identity Federation cluster using OpenShift Cluster Manager
Procedure
- Log in to OpenShift Cluster Manager and click Create cluster on the OpenShift Dedicated card.
Under Billing model, configure the subscription type and infrastructure type.
ImportantWorkload Identity Federation is supported by the Customer Cloud Subscription (CCS) infrastructure type only.
- Select a subscription type. For information about OpenShift Dedicated subscription options, see Cluster subscriptions and registration in the OpenShift Cluster Manager documentation.
- Select the Customer cloud subscription infrastructure type.
- Click Next.
- Select Run on Google Cloud Platform.
Select Workload Identity Federation as the Authentication type.
- Read and complete all the required prerequisites.
- Click the checkbox indicating that you have read and completed all the required prerequisites.
To create a new WIF configuration, open a terminal window and run the following OCM CLI command.
$ ocm gcp create wif-config --name <wif_name> --project <gcp_project_id>
- Select a configured WIF configuration from the WIF configuration drop-down list. If you want to select the WIF configuration you created in the last step, click Refresh first.
- Click Next.
On the Details page, provide a name for your cluster and specify the cluster details:
- In the Cluster name field, enter a name for your cluster.
Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string. To customize the subdomain prefix, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.
Select a cluster version from the Version drop-down menu.
NoteWorkload Identity Federation (WIF) is only supported on OpenShift Dedicated version 4.17 and later.
- Select a cloud provider region from the Region drop-down menu.
- Select a Single zone or Multi-zone configuration.
Optional: Select Enable Secure Boot for Shielded VMs to use Shielded VMs when installing your cluster. For more information, see Shielded VMs.
ImportantTo successfully create a cluster, you must select Enable Secure Boot support for Shielded VMs if your organization has the policy constraint constraints/compute.requireShieldedVm enabled. For more information regarding GCP organizational policy constraints, see Organization policy constraints.
- Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.
Optional: Expand Advanced Encryption to make changes to encryption settings.
- Select Use custom KMS keys to use custom KMS keys. If you prefer not to use custom KMS keys, leave the default setting Use default KMS Keys.
With Use Custom KMS keys selected:
- Select a key ring location from the Key ring location drop-down menu.
- Select a key ring from the Key ring drop-down menu.
- Select a key name from the Key name drop-down menu.
- Provide the KMS Service Account.
Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated.
NoteIf Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography.
Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but not the keys. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default.
NoteBy enabling etcd encryption for the key values in etcd, you incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case.
- Click Next.
- On the Machine pool page, select a Compute node instance type and a Compute node count. The number and types of nodes that are available depend on your OpenShift Dedicated subscription. If you are using multiple availability zones, the compute node count is per zone.
Optional: Expand Add node labels to add labels to your nodes. Click Add additional label to add more node labels.
ImportantThis step refers to labels within Kubernetes, not Google Cloud. For more information regarding Kubernetes labels, see Labels and Selectors.
- Click Next.
- In the Cluster privacy dialog, select Public or Private to use either public or private API endpoints and application routes for your cluster. If you select Private, Use Private Service Connect is selected by default, and cannot be disabled. Private Service Connect (PSC) is Google Cloud’s security-enhanced networking feature.
Optional: To install the cluster in an existing GCP Virtual Private Cloud (VPC):
Select Install into an existing VPC.
ImportantPrivate Service Connect is supported only with Install into an existing VPC.
If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select Configure a cluster-wide proxy.
ImportantIn order to configure a cluster-wide proxy for your cluster, you must first create the Cloud network address translation (NAT) and a Cloud router. See the Additional resources section for more information.
Accept the default application ingress settings, or to create your own custom settings, select Custom Settings.
- Optional: Provide route selector.
- Optional: Provide excluded namespaces.
- Select a namespace ownership policy.
Select a wildcard policy.
For more information about custom application ingress settings, click on the information icon provided for each setting.
- Click Next.
Optional: To install the cluster into a GCP Shared VPC, follow these steps.
ImportantThe VPC owner of the host project must enable a project as a host project in their Google Cloud console and add the Compute Network Administrator, Compute Security Administrator, and DNS Administrator roles to the following service accounts prior to cluster installation:
- osd-deployer
- osd-control-plane
- openshift-machine-api-gcp
Failure to do so will cause the cluster to go into the "Installation Waiting" state. If this occurs, you must contact the VPC owner of the host project to assign the roles to the service accounts listed above. The VPC owner of the host project has 30 days to grant the listed permissions before the cluster creation fails. For more information, see Enable a host project and Provision Shared VPC.
- Select Install into GCP Shared VPC.
- Specify the Host project ID. If the specified host project ID is incorrect, cluster creation fails.
If you opted to install the cluster in an existing GCP VPC, provide your Virtual Private Cloud (VPC) subnet settings and select Next. You must have created the Cloud network address translation (NAT) and a Cloud router. See Additional resources for information about Cloud NATs and Google VPCs.
NoteIf you are installing a cluster into a Shared VPC, the VPC name and subnets are shared from the host project.
- Click Next.
If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the Cluster-wide proxy page:
Enter a value in at least one of the following fields:
- Specify a valid HTTP proxy URL.
- Specify a valid HTTPS proxy URL.
- In the Additional trust bundle field, provide a PEM encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments.
Click Next.
For more information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy.
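The Cloud router and Cloud NAT that a cluster-wide proxy depends on can be created with the gcloud CLI before you start the wizard. A sketch with illustrative placeholder names (my-router, my-nat, <vpc_network>, <region>), not values prescribed by the installer:

```shell
# Create a Cloud Router in the VPC that the cluster will use.
gcloud compute routers create my-router \
    --network <vpc_network> \
    --region <region>

# Attach a Cloud NAT to the router so instances in the private
# subnets get outbound connectivity, including to the proxy.
gcloud compute routers nats create my-nat \
    --router my-router \
    --region <region> \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```

The names you choose are not referenced by OpenShift Cluster Manager; the NAT and router only need to exist in the VPC and region where the cluster is installed.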
In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.
ImportantCIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding.
If the cluster privacy is set to Private, you cannot access your cluster until you configure private connections in your cloud provider.
On the Cluster update strategy page, configure your update preferences:
Choose a cluster update method:
- Select Individual updates if you want to schedule each update individually. This is the default option.
Select Recurring updates to update your cluster on your preferred day and start time, when updates are available.
NoteYou can review the end-of-life dates in the update lifecycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle.
Provide administrator approval based on your cluster update method:
- Individual updates: If you select an update version that requires approval, provide an administrator’s acknowledgment and click Approve and continue.
- Recurring updates: If you selected recurring updates for your cluster, provide an administrator’s acknowledgment and click Approve and continue. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment.
- If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.
- Optional: You can set a grace period for Node draining during cluster upgrades. A 1 hour grace period is set by default.
Click Next.
NoteIn the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings.
- Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete.
Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable, which is located directly under Delete Protection: Disabled. This will prevent your cluster from being deleted. To disable delete protection, select Disable. By default, clusters are created with the delete protection feature disabled.
Verification
- You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready.
3.4. Creating a Workload Identity Federation cluster using the OCM CLI
You can create an OpenShift Dedicated on Google Cloud Platform (GCP) cluster with Workload Identity Federation (WIF) using the OpenShift Cluster Manager CLI (ocm) in interactive or non-interactive mode.
To create a WIF-enabled cluster, the OpenShift Cluster Manager CLI (ocm) must be version 1.0.2 or greater.
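A quick local check of this requirement can be sketched as follows, assuming ocm version prints a plain version string (the exact output format may vary between releases):

```shell
# Minimal sketch: verify the installed OCM CLI meets the 1.0.2
# minimum required for WIF-enabled cluster creation.
min_required="1.0.2"

version_ge() {
    # True if version $1 >= version $2, compared with sort -V.
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Only attempt the check if the ocm binary is on the PATH.
if command -v ocm >/dev/null 2>&1; then
    installed="$(ocm version)"
    if version_ge "$installed" "$min_required"; then
        echo "ocm $installed meets the $min_required minimum"
    else
        echo "ocm $installed is older than $min_required; upgrade before creating a WIF cluster" >&2
    fi
fi
```

The sort -V comparison orders version strings semantically, so multi-digit components such as 1.0.10 compare correctly against 1.0.2.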
Before creating the cluster, you must first create a WIF configuration.
Migrating an existing non-WIF cluster to a WIF configuration is not supported. This feature can only be enabled during new cluster creation.
3.4.1. Creating a WIF configuration
Procedure
You can create a WIF configuration using the auto mode or the manual mode.
The auto mode enables you to automatically create the service accounts for OpenShift Dedicated components as well as other IAM resources.
Alternatively, you can use the manual mode. In manual mode, you are provided with commands within a script.sh file, which you use to manually create the service accounts for OpenShift Dedicated components as well as other IAM resources.
Based on your mode preference, run one of the following commands to create a WIF configuration:
Create a WIF configuration in auto mode by running the following command:
$ ocm gcp create wif-config --name <wif_name> \
    --project <gcp_project_id>
Example output
2024/09/26 13:05:41 Creating workload identity configuration...
2024/09/26 13:05:47 Workload identity pool created with name 2e1kcps6jtgla8818vqs8tbjjls4oeub
2024/09/26 13:05:47 workload identity provider created with name oidc
2024/09/26 13:05:48 IAM service account osd-worker-oeub created
2024/09/26 13:05:49 IAM service account osd-control-plane-oeub created
2024/09/26 13:05:49 IAM service account openshift-gcp-ccm-oeub created
2024/09/26 13:05:50 IAM service account openshift-gcp-pd-csi-driv-oeub created
2024/09/26 13:05:50 IAM service account openshift-image-registry-oeub created
2024/09/26 13:05:51 IAM service account openshift-machine-api-gcp-oeub created
2024/09/26 13:05:51 IAM service account osd-deployer-oeub created
2024/09/26 13:05:52 IAM service account cloud-credential-operator-oeub created
2024/09/26 13:05:52 IAM service account openshift-cloud-network-c-oeub created
2024/09/26 13:05:53 IAM service account openshift-ingress-gcp-oeub created
2024/09/26 13:05:55 Role "osd_deployer_v4.17" updated
Create a WIF configuration in manual mode by running the following command:
$ ocm gcp create wif-config --name <wif_name> \
    --project <gcp_project_id> \
    --mode=manual
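In manual mode, the generated commands land in a script.sh file, as described above. A hedged sketch of the follow-up workflow (the file name comes from the manual-mode description; the guard avoids running anything if the file is absent):

```shell
# Review the commands that manual mode generated before executing
# them against your GCP project.
if [ -f script.sh ]; then
    cat script.sh     # inspect the service-account and IAM commands first
    bash script.sh    # execute them once you are satisfied
else
    echo "script.sh not found; run the manual-mode wif-config command first" >&2
fi
```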
Once the WIF is configured, the following service accounts, roles, and groups are created.
Table 3.1. WIF configuration service accounts, groups, and roles

| Service account/group | GCP pre-defined roles and Red Hat custom roles |
| --- | --- |
| osd-deployer | osd_deployer_v4.17 |
| osd-control-plane | compute.instanceAdmin, compute.networkAdmin, compute.securityAdmin, compute.storageAdmin |
| osd-worker | compute.storageAdmin, compute.viewer |
| cloud-credential-operator-gcp-ro-creds | cloud_credential_operator_gcp_ro_creds_v4.17 |
| openshift-cloud-network-config-controller-gcp | openshift_cloud_network_config_controller_gcp_v4.17 |
| openshift-gcp-ccm | openshift_gcp_ccm_v4.17 |
| openshift-gcp-pd-csi-driver-operator | compute.storageAdmin, iam.serviceAccountUser, resourcemanager.tagUser, openshift_gcp_pd_csi_driver_operator_v4.17 |
| openshift-image-registry-gcp | openshift_image_registry_gcs_v4.17 |
| openshift-ingress-gcp | openshift_ingress_gcp_v4.17 |
| openshift-machine-api-gcp | openshift_machine_api_gcp_v4.17 |
| Access via SRE group: sd-sre-platform-gcp-access | sre_managed_support |
For further details about WIF configuration roles and their assigned permissions, see managed-cluster-config.
3.4.2. Creating a WIF cluster
Procedure
You can create a WIF cluster using the interactive mode or the non-interactive mode.
In interactive mode, cluster attributes are displayed automatically as prompts during the creation of the cluster. You enter the values for those prompts based on specified requirements in the fields provided.
In non-interactive mode, you specify the values for specific parameters within the command.
Based on your mode preference, run one of the following commands to create an OpenShift Dedicated on Google Cloud Platform (GCP) cluster with a WIF configuration:
Create a cluster in interactive mode by running the following command:
$ ocm create cluster --interactive 1
1. interactive mode enables you to specify configuration options at the interactive prompts.
Create a cluster in non-interactive mode by running the following command:
NoteThe following example is made up of optional and required parameters and may differ from your non-interactive mode command. Parameters not identified as optional are required. For additional details about these and other parameters, run the ocm create cluster --help command in your terminal window.
$ ocm create cluster <cluster_name> \ 1
    --provider=gcp \ 2
    --ccs=true \ 3
    --wif-config <wif_name> \ 4
    --region <gcp_region> \ 5
    --subscription-type=marketplace-gcp \ 6
    --marketplace-gcp-terms=true \ 7
    --version <version> \ 8
    --multi-az=true \ 9
    --enable-autoscaling=true \ 10
    --min-replicas=3 \ 11
    --max-replicas=6 \ 12
    --secure-boot-for-shielded-vms=true 13
1. Replace <cluster_name> with a name for your cluster.
2. Set value to gcp.
3. Set value to true.
4. Replace <wif_name> with the name of your WIF configuration.
5. Replace <gcp_region> with the Google Cloud Platform (GCP) region where the new cluster will be deployed.
6. Optional: The subscription billing model for the cluster.
7. Optional: If you provided a value of marketplace-gcp for the subscription-type parameter, marketplace-gcp-terms must be equal to true.
8. Optional: The desired OpenShift version.
9. Optional: Deploy to multiple data centers.
10. Optional: Enable autoscaling of compute nodes.
11. Optional: Minimum number of compute nodes.
12. Optional: Maximum number of compute nodes.
13. Optional: Secure Boot enables the use of Shielded VMs in the Google Cloud Platform.
3.4.3. Updating a WIF configuration
Updating a WIF configuration is only applicable for y-stream updates. For an overview of the update process, including details regarding version semantics, see The Ultimate Guide to OpenShift Release and Upgrade Process for Cluster Administrators.
Before updating a WIF-enabled OpenShift Dedicated cluster to a newer version, you must update the wif-config to that version as well. If you do not update the wif-config version before attempting to update the cluster version, the cluster version update will fail.
You can update a wif-config to a specific OpenShift Dedicated version by running the following command:
$ ocm gcp update wif-config --version <version> \ 1
    --name <wif_name> 2

1. Replace <version> with the OpenShift Dedicated version that you are updating the wif-config to.
2. Replace <wif_name> with the name of the WIF configuration that you are updating.
3.4.4. Listing WIF clusters
To list all of your OpenShift Dedicated clusters that have been deployed using the WIF authentication type, run the following command:
$ ocm list clusters --parameter search="gcp.authentication.wif_config_id != ''"
To list all of your OpenShift Dedicated clusters that have been deployed using a specific wif-config, run the following command:
$ ocm list clusters --parameter search="gcp.authentication.wif_config_id = '<wif_config_id>'" 1
1. Replace <wif_config_id> with the ID of the WIF configuration to list the clusters that have been deployed using that WIF configuration.
3.5. Additional resources
- For information about OpenShift Dedicated clusters using a Customer Cloud Subscription (CCS) model on Google Cloud Platform (GCP), see Customer requirements.
- For information about resource quotas, see Resource quotas per project.
- For information about limits, see GCP account limits.
- For information about required APIs, see Required customer procedure.
- For information about managing workload identity pools, see Manage workload identity pools and providers.
- For information about managing roles and permissions in your Google Cloud account, see Roles and permissions.
- For a list of the supported maximums, see Cluster maximums.
Chapter 4. Creating a cluster on GCP
The following topic addresses creating an OpenShift Dedicated on Google Cloud Platform (GCP) cluster using a service account key, which creates credentials required for cluster access. Service account keys produce long-lived credentials. To install and interact with an OpenShift Dedicated on Google Cloud Platform (GCP) cluster using Workload Identity Federation (WIF), which is the recommended authentication type because it provides enhanced security, see the topic Creating a cluster on GCP with Workload Identity Federation.
You can install OpenShift Dedicated on Google Cloud Platform (GCP) by using your own GCP account through the Customer Cloud Subscription (CCS) model or by using a GCP infrastructure account that is owned by Red Hat.
4.1. Prerequisites
- You reviewed the introduction to OpenShift Dedicated and the documentation on architecture concepts.
- You reviewed the OpenShift Dedicated cloud deployment options.
4.2. Creating a cluster on GCP with CCS
By using the Customer Cloud Subscription (CCS) billing model, you can create an OpenShift Dedicated cluster in an existing Google Cloud Platform (GCP) account that you own.
You must meet several prerequisites if you use the CCS model to deploy and manage OpenShift Dedicated into your GCP account.
Prerequisites
- You have configured your GCP account for use with OpenShift Dedicated.
- You have configured the GCP account quotas and limits that are required to support the desired cluster size.
- You have created a GCP project.
- You have enabled the Google Cloud Resource Manager API in your GCP project. For more information about enabling APIs for your project, see the Google Cloud documentation.
You have an IAM service account in GCP called osd-ccs-admin with the following roles attached:
- Compute Admin
- DNS Administrator
- Security Admin
- Service Account Admin
- Service Account Key Admin
- Service Account User
- Organization Policy Viewer
- Service Management Administrator
- Service Usage Admin
- Storage Admin
- Compute Load Balancer Admin
- Role Viewer
- Role Administrator
You have created a key for your osd-ccs-admin GCP service account and exported it to a file named osServiceAccount.json.
NoteFor more information about creating a key for your GCP service account and exporting it to a JSON file, see Creating service account keys in the Google Cloud documentation.
- Consider having Enhanced Support or higher from GCP.
- To prevent potential conflicts, consider having no other resources provisioned in the project prior to installing OpenShift Dedicated.
- If you are configuring a cluster-wide proxy, you have verified that the proxy is accessible from the VPC that the cluster is being installed into.
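Two of the prerequisites above can be completed from the gcloud CLI. A sketch, assuming you are authenticated and <project_id> is your GCP project ID (the key file name matches the prerequisite above):

```shell
# Enable the Google Cloud Resource Manager API in the project.
gcloud services enable cloudresourcemanager.googleapis.com \
    --project <project_id>

# Create a key for the osd-ccs-admin service account and export it
# to osServiceAccount.json, the file name the installer expects.
gcloud iam service-accounts keys create osServiceAccount.json \
    --iam-account "osd-ccs-admin@<project_id>.iam.gserviceaccount.com"
```

Keep the exported key file secure; it carries the long-lived credentials that the service-account authentication type depends on.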
Procedure
- Log in to OpenShift Cluster Manager and click Create cluster.
- On the Create an OpenShift cluster page, select Create cluster in the Red Hat OpenShift Dedicated row.
Under Billing model, configure the subscription type and infrastructure type:
Select a subscription type. For information about OpenShift Dedicated subscription options, see Cluster subscriptions and registration in the OpenShift Cluster Manager documentation.
NoteThe subscription types that are available to you depend on your OpenShift Dedicated subscriptions and resource quotas. For more information, contact your sales representative or Red Hat support.
- Select the Customer Cloud Subscription infrastructure type to deploy OpenShift Dedicated in an existing cloud provider account that you own.
- Click Next.
- Select Run on Google Cloud Platform.
Select either Service account or Workload Identity Federation as the Authentication type.
NoteFor more information about authentication types, click the question icon located next to Authentication type.
- Review and complete the listed Prerequisites.
- Select the checkbox to acknowledge that you have read and completed all of the prerequisites.
- If you selected Service account as the Authentication type, provide your GCP service account private key in JSON format. You can either click Browse to locate and attach a JSON file or add the details in the Service account JSON field.
If you selected Workload Identity Federation as the Authentication type, you must first create a new WIF configuration. Open a terminal window and run the following ocm CLI command:
$ ocm gcp create wif-config --name <wif_name> \
    --project <gcp_project_id>
- Select a configured WIF configuration from the WIF configuration drop-down list. If you want to select the WIF configuration you created in the last step, click Refresh first.
- Click Next to validate your cloud provider account and go to the Cluster details page.
On the Cluster details page, provide a name for your cluster and specify the cluster details:
- Add a Cluster name.
Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string.
To customize the subdomain prefix, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.
Select a cluster version from the Version drop-down menu.
NoteWorkload Identity Federation (WIF) is only supported on OpenShift Dedicated version 4.17 and later.
- Select a cloud provider region from the Region drop-down menu.
- Select a Single zone or Multi-zone configuration.
Optional: Select Enable Secure Boot for Shielded VMs to use Shielded VMs when installing your cluster. For more information, see Shielded VMs.
ImportantTo successfully create a cluster, you must select Enable Secure Boot support for Shielded VMs if your organization has the policy constraint constraints/compute.requireShieldedVm enabled. For more information regarding GCP organizational policy constraints, see Organization policy constraints.
- Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.
Optional: Expand Advanced Encryption to make changes to encryption settings.
Select Use Custom KMS keys to use custom KMS keys. If you prefer not to use custom KMS keys, leave the default setting Use default KMS Keys.
ImportantTo use custom KMS keys, the IAM service account osd-ccs-admin must be granted the Cloud KMS CryptoKey Encrypter/Decrypter role. For more information about granting roles on a resource, see Granting roles on a resource.
With Use Custom KMS keys selected:
- Select a key ring location from the Key ring location drop-down menu.
- Select a key ring from the Key ring drop-down menu.
- Select a key name from the Key name drop-down menu.
- Provide the KMS Service Account.
Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated.
NoteIf Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography.
Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but the keys are not. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default.
NoteBy enabling etcd encryption for the key values in etcd, you will incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case.
- Click Next.
On the Default machine pool page, select a Compute node instance type and a Compute node count. The number and types of nodes that are available depend on your OpenShift Dedicated subscription. If you are using multiple availability zones, the compute node count is per zone.
NoteAfter your cluster is created, you can change the number of compute nodes in your cluster, but you cannot change the compute node instance type in a machine pool. The number and types of nodes available to you depend on your OpenShift Dedicated subscription.
Optional: Expand Edit node labels to add labels to your nodes. Click Add label to add more node labels and select Next.
ImportantThis step refers to labels within Kubernetes, not Google Cloud. For more information regarding Kubernetes labels, see Labels and Selectors.
On the Network configuration page, select Public or Private to use either public or private API endpoints and application routes for your cluster. If you select Private, Use Private Service Connect is selected by default. Private Service Connect (PSC) is Google Cloud’s security-enhanced networking feature. You can disable PSC by clicking the Use Private Service Connect checkbox.
NoteRed Hat recommends using Private Service Connect when deploying a private OpenShift Dedicated cluster on Google Cloud. Private Service Connect ensures there is a secured, private connectivity between Red Hat infrastructure, Site Reliability Engineering (SRE) and private OpenShift Dedicated clusters.
ImportantIf you are using private API endpoints, you cannot access your cluster until you update the network settings in your cloud provider account.
Optional: To install the cluster in an existing GCP Virtual Private Cloud (VPC):
Select Install into an existing VPC.
ImportantPrivate Service Connect is supported only with Install into an existing VPC.
If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select Configure a cluster-wide proxy.
ImportantIn order to configure a cluster-wide proxy for your cluster, you must first create the Cloud network address translation (NAT) and a Cloud router. See the Additional resources section for more information.
Accept the default application ingress settings, or to create your own custom settings, select Custom Settings.
- Optional: Provide route selector.
- Optional: Provide excluded namespaces.
- Select a namespace ownership policy.
Select a wildcard policy.
For more information about custom application ingress settings, click on the information icon provided for each setting.
- Click Next.
Optional: To install the cluster into a GCP Shared VPC:
ImportantTo install a cluster into a Shared VPC, you must use OpenShift Dedicated version 4.13.15 or later. Additionally, the VPC owner of the host project must enable a project as a host project in their Google Cloud console. For more information, see Enable a host project.
- Select Install into GCP Shared VPC.
Specify the Host project ID. If the specified host project ID is incorrect, cluster creation fails.
ImportantOnce you complete the steps within the cluster configuration wizard and click Create Cluster, the cluster will go into the "Installation Waiting" state. At this point, you must contact the VPC owner of the host project, who must assign the dynamically-generated service account the following roles: Compute Network Administrator, Compute Security Administrator, Project IAM Admin, and DNS Administrator. The VPC owner of the host project has 30 days to grant the listed permissions before the cluster creation fails. For information about Shared VPC permissions, see Provision Shared VPC.
If you opted to install the cluster in an existing GCP VPC, provide your Virtual Private Cloud (VPC) subnet settings and select Next. You must have created the Cloud network address translation (NAT) and a Cloud router. See the "Additional resources" section for information about Cloud NATs and Google VPCs.
NoteIf you are installing a cluster into a Shared VPC, the VPC name and subnets are shared from the host project.
If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the Cluster-wide proxy page:
Enter a value in at least one of the following fields:
- Specify a valid HTTP proxy URL.
- Specify a valid HTTPS proxy URL.
- In the Additional trust bundle field, provide a PEM encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments.
Click Next.
For more information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy.
In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.
NoteIf you are installing into a VPC, the Machine CIDR range must match the VPC subnets.
ImportantCIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding.
On the Cluster update strategy page, configure your update preferences:
Choose a cluster update method:
- Select Individual updates if you want to schedule each update individually. This is the default option.
Select Recurring updates to update your cluster on your preferred day and start time, when updates are available.
NoteYou can review the end-of-life dates in the update lifecycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle.
Provide administrator approval based on your cluster update method:
- Individual updates: If you select an update version that requires approval, provide an administrator’s acknowledgment and click Approve and continue.
- Recurring updates: If you selected recurring updates for your cluster, provide an administrator’s acknowledgment and click Approve and continue. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment.
- If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.
- Optional: You can set a grace period for Node draining during cluster upgrades. A 1 hour grace period is set by default.
Click Next.
NoteIn the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings.
- Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete.
Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable, which is located directly under Delete Protection: Disabled. This will prevent your cluster from being deleted. To disable delete protection, select Disable. By default, clusters are created with the delete protection feature disabled.
Note: If you delete a cluster that was installed into a GCP Shared VPC, inform the VPC owner of the host project to remove the IAM policy roles granted to the service account that was referenced during cluster creation.
Verification
- You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready.
4.3. Creating a cluster on GCP with Google Cloud Marketplace
When creating an OpenShift Dedicated (OSD) cluster on Google Cloud through the OpenShift Cluster Manager Hybrid Cloud Console, customers can select Google Cloud Marketplace as their preferred billing model. This billing model allows Red Hat customers to take advantage of their Google Committed Use Discounts (CUD) towards OpenShift Dedicated purchased through the Google Cloud Marketplace. Additionally, OSD pricing is consumption-based and customers are billed directly through their Google Cloud account.
Procedure
- Log in to OpenShift Cluster Manager and click Create cluster.
- In the Cloud tab, click Create cluster in the Red Hat OpenShift Dedicated row.
Under Billing model, configure the subscription type and infrastructure type:
- Select the On-Demand subscription type.
- From the drop-down menu, select Google Cloud Marketplace.
- Select the Customer Cloud Subscription infrastructure type.
- Click Next.
- On the Cloud provider page, select Run on Google Cloud Platform.
Select either Service account or Workload Identity Federation as the Authentication type.
Note: For more information about authentication types, click the question icon located next to Authentication type.
- Review and complete the listed Prerequisites.
- Select the checkbox to acknowledge that you have read and completed all of the prerequisites.
- If you selected Service account as the Authentication type, provide your GCP service account private key in JSON format. You can either click Browse to locate and attach a JSON file or add the details in the Service account JSON field.
If you selected Workload Identity Federation as the Authentication type, you must first create a new WIF configuration. Open a terminal window and run the following ocm CLI command, where <wif_name> is a name for the new WIF configuration and <gcp_project_id> is the ID of the Google Cloud project in which to create it:

$ ocm gcp create wif-config --name <wif_name> --project <gcp_project_id>
- Select a configured WIF configuration from the WIF configuration drop-down list. If you want to select the WIF configuration you created in the last step, click Refresh first.
- Click Next to validate your cloud provider account and go to the Cluster details page.
On the Cluster details page, provide a name for your cluster and specify the cluster details:
- Add a Cluster name.
Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string.
To customize the subdomain, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.
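The length rule above can be illustrated with a small shell sketch (hypothetical cluster names; the placeholder on the second branch stands in for the random string that OpenShift Cluster Manager generates, not the actual algorithm):

```shell
# Domain prefix selection: names of 15 characters or fewer are used
# as-is; longer names get a randomly generated 15-character prefix.
for name in my-cluster my-very-long-cluster-name; do
  if [ "${#name}" -le 15 ]; then
    echo "$name -> domain prefix: $name"
  else
    echo "$name -> domain prefix: <randomly generated 15-character string>"
  fi
done
```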
Select a cluster version from the Version drop-down menu.
Note: Workload Identity Federation (WIF) is supported only on OpenShift Dedicated version 4.17 and later.
- Select a cloud provider region from the Region drop-down menu.
- Select a Single zone or Multi-zone configuration.
Optional: Select Enable Secure Boot for Shielded VMs to use Shielded VMs when installing your cluster. For more information, see Shielded VMs.
Important: To successfully create a cluster, you must select Enable Secure Boot support for Shielded VMs if your organization has the policy constraint constraints/compute.requireShieldedVm enabled. For more information regarding GCP organizational policy constraints, see Organization policy constraints.
- Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.
Optional: Expand Advanced Encryption to make changes to encryption settings.
Select Use Custom KMS keys to use custom KMS keys. If you prefer not to use custom KMS keys, leave the default setting Use default KMS Keys.
Important: To use custom KMS keys, the IAM service account osd-ccs-admin must be granted the Cloud KMS CryptoKey Encrypter/Decrypter role. For more information about granting roles on a resource, see Granting roles on a resource.
With Use Custom KMS keys selected:
- Select a key ring location from the Key ring location drop-down menu.
- Select a key ring from the Key ring drop-down menu.
- Select a key name from the Key name drop-down menu.
- Provide the KMS Service Account.
Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated.
Note: If Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography.
Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but the keys are not. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default.
Note: By enabling etcd encryption for the key values in etcd, you incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case.
- Click Next.
On the Default machine pool page, select a Compute node instance type and a Compute node count. The number and types of nodes that are available depend on your OpenShift Dedicated subscription. If you are using multiple availability zones, the compute node count is per zone.
Note: After your cluster is created, you can change the number of compute nodes, but you cannot change the compute node instance type in a created machine pool. You can add machine pools after installation that use a customized instance type. The number and types of nodes available to you depend on your OpenShift Dedicated subscription.
Optional: Expand Add node labels to add labels to your nodes. Click Add additional label to add more node labels.
Important: This step refers to labels within Kubernetes, not Google Cloud. For more information regarding Kubernetes labels, see Labels and Selectors.
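Because these are Kubernetes labels, each label name and value must follow the Kubernetes label syntax. A rough shell sketch of that check (hypothetical label names and values; the regex covers the common name syntax and the 63-character limit, but not the optional DNS-subdomain prefix that label keys may carry):

```shell
# Validate candidate label names/values: must start and end with an
# alphanumeric character, may contain dashes, underscores, and dots in
# between, and may be at most 63 characters long.
re='^[A-Za-z0-9]([-A-Za-z0-9_.]{0,61}[A-Za-z0-9])?$'
for part in team frontend environment production; do
  if printf '%s\n' "$part" | grep -Eq "$re"; then
    echo "ok: $part"
  else
    echo "invalid: $part"
  fi
done
```

All four example values pass; a value such as -frontend, with a leading dash, would be reported as invalid.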
- Click Next.
In the Cluster privacy dialog, select Public or Private to use either public or private API endpoints and application routes for your cluster. If you select Private, Use Private Service Connect is selected by default. Private Service Connect (PSC) is Google Cloud’s security-enhanced networking feature. You can disable PSC by clearing the Use Private Service Connect checkbox.
Note: Red Hat recommends using Private Service Connect when deploying a private OpenShift Dedicated cluster on Google Cloud. Private Service Connect provides secure, private connectivity between Red Hat infrastructure, Site Reliability Engineering (SRE), and private OpenShift Dedicated clusters.
Optional: To install the cluster in an existing GCP Virtual Private Cloud (VPC):
Select Install into an existing VPC.
Important: Private Service Connect is supported only with Install into an existing VPC.
If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select Configure a cluster-wide proxy.
Important: To configure a cluster-wide proxy for your cluster, you must first create a Cloud network address translation (NAT) gateway and a Cloud Router. See the Additional resources section for more information.
Accept the default application ingress settings, or to create your own custom settings, select Custom Settings.
- Optional: Provide a route selector.
- Optional: Provide excluded namespaces.
- Select a namespace ownership policy.
Select a wildcard policy.
For more information about custom application ingress settings, click the information icon provided for each setting.
- Click Next.
Optional: To install the cluster into a GCP Shared VPC:
Important: To install a cluster into a Shared VPC, you must use OpenShift Dedicated version 4.13.15 or later. Additionally, the VPC owner of the host project must enable a project as a host project in their Google Cloud console. For more information, see Enable a host project.
- Select Install into GCP Shared VPC.
Specify the Host project ID. If the specified host project ID is incorrect, cluster creation fails.
Important: After you complete the steps within the cluster configuration wizard and click Create Cluster, the cluster goes into the "Installation Waiting" state. At this point, you must contact the VPC owner of the host project, who must assign the dynamically generated service account the following roles: Compute Network Administrator, Compute Security Administrator, Project IAM Admin, and DNS Administrator. The VPC owner of the host project has 30 days to grant the listed permissions before the cluster creation fails. For information about Shared VPC permissions, see Provision Shared VPC.
If you opted to install the cluster in an existing GCP VPC, provide your Virtual Private Cloud (VPC) subnet settings and select Next.
Note: If you are installing a cluster into a Shared VPC, the VPC name and subnets are shared from the host project.
- Click Next.
If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the Cluster-wide proxy page:
Enter a value in at least one of the following fields:
- Specify a valid HTTP proxy URL.
- Specify a valid HTTPS proxy URL.
- In the Additional trust bundle field, provide a PEM-encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy, unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments.
Click Next.
For more information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy.
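Before pasting a certificate bundle into the Additional trust bundle field, it can help to confirm that the file really is a PEM-encoded X.509 certificate. A minimal sketch using openssl (the self-signed certificate generated here is only a stand-in for your proxy CA's certificate; the paths and CN value are arbitrary):

```shell
# Generate a stand-in CA certificate (in practice, use the CA certificate
# that signs your TLS-inspecting proxy's identity certificate).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.pem -days 1 -subj "/CN=example-proxy-ca"

# Confirm the file parses as a PEM-encoded X.509 certificate and inspect
# its subject before adding it to the trust bundle.
openssl x509 -in /tmp/ca.pem -noout -subject
```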
In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.
Important: CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding. If the cluster privacy is set to Private, you cannot access your cluster until you configure private connections in your cloud provider.
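Because CIDR ranges cannot be changed after the cluster is created, it is worth verifying up front that the ranges you plan to enter do not overlap. One possible check from a shell with python3 available (the example ranges are illustrative only, not recommendations):

```shell
# Pairwise overlap check for the machine, service, and pod CIDR ranges.
python3 - <<'EOF'
import ipaddress, itertools

ranges = ["10.0.0.0/16", "172.30.0.0/16", "10.128.0.0/14"]
nets = [ipaddress.ip_network(r) for r in ranges]
for a, b in itertools.combinations(nets, 2):
    if a.overlaps(b):
        raise SystemExit(f"overlap detected: {a} and {b}")
print("no overlapping ranges")
EOF
```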
On the Cluster update strategy page, configure your update preferences:
Choose a cluster update method:
- Select Individual updates if you want to schedule each update individually. This is the default option.
- Select Recurring updates to update your cluster on your preferred day and start time, when updates are available.
Note: You can review the end-of-life dates in the update life cycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle.
Provide administrator approval based on your cluster update method:
- Individual updates: If you select an update version that requires approval, provide an administrator’s acknowledgment and click Approve and continue.
- Recurring updates: If you selected recurring updates for your cluster, provide an administrator’s acknowledgment and click Approve and continue. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment.
- If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.
- Optional: You can set a grace period for node draining during cluster upgrades. A 1-hour grace period is set by default.
Click Next.
Note: In the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings.
- Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete.
Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable, which is located directly under Delete Protection: Disabled. Enabling this feature prevents your cluster from being deleted. To disable delete protection, select Disable. By default, clusters are created with the delete protection feature disabled.
Verification
- You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready.
4.4. Creating a cluster on GCP with a Red Hat cloud account
Through OpenShift Cluster Manager, you can create an OpenShift Dedicated cluster on Google Cloud Platform (GCP) using a standard cloud provider account owned by Red Hat.
Procedure
- Log in to OpenShift Cluster Manager and click Create cluster.
- In the Cloud tab, click Create cluster in the Red Hat OpenShift Dedicated row.
Under Billing model, configure the subscription type and infrastructure type:
Select the Annual subscription type. Only the Annual subscription type is available when you deploy a cluster using a Red Hat cloud account.
For information about OpenShift Dedicated subscription options, see Cluster subscriptions and registration in the OpenShift Cluster Manager documentation.
Note: You must have the required resource quota for the Annual subscription type to be available. For more information, contact your sales representative or Red Hat support.
- Select the Red Hat cloud account infrastructure type to deploy OpenShift Dedicated in a cloud provider account that is owned by Red Hat.
- Click Next.
- Select Run on Google Cloud Platform and click Next.
On the Cluster details page, provide a name for your cluster and specify the cluster details:
- Add a Cluster name.
Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string.
To customize the subdomain, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.
- Select a cluster version from the Version drop-down menu.
- Select a cloud provider region from the Region drop-down menu.
- Select a Single zone or Multi-zone configuration.
- Select a Persistent storage capacity for the cluster. For more information, see the Storage section in the OpenShift Dedicated service definition.
- Specify the number of Load balancers that you require for your cluster. For more information, see the Load balancers section in the OpenShift Dedicated service definition.
Optional: Select Enable Secure Boot for Shielded VMs to use Shielded VMs when installing your cluster. For more information, see Shielded VMs.
Important: To successfully create a cluster, you must select Enable Secure Boot support for Shielded VMs if your organization has the policy constraint constraints/compute.requireShieldedVm enabled. For more information regarding GCP organizational policy constraints, see Organization policy constraints.
- Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.
Optional: Expand Advanced Encryption to make changes to encryption settings.
Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated.
Note: If Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography.
Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but not the keys. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default.
Note: By enabling etcd encryption for the key values in etcd, you incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case.
- Click Next.
On the Default machine pool page, select a Compute node instance type and a Compute node count. The number and types of nodes that are available depend on your OpenShift Dedicated subscription. If you are using multiple availability zones, the compute node count is per zone.
Note: After your cluster is created, you can change the number of compute nodes, but you cannot change the compute node instance type in a machine pool. For clusters that use the CCS model, you can add machine pools after installation that use a different instance type. The number and types of nodes available to you depend on your OpenShift Dedicated subscription.
- Optional: Expand Edit node labels to add labels to your nodes. Click Add label to add more node labels and select Next.
- In the Cluster privacy dialog, select Public or Private to use either public or private API endpoints and application routes for your cluster.
- Click Next.
In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.
Important: CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding. If the cluster privacy is set to Private, you cannot access your cluster until you configure private connections in your cloud provider.
On the Cluster update strategy page, configure your update preferences:
Choose a cluster update method:
- Select Individual updates if you want to schedule each update individually. This is the default option.
- Select Recurring updates to update your cluster on your preferred day and start time, when updates are available.
Note: You can review the end-of-life dates in the update life cycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle.
Provide administrator approval based on your cluster update method:
- Individual updates: If you select an update version that requires approval, provide an administrator’s acknowledgment and click Approve and continue.
- Recurring updates: If you selected recurring updates for your cluster, provide an administrator’s acknowledgment and click Approve and continue. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment.
- If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.
- Optional: You can set a grace period for node draining during cluster upgrades. A 1-hour grace period is set by default.
Click Next.
Note: In the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings.
- Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete.
Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable, which is located directly under Delete Protection: Disabled. Enabling this feature prevents your cluster from being deleted. To disable delete protection, select Disable. By default, clusters are created with the delete protection feature disabled.
Verification
- You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready.
4.5. Creating a cluster on GCP with Red Hat Marketplace
When creating an OpenShift Dedicated (OSD) cluster on Google Cloud through the OpenShift Cluster Manager Hybrid Cloud Console, customers can select Red Hat Marketplace as their preferred billing model. OSD pricing is consumption-based and customers are billed directly through their Red Hat Marketplace account.
Procedure
- Log in to OpenShift Cluster Manager and click Create cluster.
- In the Cloud tab, click Create cluster in the Red Hat OpenShift Dedicated row.
Under Billing model, configure the subscription type and infrastructure type:
- Select the On-Demand subscription type.
- From the drop-down menu, select Red Hat Marketplace.
- Click Next.
- On the Cloud provider page, select Run on Google Cloud Platform.
Select either Service account or Workload Identity Federation as the Authentication type.
Note: For more information about authentication types, click the question icon located next to Authentication type.
- Review and complete the listed Prerequisites.
- Select the checkbox to acknowledge that you have read and completed all of the prerequisites.
- If you selected Service account as the Authentication type, provide your GCP service account private key in JSON format. You can either click Browse to locate and attach a JSON file or add the details in the Service account JSON field.
If you selected Workload Identity Federation as the Authentication type, you must first create a new WIF configuration. Open a terminal window and run the following ocm CLI command, where <wif_name> is a name for the new WIF configuration and <gcp_project_id> is the ID of the Google Cloud project in which to create it:

$ ocm gcp create wif-config --name <wif_name> --project <gcp_project_id>
Select a configured WIF configuration from the WIF configuration drop-down list. If you want to select the WIF configuration you created in the last step, click Refresh first.
- Click Next to validate your cloud provider account and go to the Cluster details page.
On the Cluster details page, provide a name for your cluster and specify the cluster details:
- Add a Cluster name.
Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string.
To customize the subdomain, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.
Select a cluster version from the Version drop-down menu.
Note: Workload Identity Federation (WIF) is supported only on OpenShift Dedicated version 4.17 and later.
- Select a cloud provider region from the Region drop-down menu.
- Select a Single zone or Multi-zone configuration.
Optional: Select Enable Secure Boot for Shielded VMs to use Shielded VMs when installing your cluster. For more information, see Shielded VMs.
Important: To successfully create a cluster, you must select Enable Secure Boot support for Shielded VMs if your organization has the policy constraint constraints/compute.requireShieldedVm enabled. For more information regarding GCP organizational policy constraints, see Organization policy constraints.
- Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.
Optional: Expand Advanced Encryption to make changes to encryption settings.
Select Use Custom KMS keys to use custom KMS keys. If you prefer not to use custom KMS keys, leave the default setting Use default KMS Keys.
Important: To use custom KMS keys, the IAM service account osd-ccs-admin must be granted the Cloud KMS CryptoKey Encrypter/Decrypter role. For more information about granting roles on a resource, see Granting roles on a resource.
With Use Custom KMS keys selected:
- Select a key ring location from the Key ring location drop-down menu.
- Select a key ring from the Key ring drop-down menu.
- Select a key name from the Key name drop-down menu.
- Provide the KMS Service Account.
Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated.
Note: If Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography.
Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but not the keys. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default.
Note: By enabling etcd encryption for the key values in etcd, you incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case.
- Click Next.
On the Default machine pool page, select a Compute node instance type and a Compute node count. The number and types of nodes that are available depend on your OpenShift Dedicated subscription. If you are using multiple availability zones, the compute node count is per zone.
Note: After your cluster is created, you can change the number of compute nodes, but you cannot change the compute node instance type in a created machine pool. You can add machine pools after installation that use a customized instance type. The number and types of nodes available to you depend on your OpenShift Dedicated subscription.
Optional: Expand Add node labels to add labels to your nodes. Click Add additional label to add more node labels.
Important: This step refers to labels within Kubernetes, not Google Cloud. For more information regarding Kubernetes labels, see Labels and Selectors.
- Click Next.
In the Cluster privacy dialog, select Public or Private to use either public or private API endpoints and application routes for your cluster. If you select Private, Use Private Service Connect is selected by default. Private Service Connect (PSC) is Google Cloud’s security-enhanced networking feature. You can disable PSC by clearing the Use Private Service Connect checkbox.
Note: Red Hat recommends using Private Service Connect when deploying a private OpenShift Dedicated cluster on Google Cloud. Private Service Connect provides secure, private connectivity between Red Hat infrastructure, Site Reliability Engineering (SRE), and private OpenShift Dedicated clusters.
Optional: To install the cluster in an existing GCP Virtual Private Cloud (VPC):
Select Install into an existing VPC.
Important: Private Service Connect is supported only with Install into an existing VPC.
If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select Configure a cluster-wide proxy.
Important: To configure a cluster-wide proxy for your cluster, you must first create a Cloud network address translation (NAT) gateway and a Cloud Router. See the Additional resources section for more information.
Accept the default application ingress settings, or to create your own custom settings, select Custom Settings.
- Optional: Provide a route selector.
- Optional: Provide excluded namespaces.
- Select a namespace ownership policy.
Select a wildcard policy.
For more information about custom application ingress settings, click the information icon provided for each setting.
- Click Next.
Optional: To install the cluster into a GCP Shared VPC:
Important: To install a cluster into a GCP Shared VPC, you must use OpenShift Dedicated version 4.13.15 or later. Additionally, the VPC owner of the host project must enable a project as a host project in their Google Cloud console. For more information, see Enable a host project.
- Select Install into GCP Shared VPC.
Specify the Host project ID. If the specified host project ID is incorrect, cluster creation fails.
Important: After you complete the steps within the cluster configuration wizard and click Create Cluster, the cluster goes into the "Installation Waiting" state. At this point, you must contact the VPC owner of the host project, who must assign the dynamically generated service account the following roles: Compute Network Administrator, Compute Security Administrator, Project IAM Admin, and DNS Administrator. The VPC owner of the host project has 30 days to grant the listed permissions before the cluster creation fails. For information about Shared VPC permissions, see Provision Shared VPC.
If you opted to install the cluster into an existing VPC, provide your Virtual Private Cloud (VPC) subnet settings and select Next.
Note: If you are installing a cluster into a GCP Shared VPC, the VPC name and subnets are shared from the host project.
- Click Next.
If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the Cluster-wide proxy page:
Enter a value in at least one of the following fields:
- Specify a valid HTTP proxy URL.
- Specify a valid HTTPS proxy URL.
- In the Additional trust bundle field, provide a PEM-encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy, unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments.
Click Next.
For more information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy.
In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.
Important: CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding. If the cluster privacy is set to Private, you cannot access your cluster until you configure private connections in your cloud provider.
On the Cluster update strategy page, configure your update preferences:
Choose a cluster update method:
- Select Individual updates if you want to schedule each update individually. This is the default option.
- Select Recurring updates to update your cluster on your preferred day and start time, when updates are available.
Note: You can review the end-of-life dates in the update lifecycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle.
Provide administrator approval based on your cluster update method:
- Individual updates: If you select an update version that requires approval, provide an administrator’s acknowledgment and click Approve and continue.
- Recurring updates: If you selected recurring updates for your cluster, provide an administrator’s acknowledgment and click Approve and continue. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment.
- If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.
- Optional: You can set a grace period for node draining during cluster upgrades. A 1-hour grace period is set by default.
Click Next.
Note: In the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings.
- Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete.
Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable, which is located directly under Delete Protection: Disabled. This will prevent your cluster from being deleted. To disable delete protection, select Disable. By default, clusters are created with the delete protection feature disabled.
Verification
- You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready.
4.6. Additional resources
- For information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy.
- For information about persistent storage for OpenShift Dedicated, see the Storage section in the OpenShift Dedicated service definition.
- For information about load balancers for OpenShift Dedicated, see the Load balancers section in the OpenShift Dedicated service definition.
- For more information about etcd encryption, see the etcd encryption service definition.
- For information about the end-of-life dates for OpenShift Dedicated versions, see the OpenShift Dedicated update life cycle.
- For general information about Cloud network address translation (NAT), which is required for the cluster-wide proxy, see Cloud NAT overview in the Google documentation.
- For general information on Cloud routers that are required for the cluster-wide proxy, see Cloud Router overview in the Google documentation.
- For information on creating VPCs within your Google Cloud Provider account, see Create and manage VPC networks in the Google documentation.
Chapter 5. Configuring identity providers
After your OpenShift Dedicated cluster is created, you must configure identity providers to determine how users log in to access the cluster.
5.1. Understanding identity providers
OpenShift Dedicated includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to specify an identity provider after you install your cluster. Configuring identity providers allows users to log in and access the cluster.
5.1.1. Supported identity providers
You can configure the following types of identity providers:
| Identity provider | Description |
|---|---|
| GitHub or GitHub Enterprise | Configure a GitHub identity provider to validate user names and passwords against GitHub or GitHub Enterprise’s OAuth authentication server. |
| GitLab | Configure a GitLab identity provider to use GitLab.com or any other GitLab instance as an identity provider. |
| Google | Configure a Google identity provider using Google’s OpenID Connect integration. |
| LDAP | Configure an LDAP identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. |
| OpenID Connect | Configure an OpenID Connect (OIDC) identity provider to integrate with an OIDC identity provider using an Authorization Code Flow. |
| htpasswd | Configure an htpasswd identity provider for a single, static administration user. You can log in to the cluster as the user to troubleshoot issues. Important: The htpasswd identity provider option is included only to enable the creation of a single, static administration user. htpasswd is not supported as a general-use identity provider for OpenShift Dedicated. For the steps to configure the single user, see Configuring an htpasswd identity provider. |
5.1.2. Identity provider parameters
The following parameters are common to all identity providers:
| Parameter | Description |
|---|---|
| `name` | The provider name is prefixed to provider user names to form an identity name. |
| `mappingMethod` | Defines how new identities are mapped to users when they log in. Enter one of the following values: `claim`, `lookup`, `generate`, or `add`. |
When adding or changing identity providers, you can map identities from the new provider to existing users by setting the `mappingMethod` parameter to `add`.
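In OpenShift Dedicated you set these parameters through OpenShift Cluster Manager rather than by editing cluster resources, but they correspond to fields on the cluster's `OAuth` resource. A hedged sketch, with a hypothetical GitHub provider named `my-github`:

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my-github        # prefixed to user names, forming identities such as my-github:alice
    mappingMethod: add     # map identities from this provider onto existing users
    type: GitHub
```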
5.2. Configuring a GitHub identity provider
Configure a GitHub identity provider to validate user names and passwords against GitHub or GitHub Enterprise’s OAuth authentication server and access your OpenShift Dedicated cluster. OAuth facilitates a token exchange flow between OpenShift Dedicated and GitHub or GitHub Enterprise.
Configuring GitHub authentication allows users to log in to OpenShift Dedicated with their GitHub credentials. To prevent anyone with any GitHub user ID from logging in to your OpenShift Dedicated cluster, you must restrict access to only those in specific GitHub organizations or teams.
Prerequisites
- The OAuth application must be created directly within the GitHub organization settings by the GitHub organization administrator.
- GitHub organizations or teams are set up in your GitHub account.
Procedure
- From OpenShift Cluster Manager, navigate to the Cluster List page and select the cluster that you need to configure identity providers for.
- Click the Access control tab.
Click Add identity provider.
Note: You can also click the Add OAuth configuration link in the warning message displayed after cluster creation to configure your identity providers.
- Select GitHub from the drop-down menu.
Enter a unique name for the identity provider. This name cannot be changed later.
An OAuth callback URL is automatically generated in the provided field. You will use this to register the GitHub application.
https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name>
For example:
https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/github
- Register an application on GitHub.
- Return to OpenShift Dedicated and select a mapping method from the drop-down menu. Claim is recommended in most cases.
- Enter the Client ID and Client secret provided by GitHub.
- Enter a hostname. A hostname must be entered when using a hosted instance of GitHub Enterprise.
- Optional: You can use a certificate authority (CA) file to validate server certificates for the configured GitHub Enterprise URL. Click Browse to locate and attach a CA file to the identity provider.
- Select Use organizations or Use teams to restrict access to a particular GitHub organization or a GitHub team.
- Enter the name of the organization or team you would like to restrict access to. Click Add more to specify multiple organizations or teams that users can be a member of.
- Click Confirm.
Verification
- The configured identity provider is now visible on the Access control tab of the Cluster List page.
5.3. Configuring a GitLab identity provider
Configure a GitLab identity provider to use GitLab.com or any other GitLab instance as an identity provider.
Prerequisites
- If you use GitLab version 7.7.0 to 11.0, you connect using the OAuth integration. If you use GitLab version 11.1 or later, you can use OpenID Connect (OIDC) to connect instead of OAuth.
Procedure
- From OpenShift Cluster Manager, navigate to the Cluster List page and select the cluster that you need to configure identity providers for.
- Click the Access control tab.
Click Add identity provider.
Note: You can also click the Add OAuth configuration link in the warning message displayed after cluster creation to configure your identity providers.
- Select GitLab from the drop-down menu.
Enter a unique name for the identity provider. This name cannot be changed later.
An OAuth callback URL is automatically generated in the provided field. You will provide this URL to GitLab.
https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name>
For example:
https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/gitlab
- Add a new application in GitLab.
- Return to OpenShift Dedicated and select a mapping method from the drop-down menu. Claim is recommended in most cases.
- Enter the Client ID and Client secret provided by GitLab.
- Enter the URL of your GitLab provider.
- Optional: You can use a certificate authority (CA) file to validate server certificates for the configured GitLab URL. Click Browse to locate and attach a CA file to the identity provider.
- Click Confirm.
Verification
- The configured identity provider is now visible on the Access control tab of the Cluster List page.
5.4. Configuring a Google identity provider
Configure a Google identity provider to allow users to authenticate with their Google credentials.
Using Google as an identity provider allows any Google user to authenticate to your server. You can limit authentication to members of a specific hosted domain with the `hostedDomain` configuration attribute.
Procedure
- From OpenShift Cluster Manager, navigate to the Cluster List page and select the cluster that you need to configure identity providers for.
- Click the Access control tab.
Click Add identity provider.
Note: You can also click the Add OAuth configuration link in the warning message displayed after cluster creation to configure your identity providers.
- Select Google from the drop-down menu.
Enter a unique name for the identity provider. This name cannot be changed later.
An OAuth callback URL is automatically generated in the provided field. You will provide this URL to Google.
https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name>
For example:
https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/google
- Configure a Google identity provider using Google’s OpenID Connect integration.
- Return to OpenShift Dedicated and select a mapping method from the drop-down menu. Claim is recommended in most cases.
- Enter the Client ID of a registered Google project and the Client secret issued by Google.
- Enter a hosted domain to restrict users to a Google Apps domain.
- Click Confirm.
Verification
- The configured identity provider is now visible on the Access control tab of the Cluster List page.
5.5. Configuring an LDAP identity provider
Configure the LDAP identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication.
Prerequisites
When configuring an LDAP identity provider, you need to enter a configured LDAP URL. The configured URL is an RFC 2255 URL, which specifies the LDAP host and search parameters to use. The syntax of the URL is:
ldap://host:port/basedn?attribute?scope?filter
| URL component | Description |
|---|---|
| `ldap` | For regular LDAP, use the string `ldap`. For secure LDAP (LDAPS), use `ldaps` instead. |
| `host:port` | The name and port of the LDAP server. Defaults to `localhost:389` for ldap and `localhost:636` for LDAPS. |
| `basedn` | The DN of the branch of the directory where all searches should start from. At the very least, this must be the top of your directory tree, but it could also specify a subtree in the directory. |
| `attribute` | The attribute to search for. Although RFC 2255 allows a comma-separated list of attributes, only the first attribute will be used, no matter how many are provided. If no attributes are provided, the default is to use `uid`. It is recommended to choose an attribute that will be unique across all entries in the subtree you will be using. |
| `scope` | The scope of the search. Can be either `one` or `sub`. If the scope is not provided, the default is to use a scope of `sub`. |
| `filter` | A valid LDAP search filter. If not provided, defaults to `(objectClass=*)`. |
When doing searches, the attribute, filter, and provided user name are combined to create a search filter that looks like:
(&(<filter>)(<attribute>=<username>))
Important: If the LDAP directory requires authentication to search, specify a `bindDN` and `bindPassword` to use to perform the entry search.
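As an example, the following URL (with a hypothetical host and base DN) searches the `ou=users` subtree over LDAPS and matches entries by `uid`:

```
ldaps://ldap.example.com:636/ou=users,dc=example,dc=com?uid?sub?(objectClass=person)
```

A login attempt by the user `jsmith` against this configuration produces a search filter equivalent to `(&(objectClass=person)(uid=jsmith))`, evaluated beneath `ou=users,dc=example,dc=com`.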
Procedure
- From OpenShift Cluster Manager, navigate to the Cluster List page and select the cluster that you need to configure identity providers for.
- Click the Access control tab.
Click Add identity provider.
Note: You can also click the Add OAuth configuration link in the warning message displayed after cluster creation to configure your identity providers.
- Select LDAP from the drop-down menu.
- Enter a unique name for the identity provider. This name cannot be changed later.
- Select a mapping method from the drop-down menu. Claim is recommended in most cases.
- Enter an LDAP URL to specify the LDAP search parameters to use.
- Optional: Enter a Bind DN and Bind password.
Enter the attributes that will map LDAP attributes to identities.
- Enter an ID attribute whose value should be used as the user ID. Click Add more to add multiple ID attributes.
- Optional: Enter a Preferred username attribute whose value should be used as the display name. Click Add more to add multiple preferred username attributes.
- Optional: Enter an Email attribute whose value should be used as the email address. Click Add more to add multiple email attributes.
- Optional: Click Show advanced Options to add a certificate authority (CA) file to your LDAP identity provider to validate server certificates for the configured URL. Click Browse to locate and attach a CA file to the identity provider.
Optional: Under the advanced options, you can choose to make the LDAP provider Insecure. If you select this option, a CA file cannot be used.
Important: If you are using an insecure LDAP connection (`ldap://` or port 389), then you must check the Insecure option in the configuration wizard.
- Click Confirm.
Verification
- The configured identity provider is now visible on the Access control tab of the Cluster List page.
5.6. Configuring an OpenID identity provider
Configure an OpenID identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow.
The Authentication Operator in OpenShift Dedicated requires that the configured OpenID Connect identity provider implements the OpenID Connect Discovery specification.
Claims are read from the JWT `id_token` returned from the OpenID identity provider and, if specified, from the JSON returned by the Issuer URL.
At least one claim must be configured to use as the user’s identity.
You can also indicate which claims to use as the user’s preferred user name, display name, and email address. If multiple claims are specified, the first one with a non-empty value is used. The standard claims are:
| Claim | Description |
|---|---|
| `preferred_username` | The preferred user name when provisioning a user. A shorthand name that the user wants to be referred to as, such as `janedoe`. |
| `email` | Email address. |
| `name` | Display name. |
See the OpenID claims documentation for more information.
Prerequisites
- Before you configure OpenID Connect, check the installation prerequisites for any Red Hat product or service you want to use with your OpenShift Dedicated cluster.
Procedure
- From OpenShift Cluster Manager, navigate to the Cluster List page and select the cluster that you need to configure identity providers for.
- Click the Access control tab.
Click Add identity provider.
Note: You can also click the Add OAuth configuration link in the warning message displayed after cluster creation to configure your identity providers.
- Select OpenID from the drop-down menu.
Enter a unique name for the identity provider. This name cannot be changed later.
An OAuth callback URL is automatically generated in the provided field.
https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name>
For example:
https://oauth-openshift.apps.openshift-cluster.example.com/oauth2callback/openid
- Register a new OpenID Connect client in the OpenID identity provider by following the steps to create an authorization request.
- Return to OpenShift Dedicated and select a mapping method from the drop-down menu. Claim is recommended in most cases.
- Enter a Client ID and Client secret provided from OpenID.
- Enter an Issuer URL. This is the URL that the OpenID provider asserts as the Issuer Identifier. It must use the https scheme with no URL query parameters or fragments.
- Enter an Email attribute whose value should be used as the email address. Click Add more to add multiple email attributes.
- Enter a Name attribute whose value should be used as the display name. Click Add more to add multiple display names.
- Enter a Preferred username attribute whose value should be used as the preferred username. Click Add more to add multiple preferred usernames.
- Optional: Click Show advanced Options to add a certificate authority (CA) file to your OpenID identity provider.
- Optional: Under the advanced options, you can add Additional scopes. By default, the `openid` scope is requested.
- Click Confirm.
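The Issuer URL must serve an OpenID Connect Discovery document. A quick way to check, sketched here with a hypothetical issuer URL, is to request the well-known metadata path defined by the OpenID Connect Discovery specification:

```shell
# Hypothetical issuer URL; replace with the value from your OpenID provider.
ISSUER="https://issuer.example.com"

# The provider metadata must be served at this well-known path under the
# issuer (trailing slash on the issuer, if any, is stripped first):
DISCOVERY_URL="${ISSUER%/}/.well-known/openid-configuration"
echo "${DISCOVERY_URL}"
# → https://issuer.example.com/.well-known/openid-configuration

# To inspect the metadata (requires network access to the provider):
#   curl -s "${DISCOVERY_URL}"
```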
Verification
- The configured identity provider is now visible on the Access control tab of the Cluster List page.
5.7. Configuring an htpasswd identity provider
Configure an htpasswd identity provider to create a single, static user with cluster administration privileges. You can log in to your cluster as the user to troubleshoot issues.
The htpasswd identity provider option is included only to enable the creation of a single, static administration user. htpasswd is not supported as a general-use identity provider for OpenShift Dedicated.
Procedure
- From OpenShift Cluster Manager, navigate to the Cluster List page and select your cluster.
- Select Access control → Identity providers.
- Click Add identity provider.
- Select HTPasswd from the Identity Provider drop-down menu.
- Add a unique name in the Name field for the identity provider.
Use the suggested username and password for the static user, or create your own.
Note: The credentials defined in this step are not visible after you select Add in the following step. If you lose the credentials, you must recreate the identity provider and define the credentials again.
- Select Add to create the htpasswd identity provider and the single, static user.
Grant the static user permission to manage the cluster:
- Under Access control → Cluster Roles and Access, select Add user.
- Enter the User ID of the static user that you created in the preceding step.
Select a Group.
- If you are installing OpenShift Dedicated using the Customer Cloud Subscription (CCS) infrastructure type, choose either the `dedicated-admins` or `cluster-admins` group. Users in the `dedicated-admins` group have standard administrative privileges for OpenShift Dedicated. Users in the `cluster-admins` group have full administrative access to the cluster.
- If you are installing OpenShift Dedicated using the Red Hat cloud account infrastructure type, the `dedicated-admins` group is automatically selected.
- Select Add user to grant the administration privileges to the user.
Verification
The configured htpasswd identity provider is visible on the Access control → Identity providers page.
Note: After creating the identity provider, synchronization usually completes within two minutes. You can log in to the cluster as the user after the htpasswd identity provider becomes available.
- The single, administrative user is visible on the Access control → Cluster Roles and Access page. The administration group membership of the user is also displayed.
Additional resources
5.8. Accessing your cluster
After you have configured your identity providers, users can access the cluster from Red Hat OpenShift Cluster Manager.
Prerequisites
- You logged in to OpenShift Cluster Manager.
- You created an OpenShift Dedicated cluster.
- You configured an identity provider for your cluster.
- You added your user account to the configured identity provider.
Procedure
- From OpenShift Cluster Manager, click on the cluster you want to access.
- Click Open console to open the web console for your cluster.
- Click on your identity provider and provide your credentials to log in to the cluster. Complete any authorization requests that are presented by your provider.
Chapter 6. Revoking privileges and access to an OpenShift Dedicated cluster
As cluster owner, you can revoke admin privileges and user access to an OpenShift Dedicated cluster.
6.1. Revoking administrator privileges from a user
Follow the steps in this section to revoke `dedicated-admin` privileges from a user.
Prerequisites
- You logged in to OpenShift Cluster Manager.
- You created an OpenShift Dedicated cluster.
- You have configured a GitHub identity provider for your cluster and added an identity provider user.
- You granted `dedicated-admin` privileges to a user.
Procedure
- Navigate to OpenShift Cluster Manager and select your cluster.
- Click the Access control tab.
- In the Cluster Roles and Access tab, select the options menu next to a user and click Delete.
Verification
- After revoking the privileges, the user is no longer listed as part of the `dedicated-admins` group under Access control → Cluster Roles and Access on the OpenShift Cluster Manager page for your cluster.
6.2. Revoking user access to a cluster
You can revoke cluster access from an identity provider user by removing them from your configured identity provider.
You can configure different types of identity providers for your OpenShift Dedicated cluster. The following example procedure revokes cluster access for a member of a GitHub organization or team that is configured for identity provision to the cluster.
Prerequisites
- You have an OpenShift Dedicated cluster.
- You have a GitHub user account.
- You have configured a GitHub identity provider for your cluster and added an identity provider user.
Procedure
- Navigate to github.com and log in to your GitHub account.
Remove the user from your GitHub organization or team:
- If your identity provider configuration uses a GitHub organization, follow the steps in Removing a member from your organization in the GitHub documentation.
- If your identity provider configuration uses a team within a GitHub organization, follow the steps in Removing organization members from a team in the GitHub documentation.
Verification
- After removing the user from your identity provider, the user cannot authenticate into the cluster.
Chapter 7. Deleting an OpenShift Dedicated cluster
As cluster owner, you can delete your OpenShift Dedicated clusters.
7.1. Deleting your cluster
You can delete your OpenShift Dedicated cluster in Red Hat OpenShift Cluster Manager.
Prerequisites
- You logged in to OpenShift Cluster Manager.
- You created an OpenShift Dedicated cluster.
Procedure
- From OpenShift Cluster Manager, click on the cluster you want to delete.
- Select Delete cluster from the Actions drop-down menu.
- Type the name of the cluster highlighted in bold, then click Delete. Cluster deletion occurs automatically.
Note: If you delete a cluster that was installed into a GCP Shared VPC, inform the VPC owner of the host project to remove the IAM policy roles granted to the service account that was referenced during cluster creation.
Legal Notice
Copyright © 2024 Red Hat, Inc.
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.