Getting started


OpenShift Dedicated 4

Getting started with OpenShift Dedicated

Red Hat OpenShift Documentation Team

Abstract

Getting started with your OpenShift Dedicated cluster.

Chapter 1. Understanding your cloud deployment options

You can install OpenShift Dedicated on Amazon Web Services (AWS) or Google Cloud using a cloud account that you own or using a cloud account that is owned by Red Hat. This document provides details about the cloud deployment options for OpenShift Dedicated clusters.

1.1. Overview of the OpenShift Dedicated cloud deployment options

OpenShift Dedicated offers OpenShift Container Platform clusters as a managed service on Amazon Web Services (AWS) or Google Cloud.

Through the Customer Cloud Subscription (CCS) model, you can deploy clusters in an existing AWS or Google Cloud cloud account that you own.

Alternatively, you can install OpenShift Dedicated in a cloud account that is owned by Red Hat.

1.1.1. Deploying clusters using the Customer Cloud Subscription (CCS) model

The Customer Cloud Subscription (CCS) model enables you to deploy Red Hat managed OpenShift Dedicated clusters in an existing Amazon Web Services (AWS) or Google Cloud account that you own. Red Hat requires that several prerequisites be met before providing this service, and the service is supported by Red Hat Site Reliability Engineers (SREs).

In the CCS model, the cloud infrastructure account is part of an organization owned by the customer, with specific access granted to Red Hat. The customer pays Red Hat for the CCS subscription and pays the cloud provider directly for the cloud infrastructure costs.

By using the CCS model, you can use the services that are provided by your cloud provider, in addition to the services provided by Red Hat.

1.1.2. Deploying clusters in Red Hat cloud accounts

As an alternative to the CCS model, you can deploy OpenShift Dedicated clusters in AWS or Google Cloud cloud accounts that are owned by Red Hat. With this model, Red Hat is responsible for the cloud account and the cloud infrastructure costs are paid directly by Red Hat. The customer only pays the Red Hat subscription costs.

1.2. Additional resources

Chapter 2. Getting started with OpenShift Dedicated

Follow this getting started document to create an OpenShift Dedicated cluster, grant user access, deploy your first application, and learn how to scale and delete your cluster.

For OpenShift Dedicated clusters deployed on Google Cloud, Red Hat recommends using Google Cloud Workload Identity Federation (WIF) as the authentication type for installing and interacting with the cluster because it provides enhanced security.

Red Hat also recommends creating OpenShift Dedicated on Google Cloud clusters in Private cluster mode with Private Service Connect (PSC), so that the cluster can be managed and monitored without any public ingress network traffic. For more information, see Private Service Connect overview.

2.1. Prerequisites

2.2. Creating a Workload Identity Federation cluster using the OCM CLI

You can create an OpenShift Dedicated on Google Cloud cluster with Workload Identity Federation (WIF) using the OpenShift Cluster Manager CLI (ocm) in interactive or non-interactive mode.

Prerequisites

  • You have created a WIF configuration. For more information, see "Creating a Workload Identity Federation configuration".
  • You have downloaded the latest version of the OpenShift Cluster Manager CLI (ocm) for your operating system from the Downloads page on OpenShift Cluster Manager.
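
If you have not yet created a WIF configuration, you can create one with the OCM CLI before creating the cluster. The following is a minimal sketch, assuming a recent ocm version; <wif_name> and <gcp_project_id> are placeholder values, and the exact flags can vary between versions, so check ocm gcp create wif-config --help first.

  # Log in to OpenShift Cluster Manager (opens a browser-based authentication flow).
  $ ocm login --use-auth-code

  # Create a WIF configuration in your Google Cloud project (placeholder values).
  $ ocm gcp create wif-config --name <wif_name> --project <gcp_project_id>

  # List existing WIF configurations to confirm that the new configuration is available.
  $ ocm gcp list wif-configs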

Procedure

You can create a WIF cluster using the interactive mode or the non-interactive mode.

In interactive mode, cluster attributes are displayed automatically as prompts during the creation of the cluster. You enter values for these prompts in the fields provided, based on your requirements.

In non-interactive mode, you specify the values for specific parameters within the command.

  • Based on your mode preference, run one of the following commands to create an OpenShift Dedicated cluster on Google Cloud with WIF configuration:

    • Create a cluster in interactive mode by running the following command:

      $ ocm create cluster --interactive

      where:

      --interactive
      Specifies that the cluster is created in interactive mode. This mode prompts you to enter the required configuration options during cluster creation. If you do not include this parameter, the cluster is created in non-interactive mode by default.
    • Create a cluster in non-interactive mode by running the following command:

      Note

      The following example is made up of optional and required parameters and might differ from your non-interactive mode command. Parameters not identified as optional are required. For additional details about these and other parameters, run the ocm create cluster --help command in your terminal window.

      $ ocm create cluster <cluster_name> \
      --provider=gcp \
      --ccs=true \
      --wif-config <wif_name> \
      --region <gcp_region> \
      --subscription-type=marketplace-gcp \
      --marketplace-gcp-terms=true \
      --version <version> \
      --multi-az=true \
      --enable-autoscaling=true \
      --min-replicas=3 \
      --max-replicas=6 \
      --secure-boot-for-shielded-vms=true \
      --channel-group <channel_group_name>

      where:

      <cluster_name>
      Specifies the name of the cluster. Replace <cluster_name> with a name for your cluster.
      --provider=gcp
      Specifies the cloud provider for the cluster.
      --ccs=true
      Specifies that the cluster is a Customer Cloud Subscription (CCS) cluster.
      --wif-config <wif_name>
      Specifies the name of the WIF configuration to assign to the cluster. Replace <wif_name> with the name of your WIF configuration.
      --region <gcp_region>
      Specifies the Google Cloud region where the new cluster will be deployed. Replace <gcp_region> with the desired Google Cloud region.
      --subscription-type=marketplace-gcp
      Specifies the subscription billing model for the cluster. This parameter is optional.
      --marketplace-gcp-terms=true
      Confirms that you have accepted the Google Cloud Marketplace terms and agreements for the OpenShift Dedicated product listing. This parameter is required if you provided a value of marketplace-gcp for the subscription-type parameter.
      --version <version>

      Specifies the desired OpenShift Dedicated version. This parameter is optional. However, if an OpenShift Dedicated version is specified, the version must also be supported by the assigned WIF configuration. If a version is specified that is not supported by the assigned WIF configuration, cluster creation will fail. If this occurs, update the assigned WIF configuration to the desired version or create a new WIF configuration with the desired version. If you do not specify a version, the cluster is created with the default version for the assigned WIF configuration.

      For more information about supported versions for WIF configurations, see "Creating a Workload Identity Federation configuration".

      --multi-az=true
      Specifies that the cluster is deployed to multiple data centers. This parameter is optional.
      --enable-autoscaling=true
      Enables autoscaling of compute nodes. This parameter is optional.
      --min-replicas=3
      Specifies the minimum number of compute nodes. This parameter is optional.
      --max-replicas=6
      Specifies the maximum number of compute nodes. This parameter is optional.
      --secure-boot-for-shielded-vms=true
      Enables Secure Boot, which allows the use of Shielded VMs in the Google Cloud. This parameter is optional.
      --channel-group <channel_group_name>
      Specifies the name of the channel group you want to assign the cluster to. Channel group options include stable and eus. Replace <channel_group_name> with the desired channel group. This parameter is optional.
Important

If your cluster deployment fails during installation, certain resources created during the installation process are not automatically removed from your Google Cloud account. To remove these resources from your Google Cloud account, you must delete the failed cluster. For more information, see "Deleting an OpenShift Dedicated cluster on Google Cloud".

Verification

  • To verify that the cluster was created successfully, run the following command:

    $ ocm get cluster <cluster_name>

    If the cluster was created successfully, the output displays the cluster state as ready.
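
You can also watch the overall installation progress from the command line. This is a minimal sketch, assuming your ocm version supports the list and describe commands shown; column names can differ between versions.

  # List your clusters with their name and current state (for example, installing or ready).
  $ ocm list clusters --columns name,state

  # Show detailed information about a single cluster, including its state and console URL.
  $ ocm describe cluster <cluster_name>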

2.3. Creating a cluster on AWS

By using the Customer Cloud Subscription (CCS) billing model, you can create an OpenShift Dedicated cluster in an existing Amazon Web Services (AWS) account that you own.

You can also select the Red Hat cloud account infrastructure type to deploy OpenShift Dedicated in a cloud provider account that is owned by Red Hat.

Complete the following prerequisites to use the CCS model to deploy and manage OpenShift Dedicated into your AWS account.

Prerequisites

  • You have configured your AWS account for use with OpenShift Dedicated.
  • You have not deployed any services in your AWS account.
  • You have configured the AWS account quotas and limits that are required to support the desired cluster size.
  • You have an osdCcsAdmin AWS Identity and Access Management (IAM) user with the AdministratorAccess policy attached.
  • You have set up a service control policy (SCP) in your AWS organization. For more information, see Minimum required service control policy (SCP).
  • Consider having Business Support or higher from AWS.
  • If you are configuring a cluster-wide proxy, you have verified that the proxy is accessible from the VPC that the cluster is being installed into. The proxy must also be accessible from the private subnets of the VPC.

Procedure

  1. Log in to OpenShift Cluster Manager.
  2. On the Overview page, select Create cluster in the Red Hat OpenShift Dedicated card.
  3. Under Billing model, configure the subscription type and infrastructure type:

    1. Select a subscription type. For information about OpenShift Dedicated subscription options, see Cluster subscriptions and registration in the OpenShift Cluster Manager documentation.

      Note

      The subscription types that are available to you depend on your OpenShift Dedicated subscriptions and resource quotas. For more information, contact your sales representative or Red Hat support.

    2. Select the Customer Cloud Subscription infrastructure type to deploy OpenShift Dedicated in an existing cloud provider account that you own or select Red Hat cloud account infrastructure type to deploy OpenShift Dedicated in a cloud provider account that is owned by Red Hat.
    3. Click Next.
  4. Select Run on Amazon Web Services. If you are provisioning your cluster in an AWS account, complete the following substeps:

    1. Review and complete the listed Prerequisites.
    2. Select the checkbox to acknowledge that you have read and completed all of the prerequisites.
    3. Provide your AWS account details:

      1. Enter your AWS account ID.
      2. Enter your AWS access key ID and AWS secret access key for your AWS IAM user account.

        Note

        Revoking these credentials in AWS results in a loss of access to any cluster created with these credentials.

      3. Optional: You can select Bypass AWS service control policy (SCP) checks to disable the SCP checks.

        Note

        Some AWS SCPs can cause the installation to fail, even if you have the required permissions. Disabling the SCP checks allows an installation to proceed. The SCP is still enforced even if the checks are bypassed.

  5. Click Next to validate your cloud provider account and go to the Cluster details page.
  6. On the Cluster details page, provide a name for your cluster and specify the cluster details:

    1. Add a Cluster name.
    2. Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string.

      To customize the subdomain, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.

    3. Select a cluster version from the Version drop-down menu.
    4. Select a cloud provider region from the Region drop-down menu.
    5. Select a Single zone or Multi-zone configuration.
    6. Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.
    7. Optional: Expand Advanced Encryption to make changes to encryption settings.

      1. Accept the default setting Use default KMS Keys to use your default AWS KMS key, or select Use Custom KMS keys to use a custom KMS key.

        1. With Use Custom KMS keys selected, enter the Amazon Resource Name (ARN) of your AWS Key Management Service (KMS) custom key in the Key ARN field. The key is used for encrypting all control plane, infrastructure, worker node root volumes, and persistent volumes in your cluster.
      2. Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated.

        Note

        If Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography.

      3. Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but the keys are not. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default.

        Note

        By enabling etcd encryption for the key values in etcd, you will incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case.

    8. Click Next.
  7. On the Default machine pool page, select a Compute node instance type from the drop-down menu.
  8. Optional: Select the Enable autoscaling checkbox to enable autoscaling.

    1. Click Edit cluster autoscaling settings to make changes to the autoscaling settings.
    2. Once you have made your desired changes, click Close.
    3. Select a minimum and maximum node count. You can select node counts by using the plus and minus controls or by entering the desired count in the number input field.
  9. Select a Compute node count from the drop-down menu.

    Note

    After your cluster is created, you can change the number of compute nodes in your cluster, but you cannot change the compute node instance type in a machine pool. The number and types of nodes available to you depend on your OpenShift Dedicated subscription.

  10. Choose your preference for the Instance Metadata Service (IMDS) type, either using both IMDSv1 and IMDSv2 types or requiring your EC2 instances to use only IMDSv2. You can access instance metadata from a running instance in two ways:

    • Instance Metadata Service Version 1 (IMDSv1) - a request/response method
    • Instance Metadata Service Version 2 (IMDSv2) - a session-oriented method

      Important

      The Instance Metadata Service settings cannot be changed after your cluster is created.

      Note

      IMDSv2 uses session-oriented requests. With session-oriented requests, you create a session token that defines the session duration, which can range from a minimum of one second to a maximum of six hours. During the specified duration, you can use the same session token for subsequent requests. After the specified duration expires, you must create a new session token to use for future requests.

      For more information regarding IMDS, see Instance metadata and user data in the AWS documentation.

  11. Optional: Expand Edit node labels to add labels to your nodes. Click Add label to add more node labels and select Next.
  12. On the Network configuration page, select Public or Private to use either public or private API endpoints and application routes for your cluster.

    Important

    If you are using private API endpoints, you cannot access your cluster until you update the network settings in your cloud provider account.

  13. Optional: To install the cluster in an existing AWS Virtual Private Cloud (VPC):

    Note

    Installing a new OpenShift Dedicated cluster into a VPC that was automatically created by the installer for a different cluster is not supported.

    1. Select Install into an existing VPC.
    2. If you are installing into an existing VPC and opted to use private API endpoints, you can select Use a PrivateLink. This option enables connections to the cluster by Red Hat Site Reliability Engineering (SRE) using only AWS PrivateLink endpoints.

      Note

      The Use a PrivateLink option cannot be changed after a cluster is created.

    3. If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select Configure a cluster-wide proxy.
  14. If you opted to install the cluster in an existing AWS VPC, provide your Virtual Private Cloud (VPC) subnet settings and select Next. See the "Additional resources" section for more information about VPC requirements.

    Note

    You must ensure that your VPC is configured with a public and a private subnet for each availability zone that you want the cluster installed into. If you opted to use PrivateLink, only private subnets are required.

    1. Optional: Expand Additional security groups and select additional custom security groups to apply to nodes in the machine pools that are created by default. You must have already created the security groups and associated them with the VPC that you selected for this cluster. You cannot add or edit security groups to the default machine pools after you create the cluster.

      By default, the security groups you specify are added for all node types. Clear the Apply the same security groups to all node types checkbox to apply different security groups for each node type.

      For more information, see the requirements for Security groups under Additional resources.

  15. Accept the default application ingress settings, or to create your own custom settings, select Custom Settings.

    1. Optional: Provide route selector.
    2. Optional: Provide excluded namespaces.
    3. Select a namespace ownership policy.
    4. Select a wildcard policy.

      For more information about custom application ingress settings, click the information icon provided for each setting.

  16. If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the Cluster-wide proxy page:

    1. Enter a value in at least one of the following fields:

      • Specify a valid HTTP proxy URL.
      • Specify a valid HTTPS proxy URL.
      • In the Additional trust bundle field, provide a PEM encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments.
    2. Click Next.

      For more information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy.

  17. In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.

    Note

    If you are installing into a VPC, the Machine CIDR range must match the VPC subnets.

    Important

    CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding.

  18. On the Cluster update strategy page, configure your update preferences:

    1. Choose a cluster update method:

      • Select Individual updates if you want to schedule each update individually. This is the default option.
      • Select Recurring updates to update your cluster on your preferred day and start time, when updates are available.

        Note

        You can review the end-of-life dates in the update lifecycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle.

    2. If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.
    3. Optional: You can set a grace period for Node draining during cluster upgrades. A 1 hour grace period is set by default.
    4. Click Next.

      Note

      If critical security concerns that significantly impact the security or stability of a cluster occur, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings.

  19. Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete.
  20. Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable, which is located directly under Delete Protection: Disabled. This will prevent your cluster from being deleted. To disable delete protection, select Disable. By default, clusters are created with the delete protection feature disabled.

Verification

  • You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready.

2.4. Creating a cluster on Google Cloud using a Red Hat cloud account

Through OpenShift Cluster Manager, you can create an OpenShift Dedicated cluster on Google Cloud using a standard cloud provider account owned by Red Hat.

Procedure

  1. Log in to OpenShift Cluster Manager and click Create cluster.
  2. In the Cloud tab, click Create cluster in the Red Hat OpenShift Dedicated row.
  3. Under Billing model, configure the subscription type and infrastructure type:

    1. Select the Annual subscription type. Only the Annual subscription type is available when you deploy a cluster using a Red Hat cloud account.

      For information about OpenShift Dedicated subscription options, see Cluster subscriptions and registration in the OpenShift Cluster Manager documentation.

      Note

      You must have the required resource quota for the Annual subscription type to be available. For more information, contact your sales representative or Red Hat support.

    2. Select the Red Hat cloud account infrastructure type to deploy OpenShift Dedicated in a cloud provider account that is owned by Red Hat.
    3. Click Next.
  4. Select Run on Google Cloud and click Next.
  5. On the Cluster details page, provide a name for your cluster and specify the cluster details:

    1. Add a Cluster name.
    2. Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string.

      To customize the subdomain, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.

    3. Select a cluster version from the Version drop-down menu.
    4. Select a channel group from the Channel group drop-down menu.

      Note

      Channel group options include Stable (default option) and EUS. For more information about the Stable and EUS channel group options, see Understanding update channels and releases.

    5. Select a cloud provider region from the Region drop-down menu.
    6. Select a Single zone or Multi-zone configuration.
    7. Select a Persistent storage capacity for the cluster. For more information, see the Storage section in the OpenShift Dedicated service definition.
    8. Specify the number of Load balancers that you require for your cluster. For more information, see the Load balancers section in the OpenShift Dedicated service definition.
    9. Optional: Select Enable Secure Boot support for Shielded VMs to use Shielded VMs when installing your cluster. Once you create your cluster, the Enable Secure Boot support for Shielded VMs setting cannot be changed. For more information, see Shielded VMs.

      Important

      To successfully create a cluster, you must select Enable Secure Boot support for Shielded VMs if your organization has the policy constraint constraints/compute.requireShieldedVm enabled. For more information regarding Google Cloud organizational policy constraints, see Organization policy constraints.

      Important

      Enable Secure Boot support for Shielded VMs is not supported for OpenShift Dedicated on Google Cloud clusters created using bare-metal instance types. For more information, see Limitations in the Google Cloud documentation.

    10. Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.
  6. Optional: Expand Advanced Encryption to make changes to encryption settings.

    1. Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated.

      Note

      If Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography.

    2. Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but not the keys. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default.

      Note

      By enabling etcd encryption for the key values in etcd, you will incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case.

    3. Click Next.
  7. On the Default machine pool page, select a Compute node instance type and a Compute node count. The number and types of nodes that are available depend on your OpenShift Dedicated subscription. If you are using multiple availability zones, the compute node count is per zone.

    Note

    After your cluster is created, you can change the number of compute nodes, but you cannot change the compute node instance type in a machine pool. For clusters that use the CCS model, you can add machine pools after installation that use a different instance type. The number and types of nodes available to you depend on your OpenShift Dedicated subscription.

  8. Optional: Expand Edit node labels to add labels to your nodes. Click Add label to add more node labels and select Next.
  9. In the Cluster privacy dialog, select Public or Private to use either public or private API endpoints and application routes for your cluster.
  10. Click Next.
  11. In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.

    Important

    CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding.

    If the cluster privacy is set to Private, you cannot access your cluster until you configure private connections in your cloud provider.

  12. On the Cluster update strategy page, configure your update preferences:

    1. Choose a cluster update method:

      • Select Individual updates if you want to schedule each update individually. This is the default option.
      • Select Recurring updates to update your cluster on your preferred day and start time, when updates are available.

        Note

        You can review the end-of-life dates in the update lifecycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle.

    2. Provide administrator approval based on your cluster update method:

      • Individual updates: If you select an update version that requires approval, provide an administrator’s acknowledgment and click Approve and continue.
      • Recurring updates: If you selected recurring updates for your cluster, provide an administrator’s acknowledgment and click Approve and continue. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment.
    3. If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.
    4. Optional: You can set a grace period for Node draining during cluster upgrades. A 1 hour grace period is set by default.
    5. Click Next.

      Note

      In the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings.

  13. Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete.
  14. Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable, which is located directly under Delete Protection: Disabled. This will prevent your cluster from being deleted. To disable delete protection, select Disable. By default, clusters are created with the delete protection feature disabled.

Verification

  • You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready.
Important

If your cluster deployment fails during installation, certain resources created during the installation process are not automatically removed from your Google Cloud account. To remove these resources from your Google Cloud account, you must delete the failed cluster.

2.5. Configuring an identity provider

After you have installed OpenShift Dedicated, you must configure your cluster to use an identity provider. You can then add members to your identity provider to grant them access to your cluster.

You can configure different identity provider types for your OpenShift Dedicated cluster. Supported types include GitHub, GitHub Enterprise, GitLab, Google, LDAP, OpenID Connect, and htpasswd identity providers.

Important

The htpasswd identity provider option is included only to enable the creation of a single, static administration user. htpasswd is not supported as a general-use identity provider for OpenShift Dedicated.

The following procedure configures a GitHub identity provider as an example.

Warning

Configuring GitHub authentication allows users to log in to OpenShift Dedicated with their GitHub credentials. To prevent anyone with any GitHub user ID from logging in to your OpenShift Dedicated cluster, you must restrict access to only those in specific GitHub organizations or teams.

Prerequisites

  • You logged in to OpenShift Cluster Manager.
  • You created an OpenShift Dedicated cluster.
  • You have a GitHub user account.
  • You created a GitHub organization in your GitHub account. For more information, see Creating a new organization from scratch in the GitHub documentation.
  • If you are restricting user access to a GitHub team, you have created a team within your GitHub organization. For more information, see Creating a team in the GitHub documentation.

Procedure

  1. Navigate to OpenShift Cluster Manager and select your cluster.
  2. Select Access control → Identity providers.
  3. Select the GitHub identity provider type from the Add identity provider drop-down menu.
  4. Enter a unique name for the identity provider. The name cannot be changed later.
  5. Register an OAuth application in your GitHub organization by following the steps in the GitHub documentation.

    Note

    You must register the OAuth app under your GitHub organization. If you register an OAuth application that is not owned by the organization that contains your cluster users or teams, then user authentication to the cluster will not succeed.

    • For the homepage URL in your GitHub OAuth app configuration, specify the https://oauth-openshift.apps.<cluster_name>.<cluster_domain> portion of the OAuth callback URL that is automatically generated in the Add a GitHub identity provider page on OpenShift Cluster Manager.

      The following is an example of a homepage URL for a GitHub identity provider:

      https://oauth-openshift.apps.openshift-cluster.example.com
    • For the authorization callback URL in your GitHub OAuth app configuration, specify the full OAuth callback URL that is automatically generated in the Add a GitHub identity provider page on OpenShift Cluster Manager. The full URL has the following syntax:

      https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name>
  6. Return to the Edit identity provider: GitHub dialog in OpenShift Cluster Manager and select Claim from the Mapping method drop-down menu.
  7. Enter the Client ID and Client secret for your GitHub OAuth application. The GitHub page for your OAuth app provides the ID and secret.
  8. Optional: Enter a hostname.

    Note

    A hostname must be entered when using a hosted instance of GitHub Enterprise.

  9. Optional: You can specify a certificate authority (CA) file to validate server certificates for a configured GitHub Enterprise URL. Click Browse to locate and attach a CA file to the identity provider.
  10. Select Use organizations or Use teams to restrict access to a GitHub organization or a GitHub team within an organization.
  11. Enter the name of the organization or team you want to restrict access to. Click Add more to specify multiple organizations or teams.

    Note

    Specified organizations must own an OAuth app that was registered by using the preceding steps. If you specify a team, it must exist within an organization that owns an OAuth app that was registered by using the preceding steps.

  12. Click Add to apply the identity provider configuration.

    Note

    It might take approximately two minutes for the identity provider configuration to become active.

Verification

  • After the configuration becomes active, the identity provider is listed under Access control → Identity providers on the OpenShift Cluster Manager page for your cluster.
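
As an alternative to the console workflow, you can configure a GitHub identity provider with the OCM CLI. The following is a minimal sketch, assuming your ocm version supports the idp subcommands and flags shown (check ocm create idp --help); the client ID, client secret, and organization values are placeholders from your GitHub OAuth application.

  # Create a GitHub identity provider that restricts access to a single GitHub organization.
  $ ocm create idp --cluster=<cluster_name_or_id> \
    --type=github \
    --name=GitHub \
    --client-id=<github_oauth_client_id> \
    --client-secret=<github_oauth_client_secret> \
    --organizations=<github_organization>

  # List the identity providers that are configured for the cluster.
  $ ocm list idps --cluster=<cluster_name_or_id>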

2.6. Granting administrator privileges to a user

After you have configured an identity provider for your cluster and added a user to the identity provider, you can grant dedicated-admin cluster privileges to the user.

Prerequisites

  • You logged in to OpenShift Cluster Manager.
  • You created an OpenShift Dedicated cluster.
  • You configured an identity provider for your cluster.

Procedure

  1. Navigate to OpenShift Cluster Manager and select your cluster.
  2. Click the Access control tab.
  3. In the Cluster Roles and Access tab, click Add user.
  4. Enter the user ID of an identity provider user.
  5. Click Add user to grant dedicated-admin cluster privileges to the user.

Verification

  • After granting the privileges, the user is listed as part of the dedicated-admins group under Access control → Cluster Roles and Access on the OpenShift Cluster Manager page for your cluster.
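
You can also grant the same privileges from the command line. This is a sketch, assuming your ocm version supports the user subcommands shown; <idp_user_name> is a placeholder for the identity provider user ID.

  # Add an identity provider user to the dedicated-admins group for the cluster.
  $ ocm create user <idp_user_name> --cluster=<cluster_name_or_id> --group=dedicated-admins

  # Confirm that the user is now listed in the dedicated-admins group.
  $ ocm list users --cluster=<cluster_name_or_id>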

2.7. Accessing your cluster

After you have configured your identity providers, users can access the cluster from Red Hat OpenShift Cluster Manager.

Prerequisites

  • You logged in to OpenShift Cluster Manager.
  • You created an OpenShift Dedicated cluster.
  • You configured an identity provider for your cluster.
  • You added your user account to the configured identity provider.

Procedure

  1. From OpenShift Cluster Manager, select the cluster you want to access.
  2. Click Open console to open the web console for your cluster.
  3. Select your identity provider and enter your credentials to log in to the cluster. Complete any authorization requests from your provider.
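
You can also log in from the command line with the OpenShift CLI (oc). A minimal sketch, assuming you copy a login token from the web console (click your username and select Copy login command); the server URL format is illustrative.

  # Log in to the cluster API with the token copied from the web console.
  $ oc login --token=<token> --server=https://api.<cluster_name>.<cluster_domain>:6443

  # Verify the identity that you are logged in as.
  $ oc whoami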

2.8. Deploying an application from the Developer Catalog

From the OpenShift Dedicated web console, you can deploy a test application from the Developer Catalog and expose it with a route.

Prerequisites

  • You logged in to the Red Hat Hybrid Cloud Console.
  • You created an OpenShift Dedicated cluster.
  • You configured an identity provider for your cluster.
  • You added your user account to the configured identity provider.

Procedure

  1. Go to the Cluster List page in OpenShift Cluster Manager.
  2. Click the options icon (⋮) next to the cluster you want to view.
  3. Click Open console.
  4. Your cluster console opens in a new browser window. Log in to your Red Hat account with your configured identity provider credentials.
  5. In the Administrator perspective, select Home → Projects → Create Project.
  6. Enter a name for your project and optionally add a Display Name and Description.
  7. Click Create to create the project.
  8. Switch to the Developer perspective and select +Add. Verify that the selected Project is the one that you just created.
  9. In the Developer Catalog dialog, select All services.
  10. In the Developer Catalog page, select Languages → JavaScript from the menu.
  11. Click Node.js, and then click Create to open the Create Source-to-Image application page.

    Note

    You might need to click Clear All Filters to display the Node.js option.

  12. In the Git section, click Try sample.
  13. Add a unique name in the Name field. The value will be used to name the associated resources.
  14. Confirm that Deployment and Create a route are selected.
  15. Click Create to deploy the application. It will take a few minutes for the pods to deploy.
  16. Optional: Check the status of the pods in the Topology pane by selecting your Node.js app and reviewing its sidebar. You must wait for the nodejs build to complete and for the nodejs pod to be in a Running state before continuing.
  17. When the deployment is complete, click the route URL for the application, which has a format similar to the following:

    https://nodejs-<project>.<cluster_name>.<hash>.<region>.openshiftapps.com/

    A new tab in your browser opens with a message similar to the following:

    Welcome to your Node.js application on OpenShift
  18. Optional: Delete the application and clean up the resources that you created:

    1. In the Administrator perspective, navigate to Home → Projects.
    2. Click the action menu for your project and select Delete Project.
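
If you prefer the command line, a similar deployment can be sketched with the OpenShift CLI (oc). This example assumes the public sclorg Node.js sample repository; the project and application names are illustrative.

  # Create a project for the sample application.
  $ oc new-project my-sample-project

  # Build and deploy the Node.js sample by using source-to-image.
  $ oc new-app nodejs~https://github.com/sclorg/nodejs-ex --name=nodejs-sample

  # Expose the application with a route and print its URL.
  $ oc expose service/nodejs-sample
  $ oc get route nodejs-sample

  # Clean up the resources when you are finished.
  $ oc delete project my-sample-project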

2.9. Scaling your cluster

You can scale the number of load balancers, the persistent storage capacity, and the node count for your OpenShift Dedicated cluster from OpenShift Cluster Manager.

Prerequisites

Procedure

  • To scale the number of load balancers or the persistent storage capacity:

    1. Navigate to OpenShift Cluster Manager and select your cluster.
    2. Select Edit load balancers and persistent storage from the Actions drop-down menu.
    3. Select the number of Load balancers that you want to scale to.
    4. Select the Persistent storage capacity that you want to scale to.
    5. Click Apply. Scaling occurs automatically.
  • To scale the node count:

    1. Navigate to OpenShift Cluster Manager and select your cluster.
    2. Select Edit node count from the Actions drop-down menu.
    3. Select a Machine pool.
    4. Select a Node count per zone.
    5. Click Apply. Scaling occurs automatically.

Verification

  • In the Overview tab under the Details heading, you can review the load balancer configuration, persistent storage details, and actual and required node counts.
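
Compute node scaling is also available through the OCM CLI. The following is a sketch only, assuming your ocm version provides the machinepool subcommands and flags shown; verify the exact syntax with ocm edit machinepool --help before running it.

  # List the machine pools in the cluster to find the machine pool ID.
  $ ocm list machinepools --cluster=<cluster_name_or_id>

  # Scale the selected machine pool to the desired number of compute nodes.
  $ ocm edit machinepool <machinepool_id> --cluster=<cluster_name_or_id> --replicas=<node_count>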

2.10. Revoking administrator privileges from a user

After you have granted dedicated-admin privileges to a user, you can revoke those privileges when they are no longer needed.

Prerequisites

  • You logged in to OpenShift Cluster Manager.
  • You created an OpenShift Dedicated cluster.
  • You have configured a GitHub identity provider for your cluster and added an identity provider user.
  • You granted dedicated-admin privileges to a user.

Procedure

  1. Navigate to OpenShift Cluster Manager and select your cluster.
  2. Click the Access control tab.
  3. In the Cluster Roles and Access tab, select the options menu (⋮) next to the user and click Delete.

Verification

  • After revoking the privileges, the user is no longer listed as part of the dedicated-admins group under Access control → Cluster Roles and Access on the OpenShift Cluster Manager page for your cluster.
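
The CLI equivalent is a sketch along the same lines as granting privileges, assuming the same ocm user subcommands:

  # Remove the user from the dedicated-admins group for the cluster.
  $ ocm delete user <idp_user_name> --cluster=<cluster_name_or_id> --group=dedicated-admins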

2.11. Revoking user access to a cluster

You can revoke cluster access from an identity provider user by removing them from your configured identity provider.

You can configure different types of identity providers for your OpenShift Dedicated cluster. The following example procedure revokes cluster access for a member of a GitHub organization or team that is configured for identity provision to the cluster.

Prerequisites

  • You have an OpenShift Dedicated cluster.
  • You have a GitHub user account.
  • You have configured a GitHub identity provider for your cluster and added an identity provider user.

Procedure

  1. Navigate to github.com and log in to your GitHub account.
  2. Remove the user from your GitHub organization or team. For more information, see the GitHub documentation.

Verification

  • After removing the user from your identity provider, the user cannot authenticate into the cluster.
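
Optionally, a cluster administrator can also remove the stale User and Identity resources from the cluster with the OpenShift CLI. A minimal sketch; the identity provider name and GitHub user ID are placeholders.

  # List the users and identities known to the cluster.
  $ oc get users
  $ oc get identities

  # Delete the user resource and its associated identity resource.
  $ oc delete user <username>
  $ oc delete identity <idp_name>:<github_user_id>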

2.12. Deleting your cluster

You can delete your OpenShift Dedicated cluster in Red Hat OpenShift Cluster Manager.

Prerequisites

Procedure

  1. From OpenShift Cluster Manager, select the cluster you want to delete.
  2. Select Delete cluster from the Actions drop-down menu.
  3. Type the name of the cluster highlighted in bold, then click Delete. Cluster deletion occurs automatically.

    Note

    If you delete a cluster that was installed into a Google Cloud Shared VPC, inform the VPC owner of the host project to remove the IAM policy roles granted to the service account that was referenced during cluster creation.
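
You can also delete a cluster with the OCM CLI. A minimal sketch; deletion is irreversible, so confirm the cluster ID before running the delete command.

  # Find the ID of the cluster that you want to delete.
  $ ocm list clusters

  # Delete the cluster. Deprovisioning begins immediately.
  $ ocm delete cluster <cluster_id>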

2.13. Next steps

2.14. Additional resources

Legal Notice

Copyright © 2025 Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
