
Install clusters


Red Hat OpenShift Service on AWS 4

Installing, accessing, and deleting Red Hat OpenShift Service on AWS (ROSA) clusters.

Red Hat OpenShift Documentation Team

Abstract

This document provides information on how to install Red Hat OpenShift Service on AWS (ROSA) clusters that use hosted control planes.

Chapter 1. Red Hat OpenShift Service on AWS quick start guide

Follow this guide to quickly create a Red Hat OpenShift Service on AWS cluster using the ROSA command-line interface (CLI) (rosa), grant user access, deploy your first application, and learn how to revoke user access and delete your cluster.

Overview of the default cluster specifications

You can quickly create a Red Hat OpenShift Service on AWS cluster by using the default installation options.

The following summary describes the default cluster specifications.

Table 1.1. Default Red Hat OpenShift Service on AWS cluster specifications

Accounts and roles

  • Default IAM role prefix: HCP-ROSA

Cluster settings

  • Default cluster version: Latest
  • Default AWS region for installations using the ROSA CLI (rosa): Defined by your aws CLI configuration
  • Default EC2 IMDS endpoints (both v1 and v2) are enabled
  • Availability: Single zone for the data plane
  • Monitoring for user-defined projects: Enabled
  • No cluster admin role created

Compute node machine pool

  • Compute node instance type: m5.xlarge (4 vCPU, 16 GiB RAM)
  • Compute node count: 2
  • Autoscaling: Not enabled
  • No additional node labels

Networking configuration

  • Cluster privacy: Public
  • No cluster-wide proxy is configured

Classless Inter-Domain Routing (CIDR) ranges

  • Machine CIDR: 10.0.0.0/16
  • Service CIDR: 172.30.0.0/16
  • Pod CIDR: 10.128.0.0/14
  • Host prefix: /23

    Note

    The static IP address 172.20.0.1 is reserved for the internal Kubernetes API address. The machine, pod, and service CIDR ranges must not conflict with this IP address.

Cluster roles and policies

  • Mode used to create the Operator roles and the OpenID Connect (OIDC) provider: auto

    Note

    For installations that use OpenShift Cluster Manager on the Hybrid Cloud Console, the auto mode requires an admin-privileged OpenShift Cluster Manager role (ocm-role).

  • Default Operator role prefix: <cluster_name>-<4_digit_random_string>

Storage

  • Node volumes:

    • Type: AWS EBS GP3
    • Default size: 300 GiB (adjustable at creation time)
  • Workload persistent volumes:

    • Default StorageClass: gp3-csi
    • Provisioner: ebs.csi.aws.com
    • Dynamic persistent volume provisioning

Cluster update strategy

  • Individual updates
  • 1 hour grace period for node draining

1.1. Setting up the environment

Before you create a Red Hat OpenShift Service on AWS cluster, you must set up your environment by completing the following tasks:

  • Verify Red Hat OpenShift Service on AWS prerequisites against your AWS and Red Hat accounts.
  • Install and configure the required command-line interface (CLI) tools.
  • Verify the configuration of the CLI tools.

You can follow the procedures in this section to complete these setup requirements.

Verifying Red Hat OpenShift Service on AWS prerequisites

Use the steps in this procedure to enable Red Hat OpenShift Service on AWS in your AWS account.

Prerequisites

  • You have a Red Hat account.
  • You have an AWS account.

    Note

    Consider using a dedicated AWS account to run production clusters. If you are using AWS Organizations, you can use an AWS account within your organization or create a new one.

Procedure

  1. Sign in to the AWS Management Console.
  2. Navigate to the ROSA service.
  3. Click Get started.

    The Verify ROSA prerequisites page opens.

  4. Under ROSA enablement, ensure that a green check mark and You previously enabled ROSA are displayed.

    If not, follow these steps:

    1. Select the checkbox beside I agree to share my contact information with Red Hat.
    2. Click Enable ROSA.

      After a short wait, a green check mark and the You enabled ROSA message are displayed.

  5. Under Service Quotas, ensure that a green check mark and Your quotas meet the requirements for ROSA are displayed.

    If you see Your quotas don’t meet the minimum requirements, take note of the quota type and the minimum listed in the error message. See Amazon’s documentation on requesting a quota increase for guidance. It may take several hours for Amazon to approve your quota request.

  6. Under ELB service-linked role, ensure that a green check mark and AWSServiceRoleForElasticLoadBalancing already exists are displayed.
  7. Click Continue to Red Hat.

    The Get started with Red Hat OpenShift Service on AWS (ROSA) page opens in a new tab. You have already completed Step 1 on this page, and can now continue with Step 2.

Installing and configuring the required CLI tools

Several command-line interface (CLI) tools are required to deploy and work with your cluster.

Prerequisites

  • You have an AWS account.
  • You have a Red Hat account.

Procedure

  1. Log in to your Red Hat and AWS accounts to access the download page for each required tool.

    1. Log in to your Red Hat account at console.redhat.com.
    2. Log in to your AWS account at aws.amazon.com.
  2. Install and configure the latest AWS CLI (aws).

    1. Install the AWS CLI by following the AWS Command Line Interface documentation appropriate for your workstation.
    2. Configure the AWS CLI by specifying your aws_access_key_id, aws_secret_access_key, and region in the .aws/credentials file. For more information, see AWS Configuration basics in the AWS documentation.

      Note

      You can optionally use the AWS_DEFAULT_REGION environment variable to set the default AWS region.
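
      Alternatively, you can run the aws configure command to enter these values interactively. The prompts look similar to the following; the values shown here are placeholders:

      $ aws configure
      AWS Access Key ID [None]: <aws_access_key_id>
      AWS Secret Access Key [None]: <aws_secret_access_key>
      Default region name [None]: us-east-2
      Default output format [None]: json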

    3. Query the AWS API to verify that the AWS CLI is installed and configured correctly:

      $ aws sts get-caller-identity --output text

      Example output

      <aws_account_id>    arn:aws:iam::<aws_account_id>:user/<username>  <aws_user_id>

  3. Install and configure the latest ROSA CLI.

    1. Navigate to Downloads.
    2. Find Red Hat OpenShift Service on AWS command line interface (rosa) in the list of tools and click Download.

      The rosa-linux.tar.gz file is downloaded to your default download location.

    3. Extract the rosa binary file from the downloaded archive. The following example extracts the binary from a Linux tar archive:

      $ tar xvf rosa-linux.tar.gz
    4. Move the rosa binary file to a directory in your execution path. In the following example, the /usr/local/bin directory is included in the path of the user:

      $ sudo mv rosa /usr/local/bin/rosa
    5. Verify that the ROSA CLI is installed correctly by querying the rosa version:

      $ rosa version

      Example output

      1.2.47
      Your ROSA CLI is up to date.

  4. Log in to the ROSA CLI using an offline access token.

    1. Run the login command:

      $ rosa login

      Example output

      To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa
      ? Copy the token and paste it here:

    2. Navigate to the URL listed in the command output to view your offline access token.
    3. Enter the offline access token at the command-line prompt to log in.

      ? Copy the token and paste it here: *******************
      [full token length omitted]
      Note

      In the future you can specify the offline access token by using the --token="<offline_access_token>" argument when you run the rosa login command.
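
      For example, a non-interactive login with a token that you already retrieved might look like this:

      $ rosa login --token="<offline_access_token>"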

    4. Verify that you are logged in and confirm that your credentials are correct before proceeding:

      $ rosa whoami

      Example output

      AWS Account ID:               <aws_account_number>
      AWS Default Region:           us-east-1
      AWS ARN:                      arn:aws:iam::<aws_account_number>:user/<aws_user_name>
      OCM API:                      https://api.openshift.com
      OCM Account ID:               <red_hat_account_id>
      OCM Account Name:             Your Name
      OCM Account Username:         you@domain.com
      OCM Account Email:            you@domain.com
      OCM Organization ID:          <org_id>
      OCM Organization Name:        Your organization
      OCM Organization External ID: <external_org_id>

  5. Install and configure the latest OpenShift CLI (oc).

    1. Use the ROSA CLI to download the oc CLI.

      The following command downloads the latest version of the CLI to the current working directory:

      $ rosa download openshift-client
    2. Extract the oc binary file from the downloaded archive. The following example extracts the files from a Linux tar archive:

      $ tar xvf openshift-client-linux.tar.gz
    3. Move the oc binary to a directory in your execution path. In the following example, the /usr/local/bin directory is included in the path of the user:

      $ sudo mv oc /usr/local/bin/oc
    4. Verify that the oc CLI is installed correctly:

      $ rosa verify openshift-client

      Example output

      I: Verifying whether OpenShift command-line tool is available...
      I: Current OpenShift Client Version: 4.17.3
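
      You can also query the client binary directly. The following command prints the client version without contacting a cluster:

      $ oc version --client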

Next steps

Before you can use the Red Hat Hybrid Cloud Console to deploy Red Hat OpenShift Service on AWS clusters, you must associate your AWS account with your Red Hat organization and create the required account-wide AWS IAM STS roles and policies for Red Hat OpenShift Service on AWS.

1.2. Creating the account-wide STS roles and policies

Before using the Red Hat Hybrid Cloud Console to create Red Hat OpenShift Service on AWS clusters that use the AWS Security Token Service (STS), create the required account-wide STS roles and policies, including the Operator policies.

Procedure

  1. If they do not exist in your AWS account, create the required account-wide AWS IAM STS roles and policies:

    $ rosa create account-roles --hosted-cp

    Select the default values at the prompts to quickly create the roles and policies.
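
    You can verify that the account-wide roles exist by listing them with the ROSA CLI:

    $ rosa list account-roles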

1.3. Creating a Virtual Private Cloud

You must have an AWS Virtual Private Cloud (VPC) to create a Red Hat OpenShift Service on AWS cluster. You can use the following methods to create a VPC:

  • Create a VPC using the ROSA CLI
  • Create a VPC by using a Terraform template
  • Manually create the VPC resources in the AWS console
Note

The Terraform instructions are for testing and demonstration purposes. Your own installation might require modifications to the VPC for your own use case. You should also ensure that when you use this linked Terraform configuration, it is in the same region where you intend to install your cluster. These examples use us-east-2.

Creating an AWS VPC using the ROSA CLI

The rosa create network command is available in version 1.2.48 or later of the ROSA CLI. The command uses AWS CloudFormation to create a VPC and the associated networking components that are necessary to install a Red Hat OpenShift Service on AWS cluster. CloudFormation is a native AWS infrastructure-as-code tool and is compatible with the AWS CLI.

If you do not specify a template, CloudFormation uses a default template that creates resources with the following parameters:

  • Availability zones: 1
  • Region: us-east-1
  • VPC CIDR: 10.0.0.0/16

You can create and customize CloudFormation templates to use with the rosa create network command. See the additional resources of this section for information on the default VPC template.

Prerequisites

  • You have configured your AWS account.
  • You have configured your Red Hat account.
  • You have installed and configured the latest version of the ROSA CLI.

Procedure

  1. Create an AWS VPC using the default CloudFormation template by running the following command:

    $ rosa create network
  2. Optional: Customize your VPC by specifying additional parameters.

    You can use the --param flag to specify changes to the default VPC template. The following example command specifies custom values for Region, Name, AvailabilityZoneCount, and VpcCidr.

    $ rosa create network --param Region=us-east-2 --param Name=quickstart-stack --param AvailabilityZoneCount=3 --param VpcCidr=10.0.0.0/16

    The command takes about 5 minutes to run and provides regular status updates from AWS as resources are created. If there is an issue with CloudFormation, a rollback is attempted. For all other errors, follow the error message instructions or contact AWS Support.

Verification

  • When completed, you receive a summary of the created resources:

    INFO[0140] Resources created in stack:
    INFO[0140] Resource: AttachGateway, Type: AWS::EC2::VPCGatewayAttachment, ID: <gateway_id>
    INFO[0140] Resource: EC2VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: EcrApiVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: EcrDkrVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: ElasticIP1, Type: AWS::EC2::EIP, ID: <IP>
    INFO[0140] Resource: ElasticIP2, Type: AWS::EC2::EIP, ID: <IP>
    INFO[0140] Resource: InternetGateway, Type: AWS::EC2::InternetGateway, ID: igw-016e1a71b9812464e
    INFO[0140] Resource: KMSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: NATGateway1, Type: AWS::EC2::NatGateway, ID: <nat-gateway_id>
    INFO[0140] Resource: PrivateRoute, Type: AWS::EC2::Route, ID: <route_id>
    INFO[0140] Resource: PrivateRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
    INFO[0140] Resource: PrivateSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
    INFO[0140] Resource: PublicRoute, Type: AWS::EC2::Route, ID: <route_id>
    INFO[0140] Resource: PublicRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
    INFO[0140] Resource: PublicSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
    INFO[0140] Resource: S3VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: STSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: SecurityGroup, Type: AWS::EC2::SecurityGroup, ID: <security-group_id>
    INFO[0140] Resource: SubnetPrivate1, Type: AWS::EC2::Subnet, ID: <private_subnet_id-1> 1
    INFO[0140] Resource: SubnetPublic1, Type: AWS::EC2::Subnet, ID: <public_subnet_id-1> 2
    INFO[0140] Resource: VPC, Type: AWS::EC2::VPC, ID: <vpc_id>
    INFO[0140] Stack rosa-network-stack-5555 created 3
    1 2
    These two subnet IDs are used to create your cluster when using the rosa create cluster command.
    3
    The network stack name is used to delete the resource later.
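
    Because rosa create network provisions the VPC as a CloudFormation stack, you can also inspect the stack directly with the AWS CLI. For example, using the stack name from the summary above:

    $ aws cloudformation describe-stacks --stack-name rosa-network-stack-5555 --query 'Stacks[0].StackStatus'
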
Creating a Virtual Private Cloud using Terraform

Terraform is a tool that allows you to create various resources using an established template. The following process uses the default options as required to create a Red Hat OpenShift Service on AWS cluster. For more information about using Terraform, see the additional resources.

Prerequisites

  • You have installed Terraform version 1.4.0 or newer on your machine.
  • You have installed Git on your machine.

Procedure

  1. Open a shell prompt and clone the Terraform VPC repository by running the following command:

    $ git clone https://github.com/openshift-cs/terraform-vpc-example
  2. Navigate to the created directory by running the following command:

    $ cd terraform-vpc-example
  3. Initialize Terraform by running the following command:

    $ terraform init

    A message confirming the initialization appears when this process completes.

  4. To build your VPC Terraform plan based on the existing Terraform template, run the plan command. You must include your AWS region. You can choose to specify a cluster name. A rosa.tfplan file is added to the hypershift-tf directory after the terraform plan completes. For more detailed options, see the Terraform VPC repository’s README file.

    $ terraform plan -out rosa.tfplan -var region=<region>
  5. Apply this plan file to build your VPC by running the following command:

    $ terraform apply rosa.tfplan
    1. Optional: You can capture the values of the Terraform-provisioned private, public, and machinepool subnet IDs as environment variables to use when creating your Red Hat OpenShift Service on AWS cluster by running the following commands:

      $ export SUBNET_IDS=$(terraform output -raw cluster-subnets-string)
    2. Verify that the variables were correctly set with the following command:

      $ echo $SUBNET_IDS

      Example output

      subnet-0a6a57e0f784171aa,subnet-078e84e5b10ecf5b0
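
    When you no longer need the VPC, you can remove the Terraform-provisioned resources. A typical cleanup, assuming the same region variable that you used for the plan, is:

    $ terraform destroy -var region=<region>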

Creating an AWS Virtual Private Cloud manually

If you choose to manually create your AWS Virtual Private Cloud (VPC) instead of using Terraform, go to the VPC page in the AWS console.

Your VPC must meet the requirements shown in the following table.

Table 1.2. Requirements for your VPC

VPC name

You need to have the specific VPC name and ID when creating your cluster.

CIDR range

Your VPC CIDR range should match your machine CIDR.

Availability zone

You need one availability zone for a single-zone cluster, and you need three availability zones for a multi-zone cluster.

Public subnet

You must have one public subnet with a NAT gateway for public clusters. Private clusters do not need a public subnet.

DNS hostname and resolution

You must ensure that the DNS hostname and resolution are enabled.
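
If you create the VPC manually, you can enable the DNS hostname and resolution attributes from the aws CLI instead of the console. A minimal sketch, with a placeholder VPC ID:

$ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-support '{"Value":true}'
$ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-hostnames '{"Value":true}'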

1.3.1. Troubleshooting

If your cluster fails to install, troubleshoot these common issues:

  • Make sure your DHCP option set includes a domain name, and ensure that the domain name does not include any spaces or capital letters.
  • If your VPC uses a custom DNS resolver (the domain name servers field of your DHCP option set is not AmazonProvidedDNS), make sure it can properly resolve the private hosted zones configured in Route 53.
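
You can review the DHCP option set that is associated with your VPC by querying it with the aws CLI. The option set ID here is a placeholder:

$ aws ec2 describe-dhcp-options --dhcp-options-ids <dhcp_options_id>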

For more information about troubleshooting Red Hat OpenShift Service on AWS cluster installations, see Troubleshooting Red Hat OpenShift Service on AWS cluster installations.

1.3.1.1. Get support

If you need additional support, visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources.

Tagging your subnets

Before you can use your VPC to create a Red Hat OpenShift Service on AWS cluster, you must tag your VPC subnets. Automated service preflight checks verify that these resources are tagged correctly before you can use these resources for a cluster. The following table shows how your resources should be tagged:

  • Public subnet: Key kubernetes.io/role/elb, Value 1 (or no value)
  • Private subnet: Key kubernetes.io/role/internal-elb, Value 1 (or no value)

Note

You must tag at least one private subnet and, if applicable, one public subnet.

Prerequisites

  • You have created a VPC.
  • You have installed the aws CLI.

Procedure

  1. Tag your resources in your terminal by running the following commands:

    1. For public subnets, run:

      $ aws ec2 create-tags --resources <public-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1
    2. For private subnets, run:

      $ aws ec2 create-tags --resources <private-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1

Verification

  • Verify that the tag is correctly applied by running the following command:

    $ aws ec2 describe-tags --filters "Name=resource-id,Values=<subnet_id>"

    Example output

    TAGS    Name                    <subnet-id>        subnet  <prefix>-subnet-public1-us-east-1a
    TAGS    kubernetes.io/role/elb  <subnet-id>        subnet  1

1.4. Creating an OpenID Connect configuration

When creating a Red Hat OpenShift Service on AWS cluster, you can create the OpenID Connect (OIDC) configuration before creating your cluster. This configuration is registered to be used with OpenShift Cluster Manager.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI, rosa, on your installation host.

Procedure

  1. To create your OIDC configuration alongside the AWS resources, run the following command:

    $ rosa create oidc-config --mode=auto --yes

    This command returns the following information.

    Example output

    ? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes
    I: Setting up managed OIDC configuration
    I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
    	rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b
    If you are going to create a Hosted Control Plane cluster please include '--hosted-cp'
    I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName'
    ? Create the OIDC provider? Yes
    I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'

    When creating your cluster, you must supply the OIDC config ID. The CLI output provides this value for --mode auto; for --mode manual, you must determine these values based on the aws CLI output.

  2. Optional: You can save the OIDC configuration ID as a variable to use later. Run the following command to save the variable:

    $ export OIDC_ID=<oidc_config_id> 1
    1
    In the example output above, the OIDC configuration ID is 13cdr6b.
    • View the value of the variable by running the following command:

      $ echo $OIDC_ID

      Example output

      13cdr6b

Verification

  • You can list the OIDC configurations that are available for clusters associated with your user organization. Run the following command:

    $ rosa list oidc-config

    Example output

    ID                                MANAGED  ISSUER URL                                                             SECRET ARN
    2330dbs0n8m3chkkr25gkkcd8pnj3lk2  true     https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2
    233hvnrjoqu14jltk6lhbhf2tj11f8un  false    https://oidc-r7u1.s3.us-east-1.amazonaws.com                           aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN

1.5. Creating Operator roles and policies

When you deploy a Red Hat OpenShift Service on AWS cluster, you must create the Operator IAM roles. The cluster Operators use the Operator roles and policies to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage and external access to a cluster.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.
  • You created the account-wide AWS roles.

Procedure

  1. To create your Operator roles, run the following command:

    $ rosa create operator-roles --hosted-cp --prefix=$OPERATOR_ROLES_PREFIX --oidc-config-id=$OIDC_ID --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role

    The following breakdown provides options for the Operator role creation.

    $ rosa create operator-roles --hosted-cp \
        --prefix=$OPERATOR_ROLES_PREFIX \ 1
        --oidc-config-id=$OIDC_ID \ 2
        --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/$ACCOUNT_ROLES_PREFIX-HCP-ROSA-Installer-Role 3
    1
    You must supply a prefix when creating these Operator roles. Failing to do so produces an error. See the Additional resources of this section for information on the Operator prefix.
    2
    This value is the OIDC configuration ID that you created for your Red Hat OpenShift Service on AWS cluster.
    3
    This value is the installer role ARN that you created when you created the Red Hat OpenShift Service on AWS account roles.

    You must include the --hosted-cp parameter to create the correct roles for Red Hat OpenShift Service on AWS clusters. This command returns the following information.

    Example output

    ? Role creation mode: auto
    ? Operator roles prefix: <pre-filled_prefix> 1
    ? OIDC Configuration ID: 23soa2bgvpek9kmes9s7os0a39i13qm4 | https://dvbwgdztaeq9o.cloudfront.net/23soa2bgvpek9kmes9s7os0a39i13qm4 2
    ? Create hosted control plane operator roles: Yes
    W: More than one Installer role found
    ? Installer role ARN: arn:aws:iam::4540112244:role/<prefix>-HCP-ROSA-Installer-Role
    ? Permissions boundary ARN (optional):
    I: Reusable OIDC Configuration detected. Validating trusted relationships to operator roles:
    I: Creating roles using 'arn:aws:iam::4540112244:user/<userName>'
    I: Created role '<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials'
    I: Created role '<prefix>-openshift-cloud-network-config-controller-cloud-credenti' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti'
    I: Created role '<prefix>-kube-system-kube-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager'
    I: Created role '<prefix>-kube-system-capa-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager'
    I: Created role '<prefix>-kube-system-control-plane-operator' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator'
    I: Created role '<prefix>-kube-system-kms-provider' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider'
    I: Created role '<prefix>-openshift-image-registry-installer-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials'
    I: Created role '<prefix>-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials'
    I: To create a cluster with these roles, run the following command:
    	rosa create cluster --sts --oidc-config-id 23soa2bgvpek9kmes9s7os0a39i13qm4 --operator-roles-prefix <prefix> --hosted-cp

    1
    This field is prepopulated with the prefix that you set in the initial creation command.
    2
    This field requires you to select an OIDC configuration that you created for your Red Hat OpenShift Service on AWS cluster.

    The Operator roles are now created and ready to use for creating your Red Hat OpenShift Service on AWS cluster.

Verification

  • You can list the Operator roles associated with your Red Hat OpenShift Service on AWS account. Run the following command:

    $ rosa list operator-roles

    Example output

    I: Fetching operator roles
    ROLE PREFIX  AMOUNT IN BUNDLE
    <prefix>      8
    ? Would you like to detail a specific prefix Yes 1
    ? Operator Role Prefix: <prefix>
    ROLE NAME                                                         ROLE ARN                                                                                         VERSION  MANAGED
    <prefix>-kube-system-capa-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager                       4.13     No
    <prefix>-kube-system-control-plane-operator                        arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator                        4.13     No
    <prefix>-kube-system-kms-provider                                  arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider                                  4.13     No
    <prefix>-kube-system-kube-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager                       4.13     No
    <prefix>-openshift-cloud-network-config-controller-cloud-credenti  arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti  4.13     No
    <prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       4.13     No
    <prefix>-openshift-image-registry-installer-cloud-credentials      arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials      4.13     No
    <prefix>-openshift-ingress-operator-cloud-credentials              arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials              4.13     No

    1
    After the command runs, it displays all the prefixes associated with your AWS account and notes how many roles are associated with each prefix. If you need to see all of these roles and their details, enter Yes at the detail prompt to list the roles with their specifics.

1.6. Creating a Red Hat OpenShift Service on AWS cluster using the CLI

When using the ROSA CLI, rosa, to create a cluster, you can select the default options to create the cluster quickly.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have available AWS service quotas.
  • You have enabled Red Hat OpenShift Service on AWS in the AWS Console.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host. Run rosa version to see your currently installed version of the ROSA CLI. If a newer version is available, the CLI provides a link to download this upgrade.
  • You have logged in to your Red Hat account by using the ROSA CLI.
  • You have created an OIDC configuration.
  • You have verified that the AWS Elastic Load Balancing (ELB) service role exists in your AWS account.

Procedure

  1. Use one of the following commands to create your Red Hat OpenShift Service on AWS cluster:

    Note

    When creating a Red Hat OpenShift Service on AWS cluster, the default machine Classless Inter-Domain Routing (CIDR) is 10.0.0.0/16. If this does not correspond to the CIDR range for your VPC subnets, add --machine-cidr <address_block> to the following commands. To learn more about the default CIDR ranges for Red Hat OpenShift Service on AWS, see CIDR range definitions.

    • If you did not set environment variables, run the following command:

      $ rosa create cluster --cluster-name=<cluster_name> \ 1
          --mode=auto --hosted-cp [--private] \ 2
          --operator-roles-prefix <operator-role-prefix> \ 3
          --external-id <external-id> \ 4
          --oidc-config-id <id-of-oidc-configuration> \
          --subnet-ids=<public-subnet-id>,<private-subnet-id>
      1
      Specify the name of your cluster. If your cluster name is longer than 15 characters, it will contain an autogenerated domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. To customize the subdomain, use the --domain-prefix flag. The domain prefix cannot be longer than 15 characters, must be unique, and cannot be changed after cluster creation.
      2
      Optional: The --private argument is used to create private Red Hat OpenShift Service on AWS clusters. If you use this argument, ensure that you only use your private subnet ID for --subnet-ids.
      3
      By default, the cluster-specific Operator role names are prefixed with the cluster name and a random 4-digit hash. You can optionally specify a custom prefix to replace <cluster_name>-<hash> in the role names. The prefix is applied when you create the cluster-specific Operator IAM roles. For information about the prefix, see About custom Operator IAM role prefixes.
      Note

      If you specified custom ARN paths when you created the associated account-wide roles, the custom path is automatically detected. The custom path is applied to the cluster-specific Operator roles when you create them in a later step.

      4
      Optional: A unique identifier that might be required when you assume a role in another account.
    • If you set the environment variables, create a cluster with a single, initial machine pool, using either a publicly or privately available API, and a publicly or privately available Ingress by running the following command:

      $ rosa create cluster --private --cluster-name=<cluster_name> \
          --mode=auto --hosted-cp --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
          --oidc-config-id=$OIDC_ID --subnet-ids=$SUBNET_IDS
    • If you set the environment variables, create a cluster with a single, initial machine pool, a publicly available API, and a publicly available Ingress by running the following command:

      $ rosa create cluster --cluster-name=<cluster_name> --mode=auto \
          --hosted-cp --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
          --oidc-config-id=$OIDC_ID --subnet-ids=$SUBNET_IDS
  2. Check the status of your cluster by running the following command:

    $ rosa describe cluster --cluster=<cluster_name>

    The following State field changes are listed in the output as the cluster installation progresses:

    • pending (Preparing account)
    • installing (DNS setup in progress)
    • installing
    • ready

      Note

      If the installation fails or the State field does not change to ready after more than 10 minutes, check the installation troubleshooting documentation for details. For more information, see Troubleshooting installations. For steps to contact Red Hat Support for assistance, see Getting support for Red Hat OpenShift Service on AWS.

  3. Track the progress of the cluster creation by watching the Red Hat OpenShift Service on AWS installation program logs. To check the logs, run the following command:

    $ rosa logs install --cluster=<cluster_name> --watch 1
    1
    Optional: To watch for new log messages as the installation progresses, use the --watch argument.

1.7. Granting user access to a cluster

You can grant a user access to your Red Hat OpenShift Service on AWS cluster by adding them to your configured identity provider.

You can configure different types of identity providers for your Red Hat OpenShift Service on AWS cluster. The following example procedure adds a user to a GitHub organization that is configured for identity provision to the cluster.

Procedure

  1. Navigate to github.com and log in to your GitHub account.
  2. Invite users that require access to the Red Hat OpenShift Service on AWS cluster to your GitHub organization. Follow the steps in Inviting users to join your organization in the GitHub documentation.

1.8. Granting administrator privileges to a user

After you have added a user to your configured identity provider, you can grant the user cluster-admin or dedicated-admin privileges for your Red Hat OpenShift Service on AWS cluster.

Procedure

  • To configure cluster-admin privileges for an identity provider user:

    1. Grant the user cluster-admin privileges:

      $ rosa grant user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1
      1
      Replace <idp_user_name> and <cluster_name> with the name of the identity provider user and your cluster name.

      Example output

      I: Granted role 'cluster-admins' to user '<idp_user_name>' on cluster '<cluster_name>'

    2. Verify that the user is listed as a member of the cluster-admins group:

      $ rosa list users --cluster=<cluster_name>

      Example output

      ID                 GROUPS
      <idp_user_name>    cluster-admins

  • To configure dedicated-admin privileges for an identity provider user:

    1. Grant the user dedicated-admin privileges:

      $ rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>

      Example output

      I: Granted role 'dedicated-admins' to user '<idp_user_name>' on cluster '<cluster_name>'

    2. Verify that the user is listed as a member of the dedicated-admins group:

      $ rosa list users --cluster=<cluster_name>

      Example output

      ID                 GROUPS
      <idp_user_name>    dedicated-admins
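
If you are logged in to the cluster with the OpenShift CLI (oc) as a user with sufficient permissions, you can also inspect the admin groups directly. A sketch; each group is populated only after you grant the corresponding role:

$ oc get group cluster-admins
$ oc get group dedicated-admins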

1.9. Accessing a cluster through the web console

After you have created a cluster administrator user or added a user to your configured identity provider, you can log into your Red Hat OpenShift Service on AWS cluster through the web console.

Procedure

  1. Obtain the console URL for your cluster:

    $ rosa describe cluster -c <cluster_name> | grep Console 1
    1
    Replace <cluster_name> with the name of your cluster.

    Example output

    Console URL:                https://console-openshift-console.apps.example-cluster.wxyz.p1.openshiftapps.com

  2. Go to the console URL in the output of the preceding step and log in.

    • If you created a cluster-admin user, log in by using the provided credentials.
    • If you configured an identity provider for your cluster, select the identity provider name in the Log in with… dialog and complete any authorization requests that are presented by your provider.

1.10. Deploying an application from the Developer Catalog

From the Red Hat OpenShift Service on AWS web console, you can deploy a test application from the Developer Catalog and expose it with a route.

Prerequisites

  • You logged in to the Red Hat Hybrid Cloud Console.
  • You created a Red Hat OpenShift Service on AWS cluster.
  • You configured an identity provider for your cluster.
  • You added your user account to the configured identity provider.

Procedure

  1. Go to the Cluster List page in OpenShift Cluster Manager.
  2. Click the options icon (⋮) next to the cluster you want to view.
  3. Click Open console.
  4. Your cluster console opens in a new browser window. Log in to your Red Hat account with your configured identity provider credentials.
  5. In the Administrator perspective, select Home → Projects → Create Project.
  6. Enter a name for your project and optionally add a Display Name and Description.
  7. Click Create to create the project.
  8. Switch to the Developer perspective and select +Add. Verify that the selected Project is the one that you just created.
  9. In the Developer Catalog dialog, select All services.
  10. In the Developer Catalog page, select Languages → JavaScript from the menu.
  11. Click Node.js, and then click Create to open the Create Source-to-Image application page.

    Note

    You might need to click Clear All Filters to display the Node.js option.

  12. In the Git section, click Try sample.
  13. Add a unique name in the Name field. The value will be used to name the associated resources.
  14. Confirm that Deployment and Create a route are selected.
  15. Click Create to deploy the application. It will take a few minutes for the pods to deploy.
  16. Optional: Check the status of the pods in the Topology pane by selecting your Node.js app and reviewing its sidebar. You must wait for the nodejs build to complete and for the nodejs pod to be in a Running state before continuing.
  17. When the deployment is complete, click the route URL for the application, which has a format similar to the following:

    https://nodejs-<project>.<cluster_name>.<hash>.<region>.openshiftapps.com/

    A new tab in your browser opens with a message similar to the following:

    Welcome to your Node.js application on OpenShift
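
    You can also check the route from a terminal. For example, a quick status check with curl against your route URL:

    $ curl -sI https://nodejs-<project>.<cluster_name>.<hash>.<region>.openshiftapps.com/ | head -n 1
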
  18. Optional: Delete the application and clean up the resources that you created:

    1. In the Administrator perspective, navigate to Home → Projects.
    2. Click the action menu for your project and select Delete Project.

1.11. Revoking administrator privileges and user access

You can revoke cluster-admin or dedicated-admin privileges from a user by using the ROSA CLI, rosa.

To revoke cluster access from a user, you must remove the user from your configured identity provider.

Follow the procedures in this section to revoke administrator privileges or cluster access from a user.

Revoking administrator privileges from a user

Follow the steps in this section to revoke cluster-admin or dedicated-admin privileges from a user.

Procedure

  • To revoke cluster-admin privileges from an identity provider user:

    1. Revoke the cluster-admin privilege:

      $ rosa revoke user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1
      1
      Replace <idp_user_name> and <cluster_name> with the name of the identity provider user and your cluster name.

      Example output

      ? Are you sure you want to revoke role cluster-admins from user <idp_user_name> in cluster <cluster_name>? Yes
      I: Revoked role 'cluster-admins' from user '<idp_user_name>' on cluster '<cluster_name>'

    2. Verify that the user is not listed as a member of the cluster-admins group:

      $ rosa list users --cluster=<cluster_name>

      Example output

      W: There are no users configured for cluster '<cluster_name>'

  • To revoke dedicated-admin privileges from an identity provider user:

    1. Revoke the dedicated-admin privilege:

      $ rosa revoke user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>

      Example output

      ? Are you sure you want to revoke role dedicated-admins from user <idp_user_name> in cluster <cluster_name>? Yes
      I: Revoked role 'dedicated-admins' from user '<idp_user_name>' on cluster '<cluster_name>'

    2. Verify that the user is not listed as a member of the dedicated-admins group:

      $ rosa list users --cluster=<cluster_name>

      Example output

      W: There are no users configured for cluster '<cluster_name>'

Revoking user access to a cluster

You can revoke cluster access for an identity provider user by removing them from your configured identity provider.

You can configure different types of identity providers for your Red Hat OpenShift Service on AWS cluster. The following example procedure revokes cluster access for a member of a GitHub organization that is configured for identity provision to the cluster.

Procedure

  1. Navigate to github.com and log in to your GitHub account.
  2. Remove the user from your GitHub organization. Follow the steps in Removing a member from your organization in the GitHub documentation.

1.12. Deleting a cluster and the AWS IAM resources

You can delete a Red Hat OpenShift Service on AWS cluster by using the ROSA CLI, rosa. You can also use the ROSA CLI to delete the AWS Identity and Access Management (IAM) account-wide roles, the cluster-specific Operator roles, and the OpenID Connect (OIDC) provider. To delete the account-wide and Operator policies, you can use the AWS IAM Console or the AWS CLI.

Important

Account-wide IAM roles and policies might be used by other Red Hat OpenShift Service on AWS clusters in the same AWS account. You must only remove the resources if they are not required by other clusters.

Procedure

  1. Delete a cluster and watch the logs, replacing <cluster_name> with the name or ID of your cluster:

    $ rosa delete cluster --cluster=<cluster_name> --watch
    Important

    You must wait for the cluster deletion to complete before you remove the IAM roles, policies, and OIDC provider. The account-wide roles are required to delete the resources created by the installer. The cluster-specific Operator roles are required to clean up the resources created by the OpenShift Operators. The Operators use the OIDC provider to authenticate with AWS APIs.

  2. After the cluster is deleted, delete the OIDC provider that the cluster Operators use to authenticate:

    $ rosa delete oidc-provider -c <cluster_id> --mode auto 1
    1
    Replace <cluster_id> with the ID of the cluster.
    Note

    You can use the -y option to automatically answer yes to the prompts.

  3. Delete the cluster-specific Operator IAM roles:

    $ rosa delete operator-roles -c <cluster_id> --mode auto 1
    1
    Replace <cluster_id> with the ID of the cluster.
  4. Delete the account-wide roles:

    Important

    Account-wide IAM roles and policies might be used by other Red Hat OpenShift Service on AWS clusters in the same AWS account. You must only remove the resources if they are not required by other clusters.

    $ rosa delete account-roles --prefix <prefix> --mode auto 1
    1
    You must include the --prefix argument. Replace <prefix> with the prefix of the account-wide roles to delete. If you did not specify a custom prefix when you created the account-wide roles, specify the default prefix, HCP-ROSA or ManagedOpenShift, depending on how the roles were created.
  5. Delete the account-wide and Operator IAM policies that you created for Red Hat OpenShift Service on AWS deployments:

    1. Log in to the AWS IAM Console.
    2. Navigate to Access management → Policies and select the checkbox for one of the account-wide policies.
    3. With the policy selected, click Actions → Delete to open the delete policy dialog.
    4. Enter the policy name to confirm the deletion and select Delete to delete the policy.
    5. Repeat this step to delete each of the account-wide and Operator policies for the cluster.
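
    If you prefer the aws CLI to the console for this cleanup, the same deletion can be scripted. A sketch with a placeholder policy ARN; note that a policy must be detached from all roles, and its non-default versions deleted, before aws iam delete-policy succeeds:

    $ aws iam list-policies --scope Local --query 'Policies[].Arn'
    $ aws iam delete-policy --policy-arn <policy_arn>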

Chapter 2. Creating Red Hat OpenShift Service on AWS clusters using the default options

Red Hat OpenShift Service on AWS clusters that use hosted control planes offer a more efficient and reliable architecture for creating Red Hat OpenShift Service on AWS clusters. With hosted control planes, each cluster has a dedicated control plane that is isolated in the AWS account.

Create a Red Hat OpenShift Service on AWS cluster quickly by using the default options and automatic AWS Identity and Access Management (IAM) resource creation. You can deploy your cluster by using the ROSA CLI (rosa).

Important

Because it is not possible to upgrade or convert existing Red Hat OpenShift Service on AWS (classic architecture) clusters to the hosted control plane architecture, you must create a new cluster to use Red Hat OpenShift Service on AWS functionality.

Note

Red Hat OpenShift Service on AWS clusters only support AWS IAM Security Token Service (STS) authentication.


Considerations regarding auto creation mode

The procedures in this document use the auto mode in the ROSA CLI to immediately create the required IAM resources using the current AWS account. The required resources include the account-wide IAM roles and policies, cluster-specific Operator roles and policies, and OpenID Connect (OIDC) identity provider.

Alternatively, you can use manual mode, which outputs the aws commands needed to create the IAM resources instead of deploying them automatically.
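
For example, to generate the aws commands for the account-wide roles instead of creating them immediately, you could run the account role step from this guide in manual mode:

$ rosa create account-roles --hosted-cp --mode manual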


2.1. Overview of the default cluster specifications

You can quickly create a Red Hat OpenShift Service on AWS cluster by using the default installation options.

The following summary describes the default cluster specifications.

Table 2.1. Default Red Hat OpenShift Service on AWS cluster specifications

Accounts and roles

  • Default IAM role prefix: HCP-ROSA

Cluster settings

  • Default cluster version: Latest
  • Default AWS region for installations using the ROSA CLI (rosa): Defined by your aws CLI configuration
  • Default EC2 IMDS endpoints (both v1 and v2) are enabled
  • Availability: Single zone for the data plane
  • Monitoring for user-defined projects: Enabled
  • No cluster admin role created

Compute node machine pool

  • Compute node instance type: m5.xlarge (4 vCPU, 16 GiB RAM)
  • Compute node count: 2
  • Autoscaling: Not enabled
  • No additional node labels

Networking configuration

  • Cluster privacy: Public
  • No cluster-wide proxy is configured

Classless Inter-Domain Routing (CIDR) ranges

  • Machine CIDR: 10.0.0.0/16
  • Service CIDR: 172.30.0.0/16
  • Pod CIDR: 10.128.0.0/14
  • Host prefix: /23

    Note

    The static IP address 172.20.0.1 is reserved for the internal Kubernetes API address. The machine, pod, and service CIDR ranges must not conflict with this IP address.

Cluster roles and policies

  • Mode used to create the Operator roles and the OpenID Connect (OIDC) provider: auto

    Note

    For installations that use OpenShift Cluster Manager on the Hybrid Cloud Console, the auto mode requires an admin-privileged OpenShift Cluster Manager role (ocm-role).

  • Default Operator role prefix: <cluster_name>-<4_digit_random_string>

Storage

  • Node volumes:

    • Type: AWS EBS GP3
    • Default size: 300 GiB (adjustable at creation time)
  • Workload persistent volumes:

    • Default StorageClass: gp3-csi
    • Provisioner: ebs.csi.aws.com
    • Dynamic persistent volume provisioning

Cluster update strategy

  • Individual updates
  • 1 hour grace period for node draining

2.2. Red Hat OpenShift Service on AWS Prerequisites

To create a Red Hat OpenShift Service on AWS cluster, you must have the following items:

  • A configured virtual private cloud (VPC)
  • Account-wide roles
  • An OIDC configuration
  • Operator roles

2.2.1. Creating a Virtual Private Cloud

You must have an AWS Virtual Private Cloud (VPC) to create a Red Hat OpenShift Service on AWS cluster. You can use the following methods to create a VPC:

  • Create a VPC using the ROSA CLI
  • Create a VPC by using a Terraform template
  • Manually create the VPC resources in the AWS console
Note

The Terraform instructions are for testing and demonstration purposes. Your own installation might require modifications to the VPC for your own use case. You should also ensure that when you use this Terraform configuration, it is in the same region where you intend to install your cluster. These examples use us-east-2.

Creating an AWS VPC using the ROSA CLI

The rosa create network command is available in version 1.2.48 or later of the ROSA CLI. The command uses AWS CloudFormation to create a VPC and the associated networking components that are necessary to install a Red Hat OpenShift Service on AWS cluster. CloudFormation is a native AWS infrastructure-as-code tool and is compatible with the AWS CLI.

If you do not specify a template, CloudFormation uses a default template that creates resources with the following parameters:

  • Availability zones: 1
  • Region: us-east-1
  • VPC CIDR: 10.0.0.0/16

You can create and customize CloudFormation templates to use with the rosa create network command. See the additional resources of this section for information on the default VPC template.

Prerequisites

  • You have configured your AWS account.
  • You have configured your Red Hat account.
  • You have installed and configured the latest version of the ROSA CLI.

Procedure

  1. Create an AWS VPC using the default CloudFormation template by running the following command:

    $ rosa create network
  2. Optional: Customize your VPC by specifying additional parameters.

    You can use the --param flag to specify changes to the default VPC template. The following example command specifies custom values for Region, Name, AvailabilityZoneCount, and VpcCidr.

    $ rosa create network --param Region=us-east-2 --param Name=quickstart-stack --param AvailabilityZoneCount=3 --param VpcCidr=10.0.0.0/16

    The command takes about 5 minutes to run and provides regular status updates from AWS as resources are created. If there is an issue with CloudFormation, a rollback is attempted. For all other errors, follow the error message instructions or contact AWS Support.

Verification

  • When completed, you receive a summary of the created resources:

    INFO[0140] Resources created in stack:
    INFO[0140] Resource: AttachGateway, Type: AWS::EC2::VPCGatewayAttachment, ID: <gateway_id>
    INFO[0140] Resource: EC2VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: EcrApiVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: EcrDkrVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: ElasticIP1, Type: AWS::EC2::EIP, ID: <IP>
    INFO[0140] Resource: ElasticIP2, Type: AWS::EC2::EIP, ID: <IP>
    INFO[0140] Resource: InternetGateway, Type: AWS::EC2::InternetGateway, ID: igw-016e1a71b9812464e
    INFO[0140] Resource: KMSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: NATGateway1, Type: AWS::EC2::NatGateway, ID: <nat-gateway_id>
    INFO[0140] Resource: PrivateRoute, Type: AWS::EC2::Route, ID: <route_id>
    INFO[0140] Resource: PrivateRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
    INFO[0140] Resource: PrivateSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
    INFO[0140] Resource: PublicRoute, Type: AWS::EC2::Route, ID: <route_id>
    INFO[0140] Resource: PublicRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
    INFO[0140] Resource: PublicSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
    INFO[0140] Resource: S3VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: STSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: SecurityGroup, Type: AWS::EC2::SecurityGroup, ID: <security-group_id>
    INFO[0140] Resource: SubnetPrivate1, Type: AWS::EC2::Subnet, ID: <private_subnet_id-1> 1
    INFO[0140] Resource: SubnetPublic1, Type: AWS::EC2::Subnet, ID: <public_subnet_id-1> 2
    INFO[0140] Resource: VPC, Type: AWS::EC2::VPC, ID: <vpc_id>
    INFO[0140] Stack rosa-network-stack-5555 created 3
    1 2
    These two subnet IDs are used to create your cluster when using the rosa create cluster command.
    3
    The network stack name is used to delete the resource later.
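
    As noted in callout 3, the stack name is what you use to remove the network later. For example, a cleanup with the AWS CLI would look like this:

    $ aws cloudformation delete-stack --stack-name rosa-network-stack-5555
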
Creating a Virtual Private Cloud using Terraform

Terraform is a tool that allows you to create various resources using an established template. The following process uses the default options as required to create a Red Hat OpenShift Service on AWS cluster. For more information about using Terraform, see the additional resources.

Prerequisites

  • You have installed Terraform version 1.4.0 or newer on your machine.
  • You have installed Git on your machine.

Procedure

  1. Open a shell prompt and clone the Terraform VPC repository by running the following command:

    $ git clone https://github.com/openshift-cs/terraform-vpc-example
  2. Navigate to the created directory by running the following command:

    $ cd terraform-vpc-example
  3. Initialize Terraform by running the following command:

    $ terraform init

    A message confirming the initialization appears when this process completes.

  4. Build your VPC Terraform plan based on the existing Terraform template by running the plan command. You must include your AWS region; you can optionally specify a cluster name. A rosa.tfplan file is added to the hypershift-tf directory after the terraform plan completes. For more detailed options, see the Terraform VPC repository’s README file.

    $ terraform plan -out rosa.tfplan -var region=<region>
  5. Apply this plan file to build your VPC by running the following command:

    $ terraform apply rosa.tfplan
    1. Optional: You can capture the values of the Terraform-provisioned private, public, and machinepool subnet IDs as environment variables to use when creating your Red Hat OpenShift Service on AWS cluster by running the following commands:

      $ export SUBNET_IDS=$(terraform output -raw cluster-subnets-string)
    2. Verify that the variables were correctly set with the following command:

      $ echo $SUBNET_IDS

      Example output

      subnet-0a6a57e0f784171aa,subnet-078e84e5b10ecf5b0

Creating an AWS Virtual Private Cloud manually

If you choose to manually create your AWS Virtual Private Cloud (VPC) instead of using Terraform, go to the VPC page in the AWS console.

Your VPC must meet the requirements shown in the following table.

Table 2.2. Requirements for your VPC

  • VPC name: You need to have the specific VPC name and ID when creating your cluster.
  • CIDR range: Your VPC CIDR range should match your machine CIDR.
  • Availability zones: You need one availability zone for a single-zone cluster and three availability zones for a multi-zone cluster.
  • Public subnet: You must have one public subnet with a NAT gateway for public clusters. Private clusters do not need a public subnet.
  • DNS hostname and resolution: You must ensure that DNS hostnames and DNS resolution are enabled.
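If you prefer to script the manual steps, the following is a minimal sketch of creating a VPC that satisfies the DNS requirements by using the aws CLI. The CIDR block and region are example values, and you must still create the subnets, route tables, internet gateway, and NAT gateway described elsewhere in this guide:

    # Example values only; align the CIDR with your machine CIDR and region with your cluster region
    $ aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region us-east-2
    # modify-vpc-attribute accepts one attribute per call
    $ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-hostnames
    $ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-support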

2.2.2. Troubleshooting

If your cluster fails to install, troubleshoot these common issues:

  • Make sure your DHCP option set includes a domain name, and ensure that the domain name does not include any spaces or capital letters.
  • If your VPC uses a custom DNS resolver (the domain name servers field of your DHCP option set is not AmazonProvidedDNS), make sure it can properly resolve the private hosted zones configured in Route 53.

For more information about troubleshooting Red Hat OpenShift Service on AWS cluster installations, see Troubleshooting Red Hat OpenShift Service on AWS cluster installations.

2.2.2.1. Get support

If you need additional support, visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources.

Tagging your subnets

Before you can use your VPC to create a Red Hat OpenShift Service on AWS cluster, you must tag your VPC subnets. Automated service preflight checks verify that these resources are tagged correctly before you can use these resources for a cluster. The following table shows how your resources should be tagged:

Resource         Key                              Value
Public subnet    kubernetes.io/role/elb           1 (or no value)
Private subnet   kubernetes.io/role/internal-elb  1 (or no value)

Note

You must tag at least one private subnet and, if applicable, one public subnet.

Prerequisites

  • You have created a VPC.
  • You have installed the aws CLI.

Procedure

  1. Tag your resources in your terminal by running the following commands:

    1. For public subnets, run:

      $ aws ec2 create-tags --resources <public-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1
    2. For private subnets, run:

      $ aws ec2 create-tags --resources <private-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1
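    If you have several subnets of the same type, note that aws ec2 create-tags accepts multiple resource IDs in a single call. A minimal sketch with placeholder subnet IDs:

      # Placeholder subnet IDs; pass as many as you need to tag
      $ aws ec2 create-tags --region <aws_region> \
          --resources <private-subnet-id-1> <private-subnet-id-2> \
          --tags Key=kubernetes.io/role/internal-elb,Value=1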

Verification

  • Verify that the tag is correctly applied by running the following command:

    $ aws ec2 describe-tags --filters "Name=resource-id,Values=<subnet_id>"

    Example output

    TAGS    Name                    <subnet-id>        subnet  <prefix>-subnet-public1-us-east-1a
    TAGS    kubernetes.io/role/elb  <subnet-id>        subnet  1

2.2.3. Creating the account-wide STS roles and policies

Before you create your Red Hat OpenShift Service on AWS cluster, you must create the required account-wide roles and policies.

Note

Specific AWS-managed policies for Red Hat OpenShift Service on AWS must be attached to each role. Customer-managed policies must not be used with these required account roles. For more information regarding AWS-managed policies for Red Hat OpenShift Service on AWS clusters, see AWS managed policies for ROSA.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have available AWS service quotas.
  • You have enabled Red Hat OpenShift Service on AWS in the AWS Console.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.
  • You have logged in to your Red Hat account by using the ROSA CLI.

Procedure

  1. If they do not exist in your AWS account, create the required account-wide STS roles and attach the policies by running the following command:

    $ rosa create account-roles --hosted-cp
  2. Optional: Set your prefix as an environment variable by running the following command:

    $ export ACCOUNT_ROLES_PREFIX=<account_role_prefix>
    • View the value of the variable by running the following command:

      $ echo $ACCOUNT_ROLES_PREFIX

      Example output

      ManagedOpenShift
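    You can also confirm that the roles now exist by listing the account roles in your AWS account:

      $ rosa list account-roles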

For more information regarding AWS managed IAM policies for Red Hat OpenShift Service on AWS, see AWS managed IAM policies for ROSA.

2.2.4. Creating an OpenID Connect configuration

When creating a Red Hat OpenShift Service on AWS cluster, you can create the OpenID Connect (OIDC) configuration before creating your cluster. This configuration is registered to be used with OpenShift Cluster Manager.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI, rosa, on your installation host.

Procedure

  1. To create your OIDC configuration alongside the AWS resources, run the following command:

    $ rosa create oidc-config --mode=auto --yes

    This command returns the following information.

    Example output

    ? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes
    I: Setting up managed OIDC configuration
    I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
    	rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b
    If you are going to create a Hosted Control Plane cluster please include '--hosted-cp'
    I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName'
    ? Create the OIDC provider? Yes
    I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'

    When creating your cluster, you must supply the OIDC config ID. The CLI output provides this value when you use --mode auto; otherwise, you must determine these values based on the aws CLI output when you use --mode manual.

  2. Optional: You can save the OIDC configuration ID as an environment variable to use later. Run the following command to save the variable:

    $ export OIDC_ID=<oidc_config_id> 1
    1 In the example output above, the OIDC configuration ID is 13cdr6b.
    • View the value of the variable by running the following command:

      $ echo $OIDC_ID

      Example output

      13cdr6b

Verification

  • You can list the OIDC configurations available for clusters that are associated with your user organization. Run the following command:

    $ rosa list oidc-config
    Copy to Clipboard Toggle word wrap

    Example output

    ID                                MANAGED  ISSUER URL                                                             SECRET ARN
    2330dbs0n8m3chkkr25gkkcd8pnj3lk2  true     https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2
    233hvnrjoqu14jltk6lhbhf2tj11f8un  false    https://oidc-r7u1.s3.us-east-1.amazonaws.com                           aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN

2.2.5. Creating Operator roles and policies

When you deploy a Red Hat OpenShift Service on AWS cluster, you must create the Operator IAM roles. The cluster Operators use the Operator roles and policies to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage and external access to a cluster.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.
  • You created the account-wide AWS roles.

Procedure

  1. To create your Operator roles, run the following command:

    $ rosa create operator-roles --hosted-cp --prefix=$OPERATOR_ROLES_PREFIX --oidc-config-id=$OIDC_ID --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role
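    This command assumes that the $OPERATOR_ROLES_PREFIX, $OIDC_ID, $ACCOUNT_ROLES_PREFIX, and $AWS_ACCOUNT_ID environment variables are set. As a minimal sketch, you can populate the AWS account ID from your current AWS identity:

      # Assumes the aws CLI is configured for the same account as your cluster
      $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)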

    The following breakdown provides options for the Operator role creation.

    $ rosa create operator-roles --hosted-cp \
    	--prefix=$OPERATOR_ROLES_PREFIX \ 1
    	--oidc-config-id=$OIDC_ID \ 2
    	--installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/$ACCOUNT_ROLES_PREFIX-HCP-ROSA-Installer-Role 3
    1 You must supply a prefix when creating these Operator roles. Failing to do so produces an error. See the Additional resources of this section for information on the Operator prefix.
    2 This value is the OIDC configuration ID that you created for your Red Hat OpenShift Service on AWS cluster.
    3 This value is the installer role ARN that you created when you created the Red Hat OpenShift Service on AWS account roles.

    You must include the --hosted-cp parameter to create the correct roles for Red Hat OpenShift Service on AWS clusters. This command returns the following information.

    Example output

    ? Role creation mode: auto
    ? Operator roles prefix: <pre-filled_prefix> 1
    ? OIDC Configuration ID: 23soa2bgvpek9kmes9s7os0a39i13qm4 | https://dvbwgdztaeq9o.cloudfront.net/23soa2bgvpek9kmes9s7os0a39i13qm4 2
    ? Create hosted control plane operator roles: Yes
    W: More than one Installer role found
    ? Installer role ARN: arn:aws:iam::4540112244:role/<prefix>-HCP-ROSA-Installer-Role
    ? Permissions boundary ARN (optional):
    I: Reusable OIDC Configuration detected. Validating trusted relationships to operator roles:
    I: Creating roles using 'arn:aws:iam::4540112244:user/<userName>'
    I: Created role '<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials'
    I: Created role '<prefix>-openshift-cloud-network-config-controller-cloud-credenti' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti'
    I: Created role '<prefix>-kube-system-kube-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager'
    I: Created role '<prefix>-kube-system-capa-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager'
    I: Created role '<prefix>-kube-system-control-plane-operator' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator'
    I: Created role '<prefix>-kube-system-kms-provider' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider'
    I: Created role '<prefix>-openshift-image-registry-installer-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials'
    I: Created role '<prefix>-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials'
    I: To create a cluster with these roles, run the following command:
    	rosa create cluster --sts --oidc-config-id 23soa2bgvpek9kmes9s7os0a39i13qm4 --operator-roles-prefix <prefix> --hosted-cp

    1 This field is prepopulated with the prefix that you set in the initial creation command.
    2 This field requires you to select an OIDC configuration that you created for your Red Hat OpenShift Service on AWS cluster.

    The Operator roles are now created and ready to use for creating your Red Hat OpenShift Service on AWS cluster.

Verification

  • You can list the Operator roles associated with your Red Hat OpenShift Service on AWS account. Run the following command:

    $ rosa list operator-roles

    Example output

    I: Fetching operator roles
    ROLE PREFIX  AMOUNT IN BUNDLE
    <prefix>      8
    ? Would you like to detail a specific prefix Yes 1
    ? Operator Role Prefix: <prefix>
    ROLE NAME                                                         ROLE ARN                                                                                         VERSION  MANAGED
    <prefix>-kube-system-capa-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager                       4.13     No
    <prefix>-kube-system-control-plane-operator                        arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator                        4.13     No
    <prefix>-kube-system-kms-provider                                  arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider                                  4.13     No
    <prefix>-kube-system-kube-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager                       4.13     No
    <prefix>-openshift-cloud-network-config-controller-cloud-credenti  arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti  4.13     No
    <prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       4.13     No
    <prefix>-openshift-image-registry-installer-cloud-credentials      arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials      4.13     No
    <prefix>-openshift-ingress-operator-cloud-credentials              arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials              4.13     No

    1 After the command runs, it displays all the prefixes associated with your AWS account and notes how many roles are associated with each prefix. If you need to see all of these roles and their details, enter "Yes" at the detail prompt to have the roles listed with their specifics.

2.3. Creating a Red Hat OpenShift Service on AWS cluster using the CLI

When using the ROSA CLI, rosa, to create a cluster, you can select the default options to create the cluster quickly.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have available AWS service quotas.
  • You have enabled Red Hat OpenShift Service on AWS in the AWS Console.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host. Run rosa version to see your currently installed version of the ROSA CLI. If a newer version is available, the CLI provides a link to download this upgrade.
  • You have logged in to your Red Hat account by using the ROSA CLI.
  • You have created an OIDC configuration.
  • You have verified that the AWS Elastic Load Balancing (ELB) service role exists in your AWS account.

Procedure

  1. Use one of the following commands to create your Red Hat OpenShift Service on AWS cluster:

    Note

    When creating a Red Hat OpenShift Service on AWS cluster, the default machine Classless Inter-Domain Routing (CIDR) is 10.0.0.0/16. If this does not correspond to the CIDR range for your VPC subnets, add --machine-cidr <address_block> to the following commands. To learn more about the default CIDR ranges for Red Hat OpenShift Service on AWS, see CIDR range definitions.

    • If you did not set environment variables, run the following command:

      $ rosa create cluster --cluster-name=<cluster_name> \ 1
          --mode=auto --hosted-cp [--private] \ 2
          --operator-roles-prefix <operator-role-prefix> \ 3
          --external-id <external-id> \ 4
          --oidc-config-id <id-of-oidc-configuration> \
          --subnet-ids=<public-subnet-id>,<private-subnet-id>
      1 Specify the name of your cluster. If your cluster name is longer than 15 characters, it will contain an autogenerated domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. To customize the subdomain, use the --domain-prefix flag. The domain prefix cannot be longer than 15 characters, must be unique, and cannot be changed after cluster creation.
      2 Optional: The --private argument is used to create private Red Hat OpenShift Service on AWS clusters. If you use this argument, ensure that you only use your private subnet ID for --subnet-ids.
      3 By default, the cluster-specific Operator role names are prefixed with the cluster name and a random 4-digit hash. You can optionally specify a custom prefix to replace <cluster_name>-<hash> in the role names. The prefix is applied when you create the cluster-specific Operator IAM roles. For information about the prefix, see About custom Operator IAM role prefixes.

      Note

      If you specified custom ARN paths when you created the associated account-wide roles, the custom path is automatically detected. The custom path is applied to the cluster-specific Operator roles when you create them in a later step.

      4 Optional: A unique identifier that might be required when you assume a role in another account.
    • If you set the environment variables, create a cluster with a single, initial machine pool, a privately available API, and a privately available Ingress by running the following command:

      $ rosa create cluster --private --cluster-name=<cluster_name> \
          --mode=auto --hosted-cp --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
          --oidc-config-id=$OIDC_ID --subnet-ids=$SUBNET_IDS
    • If you set the environment variables, create a cluster with a single, initial machine pool, a publicly available API, and a publicly available Ingress by running the following command:

      $ rosa create cluster --cluster-name=<cluster_name> --mode=auto \
          --hosted-cp --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
          --oidc-config-id=$OIDC_ID --subnet-ids=$SUBNET_IDS
  2. Check the status of your cluster by running the following command:

    $ rosa describe cluster --cluster=<cluster_name>

    The following State field changes are listed in the output as the cluster installation progresses:

    • pending (Preparing account)
    • installing (DNS setup in progress)
    • installing
    • ready

      Note

      If the installation fails or the State field does not change to ready after more than 10 minutes, check the installation troubleshooting documentation for details. For more information, see Troubleshooting installations. For steps to contact Red Hat Support for assistance, see Getting support for Red Hat OpenShift Service on AWS.

  3. Track the progress of the cluster creation by watching the Red Hat OpenShift Service on AWS installation program logs. To check the logs, run the following command:

    $ rosa logs install --cluster=<cluster_name> --watch 1
    1 Optional: To watch for new log messages as the installation progresses, use the --watch argument.

Chapter 3. Creating a ROSA cluster using Terraform

Create a Red Hat OpenShift Service on AWS cluster quickly by using a Terraform cluster template that is configured with the default cluster options.

The cluster creation process described below uses a Terraform configuration that prepares a Red Hat OpenShift Service on AWS cluster with the following resources:

  • An OIDC provider with a managed oidc-config configuration
  • Prerequisite IAM Operator roles with associated AWS Managed Red Hat OpenShift Service on AWS Policies
  • IAM account roles with associated AWS Managed Red Hat OpenShift Service on AWS Policies
  • All other AWS resources required to create a Red Hat OpenShift Service on AWS cluster

3.1.1. Overview of Terraform

Terraform is an infrastructure-as-code tool that provides a way to configure your resources once and replicate those resources as desired. Terraform accomplishes the creation tasks by using declarative language. You declare what you want the final state of the infrastructure resource to be, and Terraform creates these resources to your specifications.

Prerequisites

To use the Red Hat Cloud Services provider inside your Terraform configuration, you must meet the following prerequisites:

  • You have installed the ROSA CLI tool.
  • You have your offline Red Hat OpenShift Cluster Manager token.
  • You have installed Terraform version 1.4.6 or newer.
  • You have created your AWS account-wide IAM roles.

    The specific account-wide IAM roles and policies provide the STS permissions required for Red Hat OpenShift Service on AWS support, installation, control plane, and compute functionality. This includes account-wide Operator policies. See the Additional resources for more information on the AWS account roles.

  • You have an AWS account and associated credentials that allow you to create resources. The credentials are configured for the AWS provider, as shown in the sketch after the permissions example below. See the Authentication and Configuration section in the AWS Terraform provider documentation.
  • You have, at minimum, the following permissions in your AWS IAM role policy that is operating Terraform. Check for these permissions in the AWS console.

    Example 3.1. Minimum AWS permissions for Terraform

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": [
            "iam:GetPolicyVersion",
            "iam:DeletePolicyVersion",
            "iam:CreatePolicyVersion",
            "iam:UpdateAssumeRolePolicy",
            "secretsmanager:DescribeSecret",
            "iam:ListRoleTags",
            "secretsmanager:PutSecretValue",
            "secretsmanager:CreateSecret",
            "iam:TagRole",
            "secretsmanager:DeleteSecret",
            "iam:UpdateOpenIDConnectProviderThumbprint",
            "iam:DeletePolicy",
            "iam:CreateRole",
            "iam:AttachRolePolicy",
            "iam:ListInstanceProfilesForRole",
            "secretsmanager:GetSecretValue",
            "iam:DetachRolePolicy",
            "iam:ListAttachedRolePolicies",
            "iam:ListPolicyTags",
            "iam:ListRolePolicies",
            "iam:DeleteOpenIDConnectProvider",
            "iam:DeleteInstanceProfile",
            "iam:GetRole",
            "iam:GetPolicy",
            "iam:ListEntitiesForPolicy",
            "iam:DeleteRole",
            "iam:TagPolicy",
            "iam:CreateOpenIDConnectProvider",
            "iam:CreatePolicy",
            "secretsmanager:GetResourcePolicy",
            "iam:ListPolicyVersions",
            "iam:UpdateRole",
            "iam:GetOpenIDConnectProvider",
            "iam:TagOpenIDConnectProvider",
            "secretsmanager:TagResource",
            "sts:AssumeRoleWithWebIdentity",
            "iam:ListRoles"
          ],
          "Resource": [
            "arn:aws:secretsmanager:*:<ACCOUNT_ID>:secret:*",
            "arn:aws:iam::<ACCOUNT_ID>:instance-profile/*",
            "arn:aws:iam::<ACCOUNT_ID>:role/*",
            "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/*",
            "arn:aws:iam::<ACCOUNT_ID>:policy/*"
          ]
        },
        {
          "Sid": "VisualEditor1",
          "Effect": "Allow",
          "Action": [
            "s3:*"
          ],
          "Resource": "*"
        }
      ]
    }
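As a minimal sketch of one supported authentication method, assuming static credentials rather than a shared profile or SSO, the AWS provider reads the standard AWS environment variables:

    # Assumes static credentials; shared profiles and SSO are also supported by the provider
    $ export AWS_ACCESS_KEY_ID=<access_key_id>
    $ export AWS_SECRET_ACCESS_KEY=<secret_access_key>
    $ export AWS_REGION=us-east-2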
Considerations when using Terraform

In general, using Terraform to manage cloud resources should be done with the expectation that any changes should be done using the Terraform methodology. Use caution when using tools outside of Terraform, such as the AWS console or Red Hat console, to modify cloud resources created by Terraform. Using tools outside Terraform to manage cloud resources that are already managed by Terraform introduces configuration drift from your declared Terraform configuration.

For example, if you upgrade your Terraform-created cluster by using the Red Hat Hybrid Cloud Console, you need to reconcile your Terraform state before applying any forthcoming configuration changes. For more information, see Manage resources in Terraform state in the HashiCorp Developer documentation.
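A minimal sketch of reconciling state without changing real infrastructure uses Terraform's refresh-only mode:

    # Updates the state file to match the real resources without modifying them
    $ terraform apply -refresh-only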

3.1.2. Overview of the default cluster specifications

You can quickly create a Red Hat OpenShift Service on AWS cluster by using the default installation options.

The following summary describes the default cluster specifications.

Table 3.1. Default Red Hat OpenShift Service on AWS cluster specifications
ComponentDefault specifications

Accounts and roles

  • Default IAM role prefix: rosa-<6-digit-alphanumeric-string>

Cluster settings

  • Default cluster version: 4.14
  • Cluster name: rosa-<6-digit-alphanumeric-string>
  • Default AWS region for installations using the Red Hat OpenShift Cluster Manager Hybrid Cloud Console: us-east-2 (US East, Ohio)
  • Availability: Multi zone for the data plane
  • EC2 Instance Metadata Service (IMDS) is enabled and allows the use of IMDSv1 or IMDSv2 (token optional)
  • Monitoring for user-defined projects: Enabled
  • No cluster admin role created

Compute node machine pool

  • Compute node instance type: m5.xlarge (4 vCPU, 16 GiB RAM)
  • Compute node count: 3
  • Autoscaling: Not enabled
  • No additional node labels

Networking configuration

  • Cluster privacy: public or private
  • You can choose to create a new VPC during the Terraform cluster creation process.
  • No cluster-wide proxy is configured

Classless Inter-Domain Routing (CIDR) ranges

  • Machine CIDR: 10.0.0.0/16
  • Service CIDR: 172.30.0.0/16
  • Pod CIDR: 10.128.0.0/14
  • Host prefix: /23

    Note

    The static IP address 172.20.0.1 is reserved for the internal Kubernetes API address. The machine, pod, and service CIDR ranges must not conflict with this IP address.

Cluster roles and policies

  • Mode used to create the Operator roles and the OpenID Connect (OIDC) provider: auto

    Note

    For installations that use OpenShift Cluster Manager on the Hybrid Cloud Console, the auto mode requires an admin-privileged OpenShift Cluster Manager role (ocm-role).

  • Default Operator role prefix: rosa-<6-digit-alphanumeric-string>

Storage

  • Node volumes:

    • Type: AWS EBS GP3
    • Default size: 300GiB (adjustable at creation time)
  • Workload persistent volumes:

    • Default StorageClass: gp3-csi
    • Provisioner: ebs.csi.aws.com
    • Dynamic persistent volume provisioning

Cluster update strategy

  • Individual updates
  • 1 hour grace period for node draining

The cluster creation process outlined below shows how to use Terraform to create your account-wide IAM roles and a Red Hat OpenShift Service on AWS cluster with a managed OIDC configuration.

3.1.3.1. Preparing your environment for Terraform

Before you can create your Red Hat OpenShift Service on AWS cluster by using Terraform, you need to export your offline Red Hat OpenShift Cluster Manager token.

Procedure

  1. Optional: Because the Terraform files get created in your current directory during this procedure, you can create a new directory to store these files and navigate into it by running the following command:

    $ mkdir terraform-cluster && cd terraform-cluster
  2. Grant permissions to your account by using an offline Red Hat OpenShift Cluster Manager token.
  3. Copy your offline token, and set the token as an environment variable by running the following command:

    $ export RHCS_TOKEN=<your_offline_token>
    Note

    This environment variable resets at the end of each session, such as when you restart your machine or close the terminal.
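    To persist the token across sessions, a minimal sketch assuming a Bash shell follows. Note that storing tokens in plain text has security implications:

      # Appends the export to your Bash profile; consider your environment's security requirements
      $ echo 'export RHCS_TOKEN=<your_offline_token>' >> ~/.bashrc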

Verification

  • After you export your token, verify the value by running the following command:

    $ echo $RHCS_TOKEN
3.1.3.2. Creating your Terraform files locally

After you set up your offline Red Hat OpenShift Cluster Manager token, you need to create the Terraform files locally to build your cluster. You can create these files by using the following code templates.

Procedure

  1. Create the main.tf file by running the following command:

    $ cat<<-EOF>main.tf
    #
    # Copyright (c) 2023 Red Hat, Inc.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #   http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    #
    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = ">= 4.20.0"
        }
        rhcs = {
          version = ">= 1.6.3"
          source  = "terraform-redhat/rhcs"
        }
      }
    }
    
    # Export token using the RHCS_TOKEN environment variable
    provider "rhcs" {}
    
    provider "aws" {
      region = var.aws_region
      ignore_tags {
        key_prefixes = ["kubernetes.io/"]
      }
      default_tags {
        tags = var.default_aws_tags
      }
    }
    
    data "aws_availability_zones" "available" {}
    
    locals {
      # Extract availability zone names for the specified region, limit it to 3 if multi az or 1 if single
      region_azs = var.multi_az ? slice([for zone in data.aws_availability_zones.available.names : format("%s", zone)], 0, 3) : slice([for zone in data.aws_availability_zones.available.names : format("%s", zone)], 0, 1)
    }
    
    resource "random_string" "random_name" {
      length  = 6
      special = false
      upper   = false
    }
    
    locals {
      worker_node_replicas = var.multi_az ? 3 : 2
      # If cluster_name is not null, use that, otherwise generate a random cluster name
      cluster_name = coalesce(var.cluster_name, "rosa-\${random_string.random_name.result}")
    }
    
    # The network validator requires an additional 60 seconds to validate Terraform clusters.
    resource "time_sleep" "wait_60_seconds" {
      count = var.create_vpc ? 1 : 0
      depends_on = [module.vpc]
      create_duration = "60s"
    }
    
    module "rosa-hcp" {
      source                 = "terraform-redhat/rosa-hcp/rhcs"
      version                = "1.6.3"
      cluster_name           = local.cluster_name
      openshift_version      = var.openshift_version
      account_role_prefix    = local.cluster_name
      operator_role_prefix   = local.cluster_name
      replicas               = local.worker_node_replicas
      aws_availability_zones = local.region_azs
      create_oidc            = true
      private                = var.private_cluster
      aws_subnet_ids         = var.create_vpc ? var.private_cluster ? module.vpc[0].private_subnets : concat(module.vpc[0].public_subnets, module.vpc[0].private_subnets) : var.aws_subnet_ids
      create_account_roles   = true
      create_operator_roles  = true
    # Optional: Configure a cluster administrator user 1
    #
    # Option 1: Default cluster-admin user
    # Create an administrator user (cluster-admin) and automatically
    # generate a password by uncommenting the following parameter:
    #  create_admin_user = true
    # Generated administrator credentials are displayed in terminal output.
    #
    # Option 2: Specify administrator username and password
    # Create an administrator user and define your own password
    # by uncommenting and editing the values of the following parameters:
    #  admin_credentials_username = <username>
    #  admin_credentials_password = <password>
    
      depends_on = [time_sleep.wait_60_seconds]
    }
    EOF
    1 Optional: Create an administrator user during cluster creation by uncommenting the appropriate parameters and editing their values if required.
  2. Create the variables.tf file by running the following command:

    Note

    Copy and edit this file before running the command to build your cluster.

    $ cat<<-EOF>variables.tf
    #
    # Copyright (c) 2023 Red Hat, Inc.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #   http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    #
    variable "openshift_version" {
      type        = string
      default     = "4.14.20"
      description = "Desired version of OpenShift for the cluster, for example '4.14.20'. If version is greater than the currently running version, an upgrade will be scheduled."
    }
    
    variable "create_vpc" {
      type        = bool
      description = "If you would like to create a new VPC, set this value to 'true'. If you do not want to create a new VPC, set this value to 'false'."
    }
    
    # ROSA Cluster info
    variable "cluster_name" {
      default     = null
      type        = string
      description = "The name of the ROSA cluster to create"
    }
    
    variable "additional_tags" {
      default = {
        Terraform   = "true"
        Environment = "dev"
      }
      description = "Additional AWS resource tags"
      type        = map(string)
    }
    
    variable "multi_az" {
      type        = bool
      description = "Multi AZ Cluster for High Availability"
      default     = true
    }
    
    variable "worker_node_replicas" {
      default     = 3
      description = "Number of worker nodes to provision. Single zone clusters need at least 2 nodes, multizone clusters need at least 3 nodes"
      type        = number
    }
    
    variable "aws_subnet_ids" {
      type        = list(any)
      description = "A list of either the public or public + private subnet IDs to use for the cluster blocks to use for the cluster"
      default     = ["subnet-01234567890abcdef", "subnet-01234567890abcdef", "subnet-01234567890abcdef"]
    }
    
    variable "private_cluster" {
      type        = bool
      description = "If you want to create a private cluster, set this value to 'true'. If you want a publicly available cluster, set this value to 'false'."
    }
    
    #VPC Info
    variable "vpc_name" {
      type        = string
      description = "VPC Name"
      default     = "tf-qs-vpc"
    }
    
    variable "vpc_cidr_block" {
      type        = string
      description = "value of the CIDR block to use for the VPC"
      default     = "10.0.0.0/16"
    }
    
    variable "private_subnet_cidrs" {
      type        = list(any)
      description = "The CIDR blocks to use for the private subnets"
      default     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
    }
    
    variable "public_subnet_cidrs" {
      type        = list(any)
      description = "The CIDR blocks to use for the public subnets"
      default     = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
    }
    
    variable "single_nat_gateway" {
      type        = bool
      description = "Single NAT or per NAT for subnet"
      default     = false
    }
    
    #AWS Info
    variable "aws_region" {
      type    = string
      default = "us-east-2"
    }
    
    variable "default_aws_tags" {
      type        = map(string)
      description = "Default tags for AWS"
      default     = {}
    }
    EOF
  3. Create the vpc.tf file by running the following command:

    $ cat<<-EOF>vpc.tf
    #
    # Copyright (c) 2023 Red Hat, Inc.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #   http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    #
    module "vpc" {
      source  = "terraform-aws-modules/vpc/aws"
      version = "5.1.2"
    
      count = var.create_vpc ? 1 : 0
      name  = var.vpc_name
      cidr  = var.vpc_cidr_block
    
      azs             = local.region_azs
      private_subnets = var.multi_az ? var.private_subnet_cidrs : [var.private_subnet_cidrs[0]]
      public_subnets  = var.multi_az ? var.public_subnet_cidrs : [var.public_subnet_cidrs[0]]
    
      enable_nat_gateway   = true
      single_nat_gateway   = var.single_nat_gateway
      enable_dns_hostnames = true
      enable_dns_support   = true
    
      tags = var.additional_tags
    }
    EOF

    You are ready to initialize Terraform.

3.1.3.3. Creating your cluster with Terraform

After you create the Terraform files, you must initialize Terraform to install all of the required dependencies, and then apply the Terraform plan.

Important

Do not modify Terraform state files. For more information, see Considerations when using Terraform.

Procedure

  1. To set up Terraform to create your resources based on your Terraform files, run the following command:

    $ terraform init
  2. Optional: Verify that the Terraform configuration you copied is valid by running the following command:

    $ terraform validate

    Example output

    Success! The configuration is valid.

  3. Create your cluster with Terraform by running the following command:

    $ terraform apply

    The Terraform interface asks two questions to create your cluster, similar to the following:

    Example output

    var.create_vpc
      If you would like to create a new VPC, set this value to 'true'. If you do not want to create a new VPC, set this value to 'false'.
    
      Enter a value:
    
    var.private_cluster
      If you want to create a private cluster, set this value to 'true'. If you want a publicly available cluster, set this value to 'false'.
    
      Enter a value:

  4. Enter yes to proceed or no to cancel when the Terraform interface lists the resources to be created or changed and prompts for confirmation:

    Example output

    Plan: 63 to add, 0 to change, 0 to destroy.
    
    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.

    If you enter yes, your Terraform plan starts, creating your AWS account roles, Operator roles, and your Red Hat OpenShift Service on AWS cluster.
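    If you are scripting the run, you can answer the variable prompts and skip the confirmation by using Terraform's standard flags. A minimal sketch using the variables defined in the variables.tf file above:

      # -auto-approve skips the interactive confirmation; adjust the variable values to your needs
      $ terraform apply -auto-approve -var create_vpc=true -var private_cluster=false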

Verification

  1. Verify that your cluster was created by running the following command:

    $ rosa list clusters

    Example output showing a cluster’s ID, name, and status

    ID                                NAME          STATE  TOPOLOGY
    27c3snjsupa9obua74ba8se5kcj11269  rosa-tf-demo  ready  Hosted CP

  2. Verify that your account roles were created by running the following command:

    $ rosa list account-roles

    Example output

    I: Fetching account roles
    ROLE NAME                                   ROLE TYPE      ROLE ARN                                                           OPENSHIFT VERSION  AWS Managed
    ROSA-demo-Installer-Role                    Installer      arn:aws:iam::<ID>:role/ROSA-demo-Installer-Role                    4.14               No
    ROSA-demo-Support-Role                      Support        arn:aws:iam::<ID>:role/ROSA-demo-Support-Role                      4.14               No
    ROSA-demo-Worker-Role                       Worker         arn:aws:iam::<ID>:role/ROSA-demo-Worker-Role                       4.14               No

  3. Verify that your Operator roles were created by running the following command:

    $ rosa list operator-roles

    Example output showing Terraform-created Operator roles

    I: Fetching operator roles
    ROLE PREFIX    AMOUNT IN BUNDLE
    rosa-demo      8

3.1.3.4. Deleting your Red Hat OpenShift Service on AWS cluster with Terraform

Use the terraform destroy command to remove all of the resources that were created with the terraform apply command.

Note

Do not modify your Terraform .tf files before destroying your resources. Terraform matches these variables to the resources it deletes.

Procedure

  1. In the directory where you ran the terraform apply command to create your cluster, run the following command to delete the cluster:

    $ terraform destroy

    The Terraform interface prompts you for two variables. These should match the answers you provided when creating a cluster:

    var.create_vpc
      If you would like to create a new VPC, set this value to 'true.' If you do not want to create a new VPC, set this value to 'false.'
    
      Enter a value:
    
    var.private_cluster
      If you want to create a private cluster, set this value to 'true.' If you want a publicly available cluster, set this value to 'false.'
    
      Enter a value:
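    As with terraform apply, you can answer the prompts on the command line instead. A minimal sketch; the values must match the answers you provided when creating the cluster:

      # Variable values must match those used at creation; the confirmation prompt still appears
      $ terraform destroy -var create_vpc=true -var private_cluster=false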
  2. Enter yes to start the role and cluster deletion:

    Example output

    Plan: 0 to add, 0 to change, 63 to destroy.
    
    Do you really want to destroy all resources?
      Terraform will destroy all your managed infrastructure, as shown above.
      There is no undo. Only 'yes' will be accepted to confirm.
    
      Enter a value: yes

Verification

  1. Verify that your cluster was destroyed by running the following command:

    $ rosa list clusters

    Example output showing no cluster

    I: No clusters available

  2. Verify that the account roles were destroyed by running the following command:

    $ rosa list account-roles

    Example output showing no Terraform-created account roles

    I: Fetching account roles
    I: No account roles available

  3. Verify that the Operator roles were destroyed by running the following command:

    $ rosa list operator-roles

    Example output showing no Terraform-created Operator roles

    I: Fetching operator roles
    I: No operator roles available

Chapter 4. Creating a Red Hat OpenShift Service on AWS cluster using a custom AWS KMS key

Create a Red Hat OpenShift Service on AWS cluster using a custom AWS Key Management Service (KMS) key.

4.1. Red Hat OpenShift Service on AWS Prerequisites

To create a Red Hat OpenShift Service on AWS cluster, you must have the following items:

  • A configured virtual private cloud (VPC)
  • Account-wide roles
  • An OIDC configuration
  • Operator roles

You must have a Virtual Private Cloud (VPC) to create a Red Hat OpenShift Service on AWS cluster. Use one of the following methods to create a VPC:

  • Create a VPC using the ROSA command-line interface (CLI)
  • Create a VPC by using a Terraform template
  • Manually create the VPC resources in the AWS console
Note

The Terraform instructions are for testing and demonstration purposes. Your own installation requires some modifications to the VPC for your own use. You should also ensure that when you use this Terraform script, it is in the same region that you intend to install your cluster. These examples use us-east-2.

Creating an AWS VPC using the ROSA CLI

The rosa create network command is available in version 1.2.48 or later of the ROSA CLI. The command uses AWS CloudFormation to create a VPC and associated networking components necessary to install a Red Hat OpenShift Service on AWS cluster. CloudFormation is a native AWS infrastructure-as-code tool and is compatible with the AWS CLI.

If you do not specify a template, CloudFormation uses a default template that creates resources with the following parameters:

VPC parameter        Value
Availability zones   1
Region               us-east-1
VPC CIDR             10.0.0.0/16

You can create and customize CloudFormation templates to use with the rosa create network command. See the additional resources of this section for information on the default VPC template.

Prerequisites

  • You have configured your AWS account
  • You have configured your Red Hat accounts
  • You have installed and configured the latest version of the ROSA CLI (rosa)

Procedure

  1. Create an AWS VPC using the default CloudFormation template by running the following command:

    $ rosa create network
  2. Optional: Customize your VPC by specifying additional parameters.

    You can use the --param flag to specify changes to the default VPC template. The following example command specifies custom values for the Region, Name, AvailabilityZoneCount, and VpcCidr parameters.

    $ rosa create network --param Region=us-east-2 --param Name=quickstart-stack --param AvailabilityZoneCount=3 --param VpcCidr=10.0.0.0/16

    The command takes about 5 minutes to run and provides regular status updates from AWS as resources are created. If there is an issue with CloudFormation, a rollback is attempted. For all other errors, follow the error message instructions or contact AWS Support.
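    You can also check the stack status yourself while the command runs. A minimal sketch using the AWS CLI, assuming the default stack name shown in the verification output below; replace it with your own stack name:

      # Stack name is taken from the example output below; yours may differ
      $ aws cloudformation describe-stacks \
          --stack-name rosa-network-stack-5555 \
          --query "Stacks[0].StackStatus"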

Verification

  • When completed, you receive a summary of the created resources:

    INFO[0140] Resources created in stack:
    INFO[0140] Resource: AttachGateway, Type: AWS::EC2::VPCGatewayAttachment, ID: <gateway_id>
    INFO[0140] Resource: EC2VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: EcrApiVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: EcrDkrVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: ElasticIP1, Type: AWS::EC2::EIP, ID: <IP>
    INFO[0140] Resource: ElasticIP2, Type: AWS::EC2::EIP, ID: <IP>
    INFO[0140] Resource: InternetGateway, Type: AWS::EC2::InternetGateway, ID: igw-016e1a71b9812464e
    INFO[0140] Resource: KMSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: NATGateway1, Type: AWS::EC2::NatGateway, ID: <nat-gateway_id>
    INFO[0140] Resource: PrivateRoute, Type: AWS::EC2::Route, ID: <route_id>
    INFO[0140] Resource: PrivateRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
    INFO[0140] Resource: PrivateSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
    INFO[0140] Resource: PublicRoute, Type: AWS::EC2::Route, ID: <route_id>
    INFO[0140] Resource: PublicRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
    INFO[0140] Resource: PublicSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
    INFO[0140] Resource: S3VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: STSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: SecurityGroup, Type: AWS::EC2::SecurityGroup, ID: <security-group_id>
    INFO[0140] Resource: SubnetPrivate1, Type: AWS::EC2::Subnet, ID: <private_subnet_id-1> 1
    INFO[0140] Resource: SubnetPublic1, Type: AWS::EC2::Subnet, ID: <public_subnet_id-1> 2
    INFO[0140] Resource: VPC, Type: AWS::EC2::VPC, ID: <vpc_id>
    INFO[0140] Stack rosa-network-stack-5555 created 3
    1 2 These two subnet IDs are used to create your cluster when using the rosa create cluster command.
    3 The network stack name is used to delete the resource later.
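When you no longer need the network, you can delete the CloudFormation stack by name. A minimal sketch, assuming the stack name from the example output above:

    # Deletes the VPC and all resources created by the stack
    $ aws cloudformation delete-stack --stack-name rosa-network-stack-5555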
Creating a Virtual Private Cloud using Terraform

Terraform is a tool that allows you to create various resources using an established template. The following process uses the default options as required to create a Red Hat OpenShift Service on AWS cluster. For more information about using Terraform, see the additional resources.

Prerequisites

  • You have installed Terraform version 1.4.0 or newer on your machine.
  • You have installed Git on your machine.

Procedure

  1. Open a shell prompt and clone the Terraform VPC repository by running the following command:

    $ git clone https://github.com/openshift-cs/terraform-vpc-example
  2. Navigate to the created directory by running the following command:

    $ cd terraform-vpc-example
  3. Initialize Terraform by running the following command:

    $ terraform init

    A message confirming the initialization appears when this process completes.

  4. Build your VPC Terraform plan based on the existing Terraform template by running the plan command. You must include your AWS region; you can optionally specify a cluster name. A rosa.tfplan file is added to the hypershift-tf directory after the terraform plan completes. For more detailed options, see the Terraform VPC repository’s README file.

    $ terraform plan -out rosa.tfplan -var region=<region>
  5. Apply this plan file to build your VPC by running the following command:

    $ terraform apply rosa.tfplan
    1. Optional: You can capture the values of the Terraform-provisioned private, public, and machinepool subnet IDs as environment variables to use when creating your Red Hat OpenShift Service on AWS cluster by running the following commands:

      $ export SUBNET_IDS=$(terraform output -raw cluster-subnets-string)
    2. Verify that the variables were correctly set with the following command:

      $ echo $SUBNET_IDS

      Example output

      subnet-0a6a57e0f784171aa,subnet-078e84e5b10ecf5b0

Creating an AWS Virtual Private Cloud manually

If you choose to manually create your AWS Virtual Private Cloud (VPC) instead of using Terraform, go to the VPC page in the AWS console.

Your VPC must meet the requirements shown in the following table.

Table 4.1. Requirements for your VPC

  • VPC name: You need to have the specific VPC name and ID when creating your cluster.
  • CIDR range: Your VPC CIDR range should match your machine CIDR.
  • Availability zones: You need one availability zone for a single-zone cluster and three availability zones for a multi-zone cluster.
  • Public subnet: You must have one public subnet with a NAT gateway for public clusters. Private clusters do not need a public subnet.
  • DNS hostname and resolution: You must ensure that DNS hostnames and DNS resolution are enabled.
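If you prefer to script the manual steps, the following is a minimal sketch of creating a VPC that satisfies the DNS requirements by using the aws CLI. The CIDR block and region are example values, and you must still create the subnets, route tables, internet gateway, and NAT gateway described elsewhere in this guide:

    # Example values only; align the CIDR with your machine CIDR and region with your cluster region
    $ aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region us-east-2
    # modify-vpc-attribute accepts one attribute per call
    $ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-hostnames
    $ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-support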

4.2.1. Troubleshooting

If your cluster fails to install, troubleshoot these common issues:

  • Make sure your DHCP option set includes a domain name, and ensure that the domain name does not include any spaces or capital letters.
  • If your VPC uses a custom DNS resolver (the domain name servers field of your DHCP option set is not AmazonProvidedDNS), make sure it can properly resolve the private hosted zones configured in Route 53.

For more information about troubleshooting Red Hat OpenShift Service on AWS cluster installations, see Troubleshooting Red Hat OpenShift Service on AWS cluster installations.

4.2.1.1. Get support

If you need additional support, visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources.

Tagging your subnets

Before you can use your VPC to create a Red Hat OpenShift Service on AWS cluster, you must tag your VPC subnets. Automated service preflight checks verify that these resources are tagged correctly before you can use these resources for a cluster. The following table shows how your resources should be tagged:

Resource         Key                              Value
Public subnet    kubernetes.io/role/elb           1 (or no value)
Private subnet   kubernetes.io/role/internal-elb  1 (or no value)

Note

You must tag at least one private subnet and, if applicable, one public subnet.

Prerequisites

  • You have created a VPC.
  • You have installed the aws CLI.

Procedure

  1. Tag your resources in your terminal by running the following commands:

    1. For public subnets, run:

      $ aws ec2 create-tags --resources <public-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1
    2. For private subnets, run:

      $ aws ec2 create-tags --resources <private-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1
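    If you have several subnets of the same type, note that aws ec2 create-tags accepts multiple resource IDs in a single call. A minimal sketch with placeholder subnet IDs:

      # Placeholder subnet IDs; pass as many as you need to tag
      $ aws ec2 create-tags --region <aws_region> \
          --resources <private-subnet-id-1> <private-subnet-id-2> \
          --tags Key=kubernetes.io/role/internal-elb,Value=1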

Verification

  • Verify that the tag is correctly applied by running the following command:

    $ aws ec2 describe-tags --filters "Name=resource-id,Values=<subnet_id>"

    Example output

    TAGS    Name                    <subnet-id>        subnet  <prefix>-subnet-public1-us-east-1a
    TAGS    kubernetes.io/role/elb  <subnet-id>        subnet  1

4.2.2. Creating the account-wide STS roles and policies

Before you create your Red Hat OpenShift Service on AWS cluster, you must create the required account-wide roles and policies.

Note

Specific AWS-managed policies for Red Hat OpenShift Service on AWS must be attached to each role. Customer-managed policies must not be used with these required account roles. For more information regarding AWS-managed policies for Red Hat OpenShift Service on AWS clusters, see AWS managed policies for ROSA.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have available AWS service quotas.
  • You have enabled Red Hat OpenShift Service on AWS in the AWS Console.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.
  • You have logged in to your Red Hat account by using the ROSA CLI.

Procedure

  1. If they do not exist in your AWS account, create the required account-wide STS roles and attach the policies by running the following command:

    $ rosa create account-roles --hosted-cp
  2. Optional: Set your prefix as an environment variable by running the following command:

    $ export ACCOUNT_ROLES_PREFIX=<account_role_prefix>
    • View the value of the variable by running the following command:

      $ echo $ACCOUNT_ROLES_PREFIX

      Example output

      ManagedOpenShift
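
To confirm that the account-wide roles and policies exist, you can list them. This is a quick check; the exact columns in the output depend on your ROSA CLI version:

$ rosa list account-roles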

For more information regarding AWS managed IAM policies for Red Hat OpenShift Service on AWS, see AWS managed IAM policies for ROSA.

4.2.3. Creating an OpenID Connect configuration

When creating a Red Hat OpenShift Service on AWS cluster, you can create the OpenID Connect (OIDC) configuration before creating your cluster. This configuration is registered to be used with OpenShift Cluster Manager.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI, rosa, on your installation host.

Procedure

  1. To create your OIDC configuration alongside the AWS resources, run the following command:

    $ rosa create oidc-config --mode=auto --yes

    This command returns the following information.

    Example output

    ? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes
    I: Setting up managed OIDC configuration
    I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
    	rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b
    If you are going to create a Hosted Control Plane cluster please include '--hosted-cp'
    I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName'
    ? Create the OIDC provider? Yes
    I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'

    When creating your cluster, you must supply the OIDC config ID. The CLI output provides this value for --mode auto; otherwise, you must determine these values based on the aws CLI output for --mode manual.

  2. Optional: You can save the OIDC configuration ID as a variable to use later. Run the following command to save the variable:

    $ export OIDC_ID=<oidc_config_id> (1)

    (1) In the example output above, the OIDC configuration ID is 13cdr6b.
    • View the value of the variable by running the following command:

      $ echo $OIDC_ID

      Example output

      13cdr6b

Verification

  • You can list the possible OIDC configurations available for your clusters that are associated with your user organization. Run the following command:

    $ rosa list oidc-config

    Example output

    ID                                MANAGED  ISSUER URL                                                             SECRET ARN
    2330dbs0n8m3chkkr25gkkcd8pnj3lk2  true     https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2
    233hvnrjoqu14jltk6lhbhf2tj11f8un  false    https://oidc-r7u1.s3.us-east-1.amazonaws.com                           aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN

4.2.4. Creating Operator roles and policies

When you deploy a Red Hat OpenShift Service on AWS cluster, you must create the Operator IAM roles. The cluster Operators use the Operator roles and policies to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage and external access to a cluster.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.
  • You created the account-wide AWS roles.

Procedure

  1. To create your Operator roles, run the following command:

    $ rosa create operator-roles --hosted-cp --prefix=$OPERATOR_ROLES_PREFIX --oidc-config-id=$OIDC_ID --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role

    The following breakdown provides options for the Operator role creation.

    $ rosa create operator-roles --hosted-cp \
    	--prefix=$OPERATOR_ROLES_PREFIX \ (1)
    	--oidc-config-id=$OIDC_ID \ (2)
    	--installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/$ACCOUNT_ROLES_PREFIX-HCP-ROSA-Installer-Role (3)

    (1) You must supply a prefix when creating these Operator roles. Failing to do so produces an error. See the Additional resources of this section for information on the Operator prefix.
    (2) This value is the OIDC configuration ID that you created for your Red Hat OpenShift Service on AWS cluster.
    (3) This value is the installer role ARN that you created when you created the Red Hat OpenShift Service on AWS account roles.

    You must include the --hosted-cp parameter to create the correct roles for Red Hat OpenShift Service on AWS clusters. This command returns the following information.

    Example output

    ? Role creation mode: auto
    ? Operator roles prefix: <pre-filled_prefix> (1)
    ? OIDC Configuration ID: 23soa2bgvpek9kmes9s7os0a39i13qm4 | https://dvbwgdztaeq9o.cloudfront.net/23soa2bgvpek9kmes9s7os0a39i13qm4 (2)
    ? Create hosted control plane operator roles: Yes
    W: More than one Installer role found
    ? Installer role ARN: arn:aws:iam::4540112244:role/<prefix>-HCP-ROSA-Installer-Role
    ? Permissions boundary ARN (optional):
    I: Reusable OIDC Configuration detected. Validating trusted relationships to operator roles:
    I: Creating roles using 'arn:aws:iam::4540112244:user/<userName>'
    I: Created role '<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials'
    I: Created role '<prefix>-openshift-cloud-network-config-controller-cloud-credenti' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti'
    I: Created role '<prefix>-kube-system-kube-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager'
    I: Created role '<prefix>-kube-system-capa-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager'
    I: Created role '<prefix>-kube-system-control-plane-operator' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator'
    I: Created role '<prefix>-kube-system-kms-provider' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider'
    I: Created role '<prefix>-openshift-image-registry-installer-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials'
    I: Created role '<prefix>-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials'
    I: To create a cluster with these roles, run the following command:
    	rosa create cluster --sts --oidc-config-id 23soa2bgvpek9kmes9s7os0a39i13qm4 --operator-roles-prefix <prefix> --hosted-cp
    (1) This field is prepopulated with the prefix that you set in the initial creation command.
    (2) This field requires you to select an OIDC configuration that you created for your Red Hat OpenShift Service on AWS cluster.

    The Operator roles are now created and ready to use for creating your Red Hat OpenShift Service on AWS cluster.

Verification

  • You can list the Operator roles associated with your Red Hat OpenShift Service on AWS account. Run the following command:

    $ rosa list operator-roles

    Example output

    I: Fetching operator roles
    ROLE PREFIX  AMOUNT IN BUNDLE
    <prefix>      8
    ? Would you like to detail a specific prefix Yes (1)
    ? Operator Role Prefix: <prefix>
    ROLE NAME                                                         ROLE ARN                                                                                         VERSION  MANAGED
    <prefix>-kube-system-capa-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager                       4.13     No
    <prefix>-kube-system-control-plane-operator                        arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator                        4.13     No
    <prefix>-kube-system-kms-provider                                  arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider                                  4.13     No
    <prefix>-kube-system-kube-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager                       4.13     No
    <prefix>-openshift-cloud-network-config-controller-cloud-credenti  arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti  4.13     No
    <prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       4.13     No
    <prefix>-openshift-image-registry-installer-cloud-credentials      arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials      4.13     No
    <prefix>-openshift-ingress-operator-cloud-credentials              arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials              4.13     No
    (1) After the command runs, it displays all the prefixes associated with your AWS account and notes how many roles are associated with this prefix. If you need to see all of these roles and their details, enter "Yes" on the detail prompt to have these roles listed out with specifics.

4.2.5. Creating a customer-provided KMS key (optional)

You can create a Red Hat OpenShift Service on AWS cluster with a customer-provided KMS key that is used to encrypt either node root volumes, the etcd database, or both. A different KMS key ARN can be provided for each option.

Note

Red Hat OpenShift Service on AWS does not automatically configure the default storage class to encrypt persistent volumes with the customer-provided KMS key. This is something that can be configured in-cluster after installation.

Procedure

  1. Create a custom AWS customer-managed KMS key by running the following command:

    $ KMS_ARN=$(aws kms create-key --region $AWS_REGION --description 'Custom ROSA Encryption Key' --tags TagKey=red-hat,TagValue=true --query KeyMetadata.Arn --output text)

    This command saves the Amazon Resource Name (ARN) output of this custom key for further steps.

    Note

    Customers must provide the --tags TagKey=red-hat,TagValue=true argument that is required for a customer KMS key.

  2. Verify the KMS key has been created by running the following command:

    $ echo $KMS_ARN
  3. Set your AWS account ID to an environment variable.

    $ AWS_ACCOUNT_ID=<aws_account_id>
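
    If you are already logged in with the aws CLI, you can derive the account ID instead of typing it; the same lookup is suggested for disconnected installs later in this guide:

    $ AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)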
  4. Create a key policy file named rosa-key-policy.json, and add the ARN for the account-wide installer role and the Operator roles that you created in the preceding steps to the Statement.Principal.AWS section in the file. In the following example, the ARN for the default ManagedOpenShift-HCP-ROSA-Installer-Role role is added:

    {
      "Version": "2012-10-17",
      "Id": "key-rosa-policy-1",
      "Statement": [
          {
              "Sid": "Enable IAM User Permissions",
              "Effect": "Allow",
              "Principal": {
                  "AWS": "arn:aws:iam::${AWS_ACCOUNT_ID}:root"
              },
              "Action": "kms:*",
              "Resource": "*"
          },
          {
              "Sid": "Installer Permissions",
              "Effect": "Allow",
              "Principal": {
                  "AWS": "arn:aws:iam::${AWS_ACCOUNT_ID}:role/ManagedOpenShift-HCP-ROSA-Installer-Role"
              },
              "Action": [
                  "kms:CreateGrant",
                  "kms:DescribeKey",
                  "kms:GenerateDataKeyWithoutPlaintext"
              ],
              "Resource": "*"
          },
          {
              "Sid": "ROSA KubeControllerManager Permissions",
              "Effect": "Allow",
              "Principal": {
                  "AWS": "arn:aws:iam::${AWS_ACCOUNT_ID}:role/<operator_role_prefix>-kube-system-kube-controller-manager"
              },
              "Action": "kms:DescribeKey",
              "Resource": "*"
          },
          {
              "Sid": "ROSA KMS Provider Permissions",
              "Effect": "Allow",
              "Principal": {
                  "AWS": "arn:aws:iam::${AWS_ACCOUNT_ID}:role/<operator_role_prefix>-kube-system-kms-provider"
              },
              "Action": [
                  "kms:Encrypt",
                  "kms:Decrypt",
                  "kms:DescribeKey"
              ],
              "Resource": "*"
          },
          {
              "Sid": "ROSA NodeManager Permissions",
              "Effect": "Allow",
              "Principal": {
                  "AWS": "arn:aws:iam::${AWS_ACCOUNT_ID}:role/<operator_role_prefix>-kube-system-capa-controller-manager"
              },
              "Action": [
                  "kms:DescribeKey",
                  "kms:GenerateDataKeyWithoutPlaintext",
                  "kms:CreateGrant"
              ],
              "Resource": "*"
          }
      ]
    }
  5. Confirm the details of the policy file created by running the following command:

    $ cat rosa-key-policy.json
  6. Apply the newly generated key policy to the custom KMS key by running the following command:

    $ aws kms put-key-policy --key-id $KMS_ARN \
    --policy file://rosa-key-policy.json \
    --policy-name default
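
    Optionally, you can confirm that the policy is now active on the key. This is a sketch; aws kms get-key-policy prints the policy document that is attached:

    $ aws kms get-key-policy --key-id $KMS_ARN --policy-name default --query Policy --output text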
  7. Create the cluster by running the following command:

    Note

    If your cluster name is longer than 15 characters, it will contain an autogenerated domain prefix as a sub-domain for your provisioned cluster on *.openshiftapps.com.

    To customize the subdomain, use the --domain-prefix flag. The domain prefix cannot be longer than 15 characters, must be unique, and cannot be changed after cluster creation.

    $ rosa create cluster --cluster-name <cluster_name> \
    --subnet-ids <private_subnet_id>,<public_subnet_id> \
    --sts \
    --mode auto \
    --machine-cidr 10.0.0.0/16 \
    --compute-machine-type m5.xlarge \
    --hosted-cp \
    --region <aws_region> \
    --oidc-config-id $OIDC_ID \
    --kms-key-arn $KMS_ARN \ (1)
    --etcd-encryption-kms-arn $KMS_ARN \ (2)
    --operator-roles-prefix $OPERATOR_ROLES_PREFIX

    (1) This KMS key ARN is used to encrypt all worker node root volumes. It is not required if only etcd database encryption is needed.
    (2) This KMS key ARN is used to encrypt the etcd database. The etcd database is always encrypted by default with an AES cipher block, but can be encrypted instead with a KMS key. It is not required if only node root volume encryption is needed.

Verification

You can verify that your KMS key works by using OpenShift Cluster Manager.

  1. Navigate to OpenShift Cluster Manager and select Instances.
  2. Select your instance.
  3. Click the Storage tab.
  4. Copy the KMS key ID.
  5. Search and select Key Management Service.
  6. Enter your copied KMS key ID in the Filter field.
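
Alternatively, you can check the key from the CLI. This is a sketch, assuming the $KMS_ARN variable from the earlier steps is still set:

$ aws kms describe-key --key-id $KMS_ARN --query 'KeyMetadata.{Arn:Arn,Enabled:Enabled}'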

Chapter 5. Creating a Red Hat OpenShift Service on AWS cluster in a shared VPC

You can create Red Hat OpenShift Service on AWS clusters in shared, centrally-managed AWS virtual private clouds (VPCs).

Note

Installing a new Red Hat OpenShift Service on AWS cluster into a VPC that was automatically created by the installer for a different cluster is not supported.

Note
  • This process requires two separate AWS accounts that belong to the same AWS organization. One account functions as the VPC-owning AWS account (VPC Owner), while the other account creates the cluster (Cluster Creator).
  • Installing a cluster in a shared VPC is supported only for OpenShift 4.17.9 and later.
  • The hosted zones can be created in either the centrally-managed VPC account or in the workload account in which the cluster is deployed.

Note

Only certain cluster-to-VPC relationships are supported. Multiple Red Hat OpenShift Service on AWS clusters in a single VPC are not supported. For more information, see Multiple Red Hat OpenShift Service on AWS clusters in a single VPC.

Prerequisites for the VPC Owner

Prerequisites for the Cluster Creator

5.1. Step One - VPC Owner: Sharing your VPC and creating the IAM roles

You can share subnets within a VPC with another AWS account in your AWS organization.

Procedure

  1. Create or modify a VPC to your specifications in the VPC section of the AWS console. Make sure you have selected the correct region.
  2. Create the Route 53 role.

    Note

    You must create the Route 53 role in the same account where you plan to create the Amazon Route 53 hosted zones (which are created in Step 3). For example, if you want to create the hosted zones in the centrally-managed VPC account, you must create the Route 53 role in the VPC Owner account. If you want to create the hosted zones in the workload account, you must create the Route 53 role in the Cluster Creator account.

    1. Create a custom trust policy file that grants permission to assume roles:

      $ cat <<EOF > /tmp/route53-role.json
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "arn:aws:iam::<Account-ID>:root"  
      1
      
                  },
                  "Action": "sts:AssumeRole"
              }
          ]
      }
      EOF
      Note: The trust policy principals may be scoped down to the ingress Operator role and installer account role rather than root.
    2. Create the IAM role for the AWS managed policy ROSASharedVPCRoute53Policy.

      $ aws iam create-role --role-name <role_name> \ (1)
          --assume-role-policy-document file:///tmp/route53-role.json

      (1) Replace <role_name> with the name of the role you want to create.
    3. Attach the AWS managed policy ROSASharedVPCRoute53Policy to allow for necessary shared VPC permissions.

      $ aws iam attach-role-policy --role-name <role_name> \ (1)
          --policy-arn arn:aws:iam::aws:policy/ROSASharedVPCRoute53Policy

      (1) Replace <role_name> with the name of the role you created.
  3. Create the VPC endpoint role.

    1. Create a custom trust policy file that grants permission to assume roles:

      $ cat <<EOF > /tmp/shared-vpc-role.json
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "arn:aws:iam::<Account-ID>:root"  
      1
      
                  },
                  "Action": "sts:AssumeRole"
              }
          ]
      }
      EOF
      Note: The trust policy principals may be scoped down to the ingress Operator role and installer account role rather than root.
    2. Create the IAM role for the AWS managed policy ROSASharedVPCEndpointPolicy:

      $ aws iam create-role --role-name <role_name> \ (1)
          --assume-role-policy-document file:///tmp/shared-vpc-role.json

      (1) Replace <role_name> with the name of the role you want to create.
    3. Attach the AWS managed policy ROSASharedVPCEndpointPolicy to allow for necessary shared VPC permissions.

      $ aws iam attach-role-policy --role-name <role_name> \ (1)
          --policy-arn arn:aws:iam::aws:policy/ROSASharedVPCEndpointPolicy

      (1) Replace <role_name> with the name of the role you created.
  4. Provide the Route 53 role ARN and the VPC endpoint role ARN to the Cluster Creator to continue configuration.
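
Before handing off the ARNs, the VPC Owner can look them up with standard aws iam calls. This is a sketch; the role names are whatever you chose in the previous steps:

$ aws iam get-role --role-name <route53_role_name> --query 'Role.Arn' --output text
$ aws iam get-role --role-name <vpc_endpoint_role_name> --query 'Role.Arn' --output text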

5.2. Step Two - Cluster Creator: Reserving a DNS domain and creating Operator roles

After the VPC Owner creates a virtual private cloud (VPC), subnets, and an IAM role for sharing the VPC resources, reserve an openshiftapps.com DNS domain and create Operator roles to communicate back to the VPC Owner.

Note

For shared VPC clusters, you can choose to create the Operator roles after the cluster creation steps. The cluster will be in a waiting state until the Ingress Operator role ARN is added to the shared VPC role trusted relationships.

Prerequisites

  • You have the Route 53 role ARN for the IAM role from the VPC Owner.
  • You have the VPC endpoint role ARN for the IAM role from the VPC Owner.

Procedure

  1. Reserve an openshiftapps.com DNS domain with the following command:

    $ rosa create dns-domain --hosted-cp

    The command creates a reserved openshiftapps.com DNS domain.

    I: DNS domain '14eo.p3.openshiftapps.com' has been created.
    I: To view all DNS domains, run 'rosa list dns-domains'
  2. Create an OIDC configuration.

    For more information on the OIDC configuration process, see Creating an OpenID Connect configuration. The following command produces the OIDC configuration ID that you need:

    $ rosa create oidc-config

    You receive confirmation that the command created an OIDC configuration:

    I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
    	rosa create operator-roles --prefix <user-defined> --oidc-config-id 25tu67hq45rto1am3slpf5lq6jargg
  3. Create the account roles by entering the following command:

    $ rosa create account-roles \
        --route53-role-arn <Created_Route_53_Role_Arn> \ (1)
        --vpc-endpoint-role-arn <Created_VPC_Endpoint_Role_Arn> \ (2)
        --prefix <user_defined_account_role_prefix> \ (3)
        --hosted-cp

    (1) Provide the ARN for the Route 53 role that the VPC Owner created.
    (2) Provide the ARN for the VPC endpoint role that the VPC Owner created.
    (3) Provide a prefix for the account roles.
  4. Create the Operator roles by entering the following command:

    $ rosa create operator-roles --oidc-config-id <oidc-config-ID> \ (1)
        --installer-role-arn <Installer_Role> \ (2)
        --route53-role-arn <Created_Route_53_Role_Arn> \ (3)
        --vpc-endpoint-role-arn <Created_VPC_Endpoint_Role_Arn> \ (4)
        --prefix <operator-prefix> \ (5)
        --hosted-cp

    (1) Provide the OIDC configuration ID that you created in the previous step.
    (2) Provide your installer ARN that was created as part of the rosa create account-roles process.
    (3) Provide the ARN for the Route 53 role that the VPC Owner created.
    (4) Provide the ARN for the VPC endpoint role that the VPC Owner created.
    (5) Provide a prefix for the Operator roles.
    Note

    The Installer account role and the shared VPC roles must have a one-to-one relationship. If you want to create multiple shared VPC roles, you should create one set of account roles per shared VPC role.

  5. After you create the Operator roles, share your Ingress Operator Cloud Credentials role’s ARN, your Installer role’s ARN, and your Control plane Operator Cloud Credentials role’s ARN with the VPC Owner to continue configuration.

    The shared information resembles these examples:

    • my-rosa-cluster.14eo.p1.openshiftapps.com
    • arn:aws:iam::111122223333:role/ManagedOpenShift-Installer-Role
    • arn:aws:iam::111122223333:role/my-rosa-cluster-openshift-ingress-operator-cloud-credentials
    • arn:aws:iam::111122223333:role/my-rosa-cluster-control-plane-operator

5.3. Step Three - VPC Owner: Creating hosted zones and updating trust policies

After the Cluster Creator provides the DNS domain and the IAM roles, create two hosted zones and update the trust policy on the IAM roles that were created for sharing the VPC.

Note

The hosted zones can be created in either the centrally-managed VPC account or in the workload account.


Prerequisites

  • You have the full domain name from the Cluster Creator.
  • You have the Ingress Operator Cloud Credentials role’s ARN from the Cluster Creator.
  • You have the Installer role’s ARN from the Cluster Creator.
  • You have the Control plane Operator Cloud Credentials role’s ARN from the Cluster Creator.
Note

If your cluster name is longer than 15 characters, it will contain an autogenerated domain prefix as a sub-domain for your provisioned cluster on *.openshiftapps.com.

To customize the subdomain, use the --domain-prefix flag. The domain prefix cannot be longer than 15 characters, must be unique, and cannot be changed after cluster creation.

Procedure

  1. In the Resource Access Manager of the AWS console, create a resource share that shares the previously created VPC’s public and private subnets with the Cluster Creator’s AWS account ID.
  2. Update the Route 53 role and add the Installer and Ingress Operator Cloud Credentials roles to the principal section of the trust policy.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "Statement1",
          "Effect": "Allow",
          "Principal": {
            "AWS": [
              "arn:aws:iam::<Cluster-Creator's-AWS-Account-ID>:role/<prefix>-ingress-operator-cloud-credentials",
              "arn:aws:iam::<Cluster-Creator's-AWS-Account-ID>:role/<prefix>-hcp-Installer-Role",
              "arn:aws:iam::<Cluster-Creator's-AWS-Account-ID>:role/<prefix>-control-plane-operator-cloud-credentials"
            ]
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  3. Update the VPC endpoint role and add the Installer and Ingress Operator Cloud Credentials roles to the principal section of the trust policy.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "Statement1",
          "Effect": "Allow",
          "Principal": {
            "AWS": [
              "arn:aws:iam::<Cluster-Creator's-AWS-Account-ID>:role/<prefix>-hcp-Installer-Role",
              "arn:aws:iam::<Cluster-Creator's-AWS-Account-ID>:role/<prefix>-control-plane-operator-cloud-credentials"
            ]
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
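
If you prefer to update the trust policies from the CLI instead of the console, a hedged sketch follows; it assumes you saved each updated trust policy shown above to a local file (the file names here are placeholders):

# update the Route 53 role's trust policy
$ aws iam update-assume-role-policy --role-name <route53_role_name> \
    --policy-document file://route53-trust-policy.json
# update the VPC endpoint role's trust policy
$ aws iam update-assume-role-policy --role-name <vpc_endpoint_role_name> \
    --policy-document file://vpce-trust-policy.json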
  4. Create a private hosted zone in the Route 53 section of the AWS console. In the hosted zone configuration, the domain name is rosa.<cluster-name>.<base-domain>. The private hosted zone must be associated with the network owner’s VPC.
  5. Create a local hosted zone in the Route 53 section of the AWS console. In the hosted zone configuration, the domain name is <cluster-name>.hypershift.local. The local hosted zone must be associated with the network owner’s VPC.
  6. After the hosted zones are created and associated with the network owner’s VPC, provide the following to the Cluster Creator to continue configuration:

    • Hosted zone IDs
    • AWS region
    • Subnet IDs

5.4. Step Four - Cluster Creator: Creating your cluster in a shared VPC

To create a cluster in a shared VPC, complete the following steps.

Note

Installing a cluster in a shared VPC is supported only for OpenShift 4.17.9 and later.

Prerequisites

  • You have the hosted zone IDs from the VPC Owner.
  • You have the AWS region from the VPC Owner.
  • You have the subnet IDs from the VPC Owner.
  • You have the Route 53 role ARN from the VPC Owner.
  • You have the VPC endpoint role ARN from the VPC Owner.

Procedure

  • In a terminal, enter the following command to create the cluster in the shared VPC:

    $ rosa create cluster --cluster-name <cluster_name> --sts \
        --operator-roles-prefix <prefix> \
        --oidc-config-id <oidc_config_id> \
        --region us-east-1 \
        --subnet-ids <subnet_ids> \
        --hcp-internal-communication-hosted-zone-id <local_hosted_zone_ID> \
        --ingress-private-hosted-zone-id <private_hosted_zone_ID> \
        --route53-role-arn <route_53_role_arn> \
        --vpc-endpoint-role-arn <vpc_endpoint_role_arn> \
        --base-domain <dns-domain> \
        --additional-allowed-principals <route53-role-arn>,<vpc-endpoint-role-arn> \
        --hosted-cp

Chapter 6. Creating a private cluster on Red Hat OpenShift Service on AWS

For Red Hat OpenShift Service on AWS workloads that do not require public internet access, you can create a private cluster.

6.1. Creating a private cluster

You can create a private cluster with multiple availability zones (Multi-AZ) on Red Hat OpenShift Service on AWS using the ROSA command-line interface (CLI), rosa.

Prerequisites

  • You have available AWS service quotas.
  • You have enabled Red Hat OpenShift Service on AWS in the AWS Console.
  • You have installed and configured the latest version of the ROSA CLI on your installation host.

Procedure

Creating a cluster with hosted control planes can take around 10 minutes.

  1. Create a VPC with at least one private subnet. Ensure that your machine CIDR range matches your VPC CIDR range. For more information, see Requirements for using your own VPC and VPC Validation.

    Important

    If you use a firewall, you must configure it so that ROSA can access the sites that it requires to function.

    For more information, see the "AWS PrivateLink firewall prerequisites" section.

  2. Create the account-wide IAM roles by running the following command:

    $ rosa create account-roles --hosted-cp
  3. Create the OIDC configuration by running the following command:

    $ rosa create oidc-config --mode=auto --yes

    Save the OIDC configuration ID because you need it to create the Operator roles.

    Example output

    I: Setting up managed OIDC configuration
    I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
    	rosa create operator-roles --prefix <user-defined> --oidc-config-id 28s4avcdt2l318r1jbk3ifmimkurk384
    If you are going to create a Hosted Control Plane cluster please include '--hosted-cp'
    I: Creating OIDC provider using 'arn:aws:iam::46545644412:user/user'
    I: Created OIDC provider with ARN 'arn:aws:iam::46545644412:oidc-provider/oidc.op1.openshiftapps.com/28s4avcdt2l318r1jbk3ifmimkurk384'

  4. Create the Operator roles by running the following command:

    $ rosa create operator-roles --hosted-cp --prefix <operator_roles_prefix> --oidc-config-id <oidc_config_id> --installer-role-arn arn:aws:iam::<aws_account_id>:role/<account_roles_prefix>-HCP-ROSA-Installer-Role
  5. Create a private Red Hat OpenShift Service on AWS cluster by running the following command:

    $ rosa create cluster --private --cluster-name=<cluster-name> --sts --mode=auto --hosted-cp --operator-roles-prefix <operator_role_prefix> --oidc-config-id <oidc_config_id> [--machine-cidr=<VPC CIDR>/16] --subnet-ids=<private-subnet-id1>[,<private-subnet-id2>,<private-subnet-id3>]
  6. Enter the following command to check the status of your cluster. During cluster creation, the State field from the output will transition from pending to installing, and finally, to ready.

    $ rosa describe cluster --cluster=<cluster_name>
    Note

    If installation fails or the State field does not change to ready after 10 minutes, see the "Troubleshooting Red Hat OpenShift Service on AWS installations" documentation in the Additional resources section.

  7. Enter the following command to follow the OpenShift installer logs to track the progress of your cluster:

    $ rosa logs install --cluster=<cluster_name> --watch

6.2. Adding additional AWS security groups to the AWS PrivateLink endpoint

With Red Hat OpenShift Service on AWS clusters, the AWS PrivateLink endpoint exposed in the host’s Virtual Private Cloud (VPC) has a security group that limits access to requests that originate from within the cluster’s Machine CIDR range. You must create and attach another security group to the PrivateLink endpoint to grant API access to entities outside of the VPC through VPC peering, transit gateways, or other network connectivity.

Important

Adding additional AWS security groups to the AWS PrivateLink endpoint is only supported on Red Hat OpenShift Service on AWS version 4.17.2 and later.

Prerequisites

  • Your corporate network or other VPC has connectivity.
  • You have permission to create and attach security groups within the VPC.

Procedure

  1. Set your cluster name as an environment variable by running the following command:

    $ export CLUSTER_NAME=<cluster_name>

    Verify that the variable exists by running the following command:

    $ echo $CLUSTER_NAME

    Example output

    hcp-private

  2. Find the VPC endpoint (VPCE) ID and VPC ID by running the following command:

    $ read -r VPCE_ID VPC_ID <<< $(aws ec2 describe-vpc-endpoints --filters "Name=tag:api.openshift.com/id,Values=$(rosa describe cluster -c ${CLUSTER_NAME} -o yaml | grep '^id: ' | cut -d' ' -f2)" --query 'VpcEndpoints[].[VpcEndpointId,VpcId]' --output text)
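
    As a quick sanity check (not part of the original flow), you can confirm that both values were captured:

    $ echo "VPC endpoint: ${VPCE_ID}, VPC: ${VPC_ID}"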
    Warning

    Modifying or removing the default AWS PrivateLink endpoint security group is not supported and might result in unexpected behavior.

  3. Create an additional security group by running the following command:

    $ export SG_ID=$(aws ec2 create-security-group --description "Granting API access to ${CLUSTER_NAME} from outside of VPC" --group-name "${CLUSTER_NAME}-api-sg" --vpc-id $VPC_ID --output text)
  4. Add an inbound (ingress) rule to the security group by running the following command:

    $ aws ec2 authorize-security-group-ingress --group-id $SG_ID --ip-permissions FromPort=443,ToPort=443,IpProtocol=tcp,IpRanges=[{CidrIp=<cidr-to-allow>}] (1)

    (1) Specify the CIDR block you want to allow access from.
  5. Add the new security group to the VPCE by running the following command:

    $ aws ec2 modify-vpc-endpoint --vpc-endpoint-id $VPCE_ID --add-security-group-ids $SG_ID

You can now access the API of your Red Hat OpenShift Service on AWS private cluster from the specified CIDR block.

6.3. Additional principals on your Red Hat OpenShift Service on AWS cluster

You can allow AWS Identity and Access Management (IAM) roles as additional principals to connect to your cluster’s private API server endpoint.

You can access your Red Hat OpenShift Service on AWS cluster’s API Server endpoint from either the public internet or the interface endpoint that was created within the VPC private subnets. By default, you can privately access your Red Hat OpenShift Service on AWS API Server by using the <operator_roles_prefix>-kube-system-kube-controller-manager Operator role. To access the Red Hat OpenShift Service on AWS API server from another account directly, without using the primary account where the cluster is installed, you must include cross-account IAM roles as additional principals. This feature allows you to simplify your network architecture and reduce data transfer costs by avoiding peering or attaching cross-account VPCs to the cluster’s VPC.

For example, the cluster-creating account is designated as Account A. This account designates that another account, Account B, should have access to the API server.

Note

After you have configured additional allowed principals, you must create the interface VPC endpoint in the VPC from which you want to access the cross-account Red Hat OpenShift Service on AWS API server. Then, create a private hosted zone in Route 53 to route calls made to the cross-account Red Hat OpenShift Service on AWS API server through the created VPC endpoint.

Use the --additional-allowed-principals argument to permit access through other roles.

Procedure

  1. Add the --additional-allowed-principals argument to the rosa create cluster command, similar to the following:

    $ rosa create cluster [...] --additional-allowed-principals <arn_string>

    You can use arn:aws:iam::account_id:role/role_name to approve a specific role.

  2. When the cluster creation command runs, you receive a summary of your cluster with the --additional-allowed-principals specified:

    Example output

    Name:                       mycluster
    Domain Prefix:              mycluster
    Display Name:               mycluster
    ID:                         <cluster-id>
    External ID:                <cluster-id>
    Control Plane:              ROSA Service Hosted
    OpenShift Version:          4.15.17
    Channel Group:              stable
    DNS:                        Not ready
    AWS Account:                <aws_id>
    AWS Billing Account:        <aws_id>
    API URL:
    Console URL:
    Region:                     us-east-2
    Availability:
     - Control Plane:           MultiAZ
     - Data Plane:              SingleAZ
    
    Nodes:
     - Compute (desired):       2
     - Compute (current):       0
    Network:
     - Type:                    OVNKubernetes
     - Service CIDR:            172.30.0.0/16
     - Machine CIDR:            10.0.0.0/16
     - Pod CIDR:                10.128.0.0/14
     - Host Prefix:             /23
     - Subnets:                 subnet-453e99d40, subnet-666847ce827
    EC2 Metadata Http Tokens:   optional
    Role (STS) ARN:             arn:aws:iam::<aws_id>:role/mycluster-HCP-ROSA-Installer-Role
    Support Role ARN:           arn:aws:iam::<aws_id>:role/mycluster-HCP-ROSA-Support-Role
    Instance IAM Roles:
     - Worker:                  arn:aws:iam::<aws_id>:role/mycluster-HCP-ROSA-Worker-Role
    Operator IAM Roles:
     - arn:aws:iam::<aws_id>:role/mycluster-kube-system-control-plane-operator
     - arn:aws:iam::<aws_id>:role/mycluster-openshift-cloud-network-config-controller-cloud-creden
     - arn:aws:iam::<aws_id>:role/mycluster-openshift-image-registry-installer-cloud-credentials
     - arn:aws:iam::<aws_id>:role/mycluster-openshift-ingress-operator-cloud-credentials
     - arn:aws:iam::<aws_id>:role/mycluster-openshift-cluster-csi-drivers-ebs-cloud-credentials
     - arn:aws:iam::<aws_id>:role/mycluster-kube-system-kms-provider
     - arn:aws:iam::<aws_id>:role/mycluster-kube-system-kube-controller-manager
     - arn:aws:iam::<aws_id>:role/mycluster-kube-system-capa-controller-manager
    Managed Policies:           Yes
    State:                      waiting (Waiting for user action)
    Private:                    No
    Delete Protection:          Disabled
    Created:                    Jun 25 2024 13:36:37 UTC
    User Workload Monitoring:   Enabled
    Details Page:               https://console.redhat.com/openshift/details/s/Bvbok4O79q1Vg8
    OIDC Endpoint URL:          https://oidc.op1.openshiftapps.com/vhufi5lap6vbl3jlq20e (Managed)
    Audit Log Forwarding:       Disabled
    External Authentication:    Disabled
    Additional Principals:      arn:aws:iam::<aws_id>:role/additional-user-role

You can add additional principals to your cluster by using the command-line interface (CLI).

Procedure

  • Run the following command to edit your cluster and add an additional principal who can access this cluster’s endpoint:

    $ rosa edit cluster -c <cluster_name> --additional-allowed-principals <arn_string>

    You can use arn:aws:iam::account_id:role/role_name to approve a specific role.
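
    To confirm that the change took effect, you can re-inspect the cluster summary; the Additional Principals line appears as in the example output above:

    $ rosa describe cluster -c <cluster_name> | grep "Additional Principals"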

6.4. Next steps

Configuring identity providers

Chapter 7. Creating a Red Hat OpenShift Service on AWS cluster with egress zero

Creating Red Hat OpenShift Service on AWS with egress zero provides a way to enhance your cluster’s stability and security by allowing your cluster to use the image registry in the local region if the cluster cannot access the internet. Your cluster first tries to pull the images from Quay, and when they cannot be reached, it instead pulls the images from the image registry in the local region.

All public and private clusters with egress zero get their Red Hat container images from an Amazon Elastic Container Registry (ECR) located in the local region of the cluster instead of gathering these images from various endpoints and registries on the internet. ECR provides storage for OpenShift release images as well as Red Hat Operators. All requests for ECR are kept within your AWS network by serving them over a VPC endpoint within your cluster.

Red Hat OpenShift Service on AWS clusters with egress zero use AWS ECR to provision your clusters without the need for public internet. Because necessary cluster lifecycle processes occur over AWS private networking, AWS ECR serves as a critical service for core cluster platform images. For more information on AWS ECR, see Amazon Elastic Container Registry.

You can create a fully operational cluster that does not require a public egress by configuring a virtual private cloud (VPC) and using the --properties zero_egress:true flag when creating your cluster.
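
For illustration, the create command might resemble the following sketch. Only the --properties zero_egress:true flag comes from the text above; the remaining flags are assumptions based on the environment variables suggested later in this chapter:

$ rosa create cluster --cluster-name $CLUSTER_NAME --sts --mode auto \
    --hosted-cp \
    --subnet-ids $SUBNET_IDS \
    --region $REGION \
    --properties zero_egress:true  # the flag named above; the other flags are illustrative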

See Upgrading Red Hat OpenShift Service on AWS clusters to upgrade clusters using egress zero.

Note

Clusters created in restricted network environments may be unable to use certain Red Hat OpenShift Service on AWS features including Red Hat Insights and Telemetry. These clusters may also experience potential failures for workloads that require public access to registries such as quay.io. When using clusters installed with egress zero, you can also install Red Hat-owned Operators from OperatorHub. For a complete list of Red Hat-owned Operators, see the Red Hat Ecosystem Catalog. Only the default Operator channel is mirrored for any Operator that is installed with egress zero.

Glossary of network environment terms

Although it is used throughout the Red Hat OpenShift Service on AWS documentation, disconnected environment is a broad term that can refer to environments with various levels of internet connectivity. Other terms are sometimes used to refer to a specific level of internet connectivity, and these environments might require additional unique configurations. These network types differ from a "standard network," which has full access to the internet.

The following table describes the different terms used to refer to environments without a full internet connection:

Table 7.1. Disconnected environment terms

Air-gapped network

An environment or network that is completely isolated from an external network.

This isolation depends on a physical separation, or an "air gap", between machines on the internal network and any part of an external network. Air-gapped environments are often used in industries with strict security or regulatory requirements.

Disconnected environment

An environment or network that has some level of isolation from an external network.

This isolation could be enabled by physical or logical separation between machines on the internal network and an external network. Regardless of the level of isolation from the external network, a cluster in a disconnected environment does not have access to public services hosted by Red Hat and requires additional setup to maintain full cluster functionality.

Restricted network

An environment or network with limited connection to an external network.

A physical connection might exist between machines on the internal network and an external network, but network traffic is limited by additional configurations, such as firewalls and proxies.

Prerequisites

  • You have an AWS account with sufficient permissions to create VPCs, subnets, and other required infrastructure.
  • You have installed the Terraform v1.4.0+ CLI.
  • You have installed the ROSA v1.2.45+ CLI.
  • You have installed and configured the AWS CLI with the necessary credentials.
  • You have installed the git CLI.
  • You have enabled the necessary ROSA CLI firewall rules and Red Hat Hybrid Cloud Console firewall rules.
Important
  • You can use egress zero on all supported versions of Red Hat OpenShift Service on AWS that use the hosted control plane architecture; however, Red Hat suggests using the latest available z-stream release for each OpenShift Container Platform version.
  • While you may install and upgrade your clusters as you would a regular cluster, due to an upstream issue with how the internal image registry functions in disconnected environments, your cluster that uses egress zero will not be able to fully use all platform components, such as the image registry. You can restore these features by using the latest ROSA version when upgrading or installing your cluster.

7.1. Setting environment variables

Set the following environment variables to streamline resource creation.

Procedure

  1. Set your environment variable by running the following command:

    $ export <variable_name>=<variable_value>
  2. You can confirm that your variable has been set by running the following command:

    $ echo <variable_name>
    Table 7.2. Suggested variables for disconnected Red Hat OpenShift Service on AWS clusters
    Variable name / Variable value / Notes

    AWS_ACCOUNT_ID

    $(aws sts get-caller-identity --query Account --output text)

    You must be logged in to your AWS account with rosa login.

    CLUSTER_NAME

    The name you want for your cluster.

    Your cluster name cannot exceed 26 characters.

    OIDC_ID

    The 32-digit ID for your OpenID Connect (OIDC) configuration.

    You generate this ID by running rosa create oidc-config.

    OPERATOR_ROLES_PREFIX

    The Operator role prefix.

    If you want to make your AWS account roles use the same prefix as your Operator roles, you can run ACCOUNT_ROLES_PREFIX=$OPERATOR_ROLES_PREFIX after setting your Operator role prefix variable.

    PRIVATE_SUBNET

    The ID of your private subnets.

    You must enclose this value in quotation marks (") and separate the subnet IDs with commas.

    REGION

    Your AWS region.

    -

    SUBNET_IDS

    The IDs of all your subnets.

    You must enclose this value in quotation marks (") and separate the subnet IDs with commas.
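
For example, setting the variables from the table might look like the following; the cluster name and subnet IDs are placeholders:

$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export CLUSTER_NAME=<cluster_name>
$ export REGION=<aws_region>
$ export PRIVATE_SUBNET="<private_subnet_id1>,<private_subnet_id2>"
$ export SUBNET_IDS="<private_subnet_id1>,<private_subnet_id2>,<public_subnet_id1>"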

7.2. Creating a Virtual Private Cloud for your cluster

You must have a Virtual Private Cloud (VPC) to create a Red Hat OpenShift Service on AWS cluster. To pull images from the local ECR mirror over your VPC endpoint, you must configure an AWS PrivateLink service connection and modify the default security groups with specific tags. Use one of the following methods to create a VPC:

  • Create a VPC using the ROSA command-line interface (CLI)
  • Create a VPC by using a Terraform template
  • Create a VPC using the AWS CLI
  • Manually create the VPC resources in the AWS console

7.2.1. Creating an AWS VPC using the ROSA CLI

The rosa create network command is available in v1.2.48 or later of the ROSA CLI. The command uses AWS CloudFormation to create a VPC and associated networking components necessary to install a Red Hat OpenShift Service on AWS cluster. CloudFormation is a native AWS infrastructure-as-code tool and is compatible with the AWS CLI.

If you do not specify a template, CloudFormation uses a default template that creates resources with the following parameters:

VPC parameter       Value
Availability zones  1
Region              us-east-1
VPC CIDR            10.0.0.0/16

You can create and customize CloudFormation templates to use with the rosa create network command. See the additional resources of this section for information on the default VPC template.

Prerequisites

  • You have configured your AWS account.
  • You have configured your Red Hat accounts.
  • You have installed the ROSA CLI and configured it to the latest version.

Procedure

  • Create the network by running the following command. If you do not pass a template, the default template described above is used:

    $ rosa create network

Verification

  • When completed, you receive a summary of the created resources:

    INFO[0140] Resources created in stack:
    INFO[0140] Resource: AttachGateway, Type: AWS::EC2::VPCGatewayAttachment, ID: <gateway_id>
    INFO[0140] Resource: EC2VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: EcrApiVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: EcrDkrVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: ElasticIP1, Type: AWS::EC2::EIP, ID: <IP>
    INFO[0140] Resource: ElasticIP2, Type: AWS::EC2::EIP, ID: <IP>
    INFO[0140] Resource: InternetGateway, Type: AWS::EC2::InternetGateway, ID: igw-016e1a71b9812464e
    INFO[0140] Resource: KMSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: NATGateway1, Type: AWS::EC2::NatGateway, ID: <nat-gateway_id>
    INFO[0140] Resource: PrivateRoute, Type: AWS::EC2::Route, ID: <route_id>
    INFO[0140] Resource: PrivateRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
    INFO[0140] Resource: PrivateSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
    INFO[0140] Resource: PublicRoute, Type: AWS::EC2::Route, ID: <route_id>
    INFO[0140] Resource: PublicRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
    INFO[0140] Resource: PublicSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
    INFO[0140] Resource: S3VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: STSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: SecurityGroup, Type: AWS::EC2::SecurityGroup, ID: <security-group_id>
    INFO[0140] Resource: SubnetPrivate1, Type: AWS::EC2::Subnet, ID: <private_subnet_id-1> (1)
    INFO[0140] Resource: SubnetPublic1, Type: AWS::EC2::Subnet, ID: <public_subnet_id-1> (2)
    INFO[0140] Resource: VPC, Type: AWS::EC2::VPC, ID: <vpc_id>
    INFO[0140] Stack rosa-network-stack-5555 created (3)

    (1) (2) These two subnet IDs are used to create your cluster when using the rosa create cluster command.
    (3) The network stack name is used to delete the resource later.
Tagging your subnets

Before you can use your VPC to create a Red Hat OpenShift Service on AWS cluster, you must tag your VPC subnets. Automated service preflight checks verify that these resources are tagged correctly. The following table shows how to tag your resources:

Resource        Key                              Value
Public subnet   kubernetes.io/role/elb           1 (or no value)
Private subnet  kubernetes.io/role/internal-elb  1 (or no value)

Note

You must tag at least one private subnet and, if applicable, one public subnet.

  1. Tag your resources in your terminal:

    1. For public subnets, run the following command:

      $ aws ec2 create-tags --resources <public_subnet_id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1
    2. For private subnets, run the following command:

      $ aws ec2 create-tags --resources <private_subnet_id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1

Verification

  • Verify that the tag is correct by running the following command:

    $ aws ec2 describe-tags --filters "Name=resource-id,Values=<subnet_id>"

    Example output

    TAGS    Name                    <subnet_id>        subnet  <prefix>-subnet-public1-us-east-1a
    TAGS    kubernetes.io/role/elb  <subnet_id>        subnet  1

7.2.2. Creating a Virtual Private Cloud using Terraform

Terraform is a tool that allows you to create various resources using an established template. The following process uses the default options as required to create a Red Hat OpenShift Service on AWS cluster. For more information about using Terraform, see the additional resources.

Note

The Terraform instructions are for testing and demonstration purposes. Your own installation requires some modifications to the VPC for your own use. You should also ensure that when you use this Terraform script, it is in the same region that you intend to install your cluster. These examples use us-east-2.

Prerequisites

  • You have installed Terraform version 1.4.0 or newer on your machine.
  • You have installed Git on your machine.

Procedure

  1. Open a shell prompt and clone the Terraform VPC repository by running the following command:

    $ git clone https://github.com/openshift-cs/terraform-vpc-example
  2. Navigate to the created directory by running the following command:

    $ cd terraform-vpc-example/zero-egress
  3. Initialize Terraform by running the following command:

    $ terraform init

    A message confirming the initialization appears when this process completes.

  4. To build your VPC Terraform plan based on the existing Terraform template, run the plan command. You must include your AWS region, availability zones, CIDR blocks, and private subnets. You can choose to specify a cluster name. A rosa-zero-egress.tfplan file is added to the hypershift-tf directory after the terraform plan completes. For more detailed options, see the Terraform VPC repository’s README file.

    $ terraform plan -out rosa-zero-egress.tfplan -var region=<aws_region> \ (1)
          -var 'availability_zones=["aws_region_1a","aws_region_1b","aws_region_1c"]' \ (2)
          -var vpc_cidr_block=10.0.0.0/16 \ (3)
          -var 'private_subnets=["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]' (4)

    (1) Enter your AWS region.
    (2) Enter the availability zones for the VPC. For example, for a VPC that uses ap-southeast-1, you would use the following as availability zones: ["ap-southeast-1a", "ap-southeast-1b", "ap-southeast-1c"].
    (3) Enter the CIDR block for your VPC.
    (4) Enter each of the subnets that are created for the VPC.
  5. Apply this plan file to build your VPC by running the following command:

    $ terraform apply rosa-zero-egress.tfplan
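
    After the apply completes, you can inspect the values the template exports, such as the created subnet IDs. This is a sketch; the exact output names depend on the template:

    $ terraform output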
Tagging your subnets

Before you can use your VPC to create a Red Hat OpenShift Service on AWS cluster, you must tag your VPC subnets. Automated service preflight checks verify that these resources are tagged correctly. The following table shows how to tag your resources:

Resource        Key                              Value
Public subnet   kubernetes.io/role/elb           1 (or no value)
Private subnet  kubernetes.io/role/internal-elb  1 (or no value)

Note

You must tag at least one private subnet and, if applicable, one public subnet.

  1. Tag your resources in your terminal:

    1. For public subnets, run the following command:

      $ aws ec2 create-tags --resources <public_subnet_id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1
    2. For private subnets, run the following command:

      $ aws ec2 create-tags --resources <private_subnet_id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1
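
If you created several subnets, you can tag them in one pass. The following is a minimal bash sketch, assuming a hypothetical PRIVATE_SUBNET_IDS variable that holds a space-separated list of your private subnet IDs:

      $ export PRIVATE_SUBNET_IDS="<private_subnet_id_1> <private_subnet_id_2>"
      $ for subnet in $PRIVATE_SUBNET_IDS; do
          # Apply the internal-elb role tag that the preflight checks require
          aws ec2 create-tags --resources "$subnet" --region <aws_region> \
            --tags Key=kubernetes.io/role/internal-elb,Value=1
        done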

Verification

  • Verify that the tag is correct by running the following command:

    $ aws ec2 describe-tags --filters "Name=resource-id,Values=<subnet_id>"
    Copy to Clipboard Toggle word wrap

    Example output

    TAGS    Name                    <subnet_id>        subnet  <prefix>-subnet-public1-us-east-1a
    TAGS    kubernetes.io/role/elb  <subnet_id>        subnet  1
    Copy to Clipboard Toggle word wrap

7.2.3. Creating a VPC using the AWS CLI

You can create a VPC by using the AWS CLI. For information on using this CLI, see the AWS create-vpc documentation.

7.2.4. Creating an AWS Virtual Private Cloud manually

If you choose to manually create your AWS Virtual Private Cloud (VPC) instead of using Terraform, go to the VPC page in the AWS console.

Your VPC must meet the requirements shown in the following table.

Table 7.3. Requirements for your VPC

Requirement                   Details

VPC name                      You need to have the specific VPC name and ID when creating your cluster.
CIDR range                    Your VPC CIDR range should match your machine CIDR.
Availability zones            You need one availability zone for a single-zone cluster, and three availability zones for a multi-zone cluster.
Public subnet                 You must have one public subnet with a NAT gateway for public clusters. Private clusters do not need a public subnet.
DNS hostname and resolution   You must ensure that the DNS hostname and resolution are enabled.

Tagging your subnets

Before you can use your VPC to create a Red Hat OpenShift Service on AWS cluster, you must tag your VPC subnets. Automated service preflight checks verify that these resources are tagged correctly. The following table shows how to tag your resources:

Resource         Key                              Value

Public subnet    kubernetes.io/role/elb           1 or no value
Private subnet   kubernetes.io/role/internal-elb  1 or no value

Note

You must tag at least one private subnet and, if applicable, one public subnet.

  1. Tag your resources in your terminal:

    1. For public subnets, run the following command:

      $ aws ec2 create-tags --resources <public_subnet_id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1
    2. For private subnets, run the following command:

      $ aws ec2 create-tags --resources <private_subnet_id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1

Verification

  • Verify that the tag is correct by running the following command:

    $ aws ec2 describe-tags --filters "Name=resource-id,Values=<subnet_id>"

    Example output

    TAGS    Name                    <subnet_id>        subnet  <prefix>-subnet-public1-us-east-1a
    TAGS    kubernetes.io/role/elb  <subnet_id>        subnet  1

7.2.5. Troubleshooting

If your cluster fails to install, troubleshoot these common issues:

  • Make sure your DHCP option set includes a domain name, and ensure that the domain name does not include any spaces or capital letters.
  • If your VPC uses a custom DNS resolver (the domain name servers field of your DHCP option set is not AmazonProvidedDNS), make sure it can properly resolve the private hosted zones configured in Route 53.
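
For example, you can inspect the DHCP option set that is associated with your VPC to review these values. The following is a minimal sketch using the AWS CLI, assuming <vpc_id> is the ID of your VPC:

    $ aws ec2 describe-dhcp-options --dhcp-options-ids \
        $(aws ec2 describe-vpcs --vpc-ids <vpc_id> \
        --query 'Vpcs[0].DhcpOptionsId' --output text)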

For more information about troubleshooting Red Hat OpenShift Service on AWS cluster installations, see Troubleshooting Red Hat OpenShift Service on AWS cluster installations.

7.2.5.1. Get support

If you need additional support, visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources.

7.3. Creating the account-wide STS roles and policies

Before you create your Red Hat OpenShift Service on AWS cluster, you must create the required account-wide roles and policies.

Note

Specific AWS-managed policies for Red Hat OpenShift Service on AWS must be attached to each role. Customer-managed policies must not be used with these required account roles. For more information regarding AWS-managed policies for Red Hat OpenShift Service on AWS clusters, see AWS managed policies for ROSA.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have available AWS service quotas.
  • You have enabled Red Hat OpenShift Service on AWS in the AWS Console.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.
  • You have logged in to your Red Hat account by using the ROSA CLI.

Procedure

  1. If they do not exist in your AWS account, create the required account-wide STS roles and attach the policies by running the following command:

    $ rosa create account-roles --hosted-cp
  2. Ensure that your worker role has the correct AWS policy by running the following command:

    $ aws iam attach-role-policy \
        --role-name ManagedOpenShift-HCP-ROSA-Worker-Role \ 1
        --policy-arn "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"

    1 This role needs to include the prefix that was created in the previous step.
  3. Optional: Set your prefix as an environmental variable by running the following command:

    $ export ACCOUNT_ROLES_PREFIX=<account_role_prefix>
    • View the value of the variable by running the following command:

      $ echo $ACCOUNT_ROLES_PREFIX

      Example output

      ManagedOpenShift
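
To confirm that the policy from step 2 is attached, you can list the policies on the worker role. The following is a minimal sketch; adjust the role name if you used a custom prefix:

      $ aws iam list-attached-role-policies \
          --role-name ManagedOpenShift-HCP-ROSA-Worker-Role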

For more information regarding AWS managed IAM policies for Red Hat OpenShift Service on AWS, see AWS managed IAM policies for ROSA.

7.4. Creating an OpenID Connect configuration

When creating a Red Hat OpenShift Service on AWS cluster, you can create the OpenID Connect (OIDC) configuration before creating your cluster. This configuration is registered to be used with OpenShift Cluster Manager.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI, rosa, on your installation host.

Procedure

  1. To create your OIDC configuration alongside the AWS resources, run the following command:

    $ rosa create oidc-config --mode=auto --yes

    This command returns the following information.

    Example output

    ? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes
    I: Setting up managed OIDC configuration
    I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
    	rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b
    If you are going to create a Hosted Control Plane cluster please include '--hosted-cp'
    I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName'
    ? Create the OIDC provider? Yes
    I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'

    When creating your cluster, you must supply the OIDC config ID. The CLI output provides this value for --mode auto, otherwise you must determine these values based on aws CLI output for --mode manual.

  2. Optional: you can save the OIDC configuration ID as a variable to use later. Run the following command to save the variable:

    $ export OIDC_ID=<oidc_config_id> 1

    1 In the example output above, the OIDC configuration ID is 13cdr6b.
    • View the value of the variable by running the following command:

      $ echo $OIDC_ID

      Example output

      13cdr6b
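
Alternatively, you can capture the configuration ID directly from the CLI. The following is a minimal sketch, assuming that rosa list oidc-config supports the -o json output flag (as other list subcommands in this guide do) and that jq is installed:

      $ export OIDC_ID=$(rosa list oidc-config -o json | jq -r '.[0].id')
      $ echo $OIDC_ID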

Verification

  • You can list the possible OIDC configurations available for your clusters that are associated with your user organization. Run the following command:

    $ rosa list oidc-config

    Example output

    ID                                MANAGED  ISSUER URL                                                             SECRET ARN
    2330dbs0n8m3chkkr25gkkcd8pnj3lk2  true     https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2
    233hvnrjoqu14jltk6lhbhf2tj11f8un  false    https://oidc-r7u1.s3.us-east-1.amazonaws.com                           aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN

7.5. Creating Operator roles and policies

When you deploy a Red Hat OpenShift Service on AWS cluster, you must create the Operator IAM roles. The cluster Operators use the Operator roles and policies to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage and external access to a cluster.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.
  • You created the account-wide AWS roles.

Procedure

  1. To create your Operator roles, run the following command:

    $ rosa create operator-roles --hosted-cp --prefix=$OPERATOR_ROLES_PREFIX --oidc-config-id=$OIDC_ID --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role

    The following breakdown provides options for the Operator role creation.

    $ rosa create operator-roles --hosted-cp \
        --prefix=$OPERATOR_ROLES_PREFIX \ 1
        --oidc-config-id=$OIDC_ID \ 2
        --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/$ACCOUNT_ROLES_PREFIX-HCP-ROSA-Installer-Role 3

    1 You must supply a prefix when creating these Operator roles. Failing to do so produces an error. See the Additional resources of this section for information on the Operator prefix.
    2 This value is the OIDC configuration ID that you created for your Red Hat OpenShift Service on AWS cluster.
    3 This value is the installer role ARN that you created when you created the Red Hat OpenShift Service on AWS account roles.

    You must include the --hosted-cp parameter to create the correct roles for Red Hat OpenShift Service on AWS clusters. This command returns the following information.

    Example output

    ? Role creation mode: auto
    ? Operator roles prefix: <pre-filled_prefix> 1
    ? OIDC Configuration ID: 23soa2bgvpek9kmes9s7os0a39i13qm4 | https://dvbwgdztaeq9o.cloudfront.net/23soa2bgvpek9kmes9s7os0a39i13qm4 2
    ? Create hosted control plane operator roles: Yes
    W: More than one Installer role found
    ? Installer role ARN: arn:aws:iam::4540112244:role/<prefix>-HCP-ROSA-Installer-Role
    ? Permissions boundary ARN (optional):
    I: Reusable OIDC Configuration detected. Validating trusted relationships to operator roles:
    I: Creating roles using 'arn:aws:iam::4540112244:user/<userName>'
    I: Created role '<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials'
    I: Created role '<prefix>-openshift-cloud-network-config-controller-cloud-credenti' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti'
    I: Created role '<prefix>-kube-system-kube-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager'
    I: Created role '<prefix>-kube-system-capa-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager'
    I: Created role '<prefix>-kube-system-control-plane-operator' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator'
    I: Created role '<prefix>-kube-system-kms-provider' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider'
    I: Created role '<prefix>-openshift-image-registry-installer-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials'
    I: Created role '<prefix>-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials'
    I: To create a cluster with these roles, run the following command:
    	rosa create cluster --sts --oidc-config-id 23soa2bgvpek9kmes9s7os0a39i13qm4 --operator-roles-prefix <prefix> --hosted-cp
    1 This field is prepopulated with the prefix that you set in the initial creation command.
    2 This field requires you to select an OIDC configuration that you created for your Red Hat OpenShift Service on AWS cluster.

    The Operator roles are now created and ready to use for creating your Red Hat OpenShift Service on AWS cluster.

Verification

  • You can list the Operator roles associated with your Red Hat OpenShift Service on AWS account. Run the following command:

    $ rosa list operator-roles

    Example output

    I: Fetching operator roles
    ROLE PREFIX  AMOUNT IN BUNDLE
    <prefix>      8
    ? Would you like to detail a specific prefix Yes 1
    ? Operator Role Prefix: <prefix>
    ROLE NAME                                                         ROLE ARN                                                                                         VERSION  MANAGED
    <prefix>-kube-system-capa-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager                       4.13     No
    <prefix>-kube-system-control-plane-operator                        arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator                        4.13     No
    <prefix>-kube-system-kms-provider                                  arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider                                  4.13     No
    <prefix>-kube-system-kube-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager                       4.13     No
    <prefix>-openshift-cloud-network-config-controller-cloud-credenti  arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti  4.13     No
    <prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       4.13     No
    <prefix>-openshift-image-registry-installer-cloud-credentials      arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials      4.13     No
    <prefix>-openshift-ingress-operator-cloud-credentials              arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials              4.13     No
    1 After the command runs, it displays all the prefixes associated with your AWS account and notes how many roles are associated with this prefix. If you need to see all of these roles and their details, enter "Yes" on the detail prompt to have these roles listed out with specifics.

7.6. Creating a Red Hat OpenShift Service on AWS cluster with egress zero using the CLI

When using the ROSA CLI, rosa, to create a cluster, you can select the default options to create the cluster quickly.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have available AWS service quotas.
  • You have enabled Red Hat OpenShift Service on AWS in the AWS Console.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host. Run rosa version to see your currently installed version of the ROSA CLI. If a newer version is available, the CLI provides a link to download this upgrade.
  • You have logged in to your Red Hat account by using the ROSA CLI.
  • You have created an OIDC configuration.
  • You have verified that the AWS Elastic Load Balancing (ELB) service role exists in your AWS account.

Procedure

  1. Use one of the following commands to create your Red Hat OpenShift Service on AWS cluster:

    Note

    When creating a Red Hat OpenShift Service on AWS cluster, the default machine Classless Inter-Domain Routing (CIDR) is 10.0.0.0/16. If this does not correspond to the CIDR range for your VPC subnets, add --machine-cidr <address_block> to the following commands. To learn more about the default CIDR ranges for Red Hat OpenShift Service on AWS, see the CIDR range definitions.

    • If you did not set environment variables, run the following command:

      $ rosa create cluster --cluster-name=<cluster_name> \ 1
           --mode=auto --hosted-cp [--private] \
           --operator-roles-prefix <operator-role-prefix> \ 2
           --oidc-config-id <id-of-oidc-configuration> \
           --subnet-ids=<private-subnet-id> --region <region> \
           --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 \
           --pod-cidr 10.128.0.0/14 --host-prefix 23 \
           --billing-account <root-acct-id> \ 3
           --properties zero_egress:true
      1 Specify the name of your cluster. If your cluster name is longer than 15 characters, it will contain an autogenerated domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. To customize the subdomain, use the --domain-prefix flag. The domain prefix cannot be longer than 15 characters, must be unique, and cannot be changed after cluster creation.
      2 By default, the cluster-specific Operator role names are prefixed with the cluster name and a random 4-digit hash. You can optionally specify a custom prefix to replace <cluster_name>-<hash> in the role names. The prefix is applied when you create the cluster-specific Operator IAM roles. For information about the prefix, see About custom Operator IAM role prefixes.
      Note

      If you specified custom ARN paths when you created the associated account-wide roles, the custom path is automatically detected. The custom path is applied to the cluster-specific Operator roles when you create them in a later step.

      3 If your billing account is different from your user account, add this argument and specify the AWS account that is responsible for all billing.
    • If you set the environment variables, create a cluster with egress zero that has a single initial machine pool, a privately available API, and a privately available Ingress by running the following command:

      $ rosa create cluster --private --cluster-name=$CLUSTER_NAME \
          --mode=auto --hosted-cp --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
          --oidc-config-id=$OIDC_ID --subnet-ids=$SUBNET_IDS \
          --region $REGION --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 \
          --pod-cidr 10.128.0.0/14 --host-prefix 23 \
          --properties zero_egress:true
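
      The preceding command assumes that the referenced environment variables were exported earlier in your session. The following is a minimal sketch with placeholder values:

      $ export CLUSTER_NAME=<cluster_name>
      $ export OPERATOR_ROLES_PREFIX=<operator_role_prefix>
      $ export OIDC_ID=<oidc_config_id>
      $ export SUBNET_IDS=<private_subnet_id>
      $ export REGION=<aws_region>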
  2. Check the status of your cluster by running the following command:

    $ rosa describe cluster --cluster=<cluster_name>

    The following State field changes are listed in the output as cluster installation progresses:

    • pending (Preparing account)
    • installing (DNS setup in progress)
    • installing
    • ready

      Note

      If the installation fails or the State field does not change to ready after more than 10 minutes, check the installation troubleshooting documentation for details. For more information, see Troubleshooting installations. For steps to contact Red Hat Support for assistance, see Getting support for Red Hat OpenShift Service on AWS.
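
      To poll the state without rerunning the command manually, you can wrap it in a loop. The following is a minimal bash sketch:

      $ while true; do
          # Print only the State line from the cluster description
          rosa describe cluster --cluster=<cluster_name> | grep '^State:'
          sleep 60
        done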

  3. Track the cluster creation progress by watching the Red Hat OpenShift Service on AWS installation program logs. To check the logs, run the following command:

    $ rosa logs install --cluster=<cluster_name> --watch 1

    1 Optional: To watch for new log messages as the installation progresses, use the --watch argument.

Chapter 8. Creating Red Hat OpenShift Service on AWS clusters that use direct authentication with an external OIDC identity provider

You can create Red Hat OpenShift Service on AWS clusters that use an external OpenID Connect (OIDC) identity provider to issue tokens for authentication, replacing the built-in OpenShift OAuth server. While the built-in OpenShift OAuth server supports integration with a variety of identity providers, including external OIDC identity providers, it is limited to the capabilities of the OAuth server itself. You can directly integrate external OIDC identity providers with Red Hat OpenShift Service on AWS clusters to facilitate machine-to-machine workflows, such as CLI access, and to provide additional capabilities that are not available with the built-in OpenShift OAuth server.

Important

Because it is not possible to upgrade or convert existing Red Hat OpenShift Service on AWS (classic architecture) clusters to a hosted control planes architecture, you must create a new cluster to use this functionality. You also cannot convert a cluster that was created to use external authentication providers to use the internal OAuth2 server; instead, you must create a new cluster.

Note

Red Hat OpenShift Service on AWS clusters only support Security Token Service (STS) authentication.

Additional resources

For a full list of the supported certificates, see the Compliance section of "Understanding process and security for Red Hat OpenShift Service on AWS".

8.1. Red Hat OpenShift Service on AWS Prerequisites

To create a Red Hat OpenShift Service on AWS cluster that uses an external OIDC identity provider, you must first complete the preparation steps described earlier in this guide, including creating the account-wide roles and policies, an OpenID Connect (OIDC) configuration, and the Operator roles and policies.

8.2. Creating a Red Hat OpenShift Service on AWS cluster that uses an external authentication provider

Use the --external-auth-providers-enabled flag in the ROSA CLI to create a cluster that uses an external authentication service.

Note

When creating a Red Hat OpenShift Service on AWS cluster, the default machine Classless Inter-Domain Routing (CIDR) is 10.0.0.0/16. If this does not correspond to the CIDR range for your VPC subnets, add --machine-cidr <address_block> to the following commands.

Procedure

  • If you used the OIDC_ID, SUBNET_IDS, and OPERATOR_ROLES_PREFIX variables to prepare your environment, you can continue to use those variables when creating your cluster. For example, run the following command:

    $ rosa create cluster --hosted-cp --subnet-ids=$SUBNET_IDS \
       --oidc-config-id=$OIDC_ID --cluster-name=<cluster_name> \
       --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
       --external-auth-providers-enabled
  • If you did not set environmental variables, run the following command:

    $ rosa create cluster --cluster-name=<cluster_name> --sts --mode=auto \
        --hosted-cp --operator-roles-prefix <operator-role-prefix> \
        --oidc-config-id <ID-of-OIDC-configuration> \
        --external-auth-providers-enabled \
        --subnet-ids=<public-subnet-id>,<private-subnet-id>

Verification

  • Verify that your external authentication is enabled in the cluster details by running the following command:

    $ rosa describe cluster --cluster=<cluster_name>

    Example output

    Name:                       rosa-ext-test
    Display Name:               rosa-ext-test
    ID:                         <cluster_id>
    External ID:                <cluster_ext_id>
    Control Plane:              ROSA Service Hosted
    OpenShift Version:          4.19.0
    Channel Group:              stable
    DNS:                        <dns>
    AWS Account:                <AWS_id>
    AWS Billing Account:        <AWS_id>
    API URL:                    <ocm_api>
    Console URL:
    Region:                     us-east-1
    Availability:
     - Control Plane:           MultiAZ
     - Data Plane:              SingleAZ
    
    Nodes:
     - Compute (desired):       2
     - Compute (current):       0
    Network:
     - Type:                    OVNKubernetes
     - Service CIDR:            <service_cidr>
     - Machine CIDR:            <machine_cidr>
     - Pod CIDR:                <pod_cidr>
     - Host Prefix:             /23
     - Subnets:                 <subnet_ids>
    EC2 Metadata Http Tokens:   optional
    Role (STS) ARN:             arn:aws:iam::<AWS_id>:role/<account_roles_prefix>-HCP-ROSA-Installer-Role
    Support Role ARN:           arn:aws:iam::<AWS_id>:role/<account_roles_prefix>-HCP-ROSA-Support-Role
    Instance IAM Roles:
     - Worker:                  arn:aws:iam::<AWS_id>:role/<account_roles_prefix>-HCP-ROSA-Worker-Role
    Operator IAM Roles:
     - arn:aws:iam::<AWS_id>:role/<operator_roles_prefix>-openshift-cloud-network-config-controller-clo
     - arn:aws:iam::<AWS_id>:role/<operator_roles_prefix>-kube-system-capa-controller-manager
     - arn:aws:iam::<AWS_id>:role/<operator_roles_prefix>-kube-system-control-plane-operator
     - arn:aws:iam::<AWS_id>:role/<operator_roles_prefix>-kube-system-kms-provider
     - arn:aws:iam::<AWS_id>:role/<operator_roles_prefix>-kube-system-kube-controller-manager
     - arn:aws:iam::<AWS_id>:role/<operator_roles_prefix>-openshift-image-registry-installer-cloud-cred
     - arn:aws:iam::<AWS_id>:role/<operator_roles_prefix>-openshift-ingress-operator-cloud-credentials
     - arn:aws:iam::<AWS_id>:role/<operator_roles_prefix>-openshift-cluster-csi-drivers-ebs-cloud-crede
    Managed Policies:           Yes
    State:                      ready
    Private:                    No
    Created:                    Mar 29 2024 14:25:52 UTC
    User Workload Monitoring:   Enabled
    Details Page:               https://<url>
    OIDC Endpoint URL:          https://<endpoint> (Managed)
    Audit Log Forwarding:       Disabled
    External Authentication:    Enabled 1

    1 The External Authentication flag is enabled, and you can now create an external authentication provider.

8.3. Creating an external authentication provider

After you have created a Red Hat OpenShift Service on AWS cluster with the enabled option for external authentication providers, you must create a provider using the ROSA CLI.

Note

Similar to the rosa create|delete|list idp[s] command in the ROSA CLI, you cannot edit an existing identity provider that you created using rosa create external-auth-provider. Instead, you must delete the external authentication provider and create a new one.
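
For example, to change the settings of an existing provider, you delete it and create a replacement with the new values. The following is a minimal sketch that uses the create and delete commands described in this chapter:

    $ rosa delete external-auth-provider <provider_name> -c <cluster_name>
    $ rosa create external-auth-provider -c <cluster_name> \
        --name <provider_name> --issuer-url <new_issuer_url> \
        --issuer-audiences <audience_id> \
        --claim-mapping-username-claim email \
        --claim-mapping-groups-claim groups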

The following table shows the possible CLI flags you can use when creating your external authentication provider:

CLI Flag	Description

--cluster

The name or the ID of your cluster.

--name

A name that is used to refer to the external authentication provider.

--console-client-secret

This string is the client secret that is used to associate your account with the application. If you do not include the client secret, this command uses a public OIDC OAuthClient.

--issuer-audiences

This is a comma-separated list of token audiences.

--issuer-url

The URL of the token issuer.

--claim-mapping-username-claim

The name of the claim that should be used to construct user names for the cluster identity.

--claim-mapping-groups-claim

The name of the claim that should be used to construct group names for the cluster identity.

Procedure

  • To use the interactive command-line interface, run the following command:

    Example input

    $ rosa create external-auth-provider -c <cluster_name>

    Example output

    I: Enabling interactive mode
    ? Name: 1
    ? Issuer audiences: 2
    ? The serving url of the token issuer: 3
    ? CA file path (optional): 4
    ? Claim mapping username: 5
    ? Claim mapping groups: 6
    ? Claim validation rule (optional): 7
    ? Console client id (optional): 8
    1 The name of your external authentication provider. This name should be lowercase and can contain numbers and dashes.
    2 The audience IDs that this authentication provider issues tokens for.
    3 The issuer's URL that serves the token.
    4 Optional: The certificate file to use when making requests.
    5 The name of the claim that is used to construct the user names for cluster identity, such as using email.
    6 The method with which to transform the ID token into a cluster identity, such as using groups.
    7 Optional: The rules that help validate token claims which authenticate your users. This field should be formatted as <claim>:<required_value>.
    8 Optional: The application or client ID that your app registration uses for the console.

  • You can include the required IDs to create your external authentication provider with the following command:

    $ rosa create external-auth-provider --cluster=<cluster_id> \
        --name=<provider_name> --issuer-url=<issuing_url> \
        --issuer-audiences=<audience_id> \
        --claim-mapping-username-claim=email \
        --claim-mapping-groups-claim=groups \
        --console-client-id=<client_id_for_app_registration> \
        --console-client-secret=<client_secret>

    Example output

    I: Successfully created an external authentication provider for cluster 'ext-auth-test'

8.3.1. Example external authentication providers

You can use one of the following examples of external authentication provider configurations to set up your own configuration.

Example Microsoft Entra ID configuration

You can use Microsoft Entra ID as an external provider. You must have already configured a Microsoft Entra ID server before using it as an external provider. See the Microsoft Entra ID documentation for more information.

The following example shows a configured Microsoft Entra ID external authentication provider:

Procedure

  1. Create an external authentication provider that uses Microsoft Entra ID by running the following command:

    Note

    You must set your own environment variables with values specific to your Microsoft Entra ID server.

    Example input

    $ rosa create external-auth-provider -c $CLUSTER_NAME \
        --claim-mapping-groups-claim groups \
        --claim-mapping-username-claim <authorized_user_name> \
        --console-client-id $CONSOLE_CLIENT_ID \
        --console-client-secret $CONSOLE_CLIENT_SECRET_VALUE \
        --issuer-audiences "$AUDIENCE_1" \
        --issuer-ca-file ca-bundle.crt --issuer-url $ISSUER_URL \
        --name m-entra-id

    Example output

    I: Successfully created an external authentication provider for cluster 'ext-auth-test'. It can take a few minutes for the creation of an external authentication provider to become fully effective.

  2. List the external authentication provider for your cluster to see the issuer URL, or use the rosa describe command to see all details for this provider, by running one of the following commands:

    1. List the external authentication configuration on a specified cluster by running the following command:

      Example input

      $ rosa list external-auth-provider -c <cluster_name> 1

      1 Provide the name of the cluster with the external authentication provider you want to view.

      Example output

      NAME        ISSUER URL
      m-entra-id  https://login.microsoftonline.com/<group_id>/v2.0

    2. Display the external authentication configuration on a specified cluster by running the following command:

      Example input

      $ rosa describe external-auth-provider \
          -c <cluster_name> \ 1
          --name <name_of_external_authentication> 2

      1 Provide the name of the cluster that has the external authentication provider whose details you want to see.
      2 Provide the name of the authentication provider whose details you want to see.

      Example output

      ID:                          m-entra-id
      Cluster ID:                  <cluster_id>
      Issuer audiences:
                                   - <audience_id>
      Issuer Url:                  https://login.microsoftonline.com/<group_id>/v2.0
      Claim mappings group:        groups
      Claim mappings username:     email

Example Keycloak configuration

You can use Keycloak as an external provider. You must have already configured a Keycloak server before using it as an external provider. See the Keycloak documentation for more information.

Procedure

  1. Create an external authentication provider that uses Keycloak by running the following command:

    Note

    You must set your own environment variables with values specific to your Keycloak server.

    Example input

    $ rosa create external-auth-provider -c $CLUSTER_NAME \
        --claim-mapping-groups-claim groups \
        --claim-mapping-username-claim <authorized_user_name> \
        --console-client-id $CONSOLE_CLIENT_ID \
        --console-client-secret $CONSOLE_CLIENT_SECRET_VALUE \
        --issuer-audiences "$AUDIENCE_1,$AUDIENCE_2" \
        --issuer-ca-file ca-bundle.crt --issuer-url $ISSUER_URL --name keycloak

    Example output

    I: Successfully created an external authentication provider for cluster 'ext-auth-test'. It can take a few minutes for the creation of an external authentication provider to become fully effective.

  2. List the external authentication provider for your cluster to see the issuer URL, or use the rosa describe command to see all details for this provider, by running one of the following commands:

    1. List the external authentication configuration on a specified cluster by running the following command:

      Example input

      $ rosa list external-auth-provider -c <cluster_name>

      Example output

      NAME      ISSUER URL
      keycloak  https://keycloak-keycloak.apps.<keycloak_id>.openshift.org/realms/master

    2. Display the external authentication configuration on a specified cluster by running the following command:

      Example input

      $ rosa describe external-auth-provider \
          -c <cluster_name> --name <name_of_external_authentication>

      Example output

      ID:                                    keycloak
      Cluster ID:                            <cluster_id>
      Issuer audiences:
                                             - <audience_id_1>
                                             - <audience_id_2>
      Issuer Url:                            https://keycloak-keycloak.apps.<keycloak_id>.openshift.org/realms/master
      Claim mappings group:                  groups
      Claim mappings username:               <authorized_user_name>
      Console client id:                     console-test

8.4. Creating a break glass credential for a Red Hat OpenShift Service on AWS cluster

As a Red Hat OpenShift Service on AWS cluster owner, you can use the break glass credential to create temporary administrative client credentials to access your clusters that are configured with custom OpenID Connect (OIDC) token issuers. Creating a break glass credential generates a new cluster-admin kubeconfig file. The kubeconfig file contains information about the cluster that the CLI uses to connect a client to the correct cluster and API server. You can use the newly generated kubeconfig file to allow access to the Red Hat OpenShift Service on AWS cluster.

Prerequisites

  • You have created a Red Hat OpenShift Service on AWS cluster with external authentication enabled. For more information, see Creating a Red Hat OpenShift Service on AWS cluster that uses external authentication providers.
  • You have created an external authentication provider. For more information, see Creating an external authentication provider.
  • You have an account with cluster admin permissions.

Procedure

  1. Create a break glass credential by using one of the following commands:

    • To create a break glass credential by using the interactive command interface to interactively specify custom settings, run the following command:

      $ rosa create break-glass-credential -c <cluster_name> -i 1

      1 Replace <cluster_name> with the name of your cluster.

      This command starts an interactive CLI process:

      Example output

      I: Enabling interactive mode
      ? Username (optional): 1
      ? Expiration duration (optional): 2
      I: Successfully created a break glass credential for cluster 'ac-hcp-test'.

      1 If left blank, a randomly generated username is assigned.
      2 The minimum validity of the break glass credential is 10 minutes, and the maximum validity is 24 hours. If left blank, the expiration duration defaults to 24 hours.
    • To create a break glass credential for a cluster called mycluster with specified values, run the following command:

      $ rosa create break-glass-credential -c mycluster --username test-username --expiration 1h
  2. List the break glass credential IDs, status, and associated users that are available for a cluster called mycluster by running the following command:

    $ rosa list break-glass-credential -c mycluster

    Example output

    ID                                USERNAME    STATUS
    2a7jli9n4phe6c02ul7ti91djtv2o51d  test-user   issued

    Note

    You can also view the credentials in a JSON output by adding the -o json argument to the command.

  3. To view the status of a break glass credential, run the following command, replacing <break_glass_credential_id> with the break glass credential ID:

    $ rosa describe break-glass-credential <break_glass_credential_id> -c <cluster_name>

    Example output

    ID:                                    2a7jli9n4phe6c02ul7ti91djtv2o51d
    Username:                              test-user
    Expire at:                             Dec 28 2026 10:23:05 EDT
    Status:                                issued

    The following is a list of possible Status field values:

    • issued: The break glass credential has been issued and is ready to use.
    • expired: The break glass credential has expired and can no longer be used.
    • failed: The break glass credential has failed to create. In this case, you receive a service log detailing the failure. For more information about service logs, see Accessing the service logs for Red Hat OpenShift Service on AWS clusters. For steps to contact Red Hat Support for assistance, see Getting support.
    • awaiting_revocation: The break glass credential is currently being revoked, meaning it cannot be used.
    • revoked: The break glass credential has been revoked and can no longer be used.
  4. To retrieve the kubeconfig, run the following commands:

    • Create a kubeconfigs directory:

      $ mkdir ~/kubeconfigs
    • Export the newly generated kubeconfig file, replacing <cluster_name> with the name of your cluster:

      $ export CLUSTER_NAME=<cluster_name> && export KUBECONFIG=~/kubeconfigs/break-glass-${CLUSTER_NAME}.kubeconfig
    • View the kubeconfig:

      $ rosa describe break-glass-credential <break_glass_credential_id> -c mycluster --kubeconfig

      Example output

      apiVersion: v1
      clusters:
      - cluster:
          server: <server_url>
        name: cluster
      contexts:
      - context:
          cluster: cluster
          namespace: default
          user: test-username
        name: admin
      current-context: admin
      kind: Config
      preferences: {}
      users:
      - name: test-user
        user:
          client-certificate-data: <client-certificate-data> 1
          client-key-data: <client-key-data> 2

      1 The client-certificate contains a certificate for the user signed by the Kubernetes certificate authorities (CA).
      2 The client-key contains the private key that corresponds to the client certificate.
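
      To check when the embedded client certificate expires, you can decode it from the kubeconfig output. The following is a minimal sketch, assuming openssl is installed:

      $ rosa describe break-glass-credential <break_glass_credential_id> -c mycluster --kubeconfig \
          | grep 'client-certificate-data' | awk '{print $2}' \
          | base64 -d | openssl x509 -noout -enddate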
  5. Optional: To save the kubeconfig, run the following command:

    $ rosa describe break-glass-credential <break_glass_credential_id> -c mycluster --kubeconfig > $KUBECONFIG

8.5. Accessing a Red Hat OpenShift Service on AWS cluster by using a break glass credential

Use the new kubeconfig from the break glass credential to gain temporary admin access to a Red Hat OpenShift Service on AWS cluster.

Prerequisites

  • You have access to a Red Hat OpenShift Service on AWS cluster with external authentication enabled. For more information, see Creating a Red Hat OpenShift Service on AWS cluster that uses direct authentication with an external OIDC identity provider.
  • You have installed the oc and the kubectl CLIs.
  • You have configured the new kubeconfig. For more information, see Creating a break glass credential for a Red Hat OpenShift Service on AWS cluster.

Procedure

  1. Access the details for the cluster:

    $ rosa describe break-glass-credential <break_glass_credential_id> -c <cluster_name>  --kubeconfig > $KUBECONFIG
  2. List the nodes from the cluster:

    $ oc get nodes

    Example output

    NAME                        STATUS   ROLES   AGE   VERSION
    ip-10-0-0-27.ec2.internal   Ready    worker  8m    v1.28.7+f1b5f6c
    ip-10-0-0-67.ec2.internal   Ready    worker  9m    v1.28.7+f1b5f6c

  3. Verify you have the correct credentials:

    $ kubectl auth whoami

    Example output

    ATTRIBUTE    VALUE
    Username     system:customer-break-glass:test-user
    Groups       [system:masters system:authenticated]

  4. Apply the ClusterRoleBinding for the groups defined in the external OIDC provider. The ClusterRoleBinding maps the rosa-hcp-admins group that is created in Microsoft Entra ID to a group in the Red Hat OpenShift Service on AWS cluster.

    $ oc apply -f - <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: rosa-hcp-admins
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: f715c264-ab90-45d5-8a29-2e91a609a895
    EOF

    Example output

    clusterrolebinding.rbac.authorization.k8s.io/rosa-hcp-admins created

    Note

    After the ClusterRoleBinding has been applied, the Red Hat OpenShift Service on AWS cluster is configured, and the rosa CLI and the Red Hat Hybrid Cloud Console are authenticated through the external OpenID Connect (OIDC) provider. You can now start assigning roles and deploying applications on the cluster.
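
    To confirm that the binding was created as expected, you can query it directly; the following is a minimal sketch:

    $ oc get clusterrolebinding rosa-hcp-admins -o yaml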

8.6. Revoking a break glass credential for a Red Hat OpenShift Service on AWS cluster

You can revoke access to any break glass credentials that you have provisioned at any time by using the revoke break-glass-credentials command.

Prerequisites

  • You have created a break glass credential.
  • You are the cluster owner.

Procedure

  • Revoke the break glass credentials for a Red Hat OpenShift Service on AWS cluster by running the following command.

    Important

    Running this command will revoke access for all break glass credentials related to the cluster.

    $ rosa revoke break-glass-credentials -c <cluster_name> 1

    1 Replace <cluster_name> with the name of your cluster.

    Example output

    ? Are you sure you want to revoke all the break glass credentials on cluster 'my-cluster'?: Yes
    I: Successfully requested revocation for all break glass credentials from cluster 'my-cluster'

Verification

  • The revocation process can take several minutes. You can verify that the break glass credentials for your clusters have been revoked by running one of the following commands:

    • List all break glass credentials and check the status of each:

      $ rosa list break-glass-credential -c <cluster_name>

      Example output

      ID                                USERNAME    STATUS
      2330dbs0n8m3chkkr25gkkcd8pnj3lk2  test-user   awaiting_revocation

    • You can also verify the status by checking the individual credential:

      $ rosa describe break-glass-credential <break_glass_credential_id> -c <cluster_name>

      Example output

      ID:                                    2330dbs0n8m3chkkr25gkkcd8pnj3lk2
      Username:                              test-user
      Expire at:                             Dec 28 2026 10:23:05 EDT
      Status:                                revoked
      Revoked at:                            Dec 27 2026 15:30:33 EDT

8.7. Deleting an external authentication provider

Delete external authentication providers by using the ROSA CLI.

Procedure

  1. Display your external authentication provider on your cluster by running the following command:

    $ rosa list external-auth-provider -c <cluster_name>

    Example output

    NAME        ISSUER URL
    entra-test  https://login.microsoftonline.com/<group_id>/v2.0

  2. Delete the external authentication provider by running the following command:

    $ rosa delete external-auth-provider <name_of_provider> -c <cluster_name>

    Example output

    ? Are you sure you want to delete external authentication provider entra-test on cluster rosa-ext-test? Yes
    I: Successfully deleted external authentication provider 'entra-test' from cluster 'rosa-ext-test'

Verification

  1. Query for any external authentication providers on your cluster by running the following command:

    $ rosa list external-auth-provider -c <cluster_name>

    Example output

    E: there are no external authentication providers for this cluster

Chapter 9. Red Hat OpenShift Service on AWS clusters without a CNI plugin

You can use your own Container Network Interface (CNI) plugin when creating a Red Hat OpenShift Service on AWS cluster. You can create a Red Hat OpenShift Service on AWS cluster without a CNI and install your own CNI plugin after cluster creation.

Important

For customers who choose to use their own CNI, the responsibility of CNI plugin support belongs to the customer in coordination with their chosen CNI vendor.

The default plugin for Red Hat OpenShift Service on AWS is the OVN-Kubernetes network plugin. This plugin is the only Red Hat supported CNI plugin for Red Hat OpenShift Service on AWS.

If you choose to use your own CNI for Red Hat OpenShift Service on AWS clusters, it is strongly recommended that you obtain commercial support from the plugin vendor before creating your clusters. Red Hat support cannot assist with CNI-related issues, such as pod-to-pod traffic, for customers who choose to use their own CNI. Red Hat still provides support for all non-CNI issues. If you want CNI-related support from Red Hat, you must install the cluster with the default OVN-Kubernetes network plugin. For more information, see the responsibility matrix.
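
The cluster creation step itself mirrors the earlier procedures, with the CNI disabled at creation time. The following is a minimal sketch, assuming the ROSA CLI's --no-cni flag and the account-wide roles, OIDC configuration, and Operator roles that you create in the following sections:

    $ rosa create cluster --cluster-name=<cluster_name> --sts --mode=auto \
        --hosted-cp --operator-roles-prefix <operator_role_prefix> \
        --oidc-config-id <oidc_config_id> \
        --subnet-ids=<public-subnet-id>,<private-subnet-id> \
        --no-cni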

9.1. Creating a Red Hat OpenShift Service on AWS cluster without a CNI plugin

9.1.1. Prerequisites

9.1.2. Creating the account-wide STS roles and policies

Before you create your Red Hat OpenShift Service on AWS cluster, you must create the required account-wide roles and policies.

Note

Specific AWS-managed policies for Red Hat OpenShift Service on AWS must be attached to each role. Customer-managed policies must not be used with these required account roles. For more information regarding AWS-managed policies for Red Hat OpenShift Service on AWS clusters, see AWS managed policies for ROSA.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have available AWS service quotas.
  • You have enabled Red Hat OpenShift Service on AWS in the AWS Console.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.
  • You have logged in to your Red Hat account by using the ROSA CLI.

Procedure

  1. If they do not exist in your AWS account, create the required account-wide STS roles and attach the policies by running the following command:

    $ rosa create account-roles --hosted-cp
  2. Optional: Set your prefix as an environmental variable by running the following command:

    $ export ACCOUNT_ROLES_PREFIX=<account_role_prefix>
    • View the value of the variable by running the following command:

      $ echo $ACCOUNT_ROLES_PREFIX

      Example output

      ManagedOpenShift

For more information regarding AWS managed IAM policies for Red Hat OpenShift Service on AWS, see AWS managed IAM policies for ROSA.

9.1.3. Creating an OpenID Connect configuration

When creating a Red Hat OpenShift Service on AWS cluster, you can create the OpenID Connect (OIDC) configuration before creating your cluster. This configuration is registered to be used with OpenShift Cluster Manager.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI, rosa, on your installation host.

Procedure

  1. To create your OIDC configuration alongside the AWS resources, run the following command:

    $ rosa create oidc-config --mode=auto --yes

    This command returns the following information.

    Example output

    ? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes
    I: Setting up managed OIDC configuration
    I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
    	rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b
    If you are going to create a Hosted Control Plane cluster please include '--hosted-cp'
    I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName'
    ? Create the OIDC provider? Yes
    I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'

    When creating your cluster, you must supply the OIDC config ID. The CLI output provides this value for --mode auto, otherwise you must determine these values based on aws CLI output for --mode manual.

  2. Optional: you can save the OIDC configuration ID as a variable to use later. Run the following command to save the variable:

    $ export OIDC_ID=<oidc_config_id> 1

    1 In the example output above, the OIDC configuration ID is 13cdr6b.
    • View the value of the variable by running the following command:

      $ echo $OIDC_ID

      Example output

      13cdr6b

Verification

  • You can list the possible OIDC configurations available for your clusters that are associated with your user organization. Run the following command:

    $ rosa list oidc-config

    Example output

    ID                                MANAGED  ISSUER URL                                                             SECRET ARN
    2330dbs0n8m3chkkr25gkkcd8pnj3lk2  true     https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2
    233hvnrjoqu14jltk6lhbhf2tj11f8un  false    https://oidc-r7u1.s3.us-east-1.amazonaws.com                           aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN

9.1.4. Creating Operator roles and policies

When you deploy a Red Hat OpenShift Service on AWS cluster, you must create the Operator IAM roles. The cluster Operators use the Operator roles and policies to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage and external access to a cluster.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.
  • You created the account-wide AWS roles.

Procedure

  1. To create your Operator roles, run the following command:

    $ rosa create operator-roles --hosted-cp --prefix=$OPERATOR_ROLES_PREFIX --oidc-config-id=$OIDC_ID --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role

    The following breakdown provides options for the Operator role creation.

    $ rosa create operator-roles --hosted-cp \
        --prefix=$OPERATOR_ROLES_PREFIX \ 1
        --oidc-config-id=$OIDC_ID \ 2
        --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/$ACCOUNT_ROLES_PREFIX-HCP-ROSA-Installer-Role 3

    1 You must supply a prefix when creating these Operator roles. Failing to do so produces an error. See the Additional resources of this section for information on the Operator prefix.
    2 This value is the OIDC configuration ID that you created for your Red Hat OpenShift Service on AWS cluster.
    3 This value is the installer role ARN that you created when you created the Red Hat OpenShift Service on AWS account roles.

    You must include the --hosted-cp parameter to create the correct roles for Red Hat OpenShift Service on AWS clusters. This command returns the following information.

    Example output

    ? Role creation mode: auto
    ? Operator roles prefix: <pre-filled_prefix> 1
    ? OIDC Configuration ID: 23soa2bgvpek9kmes9s7os0a39i13qm4 | https://dvbwgdztaeq9o.cloudfront.net/23soa2bgvpek9kmes9s7os0a39i13qm4 2
    ? Create hosted control plane operator roles: Yes
    W: More than one Installer role found
    ? Installer role ARN: arn:aws:iam::4540112244:role/<prefix>-HCP-ROSA-Installer-Role
    ? Permissions boundary ARN (optional):
    I: Reusable OIDC Configuration detected. Validating trusted relationships to operator roles:
    I: Creating roles using 'arn:aws:iam::4540112244:user/<userName>'
    I: Created role '<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials'
    I: Created role '<prefix>-openshift-cloud-network-config-controller-cloud-credenti' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti'
    I: Created role '<prefix>-kube-system-kube-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager'
    I: Created role '<prefix>-kube-system-capa-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager'
    I: Created role '<prefix>-kube-system-control-plane-operator' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator'
    I: Created role '<prefix>-kube-system-kms-provider' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider'
    I: Created role '<prefix>-openshift-image-registry-installer-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials'
    I: Created role '<prefix>-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials'
    I: To create a cluster with these roles, run the following command:
    	rosa create cluster --sts --oidc-config-id 23soa2bgvpek9kmes9s7os0a39i13qm4 --operator-roles-prefix <prefix> --hosted-cp
    Copy to Clipboard Toggle word wrap

    1
    This field is prepopulated with the prefix that you set in the initial creation command.
    2
    This field requires you to select an OIDC configuration that you created for your Red Hat OpenShift Service on AWS cluster.

    The Operator roles are now created and ready to use for creating your Red Hat OpenShift Service on AWS cluster.

Verification

  • List the Operator roles associated with your Red Hat OpenShift Service on AWS account by running the following command:

    $ rosa list operator-roles

    Example output

    I: Fetching operator roles
    ROLE PREFIX  AMOUNT IN BUNDLE
    <prefix>     8
    ? Would you like to detail a specific prefix Yes 1
    ? Operator Role Prefix: <prefix>
    ROLE NAME                                                           ROLE ARN                                                                                         VERSION  MANAGED
    <prefix>-kube-system-capa-controller-manager                        arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager                       4.13     No
    <prefix>-kube-system-control-plane-operator                         arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator                        4.13     No
    <prefix>-kube-system-kms-provider                                   arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider                                  4.13     No
    <prefix>-kube-system-kube-controller-manager                        arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager                       4.13     No
    <prefix>-openshift-cloud-network-config-controller-cloud-credenti   arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti  4.13     No
    <prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials        arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       4.13     No
    <prefix>-openshift-image-registry-installer-cloud-credentials       arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials      4.13     No
    <prefix>-openshift-ingress-operator-cloud-credentials               arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials              4.13     No

    1 After the command runs, it displays all the prefixes associated with your AWS account and notes how many roles are associated with each prefix. To list these roles and their details, enter Yes at the detail prompt.

9.2. Creating the cluster

When you use the ROSA command-line interface (CLI), rosa, to create a cluster, you can add the optional --no-cni flag to create a cluster without a CNI plugin.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have available AWS service quotas.
  • You have enabled Red Hat OpenShift Service on AWS in the AWS Console.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host. Run rosa version to see your currently installed version of the ROSA CLI. If a newer version is available, the CLI provides a link to download this upgrade.
  • You have logged in to your Red Hat account by using the ROSA CLI.
  • You have created an OIDC configuration.
  • You have verified that the AWS Elastic Load Balancing (ELB) service role exists in your AWS account.
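One way to verify the Elastic Load Balancing (ELB) service-linked role prerequisite is to query AWS IAM directly with the aws CLI. In this sketch, run the second command only if the first reports a NoSuchEntity error, meaning the role does not yet exist:

    $ aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing"
    $ aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"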

Procedure

  1. Create your Red Hat OpenShift Service on AWS cluster by using one of the following commands.

    Note

    When creating a Red Hat OpenShift Service on AWS cluster, the default machine Classless Inter-Domain Routing (CIDR) is 10.0.0.0/16. If this does not correspond to the CIDR range for your VPC subnets, add --machine-cidr <address_block> to the following commands.

    • Create a cluster with a single, initial machine pool, publicly available API, publicly available Ingress, and no CNI plugin by running the following command:

      $ rosa create cluster --cluster-name=<cluster_name> \
          --sts --mode=auto --hosted-cp --operator-roles-prefix <operator-role-prefix> \
          --oidc-config-id <ID-of-OIDC-configuration> --subnet-ids=<public-subnet-id>,<private-subnet-id> --no-cni
    • Create a cluster with a single, initial machine pool, privately available API, privately available Ingress, and no CNI plugin by running the following command:

      $ rosa create cluster --private --cluster-name=<cluster_name> \
          --sts --mode=auto --hosted-cp --subnet-ids=<private-subnet-id> --no-cni
    • If you used the OIDC_ID, SUBNET_IDS, and OPERATOR_ROLES_PREFIX variables to prepare your environment, you can continue to use those variables when creating your cluster without a CNI plugin. For example, run the following command:

      $ rosa create cluster --hosted-cp --subnet-ids=$SUBNET_IDS --oidc-config-id=$OIDC_ID --cluster-name=<cluster_name> --operator-roles-prefix=$OPERATOR_ROLES_PREFIX --no-cni
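    If your VPC subnets do not fall within the default 10.0.0.0/16 machine CIDR, append --machine-cidr to any of the preceding commands, as described in the note above. For example, with an assumed, illustrative VPC range of 10.10.0.0/16:

      $ rosa create cluster --hosted-cp --subnet-ids=$SUBNET_IDS --oidc-config-id=$OIDC_ID \
          --cluster-name=<cluster_name> --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
          --machine-cidr 10.10.0.0/16 --no-cni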
  2. Check the status of your cluster by running the following command:

    $ rosa describe cluster --cluster=<cluster_name>
    Important

    When you first log in to the cluster after it reaches the ready state, the nodes remain in the NotReady state until you install a CNI plugin. After the CNI plugin is installed, the nodes change to Ready.

    The following State field changes are listed in the output as the cluster installation progresses:

    • pending (Preparing account)
    • installing (DNS setup in progress)
    • installing
    • ready

      Note

      If the installation fails or the State field does not change to ready after 10 minutes, see Troubleshooting installations for details. For steps to contact Red Hat Support for assistance, see Getting support for Red Hat OpenShift Service on AWS.

  3. Track the progress of the cluster creation by watching the Red Hat OpenShift Service on AWS installation program logs. To check the logs, run the following command:

    $ rosa logs install --cluster=<cluster_name> --watch 1

    1 Optional: To watch for new log messages as the installation progresses, use the --watch argument.

9.2.1. Expected behavior for clusters without a CNI plugin

Although the Red Hat OpenShift Service on AWS cluster installation completes, the cluster cannot operate without a CNI plugin. Because the nodes are NotReady, workloads cannot be deployed. For example, the Red Hat OpenShift Service on AWS cluster web console is not available, so you must use the OpenShift CLI (oc) to log in to the cluster. Additionally, other OpenShift components, such as the HAProxy-based Ingress Controller, the image registry, and the Prometheus-based monitoring stack, are not running. This is expected behavior until you install a CNI provider.
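Because the web console is unavailable, you can create a temporary cluster-admin user and check node status with the OpenShift CLI (oc). The following is a minimal sketch; the exact oc login command, including the generated password, is printed in the output of rosa create admin:

    $ rosa create admin --cluster=<cluster_name>
    $ oc login <api_url> --username cluster-admin --password <generated_password>
    $ oc get nodes    # nodes report NotReady until a CNI plugin is installed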

9.3. Next steps

  • Install your CNI plugin. The nodes then change from the NotReady state to the Ready state.

Chapter 10. Deleting a Red Hat OpenShift Service on AWS cluster

If you want to delete a Red Hat OpenShift Service on AWS cluster, you can use either the Red Hat OpenShift Cluster Manager or the ROSA command-line interface (CLI) (rosa). After deleting your cluster, you can also delete the AWS Identity and Access Management (IAM) resources that are used by the cluster.

10.1. Deleting a Red Hat OpenShift Service on AWS cluster and the cluster-specific IAM resources

You can delete a Red Hat OpenShift Service on AWS cluster by using the ROSA CLI (rosa) or Red Hat OpenShift Cluster Manager.

After deleting the cluster, you can clean up the cluster-specific Identity and Access Management (IAM) resources in your AWS account by using the ROSA CLI. The cluster-specific resources include the Operator roles and the OpenID Connect (OIDC) provider.

Note

The cluster deletion must complete before you remove the IAM resources, because the resources are used in the cluster deletion and cleanup processes.

If add-ons are installed, the cluster deletion takes longer because add-ons are uninstalled before the cluster is deleted. The amount of time depends on the number and size of the add-ons.

Prerequisites

  • You have installed a Red Hat OpenShift Service on AWS cluster.
  • You have installed and configured the latest ROSA CLI on your installation host.

Procedure

  1. Get the cluster ID, the Amazon Resource Names (ARNs) for the cluster-specific Operator roles, and the endpoint URL for the OIDC provider by running the following command:

    $ rosa describe cluster --cluster=<cluster_name>

    Example output

    Name:                       test_cluster
    Domain Prefix:              test_cluster
    Display Name:               test_cluster
    ID:                         <cluster_id> 1
    External ID:                <external_id>
    Control Plane:              ROSA Service Hosted
    OpenShift Version:          4.19.0
    Channel Group:              stable
    DNS:                        test_cluster.l3cn.p3.openshiftapps.com
    AWS Account:                <AWS_id>
    AWS Billing Account:        <AWS_id>
    API URL:                    https://api.test_cluster.l3cn.p3.openshiftapps.com:443
    Console URL:
    Region:                     us-east-1
    Availability:
     - Control Plane:           MultiAZ
     - Data Plane:              SingleAZ
    Nodes:
     - Compute (desired):       2
     - Compute (current):       0
    Network:
     - Type:                    OVNKubernetes
     - Service CIDR:            172.30.0.0/16
     - Machine CIDR:            10.0.0.0/16
     - Pod CIDR:                10.128.0.0/14
     - Host Prefix:             /23
     - Subnets:                 <subnet_ids>
    EC2 Metadata Http Tokens:   optional
    Role (STS) ARN:             arn:aws:iam::<AWS_id>:role/test_cluster-HCP-ROSA-Installer-Role
    Support Role ARN:           arn:aws:iam::<AWS_id>:role/test_cluster-HCP-ROSA-Support-Role
    Instance IAM Roles:
     - Worker:                  arn:aws:iam::<AWS_id>:role/test_cluster-HCP-ROSA-Worker-Role
    Operator IAM Roles: 2
     - arn:aws:iam::<AWS_id>:role/test_cluster-openshift-cloud-network-config-controller-cloud-crede
     - arn:aws:iam::<AWS_id>:role/test_cluster-openshift-image-registry-installer-cloud-credentials
     - arn:aws:iam::<AWS_id>:role/test_cluster-openshift-ingress-operator-cloud-credentials
     - arn:aws:iam::<AWS_id>:role/test_cluster-kube-system-kube-controller-manager
     - arn:aws:iam::<AWS_id>:role/test_cluster-kube-system-capa-controller-manager
     - arn:aws:iam::<AWS_id>:role/test_cluster-kube-system-control-plane-operator
     - arn:aws:iam::<AWS_id>:role/test_cluster-kube-system-kms-provider
     - arn:aws:iam::<AWS_id>:role/test_cluster-openshift-cluster-csi-drivers-ebs-cloud-credentials
    Managed Policies:           Yes
    State:                      ready
    Private:                    No
    Created:                    Apr 16 2024 20:32:06 UTC
    User Workload Monitoring:   Enabled
    Details Page:               https://console.redhat.com/openshift/details/s/<cluster_id>
    OIDC Endpoint URL:          https://oidc.op1.openshiftapps.com/<cluster_id> (Managed) 3
    Audit Log Forwarding:       Disabled
    External Authentication:    Disabled

    1 Lists the cluster ID.
    2 Specifies the ARNs for the cluster-specific Operator roles. For example, in the sample output the ARN for the role required by the Ingress Operator is arn:aws:iam::<AWS_id>:role/test_cluster-openshift-ingress-operator-cloud-credentials.
    3 Displays the endpoint URL for the cluster-specific OIDC provider.
    Important

    After the cluster is deleted, you need the cluster ID to delete the cluster-specific STS resources using the ROSA CLI.
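    Because you need the cluster ID after the cluster is deleted, you might capture it in a shell variable now. This is a sketch that assumes the jq utility is installed and that your rosa version supports JSON output:

    $ CLUSTER_ID=$(rosa describe cluster --cluster=<cluster_name> --output json | jq -r '.id')
    $ echo $CLUSTER_ID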

  2. Delete the cluster by using either the OpenShift Cluster Manager or the ROSA CLI:

    • To delete the cluster by using the OpenShift Cluster Manager:

      1. Navigate to the OpenShift Cluster Manager.
      2. Click the Options menu (⋮) next to your cluster and select Delete cluster.
      3. Type the name of your cluster into the prompt and click Delete.
    • To delete the cluster using the ROSA CLI:

      1. Run the following command, replacing <cluster_name> with the name or ID of your cluster:

        $ rosa delete cluster --cluster=<cluster_name> --watch
        Important

        You must wait for cluster deletion to complete before you remove the Operator roles and the OIDC provider.
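        To watch the deletion progress and confirm that it has completed, you can follow the uninstallation logs; rosa logs uninstall is the uninstall counterpart of the rosa logs install command shown earlier:

        $ rosa logs uninstall --cluster=<cluster_name> --watch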

  3. Delete the cluster-specific Operator IAM roles by running one of the following commands:

    • For clusters without a shared Virtual Private Cloud (VPC):

      $ rosa delete operator-roles --prefix <operator_role_prefix>
    • For clusters with a shared VPC:

      $ rosa delete operator-roles --prefix <operator_role_prefix> --delete-hosted-shared-vpc-policies
  4. Delete the OIDC provider by running the following command:

    $ rosa delete oidc-provider --oidc-config-id <oidc_config_id>

Troubleshooting

  • Ensure that there are no add-ons for your cluster pending in the Hybrid Cloud Console.
  • Ensure that all AWS resources and dependencies have been deleted in the AWS Management Console.
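  • To confirm that the cluster-specific OIDC provider was removed, you can list the remaining IAM OIDC providers with the aws CLI and verify that none reference your cluster's OIDC endpoint URL:

    $ aws iam list-open-id-connect-providers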

10.2. Deleting the account-wide IAM resources

After you have deleted all Red Hat OpenShift Service on AWS clusters that depend on the account-wide AWS Identity and Access Management (IAM) resources, you can delete the account-wide resources.

If you no longer need to install a Red Hat OpenShift Service on AWS cluster by using Red Hat OpenShift Cluster Manager, you can also delete the OpenShift Cluster Manager and user IAM roles.

Important

The account-wide IAM roles and policies might be used by other Red Hat OpenShift Service on AWS clusters in the same AWS account. Only remove the resources if they are not required by other clusters.

The OpenShift Cluster Manager and user IAM roles are required if you want to install, manage, and delete other Red Hat OpenShift Service on AWS clusters in the same AWS account by using OpenShift Cluster Manager. Only remove the roles if you no longer need to install Red Hat OpenShift Service on AWS clusters in your account by using OpenShift Cluster Manager. For more information about repairing your cluster if these roles are removed before deletion, see "Repairing a cluster that cannot be deleted" in Troubleshooting cluster deployments.

10.2.1. Deleting the account-wide IAM roles and policies

This section provides steps to delete the account-wide IAM roles and policies that you created for Red Hat OpenShift Service on AWS deployments, along with the account-wide Operator policies. You can delete the account-wide AWS Identity and Access Management (IAM) roles and policies only after deleting all of the Red Hat OpenShift Service on AWS clusters that depend on them.

Important

The account-wide IAM roles and policies might be used by other Red Hat OpenShift Service on AWS clusters in the same AWS account. Only remove the roles if they are not required by other clusters.

Prerequisites

  • You have account-wide IAM roles that you want to delete.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.

Procedure

  1. Delete the account-wide roles:

    1. List the account-wide roles in your AWS account by using the ROSA CLI (rosa):

      $ rosa list account-roles

      Example output

      I: Fetching account roles
      ROLE NAME                                 ROLE TYPE      ROLE ARN                                                                 OPENSHIFT VERSION  AWS Managed
      ManagedOpenShift-HCP-ROSA-Installer-Role  Installer      arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Installer-Role  4.19               Yes
      ManagedOpenShift-HCP-ROSA-Support-Role    Support        arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Support-Role    4.19               Yes
      ManagedOpenShift-HCP-ROSA-Worker-Role     Worker         arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Worker-Role     4.19               Yes

    2. Delete the account-wide roles by running one of the following commands:

      • For clusters without a shared Virtual Private Cloud (VPC):

        $ rosa delete account-roles --prefix <prefix> --mode auto 1

        1 You must include the --prefix argument. Replace <prefix> with the prefix of the account-wide roles to delete. If you did not specify a custom prefix when you created the account-wide roles, specify the default prefix, ManagedOpenShift.
      • For clusters with a shared VPC:

        $ rosa delete account-roles --prefix <prefix> --delete-hosted-shared-vpc-policies --mode auto 1

        1 You must include the --prefix argument. Replace <prefix> with the prefix of the account-wide roles to delete. If you did not specify a custom prefix when you created the account-wide roles, specify the default prefix, ManagedOpenShift.
        Important

        The account-wide IAM roles might be used by other Red Hat OpenShift Service on AWS clusters in the same AWS account. Only remove the roles if they are not required by other clusters.

        Example output

        W: There are no classic account roles to be deleted
        I: Deleting hosted CP account roles
        ? Delete the account role 'delete-rosa-HCP-ROSA-Installer-Role'? Yes
        I: Deleting account role 'delete-rosa-HCP-ROSA-Installer-Role'
        ? Delete the account role 'delete-rosa-HCP-ROSA-Support-Role'? Yes
        I: Deleting account role 'delete-rosa-HCP-ROSA-Support-Role'
        ? Delete the account role 'delete-rosa-HCP-ROSA-Worker-Role'? Yes
        I: Deleting account role 'delete-rosa-HCP-ROSA-Worker-Role'
        I: Successfully deleted the hosted CP account roles

  2. Delete the account-wide inline and Operator policies:

    1. On the Policies page in the AWS IAM Console, filter the list of policies by the prefix that you specified when you created the account-wide roles and policies.

      Note

      If you did not specify a custom prefix when you created the account-wide roles, search for the default prefix, ManagedOpenShift.

    2. Delete the account-wide policies and Operator policies by using the AWS IAM Console. For more information about deleting IAM policies by using the AWS IAM Console, see Deleting IAM policies in the AWS documentation.

      Important

      The account-wide and Operator IAM policies might be used by other Red Hat OpenShift Service on AWS clusters in the same AWS account. Only remove the policies if they are not required by other clusters.
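      As an alternative to filtering in the console, you can list matching customer-managed policies with the aws CLI. The JMESPath filter below assumes the default ManagedOpenShift prefix; replace it with your custom prefix if you set one:

      $ aws iam list-policies --scope Local \
          --query "Policies[?starts_with(PolicyName, 'ManagedOpenShift')].[PolicyName,Arn]" \
          --output table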

10.2.2. Unlinking and deleting the OpenShift Cluster Manager and user IAM roles

When you install a Red Hat OpenShift Service on AWS cluster by using Red Hat OpenShift Cluster Manager, you also create OpenShift Cluster Manager and user Identity and Access Management (IAM) roles that link to your Red Hat organization. After deleting your cluster, you can unlink and delete the roles by using the ROSA CLI (rosa).

Important

The OpenShift Cluster Manager and user IAM roles are required if you want to use OpenShift Cluster Manager to install and manage other Red Hat OpenShift Service on AWS clusters in the same AWS account. Only remove the roles if you no longer need to use the OpenShift Cluster Manager to install Red Hat OpenShift Service on AWS clusters.

Prerequisites

  • You created OpenShift Cluster Manager and user IAM roles and linked them to your Red Hat organization.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.
  • You have organization administrator privileges in your Red Hat organization.

Procedure

  1. Unlink the OpenShift Cluster Manager IAM role from your Red Hat organization and delete the role:

    1. List the OpenShift Cluster Manager IAM roles in your AWS account:

      $ rosa list ocm-roles

      Example output

      I: Fetching ocm roles
      ROLE NAME                                                     ROLE ARN                                                                                         LINKED  ADMIN  AWS Managed
      ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>  arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>  Yes      Yes     Yes

    2. If your OpenShift Cluster Manager IAM role is listed as linked in the output of the preceding command, unlink the role from your Red Hat organization by running the following command:

      $ rosa unlink ocm-role --role-arn <arn> 1

      1 Replace <arn> with the Amazon Resource Name (ARN) for your OpenShift Cluster Manager IAM role. The ARN is specified in the output of the preceding command. In the preceding example, the ARN is in the format arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>.

      Example output

      I: Unlinking OCM role
      ? Unlink the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' role from organization '<red_hat_organization_id>'? Yes
      I: Successfully unlinked role-arn 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' from organization account '<red_hat_organization_id>'

    3. Delete the OpenShift Cluster Manager IAM role and policies:

      $ rosa delete ocm-role --role-arn <arn>

      Example output

      I: Deleting OCM role
      ? OCM Role ARN: arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>
      ? Delete 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' ocm role? Yes
      ? OCM role deletion mode: auto 1
      I: Successfully deleted the OCM role

      1 Specifies the deletion mode. You can use auto mode to automatically delete the OpenShift Cluster Manager IAM role and policies. In manual mode, the ROSA CLI generates the aws commands needed to delete the role and policies, which enables you to review the details before running them manually.
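      If you prefer to review the underlying aws commands before anything is deleted, the role deletion commands accept --mode manual. For example:

      $ rosa delete ocm-role --role-arn <arn> --mode manual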
  2. Unlink the user IAM role from your Red Hat organization and delete the role:

    1. List the user IAM roles in your AWS account:

      $ rosa list user-roles

      Example output

      I: Fetching user roles
      ROLE NAME                                  ROLE ARN                                                                  LINKED
      ManagedOpenShift-User-<ocm_user_name>-Role  arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role  Yes

    2. If your user IAM role is listed as linked in the output of the preceding command, unlink the role from your Red Hat organization:

      $ rosa unlink user-role --role-arn <arn> 1

      1 Replace <arn> with the Amazon Resource Name (ARN) for your user IAM role. The ARN is specified in the output of the preceding command. In the preceding example, the ARN is in the format arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role.

      Example output

      I: Unlinking user role
      ? Unlink the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' role from the current account '<ocm_user_account_id>'? Yes
      I: Successfully unlinked role ARN 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' from account '<ocm_user_account_id>'

    3. Delete the user IAM role:

      $ rosa delete user-role --role-arn <arn>

      Example output

      I: Deleting user role
      ? User Role ARN: arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role
      ? Delete the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' role from the AWS account? Yes
      ? User role deletion mode: auto 1
      I: Successfully deleted the user role

      1 Specifies the deletion mode. You can use auto mode to automatically delete the user IAM role. In manual mode, the ROSA CLI generates the aws command needed to delete the role, which enables you to review the details before running it manually.

Legal Notice

Copyright © 2025 Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
