Chapter 1. Red Hat OpenShift Service on AWS quick start guide


Follow this guide to quickly create a Red Hat OpenShift Service on AWS cluster using the ROSA command-line interface (CLI) (rosa), grant user access, deploy your first application, and learn how to revoke user access and delete your cluster.

Overview of the default cluster specifications

You can quickly create a Red Hat OpenShift Service on AWS cluster by using the default installation options.

The following summary describes the default cluster specifications.

Table 1.1. Default Red Hat OpenShift Service on AWS cluster specifications

Accounts and roles

  • Default IAM role prefix: HCP-ROSA

Cluster settings

  • Default cluster version: Latest
  • Default AWS region for installations using the ROSA CLI (rosa): Defined by your aws CLI configuration
  • Default EC2 IMDS endpoints (both v1 and v2) are enabled
  • Availability: Single zone for the data plane
  • Monitoring for user-defined projects: Enabled
  • No cluster admin role created

Compute node machine pool

  • Compute node instance type: m5.xlarge (4 vCPU, 16 GiB RAM)
  • Compute node count: 2
  • Autoscaling: Not enabled
  • No additional node labels

Networking configuration

  • Cluster privacy: Public
  • No cluster-wide proxy is configured

Classless Inter-Domain Routing (CIDR) ranges

  • Machine CIDR: 10.0.0.0/16
  • Service CIDR: 172.30.0.0/16
  • Pod CIDR: 10.128.0.0/14
  • Host prefix: /23

    Note

    The static IP address 172.20.0.1 is reserved for the internal Kubernetes API address. The machine, pod, and service CIDRs ranges must not conflict with this IP address.

Cluster roles and policies

  • Mode used to create the Operator roles and the OpenID Connect (OIDC) provider: auto

    Note

    For installations that use OpenShift Cluster Manager on the Hybrid Cloud Console, the auto mode requires an admin-privileged OpenShift Cluster Manager role (ocm-role).

  • Default Operator role prefix: <cluster_name>-<4_digit_random_string>

Storage

  • Node volumes:

    • Type: AWS EBS GP3
    • Default size: 300 GiB (adjustable at creation time)
  • Workload persistent volumes:

    • Default StorageClass: gp3-csi
    • Provisioner: ebs.csi.aws.com
    • Dynamic persistent volume provisioning

Cluster update strategy

  • Individual updates
  • 1 hour grace period for node draining

1.1. Setting up the environment

Before you create a Red Hat OpenShift Service on AWS cluster, you must set up your environment by completing the following tasks:

  • Verify Red Hat OpenShift Service on AWS prerequisites against your AWS and Red Hat accounts.
  • Install and configure the required command-line interface (CLI) tools.
  • Verify the configuration of the CLI tools.

You can follow the procedures in this section to complete these setup requirements.

Verifying Red Hat OpenShift Service on AWS prerequisites

Use the steps in this procedure to enable Red Hat OpenShift Service on AWS in your AWS account.

Prerequisites

  • You have a Red Hat account.
  • You have an AWS account.

    Note

    Consider using a dedicated AWS account to run production clusters. If you are using AWS Organizations, you can use an AWS account within your organization or create a new one.

Procedure

  1. Sign in to the AWS Management Console.
  2. Navigate to the ROSA service.
  3. Click Get started.

    The Verify ROSA prerequisites page opens.

  4. Under ROSA enablement, ensure that a green check mark and You previously enabled ROSA are displayed.

    If not, follow these steps:

    1. Select the checkbox beside I agree to share my contact information with Red Hat.
    2. Click Enable ROSA.

      After a short wait, a green check mark and You enabled ROSA message are displayed.

  5. Under Service Quotas, ensure that a green check and Your quotas meet the requirements for ROSA are displayed.

    If you see Your quotas don’t meet the minimum requirements, take note of the quota type and the minimum listed in the error message. See Amazon’s documentation on requesting a quota increase for guidance. It may take several hours for Amazon to approve your quota request.

  6. Under ELB service-linked role, ensure that a green check mark and AWSServiceRoleForElasticLoadBalancing already exists are displayed.
  7. Click Continue to Red Hat.

    The Get started with Red Hat OpenShift Service on AWS (ROSA) page opens in a new tab. You have already completed Step 1 on this page, and can now continue with Step 2.

Installing and configuring the required CLI tools

Several command-line interface (CLI) tools are required to deploy and work with your cluster.

Prerequisites

  • You have an AWS account.
  • You have a Red Hat account.

Procedure

  1. Log in to your Red Hat and AWS accounts to access the download page for each required tool.

    1. Log in to your Red Hat account at console.redhat.com.
    2. Log in to your AWS account at aws.amazon.com.
  2. Install and configure the latest AWS CLI (aws).

    1. Install the AWS CLI by following the AWS Command Line Interface documentation appropriate for your workstation.
    2. Configure the AWS CLI by specifying your aws_access_key_id, aws_secret_access_key, and region in the .aws/credentials file. For more information, see AWS Configuration basics in the AWS documentation.

      Note

      You can optionally use the AWS_DEFAULT_REGION environment variable to set the default AWS region.
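
      For reference, a minimal credentials file might look like the following sketch. The values are placeholders, and the region can also be set in your AWS CLI config file:

      [default]
      aws_access_key_id = <aws_access_key_id>
      aws_secret_access_key = <aws_secret_access_key>
      region = us-east-1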

    3. Query the AWS API to verify if the AWS CLI is installed and configured correctly:

      $ aws sts get-caller-identity --output text

      Example output

      <aws_account_id>    arn:aws:iam::<aws_account_id>:user/<username>  <aws_user_id>

  3. Install and configure the latest ROSA CLI.

    1. Navigate to Downloads.
    2. Find Red Hat OpenShift Service on AWS command line interface (rosa) in the list of tools and click Download.

      The rosa-linux.tar.gz file is downloaded to your default download location.

    3. Extract the rosa binary file from the downloaded archive. The following example extracts the binary from a Linux tar archive:

      $ tar xvf rosa-linux.tar.gz
    4. Move the rosa binary file to a directory in your execution path. In the following example, the /usr/local/bin directory is included in the path of the user:

      $ sudo mv rosa /usr/local/bin/rosa
    5. Verify that the ROSA CLI is installed correctly by querying the rosa version:

      $ rosa version

      Example output

      1.2.47
      Your ROSA CLI is up to date.

  4. Log in to the ROSA CLI using an offline access token.

    1. Run the login command:

      $ rosa login

      Example output

      To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa
      ? Copy the token and paste it here:

    2. Navigate to the URL listed in the command output to view your offline access token.
    3. Enter the offline access token at the command-line prompt to log in.

      ? Copy the token and paste it here: *******************
      [full token length omitted]
      Note

      In the future you can specify the offline access token by using the --token="<offline_access_token>" argument when you run the rosa login command.

    4. Verify that you are logged in and confirm that your credentials are correct before proceeding:

      $ rosa whoami

      Example output

      AWS Account ID:               <aws_account_number>
      AWS Default Region:           us-east-1
      AWS ARN:                      arn:aws:iam::<aws_account_number>:user/<aws_user_name>
      OCM API:                      https://api.openshift.com
      OCM Account ID:               <red_hat_account_id>
      OCM Account Name:             Your Name
      OCM Account Username:         you@domain.com
      OCM Account Email:            you@domain.com
      OCM Organization ID:          <org_id>
      OCM Organization Name:        Your organization
      OCM Organization External ID: <external_org_id>

  5. Install and configure the latest OpenShift CLI (oc).

    1. Use the ROSA CLI to download the oc CLI.

      The following command downloads the latest version of the CLI to the current working directory:

      $ rosa download openshift-client
    2. Extract the oc binary file from the downloaded archive. The following example extracts the files from a Linux tar archive:

      $ tar xvf openshift-client-linux.tar.gz
    3. Move the oc binary to a directory in your execution path. In the following example, the /usr/local/bin directory is included in the path of the user:

      $ sudo mv oc /usr/local/bin/oc
    4. Verify that the oc CLI is installed correctly:

      $ rosa verify openshift-client

      Example output

      I: Verifying whether OpenShift command-line tool is available...
      I: Current OpenShift Client Version: 4.17.3

Next steps

Before you can use the Red Hat Hybrid Cloud Console to deploy Red Hat OpenShift Service on AWS clusters, you must associate your AWS account with your Red Hat organization and create the required account-wide AWS IAM STS roles and policies for Red Hat OpenShift Service on AWS.

1.2. Creating the account-wide STS roles and policies

Before using the Red Hat Hybrid Cloud Console to create Red Hat OpenShift Service on AWS clusters that use the AWS Security Token Service (STS), create the required account-wide STS roles and policies, including the Operator policies.

Procedure

  1. If they do not exist in your AWS account, create the required account-wide AWS IAM STS roles and policies:

    $ rosa create account-roles --hosted-cp

    Select the default values at the prompts to quickly create the roles and policies.
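
    Alternatively, you can skip the interactive prompts by running the command in automatic mode. A sketch, assuming you want the default prefix and settings:

    $ rosa create account-roles --hosted-cp --mode auto --yes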

1.3. Creating a Virtual Private Cloud

You must have an AWS Virtual Private Cloud (VPC) to create a Red Hat OpenShift Service on AWS cluster. You can use the following methods to create a VPC:

  • Create a VPC using the ROSA CLI
  • Create a VPC by using a Terraform template
  • Manually create the VPC resources in the AWS console
Note

The Terraform instructions are for testing and demonstration purposes. Your own installation might require modifications to the VPC. Ensure that you use the linked Terraform configuration in the same region where you intend to install your cluster. These examples use us-east-2.

Creating an AWS VPC using the ROSA CLI

The rosa create network command is available in version 1.2.48 or later of the ROSA CLI. The command uses AWS CloudFormation to create a VPC and the associated networking components necessary to install a Red Hat OpenShift Service on AWS cluster. CloudFormation is a native AWS infrastructure-as-code tool and is compatible with the AWS CLI.

If you do not specify a template, CloudFormation uses a default template that creates resources with the following parameters:

  • Availability zones: 1
  • Region: us-east-1
  • VPC CIDR: 10.0.0.0/16

You can create and customize CloudFormation templates to use with the rosa create network command. See the additional resources of this section for information on the default VPC template.

Prerequisites

  • You have configured your AWS account.
  • You have configured your Red Hat account.
  • You have installed and configured the latest version of the ROSA CLI.

Procedure

  1. Create an AWS VPC using the default CloudFormation template by running the following command:

    $ rosa create network
  2. Optional: Customize your VPC by specifying additional parameters.

    You can use the --param flag to specify changes to the default VPC template. The following example command specifies custom values for Region, Name, AvailabilityZoneCount, and VpcCidr.

    $ rosa create network --param Region=us-east-2 --param Name=quickstart-stack --param AvailabilityZoneCount=3 --param VpcCidr=10.0.0.0/16

    The command takes about 5 minutes to run and provides regular status updates from AWS as resources are created. If there is an issue with CloudFormation, a rollback is attempted. For all other errors, follow the error message instructions or contact AWS support.

Verification

  • When completed, you receive a summary of the created resources:

    INFO[0140] Resources created in stack:
    INFO[0140] Resource: AttachGateway, Type: AWS::EC2::VPCGatewayAttachment, ID: <gateway_id>
    INFO[0140] Resource: EC2VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: EcrApiVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: EcrDkrVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: ElasticIP1, Type: AWS::EC2::EIP, ID: <IP>
    INFO[0140] Resource: ElasticIP2, Type: AWS::EC2::EIP, ID: <IP>
    INFO[0140] Resource: InternetGateway, Type: AWS::EC2::InternetGateway, ID: igw-016e1a71b9812464e
    INFO[0140] Resource: KMSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: NATGateway1, Type: AWS::EC2::NatGateway, ID: <nat-gateway_id>
    INFO[0140] Resource: PrivateRoute, Type: AWS::EC2::Route, ID: <route_id>
    INFO[0140] Resource: PrivateRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
    INFO[0140] Resource: PrivateSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
    INFO[0140] Resource: PublicRoute, Type: AWS::EC2::Route, ID: <route_id>
    INFO[0140] Resource: PublicRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
    INFO[0140] Resource: PublicSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
    INFO[0140] Resource: S3VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: STSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
    INFO[0140] Resource: SecurityGroup, Type: AWS::EC2::SecurityGroup, ID: <security-group_id>
    INFO[0140] Resource: SubnetPrivate1, Type: AWS::EC2::Subnet, ID: <private_subnet_id-1> 1
    INFO[0140] Resource: SubnetPublic1, Type: AWS::EC2::Subnet, ID: <public_subnet_id-1> 2
    INFO[0140] Resource: VPC, Type: AWS::EC2::VPC, ID: <vpc_id>
    INFO[0140] Stack rosa-network-stack-5555 created 3

    1 2 These two subnet IDs are used to create your cluster when using the rosa create cluster command.
    3 The network stack name is used to delete the resource later.

Creating a Virtual Private Cloud using Terraform

Terraform is a tool that allows you to create various resources using an established template. The following process uses the default options as required to create a Red Hat OpenShift Service on AWS cluster. For more information about using Terraform, see the additional resources.

Prerequisites

  • You have installed Terraform version 1.4.0 or newer on your machine.
  • You have installed Git on your machine.

Procedure

  1. Open a shell prompt and clone the Terraform VPC repository by running the following command:

    $ git clone https://github.com/openshift-cs/terraform-vpc-example
  2. Navigate to the created directory by running the following command:

    $ cd terraform-vpc-example
  3. Initialize the Terraform configuration by running the following command:

    $ terraform init

    A message confirming the initialization appears when this process completes.

  4. To build your VPC Terraform plan based on the existing Terraform template, run the plan command. You must include your AWS region. You can choose to specify a cluster name. A rosa.tfplan file is added to the hypershift-tf directory after the terraform plan completes. For more detailed options, see the Terraform VPC repository’s README file.

    $ terraform plan -out rosa.tfplan -var region=<region>
  5. Apply this plan file to build your VPC by running the following command:

    $ terraform apply rosa.tfplan
    1. Optional: You can capture the values of the Terraform-provisioned private, public, and machinepool subnet IDs as environment variables to use when creating your Red Hat OpenShift Service on AWS cluster by running the following commands:

      $ export SUBNET_IDS=$(terraform output -raw cluster-subnets-string)
    2. Verify that the variables were correctly set with the following command:

      $ echo $SUBNET_IDS

      Example output

      subnet-0a6a57e0f784171aa,subnet-078e84e5b10ecf5b0
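
      If you are unsure which outputs the Terraform template exposes, you can list them all. Running terraform output with no arguments from the same directory prints every output value that the configuration defines:

      $ terraform output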

Creating an AWS Virtual Private Cloud manually

If you choose to manually create your AWS Virtual Private Cloud (VPC) instead of using Terraform, go to the VPC page in the AWS console.

Your VPC must meet the requirements shown in the following table.

Table 1.2. Requirements for your VPC

VPC name

You need to have the specific VPC name and ID when creating your cluster.

CIDR range

Your VPC CIDR range should match your machine CIDR.

Availability zone

You need one availability zone for a single-zone cluster, and you need three availability zones for a multi-zone cluster.

Public subnet

You must have one public subnet with a NAT gateway for public clusters. Private clusters do not need a public subnet.

DNS hostname and resolution

You must ensure that the DNS hostname and resolution are enabled.
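
For example, the following aws CLI sketch creates a VPC and enables DNS support and DNS hostnames. The CIDR block and <vpc_id> are placeholder values, and you still need to create the subnets, NAT gateway, and route tables separately:

$ aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Enable DNS resolution and DNS hostnames, which the cluster requires
$ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-support '{"Value":true}'
$ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-hostnames '{"Value":true}'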

1.3.1. Troubleshooting

If your cluster fails to install, troubleshoot these common issues:

  • Make sure your DHCP option set includes a domain name, and ensure that the domain name does not include any spaces or capital letters.
  • If your VPC uses a custom DNS resolver (the domain name servers field of your DHCP option set is not AmazonProvidedDNS), make sure it can properly resolve the private hosted zones configured in Route 53.
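
You can inspect the DHCP option set for your VPC by using the aws CLI. A sketch, where <vpc_id> and <dhcp_options_id> are placeholders:

# Find the DHCP options set that is attached to the VPC
$ aws ec2 describe-vpcs --vpc-ids <vpc_id> --query 'Vpcs[].DhcpOptionsId' --output text

# Review its domain name and domain name servers
$ aws ec2 describe-dhcp-options --dhcp-options-ids <dhcp_options_id>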

For more information about troubleshooting Red Hat OpenShift Service on AWS cluster installations, see Troubleshooting Red Hat OpenShift Service on AWS cluster installations.

1.3.1.1. Get support

If you need additional support, visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources.

Tagging your subnets

Before you can use your VPC to create a Red Hat OpenShift Service on AWS cluster, you must tag your VPC subnets. Automated service preflight checks verify that these resources are tagged correctly before you can use these resources for a cluster. Tag your resources as follows:

  • Public subnet: key kubernetes.io/role/elb, value 1 (or no value)
  • Private subnet: key kubernetes.io/role/internal-elb, value 1 (or no value)

Note

You must tag at least one private subnet and, if applicable, one public subnet.

Prerequisites

  • You have created a VPC.
  • You have installed the aws CLI.

Procedure

  1. Tag your resources in your terminal by running the following commands:

    1. For public subnets, run:

      $ aws ec2 create-tags --resources <public-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1
    2. For private subnets, run:

      $ aws ec2 create-tags --resources <private-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1

Verification

  • Verify that the tag is correctly applied by running the following command:

    $ aws ec2 describe-tags --filters "Name=resource-id,Values=<subnet_id>"

    Example output

    TAGS    Name                    <subnet-id>        subnet  <prefix>-subnet-public1-us-east-1a
    TAGS    kubernetes.io/role/elb  <subnet-id>        subnet  1

1.4. Creating an OpenID Connect configuration

You can create the OpenID Connect (OIDC) configuration before you create your Red Hat OpenShift Service on AWS cluster. This configuration is registered to be used with OpenShift Cluster Manager.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI, rosa, on your installation host.

Procedure

  1. To create your OIDC configuration alongside the AWS resources, run the following command:

    $ rosa create oidc-config --mode=auto --yes

    This command returns the following information.

    Example output

    ? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes
    I: Setting up managed OIDC configuration
    I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
    	rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b
    If you are going to create a Hosted Control Plane cluster please include '--hosted-cp'
    I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName'
    ? Create the OIDC provider? Yes
    I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'

    When creating your cluster, you must supply the OIDC configuration ID. With --mode auto, the CLI output provides this value. With --mode manual, you must determine the value from the aws CLI output.

  2. Optional: You can save the OIDC configuration ID as a variable to use later. Run the following command to save the variable:

    $ export OIDC_ID=<oidc_config_id> 1

    1 In the example output above, the OIDC configuration ID is 13cdr6b.
    • View the value of the variable by running the following command:

      $ echo $OIDC_ID

      Example output

      13cdr6b

Verification

  • You can list the OIDC configurations that are available for your clusters and associated with your organization. Run the following command:

    $ rosa list oidc-config

    Example output

    ID                                MANAGED  ISSUER URL                                                             SECRET ARN
    2330dbs0n8m3chkkr25gkkcd8pnj3lk2  true     https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2
    233hvnrjoqu14jltk6lhbhf2tj11f8un  false    https://oidc-r7u1.s3.us-east-1.amazonaws.com                           aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN

1.5. Creating Operator roles and policies

When you deploy a Red Hat OpenShift Service on AWS cluster, you must create the Operator IAM roles. The cluster Operators use the Operator roles and policies to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage and external access to a cluster.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.
  • You created the account-wide AWS roles.

Procedure

  1. To create your Operator roles, run the following command:

    $ rosa create operator-roles --hosted-cp --prefix=$OPERATOR_ROLES_PREFIX --oidc-config-id=$OIDC_ID --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role
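
    This command relies on environment variables that you have set beforehand. A sketch of setting them, where the prefix and ID values are placeholders for your own choices:

    $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
    $ export ACCOUNT_ROLES_PREFIX=<account_roles_prefix>
    $ export OPERATOR_ROLES_PREFIX=<operator_roles_prefix>
    $ export OIDC_ID=<oidc_config_id>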

    The following breakdown provides options for the Operator role creation.

    $ rosa create operator-roles --hosted-cp \
        --prefix=$OPERATOR_ROLES_PREFIX \ 1
        --oidc-config-id=$OIDC_ID \ 2
        --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/$ACCOUNT_ROLES_PREFIX-HCP-ROSA-Installer-Role 3

    1 You must supply a prefix when creating these Operator roles. Failing to do so produces an error. See the Additional resources of this section for information on the Operator prefix.
    2 This value is the OIDC configuration ID that you created for your Red Hat OpenShift Service on AWS cluster.
    3 This value is the installer role ARN that you created when you created the Red Hat OpenShift Service on AWS account roles.

    You must include the --hosted-cp parameter to create the correct roles for Red Hat OpenShift Service on AWS clusters. This command returns the following information.

    Example output

    ? Role creation mode: auto
    ? Operator roles prefix: <pre-filled_prefix> 1
    ? OIDC Configuration ID: 23soa2bgvpek9kmes9s7os0a39i13qm4 | https://dvbwgdztaeq9o.cloudfront.net/23soa2bgvpek9kmes9s7os0a39i13qm4 2
    ? Create hosted control plane operator roles: Yes
    W: More than one Installer role found
    ? Installer role ARN: arn:aws:iam::4540112244:role/<prefix>-HCP-ROSA-Installer-Role
    ? Permissions boundary ARN (optional):
    I: Reusable OIDC Configuration detected. Validating trusted relationships to operator roles:
    I: Creating roles using 'arn:aws:iam::4540112244:user/<userName>'
    I: Created role '<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials'
    I: Created role '<prefix>-openshift-cloud-network-config-controller-cloud-credenti' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti'
    I: Created role '<prefix>-kube-system-kube-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager'
    I: Created role '<prefix>-kube-system-capa-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager'
    I: Created role '<prefix>-kube-system-control-plane-operator' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator'
    I: Created role '<prefix>-kube-system-kms-provider' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider'
    I: Created role '<prefix>-openshift-image-registry-installer-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials'
    I: Created role '<prefix>-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials'
    I: To create a cluster with these roles, run the following command:
    	rosa create cluster --sts --oidc-config-id 23soa2bgvpek9kmes9s7os0a39i13qm4 --operator-roles-prefix <prefix> --hosted-cp

    1 This field is prepopulated with the prefix that you set in the initial creation command.
    2 This field requires you to select an OIDC configuration that you created for your Red Hat OpenShift Service on AWS cluster.

    The Operator roles are now created and ready to use for creating your Red Hat OpenShift Service on AWS cluster.

Verification

  • You can list the Operator roles associated with your Red Hat OpenShift Service on AWS account. Run the following command:

    $ rosa list operator-roles

    Example output

    I: Fetching operator roles
    ROLE PREFIX  AMOUNT IN BUNDLE
    <prefix>      8
    ? Would you like to detail a specific prefix Yes 1
    ? Operator Role Prefix: <prefix>
    ROLE NAME                                                         ROLE ARN                                                                                         VERSION  MANAGED
    <prefix>-kube-system-capa-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager                       4.13     No
    <prefix>-kube-system-control-plane-operator                        arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator                        4.13     No
    <prefix>-kube-system-kms-provider                                  arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider                                  4.13     No
    <prefix>-kube-system-kube-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager                       4.13     No
    <prefix>-openshift-cloud-network-config-controller-cloud-credenti  arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti  4.13     No
    <prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       4.13     No
    <prefix>-openshift-image-registry-installer-cloud-credentials      arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials      4.13     No
    <prefix>-openshift-ingress-operator-cloud-credentials              arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials              4.13     No

    1 After the command runs, it displays all the prefixes associated with your AWS account and notes how many roles are associated with each prefix. If you need to see these roles and their details, enter Yes at the prompt to list them with specifics.

1.6. Creating a Red Hat OpenShift Service on AWS cluster using the CLI

When using the ROSA CLI, rosa, to create a cluster, you can select the default options to create the cluster quickly.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have available AWS service quotas.
  • You have enabled Red Hat OpenShift Service on AWS in the AWS Management Console.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host. Run rosa version to see your currently installed version of the ROSA CLI. If a newer version is available, the CLI provides a link to download this upgrade.
  • You have logged in to your Red Hat account by using the ROSA CLI.
  • You have created an OIDC configuration.
  • You have verified that the AWS Elastic Load Balancing (ELB) service role exists in your AWS account.

Procedure

  1. Use one of the following commands to create your Red Hat OpenShift Service on AWS cluster:

    Note

    When creating a Red Hat OpenShift Service on AWS cluster, the default machine Classless Inter-Domain Routing (CIDR) is 10.0.0.0/16. If this does not correspond to the CIDR range for your VPC subnets, add --machine-cidr <address_block> to the following commands. To learn more about the default CIDR ranges for Red Hat OpenShift Service on AWS, see CIDR range definitions.
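
    For example, if your VPC subnets were created from a 10.10.0.0/16 block (a placeholder value), you might add the flag as in the following sketch:

      $ rosa create cluster --cluster-name=<cluster_name> --machine-cidr 10.10.0.0/16 --mode=auto --hosted-cp --operator-roles-prefix=$OPERATOR_ROLES_PREFIX --oidc-config-id=$OIDC_ID --subnet-ids=$SUBNET_IDS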

    • If you did not set environment variables, run the following command:

      $ rosa create cluster --cluster-name=<cluster_name> \ 1
          --mode=auto --hosted-cp [--private] \ 2
          --operator-roles-prefix <operator-role-prefix> \ 3
          --external-id <external-id> \ 4
          --oidc-config-id <id-of-oidc-configuration> \
          --subnet-ids=<public-subnet-id>,<private-subnet-id>

      1 Specify the name of your cluster. If your cluster name is longer than 15 characters, it will contain an autogenerated domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. To customize the subdomain, use the --domain-prefix flag. The domain prefix cannot be longer than 15 characters, must be unique, and cannot be changed after cluster creation.
      2 Optional: The --private argument is used to create private Red Hat OpenShift Service on AWS clusters. If you use this argument, ensure that you only use your private subnet ID for --subnet-ids.
      3 By default, the cluster-specific Operator role names are prefixed with the cluster name and a random 4-digit hash. You can optionally specify a custom prefix to replace <cluster_name>-<hash> in the role names. The prefix is applied when you create the cluster-specific Operator IAM roles. For information about the prefix, see About custom Operator IAM role prefixes.

      Note

      If you specified custom ARN paths when you created the associated account-wide roles, the custom path is automatically detected. The custom path is applied to the cluster-specific Operator roles when you create them in a later step.

      4 Optional: A unique identifier that might be required when you assume a role in another account.
    • If you set the environment variables, create a cluster with a single, initial machine pool, a privately available API, and a privately available Ingress by running the following command:

      $ rosa create cluster --private --cluster-name=<cluster_name> \
          --mode=auto --hosted-cp --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
          --oidc-config-id=$OIDC_ID --subnet-ids=$SUBNET_IDS
    • If you set the environment variables, create a cluster with a single, initial machine pool, a publicly available API, and a publicly available Ingress by running the following command:

      $ rosa create cluster --cluster-name=<cluster_name> --mode=auto \
          --hosted-cp --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
          --oidc-config-id=$OIDC_ID --subnet-ids=$SUBNET_IDS
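
      If you created your VPC with the ROSA CLI or manually, the SUBNET_IDS variable might not be set yet. A sketch of setting it, using the placeholder subnet IDs reported when your VPC was created:

      $ export SUBNET_IDS=<public-subnet-id>,<private-subnet-id>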
  2. Check the status of your cluster by running the following command:

    $ rosa describe cluster --cluster=<cluster_name>

    The following State field changes are listed in the output as the cluster installation progresses:

    • pending (Preparing account)
    • installing (DNS setup in progress)
    • installing
    • ready

      Note

      If the installation fails or the State field does not change to ready after more than 10 minutes, check the installation troubleshooting documentation for details. For more information, see Troubleshooting installations. For steps to contact Red Hat Support for assistance, see Getting support for Red Hat OpenShift Service on AWS.

  3. Track the progress of the cluster creation by watching the Red Hat OpenShift Service on AWS installation program logs. To check the logs, run the following command:

    $ rosa logs install --cluster=<cluster_name> --watch 1

    1 Optional: To watch for new log messages as the installation progresses, use the --watch argument.

1.7. Granting user access to a cluster

You can grant a user access to your Red Hat OpenShift Service on AWS cluster by adding them to your configured identity provider.

You can configure different types of identity providers for your Red Hat OpenShift Service on AWS cluster. The following example procedure adds a user to a GitHub organization that is configured for identity provision to the cluster.

Procedure

  1. Navigate to github.com and log in to your GitHub account.
  2. Invite users that require access to the Red Hat OpenShift Service on AWS cluster to your GitHub organization. Follow the steps in Inviting users to join your organization in the GitHub documentation.

1.8. Granting administrator privileges to a user

After you have added a user to your configured identity provider, you can grant the user cluster-admin or dedicated-admin privileges for your Red Hat OpenShift Service on AWS cluster.

Procedure

  • To configure cluster-admin privileges for an identity provider user:

    1. Grant the user cluster-admin privileges:

      $ rosa grant user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1

      1 Replace <idp_user_name> and <cluster_name> with the name of the identity provider user and your cluster name.

      Example output

      I: Granted role 'cluster-admins' to user '<idp_user_name>' on cluster '<cluster_name>'

    2. Verify if the user is listed as a member of the cluster-admins group:

      $ rosa list users --cluster=<cluster_name>

      Example output

      ID                 GROUPS
      <idp_user_name>    cluster-admins

  • To configure dedicated-admin privileges for an identity provider user:

    1. Grant the user dedicated-admin privileges:

      $ rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>

      Example output

      I: Granted role 'dedicated-admins' to user '<idp_user_name>' on cluster '<cluster_name>'

    2. Verify if the user is listed as a member of the dedicated-admins group:

      $ rosa list users --cluster=<cluster_name>

      Example output

      ID                 GROUPS
      <idp_user_name>    dedicated-admins

1.9. Accessing a cluster through the web console

After you have created a cluster administrator user or added a user to your configured identity provider, you can log in to your Red Hat OpenShift Service on AWS cluster through the web console.

Procedure

  1. Obtain the console URL for your cluster:

    $ rosa describe cluster -c <cluster_name> | grep Console 1

    1 Replace <cluster_name> with the name of your cluster.

    Example output

    Console URL:                https://console-openshift-console.apps.example-cluster.wxyz.p1.openshiftapps.com

  2. Go to the console URL in the output of the preceding step and log in.

    • If you created a cluster-admin user, log in by using the provided credentials.
    • If you configured an identity provider for your cluster, select the identity provider name in the Log in with…​ dialog and complete any authorization requests that are presented by your provider.

1.10. Deploying an application from the Developer Catalog

From the Red Hat OpenShift Service on AWS web console, you can deploy a test application from the Developer Catalog and expose it with a route.

Prerequisites

  • You logged in to the Red Hat Hybrid Cloud Console.
  • You created a Red Hat OpenShift Service on AWS cluster.
  • You configured an identity provider for your cluster.
  • You added your user account to the configured identity provider.

Procedure

  1. Go to the Cluster List page in OpenShift Cluster Manager.
  2. Click the options icon (⋮) next to the cluster you want to view.
  3. Click Open console.
  4. Your cluster console opens in a new browser window. Log in to your Red Hat account with your configured identity provider credentials.
  5. In the Administrator perspective, select Home → Projects → Create Project.
  6. Enter a name for your project and optionally add a Display Name and Description.
  7. Click Create to create the project.
  8. Switch to the Developer perspective and select +Add. Verify that the selected Project is the one that you just created.
  9. In the Developer Catalog dialog, select All services.
  10. In the Developer Catalog page, select Languages → JavaScript from the menu.
  11. Click Node.js, and then click Create to open the Create Source-to-Image application page.

    Note

    You might need to click Clear All Filters to display the Node.js option.

  12. In the Git section, click Try sample.
  13. Add a unique name in the Name field. The value will be used to name the associated resources.
  14. Confirm that Deployment and Create a route are selected.
  15. Click Create to deploy the application. It will take a few minutes for the pods to deploy.
  16. Optional: Check the status of the pods in the Topology pane by selecting your Node.js app and reviewing its sidebar. You must wait for the nodejs build to complete and for the nodejs pod to be in a Running state before continuing.
  17. When the deployment is complete, click the route URL for the application, which has a format similar to the following:

    https://nodejs-<project>.<cluster_name>.<hash>.<region>.openshiftapps.com/

    A new tab in your browser opens with a message similar to the following:

    Welcome to your Node.js application on OpenShift
  18. Optional: Delete the application and clean up the resources that you created:

    1. In the Administrator perspective, navigate to Home → Projects.
    2. Click the action menu for your project and select Delete Project.

1.11. Revoking administrator privileges and user access

You can revoke cluster-admin or dedicated-admin privileges from a user by using the ROSA CLI, rosa.

To revoke cluster access from a user, you must remove the user from your configured identity provider.

Follow the procedures in this section to revoke administrator privileges or cluster access from a user.

Revoking administrator privileges from a user

Follow the steps in this section to revoke cluster-admin or dedicated-admin privileges from a user.

Procedure

  • To revoke cluster-admin privileges from an identity provider user:

    1. Revoke the cluster-admin privilege:

      $ rosa revoke user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1

      1 Replace <idp_user_name> and <cluster_name> with the name of the identity provider user and your cluster name.

      Example output

      ? Are you sure you want to revoke role cluster-admins from user <idp_user_name> in cluster <cluster_name>? Yes
      I: Revoked role 'cluster-admins' from user '<idp_user_name>' on cluster '<cluster_name>'

    2. Verify that the user is not listed as a member of the cluster-admins group:

      $ rosa list users --cluster=<cluster_name>

      Example output

      W: There are no users configured for cluster '<cluster_name>'

  • To revoke dedicated-admin privileges from an identity provider user:

    1. Revoke the dedicated-admin privilege:

      $ rosa revoke user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>

      Example output

      ? Are you sure you want to revoke role dedicated-admins from user <idp_user_name> in cluster <cluster_name>? Yes
      I: Revoked role 'dedicated-admins' from user '<idp_user_name>' on cluster '<cluster_name>'

    2. Verify that the user is not listed as a member of the dedicated-admins group:

      $ rosa list users --cluster=<cluster_name>

      Example output

      W: There are no users configured for cluster '<cluster_name>'

Revoking user access to a cluster

You can revoke cluster access for an identity provider user by removing them from your configured identity provider.

You can configure different types of identity providers for your Red Hat OpenShift Service on AWS cluster. The following example procedure revokes cluster access for a member of a GitHub organization that is configured for identity provision to the cluster.

Procedure

  1. Navigate to github.com and log in to your GitHub account.
  2. Remove the user from your GitHub organization. Follow the steps in Removing a member from your organization in the GitHub documentation.

1.12. Deleting a cluster and the AWS IAM resources

You can delete a Red Hat OpenShift Service on AWS cluster by using the ROSA CLI, rosa. You can also use the ROSA CLI to delete the AWS Identity and Access Management (IAM) account-wide roles, the cluster-specific Operator roles, and the OpenID Connect (OIDC) provider. To delete the account-wide and Operator policies, you can use the AWS IAM Console or the AWS CLI.

Important

Account-wide IAM roles and policies might be used by other Red Hat OpenShift Service on AWS clusters in the same AWS account. You must only remove the resources if they are not required by other clusters.

Procedure

  1. Delete a cluster and watch the logs, replacing <cluster_name> with the name or ID of your cluster:

    $ rosa delete cluster --cluster=<cluster_name> --watch
    Important

    You must wait for the cluster deletion to complete before you remove the IAM roles, policies, and OIDC provider. The account-wide roles are required to delete the resources created by the installer. The cluster-specific Operator roles are required to clean up the resources created by the OpenShift Operators. The Operators use the OIDC provider to authenticate with AWS APIs.

  2. After the cluster is deleted, delete the OIDC provider that the cluster Operators use to authenticate:

    $ rosa delete oidc-provider -c <cluster_id> --mode auto 1

    1 Replace <cluster_id> with the ID of the cluster.

    Note

    You can use the -y option to automatically answer yes to the prompts.

  3. Delete the cluster-specific Operator IAM roles:

    $ rosa delete operator-roles -c <cluster_id> --mode auto 1

    1 Replace <cluster_id> with the ID of the cluster.

  4. Delete the account-wide roles:

    Important

    Account-wide IAM roles and policies might be used by other Red Hat OpenShift Service on AWS clusters in the same AWS account. You must only remove the resources if they are not required by other clusters.

    $ rosa delete account-roles --prefix <prefix> --mode auto 1

    1 You must include the --prefix argument. Replace <prefix> with the prefix of the account-wide roles to delete. If you did not specify a custom prefix when you created the account-wide roles, specify the default prefix: HCP-ROSA or ManagedOpenShift, depending on how the roles were created.

  5. Delete the account-wide and Operator IAM policies that you created for Red Hat OpenShift Service on AWS deployments:

    1. Log in to the AWS IAM Console.
    2. Navigate to Access management → Policies and select the checkbox for one of the account-wide policies.
    3. With the policy selected, click Actions → Delete to open the delete policy dialog.
    4. Enter the policy name to confirm the deletion and select Delete to delete the policy.
    5. Repeat this step to delete each of the account-wide and Operator policies for the cluster.
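
    Alternatively, you can delete the policies with the aws CLI. A sketch, where the policy ARN is a placeholder; a policy must be detached from all roles and have no non-default versions before it can be deleted:

    # List customer-managed policies in the account to find the ones created for Red Hat OpenShift Service on AWS
    $ aws iam list-policies --scope Local --query 'Policies[].[PolicyName,Arn]' --output text

    # Delete one policy by ARN; repeat for each account-wide and Operator policy
    $ aws iam delete-policy --policy-arn arn:aws:iam::<aws_account_id>:policy/<policy_name>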