Prepare your environment


Red Hat OpenShift Service on AWS 4

Planning, limits, and scalability for Red Hat OpenShift Service on AWS

Red Hat OpenShift Documentation Team

Abstract

This document provides planning considerations for Red Hat OpenShift Service on AWS (ROSA) cluster deployments, including information about cluster limits and scalability.

Chapter 1. Prerequisites checklist for deploying Red Hat OpenShift Service on AWS

This is a high-level checklist of the prerequisites needed to create a Red Hat OpenShift Service on AWS cluster.

The machine that you run the installation process from must have access to the following:

  • Amazon Web Services API and authentication service endpoints
  • Red Hat OpenShift API and authentication service endpoints (api.openshift.com and sso.redhat.com)
  • Internet connectivity to obtain installation artifacts during deployment
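As a quick preflight, you can probe the Red Hat endpoints listed above from the installation machine. The following is a minimal sketch, assuming curl is available; the function name and the 10-second timeout are illustrative, not part of the documented procedure:

```shell
# Hypothetical preflight helper: probe required HTTPS endpoints from the
# installation machine. The endpoint list comes from the checklist above;
# the timeout value is arbitrary.
check_endpoints() {
  for host in api.openshift.com sso.redhat.com; do
    if curl -sSf --connect-timeout 10 "https://${host}" >/dev/null 2>&1; then
      echo "${host}: reachable"
    else
      echo "${host}: NOT reachable"
    fi
  done
}

# Usage: check_endpoints
```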

1.1. Accounts and permissions

Ensure that you have the following accounts, credentials, and permissions.

1.1.1. AWS account

  • Create an AWS account if you do not already have one.
  • Gather the credentials required to log in to your AWS account.
  • Ensure that your AWS account has sufficient permissions to use the ROSA CLI. For more information, see Least privilege permissions for common ROSA CLI commands.
  • Enable Red Hat OpenShift Service on AWS for your AWS account on the AWS console.

  • Ensure you have not enabled restrictive tag policies. For more information, see Tag policies in the AWS documentation.

1.1.2. Red Hat account

  • Create a Red Hat account for the Red Hat Hybrid Cloud Console if you do not already have one.
  • Gather the credentials required to log in to your Red Hat account.

1.2. CLI requirements

You must download and install several command-line interface (CLI) tools to deploy a cluster.

1.2.1. AWS CLI (aws)

  1. Install the AWS Command Line Interface.
  2. Log in to your AWS account using the AWS CLI. For more information, see Sign in through the AWS CLI.
  3. Verify your account identity:

     $ aws sts get-caller-identity
  4. Check whether the service role for ELB (Elastic Load Balancing) exists:

    $ aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing"

    If the role does not exist, create it by running the following command:

    $ aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
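The check-and-create steps above can be folded into a single idempotent helper that creates the service-linked role only when it is missing. This is a sketch, not a documented procedure; the function name is illustrative:

```shell
# Sketch: create the ELB service-linked role only if it does not already
# exist. The function name is illustrative; the aws commands are the ones
# shown in the steps above.
ensure_elb_role() {
  if aws iam get-role \
       --role-name "AWSServiceRoleForElasticLoadBalancing" >/dev/null 2>&1; then
    echo "ELB service-linked role already exists"
  else
    aws iam create-service-linked-role \
      --aws-service-name "elasticloadbalancing.amazonaws.com"
  fi
}

# Usage: ensure_elb_role
```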

1.2.2. ROSA CLI (rosa)

  1. Install the ROSA CLI from the web console.
  2. Log in to your Red Hat account by running rosa login and following the instructions in the command output:

    $ rosa login
    To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa
    ? Copy the token and paste it here:

    Alternatively, you can copy the full rosa login --token=abc… command and paste it in the terminal:

    $ rosa login --token=<abc..>
  3. Confirm you are logged in using the correct account and credentials:

    $ rosa whoami

1.2.3. OpenShift CLI (oc)

The OpenShift CLI (oc) is not required to deploy a Red Hat OpenShift Service on AWS cluster, but is a useful tool for interacting with your cluster after it is deployed.

  1. Download and install oc from the OpenShift Cluster Manager Command-line interface (CLI) tools page, or follow the instructions in Getting started with the OpenShift CLI.
  2. Verify that the OpenShift CLI has been installed correctly by running the following command:

    $ rosa verify openshift-client

1.3. AWS infrastructure prerequisites

  • Optionally, ensure that your AWS account has sufficient quota available to deploy a cluster.

    $ rosa verify quota

    This command checks only the total quota allocated to your account; it does not reflect the amount of quota already consumed. Running this command is optional because your quota is verified during cluster deployment. However, Red Hat recommends confirming your quota ahead of time so that deployment is not interrupted by quota availability issues.

  • For more information about resources provisioned during Red Hat OpenShift Service on AWS cluster deployment, see Provisioned AWS Infrastructure.
  • For more information about the required AWS service quotas, see Required AWS service quotas.

1.4. Service Control Policy (SCP) prerequisites

Red Hat OpenShift Service on AWS clusters are hosted in an AWS account within an AWS organizational unit. A service control policy (SCP) is created and applied to the AWS organizational unit that manages what services the AWS sub-accounts are permitted to access.

  • Ensure that your organization’s SCPs are not more restrictive than the roles and policies required by the cluster. For more information, see the Minimum set of effective permissions for SCPs.
  • When you create a Red Hat OpenShift Service on AWS cluster, an associated AWS OpenID Connect (OIDC) identity provider is created.

1.5. Networking prerequisites

The following prerequisites must be met from a networking standpoint.

1.5.1. Minimum bandwidth

During cluster deployment, Red Hat OpenShift Service on AWS requires a minimum bandwidth of 120 Mbps between cluster infrastructure and the public internet or private network locations that provide deployment artifacts and resources. When network connectivity is slower than 120 Mbps (for example, when connecting through a proxy) the cluster installation process times out and deployment fails.

After cluster deployment, network requirements are determined by your workload. However, a minimum bandwidth of 120 Mbps helps to ensure timely cluster and operator upgrades.

1.5.2. Firewall

1.5.3. Create VPC before cluster deployment

Red Hat OpenShift Service on AWS clusters must be deployed into an existing AWS Virtual Private Cloud (VPC).

Note

Installing a new Red Hat OpenShift Service on AWS cluster into a VPC that was automatically created by the installer for a different cluster is not supported.

Your VPC must meet the requirements shown in the following table.

Table 1.1. Requirements for your VPC

  • VPC name: You need the specific VPC name and ID when creating your cluster.
  • CIDR range: Your VPC CIDR range should match your machine CIDR.
  • Availability zones: You need one availability zone for a single-zone cluster and three availability zones for a multi-zone cluster.
  • Public subnet: You must have one public subnet with a NAT gateway for public clusters. Private clusters do not need a public subnet.
  • DNS hostname and resolution: You must ensure that DNS hostnames and DNS resolution are enabled.

1.5.4. Additional custom security groups

During cluster creation, you can add additional custom security groups to a cluster that has an existing non-managed VPC. To do so, complete these prerequisites before you create the cluster:

  • Create the custom security groups in AWS before you create the cluster.
  • Associate the custom security groups with the VPC that you are using to create the cluster. Do not associate the custom security groups with any other VPC.
  • You may need to request additional AWS quota for Security groups per network interface.

For more details, see the detailed requirements for Security groups.

1.5.5. Custom DNS and domains

You can configure a custom domain name server and custom domain name for your cluster. To do so, complete the following prerequisites before you create the cluster:

  • By default, Red Hat OpenShift Service on AWS clusters require you to set the domain name servers option to AmazonProvidedDNS to ensure successful cluster creation and operation.
  • To use a custom DNS server and domain name for your cluster, the Red Hat OpenShift Service on AWS installer must be able to use VPC DNS with default DHCP options so that it can resolve internal IPs and services. This means that you must create a custom DHCP option set to forward DNS lookups to your DNS server, and associate this option set with your VPC before you create the cluster.
  • Confirm that your VPC is using VPC Resolver by running the following command:

    $ aws ec2 describe-dhcp-options
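The custom DHCP option set described above can be created and associated with the AWS CLI. The following is a hedged sketch: the DNS server address 10.0.0.2 and the domain name example.internal are placeholder values, and the helper names are illustrative:

```shell
# Sketch: create a DHCP option set that forwards DNS lookups to a custom DNS
# server, then associate it with your VPC. All values here are placeholders.
create_custom_dhcp_options() {
  aws ec2 create-dhcp-options \
    --dhcp-configurations \
      "Key=domain-name-servers,Values=10.0.0.2" \
      "Key=domain-name,Values=example.internal" \
    --query "DhcpOptions.DhcpOptionsId" --output text
}

associate_dhcp_options() {
  # $1 = DHCP options ID, $2 = VPC ID
  aws ec2 associate-dhcp-options --dhcp-options-id "$1" --vpc-id "$2"
}

# Usage:
#   dopt_id=$(create_custom_dhcp_options)
#   associate_dhcp_options "$dopt_id" "<vpc_id>"
```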

Chapter 2. Detailed requirements for deploying Red Hat OpenShift Service on AWS

Red Hat OpenShift Service on AWS provides a model that allows Red Hat to deploy clusters into a customer's existing Amazon Web Services (AWS) account.

Ensure that the following prerequisites are met before installing your cluster.

The following prerequisites must be completed before you deploy a Red Hat OpenShift Service on AWS cluster.

2.2. AWS account

  • Your AWS account must allow sufficient quota to deploy your cluster.
  • If your organization applies and enforces SCPs, these policies must not be more restrictive than the roles and policies required by the cluster.
  • You can deploy native AWS services within the same AWS account.
  • Your account must have a service-linked role to allow the installation program to configure Elastic Load Balancing (ELB). See "Creating the Elastic Load Balancing (ELB) service-linked role" for more information.

2.2.1. Support requirements

  • Red Hat recommends that the customer have at least Business Support from AWS.
  • Red Hat may have permission from the customer to request AWS support on their behalf.
  • Red Hat may have permission from the customer to request AWS resource limit increases on the customer’s account.
  • Red Hat manages the restrictions, limitations, expectations, and defaults for all Red Hat OpenShift Service on AWS clusters in the same manner, unless otherwise specified in this requirements section.

2.2.2. Security requirements

  • Red Hat must have ingress access to EC2 hosts and the API server from allow-listed IP addresses.
  • Red Hat must have egress allowed to the domains documented in the "Firewall prerequisites" section. Clusters with egress zero are exempt from this requirement.

2.3. Requirements for using OpenShift Cluster Manager

The following configuration details are required only if you use OpenShift Cluster Manager to manage your clusters. If you use the CLI tools exclusively, you can disregard these requirements.

2.3.1. AWS account association

When you provision Red Hat OpenShift Service on AWS using OpenShift Cluster Manager (console.redhat.com), you must associate the ocm-role and user-role IAM roles with your AWS account using your Amazon Resource Name (ARN). This association process is also known as account linking.

The ocm-role ARN is stored as a label in your Red Hat organization while the user-role ARN is stored as a label inside your Red Hat user account. Red Hat uses these ARN labels to confirm that the user is a valid account holder and that the correct permissions are available to perform provisioning tasks in the AWS account.

2.3.2. Associating your AWS account with IAM roles

You can associate or link your AWS account with existing IAM roles by using the ROSA CLI, rosa.

Prerequisites

  • You have an AWS account.
  • You have the permissions required to install AWS account-wide roles. See the "Additional resources" of this section for more information.
  • You have installed and configured the latest AWS (aws) and ROSA (rosa) CLIs on your installation host.
  • You have created the ocm-role and user-role IAM roles, but have not yet linked them to your AWS account. You can check whether your IAM roles are already linked by running the following commands:

    $ rosa list ocm-role
    $ rosa list user-role

    If Yes is displayed in the Linked column for both roles, you have already linked the roles to an AWS account.

Procedure

  1. In the ROSA CLI, link your ocm-role resource to your Red Hat organization by using your Amazon Resource Name (ARN):

    Note

    You must have Red Hat Organization Administrator privileges to run the rosa link command. After you link the ocm-role resource with your AWS account, it takes effect and is visible to all users in the organization.

    $ rosa link ocm-role --role-arn <arn>
    Copy to Clipboard Toggle word wrap

    Example output

    I: Linking OCM role
    ? Link the '<AWS ACCOUNT ID>' role with organization '<ORG ID>'? Yes
    I: Successfully linked role-arn '<AWS ACCOUNT ID>' with organization account '<ORG ID>'

  2. In the ROSA CLI, link your user-role resource to your Red Hat user account by using your Amazon Resource Name (ARN):

    $ rosa link user-role --role-arn <arn>

    Example output

    I: Linking User role
    ? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' role with organization '<AWS ID>'? Yes
    I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' with organization account '<AWS ID>'

Additional resources

2.3.3. Associating multiple AWS accounts with your Red Hat organization

You can associate multiple AWS accounts with your Red Hat organization. Associating multiple accounts lets you create Red Hat OpenShift Service on AWS clusters on any of the associated AWS accounts from your Red Hat organization.

With this capability, you can create clusters on different AWS profiles according to characteristics that make sense for your business, for example, by using one AWS profile for each region to create region-bound environments.

Prerequisites

  • You have an AWS account.
  • You are using OpenShift Cluster Manager to create clusters.
  • You have the permissions required to install AWS account-wide roles.
  • You have installed and configured the latest AWS (aws) and ROSA (rosa) CLIs on your installation host.
  • You have created the ocm-role and user-role IAM roles for Red Hat OpenShift Service on AWS.

Procedure

To associate an additional AWS account, first create a profile in your local AWS configuration. Then, associate the account with your Red Hat organization by creating the ocm-role, user-role, and account roles in the additional AWS account.

To create the roles in the additional AWS account, specify the --profile <aws_profile> parameter when running the rosa create commands, replacing <aws_profile> with the profile name of the additional account:

  • To specify an AWS account profile when creating an OpenShift Cluster Manager role:

    $ rosa create --profile <aws_profile> ocm-role
  • To specify an AWS account profile when creating a user role:

    $ rosa create --profile <aws_profile> user-role
  • To specify an AWS account profile when creating the account roles:

    $ rosa create --profile <aws_profile> account-roles
Note

If you do not specify a profile, the default AWS profile and its associated AWS region are used.
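The note above refers to profiles defined in your local AWS configuration. A hypothetical ~/.aws/config defining an additional profile might look like this; the profile name and regions are examples only:

```ini
# ~/.aws/config — example only; the profile name and regions are illustrative
[default]
region = us-east-1

[profile org-east]
region = us-east-2
```

You could then run, for example, rosa create --profile org-east ocm-role to create the role using that profile.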

2.4. AWS opt-in regions

An AWS opt-in region is a region that is not enabled in your AWS account by default. If you want to deploy a Red Hat OpenShift Service on AWS cluster that uses the AWS Security Token Service (STS) in an opt-in region, you must meet the following requirements:

  • The region must be enabled in your AWS account. For more information about enabling opt-in regions, see Managing AWS Regions in the AWS documentation.
  • The security token version in your AWS account must be set to version 2. You cannot use version 1 security tokens for opt-in regions.

    Important

    Updating to security token version 2 can impact the systems that store the tokens, due to the increased token length. For more information, see the AWS documentation on setting STS preferences.

2.4.1. Setting the AWS security token version

If you want to create a Red Hat OpenShift Service on AWS cluster with the AWS Security Token Service (STS) in an AWS opt-in region, you must set the security token version to version 2 in your AWS account.

Prerequisites

  • You have installed and configured the latest AWS CLI on your installation host.

Procedure

  1. List the ID of the AWS account that is defined in your AWS CLI configuration:

    $ aws sts get-caller-identity --query Account --output json

    Ensure that the output matches the ID of the relevant AWS account.

  2. List the security token version that is set in your AWS account:

    $ aws iam get-account-summary --query SummaryMap.GlobalEndpointTokenVersion --output json

    Example output

    1

  3. To update the security token version to version 2 for all regions in your AWS account, run the following command:

    $ aws iam set-security-token-service-preferences --global-endpoint-token-version v2Token
    Important

    Updating to security token version 2 can impact the systems that store the tokens, due to the increased token length. For more information, see the AWS documentation on setting STS preferences.
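Steps 2 and 3 can be combined so that the preference is updated only when the account still reports version 1. This is a minimal sketch with an illustrative function name; the aws commands are the ones shown in the procedure above:

```shell
# Sketch: set the STS global endpoint token version to v2 only when the
# account still reports version 1. The function name is illustrative.
ensure_sts_token_v2() {
  version=$(aws iam get-account-summary \
              --query SummaryMap.GlobalEndpointTokenVersion --output text)
  if [ "$version" = "1" ]; then
    aws iam set-security-token-service-preferences \
      --global-endpoint-token-version v2Token
    echo "updated to version 2"
  else
    echo "already version ${version}"
  fi
}

# Usage: ensure_sts_token_v2
```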

2.5. Red Hat managed IAM references for AWS

Red Hat is not responsible for creating and managing Amazon Web Services (AWS) IAM policies, IAM users, or IAM roles. For information on creating these roles and policies, see the following sections on IAM roles.

2.6. Provisioned AWS Infrastructure

This is an overview of the provisioned Amazon Web Services (AWS) components on a deployed Red Hat OpenShift Service on AWS cluster.

2.6.1. EC2 instances

AWS EC2 instances are required to deploy Red Hat OpenShift Service on AWS.

At a minimum, two m5.xlarge EC2 instances are deployed for use as worker nodes.

The instance type shown for worker nodes is the default value, but you can customize the instance type for worker nodes according to the needs of your workload.

2.6.2. Amazon Elastic Block Store storage

Amazon Elastic Block Store (Amazon EBS) block storage is used for both local node storage and persistent volume storage. By default, the following storage is provisioned for each EC2 instance:

  • Node volumes

    • Type: AWS EBS GP3
    • Default size: 300 GiB (adjustable at creation time)
    • Minimum size: 75 GiB
  • Workload persistent volumes

    • Default storage class: gp3-csi
    • Provisioner: ebs.csi.aws.com
    • Dynamic persistent volume provisioning
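After deployment, the default storage class described above is consumed through a standard persistent volume claim. A hypothetical example, in which the claim name and requested size are placeholder values:

```yaml
# Example PVC using the default gp3-csi storage class; the claim name and
# requested size are placeholder values.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3-csi
  resources:
    requests:
      storage: 10Gi
```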

2.6.3. Elastic Load Balancing

By default, one Network Load Balancer is created for use by the default ingress controller. You can create additional load balancers of the following types according to the needs of your workload:

  • Classic Load Balancer
  • Network Load Balancer
  • Application Load Balancer

For more information, see the ELB documentation for AWS.

2.6.4. S3 storage

The image registry is backed by AWS S3 storage. Resources are pruned regularly to optimize S3 usage and cluster performance.

Note

Two buckets are required, with a typical size of 2 TB each.

2.6.5. VPC

Configure your VPC according to the following requirements:

  • Subnets: Every cluster requires a minimum of one private subnet for every availability zone. For example, 1 private subnet is required for a single-zone cluster, and 3 private subnets are required for a cluster with 3 availability zones.

    If your cluster needs direct access to a network that is external to the cluster, including the public internet, you require at least one public subnet.

    Red Hat strongly recommends using unique subnets for each cluster. Sharing subnets between multiple clusters is not recommended.

    Note

    A public subnet connects directly to the internet through an internet gateway.

    A private subnet connects to the internet through a network address translation (NAT) gateway.

  • Route tables: One route table per private subnet, and one additional table per cluster.
  • Internet gateways: One Internet Gateway per cluster.
  • NAT gateways: One NAT Gateway per public subnet.
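The per-cluster counts above can be sketched as a small tally. This example assumes a public cluster with three availability zones; the numbers are derived from the rules above, not from an official sizing tool:

```shell
# Sketch: minimum VPC resources for a public three-zone cluster, derived
# from the rules above. The AZ count is an assumption for this example.
azs=3
private_subnets=$azs                       # one private subnet per AZ
public_subnets=1                           # at least one for a public cluster
route_tables=$(( private_subnets + 1 ))    # one per private subnet, plus one per cluster
internet_gateways=1                        # one per cluster
nat_gateways=$public_subnets               # one per public subnet
echo "subnets=$(( private_subnets + public_subnets )) route_tables=${route_tables} igw=${internet_gateways} nat=${nat_gateways}"
```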

2.6.6. Security groups

AWS security groups provide security at the protocol and port access level; they are associated with EC2 instances and Elastic Load Balancing (ELB) load balancers. Each security group contains a set of rules that filter traffic coming in and out of one or more EC2 instances.

Ensure that the ports required for cluster installation and operation are open on your network and configured to allow access between hosts. The requirements for the default security groups are listed in Required ports for default security groups.

Table 2.1. Required ports for default security groups

  • WorkerSecurityGroup (AWS::EC2::SecurityGroup)

    • IP protocol: icmp, port range: 0
    • IP protocol: tcp, port range: 22

2.6.6.1. Additional custom security groups

You can add additional custom security groups during cluster creation. Custom security groups are subject to the following limitations:

  • You must create the custom security groups in AWS before you create the cluster. For more information, see Amazon EC2 security groups for Linux instances.
  • You must associate the custom security groups with the VPC that the cluster will be installed into. Your custom security groups cannot be associated with another VPC.
  • You might need to request additional quota for your VPC if you are adding additional custom security groups. For information on AWS quota requirements for Red Hat OpenShift Service on AWS see Required AWS service quotas in Prepare your environment. For information on requesting an AWS quota increase, see Requesting a quota increase.

2.7. Networking prerequisites

2.7.1. Minimum bandwidth

During cluster deployment, Red Hat OpenShift Service on AWS requires a minimum bandwidth of 120 Mbps between cluster infrastructure and the public internet or private network locations that provide deployment artifacts and resources. When network connectivity is slower than 120 Mbps (for example, when connecting through a proxy) the cluster installation process times out and deployment fails.

After cluster deployment, network requirements are determined by your workload. However, a minimum bandwidth of 120 Mbps helps to ensure timely cluster and operator upgrades.

  • If you are using a firewall to control egress traffic from Red Hat OpenShift Service on AWS, your Virtual Private Cloud (VPC) must be able to complete requests from the cluster to the Amazon S3 service, for example, through an Amazon S3 gateway endpoint.
  • You must also configure your firewall to grant access to the following domain and port combinations.
  • quay.io (port 443): Provides core container images.
  • cdn01.quay.io (port 443): Provides core container images.
  • cdn02.quay.io (port 443): Provides core container images.
  • cdn03.quay.io (port 443): Provides core container images.
  • cdn04.quay.io (port 443): Provides core container images.
  • cdn05.quay.io (port 443): Provides core container images.
  • cdn06.quay.io (port 443): Provides core container images.
  • quayio-production-s3.s3.amazonaws.com (port 443): Provides core container images.
  • registry.redhat.io (port 443): Provides core container images.
  • registry.access.redhat.com (port 443): Required. Hosts all the container images that are stored on the Red Hat Ecosystem Catalog. Additionally, the registry provides access to the odo CLI tool that helps developers build on OpenShift and Kubernetes.
  • access.redhat.com (port 443): Required. Hosts a signature store that a container client requires for verifying images when pulling them from registry.access.redhat.com.
  • api.openshift.com (port 443): Required. Used to check for available updates to the cluster.
  • mirror.openshift.com (port 443): Required. Used to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator (CVO) needs only a single functioning source.

2.7.3.2. Domains for telemetry
  • infogw.api.openshift.com (port 443): Required for telemetry.
  • console.redhat.com (port 443): Required. Allows interactions between the cluster and OpenShift Cluster Manager to enable functionality, such as scheduling upgrades.
  • sso.redhat.com (port 443): Required. The https://console.redhat.com/openshift site uses authentication from sso.redhat.com to download the pull secret and use Red Hat SaaS solutions to facilitate monitoring of your subscriptions, cluster inventory, chargeback reporting, etc.

Managed clusters require enabling telemetry to allow Red Hat to react more quickly to problems, better support the customers, and better understand how product upgrades impact clusters. For more information about how remote health monitoring data is used by Red Hat, see About remote health monitoring.

  • sts.<aws_region>.amazonaws.com (port 443): Required. Used to access the AWS Secure Token Service (STS) regional endpoint. Ensure that you replace <aws_region> with the region that your cluster is deployed in. This can also be accomplished by configuring a private interface endpoint in your AWS Virtual Private Cloud (VPC) to the regional AWS STS endpoint.

2.7.3.4. Domains for your workload

Your workload may require access to other sites that provide resources for programming languages or frameworks.

  • Allow access to sites that provide resources required by your builds.
  • Allow access to outbound URLs required for your workload, for example, OpenShift Outbound URLs to Allow.
  • registry.connect.redhat.com (port 443): Optional. Required for all third-party images and certified operators.
  • rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com (port 443): Optional. Provides access to container images hosted on registry.connect.redhat.com.
  • oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com (port 443): Optional. Required for the Sonatype Nexus and F5 Big IP operators.

If you use a bastion host to connect to a private cluster with egress zero, you must add the following rules to your firewall so that it can connect and authenticate to the cluster.

  • sso.redhat.com (port 443, from the ROSA CLI running on the bastion host): The OpenShift console uses authentication from sso.redhat.com to download the pull secret and use Red Hat SaaS solutions to facilitate monitoring of your subscriptions, cluster inventory, chargeback reporting, etc.
  • api.openshift.com (port 443, from the ROSA CLI running on the bastion host): Required for registering a Red Hat OpenShift Service on AWS cluster into the Red Hat Hybrid Cloud Console.
  • iam.amazonaws.com (port 443, from the ROSA CLI running on the bastion host): Used for creating IAM roles and attaching permissions.
  • servicequotas.<your_region>.amazonaws.com (port 443, from the ROSA CLI running on the bastion host): Checks AWS quotas to ensure that they satisfy the installation requirements. Alternatively, you can create a VPC endpoint for the Service Quotas service to avoid allow-listing this URL in your firewall.
  • sts.<your_region>.amazonaws.com (port 443, from the ROSA CLI running on the bastion host): Used to get a short-lived token to access AWS services. Alternatively, you can create a VPC endpoint for the STS service to avoid allow-listing this URL in your firewall.
  • ec2.<your_region>.amazonaws.com (port 443, from the ROSA CLI running on the bastion host): Used to retrieve EC2 instance-related information, such as subnets. Alternatively, you can create a VPC endpoint for the EC2 service to avoid allow-listing this URL in your firewall.

  • sts.<your_region>.amazonaws.com (port 443, from the Red Hat OpenShift Service on AWS cluster): Used to access the AWS Secure Token Service (STS) regional endpoint to retrieve a short-lived token to access AWS services. Alternatively, you can create a VPC endpoint for the STS service to avoid allow-listing this URL in your firewall.
  • console.redhat.com (port 443, from any browser accessing the Red Hat Hybrid Cloud Console): Used to manage a Red Hat OpenShift Service on AWS cluster from the Hybrid Cloud Console.
  • sso.redhat.com (port 443, from any browser accessing the Red Hat Hybrid Cloud Console): The Red Hat Hybrid Cloud Console site uses authentication from sso.redhat.com to download the pull secret and use Red Hat SaaS solutions to facilitate monitoring of your subscriptions, cluster inventory, chargeback reporting, etc.


Chapter 3. Required IAM roles and resources

You must create several role resources on your AWS account in order to create and manage a Red Hat OpenShift Service on AWS cluster.

3.1. Overview of required roles

To create and manage your Red Hat OpenShift Service on AWS cluster, you must create several account-wide and cluster-wide roles. If you intend to use OpenShift Cluster Manager to create or manage your cluster, you need some additional roles.

To create and manage clusters

Several account-wide roles are required to create and manage Red Hat OpenShift Service on AWS clusters. These roles only need to be created once per AWS account, and do not need to be created fresh for each cluster. One or more AWS managed policies are attached to each role to grant that role the required capabilities. You can specify your own prefix, or use the default prefix (ManagedOpenShift).

Note

Role names are limited to a maximum length of 64 characters in AWS IAM. When the user-specified prefix for a cluster is longer than 20 characters, the role name is truncated to observe this 64-character maximum in AWS IAM.
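The truncation rule in the note above can be illustrated with plain string arithmetic. The prefix here is a made-up example that is longer than 20 characters:

```shell
# Illustration of the 64-character IAM role-name limit. The prefix is a
# made-up example; the suffix is the installer role name from Table 3.1.
prefix="a-custom-prefix-that-is-longer-than-twenty-characters"
role_name="${prefix}-HCP-ROSA-Installer-Role"
echo "full length: ${#role_name}"          # exceeds the IAM limit
truncated=$(printf '%s' "$role_name" | cut -c1-64)
echo "truncated length: ${#truncated}"     # capped at 64 characters
```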

For Red Hat OpenShift Service on AWS clusters, you must create the following account-wide roles and attach the indicated AWS managed policies:

Table 3.1. Required account roles and AWS policies for Red Hat OpenShift Service on AWS

  • <prefix>-HCP-ROSA-Worker-Role: ROSAWorkerInstancePolicy and AmazonEC2ContainerRegistryReadOnly
  • <prefix>-HCP-ROSA-Support-Role: ROSASRESupportPolicy
  • <prefix>-HCP-ROSA-Installer-Role: ROSAInstallerPolicy

Note

Role creation does not request your AWS access or secret keys. AWS Security Token Service (STS) is used as the basis of this workflow. AWS STS uses temporary, limited-privilege credentials to provide authentication.

To use Operator-managed cluster capabilities

Some cluster capabilities, including several capabilities provided by default, are managed using Operators. Cluster-specific Operator roles (operator-roles in the ROSA CLI) are required to use these capabilities. These roles are used to obtain the temporary permissions required to carry out cluster operations such as managing back-end storage, ingress, and registry. Obtaining these permissions requires the configuration of an OpenID Connect (OIDC) provider, which connects to AWS Security Token Service (STS) to authenticate Operator access to AWS resources.

For Red Hat OpenShift Service on AWS clusters, you must create the following Operator roles and attach the indicated AWS Managed policies:

Table 3.2. Required Operator roles and AWS managed policies for ROSA with HCP

  • openshift-cloud-network-config-controller-c: ROSACloudNetworkConfigOperatorPolicy
  • openshift-image-registry-installer-cloud-credentials: ROSAImageRegistryOperatorPolicy
  • kube-system-kube-controller-manager: ROSAKubeControllerPolicy
  • kube-system-capa-controller-manager: ROSANodePoolManagementPolicy
  • kube-system-control-plane-operator: ROSAControlPlaneOperatorPolicy
  • kube-system-kms-provider: ROSAKMSProviderPolicy
  • openshift-ingress-operator-cloud-credentials: ROSAIngressOperatorPolicy
  • openshift-cluster-csi-drivers-ebs-cloud-credentials: ROSAAmazonEBSCSIDriverOperatorPolicy

When you create Operator roles using the rosa create operator-role command, the roles created are named using the pattern <cluster_name>-<hash>-<role_name>, for example, test-abc1-kube-system-control-plane-operator. When your cluster name is longer than 15 characters, the role name is truncated.
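The naming pattern described above is simple string composition, which can be sketched as follows; the hash value here is made up for illustration:

```shell
# Sketch of the Operator role naming pattern <cluster_name>-<hash>-<role_name>.
# The hash is a made-up example value.
cluster_name="test"
hash="abc1"
role_name="kube-system-control-plane-operator"
operator_role="${cluster_name}-${hash}-${role_name}"
echo "${operator_role}"   # prints test-abc1-kube-system-control-plane-operator
```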

To use OpenShift Cluster Manager

The web user interface, OpenShift Cluster Manager, requires you to create additional roles in your AWS account to create a trust relationship between that AWS account and the OpenShift Cluster Manager.

This trust relationship is achieved through the creation and association of the ocm-role AWS IAM role. This role has a trust policy with the AWS installer that links your Red Hat account to your AWS account. In addition, you also need a user-role AWS IAM role for each web UI user, which serves to identify these users. This user-role AWS IAM role has no permissions.

The following AWS IAM roles are required to use OpenShift Cluster Manager:

  • ocm-role
  • user-role

3.2. Roles required to create and manage clusters

Several account-wide roles (account-roles in the ROSA CLI) are required to create or manage Red Hat OpenShift Service on AWS clusters. These roles must be created using the ROSA CLI (rosa), regardless of whether you typically use OpenShift Cluster Manager or the ROSA CLI to create and manage your clusters. These roles only need to be created once, and do not need to be created for every cluster you install.

Before you create your Red Hat OpenShift Service on AWS cluster, you must create the required account-wide roles and policies.

Note

Specific AWS-managed policies for Red Hat OpenShift Service on AWS must be attached to each role. Customer-managed policies must not be used with these required account roles. For more information regarding AWS-managed policies for Red Hat OpenShift Service on AWS clusters, see AWS managed policies for ROSA.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have available AWS service quotas.
  • You have enabled Red Hat OpenShift Service on AWS in the AWS console.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.
  • You have logged in to your Red Hat account by using the ROSA CLI.

Procedure

  1. If they do not exist in your AWS account, create the required account-wide STS roles and attach the policies by running the following command:

    $ rosa create account-roles --hosted-cp
  2. Optional: Set your prefix as an environmental variable by running the following command:

    $ export ACCOUNT_ROLES_PREFIX=<account_role_prefix>
    • View the value of the variable by running the following command:

      $ echo $ACCOUNT_ROLES_PREFIX

      Example output

      ManagedOpenShift

For more information regarding AWS managed IAM policies for Red Hat OpenShift Service on AWS, see AWS managed IAM policies for ROSA.
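The prefix you exported above is later combined with your AWS account ID to reference the installer account role. A minimal sketch of that composition, with a placeholder account ID and the default `ManagedOpenShift` prefix as assumed example values:

```shell
# Illustrative values; substitute your own AWS account ID and chosen prefix.
AWS_ACCOUNT_ID="123456789012"
ACCOUNT_ROLES_PREFIX="ManagedOpenShift"

# The installer role ARN follows the <prefix>-HCP-ROSA-Installer-Role
# pattern that 'rosa create operator-roles' expects later.
INSTALLER_ROLE_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role"
echo "$INSTALLER_ROLE_ARN"
```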

3.3. Resources required for OIDC authentication

Red Hat OpenShift Service on AWS clusters use OIDC and the AWS Security Token Service (STS) to authenticate Operator access to AWS resources they require to perform their functions. Each production cluster requires its own OIDC configuration.

3.3.1. Creating an OpenID Connect configuration

When creating a Red Hat OpenShift Service on AWS cluster, you can create the OpenID Connect (OIDC) configuration before creating your cluster. This configuration is registered to be used with OpenShift Cluster Manager.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI, rosa, on your installation host.

Procedure

  1. To create your OIDC configuration alongside the AWS resources, run the following command:

    $ rosa create oidc-config --mode=auto --yes

    This command returns the following information.

    Example output

    ? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes
    I: Setting up managed OIDC configuration
    I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
    	rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b
    If you are going to create a Hosted Control Plane cluster please include '--hosted-cp'
    I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName'
    ? Create the OIDC provider? Yes
    I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'

    When creating your cluster, you must supply the OIDC config ID. The CLI output provides this value when you use --mode auto; with --mode manual, you must determine the value from the aws CLI output.

  2. Optional: you can save the OIDC configuration ID as a variable to use later. Run the following command to save the variable:

    $ export OIDC_ID=<oidc_config_id>

    In the example output above, the OIDC configuration ID is 13cdr6b.
    • View the value of the variable by running the following command:

      $ echo $OIDC_ID

      Example output

      13cdr6b

Verification

  • You can list the possible OIDC configurations available for your clusters that are associated with your user organization. Run the following command:

    $ rosa list oidc-config

    Example output

    ID                                MANAGED  ISSUER URL                                                             SECRET ARN
    2330dbs0n8m3chkkr25gkkcd8pnj3lk2  true     https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2
    233hvnrjoqu14jltk6lhbhf2tj11f8un  false    https://oidc-r7u1.s3.us-east-1.amazonaws.com                           aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN
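If you need the ID of a managed OIDC configuration in a script, you can parse the tabular output shown above. This sketch runs awk over sample text in the same column layout; the IDs and URLs are illustrative stand-ins, not real configurations:

```shell
# Sample output in the same format as 'rosa list oidc-config'.
# IDs and issuer URLs below are illustrative.
sample_output='ID                                MANAGED  ISSUER URL
2330dbs0n8m3chkkr25gkkcd8pnj3lk2  true     https://example.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2
233hvnrjoqu14jltk6lhbhf2tj11f8un  false    https://example.s3.us-east-1.amazonaws.com'

# Skip the header row, keep rows whose MANAGED column is "true",
# and print the ID column.
managed_id=$(printf '%s\n' "$sample_output" | awk 'NR > 1 && $2 == "true" { print $1 }')
echo "$managed_id"
```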

Some cluster capabilities, including several capabilities provided by default, are managed using Operators. Cluster-specific Operator roles (operator-roles in the ROSA CLI) use the OpenID Connect (OIDC) provider for the cluster to temporarily authenticate Operator access to AWS resources.

Operator roles are used to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage, the cloud ingress controller, and external access to a cluster.

When you create the Operator roles, the account-wide Operator policies for the matching cluster version are attached to the roles. AWS managed Operator policies are versioned in AWS IAM. The latest version of an AWS managed policy is always used, so you do not need to manage or schedule upgrades for AWS managed policies used by ROSA with HCP.

Note

If more than one matching policy is available in your account for an Operator role, an interactive list of options is provided when you create the role.

Table 3.3. Required Operator roles and AWS managed policies for ROSA with HCP

openshift-cloud-network-config-controller-cloud-credentials (ROSACloudNetworkConfigOperatorPolicy)
    An IAM role required by the cloud network config controller to manage cloud network credentials for a cluster.

openshift-image-registry-installer-cloud-credentials (ROSAImageRegistryOperatorPolicy)
    An IAM role required by the ROSA Image Registry Operator to manage the OpenShift image registry storage in AWS S3 for a cluster.

kube-system-kube-controller-manager (ROSAKubeControllerPolicy)
    An IAM role required for OpenShift management on HCP clusters.

kube-system-capa-controller-manager (ROSANodePoolManagementPolicy)
    An IAM role required for node management on HCP clusters.

kube-system-control-plane-operator (ROSAControlPlaneOperatorPolicy)
    An IAM role required for control plane management on HCP clusters.

kube-system-kms-provider (ROSAKMSProviderPolicy)
    An IAM role required for OpenShift management on HCP clusters.

openshift-ingress-operator-cloud-credentials (ROSAIngressOperatorPolicy)
    An IAM role required by the ROSA Ingress Operator to manage external access to a cluster.

openshift-cluster-csi-drivers-ebs-cloud-credentials (ROSAAmazonEBSCSIDriverOperatorPolicy)
    An IAM role required by ROSA to manage back-end storage through the Container Storage Interface (CSI).

3.4.2. Creating Operator roles and policies

When you deploy a Red Hat OpenShift Service on AWS cluster, you must create the Operator IAM roles. The cluster Operators use the Operator roles and policies to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage and external access to a cluster.

Prerequisites

  • You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.
  • You created the account-wide AWS roles.

Procedure

  1. To create your Operator roles, run the following command:

    $ rosa create operator-roles --hosted-cp --prefix=$OPERATOR_ROLES_PREFIX --oidc-config-id=$OIDC_ID --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role

    The following breakdown provides options for the Operator role creation.

    $ rosa create operator-roles --hosted-cp \
        --prefix=$OPERATOR_ROLES_PREFIX \           1
        --oidc-config-id=$OIDC_ID \                 2
        --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/$ACCOUNT_ROLES_PREFIX-HCP-ROSA-Installer-Role   3

    1 You must supply a prefix when creating these Operator roles. Failing to do so produces an error. See the Additional resources of this section for information on the Operator prefix.
    2 This value is the OIDC configuration ID that you created for your Red Hat OpenShift Service on AWS cluster.
    3 This value is the installer role ARN that you created when you created the Red Hat OpenShift Service on AWS account roles.

    You must include the --hosted-cp parameter to create the correct roles for Red Hat OpenShift Service on AWS clusters. This command returns the following information.

    Example output

    ? Role creation mode: auto
    ? Operator roles prefix: <pre-filled_prefix>   1
    ? OIDC Configuration ID: 23soa2bgvpek9kmes9s7os0a39i13qm4 | https://dvbwgdztaeq9o.cloudfront.net/23soa2bgvpek9kmes9s7os0a39i13qm4   2
    ? Create hosted control plane operator roles: Yes
    W: More than one Installer role found
    ? Installer role ARN: arn:aws:iam::4540112244:role/<prefix>-HCP-ROSA-Installer-Role
    ? Permissions boundary ARN (optional):
    I: Reusable OIDC Configuration detected. Validating trusted relationships to operator roles:
    I: Creating roles using 'arn:aws:iam::4540112244:user/<userName>'
    I: Created role '<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials'
    I: Created role '<prefix>-openshift-cloud-network-config-controller-cloud-credenti' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti'
    I: Created role '<prefix>-kube-system-kube-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager'
    I: Created role '<prefix>-kube-system-capa-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager'
    I: Created role '<prefix>-kube-system-control-plane-operator' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator'
    I: Created role '<prefix>-kube-system-kms-provider' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider'
    I: Created role '<prefix>-openshift-image-registry-installer-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials'
    I: Created role '<prefix>-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials'
    I: To create a cluster with these roles, run the following command:
    	rosa create cluster --sts --oidc-config-id 23soa2bgvpek9kmes9s7os0a39i13qm4 --operator-roles-prefix <prefix> --hosted-cp

    1 This field is prepopulated with the prefix that you set in the initial creation command.
    2 This field requires you to select an OIDC configuration that you created for your Red Hat OpenShift Service on AWS cluster.

    The Operator roles are now created and ready to use for creating your Red Hat OpenShift Service on AWS cluster.

Verification

  • You can list the Operator roles associated with your Red Hat OpenShift Service on AWS account. Run the following command:

    $ rosa list operator-roles

    Example output

    I: Fetching operator roles
    ROLE PREFIX  AMOUNT IN BUNDLE
    <prefix>      8
    ? Would you like to detail a specific prefix Yes   1
    ? Operator Role Prefix: <prefix>
    ROLE NAME                                                         ROLE ARN                                                                                         VERSION  MANAGED
    <prefix>-kube-system-capa-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager                       4.13     No
    <prefix>-kube-system-control-plane-operator                        arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator                        4.13     No
    <prefix>-kube-system-kms-provider                                  arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider                                  4.13     No
    <prefix>-kube-system-kube-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager                       4.13     No
    <prefix>-openshift-cloud-network-config-controller-cloud-credenti  arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti  4.13     No
    <prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       4.13     No
    <prefix>-openshift-image-registry-installer-cloud-credentials      arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials      4.13     No
    <prefix>-openshift-ingress-operator-cloud-credentials              arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials              4.13     No

    1 After the command runs, it displays all the prefixes associated with your AWS account and notes how many roles are associated with each prefix. If you need to see all of these roles and their details, enter "Yes" at the detail prompt to have these roles listed with specifics.
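For cross-checking the verification output above, the eight expected role names can be generated from a prefix. This is a sketch under two assumptions: the suffix list mirrors the Operator role table earlier in this chapter, and names are cut at IAM's 64-character role-name limit; the prefix is a placeholder:

```shell
prefix="my-prefix"   # illustrative Operator roles prefix

# Suffixes mirror the required Operator roles listed in Table 3.3.
suffixes="openshift-cloud-network-config-controller-cloud-credentials
openshift-image-registry-installer-cloud-credentials
kube-system-kube-controller-manager
kube-system-capa-controller-manager
kube-system-control-plane-operator
kube-system-kms-provider
openshift-ingress-operator-cloud-credentials
openshift-cluster-csi-drivers-ebs-cloud-credentials"

count=0
while IFS= read -r suffix; do
  role="${prefix}-${suffix}"
  role="${role:0:64}"    # assumption: IAM limits role names to 64 characters
  echo "$role"
  count=$((count + 1))
done <<EOF
$suffixes
EOF
echo "total roles: $count"
```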

The roles in this section are only required when you want to use OpenShift Cluster Manager to create and manage clusters. If you intend to create and manage clusters using only the ROSA CLI (rosa) and the OpenShift CLI (oc), these roles are not required.

3.5.1. Creating an ocm-role IAM role

You create your ocm-role IAM roles by using the command-line interface (CLI).

Prerequisites

  • You have an AWS account.
  • You have Red Hat Organization Administrator privileges in the OpenShift Cluster Manager organization.
  • You have the permissions required to install AWS account-wide roles.
  • You have installed and configured the latest ROSA CLI, rosa, on your installation host.

Procedure

  • To create an ocm-role IAM role with basic privileges, run the following command:

    $ rosa create ocm-role
  • To create an ocm-role IAM role with admin privileges, run the following command:

    $ rosa create ocm-role --admin

    This command allows you to create the role by specifying specific attributes. The following example output shows "auto" mode selected, which lets the ROSA CLI (rosa) create and link the role and policies. See "Methods of account-wide role creation" for more information.

Example output

I: Creating ocm role
? Role prefix: ManagedOpenShift   1
? Enable admin capabilities for the OCM role (optional): No   2
? Permissions boundary ARN (optional):   3
? Role Path (optional):   4
? Role creation mode: auto   5
I: Creating role using 'arn:aws:iam::<ARN>:user/<UserName>'
? Create the 'ManagedOpenShift-OCM-Role-182' role? Yes   6
I: Created role 'ManagedOpenShift-OCM-Role-182' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182'
I: Linking OCM role
? OCM Role ARN: arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182   7
? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' role with organization '<AWS ARN>'? Yes   8
I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' with organization account '<AWS ARN>'

1 A prefix value for all of the created AWS resources. In this example, ManagedOpenShift prepends all of the AWS resources.
2 Choose whether you want this role to have the additional admin permissions. You do not see this prompt if you used the --admin option.
3 The Amazon Resource Name (ARN) of the policy that sets permission boundaries.
4 Specify an IAM path for the role.
5 Choose the method to create your AWS roles. Using auto, the ROSA CLI generates and links the roles and policies. In auto mode, you receive some different prompts to create the AWS roles.
6 The auto method asks if you want to create a specific ocm-role using your prefix.
7 Confirm that you want to associate your IAM role with OpenShift Cluster Manager.
8 Links the created role with your AWS organization.

3.5.2. Creating a user-role IAM role

You can create your user-role IAM roles by using the command-line interface (CLI).

Prerequisites

  • You have an AWS account.
  • You have installed and configured the latest ROSA CLI, rosa, on your installation host.

Procedure

  • To create a user-role IAM role with basic privileges, run the following command:

    $ rosa create user-role

    This command allows you to create the role by specifying specific attributes. The following example output shows "auto" mode selected, which lets the ROSA CLI (rosa) create and link the role and policies. See "Understanding the auto and manual deployment modes" for more information.

Example output

I: Creating User role
? Role prefix: ManagedOpenShift   1
? Permissions boundary ARN (optional):   2
? Role Path (optional):   3
? Role creation mode: auto   4
I: Creating ocm user role using 'arn:aws:iam::2066:user'
? Create the 'ManagedOpenShift-User.osdocs-Role' role? Yes   5
I: Created role 'ManagedOpenShift-User.osdocs-Role' with ARN 'arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role'
I: Linking User role
? User Role ARN: arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role
? Link the 'arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role' role with account '1AGE'? Yes   6
I: Successfully linked role ARN 'arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role' with account '1AGE'

1 A prefix value for all of the created AWS resources. In this example, ManagedOpenShift prepends all of the AWS resources.
2 The Amazon Resource Name (ARN) of the policy that sets permission boundaries.
3 Specify an IAM path for the role.
4 Choose the method to create your AWS roles. Using auto, the ROSA CLI generates and links the roles and policies. In auto mode, you receive some different prompts to create the AWS roles.
5 The auto method asks if you want to create a specific user-role using your prefix.
6 Links the created role with your Red Hat account.

Chapter 4. Required AWS service quotas

Review this list of the Amazon Web Services (AWS) service quotas that are required to run a Red Hat OpenShift Service on AWS cluster.

4.1. Required AWS service quotas

The table below describes the AWS service quotas and levels required to create and run one Red Hat OpenShift Service on AWS cluster. Although most default values are suitable for most workloads, you might need to request additional quota for the following cases:

  • Red Hat OpenShift Service on AWS clusters require a minimum AWS EC2 service quota of 32 vCPUs to provide for cluster creation, availability, and upgrades. The default maximum value for vCPUs assigned to Running On-Demand Standard Amazon EC2 instances is 5. Therefore if you have not created a ROSA cluster using the same AWS account previously, you must request additional EC2 quota for Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances.
  • Some optional cluster configuration features, such as custom security groups, might require you to request additional quota. For example, ROSA associates 1 security group with the network interfaces in worker machine pools by default, and the default quota for Security groups per network interface is 5. If you want to add 5 custom security groups, you must request additional quota, because this would bring the total number of security groups on worker network interfaces to 6.
Note

The AWS SDK allows ROSA to check quotas, but the AWS SDK calculation does not account for your existing usage. Therefore, it is possible for cluster creation to fail because of a lack of available quota even though the AWS SDK quota check passes. To fix this issue, increase your quota.

If you need to modify or increase a specific AWS quota, see Amazon’s documentation on requesting a quota increase. Large quota requests are submitted to Amazon Support for review, and can take some time to be approved. If your quota request is urgent, contact AWS Support.
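The vCPU check described above reduces to simple arithmetic: compare the quota value you retrieve (for example, from the Service Quotas console) against the 32-vCPU minimum, accounting for vCPUs already in use. The quota and usage values below are illustrative:

```shell
required_vcpus=32        # minimum for ROSA cluster creation, availability, upgrades
quota_value=5            # AWS default for Running On-Demand Standard instances
in_use_vcpus=0           # vCPUs already consumed by other workloads (example value)

available=$((quota_value - in_use_vcpus))
if [ "$available" -lt "$required_vcpus" ]; then
  # The new quota must cover existing usage plus the cluster minimum.
  echo "insufficient: request an increase to at least $((in_use_vcpus + required_vcpus)) vCPUs"
else
  echo "sufficient"
fi
```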

Table 4.1. Red Hat OpenShift Service on AWS-required service quota

Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances
    Service code: ec2; Quota code: L-1216C47A; AWS default: 5; Minimum required: 32
    Maximum number of vCPUs assigned to the Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances. The default value of 5 vCPUs is not sufficient to create ROSA clusters.

Storage for General Purpose SSD (gp3) volume storage in TiB
    Service code: ebs; Quota code: L-7A658B76; AWS default: 50; Minimum required: 1 [a]
    The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp3) volumes in this Region. 1 TiB of storage is the required minimum for optimal performance.

[a] The default quota of 50 TiB is more than Red Hat OpenShift Service on AWS clusters require; however, because AWS cost is based on usage rather than quota, Red Hat recommends using the default quota.

Table 4.2. General AWS service quotas

EC2-VPC Elastic IPs
    Service code: ec2; Quota code: L-0263D0A3; AWS default: 5; Minimum required: 5
    The maximum number of Elastic IP addresses that you can allocate for EC2-VPC in this Region.

VPCs per Region
    Service code: vpc; Quota code: L-F678F1CE; AWS default: 5; Minimum required: 5
    The maximum number of VPCs per Region. This quota is directly tied to the maximum number of internet gateways per Region.

Internet gateways per Region
    Service code: vpc; Quota code: L-A4707A72; AWS default: 5; Minimum required: 5
    The maximum number of internet gateways per Region. This quota is directly tied to the maximum number of VPCs per Region. To increase this quota, increase the number of VPCs per Region.

Network interfaces per Region
    Service code: vpc; Quota code: L-DF5E4CA3; AWS default: 5,000; Minimum required: 5,000
    The maximum number of network interfaces per Region.

Security groups per network interface
    Service code: vpc; Quota code: L-2AFB9258; AWS default: 5; Minimum required: 5
    The maximum number of security groups per network interface. This quota, multiplied by the quota for rules per security group, cannot exceed 1,000.

Application Load Balancers per Region
    Service code: elasticloadbalancing; Quota code: L-53DA6B97; AWS default: 50; Minimum required: 50
    The maximum number of Application Load Balancers that can exist in each Region.

4.2. Next steps

Chapter 5. Setting up the environment

After you meet the AWS prerequisites, set up your environment and install Red Hat OpenShift Service on AWS.

Several command-line interface (CLI) tools are required to deploy and work with your cluster.

Prerequisites

  • You have an AWS account.
  • You have a Red Hat account.

Procedure

  1. Log in to your Red Hat and AWS accounts to access the download page for each required tool.

    1. Log in to your Red Hat account at console.redhat.com.
    2. Log in to your AWS account at aws.amazon.com.
  2. Install and configure the latest AWS CLI (aws).

    1. Install the AWS CLI by following the AWS Command Line Interface documentation appropriate for your workstation.
    2. Configure the AWS CLI by specifying your aws_access_key_id, aws_secret_access_key, and region in the .aws/credentials file. For more information, see AWS Configuration basics in the AWS documentation.

      Note

      You can optionally use the AWS_DEFAULT_REGION environment variable to set the default AWS region.

    3. Query the AWS API to verify if the AWS CLI is installed and configured correctly:

      $ aws sts get-caller-identity  --output text

      Example output

      <aws_account_id>    arn:aws:iam::<aws_account_id>:user/<username>  <aws_user_id>

  3. Install and configure the latest ROSA CLI.

    1. Navigate to Downloads.
    2. Find Red Hat OpenShift Service on AWS command line interface (rosa) in the list of tools and click Download.

      The rosa-linux.tar.gz file is downloaded to your default download location.

    3. Extract the rosa binary file from the downloaded archive. The following example extracts the binary from a Linux tar archive:

      $ tar xvf rosa-linux.tar.gz
    4. Move the rosa binary file to a directory in your execution path. In the following example, the /usr/local/bin directory is included in the path of the user:

      $ sudo mv rosa /usr/local/bin/rosa
    5. Verify that the ROSA CLI is installed correctly by querying the rosa version:

      $ rosa version

      Example output

      1.2.47
      Your ROSA CLI is up to date.

  4. Log in to the ROSA CLI using an offline access token.

    1. Run the login command:

      $ rosa login

      Example output

      To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa
      ? Copy the token and paste it here:

    2. Navigate to the URL listed in the command output to view your offline access token.
    3. Enter the offline access token at the command-line prompt to log in.

      ? Copy the token and paste it here: *******************
      [full token length omitted]
      Note

      In the future you can specify the offline access token by using the --token="<offline_access_token>" argument when you run the rosa login command.

    4. Verify that you are logged in and confirm that your credentials are correct before proceeding:

      $ rosa whoami

      Example output

      AWS Account ID:               <aws_account_number>
      AWS Default Region:           us-east-1
      AWS ARN:                      arn:aws:iam::<aws_account_number>:user/<aws_user_name>
      OCM API:                      https://api.openshift.com
      OCM Account ID:               <red_hat_account_id>
      OCM Account Name:             Your Name
      OCM Account Username:         you@domain.com
      OCM Account Email:            you@domain.com
      OCM Organization ID:          <org_id>
      OCM Organization Name:        Your organization
      OCM Organization External ID: <external_org_id>

  5. Install and configure the latest OpenShift CLI (oc).

    1. Use the ROSA CLI to download the oc CLI.

      The following command downloads the latest version of the CLI to the current working directory:

      $ rosa download openshift-client
    2. Extract the oc binary file from the downloaded archive. The following example extracts the files from a Linux tar archive:

      $ tar xvf openshift-client-linux.tar.gz
    3. Move the oc binary to a directory in your execution path. In the following example, the /usr/local/bin directory is included in the path of the user:

      $ sudo mv oc /usr/local/bin/oc
    4. Verify that the oc CLI is installed correctly:

      $ rosa verify openshift-client

      Example output

      I: Verifying whether OpenShift command-line tool is available...
      I: Current OpenShift Client Version: 4.17.3
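As a final sanity check after the procedure above, you can confirm that all three CLIs are on your PATH in one pass. This is a minimal sketch using only standard shell built-ins; it reports availability rather than versions:

```shell
# Report whether each required CLI (aws, rosa, oc) is available on PATH.
missing=0
for tool in aws rosa oc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: available at $(command -v "$tool")"
  else
    echo "$tool: not found"
    missing=$((missing + 1))
  fi
done
echo "missing tools: $missing"
```

If `missing tools: 0` is not reported, revisit the corresponding installation step before proceeding.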

5.2. Next steps

This document describes how to plan your Red Hat OpenShift Service on AWS environment based on the tested cluster maximums.

Oversubscribing the physical resources on a node affects the resource guarantees that the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.

Some of the tested maximums are stretched only in a single dimension. They will vary when many objects are running on the cluster.

The numbers noted in this documentation are based on Red Hat testing methodology, setup, configuration, and tunings. These numbers can vary based on your own individual setup and environments.

While planning your environment, determine how many pods are expected to fit per node using the following formula:

required pods per cluster / pods per node = total number of nodes needed

The current maximum number of pods per node is 250. However, the number of pods that fit on a node is dependent on the application itself. Consider the application’s memory, CPU, and storage requirements, as described in Planning your environment based on application requirements.

Example scenario

If you want to scope your cluster for 2200 pods per cluster, you would need at least nine nodes, assuming that there are 250 maximum pods per node:

2200 / 250 = 8.8

If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node:

2200 / 20 = 110

Where:

required pods per cluster / total number of nodes = expected pods per node
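The two formulas above can be checked with integer arithmetic. This sketch reproduces the 2200-pod example, rounding the node count up since a fractional node is not possible:

```shell
required_pods=2200
max_pods_per_node=250    # current per-node maximum

# Ceiling division: nodes needed when packing to the per-node maximum.
nodes_needed=$(( (required_pods + max_pods_per_node - 1) / max_pods_per_node ))
echo "nodes needed: $nodes_needed"    # 2200 / 250 = 8.8, rounded up to 9

# Distributing the same pods over 20 nodes instead.
nodes=20
pods_per_node=$(( required_pods / nodes ))
echo "pods per node on $nodes nodes: $pods_per_node"    # 2200 / 20 = 110
```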

This document describes how to plan your Red Hat OpenShift Service on AWS environment based on your application requirements.

Consider an example application environment:

Pod type    Pod quantity  Max memory  CPU cores  Persistent storage
apache      100           500 MB      0.5        1 GB
node.js     200           1 GB        1          1 GB
postgresql  100           1 GB        2          10 GB
JBoss EAP   100           1 GB        1          1 GB

Extrapolated requirements: 550 CPU cores, 450 GB RAM, and 1.4 TB storage.
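The extrapolated totals follow from multiplying each pod type's quantity by its per-pod figures and summing. This sketch reproduces the numbers, keeping CPU in tenths of a core so all arithmetic stays integer:

```shell
# Per the example table:
# apache:     100 pods, 0.5 cores,  500 MB, 1 GB
# node.js:    200 pods, 1 core,    1000 MB, 1 GB
# postgresql: 100 pods, 2 cores,   1000 MB, 10 GB
# JBoss EAP:  100 pods, 1 core,    1000 MB, 1 GB

total_cpu_tenths=$(( 100*5 + 200*10 + 100*20 + 100*10 ))
total_mem_mb=$(( 100*500 + 200*1000 + 100*1000 + 100*1000 ))
total_storage_gb=$(( 100*1 + 200*1 + 100*10 + 100*1 ))

echo "CPU cores:    $(( total_cpu_tenths / 10 ))"   # 550
echo "RAM (GB):     $(( total_mem_mb / 1000 ))"     # 450
echo "Storage (GB): $total_storage_gb"              # 1400 GB = 1.4 TB
```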

Instance size for nodes can be modulated up or down, depending on your preference. Nodes are often resource overcommitted. In this deployment scenario, you can choose to run additional smaller nodes or fewer larger nodes to provide the same amount of resources. Factors such as operational agility and cost-per-instance should be considered.

Node type         Quantity  CPUs  RAM (GB)
Nodes (option 1)  100       4     16
Nodes (option 2)  50        8     32
Nodes (option 3)  25        16    64

Some applications lend themselves well to overcommitted environments, and some do not. Most Java applications and applications that use huge pages are examples of applications that do not allow for overcommitment: that memory cannot be used for other applications. In the example above, the environment is roughly 30 percent overcommitted, a common ratio.

The application pods can access a service either by using environment variables or DNS. If using environment variables, the variables for each active service are injected by the kubelet when a pod is run on a node. A cluster-aware DNS server watches the Kubernetes API for new services and creates a set of DNS records for each one. If DNS is enabled throughout your cluster, then all pods should automatically be able to resolve services by their DNS name. Use DNS for service discovery if you must go beyond 5000 services. When using environment variables for service discovery, the argument list exceeds the allowed length after 5000 services in a namespace, and the pods and deployments start failing.

Disable the service links in the deployment’s service specification file to overcome this:

Example

kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: deploymentConfigTemplate
  creationTimestamp:
  annotations:
    description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service.
    tags: ''
objects:
  - kind: DeploymentConfig
    apiVersion: apps.openshift.io/v1
    metadata:
      name: deploymentconfig${IDENTIFIER}
    spec:
      template:
        metadata:
          labels:
            name: replicationcontroller${IDENTIFIER}
        spec:
          enableServiceLinks: false
          containers:
          - name: pause${IDENTIFIER}
            image: "${IMAGE}"
            ports:
            - containerPort: 8080
              protocol: TCP
            env:
            - name: ENVVAR1_${IDENTIFIER}
              value: "${ENV_VALUE}"
            - name: ENVVAR2_${IDENTIFIER}
              value: "${ENV_VALUE}"
            - name: ENVVAR3_${IDENTIFIER}
              value: "${ENV_VALUE}"
            - name: ENVVAR4_${IDENTIFIER}
              value: "${ENV_VALUE}"
            resources: {}
            imagePullPolicy: IfNotPresent
            capabilities: {}
            securityContext:
              capabilities: {}
              privileged: false
          restartPolicy: Always
          serviceAccount: ''
      replicas: 1
      selector:
        name: replicationcontroller${IDENTIFIER}
      triggers:
      - type: ConfigChange
      strategy:
        type: Rolling
  - kind: Service
    apiVersion: v1
    metadata:
      name: service${IDENTIFIER}
    spec:
      selector:
        name: replicationcontroller${IDENTIFIER}
      ports:
      - name: serviceport${IDENTIFIER}
        protocol: TCP
        port: 80
        targetPort: 8080
      portalIP: ''
      type: ClusterIP
      sessionAffinity: None
    status:
      loadBalancer: {}
parameters:
  - name: IDENTIFIER
    description: Number to append to the name of resources
    value: '1'
    required: true
  - name: IMAGE
    description: Image to use for deploymentConfig
    value: gcr.io/google-containers/pause-amd64:3.0
    required: false
  - name: ENV_VALUE
    description: Value to use for environment variables
    generate: expression
    from: "[A-Za-z0-9]{255}"
    required: false
labels:
  template: deploymentConfigTemplate

The number of application pods that can run in a namespace depends on the number of services and on the length of the service names when environment variables are used for service discovery. ARG_MAX defines the maximum argument length for a new process on the system; it is set to 2097152 bytes (2 MiB) by default. The kubelet injects the following environment variables into each pod scheduled to run in the namespace:

  • <SERVICE_NAME>_SERVICE_HOST=<IP>
  • <SERVICE_NAME>_SERVICE_PORT=<PORT>
  • <SERVICE_NAME>_PORT=tcp://<IP>:<PORT>
  • <SERVICE_NAME>_PORT_<PORT>_TCP=tcp://<IP>:<PORT>
  • <SERVICE_NAME>_PORT_<PORT>_TCP_PROTO=tcp
  • <SERVICE_NAME>_PORT_<PORT>_TCP_PORT=<PORT>
  • <SERVICE_NAME>_PORT_<PORT>_TCP_ADDR=<ADDR>

Pods in the namespace start to fail once the combined length of these injected variables exceeds the allowed value; both the number of services and the number of characters in each service name count against the limit.
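A rough estimate of how quickly these variables consume ARG_MAX can be sketched as follows. This is illustrative only: it assumes each service exposes a single TCP port, uses a hypothetical service name and cluster IP, and ignores the rest of the pod's environment:

```python
# Rough estimate of the argument-list bytes consumed by the seven
# environment variables the kubelet injects per service.
ARG_MAX = 2097152  # 2 MiB default

def service_env_bytes(name: str, ip: str = "172.30.0.1", port: int = 80) -> int:
    """Byte count of the injected variables for one single-port TCP service."""
    n = name.upper().replace("-", "_")  # env var form of the service name
    variables = [
        f"{n}_SERVICE_HOST={ip}",
        f"{n}_SERVICE_PORT={port}",
        f"{n}_PORT=tcp://{ip}:{port}",
        f"{n}_PORT_{port}_TCP=tcp://{ip}:{port}",
        f"{n}_PORT_{port}_TCP_PROTO=tcp",
        f"{n}_PORT_{port}_TCP_PORT={port}",
        f"{n}_PORT_{port}_TCP_ADDR={ip}",
    ]
    # Each "NAME=value" string is NUL-terminated in the process environment.
    return sum(len(v) + 1 for v in variables)

per_service = service_env_bytes("my-example-service")
print(f"{per_service} bytes per service")
print(f"~{ARG_MAX // per_service} services before ARG_MAX is exhausted")
```

Longer service names increase the per-service cost (the name appears in all seven variables), which is why the character count of service names matters as well as the number of services.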

Legal Notice

Copyright © 2025 Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
