Prepare your environment
Planning, limits, and scalability for Red Hat OpenShift Service on AWS
Chapter 1. Prerequisites checklist for deploying ROSA using STS
This is a checklist of prerequisites needed to create a Red Hat OpenShift Service on AWS (ROSA) classic cluster with STS.
This is a high-level checklist, and your implementation can vary.
Before running the installation process, verify that you deploy this from a machine that has access to:
- The API services for the cloud to which you provision.
- Access to api.openshift.com, oidc.op1.openshiftapps.com, and sso.redhat.com.
- The hosts on the network that you provision.
- The internet to obtain installation media.
Starting with version 1.2.7 of the ROSA CLI, all OIDC provider endpoint URLs on new clusters use Amazon CloudFront and the oidc.op1.openshiftapps.com domain. This change improves access speed, reduces latency, and improves resiliency for new clusters created with the ROSA CLI 1.2.7 or later. There are no supported migration paths for existing OIDC provider configurations.
1.1. Accounts and CLIs Prerequisites
The accounts you need and the CLIs you must install to deploy the cluster.
1.1.1. AWS account
Gather the following details:
- AWS IAM User
- AWS Access Key ID
- AWS Secret Access Key
- Ensure that you have the right permissions as detailed in AWS managed IAM policies for ROSA and About IAM resources for ROSA clusters that use STS.
- See Account for more details.
1.1.2. AWS CLI (aws)
- Install from AWS Command Line Interface if you have not already.
Configure the CLI:
- Enter aws configure in the terminal:
$ aws configure
- Enter the AWS Access Key ID and press Enter.
- Enter the AWS Secret Access Key and press Enter.
- Enter the default region you want to deploy into.
- Enter the output format you want, "table" or "json".
Verify the output by running:
$ aws sts get-caller-identity
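Example output (a sketch; the values shown are placeholders for your own IAM user):
{
    "UserId": "<user_id>",
    "Account": "<account_id>",
    "Arn": "arn:aws:iam::<account_id>:user/<user_name>"
}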
Ensure that the service role for ELB already exists by running:
$ aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing"
If it does not exist, run:
$ aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
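If you prefer a single step, a minimal shell sketch that creates the role only when the check fails, using the same role and service names as the commands above:
$ aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" >/dev/null 2>&1 || \
  aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"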
1.1.3. Red Hat account
- Create a Red Hat Hybrid Cloud Console account if you have not already.
1.1.4. ROSA CLI (rosa)
- Enable ROSA from your AWS account on the AWS console if you have not already.
- Install the CLI from Installing the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, or from the OpenShift console.
- Enter rosa login in a terminal; this prompts you to go to the token page through the console:
$ rosa login
- Log in with your Red Hat account credentials.
- Click the Load token button.
- Copy the token, paste it back into the CLI prompt, and press Enter.
Alternatively, you can copy the full $ rosa login --token=abc… command and paste that in the terminal:
$ rosa login --token=<abc..>
Verify your credentials by running:
$ rosa whoami
Ensure you have sufficient quota by running:
$ rosa verify quota
- See Provisioned AWS Infrastructure for more details on the AWS services provisioned for a ROSA cluster.
- See Required AWS service quotas for more details on AWS service quotas.
1.1.5. OpenShift CLI (oc)
- Install from Getting started with the OpenShift CLI or from the OpenShift console Command-line interface (CLI) tools page.
Verify that the OpenShift CLI has been installed correctly by running:
$ rosa verify openshift-client
Once you have the above prerequisites installed and enabled, proceed to the next steps.
1.2. SCP Prerequisites
ROSA clusters are hosted in an AWS account within an AWS organizational unit. A service control policy (SCP) is created and applied to the AWS organizational unit that manages what services the AWS sub-accounts are permitted to access.
- Ensure that your organization’s SCPs are not more restrictive than the roles and policies required by the cluster.
- Ensure that your SCP is configured to allow the required aws-marketplace:Subscribe permission when you choose Enable ROSA from the console. See AWS Organizations service control policy (SCP) is denying required AWS Marketplace permissions for more details.
When you create a ROSA classic cluster, an associated AWS OpenID Connect (OIDC) identity provider is created.
- This OIDC provider configuration relies on a public key that is located in the us-east-1 AWS region.
- Customers with AWS SCPs must allow the use of the us-east-1 AWS region, even if these clusters are deployed in a different region.
1.3. Networking Prerequisites
Prerequisites needed from a networking standpoint.
1.3.1. Minimum bandwidth
During cluster deployment, Red Hat OpenShift Service on AWS requires a minimum bandwidth of 120 Mbps between cluster resources and public internet resources. When network connectivity is slower than 120 Mbps (for example, when connecting through a proxy) the cluster installation process times out and deployment fails.
After deployment, network requirements are determined by your workload. However, a minimum bandwidth of 120 Mbps helps to ensure timely cluster and operator upgrades.
1.3.2. Firewall
- Configure your firewall to allow access to the domains and ports listed in AWS firewall prerequisites.
1.3.3. Additional custom security groups
When you create a cluster using an existing non-managed VPC, you can add additional custom security groups during cluster creation. Complete these prerequisites before you create the cluster:
- Create the custom security groups in AWS before you create the cluster.
- Associate the custom security groups with the VPC that you are using to create the cluster. Do not associate the custom security groups with any other VPC.
- You may need to request additional AWS quota for Security groups per network interface.
For more details, see the detailed requirements for Security groups.
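You can check your current value for that quota with the Service Quotas API; the quota code L-2AFB9258 for "Security groups per network interface" is an assumption based on the public AWS quota listing:
$ aws service-quotas get-service-quota --service-code vpc --quota-code L-2AFB9258 \
  --query 'Quota.{Name:QuotaName,Value:Value}'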
1.3.4. Custom DNS
If you want to use custom DNS, the ROSA installer must be able to use VPC DNS with default DHCP options so it can resolve hosts locally.
- Run aws ec2 describe-dhcp-options and check whether the VPC is using the VPC Resolver:
$ aws ec2 describe-dhcp-options
- Otherwise, the upstream DNS must forward the cluster scope to this VPC so the cluster can resolve internal IPs and services.
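A minimal sketch of that check, with both IDs as placeholders: first look up the DHCP options set attached to the VPC, then confirm that its domain-name-servers entry is AmazonProvidedDNS:
$ aws ec2 describe-vpcs --vpc-ids <vpc-id> --query 'Vpcs[0].DhcpOptionsId'
$ aws ec2 describe-dhcp-options --dhcp-options-ids <dhcp-options-id> \
  --query 'DhcpOptions[0].DhcpConfigurations'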
1.4. PrivateLink Prerequisites
If you choose to deploy a PrivateLink cluster, then be sure to deploy the cluster in the pre-existing BYO VPC:
- Create a public and private subnet for each AZ that your cluster uses.
  - Alternatively, implement a transit gateway for internet and egress with appropriate routes.
- The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address range for cluster machines.
  - The subnet CIDR blocks must belong to the machine CIDR that you specify.
- Set both enableDnsHostnames and enableDnsSupport to true (a CLI sketch follows this list).
  - That way, the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster internal DNS records.
- Verify route tables by running:
$ aws ec2 describe-route-tables --filters "Name=vpc-id,Values=<vpc-id>"
- Ensure that the cluster can egress either through a NAT gateway in a public subnet or through a transit gateway.
- Ensure that whatever UDR you would like to follow is set up.
- You can also configure a cluster-wide proxy during or after installation. See Configuring a cluster-wide proxy for more details.
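A minimal sketch for setting and verifying the two DNS attributes with the AWS CLI (the VPC ID is a placeholder; note that modify-vpc-attribute accepts only one attribute per call):
$ aws ec2 modify-vpc-attribute --vpc-id <vpc-id> --enable-dns-support '{"Value":true}'
$ aws ec2 modify-vpc-attribute --vpc-id <vpc-id> --enable-dns-hostnames '{"Value":true}'
$ aws ec2 describe-vpc-attribute --vpc-id <vpc-id> --attribute enableDnsSupport
$ aws ec2 describe-vpc-attribute --vpc-id <vpc-id> --attribute enableDnsHostnames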
You can install a non-PrivateLink ROSA cluster in a pre-existing BYO VPC.
Chapter 2. Detailed requirements for deploying ROSA using STS
Red Hat OpenShift Service on AWS (ROSA) provides a model that allows Red Hat to deploy clusters into a customer’s existing Amazon Web Service (AWS) account.
AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS (ROSA) because it provides enhanced security.
Ensure that the following AWS prerequisites are met before installing ROSA with STS.
When you create a ROSA cluster using AWS STS, an associated AWS OpenID Connect (OIDC) identity provider is created as well. This OIDC provider configuration relies on a public key that is located in the us-east-1 AWS region. Customers with AWS SCPs must allow the use of the us-east-1 AWS region, even if these clusters are deployed in a different region.
2.1. Customer requirements when using STS for deployment
The following prerequisites must be complete before you deploy a Red Hat OpenShift Service on AWS (ROSA) cluster that uses the AWS Security Token Service (STS).
2.1.1. Account
You must ensure that the AWS limits are sufficient to support Red Hat OpenShift Service on AWS provisioned within your AWS account. Running the rosa verify quota command in the CLI validates that you have the required quota to run a cluster.
Note: Quota verification checks your AWS quota, but it does not compare your consumption to your AWS quota. See the "Limits and scalability" link in Additional resources for more information.
- If SCP policies are applied and enforced, these policies must not be more restrictive than the roles and policies required by the cluster.
- Your AWS account should not be transferable to Red Hat.
- You should not impose additional AWS usage restrictions beyond the defined roles and policies on Red Hat activities. Imposing restrictions will severely hinder Red Hat’s ability to respond to incidents.
- You may deploy native AWS services within the same AWS account.
Your account must have a service-linked role set up as it is required for Elastic Load Balancing (ELB) to be configured. See the "Creating the Elastic Load Balancing (ELB) service-linked role" link in the Additional resources for information about creating a service-linked role for your ELB if you have not created a load balancer in your AWS account previously.
Note: You are encouraged, but not required, to deploy resources in a Virtual Private Cloud (VPC) separate from the VPC hosting Red Hat OpenShift Service on AWS and other Red Hat supported services.
Additional resources
2.1.2. Access requirements
- Red Hat must have AWS console access to the customer-provided AWS account. Red Hat protects and manages this access.
- You must not use the AWS account to elevate your permissions within the Red Hat OpenShift Service on AWS (ROSA) cluster.
- Actions available in the ROSA CLI (rosa) or OpenShift Cluster Manager console must not be performed directly in your AWS account.
- You do not need to have a preconfigured domain to deploy ROSA clusters. If you wish to use a custom domain, see the Additional resources for information.
Additional resources
2.1.3. Support requirements
- Red Hat recommends that the customer have at least Business Support from AWS.
- Red Hat may have permission from the customer to request AWS support on their behalf.
- Red Hat may have permission from the customer to request AWS resource limit increases on the customer’s account.
- Red Hat manages the restrictions, limitations, expectations, and defaults for all Red Hat OpenShift Service on AWS clusters in the same manner, unless otherwise specified in this requirements section.
2.1.4. Security requirements
- Red Hat must have ingress access to EC2 hosts and the API server from allow-listed IP addresses.
- Red Hat must have egress allowed to the documented domains. See the "AWS firewall prerequisites" section for the designated domains.
Additional resources
2.1.5. Requirements for using OpenShift Cluster Manager
The following sections describe the requirements for OpenShift Cluster Manager. If you use the CLI tools exclusively, you can disregard these requirements.
To use OpenShift Cluster Manager, you must link your AWS accounts. This linking concept is also known as account association.
2.1.5.1. AWS account association
Red Hat OpenShift Service on AWS (ROSA) cluster-provisioning tasks require linking ocm-role and user-role IAM roles to your AWS account using your Amazon Resource Name (ARN).
The ocm-role ARN is stored as a label in your Red Hat organization, while the user-role ARN is stored as a label inside your Red Hat user account. Red Hat uses these ARN labels to confirm that the user is a valid account holder and that the correct permissions are available to perform the necessary tasks in the AWS account.
2.1.5.2. Linking your AWS account
You can link your AWS account to existing IAM roles by using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa.
Prerequisites
- You have an AWS account.
- You are using OpenShift Cluster Manager to create clusters.
- You have the permissions required to install AWS account-wide roles. See the "Additional resources" of this section for more information.
- You have installed and configured the latest AWS (aws) and ROSA (rosa) CLIs on your installation host.
- You have created your ocm-role and user-role IAM roles, but have not yet linked them to your AWS account. You can check whether your IAM roles are already linked by running the following commands:
$ rosa list ocm-role
$ rosa list user-role
If Yes is displayed in the Linked column for both roles, you have already linked the roles to an AWS account.
Procedure
From the CLI, link your ocm-role resource to your Red Hat organization by using your Amazon Resource Name (ARN):
Note: You must have Red Hat Organization Administrator privileges to run the rosa link command. After you link the ocm-role resource with your AWS account, it is visible for all users in the organization.
$ rosa link ocm-role --role-arn <arn>
Example output
I: Linking OCM role
? Link the '<AWS ACCOUNT ID>' role with organization '<ORG ID>'? Yes
I: Successfully linked role-arn '<AWS ACCOUNT ID>' with organization account '<ORG ID>'
From the CLI, link your user-role resource to your Red Hat user account by using your Amazon Resource Name (ARN):
$ rosa link user-role --role-arn <arn>
Example output
I: Linking User role
? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' role with organization '<AWS ID>'? Yes
I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' with organization account '<AWS ID>'
Additional resources
- See Account-wide IAM role and policy reference for a list of IAM roles needed for cluster creation.
2.1.5.3. Associating multiple AWS accounts with your Red Hat organization
You can associate multiple AWS accounts with your Red Hat organization. Associating multiple accounts lets you create Red Hat OpenShift Service on AWS (ROSA) clusters on any of the associated AWS accounts from your Red Hat organization.
With this feature, you can create clusters in different AWS regions by using multiple AWS profiles as region-bound environments.
Prerequisites
- You have an AWS account.
- You are using OpenShift Cluster Manager to create clusters.
- You have the permissions required to install AWS account-wide roles.
- You have installed and configured the latest AWS (aws) and ROSA (rosa) CLIs on your installation host.
- You have created your ocm-role and user-role IAM roles.
Procedure
To associate an additional AWS account, first create a profile in your local AWS configuration. Then, associate the account with your Red Hat organization by creating the ocm-role, user, and account roles in the additional AWS account.
To create the roles in an additional region, specify the --profile <aws_profile> parameter when running the rosa create commands and replace <aws_profile> with the additional account profile name:
To specify an AWS account profile when creating an OpenShift Cluster Manager role:
$ rosa create --profile <aws_profile> ocm-role
To specify an AWS account profile when creating a user role:
$ rosa create --profile <aws_profile> user-role
To specify an AWS account profile when creating the account roles:
$ rosa create --profile <aws_profile> account-roles
If you do not specify a profile, the default AWS profile is used.
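If the additional profile does not exist yet, you can create it interactively with the AWS CLI; <aws_profile> is the same placeholder used above:
$ aws configure --profile <aws_profile>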
2.2. Requirements for deploying a cluster in an opt-in region
An AWS opt-in region is a region that is not enabled by default. If you want to deploy a Red Hat OpenShift Service on AWS (ROSA) cluster that uses the AWS Security Token Service (STS) in an opt-in region, you must meet the following requirements:
- The region must be enabled in your AWS account. For more information about enabling opt-in regions, see Managing AWS Regions in the AWS documentation.
- The security token version in your AWS account must be set to version 2. You cannot use version 1 security tokens for opt-in regions.
Important: Updating to security token version 2 can impact the systems that store the tokens, due to the increased token length. For more information, see the AWS documentation on setting STS preferences.
2.2.1. Setting the AWS security token version
If you want to create a Red Hat OpenShift Service on AWS (ROSA) cluster with the AWS Security Token Service (STS) in an AWS opt-in region, you must set the security token version to version 2 in your AWS account.
Prerequisites
- You have installed and configured the latest AWS CLI on your installation host.
Procedure
List the ID of the AWS account that is defined in your AWS CLI configuration:
$ aws sts get-caller-identity --query Account --output json
Ensure that the output matches the ID of the relevant AWS account.
List the security token version that is set in your AWS account:
$ aws iam get-account-summary --query SummaryMap.GlobalEndpointTokenVersion --output json
Example output
1
To update the security token version to version 2 for all regions in your AWS account, run the following command:
$ aws iam set-security-token-service-preferences --global-endpoint-token-version v2Token
Important: Updating to security token version 2 can impact the systems that store the tokens, due to the increased token length. For more information, see the AWS documentation on setting STS preferences.
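To confirm that the change took effect, rerun the earlier query; after the update, the expected output is 2:
$ aws iam get-account-summary --query SummaryMap.GlobalEndpointTokenVersion --output json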
2.3. Red Hat managed IAM references for AWS
With the STS deployment model, Red Hat is no longer responsible for creating and managing Amazon Web Services (AWS) IAM policies, IAM users, or IAM roles. For information on creating these roles and policies, see the following sections on IAM roles.
- To use the ocm CLI, you must have an ocm-role and user-role resource. See OpenShift Cluster Manager IAM role resources.
- If you have a single cluster, see Account-wide IAM role and policy reference.
- For every cluster, you must have the necessary operator roles. See Cluster-specific Operator IAM role reference.
2.4. Provisioned AWS Infrastructure
This is an overview of the provisioned Amazon Web Services (AWS) components on a deployed Red Hat OpenShift Service on AWS (ROSA) cluster. For a more detailed listing of all provisioned AWS components, see the OpenShift Container Platform documentation.
2.4.1. EC2 instances
AWS EC2 instances are required for deploying the control plane and data plane functions of ROSA in the AWS public cloud.
Instance types can vary for control plane and infrastructure nodes, depending on the worker node count. At a minimum, the following EC2 instances will be deployed:
- Three m5.2xlarge control plane nodes
- Two r5.xlarge infrastructure nodes
- Two m5.xlarge customizable worker nodes
For further guidance on worker node counts, see the information about initial planning considerations in the "Limits and scalability" topic listed in the "Additional resources" section of this page.
2.4.2. Amazon Elastic Block Store storage
Amazon Elastic Block Store (Amazon EBS) block storage is used for both local node storage and persistent volume storage.
Volume requirements for each EC2 instance:
Control Plane Volume
- Size: 350GB
- Type: gp3
- Input/Output Operations Per Second: 1000
Infrastructure Volume
- Size: 300GB
- Type: gp3
- Input/Output Operations Per Second: 900
Worker Volume
- Size: 300GB
- Type: gp3
- Input/Output Operations Per Second: 900
Clusters deployed before the release of OpenShift Container Platform 4.11 use gp2 type storage by default.
2.4.3. Elastic Load Balancing
Up to two Network Load Balancers are provisioned for the API, and up to two Classic Load Balancers for the application router. For more information, see the ELB documentation for AWS.
2.4.4. S3 storage
The image registry is backed by AWS S3 storage. Pruning of resources is performed regularly to optimize S3 usage and cluster performance.
Two buckets are required with a typical size of 2TB each.
2.4.5. VPC
Customers should expect to see one VPC per cluster. Additionally, the VPC will need the following configurations:
- Subnets: Two subnets for a cluster with a single availability zone, or six subnets for a cluster with multiple availability zones.
Note: A public subnet connects directly to the internet through an internet gateway. A private subnet connects to the internet through a network address translation (NAT) gateway.
- Route tables: One route table per private subnet, and one additional table per cluster.
- Internet gateways: One internet gateway per cluster.
- NAT gateways: One NAT gateway per public subnet.
Figure 2.1. Sample VPC Architecture
2.4.6. Security groups
AWS security groups provide security at the protocol and port access level; they are associated with EC2 instances and Elastic Load Balancing (ELB) load balancers. Each security group contains a set of rules that filter traffic coming in and out of one or more EC2 instances. You must ensure the ports required for the OpenShift installation are open on your network and configured to allow access between hosts.
Group | Type | IP Protocol | Port range
---|---|---|---
MasterSecurityGroup | AWS::EC2::SecurityGroup | icmp | 0
 | | tcp | 22
 | | tcp | 6443
 | | tcp | 22623
WorkerSecurityGroup | AWS::EC2::SecurityGroup | icmp | 0
 | | tcp | 22
BootstrapSecurityGroup | AWS::EC2::SecurityGroup | tcp | 22
 | | tcp | 19531
2.4.6.1. Additional custom security groups
When you create a cluster using an existing non-managed VPC, you can add additional custom security groups during cluster creation. Custom security groups are subject to the following limitations:
- You must create the custom security groups in AWS before you create the cluster. For more information, see Amazon EC2 security groups for Linux instances.
- You must associate the custom security groups with the VPC that the cluster will be installed into. Your custom security groups cannot be associated with another VPC.
- You might need to request additional quota for your VPC if you are adding additional custom security groups. For information on AWS quota requirements for ROSA, see Required AWS service quotas in Prepare your environment. For information on requesting an AWS quota increase, see Requesting a quota increase.
2.5. Networking prerequisites
2.5.1. Minimum bandwidth
During cluster deployment, Red Hat OpenShift Service on AWS requires a minimum bandwidth of 120 Mbps between cluster resources and public internet resources. When network connectivity is slower than 120 Mbps (for example, when connecting through a proxy) the cluster installation process times out and deployment fails.
After deployment, network requirements are determined by your workload. However, a minimum bandwidth of 120 Mbps helps to ensure timely cluster and operator upgrades.
2.5.2. AWS firewall prerequisites
If you are using a firewall to control egress traffic from your Red Hat OpenShift Service on AWS cluster, you must configure your firewall to grant access to the domain and port combinations below. Red Hat OpenShift Service on AWS requires this access to provide a fully managed OpenShift service.
2.5.2.1. ROSA Classic
Only ROSA clusters deployed with PrivateLink can use a firewall to control egress traffic.
Prerequisites
- You have configured an Amazon S3 gateway endpoint in your AWS Virtual Private Cloud (VPC). This endpoint is required to complete requests from the cluster to the Amazon S3 service.
Procedure
Allowlist the following URLs that are used to install and download packages and tools:
Domain | Port | Function
---|---|---
registry.redhat.io | 443 | Provides core container images.
quay.io | 443 | Provides core container images.
cdn01.quay.io | 443 | Provides core container images.
cdn02.quay.io | 443 | Provides core container images.
cdn03.quay.io | 443 | Provides core container images.
sso.redhat.com | 443 | Required. The https://console.redhat.com/openshift site uses authentication from sso.redhat.com to download the pull secret and use Red Hat SaaS solutions to facilitate monitoring of your subscriptions, cluster inventory, chargeback reporting, and so on.
quay-registry.s3.amazonaws.com | 443 | Provides core container images.
quayio-production-s3.s3.amazonaws.com | 443 | Provides core container images.
openshift.org | 443 | Provides Red Hat Enterprise Linux CoreOS (RHCOS) images.
registry.access.redhat.com | 443 | Hosts all the container images that are stored on the Red Hat Ecosystem Catalog. Additionally, the registry provides access to the odo CLI tool that helps developers build on OpenShift and Kubernetes.
access.redhat.com | 443 | Required. Hosts a signature store that a container client requires for verifying images when pulling them from registry.access.redhat.com.
registry.connect.redhat.com | 443 | Required for all third-party images and certified Operators.
console.redhat.com | 443 | Required. Allows interactions between the cluster and OpenShift Cluster Manager to enable functionality, such as scheduling upgrades.
sso.redhat.com | 443 | The https://console.redhat.com/openshift site uses authentication from sso.redhat.com.
pull.q1w2.quay.rhcloud.com | 443 | Provides core container images as a fallback when quay.io is not available.
*.q1w2.quay.rhcloud.com | 443 | Provides core container images as a fallback when quay.io is not available.
www.okd.io | 443 | The openshift.org site redirects through www.okd.io.
www.redhat.com | 443 | The sso.redhat.com site redirects through www.redhat.com.
aws.amazon.com | 443 | The iam.amazonaws.com and sts.amazonaws.com sites redirect through aws.amazon.com.
catalog.redhat.com | 443 | The registry.access.redhat.com and https://registry.redhat.io sites redirect through catalog.redhat.com.
dvbwgdztaeq9o.cloudfront.net [1] | 443 | Used by ROSA for STS implementation with managed OIDC configuration.
time-a-g.nist.gov | 123 [2] | Allows NTP traffic for FedRAMP.
time-a-wwv.nist.gov | 123 [2] | Allows NTP traffic for FedRAMP.
time-a-b.nist.gov | 123 [2] | Allows NTP traffic for FedRAMP.
1. The string of alphanumeric characters before cloudfront.net could change if there is a major CloudFront outage that requires redirecting the resource.
2. Both TCP and UDP ports.
Allowlist the following telemetry URLs:
Domain | Port | Function
---|---|---
cert-api.access.redhat.com | 443 | Required for telemetry.
api.access.redhat.com | 443 | Required for telemetry.
infogw.api.openshift.com | 443 | Required for telemetry.
console.redhat.com | 443 | Required for telemetry and Red Hat Insights.
cloud.redhat.com/api/ingress | 443 | Required for telemetry and Red Hat Insights.
observatorium-mst.api.openshift.com | 443 | Required for managed OpenShift-specific telemetry.
observatorium.api.openshift.com | 443 | Required for managed OpenShift-specific telemetry.
Managed clusters require enabling telemetry to allow Red Hat to react more quickly to problems, better support the customers, and better understand how product upgrades impact clusters. For more information about how remote health monitoring data is used by Red Hat, see About remote health monitoring in the Additional resources section.
Allowlist the following Amazon Web Services (AWS) API URLs:
Domain | Port | Function
---|---|---
*.amazonaws.com | 443 | Required to access AWS services and resources.
Alternatively, if you choose to not use a wildcard for Amazon Web Services (AWS) APIs, you must allowlist the following URLs:
Domain | Port | Function
---|---|---
ec2.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment.
events.<aws_region>.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment.
iam.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment.
route53.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment.
sts.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment, for clusters configured to use the global endpoint for AWS STS.
sts.<aws_region>.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment, for clusters configured to use regionalized endpoints for AWS STS. See AWS STS regionalized endpoints for more information.
tagging.us-east-1.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1, regardless of the region the cluster is deployed in.
ec2.<aws_region>.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment.
elasticloadbalancing.<aws_region>.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment.
servicequotas.<aws_region>.amazonaws.com | 443 | Required. Used to confirm quotas for deploying the service.
tagging.<aws_region>.amazonaws.com | 443 | Allows the assignment of metadata about AWS resources in the form of tags.
Allowlist the following OpenShift URLs:
Domain | Port | Function
---|---|---
mirror.openshift.com | 443 | Used to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator (CVO) needs only a single functioning source.
storage.googleapis.com/openshift-release (Recommended) | 443 | Alternative site to mirror.openshift.com. Used to download platform release signatures that are used by the cluster to know what images to pull from quay.io.
api.openshift.com | 443 | Used to check if updates are available for the cluster.
Allowlist the following site reliability engineering (SRE) and management URLs:
Domain | Port | Function
---|---|---
api.pagerduty.com | 443 | This alerting service is used by the in-cluster alertmanager to send alerts notifying Red Hat SRE of an event to take action on.
events.pagerduty.com | 443 | This alerting service is used by the in-cluster alertmanager to send alerts notifying Red Hat SRE of an event to take action on.
api.deadmanssnitch.com | 443 | Alerting service used by Red Hat OpenShift Service on AWS to send periodic pings that indicate whether the cluster is available and running.
nosnch.in | 443 | Alerting service used by Red Hat OpenShift Service on AWS to send periodic pings that indicate whether the cluster is available and running.
*.osdsecuritylogs.splunkcloud.com OR inputs1.osdsecuritylogs.splunkcloud.com, inputs2.osdsecuritylogs.splunkcloud.com, inputs4.osdsecuritylogs.splunkcloud.com, inputs5.osdsecuritylogs.splunkcloud.com, inputs6.osdsecuritylogs.splunkcloud.com, inputs7.osdsecuritylogs.splunkcloud.com, inputs8.osdsecuritylogs.splunkcloud.com, inputs9.osdsecuritylogs.splunkcloud.com, inputs10.osdsecuritylogs.splunkcloud.com, inputs11.osdsecuritylogs.splunkcloud.com, inputs12.osdsecuritylogs.splunkcloud.com, inputs13.osdsecuritylogs.splunkcloud.com, inputs14.osdsecuritylogs.splunkcloud.com, inputs15.osdsecuritylogs.splunkcloud.com | 9997 | Used by the splunk-forwarder-operator as a log forwarding endpoint to be used by Red Hat SRE for log-based alerting.
http-inputs-osdsecuritylogs.splunkcloud.com | 443 | Required. Used by the splunk-forwarder-operator as a log forwarding endpoint to be used by Red Hat SRE for log-based alerting.
sftp.access.redhat.com (Recommended) | 22 | The SFTP server used by must-gather-operator to upload diagnostic logs to help troubleshoot issues with the cluster.
Allowlist the following URLs for optional third-party content:
Domain | Port | Function
---|---|---
registry.connect.redhat.com | 443 | Required for all third-party images and certified Operators.
rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com | 443 | Provides access to container images hosted on registry.connect.redhat.com.
oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com | 443 | Required for Sonatype Nexus, F5 Big IP operators.
- Allowlist any site that provides resources for a language or framework that your builds require.
- Allowlist any outbound URLs that depend on the languages and frameworks used in OpenShift. See OpenShift Outbound URLs to Allow for a list of recommended URLs to be allowed on the firewall or proxy.
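After configuring the firewall, a quick connectivity sketch such as the following can help confirm that key domains are reachable from inside the network; the host list is a small sample from the tables above, not the complete set:
$ for host in registry.redhat.io quay.io sso.redhat.com console.redhat.com api.openshift.com; do
    curl -sS -o /dev/null --connect-timeout 5 "https://${host}" \
      && echo "${host}: reachable" || echo "${host}: blocked or unreachable"
  done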
2.5.2.2. ROSA with HCP
Prerequisites
- You have configured an Amazon S3 gateway endpoint in your AWS Virtual Private Cloud (VPC). This endpoint is required to complete requests from the cluster to the Amazon S3 service.
Procedure
Allowlist the following URLs that are used to download and install packages and tools:
Domain | Port | Function
---|---|---
quay.io | 443 | Provides core container images.
cdn01.quay.io | 443 | Provides core container images.
cdn02.quay.io | 443 | Provides core container images.
cdn03.quay.io | 443 | Provides core container images.
quayio-production-s3.s3.amazonaws.com | 443 | Provides core container images.
registry.redhat.io | 443 | Provides core container images.
registry.access.redhat.com | 443 | Required. Hosts all the container images that are stored on the Red Hat Ecosystem Catalog. Additionally, the registry provides access to the odo CLI tool that helps developers build on OpenShift and Kubernetes.
access.redhat.com | 443 | Required. Hosts a signature store that a container client requires for verifying images when pulling them from registry.access.redhat.com.
api.openshift.com | 443 | Required. Used to check for available updates to the cluster.
mirror.openshift.com | 443 | Required. Used to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator (CVO) needs only a single functioning source.
Allowlist the following telemetry URLs:
Domain | Port | Function
---|---|---
infogw.api.openshift.com | 443 | Required for telemetry.
console.redhat.com | 443 | Required. Allows interactions between the cluster and OpenShift Cluster Manager to enable functionality, such as scheduling upgrades.
sso.redhat.com | 443 | Required. The https://console.redhat.com/openshift site uses authentication from sso.redhat.com to download the pull secret and use Red Hat SaaS solutions to facilitate monitoring of your subscriptions, cluster inventory, chargeback reporting, and so on.
Managed clusters require enabling telemetry to allow Red Hat to react more quickly to problems, better support the customers, and better understand how product upgrades impact clusters. For more information about how remote health monitoring data is used by Red Hat, see About remote health monitoring in the Additional resources section.
Allowlist the following Amazon Web Services (AWS) API URLs:
Domain | Port | Function
---|---|---
sts.<aws_region>.amazonaws.com [1] | 443 | Required. Used to access the AWS Security Token Service (STS) regional endpoint. Ensure that you replace <aws_region> with the region that your cluster is deployed in.
sts.amazonaws.com [2] | 443 | See footnote. Used to access the AWS Security Token Service (STS) global endpoint.
1. This can also be accomplished by configuring a private interface endpoint in your AWS Virtual Private Cloud (VPC) to the regional AWS STS endpoint (see the sketch after these notes).
2. The AWS STS global endpoint needs to be allowed only if you are running a version of OpenShift before 4.14.18 or 4.15.4. ROSA HCP versions 4.14.18+, 4.15.4+, and 4.16.0+ use the AWS STS regional endpoint.
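As footnote 1 notes, a private interface endpoint to the regional STS endpoint is an alternative to public egress. A hedged sketch of creating one with the AWS CLI (all IDs and the region are placeholders):
$ aws ec2 create-vpc-endpoint --vpc-id <vpc-id> \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.<aws_region>.sts \
  --subnet-ids <subnet-id> --security-group-ids <security-group-id>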
Allowlist the following URLs for optional third-party content:
Domain | Port | Function
---|---|---
registry.connect.redhat.com | 443 | Optional. Required for all third-party images and certified Operators.
rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com | 443 | Optional. Provides access to container images hosted on registry.connect.redhat.com.
oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com | 443 | Optional. Required for Sonatype Nexus, F5 Big IP operators.
- Allowlist any site that provides resources for a language or framework that your builds require.
- Allowlist any outbound URLs that depend on the languages and frameworks used in OpenShift. See OpenShift Outbound URLs to Allow for a list of recommended URLs to be allowed on the firewall or proxy.
Additional resources
2.6. Next steps
2.7. Additional resources
Chapter 3. ROSA IAM role resources
The Red Hat OpenShift Service on AWS (ROSA) web UI requires that you have specific permissions on your AWS account that create a trust relationship to provide the end-user experience at OpenShift Cluster Manager and for the rosa command-line interface (CLI).
This trust relationship is achieved through the creation and association of the ocm-role AWS IAM role. This role has a trust policy with the AWS installer that links your Red Hat account to your AWS account. In addition, you also need a user-role AWS IAM role for each web UI user, which serves to identify these users. This user-role AWS IAM role has no permissions.
The AWS IAM roles required to use OpenShift Cluster Manager are:
- ocm-role
- user-role
Whether you manage your clusters using the ROSA CLI (rosa) or OpenShift Cluster Manager web UI, you must create the account-wide roles, known as account-roles in the ROSA CLI, by using the ROSA CLI. These account roles are necessary for your first cluster, and these roles can be used across multiple clusters. These required account roles are:
- Worker-Role
- Support-Role
- Installer-Role
- ControlPlane-Role
Role creation does not request your AWS access or secret keys. AWS Security Token Service (STS) is used as the basis of this workflow. AWS STS uses temporary, limited-privilege credentials to provide authentication.
For more information about creating these roles, see Account-wide IAM role and policy reference.
Cluster-specific Operator roles, known as operator-roles in the ROSA CLI, obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage, ingress, and registry. These roles are required by the cluster that you create. These required Operator roles are:
- <cluster_name>-<hash>-openshift-cluster-csi-drivers-ebs-cloud-credentials
- <cluster_name>-<hash>-openshift-cloud-network-config-controller-credentials
- <cluster_name>-<hash>-openshift-machine-api-aws-cloud-credentials
- <cluster_name>-<hash>-openshift-cloud-credential-operator-cloud-credentials
- <cluster_name>-<hash>-openshift-image-registry-installer-cloud-credentials
- <cluster_name>-<hash>-openshift-ingress-operator-cloud-credentials
For more information on creating these roles, see Cluster-specific Operator IAM role reference.
3.1. About the ocm-role IAM resource
You must create the ocm-role IAM resource to enable a Red Hat organization of users to create Red Hat OpenShift Service on AWS (ROSA) clusters. Within the context of linking to AWS, a Red Hat organization is a single user within OpenShift Cluster Manager.
Some considerations for your ocm-role IAM resource are:
- Only one ocm-role IAM role can be linked per Red Hat organization; however, you can have any number of ocm-role IAM roles per AWS account. The web UI requires that only one of these roles can be linked at a time.
- Any user in a Red Hat organization may create and link an ocm-role IAM resource.
- Only the Red Hat Organization Administrator can unlink an ocm-role IAM resource. This limitation is to protect other Red Hat organization members from disturbing the interface capabilities of other users.
Note: If you just created a Red Hat account that is not part of an existing organization, this account is also the Red Hat Organization Administrator.
- See "Understanding the OpenShift Cluster Manager role" in the Additional resources of this section for a list of the AWS permissions policies for the basic and admin ocm-role IAM resources.
Using the ROSA CLI (rosa), you can link your IAM resource when you create it.
"Linking" or "associating" your IAM resources with your AWS account means creating a trust policy with your ocm-role IAM role and the Red Hat OpenShift Cluster Manager AWS role. After creating and linking your IAM resource, you see a trust relationship from your ocm-role IAM resource in AWS with the arn:aws:iam::7333:role/RH-Managed-OpenShift-Installer resource.
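To view that trust relationship from the AWS side, you can inspect the role's assume-role policy document; the role name below is the example name from the creation output later in this section:
$ aws iam get-role --role-name ManagedOpenShift-OCM-Role-182 \
  --query 'Role.AssumeRolePolicyDocument'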
After a Red Hat Organization Administrator has created and linked an ocm-role IAM resource, all organization members may want to create and link their own user-role IAM role. This IAM resource needs to be created and linked only once per user. If another user in your Red Hat organization has already created and linked an ocm-role IAM resource, you still need to ensure you have created and linked your own user-role IAM role.
Additional resources
3.1.1. Creating an ocm-role IAM role
You create your ocm-role IAM roles by using the command-line interface (CLI).
Prerequisites
- You have an AWS account.
- You have Red Hat Organization Administrator privileges in the OpenShift Cluster Manager organization.
- You have the permissions required to install AWS account-wide roles.
- You have installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, on your installation host.
Procedure
To create an ocm-role IAM role with basic privileges, run the following command:
$ rosa create ocm-role
To create an ocm-role IAM role with admin privileges, run the following command:
$ rosa create ocm-role --admin
This command allows you to create the role by specifying specific attributes. The following example output shows the "auto mode" selected, which lets the ROSA CLI (rosa) create your Operator roles and policies. See "Methods of account-wide role creation" in the Additional resources for more information.
Example output
I: Creating ocm role
? Role prefix: ManagedOpenShift 1
? Enable admin capabilities for the OCM role (optional): No 2
? Permissions boundary ARN (optional): 3
? Role Path (optional): 4
? Role creation mode: auto 5
I: Creating role using 'arn:aws:iam::<ARN>:user/<UserName>'
? Create the 'ManagedOpenShift-OCM-Role-182' role? Yes 6
I: Created role 'ManagedOpenShift-OCM-Role-182' with ARN 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182'
I: Linking OCM role
? OCM Role ARN: arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182 7
? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' role with organization '<AWS ARN>'? Yes 8
I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-OCM-Role-182' with organization account '<AWS ARN>'
1. A prefix value for all of the created AWS resources. In this example, ManagedOpenShift prepends all of the AWS resources.
2. Choose if you want this role to have the additional admin permissions. Note: You do not see this prompt if you used the --admin option.
3. The Amazon Resource Name (ARN) of the policy to set permission boundaries.
4. Specify an IAM path for the user name.
5. Choose the method to create your AWS roles. Using auto, the ROSA CLI generates and links the roles and policies. In the auto mode, you receive some different prompts to create the AWS roles.
6. The auto method asks if you want to create a specific ocm-role using your prefix.
7. Confirm that you want to associate your IAM role with your OpenShift Cluster Manager.
8. Links the created role with your AWS organization.
3.2. About the user-role IAM role
You need to create a user-role IAM role per web UI user to enable those users to create ROSA clusters.
Some considerations for your user-role IAM role are:
- You need only one user-role IAM role per Red Hat user account, but your Red Hat organization can have many of these IAM resources.
- Any user in a Red Hat organization may create and link a user-role IAM role.
- There can be numerous user-role IAM roles per AWS account per Red Hat organization.
- Red Hat uses the user-role IAM role to identify the user. This IAM resource has no AWS account permissions.
- Your AWS account can have multiple user-role IAM roles, but you must link each IAM role to each user in your Red Hat organization. No user can have more than one linked user-role IAM role.
"Linking" or "associating" your IAM resources with your AWS account means creating a trust-policy with your user-role
IAM role and the Red Hat OpenShift Cluster Manager AWS role. After creating and linking this IAM resource, you see a trust relationship from your user-role
IAM role in AWS with the arn:aws:iam::710019948333:role/RH-Managed-OpenShift-Installer
resource.
3.2.1. Creating a user-role IAM role
You can create your user-role IAM roles by using the command-line interface (CLI).
Prerequisites
- You have an AWS account.
- You have installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, on your installation host.
Procedure
To create a user-role IAM role with basic privileges, run the following command:
$ rosa create user-role
This command allows you to create the role by specifying specific attributes. The following example output shows the "auto mode" selected, which lets the ROSA CLI (rosa) create your Operator roles and policies. See "Understanding the auto and manual deployment modes" in the Additional resources for more information.
Example output
I: Creating User role
? Role prefix: ManagedOpenShift 1
? Permissions boundary ARN (optional): 2
? Role Path (optional): 3
? Role creation mode: auto 4
I: Creating ocm user role using 'arn:aws:iam::2066:user'
? Create the 'ManagedOpenShift-User.osdocs-Role' role? Yes 5
I: Created role 'ManagedOpenShift-User.osdocs-Role' with ARN 'arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role'
I: Linking User role
? User Role ARN: arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role
? Link the 'arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role' role with account '1AGE'? Yes 6
I: Successfully linked role ARN 'arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role' with account '1AGE'
1. A prefix value for all of the created AWS resources. In this example, ManagedOpenShift prepends all of the AWS resources.
2. The Amazon Resource Name (ARN) of the policy to set permission boundaries.
3. Specify an IAM path for the user name.
4. Choose the method to create your AWS roles. Using auto, the ROSA CLI generates and links the roles and policies. In the auto mode, you receive some different prompts to create the AWS roles.
5. The auto method asks if you want to create a specific user-role using your prefix.
6. Links the created role with your AWS account.
If you unlink or delete your user-role IAM role prior to deleting your cluster, an error prevents you from deleting your cluster. You must create or relink this role to proceed with the deletion process. See Repairing a cluster that cannot be deleted for more information.
3.3. AWS account association
Red Hat OpenShift Service on AWS (ROSA) cluster-provisioning tasks require linking ocm-role and user-role IAM roles to your AWS account using your Amazon Resource Name (ARN).
The ocm-role ARN is stored as a label in your Red Hat organization, while the user-role ARN is stored as a label inside your Red Hat user account. Red Hat uses these ARN labels to confirm that the user is a valid account holder and that the correct permissions are available to perform the necessary tasks in the AWS account.
3.3.1. Linking your AWS account
You can link your AWS account to existing IAM roles by using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa.
Prerequisites
- You have an AWS account.
- You are using OpenShift Cluster Manager to create clusters.
- You have the permissions required to install AWS account-wide roles. See the "Additional resources" of this section for more information.
- You have installed and configured the latest AWS (aws) and ROSA (rosa) CLIs on your installation host.
- You have created your ocm-role and user-role IAM roles, but have not yet linked them to your AWS account. You can check whether your IAM roles are already linked by running the following commands:
$ rosa list ocm-role
$ rosa list user-role
If Yes is displayed in the Linked column for both roles, you have already linked the roles to an AWS account.
Procedure
From the CLI, link your ocm-role resource to your Red Hat organization by using your Amazon Resource Name (ARN):
Note: You must have Red Hat Organization Administrator privileges to run the rosa link command. After you link the ocm-role resource with your AWS account, it is visible for all users in the organization.
$ rosa link ocm-role --role-arn <arn>
Example output
I: Linking OCM role
? Link the '<AWS ACCOUNT ID>' role with organization '<ORG ID>'? Yes
I: Successfully linked role-arn '<AWS ACCOUNT ID>' with organization account '<ORG ID>'
From the CLI, link your user-role resource to your Red Hat user account by using your Amazon Resource Name (ARN):
$ rosa link user-role --role-arn <arn>
Example output
I: Linking User role
? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' role with organization '<AWS ID>'? Yes
I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' with organization account '<AWS ID>'
3.3.2. Associating multiple AWS accounts with your Red Hat organization
You can associate multiple AWS accounts with your Red Hat organization. Associating multiple accounts lets you create Red Hat OpenShift Service on AWS (ROSA) clusters on any of the associated AWS accounts from your Red Hat organization.
With this feature, you can create clusters in different AWS regions by using multiple AWS profiles as region-bound environments.
Prerequisites
- You have an AWS account.
- You are using OpenShift Cluster Manager to create clusters.
- You have the permissions required to install AWS account-wide roles.
- You have installed and configured the latest AWS (aws) and ROSA (rosa) CLIs on your installation host.
- You have created your ocm-role and user-role IAM roles.
Procedure
To associate an additional AWS account, first create a profile in your local AWS configuration. Then, associate the account with your Red Hat organization by creating the ocm-role, user, and account roles in the additional AWS account.
To create the roles in an additional region, specify the --profile <aws_profile> parameter when running the rosa create commands and replace <aws_profile> with the additional account profile name:
To specify an AWS account profile when creating an OpenShift Cluster Manager role:
$ rosa create --profile <aws_profile> ocm-role
To specify an AWS account profile when creating a user role:
$ rosa create --profile <aws_profile> user-role
To specify an AWS account profile when creating the account roles:
$ rosa create --profile <aws_profile> account-roles
If you do not specify a profile, the default AWS profile is used.
3.4. Permission boundaries for the installer role
You can apply a policy as a permissions boundary on an installer role. You can use an AWS-managed policy or a customer-managed policy to set the boundary for an Amazon Web Services (AWS) Identity and Access Management (IAM) entity (user or role). The combination of policy and boundary policy limits the maximum permissions for the user or role. ROSA includes a set of three prepared permission boundary policy files, with which you can restrict permissions for the installer role, since changing the installer policy itself is not supported.
This feature is only supported on Red Hat OpenShift Service on AWS (classic architecture) clusters.
The permission boundary policy files are as follows:
- The Core boundary policy file contains the minimum permissions needed for the ROSA (classic architecture) installer to install a Red Hat OpenShift Service on AWS cluster. The installer does not have permissions to create a virtual private cloud (VPC) or PrivateLink (PL). A VPC needs to be provided.
- The VPC boundary policy file contains the minimum permissions needed for the ROSA (classic architecture) installer to create and manage the VPC. It does not include permissions for PL or core installation. If you need to install a cluster with enough permissions for the installer to install the cluster and create and manage the VPC, but you do not need to set up PL, then use the core and VPC boundary files together with the installer role.
- The PrivateLink (PL) boundary policy file contains the minimum permissions needed for the ROSA (classic architecture) installer to create the AWS PL with a cluster. It does not include permissions for VPC or core installation. Provide a pre-created VPC for all PL clusters during installation.
When using the permission boundary policy files, the following combinations apply:
- No permission boundary policies means that the full installer policy permissions apply to your cluster.
- Core only sets the most restricted permissions for the installer role. The VPC and PL permissions are not included in the Core only boundary policy.
  - The installer cannot create or manage the VPC or PL.
  - You must have a customer-provided VPC, and PrivateLink (PL) is not available.
- Core + VPC sets the core and VPC permissions for the installer role.
  - The installer cannot create or manage the PL.
  - Assumes you are not using a custom/BYO VPC.
  - Assumes the installer will create and manage the VPC.
- Core + PrivateLink (PL) means the installer can provision the PL infrastructure.
  - You must have a customer-provided VPC.
  - This is for a private cluster with PL.
This example procedure applies to an installer role and policy with the most restrictive permissions, using only the core installer permission boundary policy for ROSA. You can complete this with the AWS console or the AWS CLI. This example uses the AWS CLI and the following policy:
Example 3.1. sts_installer_core_permission_boundary_policy.json
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "autoscaling:DescribeAutoScalingGroups", "ec2:AllocateAddress", "ec2:AssociateAddress", "ec2:AttachNetworkInterface", "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", "ec2:CopyImage", "ec2:CreateNetworkInterface", "ec2:CreateSecurityGroup", "ec2:CreateTags", "ec2:CreateVolume", "ec2:DeleteNetworkInterface", "ec2:DeleteSecurityGroup", "ec2:DeleteSnapshot", "ec2:DeleteTags", "ec2:DeleteVolume", "ec2:DeregisterImage", "ec2:DescribeAccountAttributes", "ec2:DescribeAddresses", "ec2:DescribeAvailabilityZones", "ec2:DescribeDhcpOptions", "ec2:DescribeImages", "ec2:DescribeInstanceAttribute", "ec2:DescribeInstanceCreditSpecifications", "ec2:DescribeInstances", "ec2:DescribeInstanceStatus", "ec2:DescribeInstanceTypeOfferings", "ec2:DescribeInstanceTypes", "ec2:DescribeInternetGateways", "ec2:DescribeKeyPairs", "ec2:DescribeNatGateways", "ec2:DescribeNetworkAcls", "ec2:DescribeNetworkInterfaces", "ec2:DescribePrefixLists", "ec2:DescribeRegions", "ec2:DescribeReservedInstancesOfferings", "ec2:DescribeRouteTables", "ec2:DescribeSecurityGroups", "ec2:DescribeSecurityGroupRules", "ec2:DescribeSubnets", "ec2:DescribeTags", "ec2:DescribeVolumes", "ec2:DescribeVpcAttribute", "ec2:DescribeVpcClassicLink", "ec2:DescribeVpcClassicLinkDnsSupport", "ec2:DescribeVpcEndpoints", "ec2:DescribeVpcs", "ec2:GetConsoleOutput", "ec2:GetEbsDefaultKmsKeyId", "ec2:ModifyInstanceAttribute", "ec2:ModifyNetworkInterfaceAttribute", "ec2:ReleaseAddress", "ec2:RevokeSecurityGroupEgress", "ec2:RevokeSecurityGroupIngress", "ec2:RunInstances", "ec2:StartInstances", "ec2:StopInstances", "ec2:TerminateInstances", "elasticloadbalancing:AddTags", "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", "elasticloadbalancing:AttachLoadBalancerToSubnets", "elasticloadbalancing:ConfigureHealthCheck", "elasticloadbalancing:CreateListener", "elasticloadbalancing:CreateLoadBalancer", "elasticloadbalancing:CreateLoadBalancerListeners", "elasticloadbalancing:CreateTargetGroup", "elasticloadbalancing:DeleteLoadBalancer", "elasticloadbalancing:DeleteTargetGroup", "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", "elasticloadbalancing:DeregisterTargets", "elasticloadbalancing:DescribeInstanceHealth", "elasticloadbalancing:DescribeListeners", "elasticloadbalancing:DescribeLoadBalancerAttributes", "elasticloadbalancing:DescribeLoadBalancers", "elasticloadbalancing:DescribeTags", "elasticloadbalancing:DescribeTargetGroupAttributes", "elasticloadbalancing:DescribeTargetGroups", "elasticloadbalancing:DescribeTargetHealth", "elasticloadbalancing:ModifyLoadBalancerAttributes", "elasticloadbalancing:ModifyTargetGroup", "elasticloadbalancing:ModifyTargetGroupAttributes", "elasticloadbalancing:RegisterInstancesWithLoadBalancer", "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:SetLoadBalancerPoliciesOfListener", "elasticloadbalancing:SetSecurityGroups", "iam:AddRoleToInstanceProfile", "iam:CreateInstanceProfile", "iam:DeleteInstanceProfile", "iam:GetInstanceProfile", "iam:TagInstanceProfile", "iam:GetRole", "iam:GetRolePolicy", "iam:GetUser", "iam:ListAttachedRolePolicies", "iam:ListInstanceProfiles", "iam:ListInstanceProfilesForRole", "iam:ListRolePolicies", "iam:ListRoles", "iam:ListUserPolicies", "iam:ListUsers", "iam:PassRole", "iam:RemoveRoleFromInstanceProfile", "iam:SimulatePrincipalPolicy", "iam:TagRole", "iam:UntagRole", "route53:ChangeResourceRecordSets", "route53:ChangeTagsForResource", "route53:CreateHostedZone", 
"route53:DeleteHostedZone", "route53:GetAccountLimit", "route53:GetChange", "route53:GetHostedZone", "route53:ListHostedZones", "route53:ListHostedZonesByName", "route53:ListResourceRecordSets", "route53:ListTagsForResource", "route53:UpdateHostedZoneComment", "s3:CreateBucket", "s3:DeleteBucket", "s3:DeleteObject", "s3:GetAccelerateConfiguration", "s3:GetBucketAcl", "s3:GetBucketCORS", "s3:GetBucketLocation", "s3:GetBucketLogging", "s3:GetBucketObjectLockConfiguration", "s3:GetBucketPolicy", "s3:GetBucketRequestPayment", "s3:GetBucketTagging", "s3:GetBucketVersioning", "s3:GetBucketWebsite", "s3:GetEncryptionConfiguration", "s3:GetLifecycleConfiguration", "s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectTagging", "s3:GetObjectVersion", "s3:GetReplicationConfiguration", "s3:ListBucket", "s3:ListBucketVersions", "s3:PutBucketAcl", "s3:PutBucketPolicy", "s3:PutBucketTagging", "s3:PutEncryptionConfiguration", "s3:PutObject", "s3:PutObjectAcl", "s3:PutObjectTagging", "servicequotas:GetServiceQuota", "servicequotas:ListAWSDefaultServiceQuotas", "sts:AssumeRole", "sts:AssumeRoleWithWebIdentity", "sts:GetCallerIdentity", "tag:GetResources", "tag:UntagResources", "kms:DescribeKey", "cloudwatch:GetMetricData", "ec2:CreateRoute", "ec2:DeleteRoute", "ec2:CreateVpcEndpoint", "ec2:DeleteVpcEndpoints", "ec2:CreateVpcEndpointServiceConfiguration", "ec2:DeleteVpcEndpointServiceConfigurations", "ec2:DescribeVpcEndpointServiceConfigurations", "ec2:DescribeVpcEndpointServicePermissions", "ec2:DescribeVpcEndpointServices", "ec2:ModifyVpcEndpointServicePermissions" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue" ], "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceTag/red-hat-managed": "true" } } } ] }
To use the permission boundaries, prepare the permission boundary policy and add it to the relevant installer role in AWS IAM. Although the ROSA CLI (rosa) offers a permission boundary function, it applies to all roles, not just the installer role, which means it does not work with the provided permission boundary policies, because these are intended for the installer role only.
Prerequisites
- You have an AWS account.
- You have the permissions required to administer AWS roles and policies.
- You have installed and configured the latest AWS (aws) and ROSA (rosa) CLIs on your workstation.
- You have already prepared your ROSA account-wide roles, including the installer role, and the corresponding policies. If these do not exist in your AWS account, see "Creating the account-wide STS roles and policies" in Additional resources.
Procedure
Prepare the policy file by entering the following command in a terminal:
$ curl -o ./rosa-installer-core.json https://raw.githubusercontent.com/openshift/managed-cluster-config/master/resources/sts/4.17/sts_installer_core_permission_boundary_policy.json
Create the policy in AWS and gather its Amazon Resource Name (ARN) by entering the following command:
$ aws iam create-policy \
    --policy-name rosa-core-permissions-boundary-policy \
    --policy-document file://./rosa-installer-core.json \
    --description "ROSA installer core permission boundary policy, the minimum permission set, allows BYO-VPC, disallows PrivateLink"
Example output
{ "Policy": { "PolicyName": "rosa-core-permissions-boundary-policy", "PolicyId": "<Policy ID>", "Arn": "arn:aws:iam::<account ID>:policy/rosa-core-permissions-boundary-policy", "Path": "/", "DefaultVersionId": "v1", "AttachmentCount": 0, "PermissionsBoundaryUsageCount": 0, "IsAttachable": true, "CreateDate": "<CreateDate>", "UpdateDate": "<UpdateDate>" } }
Add the permission boundary policy to the installer role you want to restrict by entering the following command:
$ aws iam put-role-permissions-boundary \
    --role-name ManagedOpenShift-Installer-Role \
    --permissions-boundary arn:aws:iam::<account ID>:policy/rosa-core-permissions-boundary-policy
Display the installer role to validate the attached policies, including the permissions boundary, by entering the following command:
$ aws iam get-role --role-name ManagedOpenShift-Installer-Role \
    --output text | grep PERMISSIONSBOUNDARY
Example output
PERMISSIONSBOUNDARY arn:aws:iam::<account ID>:policy/rosa-core-permissions-boundary-policy Policy
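If you need to remove the boundary later, for example while troubleshooting installer permissions, the aws CLI provides a matching delete operation. A minimal sketch, assuming the same role name used above:
$ aws iam delete-role-permissions-boundary \
    --role-name ManagedOpenShift-Installer-Role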
For more examples of PrivateLink and VPC permission boundary policies, see:
Example 3.2.
sts_installer_privatelink_permission_boundary_policy.json
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:ModifyVpcEndpointServiceConfiguration", "route53:ListHostedZonesByVPC", "route53:CreateVPCAssociationAuthorization", "route53:AssociateVPCWithHostedZone", "route53:DeleteVPCAssociationAuthorization", "route53:DisassociateVPCFromHostedZone", "route53:ChangeResourceRecordSets" ], "Resource": "*" } ] }
Example 3.3.
sts_installer_vpc_permission_boundary_policy.json
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:AssociateDhcpOptions", "ec2:AssociateRouteTable", "ec2:AttachInternetGateway", "ec2:CreateDhcpOptions", "ec2:CreateInternetGateway", "ec2:CreateNatGateway", "ec2:CreateRouteTable", "ec2:CreateSubnet", "ec2:CreateVpc", "ec2:DeleteDhcpOptions", "ec2:DeleteInternetGateway", "ec2:DeleteNatGateway", "ec2:DeleteRouteTable", "ec2:DeleteSubnet", "ec2:DeleteVpc", "ec2:DetachInternetGateway", "ec2:DisassociateRouteTable", "ec2:ModifySubnetAttribute", "ec2:ModifyVpcAttribute", "ec2:ReplaceRouteTableAssociation" ], "Resource": "*" } ] }
3.5. Additional resources
- See Permissions boundaries for IAM entities (AWS documentation).
- See Creating the account-wide STS roles and policies.
- See Troubleshooting IAM roles.
- See Account-wide IAM role and policy reference for a list of IAM roles needed for cluster creation.
Chapter 4. Limits and scalability
This document details the tested cluster maximums for Red Hat OpenShift Service on AWS (ROSA) clusters, along with information about the test environment and configuration used to test the maximums. Information about control plane and infrastructure node sizing and scaling is also provided.
4.1. Cluster maximums
Consider the following tested object maximums when you plan a Red Hat OpenShift Service on AWS (ROSA) cluster installation. The table specifies the maximum limits for each tested type in a ROSA cluster.
These guidelines are based on a cluster of 180 compute (also known as worker) nodes in a multiple availability zone configuration. For smaller clusters, the maximums are lower.
Maximum type | 4.x tested maximum |
---|---|
Number of pods [1] | 25,000 |
Number of pods per node | 250 |
Number of pods per core | There is no default value |
Number of namespaces [2] | 5,000 |
Number of pods per namespace [3] | 25,000 |
Number of services [4] | 10,000 |
Number of services per namespace | 5,000 |
Number of back ends per service | 5,000 |
Number of deployments per namespace [3] | 2,000 |
- The pod count displayed here is the number of test pods. The actual number of pods depends on the memory, CPU, and storage requirements of the application.
- When there are a large number of active projects, etcd can suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to make etcd storage available.
- There are several control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a type, in a single namespace, can make those loops expensive and slow down processing the state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
- Each service port and each service back end has a corresponding entry in iptables. The number of back ends of a given service impacts the size of the endpoints objects, which then impacts the size of data sent throughout the system.
4.2. OpenShift Container Platform testing environment and configuration
The following table lists the OpenShift Container Platform environment and configuration on which the cluster maximums are tested for the AWS cloud platform.
Node | Type | vCPU | RAM(GiB) | Disk type | Disk size(GiB)/IOPS | Count | Region |
---|---|---|---|---|---|---|---|
Control plane/etcd [1] | m5.4xlarge | 16 | 64 | gp3 | 350 / 1,000 | 3 | us-west-2 |
Infrastructure nodes [2] | r5.2xlarge | 8 | 64 | gp3 | 300 / 900 | 3 | us-west-2 |
Workload [3] | m5.2xlarge | 8 | 32 | gp3 | 350 / 900 | 3 | us-west-2 |
Compute nodes | m5.2xlarge | 8 | 32 | gp3 | 350 / 900 | 102 | us-west-2 |
- io1 disks are used for control plane/etcd nodes in all versions prior to 4.10.
- Infrastructure nodes are used to host monitoring components because Prometheus can claim a large amount of memory, depending on usage patterns.
- Workload nodes are dedicated to run performance and scalability workload generators.
Larger cluster sizes and higher object counts might be reachable. However, the sizing of the infrastructure nodes limits the amount of memory that is available to Prometheus. When creating, modifying, or deleting objects, Prometheus stores the metrics in its memory for roughly 3 hours prior to persisting the metrics on disk. If the rate of creation, modification, or deletion of objects is too high, Prometheus can become overwhelmed and fail due to the lack of memory resources.
4.3. Control plane and infrastructure node sizing and scaling
When you install a Red Hat OpenShift Service on AWS (ROSA) cluster, the sizing of the control plane and infrastructure nodes is automatically determined by the compute node count.
If you change the number of compute nodes in your cluster after installation, the Red Hat Site Reliability Engineering (SRE) team scales the control plane and infrastructure nodes as required to maintain cluster stability.
4.3.1. Node sizing during installation
During the installation process, the sizing of the control plane and infrastructure nodes is dynamically calculated. The sizing calculation is based on the number of compute nodes in a cluster.
The following table lists the control plane and infrastructure node sizing that is applied during installation.
Number of compute nodes | Control plane size | Infrastructure node size |
---|---|---|
1 to 25 | m5.2xlarge | r5.xlarge |
26 to 100 | m5.4xlarge | r5.2xlarge |
101 to 180 | m5.8xlarge | r5.4xlarge |
The maximum number of compute nodes on ROSA is 180.
4.3.2. Node scaling after installation
If you change the number of compute nodes after installation, the control plane and infrastructure nodes are scaled by the Red Hat Site Reliability Engineering (SRE) team as required. The nodes are scaled to maintain platform stability.
Postinstallation scaling requirements for control plane and infrastructure nodes are assessed on a case-by-case basis. Node resource consumption and received alerts are taken into consideration.
Rules for control plane node resizing alerts
The resizing alert is triggered for the control plane nodes in a cluster when the following occurs:
- Control plane nodes sustain over 66% utilization on average in a cluster.
Note: The maximum number of compute nodes on ROSA is 180.
Rules for infrastructure node resizing alerts
Resizing alerts are triggered for the infrastructure nodes in a cluster when they have sustained high CPU or memory utilization. Sustained high utilization means:
- Infrastructure nodes sustain over 50% utilization on average in a cluster with a single availability zone using 2 infrastructure nodes.
- Infrastructure nodes sustain over 66% utilization on average in a cluster with multiple availability zones using 3 infrastructure nodes.
Note: The maximum number of compute nodes on ROSA is 180.
The resizing alerts only appear after sustained periods of high utilization. Short usage spikes, such as a node temporarily going down causing the other node to scale up, do not trigger these alerts.
The SRE team might scale the control plane and infrastructure nodes for additional reasons, for example to manage an increase in resource consumption on the nodes.
When scaling is applied, the customer is notified through a service log entry. For more information about the service log, see Accessing the service logs for ROSA clusters.
4.3.3. Sizing considerations for larger clusters
For larger clusters, infrastructure node sizing can become a significant limiting factor for scalability. There are many factors that influence the stated thresholds, including the etcd version and the storage data format.
Exceeding these limits does not necessarily mean that the cluster will fail. In most cases, exceeding these numbers results in lower overall performance.
4.4. Next steps
4.5. Additional resources
Chapter 5. ROSA with HCP limits and scalability
This document details the tested cluster maximums for Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters, along with information about the test environment and configuration used to test the maximums. For ROSA with HCP clusters, the control plane is fully managed in the service AWS account and will automatically scale with the cluster.
5.1. ROSA with HCP cluster maximums
Consider the following tested object maximums when you plan a Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) cluster installation. The table specifies the maximum limits for each tested type in a ROSA with HCP cluster.
These guidelines are based on a cluster of 500 compute (also known as worker) nodes. For smaller clusters, the maximums are lower.
Customers running ROSA with HCP 4.14.x and 4.15.x clusters require a minimum z-stream version of 4.14.28 or 4.15.15, respectively, to scale to 500 worker nodes. For earlier z-stream versions, the maximum is 90 worker nodes.
Maximum type | 4.x tested maximum |
---|---|
Number of pods [1] | 25,000 |
Number of pods per node | 250 |
Number of pods per core | There is no default value |
Number of namespaces [2] | 5,000 |
Number of pods per namespace [3] | 25,000 |
Number of services [4] | 10,000 |
Number of services per namespace | 5,000 |
Number of back ends per service | 5,000 |
Number of deployments per namespace [3] | 2,000 |
- The pod count displayed here is the number of test pods. The actual number of pods depends on the memory, CPU, and storage requirements of the application.
- When there are a large number of active projects, etcd can suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to make etcd storage available.
- There are several control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a type, in a single namespace, can make those loops expensive and slow down processing the state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
- Each service port and each service back end has a corresponding entry in iptables. The number of back ends of a given service impacts the size of the endpoints objects, which then impacts the size of data sent throughout the system.
5.2. Next steps
5.3. Additional resources
Chapter 6. Planning your environment
6.1. Planning your environment based on tested cluster maximums
This document describes how to plan your Red Hat OpenShift Service on AWS environment based on the tested cluster maximums.
Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.
Some of the tested maximums are stretched only in a single dimension. They will vary when many objects are running on the cluster.
The numbers noted in this documentation are based on Red Hat testing methodology, setup, configuration, and tunings. These numbers can vary based on your own individual setup and environments.
While planning your environment, determine how many pods are expected to fit per node using the following formula:
required pods per cluster / pods per node = total number of nodes needed
The current maximum number of pods per node is 250. However, the number of pods that fit on a node is dependent on the application itself. Consider the application’s memory, CPU, and storage requirements, as described in Planning your environment based on application requirements.
Example scenario
If you want to scope your cluster for 2200 pods per cluster, you would need at least nine nodes, assuming that there are 250 maximum pods per node:
2200 / 250 = 8.8
If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node:
2200 / 20 = 110
Where:
required pods per cluster / total number of nodes = expected pods per node
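Because partial nodes are not possible, the first calculation must round up to cover the remainder. A minimal shell sketch of both calculations, using the example numbers above:

$ required_pods=2200; max_pods_per_node=250; node_count=20
$ echo $(( (required_pods + max_pods_per_node - 1) / max_pods_per_node ))  # nodes needed, rounded up
9
$ echo $(( required_pods / node_count ))  # expected pods per node at 20 nodes
110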
6.2. Planning your environment based on application requirements
This document describes how to plan your Red Hat OpenShift Service on AWS environment based on your application requirements.
Consider an example application environment:
Pod type | Pod quantity | Max memory | CPU cores | Persistent storage |
---|---|---|---|---|
apache | 100 | 500 MB | 0.5 | 1 GB |
node.js | 200 | 1 GB | 1 | 1 GB |
postgresql | 100 | 1 GB | 2 | 10 GB |
JBoss EAP | 100 | 1 GB | 1 | 1 GB |
Extrapolated requirements: 550 CPU cores, 450 GB RAM, and 1.4 TB storage.
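The extrapolated totals are simply the per-row products (pod quantity times per-pod requirement) summed across the table. A quick sketch of the arithmetic, using the values above:

$ awk 'BEGIN {
    cpu = 100*0.5 + 200*1 + 100*2 + 100*1            # CPU cores
    ram = 100*0.5 + 200*1 + 100*1 + 100*1            # RAM in GB
    tb  = (100*1 + 200*1 + 100*10 + 100*1) / 1000    # storage in TB
    printf "CPU: %d cores, RAM: %d GB, storage: %.1f TB\n", cpu, ram, tb
}'
CPU: 550 cores, RAM: 450 GB, storage: 1.4 TB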
Instance size for nodes can be modulated up or down, depending on your preference. Nodes are often resource overcommitted. In this deployment scenario, you can choose to run additional smaller nodes or fewer larger nodes to provide the same amount of resources. Factors such as operational agility and cost-per-instance should be considered.
Node type | Quantity | CPUs | RAM (GB) |
---|---|---|---|
Nodes (option 1) | 100 | 4 | 16 |
Nodes (option 2) | 50 | 8 | 32 |
Nodes (option 3) | 25 | 16 | 64 |
Some applications lend themselves well to overcommitted environments, and some do not. Most Java applications and applications that use huge pages do not allow for overcommitment, because their memory cannot be used for other applications. In the example above, the environment would be roughly 30 percent overcommitted, a common ratio.
The application pods can access a service either by using environment variables or DNS. If using environment variables, the variables are injected by the kubelet for each active service when a pod is run on a node. A cluster-aware DNS server watches the Kubernetes API for new services and creates a set of DNS records for each one. If DNS is enabled throughout your cluster, then all pods should automatically be able to resolve services by their DNS name. Use service discovery by DNS if you must go beyond 5,000 services. When environment variables are used for service discovery, if the argument list exceeds the allowed length after 5,000 services in a namespace, the pods and deployments start failing.
To overcome this, disable the service links (enableServiceLinks: false) in the deployment's service specification file:
Example
kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: deploymentConfigTemplate
  creationTimestamp:
  annotations:
    description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service.
    tags: ''
objects:
- kind: DeploymentConfig
  apiVersion: apps.openshift.io/v1
  metadata:
    name: deploymentconfig${IDENTIFIER}
  spec:
    template:
      metadata:
        labels:
          name: replicationcontroller${IDENTIFIER}
      spec:
        enableServiceLinks: false
        containers:
        - name: pause${IDENTIFIER}
          image: "${IMAGE}"
          ports:
          - containerPort: 8080
            protocol: TCP
          env:
          - name: ENVVAR1_${IDENTIFIER}
            value: "${ENV_VALUE}"
          - name: ENVVAR2_${IDENTIFIER}
            value: "${ENV_VALUE}"
          - name: ENVVAR3_${IDENTIFIER}
            value: "${ENV_VALUE}"
          - name: ENVVAR4_${IDENTIFIER}
            value: "${ENV_VALUE}"
          resources: {}
          imagePullPolicy: IfNotPresent
          capabilities: {}
          securityContext:
            capabilities: {}
            privileged: false
        restartPolicy: Always
        serviceAccount: ''
    replicas: 1
    selector:
      name: replicationcontroller${IDENTIFIER}
    triggers:
    - type: ConfigChange
    strategy:
      type: Rolling
- kind: Service
  apiVersion: v1
  metadata:
    name: service${IDENTIFIER}
  spec:
    selector:
      name: replicationcontroller${IDENTIFIER}
    ports:
    - name: serviceport${IDENTIFIER}
      protocol: TCP
      port: 80
      targetPort: 8080
    portalIP: ''
    type: ClusterIP
    sessionAffinity: None
  status:
    loadBalancer: {}
parameters:
- name: IDENTIFIER
  description: Number to append to the name of resources
  value: '1'
  required: true
- name: IMAGE
  description: Image to use for deploymentConfig
  value: gcr.io/google-containers/pause-amd64:3.0
  required: false
- name: ENV_VALUE
  description: Value to use for environment variables
  generate: expression
  from: "[A-Za-z0-9]{255}"
  required: false
labels:
  template: deploymentConfigTemplate
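To try the template, process it with the OpenShift CLI and create the resulting objects. A minimal sketch, assuming the template is saved as deployment-config-template.yaml (the file name is illustrative):
$ oc process -f deployment-config-template.yaml -p IDENTIFIER=1 | oc create -f -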
The number of application pods that can run in a namespace depends on the number of services and the length of the service names when environment variables are used for service discovery. ARG_MAX on the system defines the maximum argument length for a new process, and it is set to 2097152 bytes (2 MiB) by default. The kubelet injects environment variables into each pod scheduled to run in the namespace, including:
- <SERVICE_NAME>_SERVICE_HOST=<IP>
- <SERVICE_NAME>_SERVICE_PORT=<PORT>
- <SERVICE_NAME>_PORT=tcp://<IP>:<PORT>
- <SERVICE_NAME>_PORT_<PORT>_TCP=tcp://<IP>:<PORT>
- <SERVICE_NAME>_PORT_<PORT>_TCP_PROTO=tcp
- <SERVICE_NAME>_PORT_<PORT>_TCP_PORT=<PORT>
- <SERVICE_NAME>_PORT_<PORT>_TCP_ADDR=<ADDR>
The pods in the namespace start to fail if the total argument length exceeds the allowed value; the number of characters in each service name also affects the total length.
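You can confirm the argument-length limit on a Linux machine with a quick check:
$ getconf ARG_MAX
2097152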
Chapter 7. Required AWS service quotas
Review this list of the Amazon Web Services (AWS) service quotas that are required to run a Red Hat OpenShift Service on AWS cluster.
7.1. Required AWS service quotas
The table below describes the AWS service quotas and levels required to create and run one Red Hat OpenShift Service on AWS cluster. Although most default values are suitable for most workloads, you might need to request additional quota for the following cases:
- ROSA (classic architecture) clusters require a minimum AWS EC2 service quota of 100 vCPUs to provide for cluster creation, availability, and upgrades. The default maximum value for vCPUs assigned to Running On-Demand Standard Amazon EC2 instances is 5. Therefore, if you have not created a ROSA cluster using the same AWS account previously, you must request additional EC2 quota for Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances.
- Some optional cluster configuration features, such as custom security groups, might require you to request additional quota. For example, because ROSA associates 1 security group with network interfaces in worker machine pools by default, and the default quota for Security groups per network interface is 5, if you want to add 5 custom security groups, you must request additional quota, because this would bring the total number of security groups on worker network interfaces to 6.
The AWS SDK allows ROSA to check quotas, but the AWS SDK calculation does not account for your existing usage. Therefore, it is possible that the quota check can pass in the AWS SDK yet the cluster creation can fail. To fix this issue, increase your quota.
If you need to modify or increase a specific quota, see Amazon’s documentation on requesting a quota increase. Large quota requests are submitted to Amazon Support for review, and take some time to be approved. If your quota request is urgent, contact AWS Support.
Quota name | Service code | Quota code | AWS default | Minimum required | Description |
---|---|---|---|---|---|
Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances | ec2 | L-1216C47A | 5 | 100 | Maximum number of vCPUs assigned to the Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances. The default value of 5 vCPUs is not sufficient to create ROSA clusters. ROSA has a minimum requirement of 100 vCPUs for cluster creation. |
Storage for General Purpose SSD (gp2) volume storage in TiB | ebs | L-D18FCD1D | 50 | 300 | The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp2) volumes in this Region. |
Storage for General Purpose SSD (gp3) volume storage in TiB | ebs | L-7A658B76 | 50 | 300 | The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp3) volumes in this Region. 300 TiB of storage is the required minimum for optimal performance. |
Storage for Provisioned IOPS SSD (io1) volumes in TiB | ebs | L-FD252861 | 50 | 300 | The maximum aggregated amount of storage, in TiB, that can be provisioned across Provisioned IOPS SSD (io1) volumes in this Region. 300 TiB of storage is the required minimum for optimal performance. |
Quota name | Service code | Quota code | AWS default | Minimum required | Description |
---|---|---|---|---|---|
EC2-VPC Elastic IPs | ec2 | L-0263D0A3 | 5 | 5 | The maximum number of Elastic IP addresses that you can allocate for EC2-VPC in this Region. |
VPCs per Region | vpc | L-F678F1CE | 5 | 5 | The maximum number of VPCs per Region. This quota is directly tied to the maximum number of internet gateways per Region. |
Internet gateways per Region | vpc | L-A4707A72 | 5 | 5 | The maximum number of internet gateways per Region. This quota is directly tied to the maximum number of VPCs per Region. To increase this quota, increase the number of VPCs per Region. |
Network interfaces per Region | vpc | L-DF5E4CA3 | 5,000 | 5,000 | The maximum number of network interfaces per Region. |
Security groups per network interface | vpc | L-2AFB9258 | 5 | 5 | The maximum number of security groups per network interface. This quota, multiplied by the quota for rules per security group, cannot exceed 1000. |
Snapshots per Region | ebs | L-309BACF6 | 10,000 | 10,000 | The maximum number of snapshots per Region. |
IOPS for Provisioned IOPS SSD (io1) volumes | ebs | L-B3A130E6 | 300,000 | 300,000 | The maximum aggregated number of IOPS that can be provisioned across Provisioned IOPS SSD (io1) volumes in this Region. |
Application Load Balancers per Region | elasticloadbalancing | L-53DA6B97 | 50 | 50 | The maximum number of Application Load Balancers that can exist in each region. |
Classic Load Balancers per Region | elasticloadbalancing | L-E9E9831D | 20 | 20 | The maximum number of Classic Load Balancers that can exist in each region. |
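You can inspect a current quota value and request an increase from the aws CLI by using the service and quota codes in the tables above. A sketch for the EC2 vCPU quota, using the 100 vCPU minimum described earlier:
$ aws service-quotas get-service-quota \
    --service-code ec2 --quota-code L-1216C47A
$ aws service-quotas request-service-quota-increase \
    --service-code ec2 --quota-code L-1216C47A --desired-value 100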
7.1.1. Additional resources
7.2. Next steps
Chapter 8. Setting up the environment for using STS
After you meet the AWS prerequisites, set up your environment and install Red Hat OpenShift Service on AWS (ROSA).
AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS (ROSA) because it provides enhanced security.
8.1. Setting up the environment for STS
Before you create a Red Hat OpenShift Service on AWS (ROSA) cluster that uses the AWS Security Token Service (STS), complete the following steps to set up your environment.
Prerequisites
- Review and complete the deployment prerequisites and policies.
- Create a Red Hat account, if you do not already have one. Then, check your email for a verification link. You will need these credentials to install ROSA.
Procedure
Log in to the Amazon Web Services (AWS) account that you want to use.
It is recommended to use a dedicated AWS account to run production clusters. If you are using AWS Organizations, you can use an AWS account within your organization or create a new one.
If you are using AWS Organizations and you need to have a service control policy (SCP) applied to the AWS account you plan to use, these policies must not be more restrictive than the roles and policies required by the cluster.
Enable the ROSA service in the AWS Management Console.
- Sign in to your AWS account.
- To enable ROSA, go to the ROSA service and select Enable OpenShift.
Install and configure the AWS CLI.
Follow the AWS command-line interface documentation to install and configure the AWS CLI for your operating system.
Specify the correct aws_access_key_id and aws_secret_access_key in the .aws/credentials file. See AWS Configuration basics in the AWS documentation.
Set a default AWS region.
Note: You can use the environment variable to set the default AWS region.
The ROSA service evaluates regions in the following priority order:
1. The region specified when running the rosa command with the --region flag.
2. The region set in the AWS_DEFAULT_REGION environment variable. See Environment variables to configure the AWS CLI in the AWS documentation.
3. The default region set in your AWS configuration file. See Quick configuration with aws configure in the AWS documentation.
Optional: Configure your AWS CLI settings and credentials by using an AWS named profile. rosa evaluates AWS named profiles in the following priority order:
1. The profile specified when running the rosa command with the --profile flag.
2. The profile set in the AWS_PROFILE environment variable. See Named profiles in the AWS documentation.
Verify the AWS CLI is installed and configured correctly by running the following command to query the AWS API:
$ aws sts get-caller-identity
Install the latest version of the ROSA CLI (rosa).
- Download the latest release of the ROSA CLI for your operating system.
- Optional: Rename the file you downloaded to rosa and make the file executable. This documentation uses rosa to refer to the executable file.
$ chmod +x rosa
- Optional: Add rosa to your path.
$ mv rosa /usr/local/bin/rosa
Enter the following command to verify your installation:
$ rosa
Example output
Command line tool for Red Hat OpenShift Service on AWS.
For further documentation visit https://access.redhat.com/documentation/en-us/red_hat_openshift_service_on_aws

Usage:
  rosa [command]

Available Commands:
  completion  Generates completion scripts
  create      Create a resource from stdin
  delete      Delete a specific resource
  describe    Show details of a specific resource
  download    Download necessary tools for using your cluster
  edit        Edit a specific resource
  grant       Grant role to a specific resource
  help        Help about any command
  init        Applies templates to support Red Hat OpenShift Service on AWS
  install     Installs a resource into a cluster
  link        Link a ocm/user role from stdin
  list        List all resources of a specific type
  login       Log in to your Red Hat account
  logout      Log out
  logs        Show installation or uninstallation logs for a cluster
  revoke      Revoke role from a specific resource
  uninstall   Uninstalls a resource from a cluster
  unlink      UnLink a ocm/user role from stdin
  upgrade     Upgrade a resource
  verify      Verify resources are configured correctly for cluster install
  version     Prints the version of the tool
  whoami      Displays user account information

Flags:
      --color string   Surround certain characters with escape sequences to display them in color on the terminal. Allowed options are [auto never always] (default "auto")
      --debug          Enable debug mode.
  -h, --help           help for rosa

Use "rosa [command] --help" for more information about a command.
Generate the command completion scripts for the ROSA CLI. The following example generates the Bash completion scripts for a Linux machine:
$ rosa completion bash | sudo tee /etc/bash_completion.d/rosa
Source the scripts to enable rosa command completion from your existing terminal. The following example sources the Bash completion scripts for rosa on a Linux machine:
$ source /etc/bash_completion.d/rosa
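To enable completion persistently for your user rather than system-wide, you can source the completion script from your shell profile instead. A minimal sketch for Bash:
$ echo 'source <(rosa completion bash)' >> ~/.bashrc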
Log in to your Red Hat account with the ROSA CLI.
Enter the following command.
$ rosa login
Replace <my_offline_access_token> with your token.
Example output
To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa
? Copy the token and paste it here: <my-offline-access-token>
Example output continued
I: Logged in as '<rh-rosa-user>' on 'https://api.openshift.com'
Verify that your AWS account has the necessary quota to deploy a ROSA cluster.
$ rosa verify quota [--region=<aws_region>]
Example output
I: Validating AWS quota...
I: AWS quota ok
Note: Sometimes your AWS quota varies by region. If you receive any errors, try a different region.
If you need to increase your quota, go to the AWS Management Console and request a quota increase for the service that failed.
After the quota check succeeds, proceed to the next step.
Prepare your AWS account for cluster deployment:
Run the following command to verify that your Red Hat and AWS credentials are set up correctly. Check that your AWS Account ID, Default Region, and ARN match what you expect. You can safely ignore the rows beginning with OpenShift Cluster Manager for now.
$ rosa whoami
Example output
AWS Account ID: 000000000000
AWS Default Region: us-east-1
AWS ARN: arn:aws:iam::000000000000:user/hello
OCM API: https://api.openshift.com
OCM Account ID: 1DzGIdIhqEWyt8UUXQhSoWaaaaa
OCM Account Name: Your Name
OCM Account Username: you@domain.com
OCM Account Email: you@domain.com
OCM Organization ID: 1HopHfA2hcmhup5gCr2uH5aaaaa
OCM Organization Name: Red Hat
OCM Organization External ID: 0000000
Install the OpenShift CLI (oc), version 4.7.9 or greater, from the ROSA (rosa) CLI.
- Enter this command to download the latest version of the oc CLI:
$ rosa download openshift-client
- After downloading the oc CLI, unzip it and add it to your path.
- Enter this command to verify that the oc CLI is installed correctly:
$ rosa verify openshift-client
Create roles
After completing these steps, you are ready to set up IAM and OIDC access-based roles.
8.2. Next steps
8.3. Additional resources
Legal Notice
Copyright © 2024 Red Hat, Inc.
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.