Prepare your environment
Chapter 1. Prerequisites checklist for deploying Red Hat OpenShift Service on AWS
This is a high-level checklist of the prerequisites needed to create a Red Hat OpenShift Service on AWS cluster.
The machine that you run the installation process from must have access to the following:
- Amazon Web Services API and authentication service endpoints
- Red Hat OpenShift API and authentication service endpoints (api.openshift.com and sso.redhat.com)
- Internet connectivity to obtain installation artifacts during deployment
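As a quick sanity check, you can probe the required Red Hat endpoints from the installation host before you begin. This is not an official step, just a minimal sketch using curl:

```shell
#!/usr/bin/env bash
# Reachability probe for the Red Hat endpoints required during installation.
# Any HTTP response at all indicates that outbound HTTPS connectivity works
# from this host; a failure suggests a firewall or proxy problem.
set -u

for host in api.openshift.com sso.redhat.com; do
    if curl --silent --show-error --head --max-time 10 "https://${host}" > /dev/null; then
        echo "OK:   ${host} is reachable over HTTPS"
    else
        echo "FAIL: ${host} is not reachable" >&2
    fi
done
```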
1.1. Accounts and permissions
Ensure that you have the following accounts, credentials, and permissions.
1.1.1. AWS account
- Create an AWS account if you do not already have one.
- Gather the credentials required to log in to your AWS account.
- Ensure that your AWS account has sufficient permissions to use the ROSA CLI. For more information, see Least privilege permissions for common ROSA CLI commands.
- Enable Red Hat OpenShift Service on AWS for your AWS account on the AWS console.
- If your account is the management account for your organization (used for AWS billing purposes), you must have aws-marketplace:Subscribe permissions available on your account. See Service control policy (SCP) prerequisites for more information, or see the AWS documentation for troubleshooting: AWS Organizations service control policy denies required AWS Marketplace permissions.
- Ensure that you have not enabled restrictive tag policies. For more information, see Tag policies in the AWS documentation.
1.1.2. Red Hat account
- Create a Red Hat account for the Red Hat Hybrid Cloud Console if you do not already have one.
- Gather the credentials required to log in to your Red Hat account.
1.2. CLI requirements
You need to download and install several CLI (command-line interface) tools to be able to deploy a cluster.
1.2.1. AWS CLI (aws)
- Install the AWS Command Line Interface.
- Log in to your AWS account using the AWS CLI: Sign in through the AWS CLI.
- Verify your account identity:

  $ aws sts get-caller-identity

- Check whether the service role for ELB (Elastic Load Balancing) exists:

  $ aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing"

  If the role does not exist, create it by running the following command:

  $ aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
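The check and the conditional create above can be combined into one idempotent step. A sketch, assuming the AWS CLI is installed and authenticated:

```shell
#!/usr/bin/env bash
# Create the ELB service-linked role only if it does not already exist.
# Safe to run repeatedly; does nothing when the role is present.
set -euo pipefail

if aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" > /dev/null 2>&1; then
    echo "ELB service-linked role already exists; nothing to do."
else
    echo "Creating the ELB service-linked role..."
    aws iam create-service-linked-role \
        --aws-service-name "elasticloadbalancing.amazonaws.com"
fi
```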
1.2.2. ROSA CLI (rosa)
- Install the ROSA CLI from the web console.
- Log in to your Red Hat account by running rosa login and following the instructions in the command output:

  $ rosa login
  To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa
  ? Copy the token and paste it here:

  Alternatively, you can copy the full rosa login --token=<abc..> command and paste that in the terminal:

  $ rosa login --token=<abc..>

- Confirm that you are logged in using the correct account and credentials:

  $ rosa whoami
1.2.3. OpenShift CLI (oc)
The OpenShift CLI (oc) is not required to deploy a Red Hat OpenShift Service on AWS cluster, but it is a useful tool for interacting with your cluster after it is deployed.

- Download and install oc from the OpenShift Cluster Manager Command-line interface (CLI) tools page, or follow the instructions in Getting started with the OpenShift CLI.
- Verify that the OpenShift CLI has been installed correctly by running the following command:

  $ rosa verify openshift-client
1.3. AWS infrastructure prerequisites
Optionally, ensure that your AWS account has sufficient quota available to deploy a cluster:

$ rosa verify quota

This command only checks the total quota allocated to your account; it does not reflect the amount of quota already consumed. Running this command is optional because your quota is verified during cluster deployment. However, Red Hat recommends running this command to confirm your quota ahead of time so that deployment is not interrupted by quota availability issues.
- For more information about resources provisioned during Red Hat OpenShift Service on AWS cluster deployment, see Provisioned AWS Infrastructure.
- For more information about the required AWS service quotas, see Required AWS service quotas.
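If you want to inspect an individual quota yourself, you can query the AWS Service Quotas API. A sketch; the quota code L-1216C47A is commonly the code for the "Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances" vCPU limit, but confirm the code for your account before relying on it:

```shell
#!/usr/bin/env bash
# Look up a specific EC2 service quota via the AWS Service Quotas API.
# Confirm the quota code with:
#   aws service-quotas list-service-quotas --service-code ec2
set -euo pipefail

aws service-quotas get-service-quota \
    --service-code ec2 \
    --quota-code L-1216C47A \
    --query 'Quota.{Name:QuotaName,Value:Value}' \
    --output table
```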
1.4. Service Control Policy (SCP) prerequisites
Red Hat OpenShift Service on AWS clusters are hosted in an AWS account within an AWS organizational unit. A service control policy (SCP) is created and applied to the AWS organizational unit that manages what services the AWS sub-accounts are permitted to access.
- Ensure that your organization’s SCPs are not more restrictive than the roles and policies required by the cluster. For more information, see the Minimum set of effective permissions for SCPs.
- When you create a Red Hat OpenShift Service on AWS cluster, an associated AWS OpenID Connect (OIDC) identity provider is created.
1.5. Networking prerequisites
The following prerequisites must be met from a networking standpoint.
1.5.1. Minimum bandwidth
During cluster deployment, Red Hat OpenShift Service on AWS requires a minimum bandwidth of 120 Mbps between cluster infrastructure and the public internet or private network locations that provide deployment artifacts and resources. When network connectivity is slower than 120 Mbps (for example, when connecting through a proxy) the cluster installation process times out and deployment fails.
After cluster deployment, network requirements are determined by your workload. However, a minimum bandwidth of 120 Mbps helps to ensure timely cluster and operator upgrades.
1.5.2. Firewall
- Configure your firewall to allow access to the domains and ports listed in AWS firewall prerequisites.
1.5.3. Create VPC before cluster deployment
Red Hat OpenShift Service on AWS clusters must be deployed into an existing AWS Virtual Private Cloud (VPC).
Installing a new Red Hat OpenShift Service on AWS cluster into a VPC that was automatically created by the installer for a different cluster is not supported.
Your VPC must meet the requirements shown in the following table.
Requirement | Details |
---|---|
VPC name | You need to have the specific VPC name and ID when creating your cluster. |
CIDR range | Your VPC CIDR range should match your machine CIDR. |
Availability zone | You need one availability zone for a single-zone cluster, and three availability zones for a multi-zone cluster. |
Public subnet | You must have one public subnet with a NAT gateway for public clusters. Private clusters do not need a public subnet. |
DNS hostname and resolution | You must ensure that the DNS hostname and resolution are enabled. |
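You can verify the DNS requirements on an existing VPC with the AWS CLI. A minimal sketch, where the VPC ID is a placeholder you replace with your own:

```shell
#!/usr/bin/env bash
# Check that DNS support and DNS hostnames are enabled on a VPC.
# Both commands should print "True" for a correctly configured VPC.
set -euo pipefail

VPC_ID="vpc-0123456789abcdef0"   # placeholder: your VPC ID

aws ec2 describe-vpc-attribute --vpc-id "${VPC_ID}" \
    --attribute enableDnsSupport \
    --query 'EnableDnsSupport.Value' --output text

aws ec2 describe-vpc-attribute --vpc-id "${VPC_ID}" \
    --attribute enableDnsHostnames \
    --query 'EnableDnsHostnames.Value' --output text
```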
1.5.4. Additional custom security groups
During cluster creation, you can add additional custom security groups to a cluster that has an existing non-managed VPC. To do so, complete these prerequisites before you create the cluster:
- Create the custom security groups in AWS before you create the cluster.
- Associate the custom security groups with the VPC that you are using to create the cluster. Do not associate the custom security groups with any other VPC.
- You may need to request additional AWS quota for Security groups per network interface.

For more details, see the detailed requirements for Security groups.
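Creating a custom security group in the target VPC is a single AWS CLI call. A sketch, where the VPC ID and the group name are placeholders for illustration:

```shell
#!/usr/bin/env bash
# Create a custom security group inside the VPC that the cluster will use.
# The VPC ID and group name below are placeholders, not required values.
set -euo pipefail

VPC_ID="vpc-0123456789abcdef0"   # placeholder: the cluster VPC

SG_ID=$(aws ec2 create-security-group \
    --group-name "my-rosa-custom-sg" \
    --description "Custom security group for ROSA worker nodes" \
    --vpc-id "${VPC_ID}" \
    --query GroupId --output text)

echo "Created security group ${SG_ID} in ${VPC_ID}"
```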
1.5.5. Custom DNS and domains
You can configure a custom domain name server and custom domain name for your cluster. To do so, complete the following prerequisites before you create the cluster:
- By default, Red Hat OpenShift Service on AWS clusters require you to set the domain name servers option to AmazonProvidedDNS to ensure successful cluster creation and operation.
- To use a custom DNS server and domain name for your cluster, the Red Hat OpenShift Service on AWS installer must be able to use VPC DNS with default DHCP options so that it can resolve internal IPs and services. This means that you must create a custom DHCP option set to forward DNS lookups to your DNS server, and associate this option set with your VPC before you create the cluster.
- Confirm that your VPC is using VPC Resolver by running the following command:

  $ aws ec2 describe-dhcp-options
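Creating a custom DHCP option set and associating it with your VPC can be done with the AWS CLI. A sketch, where the DNS server IP and VPC ID are placeholders:

```shell
#!/usr/bin/env bash
# Create a DHCP option set that forwards DNS lookups to a custom DNS
# server, then associate it with the cluster VPC. The DNS server IP and
# VPC ID below are placeholders for illustration.
set -euo pipefail

VPC_ID="vpc-0123456789abcdef0"   # placeholder: the cluster VPC
DNS_IP="10.0.0.2"                # placeholder: your custom DNS server

DHCP_ID=$(aws ec2 create-dhcp-options \
    --dhcp-configurations "Key=domain-name-servers,Values=${DNS_IP}" \
    --query 'DhcpOptions.DhcpOptionsId' --output text)

aws ec2 associate-dhcp-options \
    --dhcp-options-id "${DHCP_ID}" \
    --vpc-id "${VPC_ID}"

echo "Associated DHCP option set ${DHCP_ID} with ${VPC_ID}"
```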
Chapter 2. Detailed requirements for deploying Red Hat OpenShift Service on AWS
Red Hat OpenShift Service on AWS provides a model that allows Red Hat to deploy clusters into a customer's existing Amazon Web Services (AWS) account.
Ensure that the following prerequisites are met before installing your cluster.
2.1. Customer requirements for all Red Hat OpenShift Service on AWS clusters
The following prerequisites must be complete before you deploy a Red Hat OpenShift Service on AWS cluster.
2.2. AWS account
- Your AWS account must allow sufficient quota to deploy your cluster.
- If your organization applies and enforces SCPs, these policies must not be more restrictive than the roles and policies required by the cluster.
- You can deploy native AWS services within the same AWS account.
- Your account must have a service-linked role to allow the installation program to configure Elastic Load Balancing (ELB). See "Creating the Elastic Load Balancing (ELB) service-linked role" for more information.
2.2.1. Support requirements
- Red Hat recommends that the customer have at least Business Support from AWS.
- Red Hat may have permission from the customer to request AWS support on their behalf.
- Red Hat may have permission from the customer to request AWS resource limit increases on the customer’s account.
- Red Hat manages the restrictions, limitations, expectations, and defaults for all Red Hat OpenShift Service on AWS clusters in the same manner, unless otherwise specified in this requirements section.
2.2.2. Security requirements
- Red Hat must have ingress access to EC2 hosts and the API server from allow-listed IP addresses.
- Red Hat must have egress allowed to the domains documented in the "Firewall prerequisites" section. Clusters with egress zero are exempt from this requirement.
2.3. Requirements for using OpenShift Cluster Manager
The following configuration details are required only if you use OpenShift Cluster Manager to manage your clusters. If you use the CLI tools exclusively, then you can disregard these requirements.
2.3.1. AWS account association
When you provision Red Hat OpenShift Service on AWS using OpenShift Cluster Manager (console.redhat.com), you must associate the ocm-role and user-role IAM roles with your AWS account using your Amazon Resource Name (ARN). This association process is also known as account linking.

The ocm-role ARN is stored as a label in your Red Hat organization, while the user-role ARN is stored as a label inside your Red Hat user account. Red Hat uses these ARN labels to confirm that the user is a valid account holder and that the correct permissions are available to perform provisioning tasks in the AWS account.
2.3.2. Associating your AWS account with IAM roles
You can associate or link your AWS account with existing IAM roles by using the ROSA CLI, rosa.

Prerequisites

- You have an AWS account.
- You have the permissions required to install AWS account-wide roles. See the "Additional resources" of this section for more information.
- You have installed and configured the latest AWS (aws) and ROSA (rosa) CLIs on your installation host.
- You have created the ocm-role and user-role IAM roles, but have not yet linked them to your AWS account. You can check whether your IAM roles are already linked by running the following commands:

  $ rosa list ocm-role
  $ rosa list user-role

  If Yes is displayed in the Linked column for both roles, you have already linked the roles to an AWS account.
Procedure
1. In the ROSA CLI, link your ocm-role resource to your Red Hat organization by using your Amazon Resource Name (ARN):

   Note: You must have Red Hat Organization Administrator privileges to run the rosa link command. After you link the ocm-role resource with your AWS account, it takes effect and is visible to all users in the organization.

   $ rosa link ocm-role --role-arn <arn>

   Example output

   I: Linking OCM role
   ? Link the '<AWS ACCOUNT ID>' role with organization '<ORG ID>'? Yes
   I: Successfully linked role-arn '<AWS ACCOUNT ID>' with organization account '<ORG ID>'

2. In the ROSA CLI, link your user-role resource to your Red Hat user account by using your Amazon Resource Name (ARN):

   $ rosa link user-role --role-arn <arn>

   Example output

   I: Linking User role
   ? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' role with organization '<AWS ID>'? Yes
   I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' with organization account '<AWS ID>'
Additional resources
2.3.3. Associating multiple AWS accounts with your Red Hat organization
You can associate multiple AWS accounts with your Red Hat organization. Associating multiple accounts lets you create Red Hat OpenShift Service on AWS clusters on any of the associated AWS accounts from your Red Hat organization.
With this capability, you can create clusters on different AWS profiles according to characteristics that make sense for your business, for example, by using one AWS profile for each region to create region-bound environments.
Prerequisites

- You have an AWS account.
- You are using OpenShift Cluster Manager to create clusters.
- You have the permissions required to install AWS account-wide roles.
- You have installed and configured the latest AWS (aws) and ROSA (rosa) CLIs on your installation host.
- You have created the ocm-role and user-role IAM roles for Red Hat OpenShift Service on AWS.
Procedure
To associate an additional AWS account, first create a profile in your local AWS configuration. Then, associate the account with your Red Hat organization by creating the ocm-role, user role, and account roles in the additional AWS account.

To create the roles in an additional region, specify the --profile <aws_profile> parameter when running the rosa create commands and replace <aws_profile> with the additional account profile name:

- To specify an AWS account profile when creating an OpenShift Cluster Manager role:

  $ rosa create --profile <aws_profile> ocm-role

- To specify an AWS account profile when creating a user role:

  $ rosa create --profile <aws_profile> user-role

- To specify an AWS account profile when creating the account roles:

  $ rosa create --profile <aws_profile> account-roles

If you do not specify a profile, the default AWS profile and its associated AWS region are used.
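Creating the local profile itself is standard AWS CLI configuration. A sketch, where the profile name second-account is a placeholder:

```shell
# Create a named profile for the additional AWS account
# (prompts for credentials and a default region).
aws configure --profile second-account

# Verify which AWS account the profile resolves to before creating roles.
aws sts get-caller-identity --profile second-account
```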
2.4. Requirements for deploying a cluster in an opt-in region
An AWS opt-in region is a region that is not enabled in your AWS account by default. If you want to deploy a Red Hat OpenShift Service on AWS cluster that uses the AWS Security Token Service (STS) in an opt-in region, you must meet the following requirements:
- The region must be enabled in your AWS account. For more information about enabling opt-in regions, see Managing AWS Regions in the AWS documentation.
- The security token version in your AWS account must be set to version 2. You cannot use version 1 security tokens for opt-in regions.

Important: Updating to security token version 2 can impact the systems that store the tokens, due to the increased token length. For more information, see the AWS documentation on setting STS preferences.
2.4.1. Setting the AWS security token version
If you want to create a Red Hat OpenShift Service on AWS cluster with the AWS Security Token Service (STS) in an AWS opt-in region, you must set the security token version to version 2 in your AWS account.
Prerequisites
- You have installed and configured the latest AWS CLI on your installation host.
Procedure
1. List the ID of the AWS account that is defined in your AWS CLI configuration:

   $ aws sts get-caller-identity --query Account --output json

   Ensure that the output matches the ID of the relevant AWS account.

2. List the security token version that is set in your AWS account:

   $ aws iam get-account-summary --query SummaryMap.GlobalEndpointTokenVersion --output json

   Example output

   1

3. To update the security token version to version 2 for all regions in your AWS account, run the following command:

   $ aws iam set-security-token-service-preferences --global-endpoint-token-version v2Token

   Important: Updating to security token version 2 can impact the systems that store the tokens, due to the increased token length. For more information, see the AWS documentation on setting STS preferences.
2.5. Red Hat managed IAM references for AWS
Red Hat is not responsible for creating and managing Amazon Web Services (AWS) IAM policies, IAM users, or IAM roles. For information on creating these roles and policies, see the following sections on IAM roles.
- To use the ocm CLI, you must have an ocm-role and user-role resource. See Required IAM roles and resources.
- If you have a single cluster, see Account-wide IAM role and policy reference.
- For each cluster, you must have the necessary Operator roles. See Cluster-specific Operator IAM role reference.
2.6. Provisioned AWS Infrastructure
This is an overview of the provisioned Amazon Web Services (AWS) components on a deployed Red Hat OpenShift Service on AWS cluster.
2.6.1. EC2 instances
AWS EC2 instances are required to deploy Red Hat OpenShift Service on AWS.
At a minimum, two m5.xlarge EC2 instances are deployed for use as worker nodes.
The instance type shown for worker nodes is the default value, but you can customize the instance type for worker nodes according to the needs of your workload.
2.6.2. Amazon Elastic Block Store storage
Amazon Elastic Block Store (Amazon EBS) block storage is used for both local node storage and persistent volume storage. By default, the following storage is provisioned for each EC2 instance:
Node volumes

- Type: AWS EBS GP3
- Default size: 300 GiB (adjustable at creation time)
- Minimum size: 75 GiB

Workload persistent volumes

- Default storage class: gp3-csi
- Provisioner: ebs.csi.aws.com
- Dynamic persistent volume provisioning
2.6.3. Elastic Load Balancing
By default, one Network Load Balancer is created for use by the default ingress controller. You can create additional load balancers of the following types according to the needs of your workload:
- Classic Load Balancer
- Network Load Balancer
- Application Load Balancer
For more information, see the ELB documentation for AWS.
2.6.4. S3 storage
The image registry is backed by AWS S3 storage. Resources are pruned regularly to optimize S3 usage and cluster performance.
Two buckets are required, with a typical size of 2 TB each.
2.6.5. VPC
Configure your VPC according to the following requirements:
- Subnets: Every cluster requires a minimum of one private subnet for every availability zone. For example, one private subnet is required for a single-zone cluster, and three private subnets are required for a cluster with three availability zones. If your cluster needs direct access to a network that is external to the cluster, including the public internet, you require at least one public subnet. Red Hat strongly recommends using unique subnets for each cluster; sharing subnets between multiple clusters is not recommended.

  Note: A public subnet connects directly to the internet through an internet gateway. A private subnet connects to the internet through a network address translation (NAT) gateway.
- Route tables: One route table per private subnet, and one additional table per cluster.
- Internet gateways: One internet gateway per cluster.
- NAT gateways: One NAT gateway per public subnet.
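Before deployment, you can inventory the subnets, route tables, and NAT gateways of a candidate VPC against the requirements above. A sketch, where the VPC ID is a placeholder:

```shell
#!/usr/bin/env bash
# Inventory the networking resources of a candidate VPC before cluster
# deployment. The VPC ID below is a placeholder.
set -euo pipefail

VPC_ID="vpc-0123456789abcdef0"   # placeholder: your VPC ID

aws ec2 describe-subnets \
    --filters "Name=vpc-id,Values=${VPC_ID}" \
    --query 'Subnets[].{Id:SubnetId,AZ:AvailabilityZone,Public:MapPublicIpOnLaunch}' \
    --output table

aws ec2 describe-route-tables \
    --filters "Name=vpc-id,Values=${VPC_ID}" \
    --query 'RouteTables[].RouteTableId' --output table

# Note: describe-nat-gateways takes --filter (singular), unlike the others.
aws ec2 describe-nat-gateways \
    --filter "Name=vpc-id,Values=${VPC_ID}" \
    --query 'NatGateways[].NatGatewayId' --output table
```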
2.6.6. Security groups
AWS security groups provide security at the protocol and port access level; they are associated with EC2 instances and Elastic Load Balancing (ELB) load balancers. Each security group contains a set of rules that filter traffic coming in and out of one or more EC2 instances.
Ensure that the ports required for cluster installation and operation are open on your network and configured to allow access between hosts. The requirements for the default security groups are listed in Required ports for default security groups.
2.6.6.1. Additional custom security groups
You can add additional custom security groups during cluster creation. Custom security groups are subject to the following limitations:
- You must create the custom security groups in AWS before you create the cluster. For more information, see Amazon EC2 security groups for Linux instances.
- You must associate the custom security groups with the VPC that the cluster will be installed into. Your custom security groups cannot be associated with another VPC.
- You might need to request additional quota for your VPC if you are adding additional custom security groups. For information on AWS quota requirements for Red Hat OpenShift Service on AWS, see Required AWS service quotas in Prepare your environment. For information on requesting an AWS quota increase, see Requesting a quota increase.
2.7. Networking prerequisites
2.7.1. Minimum bandwidth
During cluster deployment, Red Hat OpenShift Service on AWS requires a minimum bandwidth of 120 Mbps between cluster infrastructure and the public internet or private network locations that provide deployment artifacts and resources. When network connectivity is slower than 120 Mbps (for example, when connecting through a proxy) the cluster installation process times out and deployment fails.
After cluster deployment, network requirements are determined by your workload. However, a minimum bandwidth of 120 Mbps helps to ensure timely cluster and operator upgrades.
2.7.2. AWS firewall prerequisites
If you are using a firewall to control egress traffic from your Red Hat OpenShift Service on AWS cluster, you must configure your firewall to grant access to the domain and port combinations listed below. Red Hat OpenShift Service on AWS requires this access to provide a fully managed OpenShift service.
Prerequisites
- You have configured an Amazon S3 gateway endpoint in your AWS Virtual Private Cloud (VPC). This endpoint is required to complete requests from the cluster to the Amazon S3 service.
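If you still need to create the S3 gateway endpoint, it is a single AWS CLI call. A sketch, where the VPC ID, region, and route table ID are placeholders:

```shell
#!/usr/bin/env bash
# Create an Amazon S3 gateway endpoint in the cluster VPC so that the
# cluster can reach the S3 service without traversing the firewall.
# The VPC ID, region, and route table ID below are placeholders.
set -euo pipefail

VPC_ID="vpc-0123456789abcdef0"
REGION="us-east-1"
ROUTE_TABLE_ID="rtb-0123456789abcdef0"

aws ec2 create-vpc-endpoint \
    --vpc-id "${VPC_ID}" \
    --vpc-endpoint-type Gateway \
    --service-name "com.amazonaws.${REGION}.s3" \
    --route-table-ids "${ROUTE_TABLE_ID}"
```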
Procedure
Allowlist the following URLs that are used to install and download packages and tools:

Domain | Port | Function |
---|---|---|
registry.redhat.io | 443 | Provides core container images. |
quay.io | 443 | Provides core container images. |
cdn01.quay.io | 443 | Provides core container images. |
cdn02.quay.io | 443 | Provides core container images. |
cdn03.quay.io | 443 | Provides core container images. |
cdn04.quay.io | 443 | Provides core container images. |
cdn05.quay.io | 443 | Provides core container images. |
cdn06.quay.io | 443 | Provides core container images. |
sso.redhat.com | 443 | Required. The https://console.redhat.com/openshift site uses authentication from sso.redhat.com to download the pull secret and use Red Hat SaaS solutions to facilitate monitoring of your subscriptions, cluster inventory, chargeback reporting, and so on. |
quay-registry.s3.amazonaws.com | 443 | Provides core container images. |
quayio-production-s3.s3.amazonaws.com | 443 | Provides core container images. |
registry.access.redhat.com | 443 | Hosts all the container images that are stored on the Red Hat Ecosystem Catalog. Additionally, the registry provides access to the odo CLI tool that helps developers build on OpenShift and Kubernetes. |
access.redhat.com | 443 | Required. Hosts a signature store that a container client requires for verifying images when pulling them from registry.access.redhat.com. |
registry.connect.redhat.com | 443 | Required for all third-party images and certified Operators. |
console.redhat.com | 443 | Required. Allows interactions between the cluster and OpenShift Console Manager to enable functionality, such as scheduling upgrades. |
sso.redhat.com | 443 | The https://console.redhat.com/openshift site uses authentication from sso.redhat.com. |
pull.q1w2.quay.rhcloud.com | 443 | Provides core container images as a fallback when quay.io is not available. |
catalog.redhat.com | 443 | The registry.access.redhat.com and https://registry.redhat.io sites redirect through catalog.redhat.com. |
oidc.op1.openshiftapps.com | 443 | Used by Red Hat OpenShift Service on AWS for STS implementation with managed OIDC configuration. |
Allowlist the following telemetry URLs:

Domain | Port | Function |
---|---|---|
cert-api.access.redhat.com | 443 | Required for telemetry. |
api.access.redhat.com | 443 | Required for telemetry. |
infogw.api.openshift.com | 443 | Required for telemetry. |
console.redhat.com | 443 | Required for telemetry and Red Hat Insights. |
observatorium-mst.api.openshift.com | 443 | Required for managed OpenShift-specific telemetry. |
observatorium.api.openshift.com | 443 | Required for managed OpenShift-specific telemetry. |

Managed clusters require enabling telemetry to allow Red Hat to react more quickly to problems, better support the customers, and better understand how product upgrades impact clusters. For more information about how remote health monitoring data is used by Red Hat, see About remote health monitoring in the Additional resources section.
Allowlist the following Amazon Web Services (AWS) API URLs:

Domain | Port | Function |
---|---|---|
.amazonaws.com | 443 | Required to access AWS services and resources. |

Alternatively, if you choose to not use a wildcard for Amazon Web Services (AWS) APIs, you must allowlist the following URLs:

Domain | Port | Function |
---|---|---|
ec2.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment. |
events.<aws_region>.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment. |
iam.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment. |
route53.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment. |
sts.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment, for clusters configured to use the global endpoint for AWS STS. |
sts.<aws_region>.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment, for clusters configured to use regionalized endpoints for AWS STS. See AWS STS regionalized endpoints for more information. |
tagging.us-east-1.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1, regardless of the region the cluster is deployed in. |
ec2.<aws_region>.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment. |
elasticloadbalancing.<aws_region>.amazonaws.com | 443 | Used to install and manage clusters in an AWS environment. |
tagging.<aws_region>.amazonaws.com | 443 | Allows the assignment of metadata about AWS resources in the form of tags. |
Allowlist the following OpenShift URLs:

Domain | Port | Function |
---|---|---|
mirror.openshift.com | 443 | Used to access mirrored installation content and images. This site is also a source of release image signatures. |
api.openshift.com | 443 | Used to check if updates are available for the cluster. |
Allowlist the following site reliability engineering (SRE) and management URLs:

Domain | Port | Function |
---|---|---|
api.pagerduty.com | 443 | This alerting service is used by the in-cluster alertmanager to send alerts notifying Red Hat SRE of an event to take action on. |
events.pagerduty.com | 443 | This alerting service is used by the in-cluster alertmanager to send alerts notifying Red Hat SRE of an event to take action on. |
api.deadmanssnitch.com | 443 | Alerting service used by Red Hat OpenShift Service on AWS to send periodic pings that indicate whether the cluster is available and running. |
nosnch.in | 443 | Alerting service used by Red Hat OpenShift Service on AWS to send periodic pings that indicate whether the cluster is available and running. |
http-inputs-osdsecuritylogs.splunkcloud.com | 443 | Required. Used by the splunk-forwarder-operator as a log forwarding endpoint to be used by Red Hat SRE for log-based alerting. |
sftp.access.redhat.com (Recommended) | 22 | The SFTP server used by must-gather-operator to upload diagnostic logs to help troubleshoot issues with the cluster. |
2.7.3. Firewall prerequisites for Red Hat OpenShift Service on AWS
- If you are using a firewall to control egress traffic from Red Hat OpenShift Service on AWS, your Virtual Private Cloud (VPC) must be able to complete requests from the cluster to the Amazon S3 service, for example, via an Amazon S3 gateway.
- You must also configure your firewall to grant access to the following domain and port combinations.
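Before creating a cluster, you can spot-check egress from a host inside the VPC. The following sketch probes a few of the required endpoints on port 443 with `curl`; the host list is a small sample of the allowlist in the tables below, and the timeout value is arbitrary:

```shell
# Illustrative egress spot-check: probe a sample of required endpoints on 443.
hosts="registry.redhat.io quay.io api.openshift.com mirror.openshift.com"
results=""
for host in $hosts; do
  if curl --silent --output /dev/null --connect-timeout 5 "https://${host}"; then
    results="${results}${host}: reachable
"
  else
    results="${results}${host}: BLOCKED
"
  fi
done
printf '%s' "$results"
```

Any host reported as BLOCKED indicates a firewall rule that still needs to be added.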
2.7.3.1. Domains for installation packages and tools
Domain | Port | Function |
---|---|---|
| 443 | Provides core container images. |
| 443 | Provides core container images. |
| 443 | Provides core container images. |
| 443 | Provides core container images. |
| 443 | Provides core container images. |
| 443 | Provides core container images. |
| 443 | Provides core container images. |
| 443 | Provides core container images. |
| 443 | Provides core container images. |
| 443 |
Required. Hosts all the container images that are stored on the Red Hat Ecosystem Catalog. Additionally, the registry provides access to the |
| 443 |
Required. Hosts a signature store that a container client requires for verifying images when pulling them from |
| 443 | Required. Used to check for available updates to the cluster. |
| 443 | Required. Used to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator (CVO) needs only a single functioning source. |
2.7.3.2. Domains for telemetry
Domain | Port | Function |
---|---|---|
| 443 | Required for telemetry. |
| 443 | Required. Allows interactions between the cluster and OpenShift Cluster Manager to enable functionality, such as scheduling upgrades. |
| 443 |
Required. The |
Managed clusters require enabling telemetry to allow Red Hat to react more quickly to problems, better support customers, and better understand how product upgrades impact clusters. For more information about how remote health monitoring data is used by Red Hat, see About remote health monitoring.
2.7.3.3. Domains for Amazon Web Services (AWS) APIs
Domain | Port | Function |
---|---|---|
| 443 |
Required. Used to access the AWS Security Token Service (STS) regional endpoint. Ensure that you replace |
- This can also be accomplished by configuring a private interface endpoint in your AWS Virtual Private Cloud (VPC) to the regional AWS STS endpoint.
2.7.3.4. Domains for your workload
Your workload may require access to other sites that provide resources for programming languages or frameworks.
- Allow access to sites that provide resources required by your builds.
- Allow access to outbound URLs required for your workload, for example, OpenShift Outbound URLs to Allow.
2.7.3.5. Optional domains to enable third-party content
Domain | Port | Function |
---|---|---|
| 443 | Optional. Required for all third-party images and certified Operators. |
| 443 |
Optional. Provides access to container images hosted on |
| 443 | Optional. Required for the Sonatype Nexus and F5 Big IP Operators. |
2.7.3.6. Outbound firewall rules for the ROSA CLI for clusters with egress zero
If you use a bastion host to connect to a private cluster with egress zero, you must add the following rules to your firewall so that it can connect and authenticate to the cluster.
Domain | Port | From/To | Function |
---|---|---|---|
| 443 | ROSA CLI running on bastion host |
The OpenShift console uses authentication from |
| 443 | ROSA CLI running on bastion host | Required for registering a Red Hat OpenShift Service on AWS cluster into Red Hat Hybrid Cloud Console. |
| 443 | ROSA CLI running on bastion host | Used for creating IAM roles and attaching permissions. |
| 443 | ROSA CLI running on bastion host | Checks AWS quotas to ensure they satisfy ROSA installation requirements. Alternatively, you can create a VPC endpoint for the Service Quotas service to avoid allowlisting this URL in your firewall. |
| 443 | ROSA CLI running on bastion host | Used to get a short-lived token to access AWS services. Alternatively, you can create a VPC endpoint for the STS service to avoid allowlisting this URL in your firewall. |
| 443 | ROSA CLI running on bastion host | Used to retrieve EC2 instance-related information, such as subnets. Alternatively, you can create a VPC endpoint for the EC2 service to avoid allowlisting this URL in your firewall. |
2.7.3.7. Outbound firewall rules from Red Hat Hybrid Cloud Console for clusters with egress zero
Domain | Port | From/To | Function |
---|---|---|---|
| 443 | Red Hat OpenShift Service on AWS cluster | Used to access the AWS Security Token Service (STS) regional endpoint to retrieve a short-lived token to access AWS services. Alternatively, you can create a VPC endpoint for the STS service to avoid allowlisting this URL in your firewall. |
| 443 | Any browser to access Red Hat Hybrid Cloud Console | To manage a Red Hat OpenShift Service on AWS cluster from Hybrid Cloud Console. |
| 443 | Any browser to access Red Hat Hybrid Cloud Console |
The Red Hat Hybrid Cloud Console site uses authentication from |
Next steps
Additional resources
Chapter 3. Required IAM roles and resources
You must create several role resources on your AWS account in order to create and manage a Red Hat OpenShift Service on AWS cluster.
3.1. Overview of required roles
To create and manage your Red Hat OpenShift Service on AWS cluster, you must create several account-wide and cluster-wide roles. If you intend to use OpenShift Cluster Manager to create or manage your cluster, you need some additional roles.
- To create and manage clusters
Several account-wide roles are required to create and manage Red Hat OpenShift Service on AWS clusters. These roles only need to be created once per AWS account, and do not need to be created fresh for each cluster. One or more AWS managed policies are attached to each role to grant that role the required capabilities. You can specify your own prefix, or use the default prefix (`ManagedOpenShift`).

Note: Role names are limited to a maximum length of 64 characters in AWS IAM. When the user-specified prefix for a cluster is longer than 20 characters, the role name is truncated to observe this 64-character maximum in AWS IAM.
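The truncation rule in the note can be illustrated with a short sketch. The prefix below is hypothetical, and the ROSA CLI performs the actual truncation internally; this simply shows the 64-character cap:

```shell
# Illustrative only: cap a generated role name at the 64-character IAM limit.
prefix="my-very-long-organization-wide-cluster-prefix"   # hypothetical prefix
role_name="${prefix}-HCP-ROSA-Installer-Role"
if [ "${#role_name}" -gt 64 ]; then
  # Trim anything beyond the 64th character, as IAM would reject longer names.
  role_name=$(printf '%s' "$role_name" | cut -c1-64)
fi
echo "${role_name} (${#role_name} characters)"
```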
For Red Hat OpenShift Service on AWS clusters, you must create the following account-wide roles and attach the indicated AWS managed policies:
Table 3.1. Required account roles and AWS policies for Red Hat OpenShift Service on AWS

Role name | AWS policy names
---|---
<prefix>-HCP-ROSA-Worker-Role | ROSAWorkerInstancePolicy and AmazonEC2ContainerRegistryReadOnly
<prefix>-HCP-ROSA-Support-Role | ROSASRESupportPolicy
<prefix>-HCP-ROSA-Installer-Role | ROSAInstallerPolicy
Role creation does not request your AWS access or secret keys. AWS Security Token Service (STS) is used as the basis of this workflow. AWS STS uses temporary, limited-privilege credentials to provide authentication.
- To use Operator-managed cluster capabilities
Some cluster capabilities, including several capabilities provided by default, are managed using Operators. Cluster-specific Operator roles (`operator-roles` in the ROSA CLI) are required to use these capabilities. These roles are used to obtain the temporary permissions required to carry out cluster operations such as managing back-end storage, ingress, and registry. Obtaining these permissions requires the configuration of an OpenID Connect (OIDC) provider, which connects to AWS Security Token Service (STS) to authenticate Operator access to AWS resources.

For Red Hat OpenShift Service on AWS clusters, you must create the following Operator roles and attach the indicated AWS managed policies:
Table 3.2. Required Operator roles and AWS managed policies for ROSA with HCP

Role name | AWS managed policy name
---|---
openshift-cloud-network-config-controller-c | ROSACloudNetworkConfigOperatorPolicy
openshift-image-registry-installer-cloud-credentials | ROSAImageRegistryOperatorPolicy
kube-system-kube-controller-manager | ROSAKubeControllerPolicy
kube-system-capa-controller-manager | ROSANodePoolManagementPolicy
kube-system-control-plane-operator | ROSAControlPlaneOperatorPolicy
kube-system-kms-provider | ROSAKMSProviderPolicy
openshift-ingress-operator-cloud-credentials | ROSAIngressOperatorPolicy
openshift-cluster-csi-drivers-ebs-cloud-credentials | ROSAAmazonEBSCSIDriverOperatorPolicy
When you create Operator roles using the `rosa create operator-roles` command, the roles created are named using the pattern `<cluster_name>-<hash>-<role_name>`, for example, `test-abc1-kube-system-control-plane-operator`. When your cluster name is longer than 15 characters, the role name is truncated.

- To use OpenShift Cluster Manager
The web user interface, OpenShift Cluster Manager, requires you to create additional roles in your AWS account to establish a trust relationship between that AWS account and OpenShift Cluster Manager.

This trust relationship is achieved through the creation and association of the `ocm-role` AWS IAM role. This role has a trust policy with the AWS installer that links your Red Hat account to your AWS account. In addition, you also need a `user-role` AWS IAM role for each web UI user, which serves to identify these users. This `user-role` AWS IAM role has no permissions.

The following AWS IAM roles are required to use OpenShift Cluster Manager:
- `ocm-role`
- `user-role`
3.2. Roles required to create and manage clusters
Several account-wide roles (`account-roles` in the ROSA CLI) are required to create or manage Red Hat OpenShift Service on AWS clusters. These roles must be created using the ROSA CLI (`rosa`), regardless of whether you typically use OpenShift Cluster Manager or the ROSA CLI to create and manage your clusters. These roles only need to be created once, and do not need to be created for every cluster you install.
3.2.1. Creating the account-wide STS roles and policies
Before you create your Red Hat OpenShift Service on AWS cluster, you must create the required account-wide roles and policies.
Specific AWS-managed policies for Red Hat OpenShift Service on AWS must be attached to each role. Customer-managed policies must not be used with these required account roles. For more information regarding AWS-managed policies for Red Hat OpenShift Service on AWS clusters, see AWS managed policies for ROSA.
Prerequisites
- You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
- You have available AWS service quotas.
- You have enabled the Red Hat OpenShift Service on AWS in the AWS Console.
- You have installed and configured the latest ROSA CLI (`rosa`) on your installation host.
- You have logged in to your Red Hat account by using the ROSA CLI.
Procedure
If they do not exist in your AWS account, create the required account-wide STS roles and attach the policies by running the following command:

$ rosa create account-roles --hosted-cp

Optional: Set your prefix as an environment variable by running the following command:

$ export ACCOUNT_ROLES_PREFIX=<account_role_prefix>

View the value of the variable by running the following command:

$ echo $ACCOUNT_ROLES_PREFIX

Example output

ManagedOpenShift
For more information regarding AWS managed IAM policies for Red Hat OpenShift Service on AWS, see AWS managed IAM policies for ROSA.
3.3. Resources required for OIDC authentication
Red Hat OpenShift Service on AWS clusters use OIDC and the AWS Security Token Service (STS) to authenticate Operator access to AWS resources they require to perform their functions. Each production cluster requires its own OIDC configuration.
3.3.1. Creating an OpenID Connect configuration
When creating a Red Hat OpenShift Service on AWS cluster, you can create the OpenID Connect (OIDC) configuration before creating your cluster. This configuration is registered to be used with OpenShift Cluster Manager.
Prerequisites
- You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
- You have installed and configured the latest ROSA CLI (`rosa`) on your installation host.
Procedure
To create your OIDC configuration alongside the AWS resources, run the following command:

$ rosa create oidc-config --mode=auto --yes

This command returns the following information.

Example output

When creating your cluster, you must supply the OIDC config ID. The CLI output provides this value for `--mode auto`; otherwise, you must determine these values based on `aws` CLI output for `--mode manual`.

Optional: You can save the OIDC configuration ID as a variable to use later. Run the following command to save the variable:

$ export OIDC_ID=<oidc_config_id>

In the example output above, the OIDC configuration ID is 13cdr6b.

View the value of the variable by running the following command:

$ echo $OIDC_ID

Example output

13cdr6b
Verification
You can list the possible OIDC configurations available for your clusters that are associated with your user organization. Run the following command:

$ rosa list oidc-config

Example output

ID                                MANAGED  ISSUER URL                                                             SECRET ARN
2330dbs0n8m3chkkr25gkkcd8pnj3lk2  true     https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2
233hvnrjoqu14jltk6lhbhf2tj11f8un  false    https://oidc-r7u1.s3.us-east-1.amazonaws.com                           aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN
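If you script against this listing, you can extract a configuration ID from the tabular output. The following sketch operates on a captured sample of the output shown above; in practice you would pipe `rosa list oidc-config` directly into `awk` instead of using the here-string:

```shell
# Extract the ID column from the first data row (line 2) of the listing.
sample='ID                                MANAGED  ISSUER URL
2330dbs0n8m3chkkr25gkkcd8pnj3lk2  true     https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2'
OIDC_ID=$(printf '%s\n' "$sample" | awk 'NR==2 {print $1}')
echo "$OIDC_ID"
```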
3.4. Roles required for Operator managed cluster capabilities
Some cluster capabilities, including several capabilities provided by default, are managed using Operators. Cluster-specific Operator roles (`operator-roles` in the ROSA CLI) use the OpenID Connect (OIDC) provider for the cluster to temporarily authenticate Operator access to AWS resources.
3.4.1. Cluster-specific Operator IAM role reference
Operator roles are used to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage, the cloud ingress controller, and external access to a cluster.
When you create the Operator roles, the account-wide Operator policies for the matching cluster version are attached to the roles. AWS managed Operator policies are versioned in AWS IAM. The latest version of an AWS managed policy is always used, so you do not need to manage or schedule upgrades for AWS managed policies used by ROSA with HCP.
If more than one matching policy is available in your account for an Operator role, an interactive list of options is provided when you create the role.
Role name | AWS Managed policy name | Role description
---|---|---
 | | An IAM role required by the cloud network config controller to manage cloud network credentials for a cluster.
 | | An IAM role required by the ROSA Image Registry Operator to manage the OpenShift image registry storage in AWS S3 for a cluster.
 | | An IAM role required for OpenShift management on HCP clusters.
 | | An IAM role required for node management on HCP clusters.
 | | An IAM role required for control plane management on HCP clusters.
 | | An IAM role required for OpenShift management on HCP clusters.
 | | An IAM role required by the ROSA Ingress Operator to manage external access to a cluster.
 | | An IAM role required by ROSA to manage back-end storage through the Container Storage Interface (CSI).
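As described in the overview, these roles are created per cluster and named using the `<cluster_name>-<hash>-<role_name>` pattern. A quick sketch of how such a name is assembled, using the hypothetical cluster name and hash from the documentation's own example:

```shell
cluster_name="test"   # hypothetical cluster name
hash="abc1"           # hypothetical hash generated by the ROSA CLI
role="kube-system-control-plane-operator"
operator_role_name="${cluster_name}-${hash}-${role}"
echo "$operator_role_name"   # test-abc1-kube-system-control-plane-operator
```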
3.4.2. Creating Operator roles and policies
When you deploy a Red Hat OpenShift Service on AWS cluster, you must create the Operator IAM roles. The cluster Operators use the Operator roles and policies to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage and external access to a cluster.
Prerequisites
- You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
- You have installed and configured the latest ROSA CLI (`rosa`) on your installation host.
- You created the account-wide AWS roles.
Procedure
To create your Operator roles, run the following command:

$ rosa create operator-roles --hosted-cp --prefix=$OPERATOR_ROLES_PREFIX --oidc-config-id=$OIDC_ID --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role

The following breakdown provides options for the Operator role creation.

$ rosa create operator-roles --hosted-cp \
    --prefix=$OPERATOR_ROLES_PREFIX \  1
    --oidc-config-id=$OIDC_ID \  2
    --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/$ACCOUNT_ROLES_PREFIX-HCP-ROSA-Installer-Role  3

1. You must supply a prefix when creating these Operator roles. Failing to do so produces an error. See the Additional resources of this section for information on the Operator prefix.
2. This value is the OIDC configuration ID that you created for your Red Hat OpenShift Service on AWS cluster.
3. This value is the installer role ARN that you created when you created the Red Hat OpenShift Service on AWS account roles.

You must include the `--hosted-cp` parameter to create the correct roles for Red Hat OpenShift Service on AWS clusters. This command returns the following information.

Example output

The Operator roles are now created and ready to use for creating your Red Hat OpenShift Service on AWS cluster.
Verification
You can list the Operator roles associated with your Red Hat OpenShift Service on AWS account. Run the following command:

$ rosa list operator-roles

Example output

After the command runs, it displays all the prefixes associated with your AWS account and notes how many roles are associated with each prefix. If you need to see all of these roles and their details, enter "Yes" at the detail prompt to have these roles listed with specifics.
3.5. Roles required to use OpenShift Cluster Manager
The roles in this section are only required when you want to use OpenShift Cluster Manager to create and manage clusters. If you intend to create and manage clusters using only the ROSA CLI (`rosa`) and the OpenShift CLI (`oc`), these roles are not required.
3.5.1. Creating an ocm-role IAM role
You create your `ocm-role` IAM role by using the command-line interface (CLI).
Prerequisites
- You have an AWS account.
- You have Red Hat Organization Administrator privileges in the OpenShift Cluster Manager organization.
- You have the permissions required to install AWS account-wide roles.
- You have installed and configured the latest ROSA CLI (`rosa`) on your installation host.
Procedure
To create an ocm-role IAM role with basic privileges, run the following command:

$ rosa create ocm-role

To create an ocm-role IAM role with admin privileges, run the following command:

$ rosa create ocm-role --admin

This command allows you to create the role by specifying specific attributes. The following example output shows the "auto mode" selected, which lets the ROSA CLI (`rosa`) create your Operator roles and policies. See "Methods of account-wide role creation" for more information.
Example output
1. A prefix value for all of the created AWS resources. In this example, `ManagedOpenShift` prepends all of the AWS resources.
2. Choose if you want this role to have the additional admin permissions.

   Note: You do not see this prompt if you used the `--admin` option.

3. The Amazon Resource Name (ARN) of the policy to set permission boundaries.
4. Specify an IAM path for the user name.
5. Choose the method to create your AWS roles. Using `auto`, the ROSA CLI generates and links the roles and policies. In `auto` mode, you receive some different prompts to create the AWS roles.
6. The `auto` method asks if you want to create a specific `ocm-role` using your prefix.
7. Confirm that you want to associate your IAM role with your OpenShift Cluster Manager.
8. Links the created role with your AWS organization.
3.5.2. Creating a user-role IAM role
You can create your `user-role` IAM role by using the command-line interface (CLI).
Prerequisites
- You have an AWS account.
- You have installed and configured the latest ROSA CLI (`rosa`) on your installation host.
Procedure
To create a `user-role` IAM role with basic privileges, run the following command:

$ rosa create user-role

This command allows you to create the role by specifying specific attributes. The following example output shows the "auto mode" selected, which lets the ROSA CLI (`rosa`) create your Operator roles and policies. See "Understanding the auto and manual deployment modes" for more information.
Example output
1. A prefix value for all of the created AWS resources. In this example, `ManagedOpenShift` prepends all of the AWS resources.
2. The Amazon Resource Name (ARN) of the policy to set permission boundaries.
3. Specify an IAM path for the user name.
4. Choose the method to create your AWS roles. Using `auto`, the ROSA CLI generates and links the roles and policies. In `auto` mode, you receive some different prompts to create the AWS roles.
5. The `auto` method asks if you want to create a specific `user-role` using your prefix.
6. Links the created role with your AWS organization.
Chapter 4. Required AWS service quotas
Review this list of the Amazon Web Services (AWS) service quotas that are required to run a Red Hat OpenShift Service on AWS cluster.
4.1. Required AWS service quotas
The table below describes the AWS service quotas and levels required to create and run one Red Hat OpenShift Service on AWS cluster. Although the default values are suitable for most workloads, you might need to request additional quota for the following cases:

- Red Hat OpenShift Service on AWS clusters require a minimum AWS EC2 service quota of 32 vCPUs to provide for cluster creation, availability, and upgrades. The default maximum value for vCPUs assigned to Running On-Demand Standard Amazon EC2 instances is 5. Therefore, if you have not previously created a ROSA cluster using the same AWS account, you must request additional EC2 quota for `Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances`.
- Some optional cluster configuration features, such as custom security groups, might require you to request additional quota. For example, ROSA associates 1 security group with network interfaces in worker machine pools by default, and the default quota for `Security groups per network interface` is 5. If you want to add 5 custom security groups, this would bring the total number of security groups on worker network interfaces to 6, so you must request additional quota.
The AWS SDK allows ROSA to check quotas, but the AWS SDK calculation does not account for your existing usage. Therefore, it is possible for cluster creation to fail because of a lack of available quota even though the AWS SDK quota check passes. To fix this issue, increase your quota.
If you need to modify or increase a specific AWS quota, see Amazon’s documentation on requesting a quota increase. Large quota requests are submitted to Amazon Support for review, and can take some time to be approved. If your quota request is urgent, contact AWS Support.
Quota name | Service code | Quota code | AWS default | Minimum required | Description |
---|---|---|---|---|---|
Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances | ec2 | L-1216C47A | 5 | 32 | Maximum number of vCPUs assigned to the Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances. The default value of 5 vCPUs is not sufficient to create ROSA clusters. |
Storage for General Purpose SSD (gp3) volume storage in TiB | ebs | L-7A658B76 | 50 | 1[a] | The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp3) volumes in this Region. 1 TiB of storage is the required minimum for optimal performance. |
[a] The default quota of 50 TiB is more than Red Hat OpenShift Service on AWS clusters require; however, because AWS cost is based on usage rather than quota, Red Hat recommends using the default quota.
Quota name | Service code | Quota code | AWS default | Minimum required | Description |
---|---|---|---|---|---|
EC2-VPC Elastic IPs | ec2 | L-0263D0A3 | 5 | 5 | The maximum number of Elastic IP addresses that you can allocate for EC2-VPC in this Region. |
VPCs per Region | vpc | L-F678F1CE | 5 | 5 | The maximum number of VPCs per Region. This quota is directly tied to the maximum number of internet gateways per Region. |
Internet gateways per Region | vpc | L-A4707A72 | 5 | 5 | The maximum number of internet gateways per Region. This quota is directly tied to the maximum number of VPCs per Region. To increase this quota, increase the number of VPCs per Region. |
Network interfaces per Region | vpc | L-DF5E4CA3 | 5,000 | 5,000 | The maximum number of network interfaces per Region. |
Security groups per network interface | vpc | L-2AFB9258 | 5 | 5 | The maximum number of security groups per network interface. This quota, multiplied by the quota for rules per security group, cannot exceed 1000. |
Application Load Balancers per Region | elasticloadbalancing | L-53DA6B97 | 50 | 50 | The maximum number of Application Load Balancers that can exist in each region. |
4.2. Next steps
Chapter 5. Setting up the environment
After you meet the AWS prerequisites, set up your environment and install Red Hat OpenShift Service on AWS.
5.1. Installing and configuring the required CLI tools
Several command-line interface (CLI) tools are required to deploy and work with your cluster.
Prerequisites
- You have an AWS account.
- You have a Red Hat account.
Procedure
Log in to your Red Hat and AWS accounts to access the download page for each required tool.
- Log in to your Red Hat account at console.redhat.com.
- Log in to your AWS account at aws.amazon.com.
Install and configure the latest AWS CLI (`aws`).

- Install the AWS CLI by following the AWS Command Line Interface documentation appropriate for your workstation.
- Configure the AWS CLI by specifying your `aws_access_key_id`, `aws_secret_access_key`, and `region` in the `.aws/credentials` file. For more information, see AWS Configuration basics in the AWS documentation.

  Note: You can optionally use the `AWS_DEFAULT_REGION` environment variable to set the default AWS region.

- Query the AWS API to verify if the AWS CLI is installed and configured correctly:

  $ aws sts get-caller-identity --output text

  Example output

  <aws_account_id> arn:aws:iam::<aws_account_id>:user/<username> <aws_user_id>
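For reference, a minimal `.aws/credentials` file matching the configuration step above might look like the following. The values are placeholders, and the region shown is only an example; the region can also be supplied through the `AWS_DEFAULT_REGION` environment variable or the `.aws/config` file:

```ini
[default]
aws_access_key_id = <aws_access_key_id>
aws_secret_access_key = <aws_secret_access_key>
region = us-east-2
```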
Install and configure the latest ROSA CLI.

- Navigate to Downloads.
- Find Red Hat OpenShift Service on AWS command line interface (`rosa`) in the list of tools and click Download. The `rosa-linux.tar.gz` file is downloaded to your default download location.
- Extract the `rosa` binary file from the downloaded archive. The following example extracts the binary from a Linux tar archive:

  $ tar xvf rosa-linux.tar.gz

- Move the `rosa` binary file to a directory in your execution path. In the following example, the `/usr/local/bin` directory is included in the path of the user:

  $ sudo mv rosa /usr/local/bin/rosa

- Verify that the ROSA CLI is installed correctly by querying the `rosa` version:

  $ rosa version

  Example output

  1.2.47
  Your ROSA CLI is up to date.
Log in to the ROSA CLI using an offline access token.

- Run the login command:

  $ rosa login

  Example output

  To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa
  ? Copy the token and paste it here:

- Navigate to the URL listed in the command output to view your offline access token.
- Enter the offline access token at the command-line prompt to log in.

  ? Copy the token and paste it here: ******************* [full token length omitted]

  Note: In the future you can specify the offline access token by using the `--token="<offline_access_token>"` argument when you run the `rosa login` command.

- Verify that you are logged in and confirm that your credentials are correct before proceeding:

  $ rosa whoami
Install and configure the latest OpenShift CLI (
oc
).Use the ROSA CLI to download the
oc
CLI.The following command downloads the latest version of the CLI to the current working directory:
rosa download openshift-client
$ rosa download openshift-client
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Extract the
oc
binary file from the downloaded archive. The following example extracts the files from a Linux tar archive:tar xvf openshift-client-linux.tar.gz
$ tar xvf openshift-client-linux.tar.gz
Move the oc binary to a directory in your execution path. In the following example, the /usr/local/bin directory is included in the path of the user:

$ sudo mv oc /usr/local/bin/oc
Verify that the oc CLI is installed correctly:

$ rosa verify openshift-client

Example output

I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.17.3
5.2. Next steps
Chapter 6. Planning resource usage in your cluster
6.1. Planning your environment based on tested cluster maximums
This document describes how to plan your Red Hat OpenShift Service on AWS environment based on the tested cluster maximums.
Oversubscribing the physical resources on a node affects the resource guarantees that the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.
Some of the tested maximums are stretched only in a single dimension, so they can vary when many different kinds of objects are running on the cluster at once.
The numbers noted in this documentation are based on Red Hat testing methodology, setup, configuration, and tunings. These numbers can vary based on your own individual setup and environments.
While planning your environment, determine how many pods are expected to fit per node using the following formula:
required pods per cluster / pods per node = total number of nodes needed
The current maximum number of pods per node is 250. However, the number of pods that fit on a node is dependent on the application itself. Consider the application’s memory, CPU, and storage requirements, as described in Planning your environment based on application requirements.
Example scenario
If you want to scope your cluster for 2200 pods per cluster, you would need at least nine nodes, assuming that there are 250 maximum pods per node:
2200 / 250 = 8.8
If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node:
2200 / 20 = 110
Where:
required pods per cluster / total number of nodes = expected pods per node
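The sizing arithmetic above can be sketched as a quick calculation. This is an illustrative check, assuming the tested maximum of 250 pods per node and the 2200-pod example from the text:

```python
import math

# Worked check of the sizing formulas above.
required_pods = 2200
max_pods_per_node = 250  # tested cluster maximum

# required pods per cluster / pods per node = total number of nodes needed
nodes_needed = math.ceil(required_pods / max_pods_per_node)
print(nodes_needed)  # 2200 / 250 = 8.8, rounded up to 9 nodes

# required pods per cluster / total number of nodes = expected pods per node
pods_per_node = required_pods / 20
print(pods_per_node)  # spreading 2200 pods over 20 nodes gives 110 pods per node
```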
6.2. Planning your environment based on application requirements
This document describes how to plan your Red Hat OpenShift Service on AWS environment based on your application requirements.
Consider an example application environment:
| Pod type | Pod quantity | Max memory | CPU cores | Persistent storage |
|---|---|---|---|---|
| apache | 100 | 500 MB | 0.5 | 1 GB |
| node.js | 200 | 1 GB | 1 | 1 GB |
| postgresql | 100 | 1 GB | 2 | 10 GB |
| JBoss EAP | 100 | 1 GB | 1 | 1 GB |
Extrapolated requirements: 550 CPU cores, 450 GB RAM, and 1.4 TB storage.
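The extrapolated totals follow directly from multiplying each pod quantity by its per-pod requirement and summing. A minimal cross-check of the table's numbers:

```python
# Cross-check of the extrapolated totals from the example application table
# (pod quantity, max memory in GB, CPU cores, persistent storage in GB per pod).
workloads = {
    "apache":     {"pods": 100, "mem_gb": 0.5, "cpu": 0.5, "storage_gb": 1},
    "node.js":    {"pods": 200, "mem_gb": 1.0, "cpu": 1.0, "storage_gb": 1},
    "postgresql": {"pods": 100, "mem_gb": 1.0, "cpu": 2.0, "storage_gb": 10},
    "JBoss EAP":  {"pods": 100, "mem_gb": 1.0, "cpu": 1.0, "storage_gb": 1},
}

total_cpu = sum(w["pods"] * w["cpu"] for w in workloads.values())
total_mem = sum(w["pods"] * w["mem_gb"] for w in workloads.values())
total_storage = sum(w["pods"] * w["storage_gb"] for w in workloads.values())

print(total_cpu, total_mem, total_storage)  # 550.0 cores, 450.0 GB RAM, 1400 GB (1.4 TB)
```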
Instance size for nodes can be modulated up or down, depending on your preference. Nodes are often resource overcommitted. In this deployment scenario, you can choose to run additional smaller nodes or fewer larger nodes to provide the same amount of resources. Factors such as operational agility and cost-per-instance should be considered.
| Node type | Quantity | CPUs | RAM (GB) |
|---|---|---|---|
| Nodes (option 1) | 100 | 4 | 16 |
| Nodes (option 2) | 50 | 8 | 32 |
| Nodes (option 3) | 25 | 16 | 64 |
Some applications lend themselves well to overcommitted environments, and some do not. Most Java applications and applications that use huge pages are examples of applications that do not allow for overcommitment, because their memory cannot be used by other applications. In the example above, the environment would be roughly 30 percent overcommitted, a common ratio.
The application pods can access a service either by using environment variables or DNS. If environment variables are used, the kubelet injects variables for each active service when a pod is run on a node. A cluster-aware DNS server watches the Kubernetes API for new services and creates a set of DNS records for each one. If DNS is enabled throughout your cluster, then all pods should automatically be able to resolve services by their DNS name. Use DNS-based service discovery if you must go beyond 5000 services. When environment variables are used for service discovery, the argument list exceeds the allowed length after about 5000 services in a namespace, and pods and deployments start failing.
Disable the service links in the deployment’s service specification file to overcome this:
Example
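The example block here was lost in extraction. A minimal sketch of disabling service links with the Kubernetes enableServiceLinks pod spec field; the Deployment name, labels, and image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      enableServiceLinks: false   # stop the kubelet from injecting service environment variables
      containers:
      - name: my-app
        image: registry.example.com/my-app:latest   # illustrative image
```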
The number of application pods that can run in a namespace is dependent on the number of services and the length of the service name when the environment variables are used for service discovery. ARG_MAX on the system defines the maximum argument length for a new process, and it is set to 2097152 bytes (2 MiB) by default. The kubelet injects environment variables into each pod scheduled to run in the namespace, including:
- <SERVICE_NAME>_SERVICE_HOST=<IP>
- <SERVICE_NAME>_SERVICE_PORT=<PORT>
- <SERVICE_NAME>_PORT=tcp://<IP>:<PORT>
- <SERVICE_NAME>_PORT_<PORT>_TCP=tcp://<IP>:<PORT>
- <SERVICE_NAME>_PORT_<PORT>_TCP_PROTO=tcp
- <SERVICE_NAME>_PORT_<PORT>_TCP_PORT=<PORT>
- <SERVICE_NAME>_PORT_<PORT>_TCP_ADDR=<ADDR>
The pods in the namespace start to fail if the combined argument length exceeds the allowed value; the number of characters in each service name affects how quickly that limit is reached.
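The relationship between service count, service-name length, and ARG_MAX can be sketched with a rough estimate. The variable names below mirror the injected variables listed above; the service names, IP, and port are illustrative assumptions:

```python
# Rough estimate of the per-pod environment size that the kubelet injects
# for service discovery, compared against the default ARG_MAX (2 MiB).
ARG_MAX = 2097152  # default maximum argument length, in bytes (2 MiB)

def injected_env_bytes(service_name: str, ip: str = "10.0.0.1", port: str = "8080") -> int:
    """Approximate bytes of environment variables injected for one service."""
    name = service_name.upper().replace("-", "_")
    variables = [
        f"{name}_SERVICE_HOST={ip}",
        f"{name}_SERVICE_PORT={port}",
        f"{name}_PORT=tcp://{ip}:{port}",
        f"{name}_PORT_{port}_TCP=tcp://{ip}:{port}",
        f"{name}_PORT_{port}_TCP_PROTO=tcp",
        f"{name}_PORT_{port}_TCP_PORT={port}",
        f"{name}_PORT_{port}_TCP_ADDR={ip}",
    ]
    # +1 per variable for the NUL terminator in the environment block
    return sum(len(v) + 1 for v in variables)

# With 5000 services named like "my-service-0001", the injected environment
# alone already consumes a large fraction of the 2 MiB ARG_MAX budget,
# before any of the pod's own arguments and environment are counted.
total = sum(injected_env_bytes(f"my-service-{i:04d}") for i in range(5000))
print(total, ARG_MAX, total / ARG_MAX)
```

Longer service names inflate every injected variable, which is why the name length matters as much as the raw service count.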
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.