Prepare your environment
Planning, limits, and scalability for Red Hat OpenShift service on AWS classic architecture
Abstract
Chapter 1. Prerequisites checklist for deploying Red Hat OpenShift Service on AWS classic architecture
This is a high-level checklist of the prerequisites needed to create a Red Hat OpenShift Service on AWS classic architecture cluster with STS.
The machine that you run the installation process from must have access to the following:
- Amazon Web Services API and authentication service endpoints
- Red Hat OpenShift API and authentication service endpoints (`api.openshift.com` and `sso.redhat.com`)
- Internet connectivity to obtain installation artifacts during deployment
Starting with version 1.2.7 of the ROSA command-line interface (CLI) (rosa), all OIDC provider endpoint URLs on new clusters use Amazon CloudFront and the oidc.op1.openshiftapps.com domain. This change improves access speed, reduces latency, and improves resiliency for new clusters created with the ROSA CLI 1.2.7 or later. There are no supported migration paths for existing OIDC provider configurations.
1.1. Accounts and permissions
Ensure that you have the following accounts, credentials, and permissions.
1.1.1. AWS account
You must have an AWS account with certain permissions before creating your cluster.
- Create an AWS account if you do not already have one.
- Gather the credentials required to log in to your AWS account.
- Ensure that your AWS account has sufficient permissions to use the ROSA CLI.
- Enable Red Hat OpenShift Service on AWS classic architecture for your AWS account on the AWS console.
- If your account is the management account for your organization (used for AWS billing purposes), you must have `aws-marketplace:Subscribe` permissions available on your account. See Service control policy (SCP) prerequisites for more information, or see the AWS documentation for troubleshooting: AWS Organizations service control policy denies required AWS Marketplace permissions.
- Ensure you have not enabled restrictive tag policies. For more information, see Tag policies in the AWS documentation.
1.1.2. Red Hat account
Create your Red Hat account to maintain your Red Hat resources.
- Create a Red Hat account for the Red Hat Hybrid Cloud Console if you do not already have one.
- Gather the credentials required to log in to your Red Hat account.
1.2. CLI requirements
You must download and install several command-line interface (CLI) tools to deploy a cluster.
1.2.1. AWS CLI (aws)
The AWS CLI tool allows you to interact with AWS resources directly.
Procedure
- Install the AWS Command Line Interface.
- Log in to your AWS account using the AWS CLI. For more information, see Sign in through the AWS CLI in the AWS documentation.
- Verify your account identity:

  $ aws sts get-caller-identity

- Check whether the service role for ELB (Elastic Load Balancing) exists:

  $ aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing"

  If the role does not exist, create it by running the following command:

  $ aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
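The check-and-create steps above can be wrapped into one idempotent script. This is a sketch under stated assumptions: `ensure_elb_role` is an illustrative helper, not an AWS CLI command, and it takes the check and create commands as arguments so the decision logic can be exercised without AWS credentials.

```shell
#!/bin/sh
# Sketch: idempotent "ensure service-linked role" logic. The helper takes
# a check command (exits 0 when the role exists) and a create command
# (run only when the role is missing), so it is testable without AWS.
ensure_elb_role() {
    check_cmd=$1    # exits 0 when the role already exists
    create_cmd=$2   # runs only when the role is missing
    if $check_cmd >/dev/null 2>&1; then
        echo "role exists; nothing to do"
    else
        $create_cmd >/dev/null 2>&1 && echo "role created"
    fi
}

# Real invocation would pass the aws commands shown above:
#   ensure_elb_role \
#     'aws iam get-role --role-name AWSServiceRoleForElasticLoadBalancing' \
#     'aws iam create-service-linked-role --aws-service-name elasticloadbalancing.amazonaws.com'

# Simulated runs using true/false as stand-ins for the aws commands:
ensure_elb_role true  true    # role already present
ensure_elb_role false true    # role absent, gets created
```

Running the real commands requires configured AWS credentials; the stand-ins above only demonstrate the branch taken in each case.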
1.2.2. ROSA command-line interface (CLI) (rosa)
Install the ROSA CLI in your local environment.
Procedure
- Install the ROSA CLI from the web console.
- Log in to your Red Hat account by running `rosa login` and following the instructions in the command output:

  $ rosa login
  To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa
  ? Copy the token and paste it here:

  Alternatively, you can copy the full `rosa login --token=<abc..>` command from the token page and paste it in the terminal:

  $ rosa login --token=<abc..>

- Confirm that you are logged in using the correct account and credentials:

  $ rosa whoami
1.2.3. OpenShift CLI (oc)
The OpenShift CLI (oc) is not required to deploy a Red Hat OpenShift Service on AWS classic architecture cluster, but is a useful tool for interacting with your cluster after it is deployed.
Procedure
- Download and install `oc` from the OpenShift Cluster Manager Command-line interface (CLI) tools page, or follow the instructions in the Additional resources.
- Verify that the OpenShift CLI has been installed correctly by running the following command:

  $ rosa verify openshift-client
1.3. AWS infrastructure prerequisites
Before you create your cluster, you need to have sufficient AWS quota.
Procedure
- To verify that your AWS account has sufficient quota available to deploy a cluster, run the following command:

  $ rosa verify quota

  This command only checks the total quota allocated to your account; it does not reflect how much of that quota is already consumed. Running this command is optional because your quota is verified during cluster deployment. However, Red Hat recommends running it ahead of time to confirm your quota so that deployment is not interrupted by quota availability issues.
1.4. Service Control Policy (SCP) prerequisites
Red Hat OpenShift Service on AWS classic architecture clusters are hosted in an AWS account within an AWS organizational unit. A service control policy (SCP) is created and applied to the AWS organizational unit that manages what services the AWS sub-accounts are permitted to access.
- Ensure that your organization’s SCPs are not more restrictive than the roles and policies required by the cluster.
- When you create a Red Hat OpenShift Service on AWS classic architecture cluster, an associated AWS OpenID Connect (OIDC) identity provider is created.
1.5. Networking prerequisites
1.5.1. Firewall
You must configure your firewall so that your cluster can access the required domains and ports.
- Configure your firewall to allow access to the domains and ports listed in AWS firewall prerequisites.
1.5.2. VPC requirements for PrivateLink clusters
If you choose to deploy a PrivateLink cluster, you must deploy the cluster into a pre-existing Bring Your Own (BYO) VPC.
Installing a new Red Hat OpenShift Service on AWS classic architecture cluster into a VPC that was automatically created by the installer for a different cluster is not supported.
Procedure
- Create a public and private subnet for each availability zone (AZ) that your cluster uses.
- Alternatively, implement transit gateway for internet and egress with appropriate routes.
- The VPC's CIDR block must contain the `Networking.MachineCIDR` range, which is the IP address range for cluster machines.
- The subnet CIDR blocks must belong to the machine CIDR that you specify.
- Set both `enableDnsHostnames` and `enableDnsSupport` to `true`. That way, the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster internal DNS records.
- Verify route tables by running the following command:

  $ aws ec2 describe-route-tables --filters "Name=vpc-id,Values=<vpc-id>"

- Ensure that the cluster can egress either through a NAT gateway in a public subnet or through a transit gateway.
- Ensure that any user-defined routing (UDR) you want to follow is set up.
- You can also configure a cluster-wide proxy during or after installation.
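The requirement that subnet CIDR blocks belong to the machine CIDR can be sanity-checked numerically before cluster creation. This is an illustrative sketch for IPv4 only; `cidr_contains` and `ip_to_int` are our own helper names, not CLI commands.

```shell
#!/bin/sh
# Sketch: verify that a subnet CIDR block lies inside the machine CIDR
# (IPv4 only), by packing each address into a 32-bit integer and
# comparing the network portions under the outer prefix mask.

ip_to_int() {
    # split a dotted quad into four octets and pack into one integer
    old_ifs=$IFS; IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# cidr_contains OUTER INNER -> prints "yes" if INNER is fully within OUTER
cidr_contains() {
    outer_ip=${1%/*}; outer_len=${1#*/}
    inner_ip=${2%/*}; inner_len=${2#*/}
    # a contained subnet must have an equal or longer prefix
    if [ "$inner_len" -lt "$outer_len" ]; then echo no; return; fi
    mask=$(( (0xFFFFFFFF << (32 - outer_len)) & 0xFFFFFFFF ))
    if [ $(( $(ip_to_int "$outer_ip") & mask )) -eq \
         $(( $(ip_to_int "$inner_ip") & mask )) ]; then
        echo yes
    else
        echo no
    fi
}

cidr_contains 10.0.0.0/16 10.0.128.0/24   # subnet inside the machine CIDR
cidr_contains 10.0.0.0/16 10.1.0.0/24     # subnet outside the machine CIDR
```

The example CIDR values are hypothetical; substitute your own machine CIDR and subnet ranges.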
Note: You can install a non-PrivateLink Red Hat OpenShift Service on AWS classic architecture cluster in a pre-existing BYO VPC.
1.5.3. Additional custom security groups
During cluster creation, you can add additional custom security groups to a cluster that has an existing non-managed VPC. To do so, complete these prerequisites before you create the cluster:
- Create the custom security groups in AWS before you create the cluster.
- Associate the custom security groups with the VPC that you are using to create the cluster. Do not associate the custom security groups with any other VPC.
- You may need to request additional AWS quota for `Security groups per network interface`.
1.5.4. Custom DNS and domains
You can configure a custom domain name server and custom domain name for your cluster.
Prerequisites
- By default, Red Hat OpenShift Service on AWS classic architecture clusters require you to set the `domain name servers` option to `AmazonProvidedDNS` to ensure successful cluster creation and operation.
- To use a custom DNS server and domain name for your cluster, the Red Hat OpenShift Service on AWS classic architecture installer must be able to use VPC DNS with default DHCP options so that it can resolve internal IPs and services. This means that you must create a custom DHCP option set to forward DNS lookups to your DNS server, and associate this option set with your VPC before you create the cluster.
Procedure
- Confirm that your VPC is using the VPC Resolver by running the following command:

  $ aws ec2 describe-dhcp-options
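When reviewing the `aws ec2 describe-dhcp-options` output, the key check is whether the `domain-name-servers` configuration is set to `AmazonProvidedDNS`. A sketch of that check, run against a hypothetical, abbreviated payload (the sample JSON below is illustrative, not captured from a real account; in practice you would pipe the real command output in):

```shell
#!/bin/sh
# Sketch: inspect a describe-dhcp-options JSON payload for AmazonProvidedDNS.
# sample_json is a hypothetical, trimmed-down payload for illustration.
sample_json='{"DhcpOptions":[{"DhcpConfigurations":[{"Key":"domain-name-servers","Values":[{"Value":"AmazonProvidedDNS"}]}]}]}'

if printf '%s' "$sample_json" | grep -q '"Value": *"AmazonProvidedDNS"'; then
    echo "VPC resolver: AmazonProvidedDNS"
else
    echo "custom DNS configured; verify your DHCP option set forwards lookups correctly"
fi
```

A simple `grep` is enough for a yes/no check; for anything more structured, a JSON-aware tool is a better fit.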
Chapter 2. Detailed requirements for deploying Red Hat OpenShift Service on AWS classic architecture using STS
Red Hat OpenShift Service on AWS classic architecture provides a model that allows Red Hat to deploy clusters into a customer's existing Amazon Web Services (AWS) account.
AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS classic architecture because it provides enhanced security.
Ensure that the following prerequisites are met before installing your cluster.
2.1. Customer requirements for all Red Hat OpenShift Service on AWS classic architecture clusters
The following prerequisites must be complete before you deploy a Red Hat OpenShift Service on AWS classic architecture cluster that uses the AWS Security Token Service (STS).
2.2. AWS account
You must have an AWS account with the following considerations to deploy a Red Hat OpenShift Service on AWS classic architecture cluster.
- Your AWS account must allow sufficient quota to deploy your cluster.
- If your organization applies and enforces SCPs, these policies must not be more restrictive than the roles and policies required by the cluster.
- You can deploy native AWS services within the same AWS account.
- Your account must have a service-linked role to allow the installation program to configure Elastic Load Balancing (ELB). See "Creating the Elastic Load Balancing (ELB) service-linked role" for more information.
2.2.1. Support requirements
To receive Red Hat support, your account must use a specific AWS plan and have the required permissions on your account.
- Red Hat recommends that the customer have at least Business Support from AWS.
- Red Hat may have permission from the customer to request AWS support on their behalf.
- Red Hat may have permission from the customer to request AWS resource limit increases on the customer’s account.
- Red Hat manages the restrictions, limitations, expectations, and defaults for all Red Hat OpenShift Service on AWS classic architecture clusters in the same manner, unless otherwise specified in this requirements section.
2.2.2. Security requirements
Before deploying your cluster, ensure that you plan for egress and ingress access to certain domains and IP addresses.
- Red Hat must have ingress access to EC2 hosts and the API server from allow-listed IP addresses.
- Red Hat must have egress allowed to the domains documented in the "AWS Firewall prerequisites" section.
2.2.3. Requirements for using OpenShift Cluster Manager
The following configuration details are required only if you use OpenShift Cluster Manager to manage your clusters. If you use the CLI tools exclusively, then you can disregard these requirements.
2.2.3.1. AWS account association
When you provision Red Hat OpenShift Service on AWS classic architecture using OpenShift Cluster Manager (console.redhat.com), you must associate the ocm-role and user-role IAM roles with your AWS account using your Amazon Resource Name (ARN). This association process is also known as account linking.
The ocm-role ARN is stored as a label in your Red Hat organization while the user-role ARN is stored as a label inside your Red Hat user account. Red Hat uses these ARN labels to confirm that the user is a valid account holder and that the correct permissions are available to perform provisioning tasks in the AWS account.
2.2.4. Associating your AWS account with IAM roles
You can associate or link your AWS account with existing IAM roles by using the ROSA command-line interface (CLI) (rosa).
Prerequisites
- You have an AWS account.
- You have the permissions required to install AWS account-wide roles. See the "Additional resources" of this section for more information.
- You have installed and configured the latest AWS CLI (`aws`) and ROSA CLI (`rosa`) on your installation host.
- You have created the `ocm-role` and `user-role` IAM roles, but have not yet linked them to your AWS account. You can check whether your IAM roles are already linked by running the following commands:

  $ rosa list ocm-role

  $ rosa list user-role

  If `Yes` is displayed in the `Linked` column for both roles, you have already linked the roles to an AWS account.
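The `Linked` check can also be scripted by parsing the tabular output. A sketch, with a caveat: the sample output below is hypothetical (the exact column layout of `rosa list ocm-role` may differ between CLI versions), so verify against your own output before relying on the parsing.

```shell
#!/bin/sh
# Sketch: decide whether an ocm-role is already linked by reading the
# last (LINKED) column of its row. sample_output is hypothetical; in
# practice, capture the output of: rosa list ocm-role
sample_output='ROLE NAME  ROLE ARN  LINKED
ManagedOpenShift-OCM-Role  arn:aws:iam::000000000000:role/ManagedOpenShift-OCM-Role  Yes'

# skip the header line, take the last field of the data row
linked=$(printf '%s\n' "$sample_output" | awk 'NR>1 {print $NF}')

if [ "$linked" = "Yes" ]; then
    echo "ocm-role already linked"
else
    echo "run: rosa link ocm-role --role-arn <arn>"
fi
```

The ARN in the sample is a placeholder, not a real account.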
Procedure
- In the ROSA CLI, link your `ocm-role` resource to your Red Hat organization by using your Amazon Resource Name (ARN):

  Note: You must have Red Hat Organization Administrator privileges to run the `rosa link` command. After you link the `ocm-role` resource with your AWS account, it takes effect and is visible to all users in the organization.

  $ rosa link ocm-role --role-arn <arn>

  Example output:

  I: Linking OCM role
  ? Link the '<AWS ACCOUNT ID>' role with organization '<ORG ID>'? Yes
  I: Successfully linked role-arn '<AWS ACCOUNT ID>' with organization account '<ORG ID>'

- In the ROSA CLI, link your `user-role` resource to your Red Hat user account by using your Amazon Resource Name (ARN):

  $ rosa link user-role --role-arn <arn>

  Example output:

  I: Linking User role
  ? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' role with organization '<AWS ID>'? Yes
  I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' with organization account '<AWS ID>'
2.4. Requirements for deploying a cluster in an opt-in region
An AWS opt-in region is a region that is not enabled in your AWS account by default. If you want to deploy a Red Hat OpenShift Service on AWS classic architecture cluster that uses the AWS Security Token Service (STS) in an opt-in region, you must meet the following requirements:
- The region must be enabled in your AWS account. For more information about enabling opt-in regions, see Managing AWS Regions in the AWS documentation.
- The security token version in your AWS account must be set to version 2. You cannot use version 1 security tokens for opt-in regions.

  Important: Updating to security token version 2 can impact the systems that store the tokens, due to the increased token length. For more information, see the AWS documentation on setting STS preferences.
2.4.1. Setting the AWS security token version
If you want to create a Red Hat OpenShift Service on AWS classic architecture cluster with the AWS Security Token Service (STS) in an AWS opt-in region, you must set the security token version to version 2 in your AWS account.
Prerequisites
- You have installed and configured the latest AWS CLI on your installation host.
Procedure
- List the ID of the AWS account that is defined in your AWS CLI configuration:

  $ aws sts get-caller-identity --query Account --output json

  Ensure that the output matches the ID of the relevant AWS account.

- List the security token version that is set in your AWS account:

  $ aws iam get-account-summary --query SummaryMap.GlobalEndpointTokenVersion --output json

  Example output:

  1

- To update the security token version to version 2 for all regions in your AWS account, run the following command:

  $ aws iam set-security-token-service-preferences --global-endpoint-token-version v2Token

  Important: Updating to security token version 2 can impact the systems that store the tokens, due to the increased token length. For more information, see the AWS documentation on setting STS preferences.
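The check-then-update flow above can be expressed as a small decision helper. This is a sketch: `advise_token_version` is our own illustrative name, and the version value is passed in as an argument so the logic can be exercised without AWS credentials (in real use, feed it the output of the `get-account-summary` query above).

```shell
#!/bin/sh
# Sketch: given the GlobalEndpointTokenVersion value returned by
#   aws iam get-account-summary --query SummaryMap.GlobalEndpointTokenVersion --output json
# print the action needed, if any, to reach version 2 tokens.
advise_token_version() {
    case $1 in
        2) echo "already on version 2 tokens" ;;
        1) echo "run: aws iam set-security-token-service-preferences --global-endpoint-token-version v2Token" ;;
        *) echo "unexpected version: $1" ;;
    esac
}

advise_token_version 1   # version 1: prints the update command to run
advise_token_version 2   # version 2: nothing to do
```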
2.5. Red Hat managed IAM references for AWS
When you use STS as your cluster credential method, Red Hat is not responsible for creating and managing Amazon Web Services (AWS) IAM policies, IAM users, or IAM roles. For information on creating these roles and policies, see the following sections on IAM roles.
- To use the `ocm` CLI, you must have an `ocm-role` and a `user-role` resource.
2.7. Provisioned AWS Infrastructure
This is an overview of the provisioned Amazon Web Services (AWS) components on a deployed Red Hat OpenShift Service on AWS classic architecture cluster.
2.7.1. EC2 instances
AWS EC2 instances are required to deploy the control plane and data plane functions for Red Hat OpenShift Service on AWS classic architecture. Instance types can vary for control plane and infrastructure nodes, depending on the worker node count.
At a minimum, the following EC2 instances are deployed:
- Three `m5.2xlarge` control plane nodes
- Two `r5.xlarge` infrastructure nodes
- Two `m5.xlarge` worker nodes
The instance type shown for worker nodes is the default value, but you can customize the instance type for worker nodes according to the needs of your workload.
2.7.2. Amazon Elastic Block Store storage
Amazon Elastic Block Store (Amazon EBS) block storage is used for both local node storage and persistent volume storage. By default, the following storage is provisioned for each EC2 instance:
Control plane volume
- Size: 350 GiB
- Type: gp3
- Input/Output Operations Per Second (IOPS): 1000

Infrastructure volume
- Size: 300 GiB
- Type: gp3
- Input/Output Operations Per Second (IOPS): 900

Worker volume
- Default size: 300 GiB (adjustable at creation time)
- Minimum size: 128 GiB
- Type: gp3
- Input/Output Operations Per Second (IOPS): 900
Clusters deployed before the release of OpenShift Container Platform 4.11 use gp2 type storage by default.
2.7.3. Elastic Load Balancing
Each cluster can use up to two Classic Load Balancers for the application router and up to two Network Load Balancers for the API.
For more information, see the ELB documentation for AWS.
2.7.4. S3 storage
The image registry is backed by AWS S3 storage. Resources are pruned regularly to optimize S3 usage and cluster performance.
Two buckets are required, with a typical size of 2 TB each.
2.7.5. VPC
Configure your VPC according to the following requirements:
Subnets: Every cluster requires a minimum of one private subnet for every availability zone. For example, 1 private subnet is required for a single-zone cluster, and 3 private subnets are required for a cluster with 3 availability zones.
If your cluster needs direct access to a network that is external to the cluster, including the public internet, you require at least one public subnet.
Red Hat strongly recommends using unique subnets for each cluster. Sharing subnets between multiple clusters is not recommended.
Note: A public subnet connects directly to the internet through an internet gateway. A private subnet connects to the internet through a network address translation (NAT) gateway.
- Route tables: One route table per private subnet, and one additional table per cluster.
- Internet gateways: One Internet Gateway per cluster.
- NAT gateways: One NAT Gateway per public subnet.
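The per-cluster counts above can be tallied from the availability zone count. A sketch under the stated rules (one private subnet per AZ; for clusters with public egress, one public subnet and one NAT gateway per AZ; one route table per private subnet plus one per cluster; one internet gateway per cluster). The `vpc_summary` helper name is illustrative.

```shell
#!/bin/sh
# Sketch: tally minimum VPC resources for a non-PrivateLink cluster
# spanning N availability zones, using the rules stated above.
vpc_summary() {
    azs=$1
    echo "private subnets: $azs"
    echo "public subnets: $azs"
    echo "nat gateways: $azs"              # one per public subnet
    echo "route tables: $(( azs + 1 ))"    # one per private subnet + one per cluster
    echo "internet gateways: 1"            # one per cluster
}

vpc_summary 3   # a three-zone cluster
vpc_summary 1   # a single-zone cluster
```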
Figure 2.1. Sample VPC Architecture
2.7.6. Security groups
AWS security groups provide security at the protocol and port access level; they are associated with EC2 instances and Elastic Load Balancing (ELB) load balancers. Each security group contains a set of rules that filter traffic coming in and out of one or more EC2 instances.
Ensure that the ports required for cluster installation and operation are open on your network and configured to allow access between hosts. The requirements for the default security groups are listed in Required ports for default security groups.
| Group | Type | IP Protocol | Port range |
|---|---|---|---|
| MasterSecurityGroup | AWS::EC2::SecurityGroup | icmp | 0 |
| | | tcp | 22 |
| | | tcp | 6443 |
| | | tcp | 22623 |
| WorkerSecurityGroup | AWS::EC2::SecurityGroup | icmp | 0 |
| | | tcp | 22 |
| BootstrapSecurityGroup | AWS::EC2::SecurityGroup | tcp | 22 |
| | | tcp | 19531 |
2.7.7. Additional custom security groups
When you create a cluster using an existing non-managed VPC, you can add additional custom security groups during cluster creation. Custom security groups are subject to the following limitations:
- You must create the custom security groups in AWS before you create the cluster. For more information, see Amazon EC2 security groups for Linux instances.
- You must associate the custom security groups with the VPC that the cluster will be installed into. Your custom security groups cannot be associated with another VPC.
- You might need to request additional quota for your VPC if you are adding additional custom security groups. For information on AWS quota requirements for Red Hat OpenShift Service on AWS classic architecture, see Required AWS service quotas in Prepare your environment. For information on requesting an AWS quota increase, see Requesting a quota increase.
2.8. Networking prerequisites
The following sections detail the requirements to create your cluster.
2.8.1. Minimum bandwidth
During cluster deployment, Red Hat OpenShift Service on AWS classic architecture requires a minimum bandwidth of 120 Mbps between cluster infrastructure and the public internet or private network locations that provide deployment artifacts and resources. When network connectivity is slower than 120 Mbps (for example, when connecting through a proxy) the cluster installation process times out and deployment fails.
After cluster deployment, network requirements are determined by your workload. However, a minimum bandwidth of 120 Mbps helps to ensure timely cluster and operator upgrades.
2.9. AWS firewall prerequisites
If you are using a firewall to control egress traffic from your Red Hat OpenShift Service on AWS classic architecture cluster, you must configure your firewall to grant access to certain domain and port combinations listed below. Red Hat OpenShift Service on AWS classic architecture requires this access to provide a fully managed OpenShift service. You must configure an Amazon S3 gateway endpoint in your AWS Virtual Private Cloud (VPC). This endpoint is required to complete requests from the cluster to the Amazon S3 service.
2.9.1. Firewall allowlist requirements for Red Hat OpenShift Service on AWS classic architecture clusters using STS
You must allowlist several URLs to download required packages and tools for your cluster.
Only Red Hat OpenShift Service on AWS classic architecture clusters deployed with PrivateLink can use a firewall to control egress traffic.
2.9.1.1. Domains for installation packages and tools
| Domain | Port | Function |
|---|---|---|
| | 443 | Provides core container images. |
| | 443 | Provides core container images. |
| | 443 | Provides core container images. |
| | 443 | Provides core container images. |
| | 443 | Provides core container images. |
| | 443 | Provides core container images. |
| | 443 | Provides core container images. |
| | 443 | Provides core container images. |
| | 443 | Required. The |
| | 443 | Provides core container images. |
| | 443 | Provides core container images. |
| | 443 | Hosts all the container images that are stored on the Red Hat Ecosystem Catalog. Additionally, the registry provides access to the |
| | 443 | Required. Hosts a signature store that a container client requires for verifying images when pulling them from |
| | 443 | Required for all third-party images and certified Operators. |
| | 443 | Required. Allows interactions between the cluster and OpenShift Cluster Manager to enable functionality, such as scheduling upgrades. |
| | 443 | The |
| | 443 | Provides core container images as a fallback when quay.io is not available. |
| | 443 | The |
| | 443 | Used by Red Hat OpenShift Service on AWS classic architecture for STS implementation with managed OIDC configuration. |
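Once the allowlist is in place, egress to each domain can be spot-checked from inside your network. A sketch: `check_egress` and its `DRY_RUN` switch are our own illustrative conventions, not curl options. Of the example endpoints, quay.io and sso.redhat.com appear in this document; registry.redhat.io is an assumed example, so extend the list with your full allowlist.

```shell
#!/bin/sh
# Sketch: spot-check HTTPS egress to allowlisted domains. With DRY_RUN=1
# the probes are only printed; unset it to actually connect with curl.
check_egress() {
    for domain in "$@"; do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "would probe: https://${domain}:443"
        else
            curl -sS -o /dev/null --connect-timeout 5 "https://${domain}" \
                && echo "ok: $domain" || echo "BLOCKED: $domain"
        fi
    done
}

# Example endpoints; registry.redhat.io is an assumed addition.
DRY_RUN=1 check_egress quay.io sso.redhat.com registry.redhat.io
```

Run the non-dry-run mode from a host in the cluster's VPC to verify the firewall rules actually in effect.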
2.9.1.2. Domains for telemetry
| Domain | Port | Function |
|---|---|---|
| | 443 | Required for telemetry. |
| | 443 | Required for telemetry. |
| | 443 | Required for telemetry. |
| | 443 | Required for telemetry and Red Hat Lightspeed. |
| | 443 | Required for managed OpenShift-specific telemetry. |
| | 443 | Required for managed OpenShift-specific telemetry. |
Managed clusters require enabling telemetry to allow Red Hat to react more quickly to problems, better support the customers, and better understand how product upgrades impact clusters. For more information about how remote health monitoring data is used by Red Hat, see About remote health monitoring in the Additional resources section.
2.9.1.3. Domains for Amazon Web Services (AWS) APIs
| Domain | Port | Function |
|---|---|---|
| | 443 | Required to access AWS services and resources. |

Alternatively, if you choose not to use a wildcard for Amazon Web Services (AWS) APIs, you must allowlist the following URLs:

| Domain | Port | Function |
|---|---|---|
| | 443 | Used to install and manage clusters in an AWS environment. |
| | 443 | Used to install and manage clusters in an AWS environment. |
| | 443 | Used to install and manage clusters in an AWS environment. |
| | 443 | Used to install and manage clusters in an AWS environment. |
| | 443 | Used to install and manage clusters in an AWS environment, for clusters configured to use the global endpoint for AWS STS. |
| | 443 | Used to install and manage clusters in an AWS environment, for clusters configured to use regionalized endpoints for AWS STS. See AWS STS regionalized endpoints for more information. |
| | 443 | Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1, regardless of the region the cluster is deployed in. |
| | 443 | Used to install and manage clusters in an AWS environment. |
| | 443 | Used to install and manage clusters in an AWS environment. |
| | 443 | Allows the assignment of metadata about AWS resources in the form of tags. |
2.9.1.4. Domains for OpenShift
| Domain | Port | Function |
|---|---|---|
| | 443 | Used to access mirrored installation content and images. This site is also a source of release image signatures. |
| | 443 | Used to check if updates are available for the cluster. |
2.9.1.5. Domains for your site reliability engineering (SRE) and management
| Domain | Port | Function |
|---|---|---|
| | 443 | This alerting service is used by the in-cluster alertmanager to send alerts notifying Red Hat SRE of an event to take action on. |
| | 443 | This alerting service is used by the in-cluster alertmanager to send alerts notifying Red Hat SRE of an event to take action on. |
| | 443 | Alerting service used by Red Hat OpenShift Service on AWS classic architecture to send periodic pings that indicate whether the cluster is available and running. |
| | 443 | Alerting service used by Red Hat OpenShift Service on AWS classic architecture to send periodic pings that indicate whether the cluster is available and running. |
| | 443 | Required. Used by the |
| | 22 | The SFTP server used by |
2.11. Next steps
Chapter 3. Red Hat OpenShift Service on AWS classic architecture IAM role resources
You must create several role resources on your AWS account in order to create and manage a Red Hat OpenShift Service on AWS classic architecture cluster.
3.1. Overview of required roles
To create and manage your Red Hat OpenShift Service on AWS classic architecture cluster, you must create several account-wide and cluster-wide roles. If you intend to use OpenShift Cluster Manager to create or manage your cluster, you need some additional roles.
- To create and manage clusters
Several account-wide roles are required to create and manage Red Hat OpenShift Service on AWS classic architecture clusters. These roles only need to be created once per AWS account, and do not need to be created fresh for each cluster. One or more AWS managed policies are attached to each role to grant that role the required capabilities. You can specify your own prefix, or use the default prefix (`ManagedOpenShift`).

Note: Role names are limited to a maximum length of 64 characters in AWS IAM. When the user-specified prefix for a cluster is longer than 20 characters, the role name is truncated to observe this 64-character maximum in AWS IAM.
The following account-wide roles are required:
- `<prefix>-Worker-Role`
- `<prefix>-Support-Role`
- `<prefix>-Installer-Role`
- `<prefix>-ControlPlane-Role`
Note: Role creation does not request your AWS access or secret keys. AWS Security Token Service (STS) is used as the basis of this workflow. AWS STS uses temporary, limited-privilege credentials to provide authentication.
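As a sketch of the command form, the account-wide roles and policies can be created in auto mode with the ROSA CLI; the prefix shown is the default and can be replaced with your own:

```
$ rosa create account-roles --mode auto --prefix ManagedOpenShift
```

In auto mode, the ROSA CLI creates the roles, attaches the required policies, and prints the resulting role ARNs.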
- To use Operator-managed cluster capabilities
Some cluster capabilities, including several capabilities provided by default, are managed using Operators. Cluster-specific Operator roles (operator-roles in the ROSA CLI) are required to use these capabilities. These roles are used to obtain the temporary permissions required to carry out cluster operations such as managing back-end storage, ingress, and registry. Obtaining these permissions requires the configuration of an OpenID Connect (OIDC) provider, which connects to AWS Security Token Service (STS) to authenticate Operator access to AWS resources.

The following Operator roles are required for Red Hat OpenShift Service on AWS classic architecture clusters:
- openshift-cluster-csi-drivers-ebs-cloud-credentials
- openshift-cloud-network-config-controller-cloud-credentials
- openshift-machine-api-aws-cloud-credentials
- openshift-cloud-credential-operator-cloud-credentials
- openshift-image-registry-installer-cloud-credentials
- openshift-ingress-operator-cloud-credentials
When you create Operator roles using the rosa create operator-roles command, the roles created are named using the pattern <cluster_name>-<hash>-<role_name>, for example, test-abc1-kube-system-control-plane-operator. When your cluster name is longer than 15 characters, the role name is truncated.
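As a sketch of the command form, Operator roles for an existing cluster can be created in auto mode with the ROSA CLI, where <cluster_name> is a placeholder for your cluster name or ID:

```
$ rosa create operator-roles --cluster <cluster_name> --mode auto
```

Auto mode creates the six Operator roles listed above and attaches the corresponding policies without further prompting.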
- To use OpenShift Cluster Manager
The web user interface, OpenShift Cluster Manager, requires additional roles in your AWS account to establish a trust relationship between that AWS account and OpenShift Cluster Manager.
This trust relationship is achieved through the creation and association of the ocm-role AWS IAM role. This role has a trust policy with the AWS installer that links your Red Hat account to your AWS account. In addition, you also need a user-role AWS IAM role for each web UI user, which serves to identify these users. This user-role AWS IAM role has no permissions.

The following AWS IAM roles are required to use OpenShift Cluster Manager:
- ocm-role
- user-role
Additional resources
3.2. About the ocm-role IAM resource
You must create the ocm-role IAM resource to enable a Red Hat organization of users to create Red Hat OpenShift Service on AWS classic architecture clusters. Within the context of linking to AWS, a Red Hat organization is a single user within OpenShift Cluster Manager.
Some considerations for your ocm-role IAM resource are:
- Only one ocm-role IAM role can be linked per Red Hat organization; however, you can have any number of ocm-role IAM roles per AWS account. The web UI requires that only one of these roles can be linked at a time.
- Any user in a Red Hat organization may create and link an ocm-role IAM resource.
- Only the Red Hat Organization Administrator can unlink an ocm-role IAM resource. This limitation protects other Red Hat organization members from disturbing the interface capabilities of other users.

  Note: If you just created a Red Hat account that is not part of an existing organization, this account is also the Red Hat Organization Administrator.
- See "Understanding the OpenShift Cluster Manager role" in the Additional resources of this section for a list of the AWS permissions policies for the basic and admin ocm-role IAM resources.
Using the ROSA CLI (rosa), you can link your IAM resource when you create it.
"Linking" or "associating" your IAM resources with your AWS account means creating a trust policy between your ocm-role IAM role and the Red Hat OpenShift Cluster Manager AWS role. After creating and linking your IAM resource, you see a trust relationship from your ocm-role IAM resource in AWS with the arn:aws:iam::7333:role/RH-Managed-OpenShift-Installer resource.
After a Red Hat Organization Administrator has created and linked an ocm-role IAM resource, all organization members can create and link their own user-role IAM roles. This IAM resource needs to be created and linked only once per user. Even if another user in your Red Hat organization has already created and linked an ocm-role IAM resource, you still need to create and link your own user-role IAM role.
3.4. About the user-role IAM role
You need to create a user-role IAM role per web UI user to enable those users to create Red Hat OpenShift Service on AWS classic architecture clusters.
Some considerations for your user-role IAM role are:
- You need only one user-role IAM role per Red Hat user account, but your Red Hat organization can have many of these IAM resources.
- Any user in a Red Hat organization may create and link a user-role IAM role.
- There can be numerous user-role IAM roles per AWS account per Red Hat organization.
- Red Hat uses the user-role IAM role to identify the user. This IAM resource has no AWS account permissions.
- Your AWS account can have multiple user-role IAM roles, but you must link each IAM role to each user in your Red Hat organization. No user can have more than one linked user-role IAM role.
"Linking" or "associating" your IAM resources with your AWS account means creating a trust policy between your user-role IAM role and the Red Hat OpenShift Cluster Manager AWS role. After creating and linking this IAM resource, you see a trust relationship from your user-role IAM role in AWS with the arn:aws:iam::710019948333:role/RH-Managed-OpenShift-Installer resource.
3.4.1. Creating a user-role IAM role
You can create your user-role IAM roles by using the ROSA command-line interface (CLI) (rosa).
Prerequisites
- You have an AWS account.
- You have installed and configured the latest ROSA CLI, rosa, on your installation host.
Procedure
To create a user-role IAM role with basic privileges, run the following command:

$ rosa create user-role

This command allows you to create the role by specifying specific attributes. The following example output shows "auto mode" selected, which lets the ROSA CLI (rosa) create your Operator roles and policies. See "Understanding the auto and manual deployment modes" for more information. The following example shows what your creation flow may look like:

where:
- Role prefix: A prefix value for all of the created AWS resources. In this example, ManagedOpenShift prepends all of the AWS resources.
- Permissions boundary ARN (optional): The Amazon Resource Name (ARN) of the policy to set permission boundaries.
- Role Path (optional): Specify an IAM path for the user name.
- Role creation mode: Choose the method to create your AWS roles. Using auto, the ROSA CLI generates and links the roles and policies. In the auto mode, you receive some different prompts to create the AWS roles.
- Create the 'ManagedOpenShift-User.osdocs-Role' role?: The auto method asks if you want to create a specific user-role using your prefix.
- Link the 'arn:aws:iam::2066:role/ManagedOpenShift-User.osdocs-Role' role with account '1AGE'?: Links the created role with your AWS organization.

Important: If you unlink or delete your user-role IAM role before deleting your cluster, an error prevents you from deleting your cluster. You must create or relink this role to proceed with the deletion process.
3.5. Requirements for using OpenShift Cluster Manager
The following configuration details are required only if you use OpenShift Cluster Manager to manage your clusters. If you use the CLI tools exclusively, then you can disregard these requirements.
3.5.1. AWS account association
When you provision Red Hat OpenShift Service on AWS classic architecture using OpenShift Cluster Manager (console.redhat.com), you must associate the ocm-role and user-role IAM roles with your AWS account using your Amazon Resource Name (ARN). This association process is also known as account linking.
The ocm-role ARN is stored as a label in your Red Hat organization while the user-role ARN is stored as a label inside your Red Hat user account. Red Hat uses these ARN labels to confirm that the user is a valid account holder and that the correct permissions are available to perform provisioning tasks in the AWS account.
3.5.2. Associating your AWS account with IAM roles
You can associate or link your AWS account with existing IAM roles by using the ROSA command-line interface (CLI) (rosa).
Prerequisites
- You have an AWS account.
- You have the permissions required to install AWS account-wide roles. See the "Additional resources" of this section for more information.
- You have installed and configured the latest AWS CLI (aws) and ROSA CLI on your installation host.
- You have created the ocm-role and user-role IAM roles, but have not yet linked them to your AWS account. You can check whether your IAM roles are already linked by running the following commands:

  $ rosa list ocm-role

  $ rosa list user-role

  If Yes is displayed in the Linked column for both roles, you have already linked the roles to an AWS account.
Procedure
In the ROSA CLI, link your ocm-role resource to your Red Hat organization by using your Amazon Resource Name (ARN):

Note: You must have Red Hat Organization Administrator privileges to run the rosa link command. After you link the ocm-role resource with your AWS account, it takes effect and is visible to all users in the organization.

$ rosa link ocm-role --role-arn <arn>

For example:

I: Linking OCM role
? Link the '<AWS ACCOUNT ID>' role with organization '<ORG ID>'? Yes
I: Successfully linked role-arn '<AWS ACCOUNT ID>' with organization account '<ORG ID>'

In the ROSA CLI, link your user-role resource to your Red Hat user account by using your Amazon Resource Name (ARN):

$ rosa link user-role --role-arn <arn>

For example:

I: Linking User role
? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' role with organization '<AWS ID>'? Yes
I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' with organization account '<AWS ID>'
3.5.3. Associating multiple AWS accounts with your Red Hat organization
You can associate multiple AWS accounts with your Red Hat organization. Associating multiple accounts lets you create Red Hat OpenShift Service on AWS classic architecture clusters on any of the associated AWS accounts from your Red Hat organization.
With this capability, you can create clusters on different AWS profiles according to characteristics that make sense for your business, for example, by using one AWS profile for each region to create region-bound environments.
Prerequisites
- You have an AWS account.
- You are using OpenShift Cluster Manager to create clusters.
- You have the permissions required to install AWS account-wide roles.
- You have installed and configured the latest AWS CLI (aws) and ROSA command-line interface (CLI) (rosa) on your installation host.
- You have created the ocm-role and user-role IAM roles for Red Hat OpenShift Service on AWS classic architecture.
Procedure
To specify an AWS account profile when creating an OpenShift Cluster Manager role:

$ rosa create --profile <aws_profile> ocm-role

To specify an AWS account profile when creating a user role:

$ rosa create --profile <aws_profile> user-role

To specify an AWS account profile when creating the account roles:

$ rosa create --profile <aws_profile> account-roles

Note: If you do not specify a profile, the default AWS profile and its associated AWS region are used.
3.6. Permission boundaries for the installer role
You can apply a policy as a permissions boundary on an installer role. You can use an AWS-managed policy or a customer-managed policy to set the boundary for an Amazon Web Services (AWS) Identity and Access Management (IAM) entity (user or role). The combination of policy and boundary policy limits the maximum permissions for the user or role. Red Hat OpenShift Service on AWS classic architecture includes a set of three prepared permission boundary policy files that you can use to restrict permissions for the installer role, because changing the installer policy itself is not supported.
This feature is only supported on Red Hat OpenShift Service on AWS (classic architecture) clusters.
The permission boundary policy files are as follows:
- The Core boundary policy file contains the minimum permissions needed for the Red Hat OpenShift Service on AWS classic architecture installer to install a Red Hat OpenShift Service on AWS classic architecture cluster. The installer does not have permissions to create a virtual private cloud (VPC) or PrivateLink (PL). A VPC must be provided.
- The VPC boundary policy file contains the minimum permissions needed for the installer to create and manage the VPC. It does not include permissions for PL or core installation. If you need to install a cluster with enough permissions for the installer to install the cluster and create and manage the VPC, but you do not need to set up PL, then use the core and VPC boundary files together with the installer role.
- The PrivateLink (PL) boundary policy file contains the minimum permissions needed for the installer to create the AWS PL with a cluster. It does not include permissions for VPC or core installation. Provide a pre-created VPC for all PL clusters during installation.
When using the permission boundary policy files, the following combinations apply:
- No permission boundary policies means that the full installer policy permissions apply to your cluster.
Core only sets the most restricted permissions for the installer role. The VPC and PL permissions are not included in the Core only boundary policy.
- Installer cannot create or manage the VPC or PL.
- You must have a customer-provided VPC, and PrivateLink (PL) is not available.
Core + VPC sets the core and VPC permissions for the installer role.
- Installer cannot create or manage the PL.
- Assumes you are not using custom/BYO-VPC.
- Assumes the installer will create and manage the VPC.
Core + PrivateLink (PL) means the installer can provision the PL infrastructure.
- You must have a customer-provided VPC.
- This is for a private cluster with PL.
This example procedure applies to an installer role and policy with the most restrictive permissions, using only the core installer permission boundary policy for Red Hat OpenShift Service on AWS classic architecture. You can complete this with the AWS console or the AWS CLI. This example uses the AWS CLI and the following policy:
The following example shows sts_installer_core_permission_boundary_policy.json:
To use the permission boundaries, you must prepare the permission boundary policy and add it to your relevant installer role in AWS IAM. While the ROSA command-line interface (CLI) (rosa) offers a permission-boundary function, it applies to all roles, not just the installer role, which means it does not work with the provided permission boundary policies (which are only for the installer role).
Prerequisites
- You have an AWS account.
- You have the permissions required to administer AWS roles and policies.
- You have installed and configured the latest AWS CLI (aws) and ROSA CLI on your workstation.
- You have already prepared your Red Hat OpenShift Service on AWS classic architecture account-wide roles, including the installer role, and the corresponding policies. If these do not exist in your AWS account, see "Creating the account-wide STS roles and policies" in Additional resources.
Procedure
Prepare the policy file by entering the following command:

$ curl -o ./rosa-installer-core.json https://raw.githubusercontent.com/openshift/managed-cluster-config/master/resources/sts/4.20/sts_installer_core_permission_boundary_policy.json

Create the policy in AWS and gather its Amazon Resource Name (ARN) by entering the following command:

$ aws iam create-policy \
    --policy-name rosa-core-permissions-boundary-policy \
    --policy-document file://./rosa-installer-core.json \
    --description "ROSA installer core permission boundary policy, the minimum permission set, allows BYO-VPC, disallows PrivateLink"

Add the permission boundary policy to the installer role you want to restrict by entering the following command:

$ aws iam put-role-permissions-boundary \
    --role-name ManagedOpenShift-Installer-Role \
    --permissions-boundary arn:aws:iam::<account ID>:policy/rosa-core-permissions-boundary-policy

Display the installer role to validate the attached policies, including the permissions boundary, by entering the following command:

$ aws iam get-role --role-name ManagedOpenShift-Installer-Role \
    --output text | grep PERMISSIONSBOUNDARY

For example:

PERMISSIONSBOUNDARY arn:aws:iam::<account ID>:policy/rosa-core-permissions-boundary-policy Policy

For more examples of PL and VPC permission boundary policies, see:

The following example shows sts_installer_privatelink_permission_boundary_policy.json:

The following example shows sts_installer_vpc_permission_boundary_policy.json:
Chapter 4. Planning resource usage in your cluster
This document describes how to plan your Red Hat OpenShift Service on AWS classic architecture environment based on the tested cluster maximums.
4.1. Planning your environment based on tested cluster maximums
Oversubscribing the physical resources on a node affects the resource guarantees that the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.
Some of the tested maximums are stretched only in a single dimension. They will vary when many objects are running on the cluster.
The numbers noted in this documentation are based on Red Hat testing methodology, setup, configuration, and tunings. These numbers can vary based on your own individual setup and environments.
While planning your environment, determine how many pods are expected to fit per node using the following formula:
required pods per cluster / pods per node = total number of nodes needed

The current maximum number of pods per node is 250. However, the number of pods that fit on a node is dependent on the application itself. Consider the application’s memory, CPU, and storage requirements, as described in Planning your environment based on application requirements.

For example, if you want to scope your cluster for 2200 pods per cluster, you would need at least nine nodes, assuming that there are 250 maximum pods per node:

2200 / 250 = 8.8

If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node:

2200 / 20 = 110

Where:

required pods per cluster / total number of nodes = expected pods per node
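Because partial nodes cannot be provisioned, round the result of the node-count formula up to the next whole node. The 2200-pod example can be checked with shell ceiling-division arithmetic:

```shell
pods_per_cluster=2200
pods_per_node=250     # tested maximum pods per node

# Ceiling division: round up to the next whole node
echo $(( (pods_per_cluster + pods_per_node - 1) / pods_per_node ))
```

This prints 9, matching the "at least nine nodes" figure above.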
4.2. Planning your environment based on application requirements
This document describes how to plan your Red Hat OpenShift Service on AWS classic architecture environment based on your application requirements.
Consider an example application environment:
| Pod type | Pod quantity | Max memory | CPU cores | Persistent storage |
|---|---|---|---|---|
| apache | 100 | 500 MB | 0.5 | 1 GB |
| node.js | 200 | 1 GB | 1 | 1 GB |
| postgresql | 100 | 1 GB | 2 | 10 GB |
| JBoss EAP | 100 | 1 GB | 1 | 1 GB |
Extrapolated requirements: 550 CPU cores, 450 GB RAM, and 1.4 TB storage.
Instance size for nodes can be modulated up or down, depending on your preference. Nodes are often resource overcommitted. In this deployment scenario, you can choose to run additional smaller nodes or fewer larger nodes to provide the same amount of resources. Factors such as operational agility and cost-per-instance should be considered.
| Node type | Quantity | CPUs | RAM (GB) |
|---|---|---|---|
| Nodes (option 1) | 100 | 4 | 16 |
| Nodes (option 2) | 50 | 8 | 32 |
| Nodes (option 3) | 25 | 16 | 64 |
Some applications lend themselves well to overcommitted environments, and some do not. Most Java applications and applications that use huge pages are examples of applications that do not allow for overcommitment; that memory cannot be used for other applications. In the example above, the environment would be roughly 30 percent overcommitted, a common ratio.
The application pods can access a service either by using environment variables or DNS. If using environment variables, the variables for each active service are injected by the kubelet when a pod is run on a node. A cluster-aware DNS server watches the Kubernetes API for new services and creates a set of DNS records for each one. If DNS is enabled throughout your cluster, then all pods should automatically be able to resolve services by their DNS name. Use service discovery with DNS if you must go beyond 5000 services. When using environment variables for service discovery, if the argument list exceeds the allowed length after 5000 services in a namespace, the pods and deployments start failing.
Disable the service links in the deployment’s service specification file to overcome this. Consider the following example:
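A minimal sketch of a Deployment whose pod template sets enableServiceLinks to false, so the kubelet does not inject per-service environment variables (the name svc-frontend and the image are placeholder values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: svc-frontend            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: svc-frontend
  template:
    metadata:
      labels:
        app: svc-frontend
    spec:
      enableServiceLinks: false # stop injection of service environment variables
      containers:
      - name: app
        image: registry.example.com/app:latest   # placeholder image
```

With service links disabled, the pod relies on DNS for service discovery instead of injected variables.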
The number of application pods that can run in a namespace depends on the number of services and the length of the service names when environment variables are used for service discovery. ARG_MAX on the system defines the maximum argument length for a new process and is set to 2097152 bytes (2 MiB) by default. The kubelet injects environment variables into each pod scheduled to run in the namespace, including:
- <SERVICE_NAME>_SERVICE_HOST=<IP>
- <SERVICE_NAME>_SERVICE_PORT=<PORT>
- <SERVICE_NAME>_PORT=tcp://<IP>:<PORT>
- <SERVICE_NAME>_PORT_<PORT>_TCP=tcp://<IP>:<PORT>
- <SERVICE_NAME>_PORT_<PORT>_TCP_PROTO=tcp
- <SERVICE_NAME>_PORT_<PORT>_TCP_PORT=<PORT>
- <SERVICE_NAME>_PORT_<PORT>_TCP_ADDR=<ADDR>
The pods in the namespace start to fail if the total argument length exceeds the allowed value; the number of characters in each service name also affects the total.
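To see roughly why about 5000 services can overflow the default 2 MiB ARG_MAX, here is a back-of-the-envelope sketch in shell; the seven variables per service come from the list above, while the average bytes per variable is an assumed illustrative value, not a measured one:

```shell
services=5000
vars_per_service=7        # variables injected per service (see list above)
bytes_per_var=60          # assumed average bytes per NAME=value entry

total=$(( services * vars_per_service * bytes_per_var ))
arg_max=2097152           # default ARG_MAX (2 MiB)

echo "$total"
```

Under these assumptions the total is 2,100,000 bytes, slightly over the 2,097,152-byte default, illustrating why failures begin around the 5000-service mark.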
Chapter 5. Required AWS service quotas
Review this list of the Amazon Web Services (AWS) service quotas that are required to run a Red Hat OpenShift Service on AWS classic architecture cluster.
5.1. Required AWS service quotas
The table below describes the AWS service quotas and levels required to create and run one Red Hat OpenShift Service on AWS classic architecture cluster. Although most default values are suitable for most workloads, you might need to request additional quota for the following cases:
- Red Hat OpenShift Service on AWS classic architecture clusters require a minimum AWS EC2 service quota of 100 vCPUs to provide for cluster creation, availability, and upgrades. The default maximum value for vCPUs assigned to Running On-Demand Standard Amazon EC2 instances is 5. Therefore, if you have not previously created a Red Hat OpenShift Service on AWS classic architecture cluster using the same AWS account, you must request additional EC2 quota for Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances.
- Some optional cluster configuration features, such as custom security groups, might require you to request additional quota. For example, because Red Hat OpenShift Service on AWS classic architecture associates 1 security group with network interfaces in worker machine pools by default, and the default quota for Security groups per network interface is 5, if you want to add 5 custom security groups, you must request additional quota, because this would bring the total number of security groups on worker network interfaces to 6.
The AWS SDK allows Red Hat OpenShift Service on AWS classic architecture to check quotas, but the AWS SDK calculation does not account for your existing usage. Therefore, it is possible for cluster creation to fail because of a lack of available quota even though the AWS SDK quota check passes. To fix this issue, increase your quota.
If you need to modify or increase a specific AWS quota, see Amazon’s documentation on requesting a quota increase. Large quota requests are submitted to Amazon Support for review, and can take some time to be approved. If your quota request is urgent, contact AWS Support.
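If you know the service code and quota code (listed in the tables below), you can also submit the increase from the AWS CLI. For example, as a sketch, to request the documented minimum of 100 vCPUs for the EC2 instance quota:

```
$ aws service-quotas request-service-quota-increase \
    --service-code ec2 \
    --quota-code L-1216C47A \
    --desired-value 100
```

The command returns a request identifier; the increase still goes through the normal AWS review before it takes effect.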
| Quota name | Service code | Quota code | AWS default | Minimum required | Description |
|---|---|---|---|---|---|
| Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances | ec2 | L-1216C47A | 5 | 100 | Maximum number of vCPUs assigned to the Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances. The default value of 5 vCPUs is not sufficient to create Red Hat OpenShift Service on AWS classic architecture clusters. |
| Storage for General Purpose SSD (gp2) volume storage in TiB | ebs | L-D18FCD1D | 50 | 300 | The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp2) volumes in this Region. |
| Storage for General Purpose SSD (gp3) volume storage in TiB | ebs | L-7A658B76 | 50 | 300 | The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp3) volumes in this Region. 300 TiB of storage is the required minimum for optimal performance. |
| Storage for Provisioned IOPS SSD (io1) volumes in TiB | ebs | L-FD252861 | 50 | 300 | The maximum aggregated amount of storage, in TiB, that can be provisioned across Provisioned IOPS SSD (io1) volumes in this Region. 300 TiB of storage is the required minimum for optimal performance. |
| Quota name | Service code | Quota code | AWS default | Minimum required | Description |
|---|---|---|---|---|---|
| EC2-VPC Elastic IPs | ec2 | L-0263D0A3 | 5 | 5 | The maximum number of Elastic IP addresses that you can allocate for EC2-VPC in this Region. |
| VPCs per Region | vpc | L-F678F1CE | 5 | 5 | The maximum number of VPCs per Region. This quota is directly tied to the maximum number of internet gateways per Region. |
| Internet gateways per Region | vpc | L-A4707A72 | 5 | 5 | The maximum number of internet gateways per Region. This quota is directly tied to the maximum number of VPCs per Region. To increase this quota, increase the number of VPCs per Region. |
| Network interfaces per Region | vpc | L-DF5E4CA3 | 5,000 | 5,000 | The maximum number of network interfaces per Region. |
| Security groups per network interface | vpc | L-2AFB9258 | 5 | 5 | The maximum number of security groups per network interface. This quota, multiplied by the quota for rules per security group, cannot exceed 1000. |
| Snapshots per Region | ebs | L-309BACF6 | 10,000 | 10,000 | The maximum number of snapshots per Region |
| IOPS for Provisioned IOPS SSD (io1) volumes | ebs | L-B3A130E6 | 300,000 | 300,000 | The maximum aggregated number of IOPS that can be provisioned across Provisioned IOPS SSD (io1) volumes in this Region. |
| Application Load Balancers per Region | elasticloadbalancing | L-53DA6B97 | 50 | 50 | The maximum number of Application Load Balancers that can exist in each region. |
| Classic Load Balancers per Region | elasticloadbalancing | L-E9E9831D | 20 | 20 | The maximum number of Classic Load Balancers that can exist in each region. |
5.2. Next steps
Chapter 6. Setting up the environment for using STS
After you meet the AWS prerequisites, set up your environment and install Red Hat OpenShift Service on AWS classic architecture.
AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS classic architecture because it provides enhanced security.
6.1. Setting up the environment for STS
Before you create a Red Hat OpenShift Service on AWS classic architecture cluster that uses the AWS Security Token Service (STS), complete the following steps to set up your environment.
Prerequisites
- Review and complete the deployment prerequisites and policies.
- Create a Red Hat account, if you do not already have one. Then, check your email for a verification link. You will need these credentials to install Red Hat OpenShift Service on AWS classic architecture.
Procedure
Log in to the Amazon Web Services (AWS) account that you want to use.
It is recommended to use a dedicated AWS account to run production clusters. If you are using AWS Organizations, you can use an AWS account within your organization or create a new one.
If you are using AWS Organizations and you need to have a service control policy (SCP) applied to the AWS account you plan to use, these policies must not be more restrictive than the roles and policies required by the cluster.
Enable Red Hat OpenShift Service on AWS classic architecture in the AWS Management Console.
- Sign in to your AWS account.
- To enable Red Hat OpenShift Service on AWS classic architecture, go to the ROSA service and select Enable OpenShift.
Install and configure the AWS CLI.
Follow the AWS command-line interface documentation to install and configure the AWS CLI for your operating system.
Specify the correct aws_access_key_id and aws_secret_access_key in the .aws/credentials file. See AWS Configuration basics in the AWS documentation.
Set a default AWS region.
Note: You can use the AWS_DEFAULT_REGION environment variable to set the default AWS region.
Red Hat OpenShift Service on AWS classic architecture evaluates regions in the following priority order:
1. The region specified when running the rosa command with the --region flag.
2. The region set in the AWS_DEFAULT_REGION environment variable. See Environment variables to configure the AWS CLI in the AWS documentation.
3. The default region set in your AWS configuration file. See Quick configuration with aws configure in the AWS documentation.
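The region lookup order can be sketched as a small shell function. This is purely illustrative, not part of the ROSA CLI: resolve_region is a hypothetical helper, its first argument stands in for the --region flag, and its second argument stands in for the default region in your AWS configuration file.

```shell
# Illustrative sketch of the region lookup order; resolve_region is a
# hypothetical helper, not a rosa command.
resolve_region() {
  flag_region="$1"     # stand-in for the --region flag
  config_region="$2"   # stand-in for the default in ~/.aws/config
  if [ -n "$flag_region" ]; then
    echo "$flag_region"                 # 1. the --region flag wins
  elif [ -n "$AWS_DEFAULT_REGION" ]; then
    echo "$AWS_DEFAULT_REGION"          # 2. then the environment variable
  else
    echo "$config_region"               # 3. finally the config file default
  fi
}

unset AWS_DEFAULT_REGION
resolve_region us-east-1 eu-west-1      # prints us-east-1
export AWS_DEFAULT_REGION=ap-south-1
resolve_region "" eu-west-1             # prints ap-south-1
```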
Optional: Configure your AWS CLI settings and credentials by using an AWS named profile. rosa evaluates AWS named profiles in the following priority order:
1. The profile specified when running the rosa command with the --profile flag.
2. The profile set in the AWS_PROFILE environment variable. See Named profiles in the AWS documentation.
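Named profiles are defined in the shared AWS credentials file. The fragment below is illustrative only; the profile name rosa-prod and the placeholder key values are not part of any required configuration.

```ini
# ~/.aws/credentials -- the default profile plus an illustrative named profile
[default]
aws_access_key_id     = <default_access_key_id>
aws_secret_access_key = <default_secret_access_key>

[rosa-prod]
aws_access_key_id     = <prod_access_key_id>
aws_secret_access_key = <prod_secret_access_key>
```

You could then select the profile for a single command with the --profile rosa-prod flag, or for a whole session by exporting AWS_PROFILE=rosa-prod.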
Verify the AWS CLI is installed and configured correctly by running the following command to query the AWS API:
$ aws sts get-caller-identity
Install the latest version of the ROSA CLI (rosa).
- Download the latest release of the ROSA CLI for your operating system.
- Optional: Rename the file you downloaded to rosa and make the file executable. This documentation uses rosa to refer to the executable file.
  $ chmod +x rosa
- Optional: Add rosa to your path.
  $ mv rosa /usr/local/bin/rosa
- Enter the following command to verify your installation:
  $ rosa
- Generate the command completion scripts for the ROSA CLI. The following example generates the Bash completion scripts for a Linux machine:
  $ rosa completion bash | sudo tee /etc/bash_completion.d/rosa
- Source the scripts to enable rosa command completion from your existing terminal. The following example sources the Bash completion scripts for rosa on a Linux machine:
  $ source /etc/bash_completion.d/rosa
Log in to your Red Hat account with the ROSA CLI.
Enter the following command:
$ rosa login
Replace <my_offline_access_token> with your token. For example:
To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa
? Copy the token and paste it here: <my_offline_access_token>
After you paste the token, the output confirms the login:
I: Logged in as '<rh-rosa-user>' on 'https://api.openshift.com'
Verify that your AWS account has the necessary quota to deploy a Red Hat OpenShift Service on AWS classic architecture cluster.
$ rosa verify quota [--region=<aws_region>]
For example:
I: Validating AWS quota...
I: AWS quota ok
Note: Your AWS quota can vary by region. If you receive any errors, try a different region.
If you need to increase your quota, go to the AWS Management Console and request a quota increase for the service that failed.
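You can also request the increase from the command line through the AWS Service Quotas service instead of the console. The helper below is a hypothetical convenience function that only assembles the aws CLI command; the service and quota codes shown (Application Load Balancers per Region, from the table in Chapter 5) and the desired value are examples only.

```shell
# Hypothetical helper that assembles an 'aws service-quotas' increase request.
# Arguments: service code, quota code, desired value.
build_quota_request() {
  echo "aws service-quotas request-service-quota-increase" \
       "--service-code $1 --quota-code $2 --desired-value $3"
}

# For example, to raise the Application Load Balancers per Region quota:
build_quota_request elasticloadbalancing L-53DA6B97 100
```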
After the quota check succeeds, proceed to the next step.
Prepare your AWS account for cluster deployment:
Run the following command to verify that your Red Hat and AWS credentials are set up correctly. Check that your AWS Account ID, Default Region, and ARN match what you expect. You can safely ignore the rows beginning with OpenShift Cluster Manager for now.
$ rosa whoami
Install the OpenShift CLI (oc), version 4.7.9 or greater, from the ROSA CLI (rosa).
- Enter this command to download the latest version of the oc CLI:
  $ rosa download openshift-client
- After downloading the oc CLI, unzip it and add it to your path.
- Enter this command to verify that the oc CLI is installed correctly:
  $ rosa verify openshift-client
- After completing these steps, you are ready to set up IAM and OIDC access-based roles.
6.2. Next steps
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.