Chapter 5. Creating a Red Hat OpenShift Service on AWS cluster with egress lockdown
Creating a Red Hat OpenShift Service on AWS (ROSA) cluster with egress lockdown enhances your cluster's stability and security by allowing your cluster to use the image registry in its local region when the cluster cannot access the internet. Your cluster first tries to pull images from Quay and, if Quay cannot be reached, pulls them instead from the image registry in the local region.
All public and private clusters with egress lockdown get their Red Hat container images from an Amazon Elastic Container Registry (ECR) located in the local region of the cluster instead of gathering these images from various endpoints and registries on the internet. ECR provides storage for OpenShift release images as well as Red Hat Operators. All requests to ECR remain within your AWS network because they are served over a VPC endpoint within your cluster.
ROSA clusters with egress lockdown use AWS ECR to provision ROSA with HCP clusters without the need for public internet. Because necessary cluster lifecycle processes occur over AWS private networking, AWS ECR serves as a critical service for core cluster platform images. For more information on AWS ECR, see Amazon Elastic Container Registry.
You can create a fully operational cluster that does not require public egress by configuring a virtual private cloud (VPC) and using the --properties zero_egress:true flag when creating your cluster.
See Upgrading Red Hat OpenShift Service on AWS clusters to upgrade clusters using egress lockdown.
Clusters created in restricted network environments might be unable to use certain ROSA features, including Red Hat Insights and Telemetry. These clusters might also experience potential failures for workloads that require public access to registries such as quay.io. When using clusters installed with egress lockdown, you can also install Red Hat-owned Operators from OperatorHub. For a complete list of Red Hat-owned Operators, see the Red Hat Ecosystem Catalog. Only the default Operator channel is mirrored for any Operator that is installed in egress lockdown.
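For example, once your cluster is available, you can list the Operators that the mirrored catalog exposes. This is standard OpenShift CLI usage and is not specific to egress lockdown:
$ oc get packagemanifests -n openshift-marketplace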
Glossary of network environment terms
Although it is used throughout the Red Hat OpenShift Service on AWS documentation, disconnected environment is a broad term that can refer to environments with various levels of internet connectivity. Other terms are sometimes used to refer to a specific level of internet connectivity, and these environments might require additional unique configurations. These network types differ from a "standard network," which has full access to the internet.
The following table describes the different terms used to refer to environments without a full internet connection:
Term | Description |
---|---|
Air-gapped network | An environment or network that is completely isolated from an external network. This isolation depends on a physical separation, or an "air gap", between machines on the internal network and any part of an external network. Air-gapped environments are often used in industries with strict security or regulatory requirements. |
Disconnected environment | An environment or network that has some level of isolation from an external network. This isolation could be enabled by physical or logical separation between machines on the internal network and an external network. Regardless of the level of isolation from the external network, a cluster in a disconnected environment does not have access to public services hosted by Red Hat and requires additional setup to maintain full cluster functionality. |
Restricted network | An environment or network with limited connection to an external network. A physical connection might exist between machines on the internal network and an external network, but network traffic is limited by additional configurations, such as firewalls and proxies. |
Prerequisites
- You have an AWS account with sufficient permissions to create VPCs, subnets, and other required infrastructure.
- You have installed the Terraform v1.4.0+ CLI.
- You have installed the ROSA v1.2.45+ CLI.
- You have installed and configured the AWS CLI with the necessary credentials.
- You have installed the git CLI.
- You can use egress lockdown on all supported versions of Red Hat OpenShift Service on AWS that use the hosted control plane architecture; however, Red Hat suggests using the latest available z-stream release for each OpenShift Container Platform version.
- You can install and upgrade clusters with egress lockdown as you would a regular cluster. However, due to an upstream issue with how the internal image registry functions in disconnected environments, a cluster that uses egress lockdown cannot fully use all platform components, such as the image registry. You can restore these features by using the latest ROSA version when installing or upgrading your cluster.
5.1. Setting environment variables
Set the following environment variables to streamline resource creation.
Procedure
Set your environment variable by running the following command:
$ export <variable_name>=<variable_value>
You can confirm that your variable has been set by running the following command:
$ echo $<variable_name>
Table 5.2. Suggested variables for disconnected Red Hat OpenShift Service on AWS (ROSA) clusters

Variable name | Variable value | Notes |
---|---|---|
AWS_ACCOUNT_ID | $(aws sts get-caller-identity --query Account --output text) | You must be logged in to your AWS account with rosa login. |
CLUSTER_NAME | The name you want for your cluster. | Your cluster name cannot exceed 26 characters. |
OIDC_ID | The 32-digit ID for your OpenID Connect (OIDC) configuration. | You generate this ID by running rosa create oidc-config. |
OPERATOR_ROLES_PREFIX | The Operator role prefix. | If you want to make your AWS account roles use the same prefix as your Operator roles, you can run ACCOUNT_ROLES_PREFIX=$OPERATOR_ROLES_PREFIX after setting your Operator role prefix variable. |
PRIVATE_SUBNET | The IDs of your private subnets. | You must enclose this value in quotation marks (") and separate the subnet IDs with commas. |
REGION | Your AWS region. | - |
SUBNET_IDS | The IDs of all your subnets. | You must enclose this value in quotation marks (") and separate the subnet IDs with commas. |
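For example, you might set the suggested variables as follows. The cluster name, prefix, region, and subnet IDs are illustrative values that you must replace with your own:
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export CLUSTER_NAME=zero-egress-cluster
$ export OPERATOR_ROLES_PREFIX=zero-egress
$ export REGION=us-east-2
$ export PRIVATE_SUBNET="subnet-0aa0999cc7777bb88"
$ export SUBNET_IDS="subnet-0aa0999cc7777bb88"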
5.2. Creating a Virtual Private Cloud for your ROSA with HCP clusters
You must have a Virtual Private Cloud (VPC) to create a ROSA with HCP cluster. To pull images from the local ECR mirror over your VPC endpoint, you must configure an AWS PrivateLink service connection and modify the default security groups with specific tags. Use one of the following methods to create a VPC:
- Create a VPC using the ROSA command-line interface (CLI)
- Create a VPC by using a Terraform template
- Create a VPC using the AWS CLI
- Manually create the VPC resources in the AWS console
5.2.1. Creating a Virtual Private Cloud using the ROSA CLI
The rosa create network command is available in v1.2.48 or later of the ROSA command-line interface (CLI). The command uses AWS CloudFormation to create a VPC and the other networking components used to install a ROSA cluster. CloudFormation is a native AWS infrastructure-as-code tool and is compatible with the AWS CLI.
If you do not specify a template, CloudFormation uses a default template that creates resources with the following parameters:
VPC parameter | Value |
---|---|
Availability zones | 1 |
Region | us-east-1 |
VPC CIDR | 10.0.0.0/16 |
You can create and customize CloudFormation templates to use with the rosa create network command. See the additional resources of this section for information on the default VPC template.
Prerequisites
- You have configured your AWS account.
- You have configured your Red Hat accounts.
- You have installed and configured the latest version of the ROSA CLI.
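Procedure
Create the network by running the rosa create network command. The following invocation is a minimal sketch; the --param flags and the stack name shown are illustrative assumptions based on the default template, so run rosa create network --help to confirm the options that your CLI version supports:
$ rosa create network --param Region=$REGION --param Name=rosa-network-stack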
Verification
When completed, you receive a summary of the created resources:
INFO[0140] Resources created in stack:
INFO[0140] Resource: AttachGateway, Type: AWS::EC2::VPCGatewayAttachment, ID: <gateway_id>
INFO[0140] Resource: EC2VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
INFO[0140] Resource: EcrApiVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
INFO[0140] Resource: EcrDkrVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
INFO[0140] Resource: ElasticIP1, Type: AWS::EC2::EIP, ID: <IP>
INFO[0140] Resource: ElasticIP2, Type: AWS::EC2::EIP, ID: <IP>
INFO[0140] Resource: InternetGateway, Type: AWS::EC2::InternetGateway, ID: igw-016e1a71b9812464e
INFO[0140] Resource: KMSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
INFO[0140] Resource: NATGateway1, Type: AWS::EC2::NatGateway, ID: <nat-gateway_id>
INFO[0140] Resource: PrivateRoute, Type: AWS::EC2::Route, ID: <route_id>
INFO[0140] Resource: PrivateRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
INFO[0140] Resource: PrivateSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
INFO[0140] Resource: PublicRoute, Type: AWS::EC2::Route, ID: <route_id>
INFO[0140] Resource: PublicRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
INFO[0140] Resource: PublicSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
INFO[0140] Resource: S3VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
INFO[0140] Resource: STSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
INFO[0140] Resource: SecurityGroup, Type: AWS::EC2::SecurityGroup, ID: <security-group_id>
INFO[0140] Resource: SubnetPrivate1, Type: AWS::EC2::Subnet, ID: <private_subnet_id-1> 1
INFO[0140] Resource: SubnetPublic1, Type: AWS::EC2::Subnet, ID: <public_subnet_id-1> 2
INFO[0140] Resource: VPC, Type: AWS::EC2::VPC, ID: <vpc_id>
INFO[0140] Stack rosa-network-stack-5555 created 3

1: The ID of your private subnet.
2: The ID of your public subnet.
3: The name of the created CloudFormation stack.
Tagging your subnets
Before you can use your VPC to create a ROSA with HCP cluster, you must tag your VPC subnets. Automated service preflight checks verify that these resources are tagged correctly. The following table shows how to tag your resources:
Resource | Key | Value |
---|---|---|
Public subnet | kubernetes.io/role/elb | 1 |
Private subnet | kubernetes.io/role/internal-elb | 1 |

You must tag at least one private subnet and, if applicable, one public subnet.
Tag your resources in your terminal:
For public subnets, run the following command:
$ aws ec2 create-tags --resources <public_subnet_id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1
For private subnets, run the following command:
$ aws ec2 create-tags --resources <private_subnet_id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1
Verification
Verify that the tag is correct by running the following command:
$ aws ec2 describe-tags --filters "Name=resource-id,Values=<subnet_id>"
Example output
TAGS    Name                     <subnet_id>    subnet    <prefix>-subnet-public1-us-east-1a
TAGS    kubernetes.io/role/elb   <subnet_id>    subnet    1
Additional resources
5.2.2. Creating a Virtual Private Cloud using Terraform
Terraform is a tool that allows you to create various resources using an established template. The following process uses the default options as required to create a ROSA with HCP cluster. For more information about using Terraform, see the additional resources.
The Terraform instructions are for testing and demonstration purposes. Your own installation requires some modifications to the VPC for your own use. Ensure that you run this Terraform script in the same region where you intend to install your cluster. These examples use us-east-2.
Prerequisites
- You have installed Terraform version 1.4.0 or newer on your machine.
- You have installed Git on your machine.
Procedure
Open a shell prompt and clone the Terraform VPC repository by running the following command:
$ git clone https://github.com/openshift-cs/terraform-vpc-example
Navigate to the created directory by running the following command:
$ cd terraform-vpc-example/zero-egress
Initialize Terraform by running the following command:
$ terraform init
A message confirming the initialization appears when this process completes.
To build your VPC Terraform plan based on the existing Terraform template, run the plan command. You must include your AWS region, availability zones, CIDR blocks, and private subnets. You can also choose to specify a cluster name. A rosa-zero-egress.tfplan file is added to the hypershift-tf directory after the terraform plan completes. For more detailed options, see the Terraform VPC repository's README file.
$ terraform plan -out rosa-zero-egress.tfplan -var region=<aws_region> \ 1
    -var 'availability_zones=["aws_region_1a","aws_region_1b","aws_region_1c"]' \ 2
    -var vpc_cidr_block=10.0.0.0/16 \ 3
    -var 'private_subnets=["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]' 4
1: Enter your AWS region.
2: Enter the availability zones for the VPC. For example, for a VPC that uses ap-southeast-1, you would use the following as availability zones: ["ap-southeast-1a", "ap-southeast-1b", "ap-southeast-1c"].
3: Enter the CIDR block for your VPC.
4: Enter each of the subnets that are created for the VPC.
Apply this plan file to build your VPC by running the following command:
$ terraform apply rosa-zero-egress.tfplan
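Optional: Some versions of the example repository expose the created subnet IDs as a Terraform output. Assuming an output named cluster-subnets-string exists, you can capture the IDs for the SUBNET_IDS variable:
$ export SUBNET_IDS=$(terraform output -raw cluster-subnets-string)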
Tagging your subnets
Before you can use your VPC to create a ROSA with HCP cluster, you must tag your VPC subnets. Automated service preflight checks verify that these resources are tagged correctly. The following table shows how to tag your resources:
Resource | Key | Value |
---|---|---|
Public subnet | kubernetes.io/role/elb | 1 |
Private subnet | kubernetes.io/role/internal-elb | 1 |

You must tag at least one private subnet and, if applicable, one public subnet.
Tag your resources in your terminal:
For public subnets, run the following command:
$ aws ec2 create-tags --resources <public_subnet_id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1
For private subnets, run the following command:
$ aws ec2 create-tags --resources <private_subnet_id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1
Verification
Verify that the tag is correct by running the following command:
$ aws ec2 describe-tags --filters "Name=resource-id,Values=<subnet_id>"
Example output
TAGS    Name                     <subnet_id>    subnet    <prefix>-subnet-public1-us-east-1a
TAGS    kubernetes.io/role/elb   <subnet_id>    subnet    1
Additional resources
5.2.3. Creating a VPC using the AWS CLI
You can create a VPC by using the AWS CLI. For information on using this CLI, see the AWS create-vpc documentation.
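For example, the following sketch creates a VPC whose CIDR block matches the default machine CIDR used later in this chapter; the Name tag value is illustrative:
$ aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=<vpc_name>}]'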
5.2.4. Creating a Virtual Private Cloud manually
If you choose to manually create your Virtual Private Cloud (VPC) instead of using Terraform, go to the VPC page in the AWS console.
Your VPC must meet the requirements shown in the following table.
Requirement | Details |
---|---|
VPC name | You need to have the specific VPC name and ID when creating your cluster. |
CIDR range | Your VPC CIDR range should match your machine CIDR. |
Availability zone | You need one availability zone for a single-zone cluster, and you need three availability zones for a multi-zone cluster. |
Public subnet | You must have one public subnet with a NAT gateway for public clusters. Private clusters do not need a public subnet. |
DNS hostname and resolution | You must ensure that the DNS hostname and resolution are enabled. |
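For example, assuming an existing VPC, you can enable both DNS attributes by running the following commands:
$ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-hostnames '{"Value":true}'
$ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-support '{"Value":true}'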
Tagging your subnets
Before you can use your VPC to create a ROSA with HCP cluster, you must tag your VPC subnets. Automated service preflight checks verify that these resources are tagged correctly. The following table shows how to tag your resources:
Resource | Key | Value |
---|---|---|
Public subnet | kubernetes.io/role/elb | 1 |
Private subnet | kubernetes.io/role/internal-elb | 1 |

You must tag at least one private subnet and, if applicable, one public subnet.
Tag your resources in your terminal:
For public subnets, run the following command:
$ aws ec2 create-tags --resources <public_subnet_id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1
For private subnets, run the following command:
$ aws ec2 create-tags --resources <private_subnet_id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1
Verification
Verify that the tag is correct by running the following command:
$ aws ec2 describe-tags --filters "Name=resource-id,Values=<subnet_id>"
Example output
TAGS    Name                     <subnet_id>    subnet    <prefix>-subnet-public1-us-east-1a
TAGS    kubernetes.io/role/elb   <subnet_id>    subnet    1
5.3. Creating the account-wide STS roles and policies
Before you create your Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) cluster, you must create the required account-wide roles and policies.
Specific AWS-managed policies for ROSA with HCP must be attached to each role. Customer-managed policies must not be used with these required account roles. For more information regarding AWS-managed policies for ROSA with HCP clusters, see AWS managed policies for ROSA.
Prerequisites
- You have completed the AWS prerequisites for ROSA with HCP.
- You have available AWS service quotas.
- You have enabled the ROSA service in the AWS Console.
- You have installed and configured the latest ROSA CLI (rosa) on your installation host.
- You have logged in to your Red Hat account by using the ROSA CLI.
Procedure
If they do not exist in your AWS account, create the required account-wide STS roles and attach the policies by running the following command:
$ rosa create account-roles --hosted-cp
Ensure that your worker role has the correct AWS policy by running the following command:
$ aws iam attach-role-policy \
    --role-name ManagedOpenShift-HCP-ROSA-Worker-Role \ 1
    --policy-arn "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"

1: The role name must include the prefix that was created in the previous step. The default prefix is ManagedOpenShift.
Optional: Set your prefix as an environment variable by running the following command:
$ export ACCOUNT_ROLES_PREFIX=<account_role_prefix>
View the value of the variable by running the following command:
$ echo $ACCOUNT_ROLES_PREFIX
Example output
ManagedOpenShift
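You can confirm that the account roles exist and use the expected prefix by listing them:
$ rosa list account-roles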
For more information regarding AWS managed IAM policies for ROSA, see AWS managed IAM policies for ROSA.
5.4. Creating an OpenID Connect configuration
When you use a Red Hat OpenShift Service on AWS cluster, you can create the OpenID Connect (OIDC) configuration before creating your cluster. This configuration is registered to be used with OpenShift Cluster Manager.
Prerequisites
- You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
- You have installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, on your installation host.
Procedure
To create your OIDC configuration alongside the AWS resources, run the following command:
$ rosa create oidc-config --mode=auto --yes
This command returns the following information.
Example output
? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes
I: Setting up managed OIDC configuration
I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
	rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b
If you are going to create a Hosted Control Plane cluster please include '--hosted-cp'
I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName'
? Create the OIDC provider? Yes
I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'
When creating your cluster, you must supply the OIDC configuration ID. The CLI output provides this value for --mode auto; otherwise, you must determine the value from the aws CLI output for --mode manual.
Optional: You can save the OIDC configuration ID as a variable to use later. Run the following command to save the variable:
$ export OIDC_ID=<oidc_config_id> 1

1: In the example output above, the OIDC configuration ID is 13cdr6b.
View the value of the variable by running the following command:
$ echo $OIDC_ID
Example output
13cdr6b
Verification
You can list the possible OIDC configurations available for your clusters that are associated with your user organization. Run the following command:
$ rosa list oidc-config
Example output
ID                                MANAGED  ISSUER URL                                                              SECRET ARN
2330dbs0n8m3chkkr25gkkcd8pnj3lk2  true     https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2
233hvnrjoqu14jltk6lhbhf2tj11f8un  false    https://oidc-r7u1.s3.us-east-1.amazonaws.com                            aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN
5.5. Creating Operator roles and policies
When you deploy a ROSA with HCP cluster, you must create the Operator IAM roles that are required for Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) deployments. The cluster Operators use the Operator roles and policies to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage and external access to a cluster.
Prerequisites
- You have completed the AWS prerequisites for ROSA with HCP.
- You have installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, on your installation host.
- You created the account-wide AWS roles.
Procedure
To create your Operator roles, run the following command:
$ rosa create operator-roles --hosted-cp --prefix=$OPERATOR_ROLES_PREFIX --oidc-config-id=$OIDC_ID --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role
The following breakdown provides options for the Operator role creation.
$ rosa create operator-roles --hosted-cp \
    --prefix=$OPERATOR_ROLES_PREFIX \ 1
    --oidc-config-id=$OIDC_ID \ 2
    --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/$ACCOUNT_ROLES_PREFIX-HCP-ROSA-Installer-Role 3

1: You must supply a prefix when creating these Operator roles. Failing to do so produces an error. See the Additional resources of this section for information on the Operator prefix.
2: This value is the OIDC configuration ID that you created for your ROSA with HCP cluster.
3: This value is the installer role ARN that you created when you created the ROSA account roles.

You must include the --hosted-cp parameter to create the correct roles for ROSA with HCP clusters. This command returns the following information.
Example output
? Role creation mode: auto
? Operator roles prefix: <pre-filled_prefix>
? OIDC Configuration ID: 23soa2bgvpek9kmes9s7os0a39i13qm4 | https://dvbwgdztaeq9o.cloudfront.net/23soa2bgvpek9kmes9s7os0a39i13qm4
? Create hosted control plane operator roles: Yes
W: More than one Installer role found
? Installer role ARN: arn:aws:iam::4540112244:role/<prefix>-HCP-ROSA-Installer-Role
? Permissions boundary ARN (optional):
I: Reusable OIDC Configuration detected. Validating trusted relationships to operator roles:
I: Creating roles using 'arn:aws:iam::4540112244:user/<userName>'
I: Created role '<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials'
I: Created role '<prefix>-openshift-cloud-network-config-controller-cloud-credenti' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti'
I: Created role '<prefix>-kube-system-kube-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager'
I: Created role '<prefix>-kube-system-capa-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager'
I: Created role '<prefix>-kube-system-control-plane-operator' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator'
I: Created role '<prefix>-kube-system-kms-provider' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider'
I: Created role '<prefix>-openshift-image-registry-installer-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials'
I: Created role '<prefix>-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials'
I: To create a cluster with these roles, run the following command:
	rosa create cluster --sts --oidc-config-id 23soa2bgvpek9kmes9s7os0a39i13qm4 --operator-roles-prefix <prefix> --hosted-cp
The Operator roles are now created and ready to use for creating your ROSA with HCP cluster.
Verification
You can list the Operator roles associated with your ROSA account. Run the following command:
$ rosa list operator-roles
Example output
I: Fetching operator roles
ROLE PREFIX    AMOUNT IN BUNDLE
<prefix>       8
? Would you like to detail a specific prefix Yes 1
? Operator Role Prefix: <prefix>
ROLE NAME                                                            ROLE ARN                                                                                          VERSION  MANAGED
<prefix>-kube-system-capa-controller-manager                         arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager                        4.13     No
<prefix>-kube-system-control-plane-operator                          arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator                         4.13     No
<prefix>-kube-system-kms-provider                                    arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider                                   4.13     No
<prefix>-kube-system-kube-controller-manager                         arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager                        4.13     No
<prefix>-openshift-cloud-network-config-controller-cloud-credenti   arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti   4.13     No
<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials         arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials        4.13     No
<prefix>-openshift-image-registry-installer-cloud-credentials        arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials       4.13     No
<prefix>-openshift-ingress-operator-cloud-credentials                arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials               4.13     No

1: After the command runs, it displays all the prefixes associated with your AWS account and notes how many roles are associated with each prefix. If you need to see all of these roles and their details, enter "Yes" at the detail prompt to have these roles listed out with specifics.
5.6. Creating a ROSA with HCP cluster with egress lockdown using the CLI
When using the Red Hat OpenShift Service on AWS (ROSA) command-line interface (CLI), rosa, to create a cluster, you can select the default options to create the cluster quickly.
Prerequisites
- You have completed the AWS prerequisites for ROSA with HCP.
- You have available AWS service quotas.
- You have enabled the ROSA service in the AWS Console.
- You have installed and configured the latest ROSA CLI (rosa) on your installation host. Run rosa version to see your currently installed version of the ROSA CLI. If a newer version is available, the CLI provides a link to download the upgrade.
- You have logged in to your Red Hat account by using the ROSA CLI.
- You have created an OIDC configuration.
- You have verified that the AWS Elastic Load Balancing (ELB) service role exists in your AWS account.
Procedure
Use one of the following commands to create your ROSA with HCP cluster:
Note: When creating a ROSA with HCP cluster, the default machine Classless Inter-Domain Routing (CIDR) is 10.0.0.0/16. If this does not correspond to the CIDR range for your VPC subnets, add --machine-cidr <address_block> to the following commands. To learn more about the default CIDR ranges for Red Hat OpenShift Service on AWS, see the CIDR range definitions.
If you did not set environment variables, run the following command:
$ rosa create cluster --cluster-name=<cluster_name> \ 1
    --mode=auto --hosted-cp [--private] \
    --operator-roles-prefix <operator-role-prefix> \ 2
    --oidc-config-id <id-of-oidc-configuration> \
    --subnet-ids=<private-subnet-id> --region <region> \
    --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 \
    --pod-cidr 10.128.0.0/14 --host-prefix 23 \
    --billing-account <root-acct-id> \ 3
    --properties zero_egress:true
1: Specify the name of your cluster. If your cluster name is longer than 15 characters, it will contain an autogenerated domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. To customize the subdomain, use the --domain-prefix flag. The domain prefix cannot be longer than 15 characters, must be unique, and cannot be changed after cluster creation.
2: By default, the cluster-specific Operator role names are prefixed with the cluster name and a random 4-digit hash. You can optionally specify a custom prefix to replace <cluster_name>-<hash> in the role names. The prefix is applied when you create the cluster-specific Operator IAM roles. For information about the prefix, see About custom Operator IAM role prefixes. Note: If you specified custom ARN paths when you created the associated account-wide roles, the custom path is automatically detected and applied to the cluster-specific Operator roles when you create them in a later step.
3: If your billing account is different from your user account, add this argument and specify the AWS account that is responsible for all billing.
If you set the environment variables, create a cluster with egress lockdown that has a single initial machine pool, a privately available API, and a privately available Ingress by running the following command:
$ rosa create cluster --private --cluster-name=$CLUSTER_NAME \
    --mode=auto --hosted-cp --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
    --oidc-config-id=$OIDC_ID --subnet-ids=$SUBNET_IDS \
    --region $REGION --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 \
    --pod-cidr 10.128.0.0/14 --host-prefix 23 \
    --properties zero_egress:true
Check the status of your cluster by running the following command:
$ rosa describe cluster --cluster=<cluster_name>
The following State field changes are listed in the output as cluster installation progresses:
- pending (Preparing account)
- installing (DNS setup in progress)
- installing
- ready

Note: If the installation fails or the State field does not change to ready after more than 10 minutes, check the installation troubleshooting documentation for details. For more information, see Troubleshooting installations. For steps to contact Red Hat Support for assistance, see Getting support for Red Hat OpenShift Service on AWS.
Track the cluster creation progress by watching the Red Hat OpenShift Service on AWS installation program logs. To check the logs, run the following command:
$ rosa logs install --cluster=<cluster_name> --watch 1

1: Optional: To watch for new log messages as the installation progresses, use the --watch argument.