Chapter 6. Creating a private cluster on Red Hat OpenShift Service on AWS


For Red Hat OpenShift Service on AWS workloads that do not require public internet access, you can create a private cluster.

You can create a private cluster with multiple availability zones (Multi-AZ) on Red Hat OpenShift Service on AWS using the ROSA command-line interface (CLI), rosa.

Prerequisites

  • You have available AWS service quotas.
  • You have enabled Red Hat OpenShift Service on AWS in the AWS Console.
  • You have installed and configured the latest version of the ROSA CLI on your installation host.

Procedure

Creating a cluster with hosted control planes can take around 10 minutes.

  1. Create a VPC with at least one private subnet. Ensure that your machine’s classless inter-domain routing (CIDR) matches your virtual private cloud’s CIDR. For more information, see Requirements for using your own VPC and VPC Validation.

    Important

    If you use a firewall, you must configure it so that Red Hat OpenShift Service on AWS can access the sites that it requires to function.

    For more information, see the "AWS PrivateLink firewall prerequisites" section.

  2. Create the account-wide IAM roles by running the following command:

    $ rosa create account-roles --hosted-cp
  3. Create the OIDC configuration by running the following command:

    $ rosa create oidc-config --mode=auto --yes

    Save the OIDC configuration ID because you need it to create the Operator roles.

    Example output

    I: Setting up managed OIDC configuration
    I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
    	rosa create operator-roles --prefix <user-defined> --oidc-config-id 28s4avcdt2l318r1jbk3ifmimkurk384
    If you are going to create a Hosted Control Plane cluster please include '--hosted-cp'
    I: Creating OIDC provider using 'arn:aws:iam::46545644412:user/user'
    I: Created OIDC provider with ARN 'arn:aws:iam::46545644412:oidc-provider/oidc.op1.openshiftapps.com/28s4avcdt2l318r1jbk3ifmimkurk384'
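The OIDC configuration ID is the final path segment of the provider ARN shown in the example output above. If you captured the ARN in a variable, you can extract the ID with shell parameter expansion; this sketch reuses the example ARN from the output above:

```shell
# Example provider ARN, copied from the `rosa create oidc-config` output above
OIDC_ARN="arn:aws:iam::46545644412:oidc-provider/oidc.op1.openshiftapps.com/28s4avcdt2l318r1jbk3ifmimkurk384"

# Strip everything up to and including the last slash to get the config ID
OIDC_CONFIG_ID="${OIDC_ARN##*/}"
echo "$OIDC_CONFIG_ID"
```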

  4. Create the Operator roles by running the following command:

    $ rosa create operator-roles --hosted-cp --prefix <operator_roles_prefix> --oidc-config-id <oidc_config_id> --installer-role-arn arn:aws:iam::<account_id>:role/<account_roles_prefix>-HCP-ROSA-Installer-Role
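The installer role ARN combines your AWS account ID with the account-roles prefix you chose in step 2. A minimal sketch of composing it from variables, using a hypothetical account ID and prefix:

```shell
AWS_ACCOUNT_ID="123456789012"    # hypothetical AWS account ID
ACCOUNT_ROLES_PREFIX="myrosa"    # hypothetical prefix used with `rosa create account-roles`

# Assemble the ARN expected by --installer-role-arn
INSTALLER_ROLE_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role"
echo "$INSTALLER_ROLE_ARN"
```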
  5. Create a private Red Hat OpenShift Service on AWS cluster by running the following command:

    $ rosa create cluster --private --cluster-name=<cluster_name> --sts --mode=auto --hosted-cp --operator-roles-prefix <operator_role_prefix> --oidc-config-id <oidc_config_id> [--machine-cidr=<VPC CIDR>/16] --subnet-ids=<private-subnet-id1>[,<private-subnet-id2>,<private-subnet-id3>]
  6. Enter the following command to check the status of your cluster. During cluster creation, the State field in the output transitions from pending to installing, and finally to ready.

    $ rosa describe cluster --cluster=<cluster_name>
    Note

    If installation fails or the State field does not change to ready after 10 minutes, see the "Troubleshooting Red Hat OpenShift Service on AWS installations" documentation in the Additional resources section.
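If you want to script the status check above rather than re-running it by hand, a polling loop works. This sketch stubs the state lookup with a simulated progression; a real script would replace the `cluster_state` function body with a call such as `rosa describe cluster --cluster="$CLUSTER_NAME" -o json` piped through a JSON parser, and would sleep between attempts:

```shell
# Simulated state progression; in a real environment the state would
# come from `rosa describe cluster` instead of this array
STATES=(pending installing ready)
i=0
cluster_state() { echo "${STATES[$i]}"; }

# Poll until the cluster reports ready
until [ "$(cluster_state)" = "ready" ]; do
  i=$((i + 1))    # a real script would `sleep 60` here instead
done
echo "cluster is $(cluster_state)"
```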

  7. Enter the following command to follow the OpenShift installer logs to track the progress of your cluster:

    $ rosa logs install --cluster=<cluster_name> --watch

With Red Hat OpenShift Service on AWS clusters, the AWS PrivateLink endpoint exposed in the host’s Virtual Private Cloud (VPC) has a security group that limits access to requests that originate from within the cluster’s Machine CIDR range. You must create and attach another security group to the PrivateLink endpoint to grant API access to entities outside of the VPC through VPC peering, transit gateways, or other network connectivity.

Important

Adding additional AWS security groups to the AWS PrivateLink endpoint is only supported on Red Hat OpenShift Service on AWS version 4.17.2 and later.

Prerequisites

  • Your corporate network or other VPC has connectivity to the cluster’s VPC.
  • You have permission to create and attach security groups within the VPC.

Procedure

  1. Set your cluster name as an environment variable by running the following command:

    $ export CLUSTER_NAME=<cluster_name>

    Verify that the variable exists by running the following command:

    $ echo $CLUSTER_NAME

    Example output

    hcp-private

  2. Find the VPC endpoint (VPCE) ID and VPC ID by running the following command:

    $ read -r VPCE_ID VPC_ID <<< $(aws ec2 describe-vpc-endpoints --filters "Name=tag:api.openshift.com/id,Values=$(rosa describe cluster -c ${CLUSTER_NAME} -o yaml | grep '^id: ' | cut -d' ' -f2)" --query 'VpcEndpoints[].[VpcEndpointId,VpcId]' --output text)
    Warning

    Modifying or removing the default AWS PrivateLink endpoint security group is not supported and might result in unexpected behavior.
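In the command above, `read -r` splits the two tab-separated fields that `aws ec2 describe-vpc-endpoints --output text` returns into the `VPCE_ID` and `VPC_ID` variables. A minimal local sketch of that parsing, using hypothetical endpoint and VPC IDs in place of live AWS output:

```shell
# Simulated `--output text` result: VpcEndpointId and VpcId,
# separated by a tab (IDs are hypothetical)
DESCRIBE_OUTPUT=$'vpce-0abc123\tvpc-0def456'

# Default IFS splits on the tab; -r disables backslash escapes
read -r VPCE_ID VPC_ID <<< "$DESCRIBE_OUTPUT"
echo "$VPCE_ID $VPC_ID"
```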

  3. Create an additional security group by running the following command:

    $ export SG_ID=$(aws ec2 create-security-group --description "Granting API access to ${CLUSTER_NAME} from outside of VPC" --group-name "${CLUSTER_NAME}-api-sg" --vpc-id $VPC_ID --output text)
  4. Add an inbound (ingress) rule to the security group by running the following command:

    $ aws ec2 authorize-security-group-ingress --group-id $SG_ID --ip-permissions FromPort=443,ToPort=443,IpProtocol=tcp,IpRanges=[{CidrIp=<cidr-to-allow>}] 1

    1
    Specify the CIDR block that you want to allow access from.
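Because `authorize-security-group-ingress` fails on a malformed CIDR, you may want to sanity-check the value before running the command. This sketch defines a hypothetical `is_cidr` helper (not part of the AWS CLI) that validates the basic `a.b.c.d/prefix` shape:

```shell
# Hypothetical helper: returns success only for strings shaped like
# an IPv4 CIDR block with a prefix length of 0-32
is_cidr() {
  [[ "$1" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}/([0-9]|[12][0-9]|3[0-2])$ ]]
}

is_cidr "10.0.0.0/16" && echo "valid"
is_cidr "10.0.0.0" || echo "invalid: missing prefix length"
```

Note that this only checks the shape; it does not reject octets above 255.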
  5. Add the new security group to the VPCE by running the following command:

    $ aws ec2 modify-vpc-endpoint --vpc-endpoint-id $VPCE_ID --add-security-group-ids $SG_ID

You can now access the API of your Red Hat OpenShift Service on AWS private cluster from the specified CIDR block.

You can allow AWS Identity and Access Management (IAM) roles as additional principals to connect to your cluster’s private API server endpoint.

You can access your Red Hat OpenShift Service on AWS cluster’s API server endpoint from either the public internet or the interface endpoint that was created within the VPC private subnets. By default, you can privately access your Red Hat OpenShift Service on AWS API server by using the -kube-system-kube-controller-manager Operator role. To access the Red Hat OpenShift Service on AWS API server directly from another account, without using the primary account where the cluster is installed, you must include cross-account IAM roles as additional principals. This feature allows you to simplify your network architecture and reduce data transfer costs by avoiding peering or attaching cross-account VPCs to the cluster’s VPC.

In this scenario, the cluster-creating account is designated as Account A. This account designates that another account, Account B, should have access to the API server.

Note

After you have configured additional allowed principals, you must create the interface VPC endpoint in the VPC from which you want to access the cross-account Red Hat OpenShift Service on AWS API server. Then, create a private hosted zone in Route 53 to route calls made to the cross-account Red Hat OpenShift Service on AWS API server through the created VPC endpoint.

Use the --additional-allowed-principals argument to permit access through other roles.

Procedure

  1. Add the --additional-allowed-principals argument to the rosa create cluster command, similar to the following:

    $ rosa create cluster [...] --additional-allowed-principals <arn_string>

    You can use arn:aws:iam::account_id:role/role_name to approve a specific role.
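To allow more than one role, the flag takes a comma-separated list of ARNs. A sketch of composing that string from an array of hypothetical cross-account roles:

```shell
# Hypothetical cross-account roles to allow as additional principals
PRINCIPALS=(
  "arn:aws:iam::111111111111:role/network-admin"
  "arn:aws:iam::222222222222:role/ci-runner"
)

# Join the array elements with commas for --additional-allowed-principals
ARN_STRING=$(IFS=,; echo "${PRINCIPALS[*]}")
echo "$ARN_STRING"
# then: rosa create cluster [...] --additional-allowed-principals "$ARN_STRING"
```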

  2. When the cluster creation command runs, you receive a summary of your cluster with the --additional-allowed-principals value specified:

    Example output

    Name:                       mycluster
    Domain Prefix:              mycluster
    Display Name:               mycluster
    ID:                         <cluster-id>
    External ID:                <cluster-id>
    Control Plane:              ROSA Service Hosted
    OpenShift Version:          4.15.17
    Channel Group:              stable
    DNS:                        Not ready
    AWS Account:                <aws_id>
    AWS Billing Account:        <aws_id>
    API URL:
    Console URL:
    Region:                     us-east-2
    Availability:
     - Control Plane:           MultiAZ
     - Data Plane:              SingleAZ
    
    Nodes:
     - Compute (desired):       2
     - Compute (current):       0
    Network:
     - Type:                    OVNKubernetes
     - Service CIDR:            172.30.0.0/16
     - Machine CIDR:            10.0.0.0/16
     - Pod CIDR:                10.128.0.0/14
     - Host Prefix:             /23
     - Subnets:                 subnet-453e99d40, subnet-666847ce827
    EC2 Metadata Http Tokens:   optional
    Role (STS) ARN:             arn:aws:iam::<aws_id>:role/mycluster-HCP-ROSA-Installer-Role
    Support Role ARN:           arn:aws:iam::<aws_id>:role/mycluster-HCP-ROSA-Support-Role
    Instance IAM Roles:
     - Worker:                  arn:aws:iam::<aws_id>:role/mycluster-HCP-ROSA-Worker-Role
    Operator IAM Roles:
     - arn:aws:iam::<aws_id>:role/mycluster-kube-system-control-plane-operator
     - arn:aws:iam::<aws_id>:role/mycluster-openshift-cloud-network-config-controller-cloud-creden
     - arn:aws:iam::<aws_id>:role/mycluster-openshift-image-registry-installer-cloud-credentials
     - arn:aws:iam::<aws_id>:role/mycluster-openshift-ingress-operator-cloud-credentials
     - arn:aws:iam::<aws_id>:role/mycluster-openshift-cluster-csi-drivers-ebs-cloud-credentials
     - arn:aws:iam::<aws_id>:role/mycluster-kube-system-kms-provider
     - arn:aws:iam::<aws_id>:role/mycluster-kube-system-kube-controller-manager
     - arn:aws:iam::<aws_id>:role/mycluster-kube-system-capa-controller-manager
    Managed Policies:           Yes
    State:                      waiting (Waiting for user action)
    Private:                    No
    Delete Protection:          Disabled
    Created:                    Jun 25 2024 13:36:37 UTC
    User Workload Monitoring:   Enabled
    Details Page:               https://console.redhat.com/openshift/details/s/Bvbok4O79q1Vg8
    OIDC Endpoint URL:          https://oidc.op1.openshiftapps.com/vhufi5lap6vbl3jlq20e (Managed)
    Audit Log Forwarding:       Disabled
    External Authentication:    Disabled
    Additional Principals:      arn:aws:iam::<aws_id>:role/additional-user-role

You can add additional principals to your cluster by using the command-line interface (CLI).

Procedure

  • Run the following command to edit your cluster and add an additional principal who can access this cluster’s endpoint:

    $ rosa edit cluster -c <cluster_name> --additional-allowed-principals <arn_string>

    You can use arn:aws:iam::account_id:role/role_name to approve a specific role.

6.4. Next steps

Configuring identity providers
