Chapter 1. Red Hat OpenShift Service on AWS quick start guide
Follow this guide to quickly create a Red Hat OpenShift Service on AWS cluster using the ROSA command-line interface (CLI) (rosa), grant user access, deploy your first application, and learn how to revoke user access and delete your cluster.
1.1. Overview of the default cluster specifications
You can quickly create a Red Hat OpenShift Service on AWS cluster by using the default installation options.
The following summary describes the default cluster specifications.
| Component | Default specifications |
|---|---|
| Accounts and roles | |
| Cluster settings | |
| Compute node machine pool | |
| Networking configuration | |
| Classless Inter-Domain Routing (CIDR) ranges | |
| Cluster roles and policies | |
| Storage | |
| Cluster update strategy | |
1.2. Setting up the environment
Before you create a Red Hat OpenShift Service on AWS cluster, you must set up your environment by completing the following tasks:
- Verify Red Hat OpenShift Service on AWS prerequisites against your AWS and Red Hat accounts.
- Install and configure the required command-line interface (CLI) tools.
- Verify the configuration of the CLI tools.
You can follow the procedures in this section to complete these setup requirements.
1.2.1. Verifying Red Hat OpenShift Service on AWS prerequisites
Use the steps in this procedure to enable Red Hat OpenShift Service on AWS in your AWS account.
Prerequisites
- You have a Red Hat account.
- You have an AWS account.

  Note: Consider using a dedicated AWS account to run production clusters. If you are using AWS Organizations, you can use an AWS account within your organization or create a new one.
Procedure
- Sign in to the AWS Management Console.
- Navigate to the ROSA service.
- Click Get started.

  The Verify ROSA prerequisites page opens.

- Under ROSA enablement, ensure that a green check mark and You previously enabled ROSA are displayed. If not, follow these steps:

  - Select the checkbox beside I agree to share my contact information with Red Hat.
  - Click Enable ROSA.

    After a short wait, a green check mark and a You enabled ROSA message are displayed.

- Under Service Quotas, ensure that a green check mark and Your quotas meet the requirements for ROSA are displayed. If you see Your quotas don't meet the minimum requirements, note the quota type and the minimum listed in the error message. See Amazon's documentation on requesting a quota increase for guidance. It might take several hours for Amazon to approve your quota request.

- Under ELB service-linked role, ensure that a green check mark and AWSServiceRoleForElasticLoadBalancing already exists are displayed.

- Click Continue to Red Hat.

  The Get started with Red Hat OpenShift Service on AWS (ROSA) page opens in a new tab. You have already completed Step 1 on this page and can now continue with Step 2.
1.2.2. Installing and configuring the required CLI tools
Several command-line interface (CLI) tools are required to deploy and work with your cluster.
Prerequisites
- You have an AWS account.
- You have a Red Hat account.
Procedure
- Log in to your Red Hat and AWS accounts to access the download page for each required tool:

  - Log in to your Red Hat account at console.redhat.com.
  - Log in to your AWS account at aws.amazon.com.

- Install and configure the latest AWS CLI (`aws`):

  - Install the AWS CLI by following the AWS Command Line Interface documentation appropriate for your workstation.
  - Configure the AWS CLI by specifying your `aws_access_key_id`, `aws_secret_access_key`, and `region` in the `.aws/credentials` file. For more information, see AWS Configuration basics in the AWS documentation.

    Note: You can optionally use the `AWS_DEFAULT_REGION` environment variable to set the default AWS region.

  - Query the AWS API to verify that the AWS CLI is installed and configured correctly:

    $ aws sts get-caller-identity --output text

    Example output:

    <aws_account_id> arn:aws:iam::<aws_account_id>:user/<username> <aws_user_id>
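The `.aws/credentials` file mentioned above follows the standard AWS INI layout. The following is a minimal sketch with AWS's documented placeholder keys, not real credentials; substitute your own access key, secret key, and region:

```ini
# ~/.aws/credentials -- placeholder values, not real credentials
[default]
aws_access_key_id     = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region                = us-east-1
```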
- Install and configure the latest ROSA CLI:

  - Navigate to Downloads.
  - Find Red Hat OpenShift Service on AWS command line interface (`rosa`) in the list of tools and click Download.

    The `rosa-linux.tar.gz` file is downloaded to your default download location.

  - Extract the `rosa` binary file from the downloaded archive. The following example extracts the binary from a Linux tar archive:

    $ tar xvf rosa-linux.tar.gz

  - Move the `rosa` binary file to a directory in your execution path. In the following example, the `/usr/local/bin` directory is included in the path of the user:

    $ sudo mv rosa /usr/local/bin/rosa

  - Verify that the ROSA CLI is installed correctly by querying the `rosa` version:

    $ rosa version

    Example output:

    1.2.47
    Your ROSA CLI is up to date.
- Log in to the ROSA CLI using an offline access token:

  - Run the login command:

    $ rosa login

    Example output:

    To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa
    ? Copy the token and paste it here:

  - Navigate to the URL listed in the command output to view your offline access token.
  - Enter the offline access token at the command-line prompt to log in.

    ? Copy the token and paste it here: ******************* [full token length omitted]

    Note: In the future you can specify the offline access token by using the `--token="<offline_access_token>"` argument when you run the `rosa login` command.

  - Verify that you are logged in and confirm that your credentials are correct before proceeding:

    $ rosa whoami

    Example output:

    AWS Account ID:               <aws_account_number>
    AWS Default Region:           us-east-1
    AWS ARN:                      arn:aws:iam::<aws_account_number>:user/<aws_user_name>
    OCM API:                      https://api.openshift.com
    OCM Account ID:               <red_hat_account_id>
    OCM Account Name:             Your Name
    OCM Account Username:         you@domain.com
    OCM Account Email:            you@domain.com
    OCM Organization ID:          <org_id>
    OCM Organization Name:        Your organization
    OCM Organization External ID: <external_org_id>
- Install and configure the latest OpenShift CLI (`oc`):

  - Use the ROSA CLI to download the `oc` CLI. The following command downloads the latest version of the CLI to the current working directory:

    $ rosa download openshift-client

  - Extract the `oc` binary file from the downloaded archive. The following example extracts the files from a Linux tar archive:

    $ tar xvf openshift-client-linux.tar.gz

  - Move the `oc` binary to a directory in your execution path. In the following example, the `/usr/local/bin` directory is included in the path of the user:

    $ sudo mv oc /usr/local/bin/oc

  - Verify that the `oc` CLI is installed correctly:

    $ rosa verify openshift-client

    Example output:

    I: Verifying whether OpenShift command-line tool is available...
    I: Current OpenShift Client Version: 4.17.3
Next steps
Before you can use the Red Hat Hybrid Cloud Console to deploy Red Hat OpenShift Service on AWS clusters, you must associate your AWS account with your Red Hat organization and create the required account-wide AWS IAM STS roles and policies for Red Hat OpenShift Service on AWS.
1.3. Creating a Virtual Private Cloud for your Red Hat OpenShift Service on AWS clusters
You must have an AWS Virtual Private Cloud (VPC) to create a Red Hat OpenShift Service on AWS cluster. You can use the following methods to create a VPC:
- Create a VPC using the ROSA CLI
- Create a VPC by using a Terraform template
- Manually create the VPC resources in the AWS console
The Terraform instructions are for testing and demonstration purposes; your own installation requires some modifications to the VPC for your own use. You should also ensure that, when you use this linked Terraform configuration, it is in the same region where you intend to install your cluster. These examples use us-east-2.
1.3.1. Creating an AWS VPC using the ROSA CLI
The rosa create network command is available in v.1.2.48 or later of the ROSA CLI. The command uses AWS CloudFormation to create a VPC and associated networking components necessary to install a Red Hat OpenShift Service on AWS cluster. CloudFormation is a native AWS infrastructure-as-code tool and is compatible with the AWS CLI.
If you do not specify a template, CloudFormation uses a default template that creates resources with the following parameters:
| VPC parameter | Value |
|---|---|
| Availability zones | 1 |
| Region | |
| VPC CIDR | |
You can create and customize CloudFormation templates to use with the rosa create network command. See the additional resources of this section for information on the default VPC template.
Prerequisites
- You have configured your AWS account.
- You have configured your Red Hat account.
- You have installed and configured the latest version of the ROSA CLI.
Procedure
- Create an AWS VPC using the default CloudFormation template by running the following command:

  $ rosa create network

- Optional: Customize your VPC by specifying additional parameters.

  You can use the `--param` flag to specify changes to the default VPC template. The following example command specifies custom values for `Region`, `Name`, `AvailabilityZoneCount`, and `VpcCidr`:

  $ rosa create network --param Region=us-east-2 --param Name=quickstart-stack --param AvailabilityZoneCount=3 --param VpcCidr=10.0.0.0/16

  The command takes about 5 minutes to run and provides regular status updates from AWS as resources are created. If there is an issue with CloudFormation, a rollback is attempted. For all other errors, follow the error message instructions or contact AWS Support.
Verification
When completed, you receive a summary of the created resources:
INFO[0140] Resources created in stack:
INFO[0140] Resource: AttachGateway, Type: AWS::EC2::VPCGatewayAttachment, ID: <gateway_id>
INFO[0140] Resource: EC2VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
INFO[0140] Resource: EcrApiVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
INFO[0140] Resource: EcrDkrVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
INFO[0140] Resource: ElasticIP1, Type: AWS::EC2::EIP, ID: <IP>
INFO[0140] Resource: ElasticIP2, Type: AWS::EC2::EIP, ID: <IP>
INFO[0140] Resource: InternetGateway, Type: AWS::EC2::InternetGateway, ID: igw-016e1a71b9812464e
INFO[0140] Resource: KMSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
INFO[0140] Resource: NATGateway1, Type: AWS::EC2::NatGateway, ID: <nat-gateway_id>
INFO[0140] Resource: PrivateRoute, Type: AWS::EC2::Route, ID: <route_id>
INFO[0140] Resource: PrivateRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
INFO[0140] Resource: PrivateSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
INFO[0140] Resource: PublicRoute, Type: AWS::EC2::Route, ID: <route_id>
INFO[0140] Resource: PublicRouteTable, Type: AWS::EC2::RouteTable, ID: <route_id>
INFO[0140] Resource: PublicSubnetRouteTableAssociation1, Type: AWS::EC2::SubnetRouteTableAssociation, ID: <route_id>
INFO[0140] Resource: S3VPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
INFO[0140] Resource: STSVPCEndpoint, Type: AWS::EC2::VPCEndpoint, ID: <vpce_id>
INFO[0140] Resource: SecurityGroup, Type: AWS::EC2::SecurityGroup, ID: <security-group_id>
INFO[0140] Resource: SubnetPrivate1, Type: AWS::EC2::Subnet, ID: <private_subnet_id-1>
INFO[0140] Resource: SubnetPublic1, Type: AWS::EC2::Subnet, ID: <public_subnet_id-1>
INFO[0140] Resource: VPC, Type: AWS::EC2::VPC, ID: <vpc_id>
INFO[0140] Stack rosa-network-stack-5555 created

- The `<private_subnet_id-1>` and `<public_subnet_id-1>` subnet IDs are used to create your cluster with the `rosa create cluster` command.
- The network stack name (`rosa-network-stack-5555`) is used to delete the resource later.
1.3.1.1. Creating a Virtual Private Cloud using Terraform
Terraform is a tool that allows you to create various resources using an established template. The following process uses the default options as required to create a Red Hat OpenShift Service on AWS cluster. For more information about using Terraform, see the additional resources.
Prerequisites
- You have installed Terraform version 1.4.0 or newer on your machine.
- You have installed Git on your machine.
Procedure
- Open a shell prompt and clone the Terraform VPC repository by running the following command:

  $ git clone https://github.com/openshift-cs/terraform-vpc-example

- Navigate to the created directory by running the following command:

  $ cd terraform-vpc-example

- Initialize the Terraform working directory by running the following command:

  $ terraform init

  A message confirming the initialization appears when this process completes.

- To build your VPC Terraform plan based on the existing Terraform template, run the `plan` command. You must include your AWS region. You can choose to specify a cluster name. A `rosa.tfplan` file is added to the `hypershift-tf` directory after `terraform plan` completes. For more detailed options, see the Terraform VPC repository's README file.

  $ terraform plan -out rosa.tfplan -var region=<region>

- Apply this plan file to build your VPC by running the following command:

  $ terraform apply rosa.tfplan

- Optional: Capture the values of the Terraform-provisioned private, public, and machine pool subnet IDs as environment variables to use when creating your Red Hat OpenShift Service on AWS cluster by running the following commands:

  $ export SUBNET_IDS=$(terraform output -raw cluster-subnets-string)

  Verify that the variables were correctly set with the following command:

  $ echo $SUBNET_IDS

  Example output:

  subnet-0a6a57e0f784171aa,subnet-078e84e5b10ecf5b0
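If you later need the two subnet IDs individually (for example, to pass distinct public and private IDs to `rosa create cluster`), you can split the comma-separated string with shell parameter expansion. This is a minimal sketch: it assumes `SUBNET_IDS` contains exactly two IDs, and the ordering is illustrative only; confirm which ID is public and which is private in the Terraform outputs or the AWS console.

```shell
# Assumption: SUBNET_IDS holds exactly two comma-separated subnet IDs,
# as in the example output above. Verify which is public and which is
# private before relying on the order shown here.
SUBNET_IDS="subnet-0a6a57e0f784171aa,subnet-078e84e5b10ecf5b0"

FIRST_SUBNET_ID="${SUBNET_IDS%%,*}"    # everything before the first comma
SECOND_SUBNET_ID="${SUBNET_IDS##*,}"   # everything after the last comma

echo "$FIRST_SUBNET_ID"
echo "$SECOND_SUBNET_ID"
```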
1.3.2. Creating an AWS Virtual Private Cloud manually
If you choose to manually create your AWS Virtual Private Cloud (VPC) instead of using Terraform, go to the VPC page in the AWS console.
Your VPC must meet the requirements shown in the following table.
| Requirement | Details |
|---|---|
| VPC name | You need to have the specific VPC name and ID when creating your cluster. |
| CIDR range | Your VPC CIDR range should match your machine CIDR. |
| Availability zones | You need one availability zone for a single-zone cluster, and three availability zones for a multi-zone cluster. |
| Public subnet | You must have one public subnet with a NAT gateway for public clusters. Private clusters do not need a public subnet. |
| DNS hostname and resolution | You must ensure that the DNS hostname and resolution are enabled. |
1.4. Creating an OpenID Connect configuration
When creating a Red Hat OpenShift Service on AWS cluster, you can create the OpenID Connect (OIDC) configuration before creating your cluster. This configuration is registered to be used with OpenShift Cluster Manager.
Prerequisites
- You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
- You have installed and configured the latest ROSA command-line interface (CLI) (`rosa`) on your installation host.
Procedure
- To create your OIDC configuration alongside the AWS resources, run the following command:

  $ rosa create oidc-config --mode=auto --yes

  This command returns the following information. Example output:

  ? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes
  I: Setting up managed OIDC configuration
  I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
  	rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b
  If you are going to create a Hosted Control Plane cluster please include '--hosted-cp'
  I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName'
  ? Create the OIDC provider? Yes
  I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'

  When creating your cluster, you must supply the OIDC config ID. The CLI output provides this value for `--mode auto`; otherwise you must determine these values based on `aws` CLI output for `--mode manual`.

- Optional: Save the OIDC configuration ID as a variable to use later. Run the following command to save the variable:

  $ export OIDC_ID=<oidc_config_id>

  In this example output, the OIDC configuration ID is 13cdr6b.

  View the value of the variable by running the following command:

  $ echo $OIDC_ID

  Example output:

  13cdr6b
Verification
You can list the possible OIDC configurations available for your clusters that are associated with your user organization. Run the following command:
$ rosa list oidc-config

Example output:

ID                                MANAGED  ISSUER URL                                                              SECRET ARN
2330dbs0n8m3chkkr25gkkcd8pnj3lk2  true     https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2
233hvnrjoqu14jltk6lhbhf2tj11f8un  false    https://oidc-r7u1.s3.us-east-1.amazonaws.com                            aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN
1.5. Creating Operator roles and policies
When you deploy a Red Hat OpenShift Service on AWS cluster, you must create the Operator IAM roles. The cluster Operators use the Operator roles and policies to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage and external access to a cluster.
Prerequisites
- You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
- You have installed and configured the latest ROSA command-line interface (CLI) (`rosa`) on your installation host.
- You created the account-wide AWS roles.
Procedure
- To create your Operator roles, run the following command:

  $ rosa create operator-roles --hosted-cp --prefix=$OPERATOR_ROLES_PREFIX --oidc-config-id=$OIDC_ID --installer-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role

  where:

  - `--prefix`: You must supply a prefix when creating these Operator roles. Failing to do so produces an error. See the Additional resources of this section for information on the Operator prefix.
  - `--oidc-config-id`: This value is the OIDC configuration ID that you created for your Red Hat OpenShift Service on AWS cluster.
  - `--installer-role-arn`: This value is the installer role ARN that you created when you created the Red Hat OpenShift Service on AWS account roles.

  You must include the `--hosted-cp` parameter to create the correct roles for Red Hat OpenShift Service on AWS clusters. This command returns the following information.

  Example output:

  ? Role creation mode: auto
  ? Operator roles prefix: <pre-filled_prefix>
  ? OIDC Configuration ID: 23soa2bgvpek9kmes9s7os0a39i13qm4 | https://dvbwgdztaeq9o.cloudfront.net/23soa2bgvpek9kmes9s7os0a39i13qm4
  ? Create hosted control plane operator roles: Yes
  W: More than one Installer role found
  ? Installer role ARN: arn:aws:iam::4540112244:role/<prefix>-HCP-ROSA-Installer-Role
  ? Permissions boundary ARN (optional):
  I: Reusable OIDC Configuration detected. Validating trusted relationships to operator roles:
  I: Creating roles using 'arn:aws:iam::4540112244:user/<userName>'
  I: Created role '<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials'
  I: Created role '<prefix>-openshift-cloud-network-config-controller-cloud-credenti' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti'
  I: Created role '<prefix>-kube-system-kube-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager'
  I: Created role '<prefix>-kube-system-capa-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager'
  I: Created role '<prefix>-kube-system-control-plane-operator' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator'
  I: Created role '<prefix>-kube-system-kms-provider' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider'
  I: Created role '<prefix>-openshift-image-registry-installer-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials'
  I: Created role '<prefix>-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials'
  I: To create a cluster with these roles, run the following command:
  	rosa create cluster --sts --oidc-config-id 23soa2bgvpek9kmes9s7os0a39i13qm4 --operator-roles-prefix <prefix> --hosted-cp

  where:

  - Operator roles prefix: This field is prepopulated with the prefix that you set in the initial creation command.
  - OIDC Configuration ID: This field requires you to select an OIDC configuration that you created for your Red Hat OpenShift Service on AWS cluster.
The Operator roles are now created and ready to use for creating your Red Hat OpenShift Service on AWS cluster.
Verification
You can list the Operator roles associated with your Red Hat OpenShift Service on AWS account. Run the following command:
$ rosa list operator-roles

Example output:

I: Fetching operator roles
ROLE PREFIX  AMOUNT IN BUNDLE
<prefix>     8
? Would you like to detail a specific prefix Yes
? Operator Role Prefix: <prefix>
ROLE NAME                                                          ROLE ARN                                                                                          VERSION  MANAGED
<prefix>-kube-system-capa-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager                        4.13     No
<prefix>-kube-system-control-plane-operator                        arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator                         4.13     No
<prefix>-kube-system-kms-provider                                  arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider                                   4.13     No
<prefix>-kube-system-kube-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager                        4.13     No
<prefix>-openshift-cloud-network-config-controller-cloud-credenti  arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti   4.13     No
<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials        4.13     No
<prefix>-openshift-image-registry-installer-cloud-credentials      arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials       4.13     No
<prefix>-openshift-ingress-operator-cloud-credentials              arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials               4.13     No

After the command runs, it displays all the prefixes associated with your AWS account and notes how many roles are associated with each prefix. If you need to see all of these roles and their details, enter "Yes" at the detail prompt to have these roles listed with specifics.
1.6. Creating a Red Hat OpenShift Service on AWS cluster using the CLI
When using the ROSA CLI, rosa, to create a cluster, you can select the default options to create the cluster quickly.
Prerequisites
- You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS.
- You have available AWS service quotas.
- You have enabled Red Hat OpenShift Service on AWS in the AWS Console.
- You have installed and configured the latest ROSA CLI (`rosa`) on your installation host. Run `rosa version` to see your currently installed version of the ROSA CLI. If a newer version is available, the CLI provides a link to download this upgrade.
- You have logged in to your Red Hat account by using the ROSA CLI.
- You have created an OIDC configuration.
- You have verified that the AWS Elastic Load Balancing (ELB) service role exists in your AWS account.
Procedure
Use one of the following commands to create your Red Hat OpenShift Service on AWS cluster:
Note: When creating a Red Hat OpenShift Service on AWS cluster, the default machine Classless Inter-Domain Routing (CIDR) is `10.0.0.0/16`. If this does not correspond to the CIDR range for your VPC subnets, add `--machine-cidr <address_block>` to the following commands. To learn more about the default CIDR ranges for Red Hat OpenShift Service on AWS, see CIDR range definitions.

- If you did not set environment variables, run the following command:

  $ rosa create cluster --cluster-name=<cluster_name> \
      --mode=auto --hosted-cp [--private] \
      --operator-roles-prefix <operator-role-prefix> \
      --external-id <external-id> \
      --oidc-config-id <id-of-oidc-configuration> \
      --subnet-ids=<public-subnet-id>,<private-subnet-id>

  where:

  - `<cluster_name>`: Specify the name of your cluster. If your cluster name is longer than 15 characters, it will contain an autogenerated domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. To customize the subdomain, use the `--domain-prefix` flag. The domain prefix cannot be longer than 15 characters, must be unique, and cannot be changed after cluster creation.
  - `--private`: Optional. The `--private` argument is used to create private Red Hat OpenShift Service on AWS clusters. If you use this argument, ensure that you only use your private subnet ID for `--subnet-ids`.
  - `<operator-role-prefix>`: By default, the cluster-specific Operator role names are prefixed with the cluster name and a random 4-digit hash. You can optionally specify a custom prefix to replace `<cluster_name>-<hash>` in the role names. The prefix is applied when you create the cluster-specific Operator IAM roles. For information about the prefix, see About custom Operator IAM role prefixes.
  - `<external-id>`: Optional. A unique identifier that might be required when you assume a role in another account.

  Note: If you specified custom ARN paths when you created the associated account-wide roles, the custom path is automatically detected. The custom path is applied to the cluster-specific Operator roles when you create them in a later step.

- If you set the environment variables, create a cluster with a single, initial machine pool and a privately available API and Ingress by running the following command:

  $ rosa create cluster --private --cluster-name=<cluster_name> \
      --mode=auto --hosted-cp --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
      --oidc-config-id=$OIDC_ID --subnet-ids=$SUBNET_IDS

- If you set the environment variables, create a cluster with a single, initial machine pool, a publicly available API, and a publicly available Ingress by running the following command:

  $ rosa create cluster --cluster-name=<cluster_name> --mode=auto \
      --hosted-cp --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
      --oidc-config-id=$OIDC_ID --subnet-ids=$SUBNET_IDS
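The environment variables used in the preceding commands can be exported in your shell session beforehand. The following sketch uses hypothetical placeholder values taken from earlier example output; substitute your own Operator role prefix, OIDC configuration ID, and subnet IDs:

```shell
# Hypothetical placeholder values; replace each one with the values for
# your own account, OIDC configuration, and VPC.
export OPERATOR_ROLES_PREFIX="quickstart-roles"                        # your chosen Operator role prefix
export OIDC_ID="13cdr6b"                                               # from `rosa create oidc-config`
export SUBNET_IDS="subnet-0a6a57e0f784171aa,subnet-078e84e5b10ecf5b0"  # from `rosa create network` or Terraform

# Confirm the values before running `rosa create cluster`.
echo "$OPERATOR_ROLES_PREFIX $OIDC_ID $SUBNET_IDS"
```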
Check the status of your cluster by running the following command:
  $ rosa describe cluster --cluster=<cluster_name>

  The following `State` field changes are listed in the output as the cluster installation progresses:

  - pending (Preparing account)
  - installing (DNS setup in progress)
  - installing
  - ready

  Note: If the installation fails or the `State` field does not change to `ready` after more than 10 minutes, check the installation troubleshooting documentation for details. For more information, see Troubleshooting installations. For steps to contact Red Hat Support for assistance, see Getting support for Red Hat OpenShift Service on AWS.

- Track the progress of the cluster creation by watching the Red Hat OpenShift Service on AWS installation program logs. To check the logs, run the following command:

  $ rosa logs install --cluster=<cluster_name> --watch

  Optional: To watch for new log messages as the installation progresses, use the `--watch` argument.
1.7. Granting user access to a cluster
You can grant a user access to your Red Hat OpenShift Service on AWS cluster by adding them to your configured identity provider.
You can configure different types of identity providers for your Red Hat OpenShift Service on AWS cluster. The following example procedure adds a user to a GitHub organization that is configured for identity provision to the cluster.
Procedure
- Navigate to github.com and log in to your GitHub account.
- Invite users that require access to the Red Hat OpenShift Service on AWS cluster to your GitHub organization. Follow the steps in Inviting users to join your organization in the GitHub documentation.
1.8. Granting administrator privileges to a user
After you have added a user to your configured identity provider, you can grant the user cluster-admin or dedicated-admin privileges for your Red Hat OpenShift Service on AWS cluster.
Procedure
- To configure `cluster-admin` privileges for an identity provider user:

  - Grant the user `cluster-admin` privileges:

    $ rosa grant user cluster-admin --user=<idp_user_name> --cluster=<cluster_name>

    Example output:

    I: Granted role 'cluster-admins' to user '<idp_user_name>' on cluster '<cluster_name>'

  - Verify that the user is listed as a member of the `cluster-admins` group:

    $ rosa list users --cluster=<cluster_name>

    Example output:

    ID               GROUPS
    <idp_user_name>  cluster-admins

- To configure `dedicated-admin` privileges for an identity provider user:

  - Grant the user `dedicated-admin` privileges:

    $ rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>

    Example output:

    I: Granted role 'dedicated-admins' to user '<idp_user_name>' on cluster '<cluster_name>'

  - Verify that the user is listed as a member of the `dedicated-admins` group:

    $ rosa list users --cluster=<cluster_name>

    Example output:

    ID               GROUPS
    <idp_user_name>  dedicated-admins
1.9. Accessing a cluster through the web console
After you have created a cluster administrator user or added a user to your configured identity provider, you can log in to your Red Hat OpenShift Service on AWS cluster through the web console.
Procedure
- Obtain the console URL for your cluster:

  $ rosa describe cluster -c <cluster_name> | grep Console

  Example output:

  Console URL: https://console-openshift-console.apps.example-cluster.wxyz.p1.openshiftapps.com

- Go to the console URL in the output of the preceding step and log in.

  - If you created a `cluster-admin` user, log in by using the provided credentials.
  - If you configured an identity provider for your cluster, select the identity provider name in the Log in with… dialog and complete any authorization requests that are presented by your provider.
1.10. Deploying an application from the Developer Catalog
From the Red Hat OpenShift Service on AWS web console, you can deploy a test application from the Developer Catalog and expose it with a route.
Prerequisites
- You logged in to the Red Hat Hybrid Cloud Console.
- You created a Red Hat OpenShift Service on AWS cluster.
- You configured an identity provider for your cluster.
- You added your user account to the configured identity provider.
Procedure
- Go to the Cluster List page in OpenShift Cluster Manager.
- Click the options icon (⋮) next to the cluster you want to view.
- Click Open console.
- Your cluster console opens in a new browser window. Log in to your Red Hat account with your configured identity provider credentials.
- In the Administrator perspective, select Home → Projects → Create Project.
- Enter a name for your project and optionally add a Display Name and Description.
- Click Create to create the project.
- Switch to the Developer perspective and select +Add. Verify that the selected Project is the one that you just created.
- In the Developer Catalog dialog, select All services.
- In the Developer Catalog page, select Languages → JavaScript from the menu. Click Node.js, and then click Create to open the Create Source-to-Image application page.

  Note: You might need to click Clear All Filters to display the Node.js option.
- In the Git section, click Try sample.
- Add a unique name in the Name field. This value is used to name the associated resources.
- Confirm that Deployment and Create a route are selected.
- Click Create to deploy the application. It will take a few minutes for the pods to deploy.
- Optional: Check the status of the pods in the Topology pane by selecting your Node.js app and reviewing its sidebar. You must wait for the nodejs build to complete and for the nodejs pod to be in a Running state before continuing.
- When the deployment is complete, click the route URL for the application, which has a format similar to the following:

  https://nodejs-<project>.<cluster_name>.<hash>.<region>.openshiftapps.com/

  A new tab in your browser opens with a message similar to the following:

  Welcome to your Node.js application on OpenShift

- Optional: Delete the application and clean up the resources that you created:
  - In the Administrator perspective, navigate to Home → Projects.
  - Click the action menu for your project and select Delete Project.
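The route URL format shown above is composed from the application name, project, and cluster domain. A minimal sketch of how the parts fit together, using hypothetical placeholder values rather than real cluster values:

```shell
#!/bin/sh
# Sketch: compose the expected route URL from its parts.
# All values below are hypothetical placeholders, not real cluster values.
app="nodejs"
project="my-project"
cluster_name="example-cluster"
hash="wxyz"
region="p1"

route_url="https://${app}-${project}.${cluster_name}.${hash}.${region}.openshiftapps.com/"
echo "$route_url"

# A real smoke test against a live route would look like:
#   curl -fsS "$route_url" | grep -q 'Welcome'
```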
1.11. Revoking administrator privileges and user access
You can revoke cluster-admin or dedicated-admin privileges from a user by using the ROSA CLI, rosa.
To revoke cluster access from a user, you must remove the user from your configured identity provider.
Follow the procedures in this section to revoke administrator privileges or cluster access from a user.
1.11.1. Revoking administrator privileges from a user
Follow the steps in this section to revoke cluster-admin or dedicated-admin privileges from a user.
Procedure
To revoke cluster-admin privileges from an identity provider user:

1. Revoke the cluster-admin privilege:

   $ rosa revoke user cluster-admin --user=<idp_user_name> --cluster=<cluster_name>

   Example output

   ? Are you sure you want to revoke role cluster-admins from user <idp_user_name> in cluster <cluster_name>? Yes
   I: Revoked role 'cluster-admins' from user '<idp_user_name>' on cluster '<cluster_name>'

2. Verify that the user is not listed as a member of the cluster-admins group:

   $ rosa list users --cluster=<cluster_name>

   Example output

   W: There are no users configured for cluster '<cluster_name>'
To revoke dedicated-admin privileges from an identity provider user:

1. Revoke the dedicated-admin privilege:

   $ rosa revoke user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>

   Example output

   ? Are you sure you want to revoke role dedicated-admins from user <idp_user_name> in cluster <cluster_name>? Yes
   I: Revoked role 'dedicated-admins' from user '<idp_user_name>' on cluster '<cluster_name>'

2. Verify that the user is not listed as a member of the dedicated-admins group:

   $ rosa list users --cluster=<cluster_name>

   Example output

   W: There are no users configured for cluster '<cluster_name>'
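The verification step can be automated by checking the `rosa list users` output for the "no users configured" warning. A minimal sketch, using the sample output from this guide instead of calling a live cluster:

```shell
#!/bin/sh
# Sketch: confirm that no admin users remain after revocation.
# The sample line below is the example output from this guide; on a live
# cluster you would capture it instead with something like:
#   users_output=$(rosa list users --cluster=<cluster_name> 2>&1)
users_output="W: There are no users configured for cluster 'example-cluster'"

if printf '%s\n' "$users_output" | grep -q 'There are no users configured'; then
  result="revoked"
else
  result="still-listed"
fi
echo "$result"
```

This assumes the warning text shown in the example output above; the `2>&1` redirection in the comment is a precaution in case the CLI writes warnings to stderr.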
1.11.2. Revoking user access to a cluster
You can revoke cluster access for an identity provider user by removing them from your configured identity provider.
You can configure different types of identity providers for your Red Hat OpenShift Service on AWS cluster. The following example procedure revokes cluster access for a member of a GitHub organization that is configured for identity provision to the cluster.
Procedure
- Navigate to github.com and log in to your GitHub account.
- Remove the user from your GitHub organization. Follow the steps in Removing a member from your organization in the GitHub documentation.
1.12. Deleting a Red Hat OpenShift Service on AWS cluster and the AWS IAM STS resources
You can delete a Red Hat OpenShift Service on AWS cluster by using the ROSA CLI, rosa. You can also use the ROSA CLI to delete the AWS Identity and Access Management (IAM) account-wide roles, the cluster-specific Operator roles, and the OpenID Connect (OIDC) provider. To delete the account-wide and Operator policies, you can use the AWS IAM Console or the AWS CLI.
Account-wide IAM roles and policies might be used by other Red Hat OpenShift Service on AWS clusters in the same AWS account. You must only remove the resources if they are not required by other clusters.
Procedure
1. Delete a cluster and watch the logs, replacing <cluster_name> with the name or ID of your cluster:

   $ rosa delete cluster --cluster=<cluster_name> --watch

   Important: You must wait for the cluster deletion to complete before you remove the IAM roles, policies, and OIDC provider. The account-wide roles are required to delete the resources created by the installer. The cluster-specific Operator roles are required to clean up the resources created by the OpenShift Operators. The Operators use the OIDC provider to authenticate with AWS APIs.

2. After the cluster is deleted, delete the OIDC provider that the cluster Operators use to authenticate:

   $ rosa delete oidc-provider -c <cluster_id> --mode auto

   Note: You can use the -y option to automatically answer yes to the prompts.

3. Delete the cluster-specific Operator IAM roles:

   $ rosa delete operator-roles -c <cluster_id> --mode auto

4. Delete the account-wide roles:

   Important: Account-wide IAM roles and policies might be used by other Red Hat OpenShift Service on AWS clusters in the same AWS account. Remove these resources only if they are not required by other clusters.

   $ rosa delete account-roles --prefix <prefix> --mode auto

   Replace <prefix> with the prefix of the account-wide roles to delete. If you did not specify a custom prefix when you created the account-wide roles, specify the default prefix, HCP-ROSA or ManagedOpenShift, depending on how they were created.

5. Delete the account-wide and Operator IAM policies that you created for Red Hat OpenShift Service on AWS deployments:
- Log in to the AWS IAM Console.
- Navigate to Access management → Policies and select the checkbox for one of the account-wide policies.
- With the policy selected, click Actions → Delete to open the delete policy dialog.
- Enter the policy name to confirm the deletion and select Delete to delete the policy.
- Repeat this step to delete each of the account-wide and Operator policies for the cluster.
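The deletion order above matters: the cluster must be gone before its roles and OIDC provider are removed. One way to keep the order straight is to script the sequence. A minimal sketch in which the `rosa` CLI is stubbed with a shell function so the sequence can be shown without touching a real AWS account (the cluster name, ID, and prefix values are hypothetical placeholders):

```shell
#!/bin/sh
# Sketch of the teardown order described above.
# `rosa` is stubbed to echo its arguments; remove the stub (and the -y
# flags, if you want interactive confirmation) to run the real commands.
rosa() { echo "rosa $*"; }

cluster_name="example-cluster"   # hypothetical
cluster_id="abc123"              # hypothetical
prefix="ManagedOpenShift"        # default account-role prefix

steps=$(
  rosa delete cluster --cluster="$cluster_name" --watch
  rosa delete oidc-provider -c "$cluster_id" --mode auto -y
  rosa delete operator-roles -c "$cluster_id" --mode auto -y
  rosa delete account-roles --prefix "$prefix" --mode auto -y
)
echo "$steps"
```

In a real run, `rosa delete cluster --watch` blocks until deletion completes, which is what makes a strictly sequential script like this safe; the account-wide and Operator policies still have to be deleted afterwards through the AWS IAM Console or the AWS CLI, as described above.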